Tippayakul, Chanatip
The main objective of this research is to develop a practical fuel management system for the Pennsylvania State University Breazeale research reactor (PSBR) based on several advanced Monte Carlo coupled depletion methodologies. Primarily, this research involved two major activities: model and method development, and analysis and validation of the developed models and methods. The starting point of this research was the use of the earlier developed fuel management tool, TRIGSIM, to create a Monte Carlo model of core loading 51 (the end of the core loading). Comparison of the normalized power results of the Monte Carlo model with those of the current fuel management system (using HELIOS/ADMARC-H) showed reasonably good agreement (within 2%-3% difference on average). Moreover, the reactivities of several fuel elements were calculated by the Monte Carlo model and compared with measured data; the fuel element reactivity results of the Monte Carlo model were found to be in good agreement with the measurements. However, the subsequent analysis of the conversion from core loading 51 to core loading 52 using TRIGSIM showed quite significant differences in individual control rod worths between the Monte Carlo model and the current methodology model. The differences were mainly caused by inconsistent absorber atomic number densities between the two models. Hence, the model of the first operating core (core loading 2) was revised in light of new information about the absorber atomic densities in order to validate the Monte Carlo model against measured data. With the revised Monte Carlo model, the results agreed better with the measured data. Although TRIGSIM demonstrated good modeling capabilities, its accuracy could be further improved by adopting more advanced algorithms; an upgrade of TRIGSIM was therefore planned. The first upgrade task involved improving the temperature modeling capability. The new TRIGSIM was
MCOR - Monte Carlo depletion code for reference LWR calculations
Energy Technology Data Exchange (ETDEWEB)
Puente Espel, Federico, E-mail: fup104@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Tippayakul, Chanatip, E-mail: cut110@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Ivanov, Kostadin, E-mail: kni1@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Misu, Stefan, E-mail: Stefan.Misu@areva.com [AREVA, AREVA NP GmbH, Erlangen (Germany)
2011-04-15
Research highlights: > Introduction of a reference Monte Carlo based depletion code with extended capabilities. > Verification and validation results for MCOR. > Utilization of MCOR for benchmarking deterministic lattice physics (spectral) codes. - Abstract: The MCOR (MCnp-kORigen) code system is a Monte Carlo based depletion system for reference fuel assembly and core calculations. The MCOR code is designed as an interfacing code that provides depletion capability to the LANL Monte Carlo code by coupling MCNP5 with the AREVA NP depletion code KORIGEN; the physical quality of both codes is unchanged. The MCOR code system has been maintained and continuously enhanced since it was initially developed and validated. The coupling was verified by evaluating MCOR against similarly sophisticated code systems such as MONTEBURNS, OCTOPUS and TRIPOLI-PEPIN. After its validation, the MCOR code was further improved with important features. MCOR offers several valuable capabilities: (a) a predictor-corrector depletion algorithm, (b) utilization of KORIGEN as the depletion module, (c) individual depletion calculation of each burnup zone (no burnup zone grouping is required, which is particularly important for modeling gadolinium rings), and (d) on-line burnup cross-section generation by the Monte Carlo calculation for 88 isotopes, with the KORIGEN libraries for typical PWR and BWR spectra used for the remaining isotopes. Beyond these capabilities, the newest MCOR enhancements include the option of executing the MCNP5 calculation in sequential or parallel mode, a user-friendly automatic re-start capability, a modified burnup step size evaluation, and a post-processor and test matrix, to name the most important. The article describes the capabilities of the MCOR code system, from its design and development to its latest improvements and further enhancements.
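The predictor-corrector depletion algorithm listed in (a) can be sketched in a few lines. This is a minimal illustration only, assuming SciPy is available; `flux_solver` and `burn_matrix` are hypothetical stand-ins for the transport solve (MCNP5's role in MCOR) and the depletion operator built from its reaction rates (KORIGEN's role):

```python
import numpy as np
from scipy.linalg import expm

def expm_step(A, n0, dt):
    """Matrix-exponential solution of dn/dt = A n over one step."""
    return expm(A * dt) @ n0

def depletion_step(n0, burn_matrix, flux_solver, dt):
    """One predictor-corrector burnup step (hypothetical callables)."""
    A0 = burn_matrix(flux_solver(n0))        # beginning-of-step rates
    n_pred = expm_step(A0, n0, dt)           # predictor
    A1 = burn_matrix(flux_solver(n_pred))    # rates at the predicted state
    n_corr = expm_step(A1, n0, dt)           # corrector
    return 0.5 * (n_pred + n_corr)           # averaged end-of-step state

# Demo: a 2-nuclide chain whose reaction rates scale with a mock "flux"
# that depends on how much of nuclide 0 is left (crude spectral feedback).
def demo_matrix(flux):
    return np.array([[-1.0 * flux, 0.0],
                     [ 1.0 * flux, 0.0]])

n = depletion_step(np.array([1.0, 0.0]), demo_matrix,
                   lambda N: 0.5 + 0.5 * N[0], 1.0)
```

Averaging the predictor and corrector end-of-step densities is what lets such a scheme take larger burnup steps than a simple beginning-of-step depletion.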
Monte Carlo simulation in UWB1 depletion code
International Nuclear Information System (INIS)
Lovecky, M.; Prehradny, J.; Jirickova, J.; Skoda, R.
2015-01-01
The UWB1 depletion code is being developed as a fast computational tool for the study of burnable absorbers at the University of West Bohemia in Pilsen, Czech Republic. In order to achieve higher precision, the newly developed code was extended by adding a Monte Carlo solver. Research on fuel depletion aims at the development and introduction of advanced types of burnable absorbers in nuclear fuel. Burnable absorbers (BA) allow compensation of the initial reactivity excess of nuclear fuel and result in longer fuel cycles with higher enriched fuels. The paper describes depletion calculations of VVER nuclear fuel doped with rare earth oxides as burnable absorbers. Based on the performed depletion calculations, the rare earth oxides are divided into two equally numerous groups: suitable burnable absorbers and poisoning absorbers. According to residual poisoning and BA reactivity worth, the rare earth oxides identified as suitable burnable absorbers are Nd, Sm, Eu, Gd, Dy, Ho and Er, while the poisoning absorbers include Sc, La, Lu, Y, Ce, Pr and Tb. The presentation slides have been added to the article.
Energy Technology Data Exchange (ETDEWEB)
Liang, Jingang; Wang, Kan; Qiu, Yishu [Dept. of Engineering Physics, LiuQing Building, Tsinghua University, Beijing (China); Chai, Xiao Ming; Qiang, Sheng Long [Science and Technology on Reactor System Design Technology Laboratory, Nuclear Power Institute of China, Chengdu (China)
2016-06-15
Because of prohibitive data storage requirements in large-scale simulations, the memory problem is an obstacle for Monte Carlo (MC) codes in accomplishing pin-wise three-dimensional (3D) full-core calculations, particularly for whole-core depletion analyses. Various kinds of data are evaluated and the total memory requirements are quantified based on the Reactor Monte Carlo (RMC) code, showing that tally data, material data, and isotope densities in depletion are the three major parts of memory storage. The domain decomposition method is investigated as a means of saving memory, by dividing the spatial geometry into domains that are simulated separately by parallel processors. For the validity of particle tracking during transport simulations, particles need to be communicated between domains. In consideration of efficiency, an asynchronous particle communication algorithm is designed and implemented. Furthermore, we couple the domain decomposition method with the MC burnup process, under a strategy of utilizing a consistent domain partition in both the transport and depletion modules. A numerical test of 3D full-core burnup calculations is carried out, indicating that the RMC code, with the domain decomposition method, is capable of pin-wise full-core burnup calculations with millions of depletion regions.
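The bank-and-exchange pattern behind domain decomposition can be sketched as follows. This is a toy 1-D random-walk model, not the RMC implementation, and the synchronous exchange rounds below stand in for the asynchronous communication the paper describes:

```python
import random

def track_in_domain(x, domain, rng, step=0.3, p_absorb=0.2):
    """Random-walk one particle inside the unit slab [domain, domain + 1).

    Returns ('absorbed', x), or ('crossed', target_domain, x) when the
    particle exits and must be communicated to another domain's processor.
    """
    while True:
        x += rng.choice((-1.0, 1.0)) * step
        if not (domain <= x < domain + 1):
            return ('crossed', int(x // 1.0), x)
        if rng.random() < p_absorb:
            return ('absorbed', x)

def run(n_particles=200, n_domains=4, seed=1):
    rng = random.Random(seed)
    banks = {d: [] for d in range(n_domains)}          # per-domain banks
    for _ in range(n_particles):
        d = rng.randrange(n_domains)
        banks[d].append(d + rng.random())              # birth position
    absorbed = leaked = 0
    while any(banks.values()):
        next_banks = {d: [] for d in range(n_domains)}
        for d in range(n_domains):                     # one "processor" each
            for x in banks[d]:
                fate = track_in_domain(x, d, rng)
                if fate[0] == 'absorbed':
                    absorbed += 1
                elif 0 <= fate[1] < n_domains:
                    next_banks[fate[1]].append(fate[2])  # communicate
                else:
                    leaked += 1                        # left the geometry
        banks = next_banks                             # exchange round
    return absorbed, leaked
```

In the real code the exchange is asynchronous so processors need not wait at a round barrier; the toy's synchronous `banks = next_banks` swap marks exactly the point that asynchrony removes.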
SCALE Continuous-Energy Monte Carlo Depletion with Parallel KENO in TRITON
International Nuclear Information System (INIS)
Goluoglu, Sedat; Bekar, Kursat B.; Wiarda, Dorothea
2012-01-01
The TRITON sequence of the SCALE code system is a powerful and robust tool for performing multigroup (MG) reactor physics analysis using either the 2-D deterministic solver NEWT or the 3-D Monte Carlo transport code KENO. However, as with all MG codes, the accuracy of the results depends on the accuracy of the MG cross sections that are generated and/or used. While SCALE resonance self-shielding modules provide rigorous resonance self-shielding, they are based on 1-D models and therefore 2-D or 3-D effects such as heterogeneity of the lattice structures may render final MG cross sections inaccurate. Another potential drawback to MG Monte Carlo depletion is the need to perform resonance self-shielding calculations at each depletion step for each fuel segment that is being depleted. The CPU time and memory required for self-shielding calculations can often eclipse the resources needed for the Monte Carlo transport. This summary presents the results of the new continuous-energy (CE) calculation mode in TRITON. With the new capability, accurate reactor physics analyses can be performed for all types of systems using the SCALE Monte Carlo code KENO as the CE transport solver. In addition, transport calculations can be performed in parallel mode on multiple processors.
Monte Carlo Depletion Analysis of a PWR Integral Fuel Burnable Absorber by MCNAP
Shim, H. J.; Jang, C. S.; Kim, C. H.
MCNAP is a personal computer-based continuous energy Monte Carlo (MC) neutronics analysis program written in C++. To examine its qualification, a comparison of the depletion analyses of three integral fuel burnable absorber assemblies of a pressurized water reactor (PWR) by MCNAP and by deterministic fuel assembly (FA) design vendor codes is presented. It is demonstrated that the continuous energy MC calculation by MCNAP provides a very accurate neutronics analysis method for burnable absorber FAs. It is also demonstrated that parallel MC computation on multiple PCs enables one to complete the lifetime depletion analysis of the FAs in hours rather than days.
Fensin, Michael Lorne
and supported radiation transport code, for further development of a Monte Carlo-linked depletion methodology which is essential to the future development of advanced reactor technologies that exceed the limitations of current deterministic based methods.
The development of depletion program coupled with Monte Carlo computer code
International Nuclear Information System (INIS)
Nguyen Kien Cuong; Huynh Ton Nghiem; Vuong Huu Tan
2015-01-01
The paper presents the development of a depletion code for light water reactors coupled with the MCNP5 code, called MCDL (Monte Carlo Depletion for Light Water Reactors). The first-order differential depletion equations for 21 actinide isotopes and 50 fission product isotopes are solved by the Radau IIA implicit Runge-Kutta (IRK) method after receiving the one-group neutron flux, reaction rates and multiplication factors for a fuel pin, fuel assembly or the whole reactor core from the MCNP5 calculation results. Calculations of beryllium poisoning and cooling time are also integrated in the code. To verify and validate the MCDL code, high enriched uranium (HEU) and low enriched uranium (LEU) VVR-M2 type fuel assemblies, as well as cores of the Dalat Nuclear Research Reactor (DNRR) with 89 fresh HEU and 92 fresh LEU fuel assemblies, were investigated and compared with results calculated by the SRAC code and the MCNP-REBUS linkage code system. The results show good agreement between the MCDL code and the reference codes. (author)
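The depletion solve described above can be sketched with a stiff implicit integrator. A minimal illustration, assuming SciPy (whose `Radau` method is a Radau IIA implicit Runge-Kutta scheme); the 3-nuclide chain and one-group rates are made-up numbers, not MCDL data:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative 3-nuclide chain: nuclide 0 -> 1 -> 2 (stable).
decay = np.array([1.0e-2, 1.0e-4, 0.0])      # decay constants [1/s]
sigma_phi = np.array([1.0e-5, 2.0e-5, 0.0])  # one-group reaction rates [1/s]

def bateman_rhs(t, N):
    """First-order depletion system dN/dt = A N for the toy chain."""
    removal = (decay + sigma_phi) * N
    feed = np.zeros_like(N)
    feed[1:] = removal[:-1]          # each removal feeds the next nuclide
    return feed - removal

N0 = np.array([1.0e20, 0.0, 0.0])    # initial number densities
sol = solve_ivp(bateman_rhs, (0.0, 1.0e4), N0,
                method='Radau', rtol=1e-8, atol=1.0)
```

Because the toy chain ends in a stable nuclide, the total number of atoms is conserved, which gives a cheap sanity check on the implicit integration.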
ORPHEE research reactor: 3D core depletion calculation using Monte-Carlo code TRIPOLI-4®
Damian, F.; Brun, E.
2014-06-01
ORPHEE is a research reactor located at CEA Saclay that produces neutron beams for experiments. It is a pool-type reactor (heavy water) whose core is cooled by light water; its thermal power is 14 MW. The ORPHEE core is 90 cm high with a cross section of 27 x 27 cm2. It is loaded with eight fuel assemblies characterized by varying numbers of fuel plates. The fuel plates are composed of aluminium and high enriched uranium (HEU). It is a once-through core with a fuel cycle length of approximately 100 equivalent full power days (EFPD) and a maximum burnup of 40%. Various analyses in progress at CEA concern the determination of the core neutronic parameters during irradiation. Given the geometrical complexity of the core and the near absence of thermal feedback at nominal operation, the 3D core depletion calculations are performed using the Monte-Carlo code TRIPOLI-4® [1,2,3]. A preliminary validation of the depletion calculation was performed on a 2D core configuration by comparison with the deterministic transport code APOLLO2 [4]. The analysis showed the reliability of TRIPOLI-4® for calculating a complex core configuration with a large number of depleting regions with a high level of confidence.
Adding trend data to Depletion-Based Stock Reduction Analysis
National Oceanic and Atmospheric Administration, Department of Commerce — A Bayesian model of Depletion-Based Stock Reduction Analysis (DB-SRA), informed by a time series of abundance indexes, was developed, using the Sampling Importance...
Accelerated GPU based SPECT Monte Carlo simulations.
Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris
2016-06-07
Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three isotopes were considered: 99mTc, 111In and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator, respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE Infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between the GATE and GGEMS platforms derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Moreover, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency.
Monte Carlo Based Framework to Support HAZOP Study
DEFF Research Database (Denmark)
Danko, Matej; Frutiger, Jerome; Jelemenský, Ľudovít
2017-01-01
This study combines Monte Carlo based process simulation features with classical hazard identification techniques to investigate the consequences of deviations from normal operating conditions and to examine process safety. A Monte Carlo based method has been used to sample and evaluate different… deviations in process parameters simultaneously, thereby improving on the Hazard and Operability study (HAZOP), which normally considers only one deviation in process parameters at a time. Furthermore, Monte Carlo filtering was then used to identify operability and hazard issues including…
Research on perturbation based Monte Carlo reactor criticality search
International Nuclear Information System (INIS)
Li Zeguang; Wang Kan; Li Yangliu; Deng Jingkang
2013-01-01
Criticality search is a very important aspect of reactor physics analysis. Due to the advantages of the Monte Carlo method and the development of computer technologies, Monte Carlo criticality search is becoming more and more necessary and feasible. The traditional Monte Carlo criticality search method suffers from the large number of individual criticality runs required and from the uncertainty and fluctuation of Monte Carlo results. A new Monte Carlo criticality search method based on perturbation calculation is put forward in this paper to overcome the disadvantages of the traditional method. Using only one criticality run to obtain the initial k_eff and the differential coefficients of the concerned parameter, a polynomial estimator of the k_eff response function is solved to find the critical value of the parameter. The feasibility of this method was tested. The results show that the accuracy and efficiency of the perturbation based criticality search method are quite inspiring and that the method overcomes the disadvantages of the traditional one. (authors)
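The polynomial estimator step can be sketched directly: given k_eff and its derivatives with respect to the search parameter from a single run, build the Taylor polynomial of k(p) and solve k(p) = 1. A minimal NumPy illustration; the coefficient values in the demo are hypothetical, not from the paper:

```python
import math
import numpy as np

def critical_value(k0, derivs, p0=0.0):
    """Root of the Taylor estimator k(p) = 1 nearest to p0.

    k0     : k_eff at the reference parameter value p0
    derivs : [dk/dp, d2k/dp2, ...] evaluated at p0
    """
    # Taylor coefficients of k about p0, constant term first
    coeffs = [k0] + [d / math.factorial(n + 1) for n, d in enumerate(derivs)]
    poly = list(reversed(coeffs))     # numpy wants highest order first
    poly[-1] -= 1.0                   # solve k(p) - 1 = 0
    roots = np.roots(poly)
    real = roots[np.abs(roots.imag) < 1e-9].real
    return p0 + real[np.argmin(np.abs(real))]

# Demo: k(p) = 1.02 - 0.01 p  ->  critical at p = 2.0
p_crit = critical_value(1.02, [-0.01])
```

A single criticality run with differential tallies thus replaces the sequence of re-runs a conventional search would need.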
DEFF Research Database (Denmark)
Stenbæk, Dea Siggaard; Einarsdottir, H S; Goregliad-Fjaellingsdal, T
2016-01-01
Acute Tryptophan Depletion (ATD) is a dietary method used to modulate central 5-HT to study the effects of temporarily reduced 5-HT synthesis. The aim of this study is to evaluate a novel method of ATD using a gelatin-based collagen peptide (CP) mixture. We administered CP-Trp or CP+Trp mixtures...
Continuous energy Monte Carlo method based lattice homogenization
International Nuclear Information System (INIS)
Li Mancang; Yao Dong; Wang Kan
2014-01-01
Based on the Monte Carlo code MCNP, the continuous energy Monte Carlo multi-group constant generation code MCMC has been developed. The track length scheme is the foundation of the cross-section generation. The scattering matrix and the Legendre components require special techniques, and the scattering event method has been proposed to solve this problem. Three methods have been developed to calculate the diffusion coefficients for diffusion reactor core codes, and the Legendre method has been applied in MCMC. To satisfy the equivalence theory, the generalized equivalence theory (GET) and the superhomogenization method (SPH) have been applied to the Monte Carlo based group constants, and the super equivalence method (SPE) has been proposed to improve the equivalence. GET, SPH and SPE have all been implemented in MCMC. The numerical results showed that generating homogenized multi-group constants via the Monte Carlo method overcomes the difficulties in geometry and treats energy as a continuum, thus providing more accurate parameters. Moreover, the same code and data library can be used for a wide range of applications thanks to this versatility. The MCMC scheme can be seen as a potential alternative to the widely used deterministic lattice codes. (authors)
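The track length scheme at the foundation of the constant generation can be sketched for the simplest case, a one-group collapse over a two-region slab. This is a toy pure-absorber model with made-up cross sections, not MCMC itself:

```python
import math
import random

def homogenised_sigma(n_histories=20000, sigma=(0.5, 1.5),
                      widths=(1.0, 1.0), seed=7):
    """One-group collapse of a two-region pure absorber via track lengths."""
    rng = random.Random(seed)
    tl = [0.0] * len(sigma)                  # track-length flux tallies
    for _ in range(n_histories):
        x, region = 0.0, 0                   # particle born at x=0, moving +x
        while region < len(sigma):
            # Sample the free flight in the current region
            s = -math.log(1.0 - rng.random()) / sigma[region]
            boundary = sum(widths[:region + 1])
            if x + s < boundary:
                tl[region] += s              # collision here: absorbed
                break
            tl[region] += boundary - x       # score path up to the boundary
            x, region = boundary, region + 1
    # Flux-weighted (track-length weighted) one-group constant
    return sum(sg * t for sg, t in zip(sigma, tl)) / sum(tl)

sigma_hom = homogenised_sigma()
```

The same tallies, binned in energy and extended with scattering-event scoring, are what a multi-group constant generator accumulates.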
Huang, Lingli; Lin, Zhoumeng; Zhou, Xuan; Zhu, Meiling; Gehring, Ronette; Riviere, Jim E; Yuan, Zonghui
2015-01-01
Physiologically based pharmacokinetic (PBPK) models are powerful tools to predict tissue distribution and depletion of veterinary drugs in food animals. However, most models only simulate the pharmacokinetics of the parent drug without considering their metabolites. In this study, a PBPK model was developed to simultaneously describe the depletion in pigs of the food animal antimicrobial agent cyadox (CYA), and its marker residue 1,4-bisdesoxycyadox (BDCYA). The CYA and BDCYA sub-models included blood, liver, kidney, gastrointestinal tract, muscle, fat and other organ compartments. Extent of plasma-protein binding, renal clearance and tissue-plasma partition coefficients of BDCYA were measured experimentally. The model was calibrated with the reported pharmacokinetic and residue depletion data from pigs dosed by oral gavage with CYA for five consecutive days, and then extrapolated to exposure in feed for two months. The model was validated with 14 consecutive day feed administration data. This PBPK model accurately simulated CYA and BDCYA in four edible tissues at 24-120 h after both oral exposure and 2-month feed administration. There was only slight overestimation of CYA in muscle and BDCYA in kidney at earlier time points (6-12 h) when dosed in feed. Monte Carlo analysis revealed excellent agreement between the estimated concentration distributions and observed data. The present model could be used for tissue residue monitoring of CYA and BDCYA in food animals, and provides a foundation for developing PBPK models to predict residue depletion of both parent drugs and their metabolites in food animals.
Research on Monte Carlo application based on Hadoop
Directory of Open Access Journals (Sweden)
Wu Minglei
2018-01-01
The Monte Carlo method is also known as the random simulation method: the greater the number of trials, the more accurate the results obtained. A large number of random simulations is therefore required in order to reach a high degree of accuracy, but traditional stand-alone algorithms have difficulty meeting the needs of such large-scale simulation. Hadoop is a distributed computing platform built for large data sets and an open source project under Apache; as an open source software platform, it makes it easier to write and run applications that process massive amounts of data. This paper therefore takes the calculation of the value of π as an example, implements the Monte Carlo algorithm on the Hadoop platform, and obtains an accurate value of π by exploiting the advantage of Hadoop in distributed processing.
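The π example can be sketched as the map/reduce pair a Hadoop job would execute. Plain Python here; the driver loop stands in for the distributed framework, and the partition count is arbitrary:

```python
import random

def map_task(n_samples, seed):
    """Mapper: count random points of the unit square inside the circle."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return hits, n_samples

def reduce_task(partials):
    """Reducer: combine the (hits, total) pairs into the pi estimate."""
    hits = sum(h for h, _ in partials)
    total = sum(n for _, n in partials)
    return 4.0 * hits / total

# Driver loop standing in for the Hadoop framework: 8 map partitions
pi_est = reduce_task([map_task(50000, seed) for seed in range(8)])
```

Because the mappers are independent, Hadoop can run them on separate nodes, and the reducer only combines two integers per partition, which is what makes the method scale.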
DEVELOPMENT OF A MULTIMODAL MONTE CARLO BASED TREATMENT PLANNING SYSTEM.
Kumada, Hiroaki; Takada, Kenta; Sakurai, Yoshinori; Suzuki, Minoru; Takata, Takushi; Sakurai, Hideyuki; Matsumura, Akira; Sakae, Takeji
2017-10-26
To establish boron neutron capture therapy (BNCT), the University of Tsukuba is developing a treatment device and the peripheral devices required in BNCT, such as a treatment planning system. We are developing a new multimodal Monte Carlo based treatment planning system (development code name: Tsukuba Plan). Tsukuba Plan allows dose estimation in proton therapy, X-ray therapy and heavy ion therapy in addition to BNCT because the system employs PHITS as the Monte Carlo dose calculation engine. Regarding BNCT, several verifications of the system are being carried out for its practical usage. The verification results demonstrate that Tsukuba Plan provides accurate estimation of thermal neutron flux and gamma-ray dose, the fundamental radiation quantities of dosimetry in BNCT. In addition to the practical use of Tsukuba Plan in BNCT, we are investigating its application to other radiation therapies.
Mechanism-based biomarker gene sets for glutathione depletion-related hepatotoxicity in rats
International Nuclear Information System (INIS)
Gao Weihua; Mizukawa, Yumiko; Nakatsu, Noriyuki; Minowa, Yosuke; Yamada, Hiroshi; Ohno, Yasuo; Urushidani, Tetsuro
2010-01-01
Chemical-induced glutathione depletion is thought to be caused by two types of toxicological mechanisms: PHO-type glutathione depletion [glutathione conjugated with chemicals such as phorone (PHO) or diethyl maleate (DEM)], and BSO-type glutathione depletion [i.e., glutathione synthesis inhibited by chemicals such as L-buthionine-sulfoximine (BSO)]. In order to identify mechanism-based biomarker gene sets for glutathione depletion in rat liver, male SD rats were treated with various chemicals including PHO (40, 120 and 400 mg/kg), DEM (80, 240 and 800 mg/kg), BSO (150, 450 and 1500 mg/kg), and bromobenzene (BBZ, 10, 100 and 300 mg/kg). Liver samples were taken 3, 6, 9 and 24 h after administration and examined for hepatic glutathione content, physiological and pathological changes, and gene expression changes using Affymetrix GeneChip Arrays. To identify differentially expressed probe sets in response to glutathione depletion, we focused on the following two courses of events for the two types of mechanisms of glutathione depletion: a) gene expression changes occurring simultaneously in response to glutathione depletion, and b) gene expression changes after glutathione was depleted. The gene expression profiles of the identified probe sets for the two types of glutathione depletion differed markedly at times during and after glutathione depletion, whereas Srxn1 was markedly increased for both types as glutathione was depleted, suggesting that Srxn1 is a key molecule in oxidative stress related to glutathione. The extracted probe sets were refined and verified using various compounds including 13 additional positive or negative compounds, and they established two useful marker sets. One contained three probe sets (Akr7a3, Trib3 and Gstp1) that could detect conjugation-type glutathione depletors any time within 24 h after dosing, and the other contained 14 probe sets that could detect glutathione depletors by any mechanism. These two sets, with appropriate scoring
Efficient sampling algorithms for Monte Carlo based treatment planning
International Nuclear Information System (INIS)
DeMarco, J.J.; Solberg, T.D.; Chetty, I.; Smathers, J.B.
1998-01-01
Efficient sampling algorithms are necessary for producing a fast Monte Carlo based treatment planning code. This study evaluates several aspects of a photon-based tracking scheme and the effect of optimal sampling algorithms on the efficiency of the code. Four areas were tested: pseudo-random number generation, generalized sampling of a discrete distribution, sampling from the exponential distribution, and delta scattering as applied to photon transport through a heterogeneous simulation geometry. Generalized sampling of a discrete distribution using the cutpoint method can produce speedup gains of one order of magnitude versus conventional sequential sampling. Photon transport modifications based upon the delta scattering method were implemented and compared with a conventional boundary and collision checking algorithm. The delta scattering algorithm is faster by a factor of six versus the conventional algorithm for a boundary size of 5 mm within a heterogeneous geometry. A comparison of portable pseudo-random number algorithms and exponential sampling techniques is also discussed
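The delta scattering method compared above can be sketched in a few lines. This is a 1-D toy assuming a known majorant cross section; the geometry and values are illustrative, not those of the study:

```python
import math
import random

def woodcock_flight(x0, sigma_of_x, sigma_maj, rng):
    """Distance from x0 to the next *real* collision along +x.

    Flights are sampled from the constant majorant sigma_maj; a candidate
    collision at x is accepted with probability sigma(x)/sigma_maj and
    otherwise treated as a virtual (delta scattering) event.
    """
    x = x0
    while True:
        x += -math.log(1.0 - rng.random()) / sigma_maj   # majorant flight
        if rng.random() < sigma_of_x(x) / sigma_maj:     # real collision?
            return x - x0

# Homogeneous sanity check: with sigma(x) = 2 everywhere, the sampled mean
# free path must come out as 1/2 regardless of the majorant used (5 here).
rng = random.Random(42)
mfp = sum(woodcock_flight(0.0, lambda x: 2.0, 5.0, rng)
          for _ in range(20000)) / 20000
```

Because flights are always sampled from the constant majorant, no surface-crossing logic is needed; heterogeneity is handled entirely by the accept/reject step, which is why the method wins once boundary checking dominates the tracking cost.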
GPU-Monte Carlo based fast IMRT plan optimization
Directory of Open Access Journals (Sweden)
Yongbao Li
2014-03-01
Full Text Available Purpose: Intensity-modulated radiation treatment (IMRT) plan optimization needs pre-calculated beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computation speed. However, inaccurate beamlet dose distributions, particularly in cases with high levels of inhomogeneity, may mislead the optimization and hinder the resulting plan quality. It is desirable to use Monte Carlo (MC) methods for beamlet dose calculations, yet the long computational time from repeated dose calculations for a large number of beamlets prevents this application. Our objective is to integrate a GPU-based MC dose engine in lung IMRT optimization using a novel two-step workflow. Methods: A GPU-based MC code, gDPM, is used. Each particle is tagged with the index of the beamlet from which the source particle originates. Deposited dose is stored separately for each beamlet based on this index. Due to the limited GPU memory size, a pyramid space is allocated for each beamlet, and dose outside the space is neglected. A two-step optimization workflow is proposed for fast MC-based optimization. In the first step, a rough dose calculation is conducted with only a small number of particles per beamlet. Plan optimization then follows to obtain an approximate fluence map. In the second step, more accurate beamlet doses are calculated, where the number of particles sampled for a beamlet is proportional to the intensity determined previously. A second-round optimization is conducted, yielding the final result. Results: For a lung case with 5317 beamlets, 10^5 particles per beamlet in the first round and 10^8 particles per beam in the second round are enough to obtain a good plan quality. The total simulation time is 96.4 sec. Conclusion: A fast GPU-based MC dose calculation method along with a novel two-step optimization workflow has been developed. The high efficiency allows the use of MC for IMRT optimizations.
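The second-step particle allocation described in the abstract (more particles for high-intensity beamlets) reduces, in essence, to a proportional budget; a minimal sketch with hypothetical names and a made-up floor parameter:

```python
def second_round_budget(fluence, total_particles, floor=100):
    # particles per beamlet proportional to first-round fluence, with a floor
    # so that no beamlet is starved of statistics entirely
    s = sum(fluence)
    return [max(floor, int(round(total_particles * f / s))) for f in fluence]
```

Spending the particle budget where the fluence map says it matters is what lets the second round reach good accuracy without simulating every beamlet at full statistics.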
Markov chain Monte Carlo sampling based terahertz holography image denoising.
Chen, Guanghao; Li, Qi
2015-05-10
Terahertz digital holography has attracted much attention in recent years. This technology combines the strong transmittance of terahertz and the unique features of digital holography. Nonetheless, the low clarity of the captured images has hampered the popularization of this imaging technique. In this paper, we perform a digital image denoising technique on our multiframe superposed images. The noise suppression model is formulated as Bayesian least squares estimation and is solved with Markov chain Monte Carlo (MCMC) sampling. In this algorithm, a weighted mean filter with a Gaussian kernel is first applied to the noisy image, and then by nonlinear contrast transform, the contrast of the image is restored to the former level. By randomly walking on the preprocessed image, the MCMC-based filter keeps collecting samples, assigning them weights by similarity assessment, and constructs multiple sample sequences. Finally, these sequences are used to estimate the value of each pixel. Our algorithm shares some good qualities with nonlocal means filtering and the algorithm based on conditional sampling proposed by Wong et al. [Opt. Express 18, 8338 (2010), 10.1364/OE.18.008338], such as good uniformity, and, moreover, reveals better performance in structure preservation, as shown in numerical comparison using the structural similarity index measurement and the peak signal-to-noise ratio.
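A heavily simplified, single-pixel, single-sequence version of such a random-walk estimator might look like the sketch below (hypothetical names; a Gaussian intensity-similarity weight stands in for the paper's similarity assessment, which operates on patches and multiple sequences):

```python
import math
import random

def mcmc_denoise_pixel(img, x, y, steps=500, sigma=20.0, seed=0):
    # estimate one pixel by a random walk that collects similarity-weighted
    # samples from nearby pixels
    rng = random.Random(seed)
    h, w = len(img), len(img[0])
    cx, cy = x, y
    wsum = vsum = 0.0
    for _ in range(steps):
        nx = min(max(cx + rng.choice([-1, 0, 1]), 0), h - 1)
        ny = min(max(cy + rng.choice([-1, 0, 1]), 0), w - 1)
        wgt = math.exp(-((img[nx][ny] - img[x][y]) ** 2) / (2.0 * sigma ** 2))
        if rng.random() < wgt:   # walk prefers to settle on similar pixels
            cx, cy = nx, ny
        wsum += wgt
        vsum += wgt * img[nx][ny]
    return vsum / wsum           # weighted mean of the collected samples
```

On a constant image every sample carries unit weight, so the estimate reproduces the pixel exactly; on a noisy image dissimilar samples are down-weighted, which is the mechanism behind the nonlocal-means-like behaviour the abstract describes.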
The new MCNP6 depletion capability
International Nuclear Information System (INIS)
Fensin, M. L.; James, M. R.; Hendricks, J. S.; Goorley, J. T.
2012-01-01
The first MCNP based in-line Monte Carlo depletion capability was officially released from the Radiation Safety Information and Computational Center as MCNPX 2.6.0. Both the MCNP5 and MCNPX codes have historically provided a successful combinatorial geometry based, continuous energy, Monte Carlo radiation transport solution for advanced reactor modeling and simulation. However, due to separate development pathways, useful simulation capabilities were dispersed between both codes and not unified in a single technology. MCNP6, the next evolution in the MCNP suite of codes, now combines the capability of both simulation tools, as well as providing new advanced technology, in a single radiation transport code. We describe here the new capabilities of the MCNP6 depletion code dating from the official RSICC release MCNPX 2.6.0, reported previously, to the now current state of MCNP6. NEA/OECD benchmark results are also reported. The MCNP6 depletion capability enhancements beyond MCNPX 2.6.0 reported here include: (1) new performance enhancing parallel architecture that implements both shared and distributed memory constructs; (2) enhanced memory management that maximizes calculation fidelity; and (3) improved burnup physics for better nuclide prediction. MCNP6 depletion enables complete, relatively easy-to-use depletion calculations in a single Monte Carlo code. The enhancements described here help provide a powerful capability as well as dictate a path forward for future development to improve the usefulness of the technology. (authors)
The New MCNP6 Depletion Capability
International Nuclear Information System (INIS)
Fensin, Michael Lorne; James, Michael R.; Hendricks, John S.; Goorley, John T.
2012-01-01
The first MCNP based inline Monte Carlo depletion capability was officially released from the Radiation Safety Information and Computational Center as MCNPX 2.6.0. Both the MCNP5 and MCNPX codes have historically provided a successful combinatorial geometry based, continuous energy, Monte Carlo radiation transport solution for advanced reactor modeling and simulation. However, due to separate development pathways, useful simulation capabilities were dispersed between both codes and not unified in a single technology. MCNP6, the next evolution in the MCNP suite of codes, now combines the capability of both simulation tools, as well as providing new advanced technology, in a single radiation transport code. We describe here the new capabilities of the MCNP6 depletion code dating from the official RSICC release MCNPX 2.6.0, reported previously, to the now current state of MCNP6. NEA/OECD benchmark results are also reported. The MCNP6 depletion capability enhancements beyond MCNPX 2.6.0 reported here include: (1) new performance enhancing parallel architecture that implements both shared and distributed memory constructs; (2) enhanced memory management that maximizes calculation fidelity; and (3) improved burnup physics for better nuclide prediction. MCNP6 depletion enables complete, relatively easy-to-use depletion calculations in a single Monte Carlo code. The enhancements described here help provide a powerful capability as well as dictate a path forward for future development to improve the usefulness of the technology.
Image based Monte Carlo modeling for computational phantom
International Nuclear Information System (INIS)
Cheng, M.; Wang, W.; Zhao, K.; Fan, Y.; Long, P.; Wu, Y.
2013-01-01
Full text of the publication follows. The evaluation of the effects of ionizing radiation and the risk of radiation exposure to the human body has become one of the most important issues in the radiation protection and radiotherapy fields, as it helps to avoid unnecessary radiation and decrease harm to the human body. In order to accurately evaluate the dose to the human body, it is necessary to construct a more realistic computational phantom. However, manual description and verification of models for Monte Carlo (MC) simulation are tedious, error-prone and time-consuming. In addition, it is difficult to locate and fix geometry errors, and difficult to describe material information and assign it to cells. MCAM (CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport Simulation) was developed as an interface program to achieve both CAD- and image-based automatic modeling. The advanced version (Version 6) of MCAM can achieve automatic conversion from CT/segmented sectioned images to computational phantoms such as MCNP models. The image-based automatic modeling program (MCAM 6.0) has been tested with several medical images and sectioned images, and it has been applied in the construction of Rad-HUMAN. Following manual segmentation and 3D reconstruction, a whole-body computational phantom of a Chinese adult female, called Rad-HUMAN, was created using MCAM 6.0 from sectioned images of a Chinese visible human dataset. Rad-HUMAN contains 46 organs/tissues, which faithfully represent the average anatomical characteristics of the Chinese female. The dose conversion coefficients (Dt/Ka) from kerma free-in-air to absorbed dose of Rad-HUMAN were calculated. Rad-HUMAN can be applied to predict and evaluate dose distributions in a Treatment Planning System (TPS), as well as radiation exposure for the human body in radiation protection. (authors)
Image based Monte Carlo Modeling for Computational Phantom
Cheng, Mengyun; Wang, Wen; Zhao, Kai; Fan, Yanchang; Long, Pengcheng; Wu, Yican
2014-06-01
The evaluation of the effects of ionizing radiation and the risk of radiation exposure to the human body has become one of the most important issues in the radiation protection and radiotherapy fields, as it helps to avoid unnecessary radiation and decrease harm to the human body. In order to accurately evaluate the dose to the human body, it is necessary to construct a more realistic computational phantom. However, manual description and verification of models for Monte Carlo (MC) simulation are tedious, error-prone and time-consuming. In addition, it is difficult to locate and fix geometry errors, and difficult to describe material information and assign it to cells. MCAM (CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport Simulation) was developed as an interface program to achieve both CAD- and image-based automatic modeling by the FDS Team (Advanced Nuclear Energy Research Team, http://www.fds.org.cn). The advanced version (Version 6) of MCAM can achieve automatic conversion from CT/segmented sectioned images to computational phantoms such as MCNP models. The image-based automatic modeling program (MCAM 6.0) has been tested with several medical images and sectioned images, and it has been applied in the construction of Rad-HUMAN. Following manual segmentation and 3D reconstruction, a whole-body computational phantom of a Chinese adult female, called Rad-HUMAN, was created using MCAM 6.0 from sectioned images of a Chinese visible human dataset. Rad-HUMAN contains 46 organs/tissues, which faithfully represent the average anatomical characteristics of the Chinese female. The dose conversion coefficients (Dt/Ka) from kerma free-in-air to absorbed dose of Rad-HUMAN were calculated. Rad-HUMAN can be applied to predict and evaluate dose distributions in a Treatment Planning System (TPS), as well as radiation exposure for the human body in radiation protection.
Dieudonne, Cyril; Dumonteil, Eric; Malvagi, Fausto; M'Backé Diop, Cheikh
2014-06-01
For several years, Monte Carlo burnup/depletion codes have appeared which couple Monte Carlo codes, simulating the neutron transport, to deterministic methods, which handle the medium depletion due to the neutron flux. Solving the Boltzmann and Bateman equations in such a way allows tracking of fine 3-dimensional effects and gets rid of the multi-group hypotheses made by deterministic solvers. The counterpart is the prohibitive calculation time due to the Monte Carlo solver being called at each time step. In this paper we present a methodology to avoid the repetitive and time-expensive Monte Carlo simulations and to replace them by perturbation calculations: indeed, the different burnup steps may be seen as perturbations of the isotopic concentration of an initial Monte Carlo simulation. First, we present this method and provide details on the perturbative technique used, namely correlated sampling. Second, the implementation of this method in the TRIPOLI-4® code is discussed, as well as the precise calculation scheme able to bring an important speed-up of the depletion calculation. Finally, this technique is used to calculate the depletion of a PWR-like (REP) assembly, studied at the beginning of its cycle. After having validated the method with a reference calculation, we show that it can speed up standard Monte Carlo depletion codes by nearly an order of magnitude.
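The core idea of correlated sampling is to re-use one set of sampled histories and re-weight each score by the likelihood ratio between the perturbed and unperturbed cross sections. A toy illustration for transmission through a purely absorbing slab, with hypothetical names and an analytically checkable answer (not the TRIPOLI-4 implementation):

```python
import math
import random

def transmission_correlated(sigma0, sigma1, length, n=200_000, seed=1):
    # Sample free flights once at sigma0; score the unperturbed transmission
    # and, via a weight, the perturbed (sigma1) transmission from the SAME run.
    rng = random.Random(seed)
    t0 = t1 = 0.0
    w = math.exp(-(sigma1 - sigma0) * length)   # likelihood ratio for escape
    for _ in range(n):
        d = -math.log(1.0 - rng.random()) / sigma0  # free flight at sigma0
        if d > length:                              # particle escapes the slab
            t0 += 1.0
            t1 += w
    return t0 / n, t1 / n
```

One transport run at sigma0 yields estimates for both cross sections: the weighted score converges to exp(-sigma1*L) without a second simulation, which is the effect exploited when the burnup steps are treated as perturbations of an initial Monte Carlo calculation.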
EXPERIMENTAL ACIDIFICATION CAUSES SOIL BASE-CATION DEPLETION AT THE BEAR BROOK WATERSHED IN MAINE
There is concern that changes in atmospheric deposition, climate, or land use have altered the biogeochemistry of forests causing soil base-cation depletion, particularly Ca. The Bear Brook Watershed in Maine (BBWM) is a paired watershed experiment with one watershed subjected to...
Experimental Acidification Causes Soil Base-Cation Depletion at the Bear Brook Watershed in Maine
Ivan J. Fernandez; Lindsey E. Rustad; Stephen A. Norton; Jeffrey S. Kahl; Bernard J. Cosby
2003-01-01
There is concern that changes in atmospheric deposition, climate, or land use have altered the biogeochemistry of forests causing soil base-cation depletion, particularly Ca. The Bear Brook Watershed in Maine (BBWM) is a paired watershed experiment with one watershed subjected to elevated N and S deposition through bimonthly additions of (NH4)2SO4. Quantitative soil...
Monte Carlo-based simulation of dynamic jaws tomotherapy
International Nuclear Information System (INIS)
Sterpin, E.; Chen, Y.; Chen, Q.; Lu, W.; Mackie, T. R.; Vynckier, S.
2011-01-01
Purpose: Original TomoTherapy systems may involve a trade-off between conformity and treatment speed, the user being limited to three slice widths (1.0, 2.5, and 5.0 cm). This could be overcome by allowing the jaws to define arbitrary fields, including very small slice widths (<1 cm), which are challenging for a beam model. The aim of this work was to incorporate the dynamic jaws feature into a Monte Carlo (MC) model called TomoPen, based on the MC code PENELOPE, previously validated for the original TomoTherapy system. Methods: To keep the general structure of TomoPen and its efficiency, the simulation strategy introduces several techniques: (1) weight modifiers to account for any jaw settings using only the 5 cm phase-space file; (2) a simplified MC based model called FastStatic to compute the modifiers faster than pure MC; (3) actual simulation of dynamic jaws. Weight modifiers computed with both FastStatic and pure MC were compared. Dynamic jaws simulations were compared with the convolution/superposition (C/S) of TomoTherapy in the ''cheese'' phantom for a plan with two targets longitudinally separated by a gap of 3 cm. Optimization was performed in two modes: asymmetric jaws-constant couch speed (''running start stop,'' RSS) and symmetric jaws-variable couch speed (''symmetric running start stop,'' SRSS). Measurements with EDR2 films were also performed for RSS for the formal validation of TomoPen with dynamic jaws. Results: Weight modifiers computed with FastStatic were equivalent to pure MC within statistical uncertainties (0.5% for three standard deviations). Excellent agreement was achieved between TomoPen and C/S for both asymmetric jaw opening/constant couch speed and symmetric jaw opening/variable couch speed, with deviations well within 2%/2 mm. For RSS procedure, agreement between C/S and measurements was within 2%/2 mm for 95% of the points and 3%/3 mm for 98% of the points, where dose is greater than 30% of the prescription dose (gamma analysis
International Nuclear Information System (INIS)
Huffer, E.; Nifenecker, H.
2001-02-01
This document deals with the physical, chemical and radiological properties of depleted uranium. What is depleted uranium? Why do the military use depleted uranium, and what are the risks to health? (A.L.B.)
CAD-based Monte Carlo program for integrated simulation of nuclear system SuperMC
International Nuclear Information System (INIS)
Wu, Yican; Song, Jing; Zheng, Huaqing; Sun, Guangyao; Hao, Lijuan; Long, Pengcheng; Hu, Liqin
2015-01-01
Highlights: • The newly developed CAD-based Monte Carlo program SuperMC for integrated simulation of nuclear systems makes use of hybrid MC-deterministic methods and advanced computer technologies. SuperMC is designed to perform transport calculations of various types of particles; depletion and activation calculations including isotope burn-up, material activation and shutdown dose; and multi-physics coupling calculations including thermo-hydraulics, fuel performance and structural mechanics. Bi-directional automatic conversion between general CAD models and calculation models with physical settings is well supported. Results and the simulation process can be visualized with dynamic 3D datasets and geometry models. Continuous-energy cross section, burnup, activation, irradiation damage and material data are used to support the multi-process simulation. An advanced cloud computing framework offers the computation- and storage-intensive simulation as a network service to support design optimization and assessment. The modular design and generic interfaces promote flexible manipulation and coupling of external solvers. • The newly developed advanced methods incorporated in SuperMC are introduced, including the hybrid MC-deterministic transport method, particle physical interaction treatment, multi-physics coupling calculation, automatic geometry modeling and processing, intelligent data analysis and visualization, elastic cloud computing technology and parallel calculation. • The functions of SuperMC 2.1, integrating automatic modeling, neutron and photon transport calculation, and results and process visualization, are introduced. It has been validated using a series of benchmark cases such as the fusion reactor ITER model and the fast reactor BN-600 model. - Abstract: The Monte Carlo (MC) method has distinct advantages for simulating complicated nuclear systems and is envisioned as a routine
Mary Beth Adams; James A. Burger
2010-01-01
Soil acidification and base cation depletion are concerns for those wishing to manage central Appalachian hardwood forests sustainably. In this research, 2 experiments were established in 1996 and 1997 in two forest types common in the central Appalachian hardwood forests, to examine how these important forests respond to depletion of nutrients such as calcium and...
Comparison of two lung clearance models based on the dissolution rates of oxidized depleted uranium
International Nuclear Information System (INIS)
Crist, K.C.
1984-10-01
An in-vitro dissolution study was conducted on two respirable oxidized depleted uranium samples. The dissolution rates generated from this study were then utilized in the International Commission on Radiological Protection Task Group lung clearance model and a lung clearance model proposed by Cuddihy. Predictions from both models based on the dissolution rates of the amount of oxidized depleted uranium that would be cleared to blood from the pulmonary region following an inhalation exposure were compared. It was found that the predictions made by both models differed considerably. The difference between the predictions was attributed to the differences in the way each model perceives the clearance from the pulmonary region. 33 references, 11 figures, 9 tables
Comparison of two lung clearance models based on the dissolution rates of oxidized depleted uranium
Energy Technology Data Exchange (ETDEWEB)
Crist, K.C.
1984-10-01
An in-vitro dissolution study was conducted on two respirable oxidized depleted uranium samples. The dissolution rates generated from this study were then utilized in the International Commission on Radiological Protection Task Group lung clearance model and a lung clearance model proposed by Cuddihy. Predictions from both models based on the dissolution rates of the amount of oxidized depleted uranium that would be cleared to blood from the pulmonary region following an inhalation exposure were compared. It was found that the predictions made by both models differed considerably. The difference between the predictions was attributed to the differences in the way each model perceives the clearance from the pulmonary region. 33 references, 11 figures, 9 tables.
Acceptance and implementation of a computerized planning system based on Monte Carlo
International Nuclear Information System (INIS)
Lopez-Tarjuelo, J.; Garcia-Molla, R.; Suan-Senabre, X. J.; Quiros-Higueras, J. Q.; Santos-Serra, A.; Marco-Blancas, N.; Calzada-Feliu, S.
2013-01-01
The Monaco computerized planning system has been accepted for clinical use. It is based on a virtual model of the energy yield of the linear electron accelerator head and calculates the dose with an x-ray algorithm (XVMC) based on Monte Carlo. (Author)
Streshinsky, Matthew; Ayazi, Ali; Xuan, Zhe; Lim, Andy Eu-Jin; Lo, Guo-Qiang; Baehr-Jones, Tom; Hochberg, Michael
2013-02-11
We present measurements of the nonlinear distortions of a traveling-wave silicon Mach-Zehnder modulator based on the carrier depletion effect. Spurious-free dynamic range for second harmonic distortion of 82 dB·Hz^(1/2) is seen, and 97 dB·Hz^(2/3) is measured for intermodulation distortion. This measurement represents an improvement of 20 dB over the previous best result in silicon. We also show that the linearity of a silicon traveling-wave Mach-Zehnder modulator can be improved by differentially driving it. These results suggest silicon may be a suitable platform for analog optical applications.
Ultra Depleted Mantle at the Gakkel Ridge Based on Hafnium and Neodymium Isotopes
Salters, V. J.; Dick, H. J.
2011-12-01
basalts. The relatively depleted nature of the peridotites requires that a relatively large amount of peridotite has to contribute to the aggregated basaltic melt. Apart from documenting the heterogeneous nature of the MORB mantle, it also indicates that in addition to MORB-like mantle a far more depleted mantle exists. Based on abyssal peridotite trace element compositions and on melting calculations, these extreme peridotites could have a complicated history, and this might not be the first time they have passed through a ridge melting regime. They likely represent ancient residual lithosphere. Because these peridotites are already depleted they will contribute little in terms of major elements or incompatible trace elements to the melts. The Hf-Nd isotope variations in MORB, whereby MORB from individual ridge segments form parallel arrays offset in Hf-isotopic composition, can also be explained by the existence of a highly depleted component like ancient residual lithosphere, called ReLish (Salters et al., 2011, G3). Liu et al. (2008) Nature, 452, 311-315; Stracke et al. (2011) Earth Plan. Sci. Lett. 308, 359-368.
Energy Technology Data Exchange (ETDEWEB)
Lopez-Tarjuelo, J.; Garcia-Molla, R.; Suan-Senabre, X. J.; Quiros-Higueras, J. Q.; Santos-Serra, A.; Marco-Blancas, N.; Calzada-Feliu, S.
2013-07-01
The Monaco computerized planning system has been accepted for clinical use. It is based on a virtual model of the energy yield of the linear electron accelerator head and calculates the dose with an x-ray algorithm (XVMC) based on Monte Carlo. (Author)
Response matrix Monte Carlo based on a general geometry local calculation for electron transport
International Nuclear Information System (INIS)
Ballinger, C.T.; Rathkopf, J.A.; Martin, W.R.
1991-01-01
A Response Matrix Monte Carlo (RMMC) method has been developed for solving electron transport problems. This method was born of the need to have a reliable, computationally efficient transport method for low energy electrons (below a few hundred keV) in all materials. Today, condensed history methods are used, which reduce the computation time by modeling the combined effect of many collisions but fail at low energy because of the assumptions required to characterize the electron scattering. Analog Monte Carlo simulations are prohibitively expensive since electrons undergo coulombic scattering with little state change after a collision. The RMMC method attempts to combine the accuracy of an analog Monte Carlo simulation with the speed of the condensed history methods. Like condensed history, the RMMC method uses probability distribution functions (PDFs) to describe the energy and direction of the electron after several collisions. However, unlike the condensed history method, the PDFs are based on an analog Monte Carlo simulation over a small region. Condensed history theories require assumptions about the electron scattering to derive the PDFs for direction and energy. Thus the RMMC method samples from PDFs which more accurately represent the electron random walk. Results show good agreement between the RMMC method and analog Monte Carlo. 13 refs., 8 figs
Determination of the spatial response of neutron based analysers using a Monte Carlo based method
Tickner
2000-10-01
One of the principal advantages of using thermal neutron capture (TNC, also called prompt gamma neutron activation analysis or PGNAA) or neutron inelastic scattering (NIS) techniques for measuring elemental composition is the high penetrating power of both the incident neutrons and the resultant gamma-rays, which means that large sample volumes can be interrogated. Gauges based on these techniques are widely used in the mineral industry for on-line determination of the composition of bulk samples. However, attenuation of both neutrons and gamma-rays in the sample and geometric (source/detector distance) effects typically result in certain parts of the sample contributing more to the measured composition than others. In turn, this introduces errors in the determination of the composition of inhomogeneous samples. This paper discusses a combined Monte Carlo/analytical method for estimating the spatial response of a neutron gauge. Neutron propagation is handled using a Monte Carlo technique which allows an arbitrarily complex neutron source and gauge geometry to be specified. Gamma-ray production and detection is calculated analytically which leads to a dramatic increase in the efficiency of the method. As an example, the method is used to study ways of reducing the spatial sensitivity of on-belt composition measurements of cement raw meal.
Confronting uncertainty in model-based geostatistics using Markov Chain Monte Carlo simulation
Minasny, B.; Vrugt, J.A.; McBratney, A.B.
2011-01-01
This paper demonstrates for the first time the use of Markov Chain Monte Carlo (MCMC) simulation for parameter inference in model-based soil geostatistics. We implemented the recently developed DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm to jointly summarize the posterior
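At its core, MCMC parameter inference of this kind rests on a Metropolis-type sampler; a minimal random-walk Metropolis sketch on a toy one-dimensional posterior is shown below (hypothetical names; DREAM itself runs multiple adaptive chains and is considerably more involved):

```python
import math
import random

def metropolis(log_post, x0, n=20000, step=0.5, seed=0):
    # random-walk Metropolis: propose a Gaussian step, accept with
    # probability min(1, post(x') / post(x))
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n):
        xp = x + rng.gauss(0.0, step)
        lpp = log_post(xp)
        if rng.random() < math.exp(min(0.0, lpp - lp)):
            x, lp = xp, lpp
        chain.append(x)
    return chain

# toy posterior: unit-variance Gaussian centred on 2
chain = metropolis(lambda t: -0.5 * (t - 2.0) ** 2, 0.0)
posterior_mean = sum(chain[5000:]) / len(chain[5000:])
```

After discarding a burn-in, the chain summarizes the posterior directly (mean, quantiles), which is how such samplers expose parameter uncertainty rather than a single best fit.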
Development of Monte Carlo-based pebble bed reactor fuel management code
International Nuclear Information System (INIS)
Setiadipura, Topan; Obara, Toru
2014-01-01
Highlights: • A new Monte Carlo-based fuel management code for OTTO cycle pebble bed reactor was developed. • The double-heterogeneity was modeled using statistical method in MVP-BURN code. • The code can perform analysis of equilibrium and non-equilibrium phase. • Code-to-code comparisons for Once-Through-Then-Out case were investigated. • Ability of the code to accommodate the void cavity was confirmed. - Abstract: A fuel management code for pebble bed reactors (PBRs) based on the Monte Carlo method has been developed in this study. The code, named Monte Carlo burnup analysis code for PBR (MCPBR), enables a simulation of the Once-Through-Then-Out (OTTO) cycle of a PBR from the running-in phase to the equilibrium condition. In MCPBR, a burnup calculation based on a continuous-energy Monte Carlo code, MVP-BURN, is coupled with an additional utility code to be able to simulate the OTTO cycle of PBR. MCPBR has several advantages in modeling PBRs, namely its Monte Carlo neutron transport modeling, its capability of explicitly modeling the double heterogeneity of the PBR core, and its ability to model different axial fuel speeds in the PBR core. Analysis at the equilibrium condition of the simplified PBR was used as the validation test of MCPBR. The calculation results of the code were compared with the results of diffusion-based fuel management PBR codes, namely the VSOP and PEBBED codes. Using JENDL-4.0 nuclide library, MCPBR gave a 4.15% and 3.32% lower k eff value compared to VSOP and PEBBED, respectively. While using JENDL-3.3, MCPBR gave a 2.22% and 3.11% higher k eff value compared to VSOP and PEBBED, respectively. The ability of MCPBR to analyze neutron transport in the top void of the PBR core and its effects was also confirmed
International Nuclear Information System (INIS)
Stern, R.E.; Song, J.; Work, D.B.
2017-01-01
The two-terminal reliability problem in system reliability analysis is known to be computationally intractable for large infrastructure graphs. Monte Carlo techniques can estimate the probability of a disconnection between two points in a network by selecting a representative sample of network component failure realizations and determining the source-terminal connectivity of each realization. To reduce the runtime required for the Monte Carlo approximation, this article proposes an approximate framework in which the connectivity check of each sample is estimated using a machine-learning-based classifier. The framework is implemented using both a support vector machine (SVM) and a logistic regression based surrogate model. Numerical experiments are performed on the California gas distribution network using the epicenter and magnitude of the 1989 Loma Prieta earthquake as well as randomly-generated earthquakes. It is shown that the SVM and logistic regression surrogate models are able to predict network connectivity with accuracies of 99% for both methods, and are 1–2 orders of magnitude faster than using a Monte Carlo method with an exact connectivity check. - Highlights: • Surrogate models of network connectivity are developed by machine-learning algorithms. • The developed surrogate models can reduce the runtime required for Monte Carlo simulations. • Support vector machines and logistic regressions are employed to develop the surrogate models. • A numerical example on the California gas distribution network demonstrates the proposed approach. • The developed models have accuracies of 99%, and are 1–2 orders of magnitude faster than MCS.
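The baseline the surrogates are compared against — Monte Carlo sampling of component failures with an exact source-terminal connectivity check — can be sketched as follows (hypothetical names; in the paper's framework a trained classifier would replace the connected() call to gain the reported speedup):

```python
import random

def connected(n, edges, alive):
    # depth-first search from node 0 to node n-1 over surviving edges
    adj = {i: [] for i in range(n)}
    for (u, v), up in zip(edges, alive):
        if up:
            adj[u].append(v)
            adj[v].append(u)
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return (n - 1) in seen

def mc_disconnection(n, edges, p_fail, samples=20_000, seed=0):
    # crude Monte Carlo estimate of the source-terminal disconnection
    # probability under independent edge failures
    rng = random.Random(seed)
    fails = 0
    for _ in range(samples):
        alive = [rng.random() >= p_fail for _ in edges]
        if not connected(n, edges, alive):
            fails += 1
    return fails / samples
```

The graph search dominates the runtime for large networks, which is exactly why approximating it with a fast classifier pays off.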
Development of the point-depletion code DEPTH
Energy Technology Data Exchange (ETDEWEB)
She, Ding [Department of Engineering Physics, Tsinghua University, Beijing 100084 (China); Wang, Kan, E-mail: wangkan@mail.tsinghua.edu.cn [Department of Engineering Physics, Tsinghua University, Beijing 100084 (China); Yu, Ganglin [Department of Engineering Physics, Tsinghua University, Beijing 100084 (China)
2013-05-15
Highlights: ► The DEPTH code has been developed for large-scale depletion systems. ► DEPTH uses data libraries which are convenient to couple with MC codes. ► TTA and matrix exponential methods are implemented and compared. ► DEPTH is able to calculate integral quantities based on the matrix inverse. ► Code-to-code comparisons prove the accuracy and efficiency of DEPTH. -- Abstract: Burnup analysis is an important aspect of reactor physics, which is generally done by coupling transport calculations and point-depletion calculations. DEPTH is a newly-developed point-depletion code for handling large depletion systems and detailed depletion chains. For better coupling with Monte Carlo transport codes, DEPTH uses data libraries based on the combination of ORIGEN-2 and ORIGEN-S and allows users to assign problem-dependent libraries for each depletion step. DEPTH implements various algorithms for treating stiff depletion systems, including transmutation trajectory analysis (TTA), the Chebyshev Rational Approximation Method (CRAM), the Quadrature-based Rational Approximation Method (QRAM) and the Laguerre Polynomial Approximation Method (LPAM). Three different modes are supported by DEPTH to execute decay, constant-flux and constant-power calculations. In addition to obtaining the instantaneous quantities of radioactivity, decay heat and reaction rates, DEPTH is able to calculate integral quantities with a time-integrated solver. Through calculations compared with ORIGEN-2, the validity of DEPTH in point-depletion calculations is proved. The accuracy and efficiency of the depletion algorithms are also discussed. In addition, an actual pin-cell burnup case is calculated to illustrate the performance of the DEPTH code in coupling with the RMC Monte Carlo code.
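The point-depletion problem solved by such codes is the Bateman system dN/dt = A N, whose formal solution is the matrix exponential N(t) = exp(At) N(0); CRAM and the other methods listed are sophisticated ways of evaluating that exponential for stiff systems. A naive Taylor-series sketch for a tiny two-nuclide decay chain, checkable against the analytic Bateman solution (illustration only; real depletion matrices are far too stiff for this):

```python
import math

def expm(A, t, terms=40):
    # Taylor-series matrix exponential of A*t -- fine for tiny, mild matrices,
    # hopeless for the stiff systems CRAM-style methods are built for
    n = len(A)
    M = [[A[i][j] * t for j in range(n)] for i in range(n)]
    out = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    term = [row[:] for row in out]
    for k in range(1, terms):
        # term <- term @ M / k accumulates (A*t)^k / k!
        term = [[sum(term[i][m] * M[m][j] for m in range(n)) / k
                 for j in range(n)] for i in range(n)]
        out = [[out[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return out

# two-nuclide chain A -> B: dNa/dt = -la*Na, dNb/dt = la*Na - lb*Nb
la, lb, t = 0.5, 0.1, 3.0
A = [[-la, 0.0], [la, -lb]]
E = expm(A, t)
Na, Nb = E[0][0], E[1][0]   # concentrations for N(0) = (1, 0)
```

The first column of exp(At) reproduces the classical Bateman solution, Na(t) = exp(-la*t) and Nb(t) = la/(lb-la)*(exp(-la*t) - exp(-lb*t)), which is the quantity a point-depletion solver hands back to the coupled transport code at each burnup step.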
A measurement-based generalized source model for Monte Carlo dose simulations of CT scans.
Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun
2017-03-07
The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on measurement data, without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at the x-ray target level, with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central-axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution, respectively, with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model were tested on GE LightSpeed and Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and the dose profiles along the lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameter) indicated better than 5% agreement between the Monte Carlo-simulated and ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients' CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in diagnostic and therapeutic radiology.
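The Levenberg-Marquardt fitting step can be sketched as follows. This is not the authors' actual parametrization: it assumes a hypothetical single effective attenuation coefficient and synthetic PDD data, purely to show the mechanics of deriving beam parameters from a measured depth-dose curve with `scipy.optimize.least_squares`.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative model: PDD(z) ~ d0 * exp(-mu * z), with surface dose d0 and an
# effective attenuation coefficient mu fitted to a "measured" curve.
depth = np.linspace(0.0, 15.0, 31)            # depths in water [cm]
measured = 100.0 * np.exp(-0.19 * depth)      # synthetic measurement data

def residuals(p):
    d0, mu = p
    return d0 * np.exp(-mu * depth) - measured

# method='lm' selects the Levenberg-Marquardt algorithm mentioned above.
fit = least_squares(residuals, x0=[90.0, 0.15], method='lm')
d0_fit, mu_fit = fit.x
```

In the paper this role is played by a full spectrum parametrization; the sketch only demonstrates that a handful of beam parameters can be recovered from a PDD curve by nonlinear least squares.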
Research on reactor physics analysis method based on Monte Carlo homogenization
International Nuclear Information System (INIS)
Ye Zhimin; Zhang Peng
2014-01-01
In order to meet the demands of the nuclear energy market in the future, many new concepts of nuclear energy systems have been put forward. The traditional deterministic neutronics analysis method has been challenged in two aspects: one is the ability to process generic geometry; the other is the multi-spectrum applicability of the multigroup cross section libraries. Due to its strong geometry modeling capability and its use of continuous-energy cross section libraries, the Monte Carlo method has been widely used in reactor physics calculations, and more and more research on the Monte Carlo method has been carried out. Neutronics-thermal hydraulics coupling analysis based on the Monte Carlo method has been realized. However, it still faces the problems of long computation time and slow convergence, which make it inapplicable to reactor core fuel management simulations. Drawing on the deterministic core analysis method, a new two-step core analysis scheme is proposed in this work. Firstly, Monte Carlo simulations are performed for the assembly, and the assembly-homogenized multigroup cross sections are tallied at the same time. Secondly, the core diffusion calculations are done with these multigroup cross sections. The new scheme achieves high efficiency while maintaining acceptable precision, so it can be used as an effective tool for the design and analysis of innovative nuclear energy systems. Numeric tests have been done in this work to verify the new scheme. (authors)
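The second (diffusion) step of the two-step scheme can be illustrated with a toy one-group bare-core calculation. The homogenized constants below are invented stand-ins for quantities that the first (Monte Carlo assembly) step would tally.

```python
import math

# One-group homogenized constants, as if tallied by an assembly MC run
# (all values invented for illustration).
nu_sigma_f = 0.0065   # nu*Sigma_f [1/cm]
sigma_a    = 0.0060   # absorption cross section [1/cm]
D          = 0.9      # diffusion coefficient [cm]
H          = 300.0    # bare slab core height [cm]

# Bare-core diffusion theory closes the core step:
#   k_eff = nu*Sigma_f / (Sigma_a + D * B^2)
B2 = (math.pi / H) ** 2               # geometric buckling of a bare slab
k_eff = nu_sigma_f / (sigma_a + D * B2)
```

Leakage (the D·B² term) pulls k_eff below the infinite-medium k∞ = νΣf/Σa, which is the basic consistency check for such a two-step calculation.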
Radial-based tail methods for Monte Carlo simulations of cylindrical interfaces
Goujon, Florent; Bêche, Bruno; Malfreyt, Patrice; Ghoufi, Aziz
2018-03-01
In this work, we implement for the first time the radial-based tail methods for Monte Carlo simulations of cylindrical interfaces. The efficiency of this method is then evaluated through the calculation of surface tension and coexisting properties. We show that the inclusion of tail corrections during the course of the Monte Carlo simulation impacts both the coexisting and the interfacial properties. We establish that the long-range corrections to the surface tension are of the same order of magnitude as those obtained for a planar interface. We show that the slab-based tail method does not amend the localization of the Gibbs equimolar dividing surface. Additionally, the surface tension exhibits a non-monotonic behavior as a function of the radius of the equimolar dividing surface.
Development of a space radiation Monte Carlo computer simulation based on the FLUKA and ROOT codes
Pinsky, L; Ferrari, A; Sala, P; Carminati, F; Brun, R
2001-01-01
This NASA funded project is proceeding to develop a Monte Carlo-based computer simulation of the radiation environment in space. With actual funding only initially in place at the end of May 2000, the study is still in the early stage of development. The general tasks have been identified and personnel have been selected. The code to be assembled will be based upon two major existing software packages. The radiation transport simulation will be accomplished by updating the FLUKA Monte Carlo program, and the user interface will employ the ROOT software being developed at CERN. The end-product will be a Monte Carlo-based code which will complement the existing analytic codes such as BRYNTRN/HZETRN presently used by NASA to evaluate the effects of radiation shielding in space. The planned code will possess the ability to evaluate the radiation environment for spacecraft and habitats in Earth orbit, in interplanetary space, on the lunar surface, or on a planetary surface such as Mars. Furthermore, it will be usef...
Present status of transport code development based on Monte Carlo method
International Nuclear Information System (INIS)
Nakagawa, Masayuki
1985-01-01
The present status of development of Monte Carlo codes is briefly reviewed. The main items are the following: application fields; methods used in Monte Carlo codes (geometry specification, nuclear data, estimators and variance reduction techniques) and unfinished work; typical Monte Carlo codes; and merits of continuous-energy Monte Carlo codes. (author)
Directory of Open Access Journals (Sweden)
He Deyu
2016-09-01
Assessing the risks of steering system faults in underwater vehicles is a human-machine-environment (HME) systematic safety problem that involves faults in the steering system itself, the driver's human reliability (HR) and various environmental conditions. This paper proposes a fault risk assessment method for an underwater vehicle steering system based on virtual prototyping and Monte Carlo simulation. A virtual steering system prototype was established and validated to rectify a lack of historical fault data. Fault injection and simulation were conducted to acquire fault simulation data. A Monte Carlo simulation was adopted that integrated randomness due to the human operator and the environment. Randomness and uncertainty of the human, machine and environment were integrated in the method to obtain a probabilistic risk indicator. To verify the proposed method, a case of stuck rudder fault (SRF) risk assessment was studied. This method may provide a novel solution for fault risk assessment of a vehicle or other general HME systems.
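A schematic re-creation of the HME sampling idea: each Monte Carlo trial draws a machine fault, a human-response failure and an adverse-environment event, and the risk indicator is the fraction of unmitigated faults. All distributions and probabilities below are invented, not the paper's validated prototype data.

```python
import random

random.seed(1)

P_FAULT = 0.02          # stuck-rudder fault per mission (invented)
P_ENV = 0.3             # adverse sea state (invented)

def one_trial():
    """One Monte Carlo mission: does an unmitigated steering fault occur?"""
    if random.random() >= P_FAULT:
        return False                          # no machine fault this mission
    p_human = random.gauss(0.10, 0.02)        # operator error prob. (random HR)
    if random.random() < p_human:
        return True                           # operator fails to respond
    # Operator responds, but a harsh environment may still defeat recovery.
    return random.random() < P_ENV and random.random() < 0.5

n = 100_000
risk = sum(one_trial() for _ in range(n)) / n   # probabilistic risk indicator
```

The point of the construction is that human, machine and environment enter as three separate random draws, so their uncertainties combine into a single probabilistic indicator.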
ERSN-OpenMC, a Java-based GUI for OpenMC Monte Carlo code
Directory of Open Access Journals (Sweden)
Jaafar EL Bakkali
2016-07-01
OpenMC is a new Monte Carlo particle transport simulation code focused on solving two types of neutronic problems: k-eigenvalue criticality fission-source problems and external fixed-source problems. OpenMC does not have a graphical user interface; one is provided by our Java-based application named ERSN-OpenMC. The main feature of this application is to offer users an easy-to-use and flexible graphical interface to build better and faster simulations with less effort and greater reliability. Additionally, this graphical tool was developed with several features, such as the ability to automate the building process of the OpenMC code and related libraries, and users are given the freedom to customize their installation of this Monte Carlo code. A full description of the ERSN-OpenMC application is presented in this paper.
Espel, Federico Puente
The main objective of this PhD research is to develop a high accuracy modeling tool using a Monte Carlo based coupled system. The presented research comprises the development of models to include the thermal-hydraulic feedback to the Monte Carlo method and speed-up mechanisms to accelerate the Monte Carlo criticality calculation. Presently, deterministic codes based on the diffusion approximation of the Boltzmann transport equation, coupled with channel-based (or sub-channel based) thermal-hydraulic codes, carry out the three-dimensional (3-D) reactor core calculations of the Light Water Reactors (LWRs). These deterministic codes utilize nuclear homogenized data (normally over large spatial zones, consisting of fuel assembly or parts of fuel assembly, and in the best case, over small spatial zones, consisting of pin cell), which is functionalized in terms of thermal-hydraulic feedback parameters (in the form of off-line pre-generated cross-section libraries). High accuracy modeling is required for advanced nuclear reactor core designs that present increased geometry complexity and material heterogeneity. Such high-fidelity methods take advantage of the recent progress in computation technology and coupled neutron transport solutions with thermal-hydraulic feedback models on pin or even on sub-pin level (in terms of spatial scale). The continuous energy Monte Carlo method is well suited for solving such core environments with the detailed representation of the complicated 3-D problem. The major advantages of the Monte Carlo method over the deterministic methods are the continuous energy treatment and the exact 3-D geometry modeling. However, the Monte Carlo method involves vast computational time. The interest in Monte Carlo methods has increased thanks to the improvements of the capabilities of high performance computers. Coupled Monte-Carlo calculations can serve as reference solutions for verifying high-fidelity coupled deterministic neutron transport methods
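The neutronics/thermal-hydraulics coupling described above can be sketched as a Picard (fixed-point) iteration between two stand-in solvers: power raises fuel temperature, and Doppler-like feedback lowers power. Both response functions and all coefficients are invented for illustration; the real system couples a Monte Carlo transport solve with a channel TH code.

```python
def neutronics(temp_fuel):
    # Stand-in for a Monte Carlo power solve with Doppler-like feedback:
    # higher fuel temperature -> lower power (coefficients invented).
    return 100.0 / (1.0 + 0.002 * (temp_fuel - 300.0))

def thermal_hydraulics(power):
    # Stand-in for a channel TH solve: fuel temperature rises with power.
    return 300.0 + 5.0 * power

power = 100.0
for _ in range(100):                     # Picard (fixed-point) iteration
    temp = thermal_hydraulics(power)
    new_power = neutronics(temp)
    if abs(new_power - power) < 1e-9:    # converged operating point
        break
    power = new_power
```

Because the feedback is negative, the iterates oscillate around and converge to the self-consistent operating point; in practice under-relaxation is often added when the feedback is strong.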
R and D of a pixel sensor based on 0.15μm fully depleted SOI technology
International Nuclear Information System (INIS)
Tsuboyama, Toru; Arai, Yasuo; Fukuda, Koichi; Hara, Kazuhiko; Hayashi, Hirokazu; Hazumi, Masashi; Ida, Jiro; Ikeda, Hirokazu; Ikegami, Yoichi; Ishino, Hirokazu; Kawasaki, Takeo; Kohriki, Takashi; Komatsubara, Hirotaka; Martin, Elena; Miyake, Hideki; Mochizuki, Ai; Ohno, Morifumi; Saegusa, Yuuji; Tajima, Hiro; Tajima, Osamu
2007-01-01
Development of a monolithic pixel detector based on SOI (silicon-on-insulator) technology was started at KEK in 2005. The substrate of the SOI wafer is used as a radiation sensor. At the end of 2005, we submitted several test-structure group (TEG) chips for the 150 nm fully depleted CMOS process. The TEG designs and preliminary results are presented.
Monte Carlo SURE-based parameter selection for parallel magnetic resonance imaging reconstruction.
Weller, Daniel S; Ramani, Sathish; Nielsen, Jon-Fredrik; Fessler, Jeffrey A
2014-05-01
Regularizing parallel magnetic resonance imaging (MRI) reconstruction significantly improves image quality but requires tuning parameter selection. We propose a Monte Carlo method for automatic parameter selection based on Stein's unbiased risk estimate that minimizes the multichannel k-space mean squared error (MSE). We automatically tune parameters for image reconstruction methods that preserve the undersampled acquired data, which cannot be accomplished using existing techniques. We derive a weighted MSE criterion appropriate for data-preserving regularized parallel imaging reconstruction and the corresponding weighted Stein's unbiased risk estimate. We describe a Monte Carlo approximation of the weighted Stein's unbiased risk estimate that uses two evaluations of the reconstruction method per candidate parameter value. We reconstruct images using the denoising sparse images from GRAPPA using the nullspace method (DESIGN) and L1 iterative self-consistent parallel imaging (L1-SPIRiT). We validate Monte Carlo Stein's unbiased risk estimate against the weighted MSE. We select the regularization parameter using these methods for various noise levels and undersampling factors and compare the results to those using MSE-optimal parameters. Our method selects nearly MSE-optimal regularization parameters for both DESIGN and L1-SPIRiT over a range of noise levels and undersampling factors. The proposed method automatically provides nearly MSE-optimal choices of regularization parameters for data-preserving nonlinear parallel MRI reconstruction methods. Copyright © 2013 Wiley Periodicals, Inc.
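The "two evaluations of the reconstruction method per candidate parameter value" refers to a Monte Carlo divergence estimate at the heart of SURE-type selection: for an operator f, the trace of its Jacobian is probed with one random vector b via div f(y) ≈ bᵀ(f(y + εb) − f(y))/ε. The sketch below uses a linear shrinkage operator as a hypothetical reconstruction method, since its exact divergence is known.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_divergence(f, y, eps=1e-3):
    """Monte Carlo estimate of the divergence (Jacobian trace) of f at y,
    using two evaluations of f and one Gaussian probe vector b."""
    b = rng.standard_normal(y.shape)
    return b @ (f(y + eps * b) - f(y)) / eps

# Example operator: linear shrinkage f(y) = a*y, whose exact divergence
# is a * len(y); any black-box reconstruction routine could be plugged in.
a = 0.7
f = lambda y: a * y
y = rng.standard_normal(1000)
div_est = mc_divergence(f, y)          # should be near a * 1000 = 700
```

In SURE-based tuning this divergence term replaces the unobservable bias term in the risk estimate, so the whole selection needs only black-box access to the reconstruction method.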
Tennant, Marc; Kruger, Estie
2013-02-01
This study developed a Monte Carlo simulation approach to examining the prevalence and incidence of dental decay, using Australian children as a test environment. Monte Carlo simulation has been used for half a century in particle physics (and elsewhere); put simply, it uses population-level outcome probabilities, seeded randomly, to drive the production of individual-level data. A total of five runs of the simulation model for all 275,000 12-year-olds in Australia were completed based on 2005-2006 data. Measured on average decayed/missing/filled teeth (DMFT) and the DMFT of the highest 10% of the sample (SiC10), the runs did not differ from each other by more than 2% and the outcome was within 5% of the reported sampled population data. The simulations rested on the population probabilities that are known to be strongly linked to dental decay, namely, socio-economic status and Indigenous heritage. Testing the simulated population found a DMFT of 2.3 for all cases with DMFT > 0 (n = 128,609) and a DMFT of 1.9 for Indigenous cases only (n = 13,749). In the simulated population the SiC25 was 3.3 (n = 68,750). Monte Carlo simulations were created in particle physics as a computational mathematical approach to unknown individual-level effects by resting a simulation on known population-level probabilities. In this study a Monte Carlo simulation approach to childhood dental decay was built, tested and validated. © 2013 FDI World Dental Federation.
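The seeding idea can be sketched as follows: each simulated child draws a decay count from a distribution whose mean depends on socio-economic status and Indigenous heritage, the two population-level drivers named above. The probabilities and mean DMFT values are invented for illustration, not the Australian 2005-2006 figures.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20_000                                     # simulated children

# Population-level drivers, seeded randomly per individual (invented rates).
ses_low    = rng.random(n) < 0.40              # low socio-economic status
indigenous = rng.random(n) < 0.05              # Indigenous heritage

# Individual mean DMFT as a function of the drivers (invented effect sizes),
# then a Poisson draw as a simple stand-in for an individual DMFT count.
lam = np.where(ses_low, 1.6, 0.8) + np.where(indigenous, 0.8, 0.0)
dmft = rng.poisson(lam)

mean_dmft = dmft.mean()                        # population-level summary
```

Re-running the script with different seeds mimics the "five runs" stability check in the abstract: summaries such as the mean DMFT vary only within sampling noise.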
The future of new calculation concepts in dosimetry based on the Monte Carlo Methods
International Nuclear Information System (INIS)
Makovicka, L.; Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J.; Vasseur, A.; Sauget, M.; Martin, E.; Gschwind, R.; Henriet, J.; Salomon, M.
2009-01-01
Monte Carlo codes, precise but slow, are very important tools in the vast majority of specialities connected to Radiation Physics, Radiation Protection and Dosimetry. A discussion about some other computing solutions is carried out: solutions based not only on the enhancement of computer power, or on the 'biasing' used for relative acceleration of these codes (in the case of photons), but on more efficient methods (A.N.N. - artificial neural networks, C.B.R. - case-based reasoning - and other computer science techniques) already and successfully used for a long time in other scientific or industrial applications, and not only in Radiation Protection or Medical Dosimetry. (authors)
Energy-based truncation of multi-determinant wavefunctions in quantum Monte Carlo.
Per, Manolo C; Cleland, Deidre M
2017-04-28
We present a method for truncating large multi-determinant expansions for use in diffusion Monte Carlo calculations. Current approaches use wavefunction-based criteria to perform the truncation. Our method is more intuitively based on the contribution each determinant makes to the total energy. We show that this approach gives consistent behaviour across systems with varying correlation character, which leads to effective error cancellation in energy differences. This is demonstrated through accurate calculations of the electron affinity of oxygen and the atomisation energy of the carbon dimer. The approach is simple and easy to implement, requiring only quantities already accessible in standard configuration interaction calculations.
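The ranking-and-cutting step can be sketched as follows, under stated assumptions: synthetic per-determinant energy contributions and an invented tolerance stand in for quantities that a configuration interaction calculation would supply.

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic per-determinant contributions to the total energy [Ha] (invented).
e_contrib = rng.standard_normal(10_000) * 1e-4

def truncate_by_energy(contrib, tol):
    """Keep the fewest determinants such that the summed |energy contribution|
    of the discarded ones stays below tol (energy-based, not coefficient-based)."""
    order = np.argsort(-np.abs(contrib))                   # largest first
    sorted_abs = np.abs(contrib[order])
    tail = np.cumsum(sorted_abs[::-1])[::-1]               # tail[k] = energy lost keeping k dets
    hits = np.nonzero(tail <= tol)[0]
    k = int(hits[0]) if hits.size else len(contrib)
    return order[:k]                                       # indices of kept determinants

kept = truncate_by_energy(e_contrib, tol=1e-3)
```

Cutting on a fixed discarded-energy budget, rather than on wavefunction coefficients, is what gives the consistent behaviour across systems (and hence error cancellation in energy differences) claimed above.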
Monte Carlo capabilities of the SCALE code system
International Nuclear Information System (INIS)
Rearden, B.T.; Petrie, L.M.; Peplow, D.E.; Bekar, K.B.; Wiarda, D.; Celik, C.; Perfetti, C.M.; Ibrahim, A.M.; Hart, S.W.D.; Dunn, M.E.; Marshall, W.J.
2015-01-01
Highlights: • Foundational Monte Carlo capabilities of SCALE are described. • Improvements in continuous-energy treatments are detailed. • New methods for problem-dependent temperature corrections are described. • New methods for sensitivity analysis and depletion are described. • Nuclear data, users interfaces, and quality assurance activities are summarized. - Abstract: SCALE is a widely used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a “plug-and-play” framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE’s graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. An overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2
Monte Carlo tests of the Rasch model based on scalability coefficients
DEFF Research Database (Denmark)
Christensen, Karl Bang; Kreiner, Svend
2010-01-01
For item responses fitting the Rasch model, the assumptions underlying the Mokken model of double monotonicity are met. This makes non-parametric item response theory a natural starting-point for Rasch item analysis. This paper studies scalability coefficients based on Loevinger's H coefficient that summarize the number of Guttman errors in the data matrix. These coefficients are shown to yield efficient tests of the Rasch model using p-values computed using Markov chain Monte Carlo methods. The power of the tests of unequal item discrimination, and their ability to distinguish between local dependence…
International Nuclear Information System (INIS)
Li, Jing; Ma, Yong; Zhou, Qunqun; Zhou, Bo; Wang, Hongyuan
2012-01-01
Channel capacity of ocean water is limited by propagation distance and optical properties. Previous studies of this problem are based on water-tank experiments with different amounts of Maalox antacid. However, the propagation distance is limited by the experimental set-up, and the optical properties differ from those of ocean water, so the experimental results are not accurate enough for the physical design of underwater wireless communication links. This letter develops a Monte Carlo model to study the channel capacity of underwater optical communications. Moreover, this model can flexibly configure various parameters of the transmitter, receiver and channel, and is suitable for the physical design of underwater optical communication links. (paper)
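A toy version of such a photon-transport channel model is sketched below: photons random-walk through water with absorption coefficient a and scattering coefficient b, and the received fraction estimates the channel loss over a link of length L. The water optical properties and the isotropic-scattering simplification are assumptions for illustration, not the letter's configuration.

```python
import random

random.seed(7)
a, b = 0.05, 0.02          # absorption / scattering coefficients [1/m] (invented)
c = a + b                  # beam attenuation coefficient
L = 20.0                   # link length [m]

def photon_received():
    """Track one photon along the link axis until it is received or lost."""
    z, mu = 0.0, 1.0                       # axial position, direction cosine
    while True:
        z += mu * random.expovariate(c)    # free path to the next interaction
        if z >= L:
            return True                    # photon crosses the receiver plane
        if z < 0 or random.random() < a / c:
            return False                   # lost backwards, or absorbed
        mu = random.uniform(-1.0, 1.0)     # isotropic scatter (simplification)

n = 50_000
eta = sum(photon_received() for _ in range(n)) / n   # received fraction
```

The received fraction feeds a link budget; channel capacity then follows from the receiver model (for example a Shannon-type bound on the detected signal-to-noise ratio), which is the part the letter parametrizes flexibly.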
Monte Carlo simulation based reliability evaluation in a multi-bilateral contracts market
International Nuclear Information System (INIS)
Goel, L.; Viswanath, P.A.; Wang, P.
2004-01-01
This paper presents a time-sequential Monte Carlo simulation technique to evaluate customer load point reliability in a multi-bilateral contracts market. The effects of bilateral transactions, reserve agreements, and the priority commitments of generating companies on customer load point reliability have been investigated. A generating company with bilateral contracts is modelled as an equivalent time-varying multi-state generation (ETMG) unit. A procedure to determine load point reliability based on the ETMG has been developed. The developed procedure is applied to a reliability test system to illustrate the technique. Representing each bilateral contract by an ETMG provides flexibility in determining the reliability at various customer load points. (authors)
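The time-sequential sampling idea can be sketched for a single two-state generating unit: it alternates exponentially distributed up and down times, and load-point unavailability is estimated as the fraction of simulated time spent down. The MTTF/MTTR values are invented for illustration.

```python
import random

random.seed(3)
mttf, mttr = 1000.0, 50.0          # mean time to failure / repair [h] (invented)
horizon = 1_000_000.0              # total simulated hours

t, down_time, up = 0.0, 0.0, True
while t < horizon:
    # Draw the duration of the current up or down state.
    dur = random.expovariate(1.0 / (mttf if up else mttr))
    dur = min(dur, horizon - t)    # clip the last state at the horizon
    if not up:
        down_time += dur
    t += dur
    up = not up                    # state transition: failure or repair

unavailability = down_time / horizon   # compare with mttr/(mttf+mttr) ~ 0.0476
```

The multi-bilateral-contracts model layers many such units, contract priorities and a chronological load curve on top of this loop; the estimator itself is the same time-fraction bookkeeping.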
Geant4 based Monte Carlo simulation for verifying the modified sum-peak method.
Aso, Tsukasa; Ogata, Yoshimune; Makino, Ryuta
2018-04-01
The modified sum-peak method can practically estimate radioactivity using solely the peak and sum-peak count rates. In order to efficiently verify the method under various experimental conditions, a Geant4-based Monte Carlo simulation of a high-purity germanium detector system was applied. The energy spectra in the detector were simulated for a 60Co point source at various source-to-detector distances. The calculated radioactivity shows good agreement with the number of decays in the simulation. Copyright © 2017 Elsevier Ltd. All rights reserved.
Considerable variation in NNT - A study based on Monte Carlo simulations
DEFF Research Database (Denmark)
Wisloff, T.; Aalen, O. O.; Sønbø Kristiansen, Ivar
2011-01-01
Objective: The aim of this analysis was to explore the variation in measures of effect, such as the number-needed-to-treat (NNT) and the relative risk (RR). Study Design and Setting: We performed Monte Carlo simulations of therapies using binomial distributions based on different true absolute… is used to express treatment effectiveness, it has a regular distribution around the expected value for various values of true ARR, n, and p(0). The equivalent distribution of NNT is by definition nonconnected at zero and is also irregular. The probability that the observed treatment effectiveness is zero…
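A re-creation of the simulation idea with invented inputs shows why the observed NNT distribution is irregular: event counts for control and treated groups are drawn from binomial distributions, the observed ARR is formed, and NNT = 1/ARR is undefined whenever the observed ARR happens to be exactly zero (and changes sign around it).

```python
import random

random.seed(11)
n, p0, arr_true = 200, 0.20, 0.05      # group size, control risk, true ARR (invented)

def binom(n, p):
    """Number of events among n patients, each with event probability p."""
    return sum(random.random() < p for _ in range(n))

nnts = []
for _ in range(5_000):                 # 5000 simulated trials
    arr_obs = binom(n, p0) / n - binom(n, p0 - arr_true) / n
    if arr_obs != 0:                   # NNT is undefined at ARR = 0
        nnts.append(1.0 / arr_obs)

# Fraction of trials where the observed effect points the "wrong" way,
# producing large-magnitude negative NNT values.
negative_share = sum(v < 0 for v in nnts) / len(nnts)
```

With these inputs the true NNT is 1/0.05 = 20, yet a non-trivial fraction of simulated trials yields a negative or undefined NNT, which is the irregularity the study documents.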
Potential of decaying wood to restore root-available base cations in depleted forest soils
Walter C. Shortle; Kevin T. Smith; Jody Jellison; Jonathan S. Schilling
2012-01-01
The depletion of root-available Ca in northern forest soils exposed to decades of increased acid deposition adversely affects forest health and productivity. Laboratory studies indicated the potential of wood-decay fungi to restore lost Ca. This study presents changes in concentration of Ca, Mg, and K in sapwood of red spruce (Picea rubens Sarg.),...
PeneloPET, a Monte Carlo PET simulation tool based on PENELOPE: features and validation
Energy Technology Data Exchange (ETDEWEB)
Espana, S; Herraiz, J L; Vicente, E; Udias, J M [Grupo de Fisica Nuclear, Departmento de Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid, Madrid (Spain); Vaquero, J J; Desco, M [Unidad de Medicina y CirugIa Experimental, Hospital General Universitario Gregorio Maranon, Madrid (Spain)], E-mail: jose@nuc2.fis.ucm.es
2009-03-21
Monte Carlo simulations play an important role in positron emission tomography (PET) imaging, as an essential tool for the research and development of new scanners and for advanced image reconstruction. PeneloPET, a PET-dedicated Monte Carlo tool, is presented and validated in this work. PeneloPET is based on PENELOPE, a Monte Carlo code for the simulation of the transport in matter of electrons, positrons and photons, with energies from a few hundred eV to 1 GeV. PENELOPE is robust, fast and very accurate, but it may be unfriendly to people not acquainted with the FORTRAN programming language. PeneloPET is an easy-to-use application which allows comprehensive simulations of PET systems within PENELOPE. Complex and realistic simulations can be set by modifying a few simple input text files. Different levels of output data are available for analysis, from sinogram and lines-of-response (LORs) histogramming to fully detailed list mode. These data can be further exploited with the preferred programming language, including ROOT. PeneloPET simulates PET systems based on crystal array blocks coupled to photodetectors and allows the user to define radioactive sources, detectors, shielding and other parts of the scanner. The acquisition chain is simulated in high level detail; for instance, the electronic processing can include pile-up rejection mechanisms and time stamping of events, if desired. This paper describes PeneloPET and shows the results of extensive validations and comparisons of simulations against real measurements from commercial acquisition systems. PeneloPET is being extensively employed to improve the image quality of commercial PET systems and for the development of new ones.
X-ray imaging plate performance investigation based on a Monte Carlo simulation tool
Energy Technology Data Exchange (ETDEWEB)
Yao, M., E-mail: philippe.duvauchelle@insa-lyon.fr [Laboratoire Vibration Acoustique (LVA), INSA de Lyon, 25 Avenue Jean Capelle, 69621 Villeurbanne Cedex (France); Duvauchelle, Ph.; Kaftandjian, V. [Laboratoire Vibration Acoustique (LVA), INSA de Lyon, 25 Avenue Jean Capelle, 69621 Villeurbanne Cedex (France); Peterzol-Parmentier, A. [AREVA NDE-Solutions, 4 Rue Thomas Dumorey, 71100 Chalon-sur-Saône (France); Schumm, A. [EDF R&D SINETICS, 1 Avenue du Général de Gaulle, 92141 Clamart Cedex (France)
2015-01-01
Computed radiography (CR) based on imaging plate (IP) technology represents a potential replacement technique for traditional film-based industrial radiography. For investigating the IP performance especially at high energies, a Monte Carlo simulation tool based on PENELOPE has been developed. This tool tracks separately direct and secondary radiations, and monitors the behavior of different particles. The simulation output provides 3D distribution of deposited energy in IP and evaluation of radiation spectrum propagation allowing us to visualize the behavior of different particles and the influence of different elements. A detailed analysis, on the spectral and spatial responses of IP at different energies up to MeV, has been performed. - Highlights: • A Monte Carlo tool for imaging plate (IP) performance investigation is presented. • The tool outputs 3D maps of energy deposition in IP due to different signals. • The tool also provides the transmitted spectra along the radiation propagation. • An industrial imaging case is simulated with the presented tool. • A detailed analysis, on the spectral and spatial responses of IP, is presented.
Monte Carlo based, patient-specific RapidArc QA using Linac log files.
Teke, Tony; Bergman, Alanah M; Kwa, William; Gill, Bradford; Duzenli, Cheryl; Popescu, I Antoniu
2010-01-01
A Monte Carlo (MC) based QA process to validate the dynamic beam delivery accuracy for Varian RapidArc (Varian Medical Systems, Palo Alto, CA) using Linac delivery log files (DynaLog) is presented. Using DynaLog file analysis and MC simulations, the goal of this article is to (a) confirm that adequate sampling is used in the RapidArc optimization algorithm (177 static gantry angles) and (b) to assess the physical machine performance [gantry angle and monitor unit (MU) delivery accuracy]. Ten clinically acceptable RapidArc treatment plans were generated for various tumor sites and delivered to a water-equivalent cylindrical phantom on the treatment unit. Three Monte Carlo simulations were performed to calculate dose to the CT phantom image set: (a) One using a series of static gantry angles defined by 177 control points with treatment planning system (TPS) MLC control files (planning files), (b) one using continuous gantry rotation with TPS generated MLC control files, and (c) one using continuous gantry rotation with actual Linac delivery log files. Monte Carlo simulated dose distributions are compared to both ionization chamber point measurements and with RapidArc TPS calculated doses. The 3D dose distributions were compared using a 3D gamma-factor analysis, employing a 3%/3 mm distance-to-agreement criterion. The dose difference between MC simulations, TPS, and ionization chamber point measurements was less than 2.1%. For all plans, the MC calculated 3D dose distributions agreed well with the TPS calculated doses (gamma-factor values were less than 1 for more than 95% of the points considered). Machine performance QA was supplemented with an extensive DynaLog file analysis. A DynaLog file analysis showed that leaf position errors were less than 1 mm for 94% of the time and there were no leaf errors greater than 2.5 mm. The mean standard deviation in MU and gantry angle were 0.052 MU and 0.355 degrees, respectively, for the ten cases analyzed. The accuracy and
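The gamma comparison used above can be illustrated with a simplified one-dimensional version of the 3%/3 mm criterion: for each reference point, gamma is the minimum over evaluated points of the combined dose-difference/distance-to-agreement metric, and a point "passes" when gamma ≤ 1. Clinical tools operate on 3-D dose grids; this is only a sketch with synthetic profiles.

```python
import numpy as np

def gamma_1d(x, ref, ev, dd=0.03, dta=3.0):
    """1-D gamma index: dd = relative dose criterion, dta = distance [mm]."""
    g = np.empty_like(ref)
    for i, (xi, di) in enumerate(zip(x, ref)):
        dose_term = (ev - di) / (dd * di)        # normalized dose difference
        dist_term = (x - xi) / dta               # normalized distance [mm]
        g[i] = np.sqrt(dose_term**2 + dist_term**2).min()
    return g

x = np.arange(0.0, 100.0, 1.0)                   # positions [mm]
ref = 100.0 * np.exp(-0.01 * x)                  # synthetic reference profile
ev = ref * 1.01                                  # evaluated: +1% dose everywhere
pass_rate = float((gamma_1d(x, ref, ev) <= 1.0).mean())
```

A uniform +1% dose error sits well inside the 3% dose criterion, so every point passes; pushing the error past 3% (with a profile gradient too shallow for the 3 mm term to rescue it) drives the pass rate down, which is how the criterion separates acceptable from unacceptable deliveries.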
Energy Technology Data Exchange (ETDEWEB)
Ureba, A.; Pereira-Barbeiro, A. R.; Jimenez-Ortega, E.; Baeza, J. A.; Salguero, F. J.; Leal, A.
2013-07-01
The use of Monte Carlo (MC) methods has been shown to improve the accuracy of dose calculations compared to the analytic algorithms installed in commercial treatment planning systems, especially in the non-standard situations typical of complex techniques such as IMRT and VMAT. Our treatment planning system, called CARMEN, is based on full MC simulation of both the beam transport in the accelerator head and in the patient, and is designed for efficient operation in terms of both the accuracy of the estimates and the required computation times. (Author)
International Nuclear Information System (INIS)
Boudreau, C; Heath, E; Seuntjens, J; Ballivy, O; Parker, W
2005-01-01
The PEREGRINE Monte Carlo dose-calculation system (North American Scientific, Cranberry Township, PA) is the first commercially available Monte Carlo dose-calculation code intended specifically for intensity modulated radiotherapy (IMRT) treatment planning and quality assurance. In order to assess the impact of Monte Carlo based dose calculations for IMRT clinical cases, dose distributions for 11 head and neck patients were evaluated using both PEREGRINE and the CORVUS (North American Scientific, Cranberry Township, PA) finite size pencil beam (FSPB) algorithm with equivalent path-length (EPL) inhomogeneity correction. For the target volumes, PEREGRINE calculations predict, on average, a less than 2% difference in the calculated mean and maximum doses to the gross tumour volume (GTV) and clinical target volume (CTV). An average 16% ± 4% and 12% ± 2% reduction in the volume covered by the prescription isodose line was observed for the GTV and CTV, respectively. Overall, no significant differences were noted in the doses to the mandible and spinal cord. For the parotid glands, PEREGRINE predicted a 6% ± 1% increase in the volume of tissue receiving a dose greater than 25 Gy and an increase of 4% ± 1% in the mean dose. Similar results were noted for the brainstem where PEREGRINE predicted a 6% ± 2% increase in the mean dose. The observed differences between the PEREGRINE and CORVUS calculated dose distributions are attributed to secondary electron fluence perturbations, which are not modelled by the EPL correction, issues of organ outlining, particularly in the vicinity of air cavities, and differences in dose reporting (dose to water versus dose to tissue type)
Monte Carlo simulation as a tool to predict blasting fragmentation based on the Kuz Ram model
Morin, Mario A.; Ficarazzo, Francesco
2006-04-01
Rock fragmentation is considered the most important aspect of production blasting because of its direct effects on the costs of drilling and blasting and on the economics of the subsequent operations of loading, hauling and crushing. Over the past three decades, significant progress has been made in the development of new technologies for blasting applications. These technologies include increasingly sophisticated computer models for blast design and blast performance prediction. Rock fragmentation depends on many variables such as rock mass properties, site geology, in situ fracturing and blasting parameters and as such has no complete theoretical solution for its prediction. However, empirical models for the estimation of the size distribution of rock fragments have been developed. In this study, a blast fragmentation Monte Carlo-based simulator, based on the Kuz-Ram fragmentation model, has been developed to predict the entire fragmentation size distribution, taking into account intact rock and joint properties, the type and properties of explosives and the drilling pattern. Results produced by this simulator were quite favorable when compared with real fragmentation data obtained from a quarry blast. It is anticipated that the use of Monte Carlo simulation will increase our understanding of the effects of rock mass and explosive properties on rock fragmentation by blasting, as well as increase our confidence in these empirical models. This understanding will translate into improvements in blasting operations, their corresponding costs and the overall economics of open pit mines and rock quarries.
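A minimal sketch of the approach this abstract describes: uncertain Kuz-Ram inputs are sampled and propagated through the Kuznetsov mean-size and Rosin-Rammler passing equations. All rock and blast parameter values below are invented placeholders, not values from the study.

```python
import math
import random

random.seed(1)

def kuz_ram_x50(A, Q, q, E=100.0):
    """Kuznetsov mean fragment size (cm): A = rock factor, Q = charge per
    blasthole (kg), q = powder factor (kg/m^3), E = relative weight strength
    of the explosive (ANFO = 100)."""
    return A * (1.0 / q) ** 0.8 * Q ** (1.0 / 6.0) * (115.0 / E) ** (19.0 / 20.0)

def rosin_rammler_passing(x, x50, n):
    """Fraction of fragments finer than size x (Rosin-Rammler distribution)."""
    xc = x50 / math.log(2.0) ** (1.0 / n)  # characteristic size from the median
    return 1.0 - math.exp(-((x / xc) ** n))

# Monte Carlo over uncertain inputs: rock factor and powder factor are drawn
# from assumed normal distributions to yield a distribution of mean sizes.
samples = [kuz_ram_x50(random.gauss(7.0, 0.7), 6.6, random.gauss(0.55, 0.05))
           for _ in range(5000)]
mean_x50 = sum(samples) / len(samples)
```

Repeating the sampling for each size class gives the full predicted size distribution rather than a single mean size.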
Comment on “A study on tetrahedron-based inhomogeneous Monte-Carlo optical simulation”
Fang, Qianqian
2011-01-01
The Monte Carlo (MC) method is a popular approach to modeling photon propagation inside general turbid media, such as human tissue. Progress had been made in the past year with the independent proposals of two mesh-based Monte Carlo methods employing ray-tracing techniques. Both methods have shown improvements in accuracy and efficiency in modeling complex domains. A recent paper by Shen and Wang [Biomed. Opt. Express 2, 44 (2011)] reported preliminary results towards the cross-validation of the two mesh-based MC algorithms and software implementations, showing a 3–6 fold speed difference between the two software packages. In this comment, we share our views on unbiased software comparisons and discuss additional issues such as the use of pre-computed data, interpolation strategies, impact of compiler settings, use of Russian roulette, memory cost and potential pitfalls in measuring algorithm performance. Despite key differences between the two algorithms in handling of non-tetrahedral meshes, we found that they share similar structure and performance for tetrahedral meshes. A significant fraction of the observed speed differences in the mentioned article was the result of inconsistent use of compilers and libraries. PMID:21559136
International Nuclear Information System (INIS)
Wong, K. M.; Chim, W. K.
2007-01-01
An approach for fast and accurate carrier profiling using deep-depletion analytical modeling of scanning capacitance microscopy (SCM) measurements is shown for an ultrashallow p-n junction with a junction depth of less than 30 nm and a profile steepness of about 3 nm per decade change in carrier concentration. In addition, the analytical model is also used to extract the SCM dopant profiles of three other p-n junction samples with different junction depths and profile steepnesses. The deep-depletion effect arises from rapid changes in the bias applied between the sample and probe tip during SCM measurements. The extracted carrier profile from the model agrees reasonably well with the more accurate carrier profile from inverse modeling and the dopant profile from secondary ion mass spectroscopy measurements
Sensitivity theory for reactor burnup analysis based on depletion perturbation theory
International Nuclear Information System (INIS)
Yang, Wonsik.
1989-01-01
The large computational effort involved in the design and analysis of advanced reactor configurations motivated the development of Depletion Perturbation Theory (DPT) for general fuel cycle analysis. The work here focused on two important advances in the current methods. First, the adjoint equations were developed for using the efficient linear flux approximation to decouple the neutron/nuclide field equations. And second, DPT was extended to the constrained equilibrium cycle which is important for the consistent comparison and evaluation of alternative reactor designs. Practical strategies were formulated for solving the resulting adjoint equations and a computer code was developed for practical applications. In all cases analyzed, the sensitivity coefficients generated by DPT were in excellent agreement with the results of exact calculations. The work here indicates that for a given core response, the sensitivity coefficients to all input parameters can be computed by DPT with a computational effort similar to a single forward depletion calculation
Hedde, Per Niklas; Dörlich, René M; Blomley, Rosmarie; Gradl, Dietmar; Oppong, Emmanuel; Cato, Andrew C B; Nienhaus, G Ulrich
2013-01-01
Raster image correlation spectroscopy is a powerful tool to study fast molecular dynamics such as protein diffusion or receptor-ligand interactions inside living cells and tissues. By analysing spatio-temporal correlations of fluorescence intensity fluctuations from raster-scanned microscopy images, molecular motions can be revealed in a spatially resolved manner. Because of the diffraction-limited optical resolution, however, conventional raster image correlation spectroscopy can only distinguish larger regions of interest and requires low fluorophore concentrations in the nanomolar range. Here, to overcome these limitations, we combine raster image correlation spectroscopy with stimulated emission depletion microscopy. With imaging experiments on model membranes and live cells, we show that stimulated emission depletion-raster image correlation spectroscopy offers an enhanced multiplexing capability because of the enhanced spatial resolution as well as access to 10-100 times higher fluorophore concentrations.
CAD-based Monte Carlo automatic modeling method based on primitive solid
International Nuclear Information System (INIS)
Wang, Dong; Song, Jing; Yu, Shengpeng; Long, Pengcheng; Wang, Yongliang
2016-01-01
Highlights: • We develop a method that bi-converts between CAD models and primitive solids. • The method improves on an earlier conversion method between CAD models and half spaces. • The method was tested with the ITER model, validating its correctness and efficiency. • The method is integrated in SuperMC and can build models for SuperMC and Geant4. - Abstract: The Monte Carlo method has been widely used in nuclear design and analysis, where geometries are described with primitive solids. However, it is time consuming and error prone to describe a primitive solid geometry, especially for a complicated model. To reuse the abundant existing CAD models and to model conveniently with CAD tools, an automatic method for accurate, prompt conversion between CAD models and primitive solids is needed. An automatic modeling method for Monte Carlo geometry described by primitive solids was developed which can bi-convert between a CAD model and a Monte Carlo geometry represented by primitive solids. When converting from a CAD model to a primitive solid model, the CAD model is decomposed into several convex solid sets, and the corresponding primitive solids are generated and exported. When converting from a primitive solid model to a CAD model, the basic primitive solids are created and the related operations are performed. The method was integrated in SuperMC and benchmarked with the ITER benchmark model. The correctness and efficiency of the method were demonstrated.
Pattern-oriented Agent-based Monte Carlo simulation of Cellular Redox Environment
DEFF Research Database (Denmark)
Tang, Jiaowei; Holcombe, Mike; Boonen, Harrie C.M.
… could be very important factors. In this project, agent-based Monte Carlo modelling [6] is used to study the dynamic relationship between extracellular and intracellular redox and the complex networks of redox reactions. In the model, pivotal redox-related reactions are included, and the reactants … /CYSS) and mitochondrial redox couples. Evidence suggests that both intracellular and extracellular redox can affect the overall cell redox state. How redox is communicated between the extracellular and intracellular environments is still a matter of debate. Some researchers conclude, based on experimental data, … Because the complex networks and dynamics of redox are still not completely understood, results of existing experiments will be used to validate the model according to the ideas of pattern-oriented agent-based modelling [8]. The simulation of this model is computationally intensive, thus an application 'FLAME …
CAD-Based Monte Carlo Neutron Transport Analysis for KSTAR
Seo, Geon Ho; Choi, Sung Hoon; Shim, Hyung Jin
2017-09-01
The Monte Carlo (MC) neutron transport analysis for a complex nuclear system such as a fusion facility may require accurate modeling of its complicated geometry. In order to take advantage of the modeling capability of the computer aided design (CAD) system for the MC neutronics analysis, the Seoul National University MC code, McCARD, has been augmented with a CAD-based geometry processing module by embedding the OpenCASCADE CAD kernel. In the developed module, the CAD geometry data are internally converted to the constructive solid geometry model with the help of the CAD kernel. An efficient cell-searching algorithm is devised for the void space treatment. The performance of the CAD-based McCARD calculations is tested for the Korea Superconducting Tokamak Advanced Research device by comparing with results of conventional MC calculations using a text-based geometry input.
Acceleration of Monte Carlo-based scatter compensation for cardiac SPECT
Energy Technology Data Exchange (ETDEWEB)
Sohlberg, A; Watabe, H; Iida, H [National Cardiovascular Center Research Institute, 5-7-1 Fujishiro-dai, Suita City, 565-8565 Osaka (Japan)], E-mail: antti.sohlberg@hermesmedical.com
2008-07-21
Single photon emission computed tomography (SPECT) images are degraded by photon scatter, making scatter compensation essential for accurate reconstruction. Reconstruction-based scatter compensation with Monte Carlo (MC) modelling of scatter shows promise for accurate scatter correction, but it is normally hampered by long computation times. The aim of this work was to accelerate the MC-based scatter compensation using coarse grid and intermittent scatter modelling. The acceleration methods were compared to the un-accelerated implementation using MC-simulated projection data of the mathematical cardiac torso (MCAT) phantom modelling 99mTc uptake and clinical myocardial perfusion studies. The results showed that, when combined, the acceleration methods reduced the reconstruction time for 10 ordered subset expectation maximization (OS-EM) iterations from 56 to 11 min without a significant reduction in image quality, indicating that coarse grid and intermittent scatter modelling are suitable for MC-based scatter compensation in cardiac SPECT. (note)
An Approach in Radiation Therapy Treatment Planning: A Fast, GPU-Based Monte Carlo Method.
Karbalaee, Mojtaba; Shahbazi-Gahrouei, Daryoush; Tavakoli, Mohammad B
2017-01-01
An accurate and fast radiation dose calculation is essential for successful radiotherapy. The aim of this study was to implement a new graphics processing unit (GPU) based radiation therapy treatment planning system for accurate and fast dose calculation in radiotherapy centers. A program was written to run in parallel on a GPU, and the code was validated against EGSnrc/DOSXYZnrc. Moreover, a semi-automatic, rotary, asymmetric phantom was designed and produced using bone, lung, and soft-tissue equivalent materials. All measurements were performed using a Mapcheck dosimeter. The accuracy of the code was validated using the experimental data obtained from the anthropomorphic phantom as the gold standard. The results agreed with those of DOSXYZnrc in the virtual phantom for most of the voxels (>95%), suggesting that the GPU-based Monte Carlo method of dose calculation may be useful in routine radiation therapy centers as the core component of a treatment planning verification system.
GPU-based high performance Monte Carlo simulation in neutron transport
International Nuclear Information System (INIS)
Heimlich, Adino; Mol, Antonio C.A.; Pereira, Claudio M.N.A.
2009-01-01
Graphics Processing Units (GPU) are high performance co-processors intended, originally, to improve the use and quality of computer graphics applications. Since researchers and practitioners realized the potential of using GPU for general purpose, their application has been extended to other fields out of computer graphics scope. The main objective of this work is to evaluate the impact of using GPU in neutron transport simulation by Monte Carlo method. To accomplish that, GPU- and CPU-based (single and multicore) approaches were developed and applied to a simple, but time-consuming problem. Comparisons demonstrated that the GPU-based approach is about 15 times faster than a parallel 8-core CPU-based approach also developed in this work. (author)
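For orientation, a minimal CPU-only sketch of the kind of simple but time-consuming problem such GPU/CPU comparisons use: single-group neutron transport through a 1-D slab with isotropic scattering. The cross-section, absorption probability, and slab thickness below are arbitrary assumptions, not the values from the study.

```python
import math
import random

random.seed(0)

SIGMA_T = 1.0     # total macroscopic cross-section (1/cm), assumed
P_ABS = 0.3       # absorption probability per collision, assumed
THICKNESS = 5.0   # slab thickness (cm), i.e. 5 mean free paths

def transport_one():
    """Track one neutron through the slab until it leaks or is absorbed."""
    x, mu = 0.0, 1.0                  # position and direction cosine
    while True:
        # Sample the free-flight distance from an exponential distribution
        x += mu * (-math.log(random.random()) / SIGMA_T)
        if x < 0.0:
            return "reflected"
        if x > THICKNESS:
            return "transmitted"
        if random.random() < P_ABS:
            return "absorbed"
        mu = 2.0 * random.random() - 1.0   # isotropic scattering

N = 20_000
counts = {"reflected": 0, "transmitted": 0, "absorbed": 0}
for _ in range(N):
    counts[transport_one()] += 1
transmission = counts["transmitted"] / N
```

Each history is independent, which is exactly why the problem maps well onto thousands of GPU threads.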
International Nuclear Information System (INIS)
Zhu Feng; Yan Jiawei; Lu Miao; Zhou Yongliang; Yang Yang; Mao Bingwei
2011-01-01
Highlights: → A novel strategy combining interferent depletion and redox cycling is proposed for plane-recessed microdisk array electrodes. → The strategy removes the restriction of selectively detecting a species that exhibits a reversible reaction in a mixture with one that exhibits an irreversible reaction. → The electrodes enhance the current signal by redox cycling. → The electrodes work regardless of the reversibility of the interfering species. - Abstract: The fabrication, characterization and application of plane-recessed microdisk array electrodes for selective detection are demonstrated. The electrodes, fabricated by lithographic microfabrication technology, are composed of a planar film electrode and a 32 x 32 recessed microdisk array electrode. Different from the commonly used redox cycling operating mode for array configurations such as interdigitated array electrodes, a novel strategy based on a combination of interferent depletion and redox cycling is proposed for electrodes with an appropriate configuration. The planar film electrode (the plane electrode) is used to deplete the interferent in the diffusion layer. The recessed microdisk array electrode (the microdisk array), located within the diffusion layer of the plane electrode, detects the target analyte in the interferent-depleted diffusion layer. In addition, the microdisk array overcomes the disadvantage of the low current signal of a single microelectrode. Moreover, the current signal of a target analyte that undergoes reversible electron transfer can be enhanced by redox cycling between the plane electrode and the microdisk array. Based on this working principle, the plane-recessed microdisk array electrodes remove the restriction of selectively detecting a species that exhibits a reversible reaction in a mixture with one that exhibits an irreversible reaction, which is a limitation of the single redox cycling operating mode.
Monte-Carlo-based uncertainty propagation with hierarchical models—a case study in dynamic torque
Klaus, Leonard; Eichstädt, Sascha
2018-04-01
For a dynamic calibration, a torque transducer is described by a mechanical model, and the corresponding model parameters are to be identified from measurement data. A measuring device for the primary calibration of dynamic torque, and a corresponding model-based calibration approach, have recently been developed at PTB. The complete mechanical model of the calibration set-up is very complex, and involves several calibration steps—making a straightforward implementation of a Monte Carlo uncertainty evaluation tedious. With this in mind, we here propose to separate the complete model into sub-models, with each sub-model being treated with individual experiments and analysis. The uncertainty evaluation for the overall model then has to combine the information from the sub-models in line with Supplement 2 of the Guide to the Expression of Uncertainty in Measurement. In this contribution, we demonstrate how to carry this out using the Monte Carlo method. The uncertainty evaluation involves various input quantities of different origin and the solution of a numerical optimisation problem.
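The sub-model idea can be sketched as follows: each sub-model contributes its own Monte Carlo sample of a parameter, and the overall model combines independent draws from those samples. The torsional-oscillator relation f = sqrt(k/J)/(2π) and all numerical values are illustrative assumptions, not PTB's actual calibration model.

```python
import math
import random
import statistics

random.seed(42)
M = 100_000

# Sub-model A: torsional stiffness k from its own experiment and analysis
k_samples = [random.gauss(1.8e3, 25.0) for _ in range(M)]      # N*m/rad

# Sub-model B: mass moment of inertia J from a separate experiment
j_samples = [random.gauss(2.0e-3, 4.0e-5) for _ in range(M)]   # kg*m^2

# Overall model: resonance frequency f = sqrt(k/J)/(2*pi); uncertainty is
# propagated by pairing independent draws from the two sub-model samples,
# in the spirit of a GUM Supplement 2 Monte Carlo evaluation.
f_samples = [math.sqrt(k / j) / (2.0 * math.pi)
             for k, j in zip(k_samples, j_samples)]

f_mean = statistics.fmean(f_samples)
f_std = statistics.stdev(f_samples)
```

If the sub-model outputs were correlated, the draws would have to be taken jointly rather than paired independently.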
Foucart, Francois
2018-04-01
General relativistic radiation hydrodynamic simulations are necessary to accurately model a number of astrophysical systems involving black holes and neutron stars. Photon transport plays a crucial role in radiatively dominated accretion discs, while neutrino transport is critical to core-collapse supernovae and to the modelling of electromagnetic transients and nucleosynthesis in neutron star mergers. However, evolving the full Boltzmann equations of radiative transport is extremely expensive. Here, we describe the implementation in the general relativistic SPEC code of a cheaper radiation hydrodynamic method that theoretically converges to a solution of Boltzmann's equation in the limit of infinite numerical resources. The algorithm is based on a grey two-moment scheme, in which we evolve the energy density and momentum density of the radiation. Two-moment schemes require a closure that fills in missing information about the energy spectrum and higher order moments of the radiation. Instead of the approximate analytical closure currently used in core-collapse and merger simulations, we complement the two-moment scheme with a low-accuracy Monte Carlo evolution. The Monte Carlo results can provide any or all of the missing information in the evolution of the moments, as desired by the user. As a first test of our methods, we study a set of idealized problems demonstrating that our algorithm performs significantly better than existing analytical closures. We also discuss the current limitations of our method, in particular open questions regarding the stability of the fully coupled scheme.
A method based on Monte Carlo simulation for the determination of the G(E) function.
Chen, Wei; Feng, Tiancheng; Liu, Jun; Su, Chuanying; Tian, Yanjie
2015-02-01
The G(E) function method is a spectrometric method for exposure dose estimation; this paper describes a method based on the Monte Carlo method to determine the G(E) function of a 4″ × 4″ × 16″ NaI(Tl) detector. Simulated spectra of various monoenergetic gamma rays in the region of 40–3200 keV, and the corresponding energy deposited in an air ball in the energy region of the full-energy peak, were obtained using the Monte Carlo N-Particle Transport Code. The absorbed dose rate in air was obtained from the deposited energy and divided by the counts of the corresponding full-energy peak to get the G(E) function value at energy E in the spectra. The curve-fitting software 1stOpt was used to determine the coefficients of the G(E) function. Experimental results show that the dose rates calculated using the G(E) function determined by the authors' method accord well with the values obtained by an ionisation chamber, with a maximum deviation of 6.31%.
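A toy version of the G(E) bookkeeping the abstract describes: for each calibration energy, Monte Carlo yields the dose per photon and the peak counts per photon, and their ratio is G(E); a measured spectrum is then converted to dose rate by weighting peak count rates with G(E). All calibration numbers below are invented, and the real method fits G(E) as a continuous function over 40–3200 keV rather than tabulating it at a few peaks.

```python
# Hypothetical calibration table: energy (keV) -> (absorbed dose in air per
# emitted photon, full-energy-peak counts per emitted photon), both assumed.
calib = {
    662:  (4.1e-12, 0.21),
    1173: (6.9e-12, 0.14),
    1332: (7.6e-12, 0.12),
}

# G(E) is the dose-to-counts conversion factor at each energy
G = {E: dose / counts for E, (dose, counts) in calib.items()}

def dose_rate(spectrum):
    """Estimate dose rate from a peak count-rate spectrum {energy: cps}."""
    return sum(G[E] * c for E, c in spectrum.items())

# A measured spectrum with Cs-137 and Co-60 peaks (counts per second)
d = dose_rate({662: 120.0, 1173: 35.0, 1332: 33.0})
```

In practice G(E) is applied channel by channel across the whole spectrum, which is what makes the method insensitive to the source composition.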
CAD-based Monte Carlo Program for Integrated Simulation of Nuclear System SuperMC
Wu, Yican; Song, Jing; Zheng, Huaqing; Sun, Guangyao; Hao, Lijuan; Long, Pengcheng; Hu, Liqin
2014-06-01
The Monte Carlo (MC) method has distinct advantages for simulating complicated nuclear systems and is envisioned as a routine method for nuclear design and analysis in the future. High-fidelity MC simulation coupled with multi-physics simulation has a significant impact on the safety, economy and sustainability of nuclear systems. However, great challenges confront current MC methods and codes and prevent their application in real engineering projects. SuperMC is a CAD-based Monte Carlo program for integrated simulation of nuclear systems developed by the FDS Team, China, making use of hybrid MC-deterministic methods and advanced computer technologies. The design aims, architecture and main methodology of SuperMC are presented in this paper. SuperMC2.1, the latest version for neutron, photon and coupled neutron-photon transport calculation, has been developed and validated using a series of benchmark cases such as the fusion reactor ITER model and the fast reactor BN-600 model. SuperMC is still evolving toward a general and routine tool for nuclear systems.
Comparison of nonstationary generalized logistic models based on Monte Carlo simulation
Directory of Open Access Journals (Sweden)
S. Kim
2015-06-01
Recently, evidence of climate change has been observed in hydrologic data such as rainfall and flow data. The time-dependent characteristics of statistics in hydrologic data are widely referred to as nonstationarity. Accordingly, various nonstationary GEV and generalized Pareto models have been suggested for frequency analysis of nonstationary annual maximum and peak-over-threshold (POT) data, respectively. However, alternative models are required for nonstationary frequency analysis in order to capture the complex characteristics of nonstationary data under climate change. This study proposes a nonstationary generalized logistic model including time-dependent parameters. The parameters of the proposed model are estimated using the method of maximum likelihood based on the Newton-Raphson method. In addition, the proposed models are compared by Monte Carlo simulation to investigate their characteristics and applicability.
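A stand-in sketch of fitting a time-dependent location parameter by maximizing the likelihood. For brevity it uses the plain logistic distribution (the zero-shape limit of the generalized logistic) and a crude coordinate search instead of the Newton-Raphson iteration; the data are synthetic, with Gaussian noise as a stand-in.

```python
import math
import random

random.seed(7)

# Synthetic annual-maximum-like series with a linear trend in location
T = list(range(60))
data = [10.0 + 0.08 * t + random.gauss(0.0, 1.3) for t in T]

def nll(params):
    """Negative log-likelihood of a logistic model with mu(t) = m0 + m1*t
    (the zero-shape limit of the generalized logistic distribution)."""
    m0, m1, s = params
    if s <= 0.0:
        return float("inf")
    total = 0.0
    for t, x in zip(T, data):
        z = (x - (m0 + m1 * t)) / s
        total += z + 2.0 * math.log1p(math.exp(-z)) + math.log(s)
    return total

# Crude coordinate-wise descent stands in for Newton-Raphson here
params = [data[0], 0.0, 1.0]       # start: no trend, unit scale
steps = [0.5, 0.01, 0.2]
for _ in range(200):
    for i in range(3):
        for delta in (steps[i], -steps[i]):
            trial = params.copy()
            trial[i] += delta
            if nll(trial) < nll(params):
                params = trial
        steps[i] *= 0.95           # shrink the search step each sweep
```

Comparing `nll` of the stationary fit (m1 fixed at 0) with the nonstationary fit is the basis of the model comparison the abstract mentions.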
Monte Carlo-based dose calculation engine for minibeam radiation therapy.
Martínez-Rovira, I; Sempau, J; Prezado, Y
2014-02-01
Minibeam radiation therapy (MBRT) is an innovative radiotherapy approach based on the well-established tissue-sparing effect of arrays of quasi-parallel micrometre-sized beams. In order to guide the preclinical trials in progress at the European Synchrotron Radiation Facility (ESRF), a Monte Carlo-based dose calculation engine has been developed and successfully benchmarked against experimental data in anthropomorphic phantoms. Additionally, a realistic example of a treatment plan is presented. Despite the micron scale of the voxels used to tally dose distributions in MBRT, the combination of several efficiency optimisation methods made it possible to achieve computation times acceptable for clinical settings (approximately 2 h). The calculation engine can be easily adapted with little or no programming effort to other synchrotron sources or to dose calculations in the presence of contrast agents.
Modelling of scintillator based flat-panel detectors with Monte-Carlo simulations
International Nuclear Information System (INIS)
Reims, N; Sukowski, F; Uhlmann, N
2011-01-01
Scintillator based flat panel detectors are state of the art in the field of industrial X-ray imaging applications. Choosing the proper system and setup parameters for the vast range of different applications can be a time consuming task, especially when developing new detector systems. Since the system behaviour cannot always be foreseen easily, Monte-Carlo (MC) simulations are key to gaining further knowledge of system components and their behaviour under different imaging conditions. In this work we used two Monte-Carlo based models to examine an indirect converting flat panel detector, specifically the Hamamatsu C9312SK. We focused on the signal generation in the scintillation layer and its influence on the spatial resolution of the whole system. The models differ significantly in their level of complexity. The first model gives a global description of the detector based on different parameters characterizing the spatial resolution. With relatively small effort a simulation model can be developed which matches the real detector in terms of signal transfer. The second model allows a more detailed insight into the system. It is based on the well-established cascade theory, i.e. describing the detector as a cascade of elemental gain and scattering stages, which represent the built-in components and their signal transfer behaviour. In comparison to the first model, the influence of single components, especially the important light spread behaviour in the scintillator, can be analysed in a more differentiated way. Although the implementation of the second model is more time consuming, both models have in common that a relatively small number of system manufacturer parameters is needed. The results of both models were in good agreement with the measured parameters of the real system.
GGEMS-Brachy: GPU GEant4-based Monte Carlo simulation for brachytherapy applications
International Nuclear Information System (INIS)
Lemaréchal, Yannick; Bert, Julien; Schick, Ulrike; Pradier, Olivier; Garcia, Marie-Paule; Boussion, Nicolas; Visvikis, Dimitris; Falconnet, Claire; Després, Philippe; Valeri, Antoine
2015-01-01
In brachytherapy, plans are routinely calculated using the AAPM TG43 formalism which considers the patient as a simple water object. An accurate modeling of the physical processes considering patient heterogeneity using Monte Carlo simulation (MCS) methods is currently too time-consuming and computationally demanding to be routinely used. In this work we implemented and evaluated an accurate and fast MCS on Graphics Processing Units (GPU) for brachytherapy low dose rate (LDR) applications. A previously proposed Geant4 based MCS framework implemented on GPU (GGEMS) was extended to include a hybrid GPU navigator, allowing navigation within voxelized patient specific images and analytically modeled 125I seeds used in LDR brachytherapy. In addition, dose scoring based on a track length estimator including uncertainty calculations was incorporated. The implemented GGEMS-brachy platform was validated using a comparison with Geant4 simulations and reference datasets. Finally, a comparative dosimetry study based on the current clinical standard (TG43) and the proposed platform was performed on twelve prostate cancer patients undergoing LDR brachytherapy. Considering patient 3D CT volumes of 400 × 250 × 65 voxels and an average of 58 implanted seeds, the mean patient dosimetry study run time for a 2% dose uncertainty was 9.35 s (≈500 ms per 10⁶ simulated particles) and 2.5 s when using one and four GPUs, respectively. The performance of the proposed GGEMS-brachy platform allows envisaging the use of Monte Carlo simulation based dosimetry studies in brachytherapy compatible with clinical practice. Although the proposed platform was evaluated for prostate cancer, it is equally applicable to other LDR brachytherapy clinical applications. Future extensions will allow its application in high dose rate brachytherapy applications. (paper)
International Nuclear Information System (INIS)
Shen, W.
2004-01-01
A micro-depletion model has been developed and implemented in the *SIMULATE module of RFSP to use WIMS-calculated lattice properties in history-based local-parameter calculations. A comparison between the micro-depletion and WIMS results for each type of lattice cross section and for the infinite-lattice multiplication factor was also performed for a fuel similar to that which may be used in the ACR fuel. The comparison shows that the micro-depletion calculation agrees well with the WIMS-IST calculation. The relative differences in k-infinity are within ±0.5 mk and ±0.9 mk for perturbation and depletion calculations, respectively. The micro-depletion model gives the *SIMULATE module of RFSP the capability to use WIMS-calculated lattice properties in history-based local-parameter calculations without resorting to the Simple-Cell-Methodology (SCM) surrogate for CANDU core-tracking simulations. (author)
Nishidate, Izumi; Wiswadarma, Aditya; Hase, Yota; Tanaka, Noriyuki; Maeda, Takaaki; Niizeki, Kyuichi; Aizu, Yoshihisa
2011-08-01
In order to visualize melanin and blood concentrations and oxygen saturation in human skin tissue, a simple imaging technique based on multispectral diffuse reflectance images acquired at six wavelengths (500, 520, 540, 560, 580 and 600 nm) was developed. The technique utilizes multiple regression analysis aided by Monte Carlo simulation for diffuse reflectance spectra. Using the absorbance spectrum as a response variable and the extinction coefficients of melanin, oxygenated hemoglobin, and deoxygenated hemoglobin as predictor variables, multiple regression analysis provides regression coefficients. Concentrations of melanin and total blood are then determined from the regression coefficients using conversion vectors that are deduced numerically in advance, while oxygen saturation is obtained directly from the regression coefficients. Experiments with a tissue-like agar gel phantom validated the method. In vivo experiments with human skin of the human hand during upper limb occlusion and of the inner forearm exposed to UV irradiation demonstrated the ability of the method to evaluate physiological reactions of human skin tissue.
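The regression step can be sketched as follows: the absorbance at each wavelength is modeled as a linear combination of chromophore extinction coefficients plus a baseline, and least squares recovers the coefficients. The extinction-coefficient values and the spectrum below are made up for illustration; the real method additionally uses Monte Carlo-derived conversion vectors to turn coefficients into concentrations.

```python
# Hypothetical extinction coefficients at the six wavelengths (arbitrary
# units); rows: melanin, oxygenated Hb, deoxygenated Hb.
WL = [500, 520, 540, 560, 580, 600]
EPS = {
    "mel": [2.10, 1.90, 1.70, 1.50, 1.35, 1.20],
    "hbo": [0.55, 0.90, 1.40, 0.80, 1.30, 0.25],
    "hb":  [0.90, 1.10, 1.20, 1.35, 1.00, 0.40],
}

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def regress(absorbance):
    """Multiple regression of an absorbance spectrum on the extinction
    coefficients plus a constant baseline, via the normal equations."""
    X = [[EPS["mel"][i], EPS["hbo"][i], EPS["hb"][i], 1.0] for i in range(6)]
    XtX = [[sum(X[i][p] * X[i][q] for i in range(6)) for q in range(4)]
           for p in range(4)]
    Xty = [sum(X[i][p] * absorbance[i] for i in range(6)) for p in range(4)]
    return solve(XtX, Xty)

# Synthesize a spectrum from known coefficients, then recover them
true = [0.8, 0.3, 0.2, 0.05]
cols = [EPS["mel"], EPS["hbo"], EPS["hb"], [1.0] * 6]
spec = [sum(t * col[i] for t, col in zip(true, cols)) for i in range(6)]
coef = regress(spec)
sto2 = coef[1] / (coef[1] + coef[2])  # oxygen saturation from coefficients
```

Oxygen saturation falls out directly as the ratio of the oxygenated-hemoglobin coefficient to the total hemoglobin coefficient, exactly as the abstract states.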
Electric conduction in semiconductors: a pedagogical model based on the Monte Carlo method
Energy Technology Data Exchange (ETDEWEB)
Capizzo, M C; Sperandeo-Mineo, R M; Zarcone, M [UoP-PERG, University of Palermo Physics Education Research Group and Dipartimento di Fisica e Tecnologie Relative, Universita di Palermo (Italy)], E-mail: sperandeo@difter.unipa.it
2008-05-15
We present a pedagogic approach aimed at modelling electric conduction in semiconductors in order to describe and explain some macroscopic properties, such as the characteristic behaviour of resistance as a function of temperature. A simple model of the band structure is adopted for the generation of electron-hole pairs as well as for the carrier transport in moderate electric fields. The semiconductor behaviour is described by substituting the traditional statistical approach (requiring a deep mathematical background) with microscopic models, based on the Monte Carlo method, in which simple rules applied to microscopic particles and quasi-particles determine the macroscopic properties. We compare measurements of electric properties of matter with 'virtual experiments' built by using some models where the physical concepts can be presented at different formalization levels.
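The Monte Carlo idea behind such a pedagogical model can be illustrated with a toy rule (my own simplification, not the authors' code): each lattice site generates an electron-hole pair with Boltzmann probability exp(-Eg/2kT), so the carrier count, and hence the conductance, rises with temperature:

```python
import math
import random


def carrier_count(temp_k, n_sites=200000, e_gap_ev=1.1, seed=0):
    """Monte Carlo count of thermally generated electron-hole pairs.

    Each site independently produces a pair with probability
    exp(-Eg / (2 k T)), a toy rule capturing the Boltzmann factor
    that drives the falling resistance of intrinsic semiconductors.
    """
    k_b = 8.617e-5  # Boltzmann constant in eV/K
    p_pair = math.exp(-e_gap_ev / (2.0 * k_b * temp_k))
    rng = random.Random(seed)
    return sum(1 for _ in range(n_sites) if rng.random() < p_pair)


# More carriers at higher temperature means lower resistance.
n_cold = carrier_count(900.0)
n_hot = carrier_count(1200.0)
print(n_cold, n_hot)
```

Plotting `n_sites / carrier_count(T)` against `T` reproduces the characteristic exponential fall of intrinsic resistance with temperature that the authors compare against real measurements.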
A Monte Carlo-based treatment-planning tool for ion beam therapy
Böhlen, T T; Dosanjh, M; Ferrari, A; Haberer, T; Parodi, K; Patera, V; Mairani, A
2013-01-01
Ion beam therapy, as an emerging radiation therapy modality, requires continuous efforts to develop and improve tools for patient treatment planning (TP) and research applications. Dose and fluence computation algorithms using the Monte Carlo (MC) technique have served for decades as reference tools for accurate dose computations for radiotherapy. In this work, a novel MC-based treatment-planning (MCTP) tool for ion beam therapy using the pencil beam scanning technique is presented. It allows single-field and simultaneous multiple-field optimization for realistic patient treatment conditions and for dosimetric quality assurance for irradiation conditions at state-of-the-art ion beam therapy facilities. It employs iterative procedures that allow for the optimization of absorbed dose and relative biological effectiveness (RBE)-weighted dose using radiobiological input tables generated by external RBE models. Using a re-implementation of the local effect model (LEM), the MCTP tool is able to perform TP studies u...
Monte Carlo simulation of grating-based neutron phase contrast imaging at CPHS
International Nuclear Information System (INIS)
Zhang Ran; Chen Zhiqiang; Huang Zhifeng; Xiao Yongshun; Wang Xuewu; Wei Jie; Loong, C.-K.
2011-01-01
Since the launch of the Compact Pulsed Hadron Source (CPHS) project of Tsinghua University in 2009, work has begun on the design and engineering of an imaging/radiography instrument for the neutron source provided by CPHS. The instrument will perform basic tasks such as transmission imaging and computerized tomography. Additionally, we include in the design the utilization of coded-aperture and grating-based phase contrast methodology, as well as the options of prompt gamma-ray analysis and neutron-energy-selective imaging. Previously, we had implemented the hardware and data-analysis software for grating-based X-ray phase contrast imaging. Here, we investigate Geant4-based Monte Carlo simulations of neutron refraction phenomena and then model the grating-based neutron phase contrast imaging system according to the classic-optics-based method. The simulated results of retrieving the phase-shift gradient information by the five-step phase-stepping approach indicate the feasibility of grating-based neutron phase contrast imaging as an option for the cold neutron imaging instrument at the CPHS.
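The five-step phase-stepping retrieval mentioned above reduces to taking the argument of the first Fourier component of the stepping curve. A minimal sketch, assuming an ideal sinusoidal stepping signal (real data would add noise and visibility loss):

```python
import cmath
import math


def retrieve_phase(intensities):
    """Recover the fringe phase from M phase steps.

    Assumes I_k = a + b*cos(phi + 2*pi*k/M); the phase is then the
    argument of the first discrete Fourier component of the curve.
    """
    m = len(intensities)
    first_harmonic = sum(i_k * cmath.exp(-2j * math.pi * k / m)
                         for k, i_k in enumerate(intensities))
    return cmath.phase(first_harmonic)


# Synthetic five-step stepping curve with a known phase shift.
phi_true = 0.7
steps = [1.0 + 0.4 * math.cos(phi_true + 2 * math.pi * k / 5)
         for k in range(5)]
print(round(retrieve_phase(steps), 6))  # → 0.7
```

Applied pixel-by-pixel to projections with and without the sample, the difference of the retrieved phases gives the refraction-induced phase-shift gradient map.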
Full modelling of the MOSAIC animal PET system based on the GATE Monte Carlo simulation code
International Nuclear Information System (INIS)
Merheb, C; Petegnief, Y; Talbot, J N
2007-01-01
within 9%. For a 410-665 keV energy window, the measured sensitivity for a centred point source was 1.53% and mouse and rat scatter fractions were respectively 12.0% and 18.3%. The scattered photons produced outside the rat and mouse phantoms contributed to 24% and 36% of total simulated scattered coincidences. Simulated and measured single and prompt count rates agreed well for activities up to the electronic saturation at 110 MBq for the mouse and rat phantoms. Volumetric spatial resolution was 17.6 μL at the centre of the FOV with differences less than 6% between experimental and simulated spatial resolution values. The comprehensive evaluation of the Monte Carlo modelling of the Mosaic(TM) system demonstrates that the GATE package is adequately versatile and appropriate to accurately describe the response of an Anger logic based animal PET system
A zero-variance based scheme for Monte Carlo criticality simulations
Christoforou, S.
2010-01-01
The ability of the Monte Carlo method to solve particle transport problems by simulating the particle behaviour makes it a very useful technique in nuclear reactor physics. However, the statistical nature of Monte Carlo implies that there will always be a variance associated with the estimate.
Monte Carlo based dosimetry and treatment planning for neutron capture therapy of brain tumors
International Nuclear Information System (INIS)
Zamenhof, R.G.; Clement, S.D.; Harling, O.K.; Brenner, J.F.; Wazer, D.E.; Madoc-Jones, H.; Yanch, J.C.
1990-01-01
Monte Carlo based dosimetry and computer-aided treatment planning for neutron capture therapy have been developed to provide the necessary link between physical dosimetric measurements performed on the MITR-II epithermal-neutron beams and the need of the radiation oncologist to synthesize large amounts of dosimetric data into a clinically meaningful treatment plan for each individual patient. Monte Carlo simulation has been employed to characterize the spatial dose distributions within a skull/brain model irradiated by an epithermal-neutron beam designed for neutron capture therapy applications. The geometry and elemental composition employed for the mathematical skull/brain model and the neutron and photon fluence-to-dose conversion formalism are presented. A treatment planning program, NCTPLAN, developed specifically for neutron capture therapy, is described. Examples are presented illustrating both one and two-dimensional dose distributions obtainable within the brain with an experimental epithermal-neutron beam, together with beam quality and treatment plan efficacy criteria which have been formulated for neutron capture therapy. The incorporation of three-dimensional computed tomographic image data into the treatment planning procedure is illustrated. The experimental epithermal-neutron beam has a maximum usable circular diameter of 20 cm, and with 30 ppm of B-10 in tumor and 3 ppm of B-10 in blood, it produces a beam-axis advantage depth of 7.4 cm, a beam-axis advantage ratio of 1.83, a global advantage ratio of 1.70, and an advantage depth RBE-dose rate to tumor of 20.6 RBE-cGy/min (cJ/kg-min). These characteristics make this beam well suited for clinical applications, enabling an RBE-dose of 2,000 RBE-cGy/min (cJ/kg-min) to be delivered to tumor at brain midline in six fractions with a treatment time of approximately 16 minutes per fraction
Kanjilal, Oindrila; Manohar, C. S.
2017-07-01
The study considers the problem of simulation-based time-variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first-order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of the calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single-degree-of-freedom (dof) nonlinear oscillators and a multi-degree-of-freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations.
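The core idea, a change of measure that makes rare failures frequent and reweights each sample by the likelihood ratio, can be sketched in its simplest static form (a mean-shifted Gaussian rather than the paper's controlled diffusions under Girsanov's transformation):

```python
import math
import random


def failure_prob_is(threshold=4.0, shift=4.0, n=20000, seed=1):
    """Importance-sampling estimate of P(X > threshold) for X ~ N(0,1).

    Sampling from N(shift, 1) and reweighting by the likelihood ratio
    exp(-shift*x + shift^2/2) is the static analogue of Girsanov's
    change of drift: failures become frequent under the new measure.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)
        if x > threshold:
            # dP/dQ for N(0,1) relative to N(shift,1)
            total += math.exp(-shift * x + 0.5 * shift * shift)
    return total / n


est = failure_prob_is()
exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))  # ≈ 3.17e-5
print(est, exact)
```

Direct Monte Carlo would need on the order of 10^7 samples to see this failure probability at all; the shifted estimator pins it down within a few percent from 2 × 10^4 samples, which is the variance reduction the control-selection schemes in the paper generalize to dynamical systems.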
Implementation of GPU accelerated SPECT reconstruction with Monte Carlo-based scatter correction.
Bexelius, Tobias; Sohlberg, Antti
2018-03-21
Statistical SPECT reconstruction can be very time-consuming especially when compensations for collimator and detector response, attenuation, and scatter are included in the reconstruction. This work proposes an accelerated SPECT reconstruction algorithm based on graphics processing unit (GPU) processing. Ordered subset expectation maximization (OSEM) algorithm with CT-based attenuation modelling, depth-dependent Gaussian convolution-based collimator-detector response modelling, and Monte Carlo-based scatter compensation was implemented using OpenCL. The OpenCL implementation was compared against the existing multi-threaded OSEM implementation running on a central processing unit (CPU) in terms of scatter-to-primary ratios, standardized uptake values (SUVs), and processing speed using mathematical phantoms and clinical multi-bed bone SPECT/CT studies. The difference in scatter-to-primary ratios, visual appearance, and SUVs between GPU and CPU implementations was minor. On the other hand, at its best, the GPU implementation was noticed to be 24 times faster than the multi-threaded CPU version on a normal 128 × 128 matrix size 3 bed bone SPECT/CT data set when compensations for collimator and detector response, attenuation, and scatter were included. GPU SPECT reconstructions show great promise as an every day clinical reconstruction tool.
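The OSEM update at the heart of such reconstructions can be sketched generically, without any collimator-response, attenuation, or scatter modelling (a minimal system-matrix version for illustration, not the paper's OpenCL implementation):

```python
import numpy as np


def osem(y, sys_matrix, n_iters=200, n_subsets=4, eps=1e-12):
    """Ordered-subset EM reconstruction (geometry-agnostic sketch).

    y: measured projections; sys_matrix rows map image -> projections.
    Each subset update is an MLEM step restricted to a row subset:
        x <- x * A_s^T(y_s / A_s x) / A_s^T 1
    """
    n_bins, n_vox = sys_matrix.shape
    x = np.ones(n_vox)
    subsets = [np.arange(s, n_bins, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iters):
        for rows in subsets:
            a = sys_matrix[rows]
            forward = a @ x + eps                  # forward projection
            ratio = y[rows] / forward
            x *= (a.T @ ratio) / (a.T @ np.ones(len(rows)) + eps)
    return x


# Tiny synthetic problem: recover a 3-voxel image from 8 projections.
rng = np.random.default_rng(0)
a_mat = rng.uniform(0.1, 1.0, size=(8, 3))
x_true = np.array([2.0, 0.5, 1.0])
y = a_mat @ x_true                                # noiseless data
x_rec = osem(y, a_mat)
print(np.round(x_rec, 3))
```

The per-subset forward and back projections are exactly the operations the GPU version parallelizes; the Monte Carlo scatter estimate enters as an additive term in the forward projection.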
Nakano, Y; Yamazaki, A; Watanabe, K; Uritani, A; Ogawa, K; Isobe, M
2014-11-01
Neutron monitoring is important for managing the safety of fusion experiment facilities because neutrons are generated in fusion reactions. Monte Carlo simulations play an important role in evaluating the influence of neutron scattering from various structures and in correcting differences between deuterium plasma experiments and in situ calibration experiments. We evaluated these influences, based on the differences between the two experiments at the Large Helical Device, using the Monte Carlo simulation code MCNP5. The difference between the two experiments in absolute detection efficiency is estimated to be largest, of all the monitors, for the fission chamber between the O-ports. We additionally evaluated correction coefficients for some neutron monitors.
Fast CPU-based Monte Carlo simulation for radiotherapy dose calculation
Ziegenhein, Peter; Pirner, Sven; Kamerling, Cornelis Ph; Oelfke, Uwe
2015-08-01
Monte Carlo (MC) simulations are considered to be the most accurate method for calculating dose distributions in radiotherapy. Their clinical application, however, is still limited by the long runtimes conventional implementations of MC algorithms require to deliver sufficiently accurate results on high-resolution imaging data. In order to overcome this obstacle we developed the software package PhiMC, which is capable of computing precise dose distributions in a sub-minute time frame by leveraging the potential of modern many- and multi-core CPU-based computers. PhiMC is based on the well-verified dose planning method (DPM). We could demonstrate that PhiMC delivers dose distributions which are in excellent agreement with DPM. The multi-core implementation of PhiMC scales well across different computer architectures and achieves a speed-up of up to 37× compared to the original DPM code executed on a modern system. Furthermore, we could show that our CPU-based implementation on a modern workstation is between 1.25× and 1.95× faster than a well-known GPU implementation of the same simulation method on an NVIDIA Tesla C2050. Since CPUs can work with several hundred GB of RAM, the typical GPU memory limitation does not apply to our implementation, and high-resolution clinical plans can be calculated.
International Nuclear Information System (INIS)
Danielson, Thomas; Sutton, Jonathan E.; Hin, Céline; Savara, Aditya
2017-01-01
Lattice-based kinetic Monte Carlo (KMC) simulations offer a powerful technique for investigating large reaction networks while retaining spatial configuration information, unlike ordinary differential equations. However, large chemical reaction networks can contain reaction processes with rates spanning multiple orders of magnitude. This can lead to the problem of "KMC stiffness" (similar to stiffness in differential equations), where the computational expense has the potential to be overwhelmed by very short time-steps during KMC simulations, with the simulation spending an inordinate number of KMC steps (and CPU time) simulating fast frivolous processes (FFPs) without progressing the system (reaction network). In order to achieve simulation times that are experimentally relevant or desired for predictions, a dynamic throttling algorithm involving separation of the processes into speed ranks based on event frequencies has been designed and implemented, with the intent of decreasing the probability of FFP events and increasing the probability of slow-process events, allowing rate-limiting events to become more likely to be observed in KMC simulations. This Staggered Quasi-Equilibrium Rank-based Throttling for Steady-state (SQERTSS) algorithm is designed for use in achieving and simulating steady-state conditions in KMC simulations. Lastly, as shown in this work, the SQERTSS algorithm also works for transient conditions: the correct configuration space and final state will still be achieved if the required assumptions are not violated, with the caveat that the sizes of the time-steps may be distorted during the transient period.
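The stiffness problem and the payoff of throttling can be seen in a two-process toy model (not the SQERTSS algorithm itself, which throttles rank-by-rank while preserving quasi-equilibrium): counting how many KMC steps are burned before the slow, rate-limiting process fires once.

```python
import random


def mean_kmc_steps(throttle, trials=300, seed=2):
    """Average number of KMC steps until the slow process fires once.

    Two competing processes: a fast 'frivolous' flip with rate
    1e4 * throttle and a slow rate-limiting reaction with rate 1.0.
    At each KMC step one event is picked with probability proportional
    to its rate; throttling the fast rate slashes the wasted steps.
    """
    rng = random.Random(seed)
    rate_fast = 1.0e4 * throttle
    rate_slow = 1.0
    p_slow = rate_slow / (rate_fast + rate_slow)
    total_steps = 0
    for _ in range(trials):
        while True:
            total_steps += 1
            if rng.random() < p_slow:
                break
    return total_steps / trials


steps_raw = mean_kmc_steps(throttle=1.0)         # ~1e4 steps per slow event
steps_throttled = mean_kmc_steps(throttle=1e-3)  # ~11 steps per slow event
print(steps_raw, steps_throttled)
```

The untouched system spends roughly 10,000 steps per rate-limiting event on the FFP; throttling the fast rate by 10^3 cuts this to about a dozen, which is exactly why rank-based throttling makes experimentally relevant timescales reachable.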
Monte Carlo vs. Pencil Beam based optimization of stereotactic lung IMRT
Directory of Open Access Journals (Sweden)
Weinmann Martin
2009-12-01
Background: The purpose of the present study is to compare finite-size pencil beam (fsPB) and Monte Carlo (MC) based optimization of lung intensity-modulated stereotactic radiotherapy (lung IMSRT). Materials and methods: A fsPB and a MC algorithm as implemented in a biological IMRT planning system were validated by film measurements in a static lung phantom. Then, they were applied for static lung IMSRT planning based on three different geometrical patient models (one-phase static CT, density-overwrite one-phase static CT, average CT of the same patient). Both 6 and 15 MV beam energies were used. The resulting treatment plans were compared by how well they fulfilled the prescribed optimization constraints, both for the dose distributions calculated on the static patient models and for the accumulated dose, recalculated with MC on each of 8 CTs of a 4DCT set. Results: In the phantom measurements, the MC dose engine showed discrepancies. Conclusions: It is feasible to employ the MC dose engine for optimization of lung IMSRT and the plans are superior to fsPB. Use of static patient models introduces a bias in the MC dose distribution compared to the 4D MC recalculated dose, but this bias is predictable and therefore MC-based optimization on static patient models is considered safe.
Three-dimensional depletion analysis of the axial end of a Takahama fuel rod
International Nuclear Information System (INIS)
DeHart, Mark D.; Gauld, Ian C.; Suyama, Kenya
2008-01-01
Recent developments in spent fuel characterization methods have involved the development of several three-dimensional depletion algorithms based on Monte Carlo methods for the transport solution. However, most validation done to date has been based on radiochemical assay data for spent fuel samples selected from locations in fuel assemblies that can be easily analyzed using two-dimensional depletion methods. The development of a validation problem that has a truly three-dimensional nature is desirable to thoroughly test the full capabilities of advanced three-dimensional depletion tools. This paper reports on the results of three-dimensional depletion calculations performed using the T6-DEPL depletion sequence of the SCALE 5.1 code system, which couples the KENO-VI Monte Carlo transport solver with the ORIGEN-S depletion and decay code. Analyses are performed for a spent fuel sample that was extracted from within the last two centimeters of the fuel pellet stack. Although a three-dimensional behavior is clearly seen in the results of a number of calculations performed under different assumptions, the uncertainties associated with the position of the sample and its local surroundings render this sample of little value as a validation data point. (authors)
Ultrafast cone-beam CT scatter correction with GPU-based Monte Carlo simulation
Directory of Open Access Journals (Sweden)
Yuan Xu
2014-03-01
Purpose: Scatter artifacts severely degrade image quality of cone-beam CT (CBCT). We present an ultrafast scatter correction framework using GPU-based Monte Carlo (MC) simulation and a prior patient CT image, aiming to finish the whole process, including both scatter correction and reconstruction, within 30 seconds. Methods: The method consists of six steps: (1) FDK reconstruction using raw projection data; (2) rigid registration of the planning CT to the FDK results; (3) MC scatter calculation at sparse view angles using the planning CT; (4) interpolation of the calculated scatter signals to other angles; (5) removal of scatter from the raw projections; (6) FDK reconstruction using the scatter-corrected projections. In addition to using the GPU to accelerate MC photon simulations, we also use a small number of photons and a down-sampled CT image in simulation to further reduce computation time. A novel denoising algorithm is used to eliminate the MC noise, caused by the low photon numbers, from the simulated scatter images. The method is validated on one simulated head-and-neck case with 364 projection angles. Results: We have examined the variation of the scatter signal among projection angles using Fourier analysis. It is found that scatter images at 31 angles are sufficient to restore those at all angles with < 0.1% error. For the simulated patient case with a resolution of 512 × 512 × 100, we simulated 5 × 10^6 photons per angle. The total computation time is 20.52 seconds on an Nvidia GTX Titan GPU, and the time at each step is 2.53, 0.64, 14.78, 0.13, 0.19, and 2.25 seconds, respectively. The scatter-induced shading/cupping artifacts are substantially reduced, and the average HU error of a region-of-interest is reduced from 75.9 to 19.0 HU. Conclusion: A practical ultrafast MC-based CBCT scatter correction scheme is developed. It accomplishes the whole procedure of scatter correction and reconstruction within 30 seconds.
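Steps 3-4 above, simulating scatter only at sparse angles and interpolating to the rest, work because scatter varies smoothly with gantry angle. A toy sketch with an assumed smooth scatter profile (not real projection data) shows why ~31 of 364 angles suffice:

```python
import numpy as np

# Toy model: a smooth periodic "scatter level" per projection angle.
angles = np.linspace(0.0, 2.0 * np.pi, 364, endpoint=False)
scatter_true = 100.0 + 20.0 * np.sin(angles) + 5.0 * np.cos(2.0 * angles)

# Simulate scatter only at every 12th angle (31 of 364 view angles).
sparse_idx = np.arange(0, 364, 12)
sparse_angles = angles[sparse_idx]
sparse_scatter = scatter_true[sparse_idx]

# Periodic linear interpolation back to all 364 angles: repeat the
# first sparse sample at 2*pi so the last gap is bracketed.
scatter_est = np.interp(angles,
                        np.append(sparse_angles, 2.0 * np.pi),
                        np.append(sparse_scatter, sparse_scatter[0]))

rel_err = float(np.max(np.abs(scatter_est - scatter_true) / scatter_true))
print(len(sparse_idx), round(rel_err, 4))
```

For a smoothly varying signal the interpolation error is far below the MC noise level, so the expensive MC step need only run at the sparse angles; this is the main source of the reported sub-30-second runtime.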
Memristive device based on a depletion-type SONOS field effect transistor
Himmel, N.; Ziegler, M.; Mähne, H.; Thiem, S.; Winterfeld, H.; Kohlstedt, H.
2017-06-01
State-of-the-art SONOS (silicon-oxide-nitride-oxide-polysilicon) field effect transistors were operated in a memristive switching mode. The circuit design is a variation of the MemFlash concept, and the particular properties of depletion-type SONOS transistors were taken into account. The transistor was externally wired with a resistively shunted pn-diode. Experimental current-voltage curves show analog bipolar switching characteristics within a bias voltage range of ±10 V, exhibiting a pronounced asymmetric hysteresis loop. The experimental data are confirmed by SPICE simulations. The underlying memristive mechanism is purely electronic, which eliminates an initial forming step for the as-fabricated cells. This fact, together with reasonable design flexibility, in particular to adjust the maximum R ON/R OFF ratio, makes these cells attractive for neuromorphic applications. The relatively large set and reset voltages of around ±10 V might be decreased by using thinner gate oxides. The all-electric operation principle, in combination with an established silicon manufacturing process for SONOS devices at the Semiconductor Foundry X-FAB, promises reliable operation, low parameter spread and high integration density.
International Nuclear Information System (INIS)
Baraffe, I.; Pratt, J.; Goffrey, T.; Constantino, T.; Viallet, M.; Folini, D.; Popov, M. V.; Walder, R.
2017-01-01
We study lithium depletion in low-mass and solar-like stars as a function of time, using a new diffusion coefficient describing extra-mixing taking place at the bottom of a convective envelope. This new form is motivated by multi-dimensional fully compressible, time-implicit hydrodynamic simulations performed with the MUSIC code. Intermittent convective mixing at the convective boundary in a star can be modeled using extreme value theory, a statistical analysis frequently used for finance, meteorology, and environmental science. In this Letter, we implement this statistical diffusion coefficient in a one-dimensional stellar evolution code, using parameters calibrated from multi-dimensional hydrodynamic simulations of a young low-mass star. We propose a new scenario that can explain observations of the surface abundance of lithium in the Sun and in clusters covering a wide range of ages, from ∼50 Myr to ∼4 Gyr. Because it relies on our physical model of convective penetration, this scenario has a limited number of assumptions. It can explain the observed trend between rotation and depletion, based on a single additional assumption, namely, that rotation affects the mixing efficiency at the convective boundary. We suggest the existence of a threshold in stellar rotation rate above which rotation strongly prevents the vertical penetration of plumes and below which rotation has small effects. In addition to providing a possible explanation for the long-standing problem of lithium depletion in pre-main-sequence and main-sequence stars, the strength of our scenario is that its basic assumptions can be tested by future hydrodynamic simulations.
International Nuclear Information System (INIS)
Choi, Sung Hoon; Kwark, Min Su; Shim, Hyung Jin
2012-01-01
The Monte Carlo (MC) particle transport analysis of a complex system such as a research reactor, accelerator, or fusion facility may require accurate modeling of its complicated geometry. Manual modeling by using the text interface of a MC code to define the geometrical objects is tedious, lengthy and error-prone. This problem can be overcome by taking advantage of the modeling capability of computer aided design (CAD) systems. There have been two kinds of approaches to developing MC code systems that utilize CAD data: external format conversion and CAD-kernel-embedded MC simulation. The first approach includes several interfacing programs such as McCAD, MCAM, and GEOMIT, which were developed to automatically convert CAD data into MCNP geometry input data. This approach makes the most of the existing MC codes without any modifications, but implies latent data inconsistency due to the difference of the geometry modeling systems. In the second approach, a MC code utilizes the CAD data for direct particle tracking or for conversion to an internal data structure of constructive solid geometry (CSG) and/or boundary representation (B-rep) modeling with the help of a CAD kernel. MCNP-BRL and OiNC have demonstrated their capabilities for CAD-based MC simulations. Recently we have developed a CAD-based geometry processing module for MC particle simulation by using the OpenCASCADE (OCC) library. In the developed module, CAD data can be used for particle tracking through primitive CAD surfaces (hereafter the CAD-based tracking) or for internal conversion to the CSG data structure. In this paper, the performances of the text-based model, the CAD-based tracking, and the internal CSG conversion are compared by using an in-house MC code, McSIM, equipped with the developed CAD-based geometry processing module.
R P, Meenaakshi Sundhari
2018-01-27
Objective: The method of treating cancer that combines light and light-sensitive drugs to selectively destroy tumour cells without harming healthy tissue is called photodynamic therapy (PDT). It requires accurate data for the light dose distribution, generated with scalable algorithms. One of the benchmark approaches involves Monte Carlo (MC) simulations. These give an accurate assessment of the light dose distribution but are very demanding in computation time, which prevents routine application for treatment planning. Methods: In order to resolve this problem, an MC simulation design based on the gold-standard software in biophotonics was implemented with a wavelet-based genetic algorithm search (WGAS). Result: The accuracy of the proposed method was compared with that of the standard optimization method using a realistic skin model. The maximum stop-band attenuation of the designed LP, HP, BP and BS filters was assessed using the proposed WGAS algorithm as well as with other methods. Conclusion: The proposed methodology employs intermediate wavelets which improve the diversification rate of the charged genetic algorithm search, leading to a significant improvement in design-effort efficiency.
International Nuclear Information System (INIS)
Saha, Krishnendu; Straus, Kenneth J.; Glick, Stephen J.; Chen, Yu.
2014-01-01
To maximize sensitivity, it is desirable that ring Positron Emission Tomography (PET) systems dedicated for imaging the breast have a small bore. Unfortunately, due to parallax error this causes substantial degradation in spatial resolution for objects near the periphery of the breast. In this work, a framework for computing and incorporating an accurate system matrix into iterative reconstruction is presented in an effort to reduce spatial resolution degradation towards the periphery of the breast. The GATE Monte Carlo Simulation software was utilized to accurately model the system matrix for a breast PET system. A strategy for increasing the count statistics in the system matrix computation and for reducing the system element storage space was used by calculating only a subset of matrix elements and then estimating the rest of the elements by using the geometric symmetry of the cylindrical scanner. To implement this strategy, polar voxel basis functions were used to represent the object, resulting in a block-circulant system matrix. Simulation studies using a breast PET scanner model with ring geometry demonstrated improved contrast at 45% reduced noise level and 1.5 to 3 times resolution performance improvement when compared to MLEM reconstruction using a simple line-integral model. The GATE based system matrix reconstruction technique promises to improve resolution and noise performance and reduce image distortion at FOV periphery compared to line-integral based system matrix reconstruction
Energy Technology Data Exchange (ETDEWEB)
Baba, Justin S [ORNL; John, Dwayne O [ORNL; Koju, Vijay [ORNL
2015-01-01
The propagation of light in turbid media is an active area of research with relevance to numerous investigational fields, e.g., biomedical diagnostics and therapeutics. The statistical random-walk nature of photon propagation through turbid media is ideal for computation-based modeling and simulation. Ready access to supercomputing resources provides a means for attaining brute-force solutions to stochastic light-matter interactions entailing scattering, by facilitating timely propagation of a sufficient number (>10 million) of photons while tracking characteristic parameters based on the incorporated physics of the problem. One such model that works well for isotropic scatter but fails for anisotropic scatter, which is the case for many biomedical sample scattering problems, is the diffusion approximation. In this report, we address this by utilizing Berry phase (BP) evolution as a means for capturing anisotropic scattering characteristics of samples in the preceding depth where the diffusion approximation fails. We extend the polarization-sensitive Monte Carlo method of Ramella-Roman et al. [1] to include the computationally intensive tracking of photon trajectory in addition to polarization state at every scattering event. To speed up the computations, which entail the appropriate rotations of reference frames, the code was parallelized using OpenMP. The results presented reveal that BP is strongly correlated with the photon penetration depth, thus potentiating the possibility of polarimetric depth-resolved characterization of highly scattering samples, e.g., biological tissues.
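As a minimal companion to the methodology above, the random-walk core of such simulations (exponential free paths plus anisotropic Henyey-Greenstein scattering) can be sketched in a few lines. The Berry-phase and polarization tracking themselves are not reproduced here, and every parameter (mu_t, g, photon counts) is invented for illustration:

```python
import math, random

def hg_cos_theta(g, rng):
    """Sample a scattering-angle cosine from the Henyey-Greenstein phase function."""
    if abs(g) < 1e-6:
        return 2.0 * rng.random() - 1.0                    # isotropic limit
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
    return (1.0 + g * g - s * s) / (2.0 * g)

def max_depth(mu_t, g, n_scatter, rng):
    """Deepest z reached by one photon launched along +z, over n_scatter events."""
    z, cos_z, deepest = 0.0, 1.0, 0.0
    for _ in range(n_scatter):
        step = -math.log(1.0 - rng.random()) / mu_t        # exponential free path
        z += cos_z * step
        deepest = max(deepest, z)
        # Update only the polar direction cosine (spherical law of cosines);
        # full 3-D reference-frame rotation is deliberately omitted here.
        ct = hg_cos_theta(g, rng)
        st = math.sqrt(max(0.0, 1.0 - ct * ct))
        sin_z = math.sqrt(max(0.0, 1.0 - cos_z * cos_z))
        cos_z = cos_z * ct + sin_z * st * math.cos(2.0 * math.pi * rng.random())
    return deepest

rng = random.Random(1)
depths = [max_depth(mu_t=10.0, g=0.9, n_scatter=20, rng=rng) for _ in range(2000)]
mean_depth = sum(depths) / len(depths)
```

With a strongly forward-peaked phase function (g = 0.9) the photons remain quasi-ballistic over the first scattering events, which is exactly the pre-diffusive depth regime the abstract targets.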
Directory of Open Access Journals (Sweden)
M. J. Werner
2011-02-01
Data assimilation is routinely employed in meteorology, engineering and computer sciences to optimally combine noisy observations with prior model information to obtain better estimates of a state, and thus better forecasts, than achieved by ignoring data uncertainties. Earthquake forecasting, too, suffers from measurement errors and partial model information and may thus gain significantly from data assimilation. We present perhaps the first fully implementable data assimilation method for earthquake forecasts generated by a point-process model of seismicity. We test the method on a synthetic and pedagogical example of a renewal process observed in noise, which is relevant for the seismic gap hypothesis, models of characteristic earthquakes and recurrence statistics of large quakes inferred from paleoseismic data records. To address the non-Gaussian statistics of earthquakes, we use sequential Monte Carlo methods, a set of flexible simulation-based methods for recursively estimating arbitrary posterior distributions. We perform extensive numerical simulations to demonstrate the feasibility and benefits of forecasting earthquakes based on data assimilation.
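The sequential Monte Carlo machinery referred to above can be illustrated with a generic bootstrap particle filter. The toy state-space model below (a random-walk state observed in Gaussian noise) is a stand-in for, not a reproduction of, the authors' renewal-process model, and all parameters are made up:

```python
import math, random

def particle_filter(observations, n_particles=1000, q=0.1, r=0.5, seed=0):
    """Bootstrap particle filter for x_t = x_{t-1} + N(0, q^2), y_t = x_t + N(0, r^2)."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for y in observations:
        # Propagate each particle through the transition model
        particles = [x + rng.gauss(0.0, q) for x in particles]
        # Weight by the Gaussian observation likelihood p(y | x)
        weights = [math.exp(-((y - x) ** 2) / (2.0 * r * r)) for x in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # Posterior-mean estimate, then multinomial resampling
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates

# Synthetic data: slowly drifting true state, noisy observations.
data_rng = random.Random(42)
truth, states, obs = 0.0, [], []
for _ in range(50):
    truth += data_rng.gauss(0.0, 0.1)
    states.append(truth)
    obs.append(truth + data_rng.gauss(0.0, 0.5))

est = particle_filter(obs)
rmse = (sum((e - x) ** 2 for e, x in zip(est, states)) / len(states)) ** 0.5
```

Propagate-weight-resample is the entire loop; only the transition and likelihood models would change for a renewal-process application with non-Gaussian statistics.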
Energy Technology Data Exchange (ETDEWEB)
Oh, Kye Min [KHNP Central Research Institute, Daejeon (Korea, Republic of); Han, Sang Hoon; Park, Jin Hee; Lim, Ho Gon; Yang, Joon Yang [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Heo, Gyun Young [Kyung Hee University, Yongin (Korea, Republic of)
2017-06-15
In Korea, many nuclear power plants operate at a single site based on geographical characteristics, but the population density near the sites is higher than that in other countries. Thus, multiunit accidents are a more important consideration than in other countries and should be addressed appropriately. Currently, there are many issues related to a multiunit probabilistic safety assessment (PSA). One of them is the quantification of a multiunit PSA model. A traditional PSA uses a Boolean manipulation of the fault tree in terms of the minimal cut set. However, such methods have some limitations when rare event approximations cannot be used effectively or a very small truncation limit should be applied to identify accident sequence combinations for a multiunit site. In particular, it is well known that seismic risk in terms of core damage frequency can be overestimated because there are many events that have a high failure probability. In this study, we propose a quantification method based on a Monte Carlo approach for a multiunit PSA model. This method can consider all possible accident sequence combinations in a multiunit site and calculate a more exact value for events that have a high failure probability. An example model for six identical units at a site was also developed and quantified to confirm the applicability of the proposed method.
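The limitation of minimal-cut-set quantification that the study targets can be seen in miniature: when basic events have high probabilities (as in seismic PSA), the rare-event approximation over-counts overlapping cut sets, while direct Monte Carlo sampling of event states converges to the exact top-event probability. The fault-tree logic and probabilities below are invented:

```python
import random

# Invented three-event fault tree: top event = (e0 AND e1) OR e2, a stand-in
# for a minimal-cut-set expression with high basic-event probabilities.
p = [0.4, 0.5, 0.3]

def top_event(e):
    return (e[0] and e[1]) or e[2]

# Rare-event approximation: sum over the minimal cut sets {e0,e1} and {e2}.
ree = p[0] * p[1] + p[2]

# Exact probability by inclusion-exclusion, for reference.
exact = p[0] * p[1] + p[2] - p[0] * p[1] * p[2]

# Direct Monte Carlo: sample basic-event states and evaluate the logic as-is.
rng = random.Random(0)
n = 200_000
hits = sum(top_event([rng.random() < pi for pi in p]) for _ in range(n))
mc = hits / n
```

For rare events the two estimates nearly coincide, but here the rare-event approximation (0.50) visibly overshoots the exact value (0.44), which the sampling estimate recovers.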
The development of GPU-based parallel PRNG for Monte Carlo applications in CUDA Fortran
Directory of Open Access Journals (Sweden)
Hamed Kargaran
2016-04-01
The implementation of Monte Carlo simulation in CUDA Fortran requires fast random number generation with good statistical properties on the GPU. In this study, a GPU-based parallel pseudo-random number generator (GPPRNG) has been proposed for use in high performance computing systems. According to the type of GPU memory usage, the GPU scheme is divided into two work modes, GLOBAL_MODE and SHARED_MODE. To generate parallel random numbers based on the independent sequence method, a combination of the middle-square method and a chaotic map, along with the Xorshift PRNG, has been employed. Implementation of our developed PPRNG on a single GPU showed a speedup of 150x and 470x (with respect to the speed of the PRNG on a single CPU core) for GLOBAL_MODE and SHARED_MODE, respectively. To evaluate the accuracy of our developed GPPRNG, its performance was compared to that of some other commercially available PPRNGs such as MATLAB, FORTRAN and the Miller-Park algorithm through employing specific standard tests. The results of this comparison showed that the GPPRNG developed in this study can be used as a fast and accurate tool for computational science applications.
The development of GPU-based parallel PRNG for Monte Carlo applications in CUDA Fortran
Energy Technology Data Exchange (ETDEWEB)
Kargaran, Hamed, E-mail: h-kargaran@sbu.ac.ir; Minuchehr, Abdolhamid; Zolfaghari, Ahmad [Department of nuclear engineering, Shahid Behesti University, Tehran, 1983969411 (Iran, Islamic Republic of)
2016-04-15
The implementation of Monte Carlo simulation in CUDA Fortran requires fast random number generation with good statistical properties on the GPU. In this study, a GPU-based parallel pseudo-random number generator (GPPRNG) has been proposed for use in high performance computing systems. According to the type of GPU memory usage, the GPU scheme is divided into two work modes, GLOBAL-MODE and SHARED-MODE. To generate parallel random numbers based on the independent sequence method, a combination of the middle-square method and a chaotic map, along with the Xorshift PRNG, has been employed. Implementation of our developed PPRNG on a single GPU showed a speedup of 150x and 470x (with respect to the speed of the PRNG on a single CPU core) for GLOBAL-MODE and SHARED-MODE, respectively. To evaluate the accuracy of our developed GPPRNG, its performance was compared to that of some other commercially available PPRNGs such as MATLAB, FORTRAN and the Miller-Park algorithm through employing specific standard tests. The results of this comparison showed that the GPPRNG developed in this study can be used as a fast and accurate tool for computational science applications.
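Of the ingredients combined above, the Xorshift component is the simplest to sketch. Below is a serial, plain-Python xorshift32 using Marsaglia's common 13/17/5 shift triple; the paper's exact constants, seeding scheme, and GPU memory layout are not stated here, so treat those details as assumptions:

```python
MASK32 = 0xFFFFFFFF

def xorshift32(state):
    """Advance a 32-bit xorshift state once (Marsaglia's 13/17/5 triple)."""
    state ^= (state << 13) & MASK32
    state ^= state >> 17
    state ^= (state << 5) & MASK32
    return state & MASK32

def uniforms(seed, n):
    """Return n floats in [0, 1) from successive xorshift32 states."""
    s = (seed & MASK32) or 1          # the all-zero state is absorbing
    out = []
    for _ in range(n):
        s = xorshift32(s)
        out.append(s / 2.0 ** 32)
    return out

u = uniforms(123456789, 10_000)
mean = sum(u) / len(u)                # near 0.5 for a healthy generator
```

In the independent-sequence parallelization the abstract describes, each GPU thread would run its own generator instance from a distinct, well-separated seed.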
Energy Technology Data Exchange (ETDEWEB)
Nievaart, V A; Daquino, G G; Moss, R L [JRC European Commission, PO Box 2, 1755ZG Petten (Netherlands)
2007-06-15
Boron Neutron Capture Therapy (BNCT) is a bimodal form of radiotherapy for the treatment of tumour lesions. Since the cancer cells in the treatment volume are targeted with {sup 10}B, a higher dose is given to these cancer cells due to the {sup 10}B(n,{alpha}){sup 7}Li reaction, in comparison with the surrounding healthy cells. In Petten (The Netherlands), at the High Flux Reactor, a specially tailored neutron beam has been designed and installed. Over 30 patients have been treated with BNCT in 2 clinical protocols: a phase I study for the treatment of glioblastoma multiforme and a phase II study on the treatment of malignant melanoma. Furthermore, activities concerning the extracorporeal treatment of metastases in the liver (from colorectal cancer) are in progress. The irradiation beam at the HFR contains both neutrons and gammas which, together with the complex geometries of both patient and beam set-up, demand very detailed treatment planning calculations. A well designed Treatment Planning System (TPS) should obey the following general scheme: (1) a pre-processing phase (CT and/or MRI scans to create the geometric solid model, cross-section files for neutrons and/or gammas); (2) calculations (3D radiation transport, estimation of neutron and gamma fluences, macroscopic and microscopic dose); (3) a post-processing phase (displaying of the results, iso-doses and iso-fluences). Treatment planning in BNCT is performed making use of Monte Carlo codes incorporated in a framework which also includes the pre- and post-processing phases. In particular, the glioblastoma multiforme protocol used BNCT{sub r}tpe, while the melanoma metastases protocol uses NCTPlan. In addition, an ad hoc Positron Emission Tomography (PET) based treatment planning system (BDTPS) has been implemented in order to integrate the real macroscopic boron distribution obtained from PET scanning. BDTPS is patented and uses MCNP as the calculation engine. The precision obtained by the Monte Carlo
Fiorini, Francesca; Schreuder, Niek; Van den Heuvel, Frank
2018-02-01
Cyclotron-based pencil beam scanning (PBS) proton machines represent nowadays the majority and most affordable choice for proton therapy facilities; however, their representation in Monte Carlo (MC) codes is more complex than for passively scattered proton systems or synchrotron-based PBS machines. This is because degraders are used to decrease the energy from the cyclotron maximum energy to the desired energy, resulting in a unique spot size, divergence, and energy spread depending on the amount of degradation. This manuscript outlines a generalized methodology to characterize a cyclotron-based PBS machine in a general-purpose MC code. The code can then be used to generate clinically relevant plans starting from commercial TPS plans. The described beam is produced at the Provision Proton Therapy Center (Knoxville, TN, USA) using cyclotron-based IBA Proteus Plus equipment. We characterized the Provision beam in the MC code FLUKA using the experimental commissioning data. The code was then validated using experimental data in water phantoms for single pencil beams and larger irregular fields. Comparisons with RayStation TPS plans are also presented. Comparisons of experimental, simulated, and planned dose depositions in water plans show that the same doses are calculated by both programs inside the target areas, while penumbrae differences are found at the field edges. These differences are lower for the MC, with a γ(3%-3 mm) index never below 95%. Extensive explanations on how MC codes can be adapted to simulate cyclotron-based scanning proton machines are given with the aim of using the MC as a TPS verification tool to check and improve clinical plans. For all the tested cases, we showed that dose differences with experimental data are lower for the MC than for the TPS, implying that the created FLUKA beam model is better able to describe the experimental beam. © 2017 The Authors. Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
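The γ(3%-3 mm) figure quoted above is a gamma-index pass rate. A deliberately simplified 1-D version conveys the definition; production tools evaluate interpolated 2-D/3-D dose grids, and the profiles below are made up:

```python
# Invented 1-D dose profiles on a 1 mm grid, zero-padded at the edges.
prof = [0.0, 0.0, 0.0, 0.2, 0.6, 1.0, 1.0, 0.6, 0.2, 0.0, 0.0, 0.0]

def gamma_pass_rate(ref, ev, dx, dose_tol=0.03, dist_tol=3.0):
    """Fraction of reference points with gamma <= 1 (global dose criterion)."""
    d_max = max(ref)
    passed = 0
    for i, dr in enumerate(ref):
        best = float("inf")
        for j, de in enumerate(ev):
            dist = (i - j) * dx                        # spatial offset, mm
            ddose = (de - dr) / (dose_tol * d_max)     # normalised dose difference
            best = min(best, (dist / dist_tol) ** 2 + ddose ** 2)
        passed += best <= 1.0
    return passed / len(ref)

# Identical profiles pass everywhere; a 1 mm shift is well inside 3 mm.
shifted = prof[1:] + [0.0]
rate_same = gamma_pass_rate(prof, prof, dx=1.0)
rate_shift = gamma_pass_rate(prof, shifted, dx=1.0)
```

The gamma value at each reference point is the minimum combined dose/distance mismatch over the evaluated distribution, so small spatial shifts in high-gradient penumbra regions can still pass while large local dose errors cannot.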
International Nuclear Information System (INIS)
Yeh, C.Y.; Lee, C.C.; Chao, T.C.; Lin, M.H.; Lai, P.A.; Liu, F.H.; Tung, C.J.
2014-01-01
This study aims to utilize a measurement-based Monte Carlo (MBMC) method to evaluate the accuracy of dose distributions calculated using the Eclipse radiotherapy treatment planning system (TPS) based on the anisotropic analytical algorithm. Dose distributions were calculated for the nasopharyngeal carcinoma (NPC) patients treated with the intensity modulated radiotherapy (IMRT). Ten NPC IMRT plans were evaluated by comparing their dose distributions with those obtained from the in-house MBMC programs for the same CT images and beam geometry. To reconstruct the fluence distribution of the IMRT field, an efficiency map was obtained by dividing the energy fluence of the intensity modulated field by that of the open field, both acquired from an aS1000 electronic portal imaging device. The integrated image of the non-gated mode was used to acquire the full dose distribution delivered during the IMRT treatment. This efficiency map redistributed the particle weightings of the open field phase-space file for IMRT applications. Dose differences were observed in the tumor and air cavity boundary. The mean difference between MBMC and TPS in terms of the planning target volume coverage was 0.6% (range: 0.0–2.3%). The mean difference for the conformity index was 0.01 (range: 0.0–0.01). In conclusion, the MBMC method serves as an independent IMRT dose verification tool in a clinical setting. - Highlights: ► The patient-based Monte Carlo method serves as a reference standard to verify IMRT doses. ► 3D Dose distributions for NPC patients have been verified by the Monte Carlo method. ► Doses predicted by the Monte Carlo method matched closely with those by the TPS. ► The Monte Carlo method predicted a higher mean dose to the middle ears than the TPS. ► Critical organ doses should be confirmed to avoid overdose to normal organs
The Calculation Of Titanium Buildup Factor Based On Monte Carlo Method
International Nuclear Information System (INIS)
Has, Hengky Istianto; Achmad, Balza; Harto, Andang Widi
2001-01-01
The objective of a radioactive-waste container is to reduce radiation emission to the environment. For that purpose, we need a material with the ability to shield that radiation and to last for 10,000 years. Titanium is one of the materials that can be used to make containers. Unfortunately, its buildup factor, which is an important factor in setting up radiation shielding, has not been calculated. Therefore, calculations of the titanium buildup factor as a function of other parameters are needed. The buildup factor can be determined either experimentally or by simulation. The purpose of this study is to determine the titanium buildup factor using a simulation program based on the Monte Carlo method. Monte Carlo is a stochastic method and is therefore well suited to calculating nuclear radiation, which is random in nature. A simulation program can also give results when experiments cannot be performed because of their limitations. The result of the simulation is that as the titanium thickness increases, both the buildup factor and the dose buildup factor increase; conversely, at higher photon energies both are lower. The photon energy used in the simulation ranged from 0.2 MeV to 2.0 MeV with a 0.2 MeV step size, while the thickness ranged from 0.2 cm to 3.0 cm with a step size of 0.2 cm. The highest buildup factor is β = 1.4540 ± 0.047229 at 0.2 MeV photon energy with a titanium thickness of 3.0 cm. The lowest is β = 1.0123 ± 0.000650 at 2.0 MeV photon energy with 0.2 cm thickness of titanium. For the dose buildup factor, the highest value is β_D = 1.3991 ± 0.013999 at 0.2 MeV photon energy with a titanium thickness of 3.0 cm and the lowest is β_D = 1.0042 ± 0.000597 at 2.0 MeV with a titanium thickness of 0.2 cm. For the photon energy and titanium thickness used in the simulation, the buildup factor can be formulated as β = 1.1264 e^(-0.0855E) e^(0.0584T), where E is the photon energy in MeV and T the titanium thickness in cm.
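A quick way to sanity-check the fitted correlation quoted at the end of the abstract, β = 1.1264·e^(-0.0855E)·e^(0.0584T) with E in MeV and T in cm, is to evaluate it at the extreme simulated conditions. Being a fit, it reproduces the trend (larger β for softer photons and thicker titanium) rather than the tabulated extremes exactly:

```python
import math

def buildup(E_mev, T_cm):
    """Fitted titanium buildup-factor correlation from the abstract."""
    return 1.1264 * math.exp(-0.0855 * E_mev) * math.exp(0.0584 * T_cm)

b_low = buildup(2.0, 0.2)   # hardest photons, thinnest titanium (~0.96)
b_high = buildup(0.2, 3.0)  # softest photons, thickest titanium (~1.32)
```

The fitted extremes (about 0.96 and 1.32) deviate from the directly simulated values (1.0123 and 1.4540) by several percent, which is the residual of the exponential fit, not a contradiction.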
A GPU-based Monte Carlo dose calculation code for photon transport in a voxel phantom
Energy Technology Data Exchange (ETDEWEB)
Bellezzo, M.; Do Nascimento, E.; Yoriyaz, H., E-mail: mbellezzo@gmail.br [Instituto de Pesquisas Energeticas e Nucleares / CNEN, Av. Lineu Prestes 2242, Cidade Universitaria, 05508-000 Sao Paulo (Brazil)
2014-08-15
As the most accurate method to estimate absorbed dose in radiotherapy, the Monte Carlo method has been widely used in radiotherapy treatment planning. Nevertheless, its efficiency can be improved for clinical routine applications. In this paper, we present the CUBMC code, a GPU-based MC photon transport algorithm for dose calculation under the Compute Unified Device Architecture platform. The simulation of physical events is based on the algorithm used in Penelope, and the cross section table used is the one generated by the Material routine, also present in the Penelope code. Photons are transported in voxel-based geometries with different compositions. To demonstrate the capabilities of the algorithm developed in the present work, four 128 x 128 x 128 voxel phantoms have been considered. One of them is composed of a homogeneous water-based medium, the second is composed of bone, the third is composed of lung and the fourth is a heterogeneous bone and vacuum geometry. Simulations were done considering a 6 MeV monoenergetic photon point source. Two distinct approaches were used for transport simulation. The first of them forces the photon to stop at every voxel frontier; the second is the Woodcock method, in which a stop at a voxel frontier is only considered depending on whether the material changes along the photon's travel line. Dose calculations using these methods are compared for validation with the Penelope and MCNP5 codes. Speed-up factors are compared using an NVidia GTX 560-Ti GPU card against a 2.27 GHz Intel Xeon CPU processor. (Author)
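The two transport approaches compared above can be contrasted in one dimension. The Woodcock (delta-tracking) sketch below samples path lengths with the maximum attenuation coefficient and accepts real collisions with probability mu(x)/mu_max, so no voxel-boundary bookkeeping is needed; the slab geometry and coefficients are invented, and the check is against the analytic transmission:

```python
import math, random

mu = [0.2, 1.5, 0.4, 1.0]      # per-voxel attenuation coefficients (1/cm), made up
w = 1.0                        # voxel width (cm); the slab is 4 cm thick
L = len(mu) * w
analytic = math.exp(-sum(m * w for m in mu))   # exact transmission probability

def transmit_woodcock(rng):
    """True if one photon crosses the slab without a real collision."""
    mu_max = max(mu)
    x = 0.0
    while True:
        # Tentative collision sampled with the majorant cross section
        x += -math.log(1.0 - rng.random()) / mu_max
        if x >= L:
            return True
        # Accept as a real collision with probability mu(x) / mu_max
        if rng.random() < mu[int(x // w)] / mu_max:
            return False

rng = random.Random(7)
n = 100_000
t_mc = sum(transmit_woodcock(rng) for _ in range(n)) / n
```

The rejected ("virtual") collisions replace the voxel-boundary stops of the first approach, which is why Woodcock tracking maps well onto GPUs: every photon runs the same short loop regardless of how many voxels it crosses.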
Reporting and analyzing statistical uncertainties in Monte Carlo-based treatment planning
International Nuclear Information System (INIS)
Chetty, Indrin J.; Rosu, Mihaela; Kessler, Marc L.; Fraass, Benedick A.; Haken, Randall K. ten; Kong, Feng-Ming; McShan, Daniel L.
2006-01-01
Purpose: To investigate methods of reporting and analyzing statistical uncertainties in doses to targets and normal tissues in Monte Carlo (MC)-based treatment planning. Methods and Materials: Methods for quantifying statistical uncertainties in dose, such as uncertainty specification to specific dose points, or to volume-based regions, were analyzed in MC-based treatment planning for 5 lung cancer patients. The effect of statistical uncertainties on target and normal tissue dose indices was evaluated. The concept of uncertainty volume histograms for targets and organs at risk was examined, along with its utility, in conjunction with dose volume histograms, in assessing the acceptability of the statistical precision in dose distributions. The uncertainty evaluation tools were extended to four-dimensional planning for application on multiple instances of the patient geometry. All calculations were performed using the Dose Planning Method MC code. Results: For targets, generalized equivalent uniform doses and mean target doses converged at 150 million simulated histories, corresponding to relative uncertainties of less than 2% in the mean target doses. For the normal lung tissue (a volume-effect organ), mean lung dose and normal tissue complication probability converged at 150 million histories despite the large range in the relative organ uncertainty volume histograms. For 'serial' normal tissues such as the spinal cord, large fluctuations exist in point dose relative uncertainties. Conclusions: The tools presented here provide useful means for evaluating statistical precision in MC-based dose distributions. Tradeoffs between uncertainties in doses to targets, volume-effect organs, and 'serial' normal tissues must be considered carefully in determining acceptable levels of statistical precision in MC-computed dose distributions
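The convergence-with-histories behaviour reported above reflects the usual 1/√N statistics of MC tallies: 100 times more histories gives roughly 10 times smaller relative uncertainty. A sketch with an arbitrary stand-in for the per-history dose distribution (not a transport result):

```python
import math, random

def rel_uncertainty(n_histories, seed=0):
    """Relative statistical uncertainty of the mean of a fake per-history dose tally."""
    rng = random.Random(seed)
    s = s2 = 0.0
    for _ in range(n_histories):
        d = rng.expovariate(1.0)          # stand-in "dose per history" sample
        s += d
        s2 += d * d
    mean = s / n_histories
    var_mean = (s2 / n_histories - mean * mean) / n_histories
    return math.sqrt(var_mean) / mean     # sigma of the mean, relative

r1 = rel_uncertainty(1_000)
r2 = rel_uncertainty(100_000)
```

This is the scaling that makes volume-averaged indices (mean doses, gEUD) converge far sooner than single-point doses, as the abstract observes for 'serial' structures.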
A GPU-based Monte Carlo dose calculation code for photon transport in a voxel phantom
International Nuclear Information System (INIS)
Bellezzo, M.; Do Nascimento, E.; Yoriyaz, H.
2014-08-01
As the most accurate method to estimate absorbed dose in radiotherapy, the Monte Carlo method has been widely used in radiotherapy treatment planning. Nevertheless, its efficiency can be improved for clinical routine applications. In this paper, we present the CUBMC code, a GPU-based MC photon transport algorithm for dose calculation under the Compute Unified Device Architecture platform. The simulation of physical events is based on the algorithm used in Penelope, and the cross section table used is the one generated by the Material routine, also present in the Penelope code. Photons are transported in voxel-based geometries with different compositions. To demonstrate the capabilities of the algorithm developed in the present work, four 128 x 128 x 128 voxel phantoms have been considered. One of them is composed of a homogeneous water-based medium, the second is composed of bone, the third is composed of lung and the fourth is a heterogeneous bone and vacuum geometry. Simulations were done considering a 6 MeV monoenergetic photon point source. Two distinct approaches were used for transport simulation. The first of them forces the photon to stop at every voxel frontier; the second is the Woodcock method, in which a stop at a voxel frontier is only considered depending on whether the material changes along the photon's travel line. Dose calculations using these methods are compared for validation with the Penelope and MCNP5 codes. Speed-up factors are compared using an NVidia GTX 560-Ti GPU card against a 2.27 GHz Intel Xeon CPU processor. (Author)
Abstract ID: 197 Monte Carlo simulations of X-ray grating interferometry based imaging systems.
Tessarini, Stefan; Fix, Michael K; Volken, Werner; Frei, Daniel; Stampanoni, Marco F M
2018-01-01
Over the last couple of years the implementation of Monte Carlo (MC) methods for grating-based imaging techniques has been of increasing interest. Several different approaches have been taken to include coherent effects in MC in order to simulate the radiation transport of the image-forming procedure. These include full MC using FLUKA [1], which, however, considers only monochromatic sources. Alternatively, ray-tracing-based MC [2] allows fast simulations with the limitation of providing only qualitative results, i.e. this technique is not suitable for dose calculation in the imaged object. Finally, hybrid models [3] were used allowing quantitative results in reasonable computation time; however, only two-dimensional implementations are available. Thus, this work aims to develop a full MC framework for X-ray grating interferometry imaging systems using polychromatic sources suitable for large-scale samples. For this purpose the EGSnrc C++ MC code system is extended to take Snell's law, the optical path length and Huygens' principle into account. Thereby the EGSnrc library was modified, e.g. the complex index of refraction has to be assigned to each region depending on the material. The framework is set up to be user-friendly and robust with respect to future updates of the EGSnrc package. These implementations have to be tested using dedicated academic situations. Next steps include validation by comparison of measurements for different setups with the corresponding MC simulations. Furthermore, the newly developed implementation will be compared with other simulation approaches. This framework will then serve as the basis for dose calculation on CT data and has further potential to investigate the image formation process in grating-based imaging systems. Copyright © 2017.
Absorbed dose calculations using mesh-based human phantoms and Monte Carlo methods
International Nuclear Information System (INIS)
Kramer, Richard
2010-01-01
Health risks attributable to ionizing radiation are considered to be a function of the absorbed dose to radiosensitive organs and tissues of the human body. However, as human tissue cannot express itself in terms of absorbed dose, exposure models have to be used to determine the distribution of absorbed dose throughout the human body. An exposure model, be it physical or virtual, consists of a representation of the human body, called a phantom, plus a method for transporting ionizing radiation through the phantom and measuring or calculating the absorbed dose to organs and tissues of interest. The Female Adult meSH (FASH) and the Male Adult meSH (MASH) virtual phantoms have been developed at the University of Pernambuco in Recife/Brazil based on polygon mesh surfaces using open source software tools. Representing standing adults, FASH and MASH have organ and tissue masses, body height and mass adjusted to the anatomical data published by the International Commission on Radiological Protection for the reference male and female adult. For the purposes of absorbed dose calculations the phantoms have been coupled to the EGSnrc Monte Carlo code, which transports photons, electrons and positrons through arbitrary media. This presentation reports on the development of the FASH and the MASH phantoms and will show dosimetric applications for X-ray diagnosis and for prostate brachytherapy. (author)
Absorbed Dose Calculations Using Mesh-based Human Phantoms And Monte Carlo Methods
International Nuclear Information System (INIS)
Kramer, Richard
2011-01-01
Health risks attributable to the exposure to ionizing radiation are considered to be a function of the absorbed or equivalent dose to radiosensitive organs and tissues. However, as human tissue cannot express itself in terms of equivalent dose, exposure models have to be used to determine the distribution of equivalent dose throughout the human body. An exposure model, be it physical or computational, consists of a representation of the human body, called phantom, plus a method for transporting ionizing radiation through the phantom and measuring or calculating the equivalent dose to organ and tissues of interest. The FASH2 (Female Adult meSH) and the MASH2 (Male Adult meSH) computational phantoms have been developed at the University of Pernambuco in Recife/Brazil based on polygon mesh surfaces using open source software tools and anatomical atlases. Representing standing adults, FASH2 and MASH2 have organ and tissue masses, body height and body mass adjusted to the anatomical data published by the International Commission on Radiological Protection for the reference male and female adult. For the purposes of absorbed dose calculations the phantoms have been coupled to the EGSnrc Monte Carlo code, which can transport photons, electrons and positrons through arbitrary media. This paper reviews the development of the FASH2 and the MASH2 phantoms and presents dosimetric applications for X-ray diagnosis and for prostate brachytherapy.
Absorbed Dose Calculations Using Mesh-based Human Phantoms And Monte Carlo Methods
Kramer, Richard
2011-08-01
Health risks attributable to the exposure to ionizing radiation are considered to be a function of the absorbed or equivalent dose to radiosensitive organs and tissues. However, as human tissue cannot express itself in terms of equivalent dose, exposure models have to be used to determine the distribution of equivalent dose throughout the human body. An exposure model, be it physical or computational, consists of a representation of the human body, called phantom, plus a method for transporting ionizing radiation through the phantom and measuring or calculating the equivalent dose to organ and tissues of interest. The FASH2 (Female Adult meSH) and the MASH2 (Male Adult meSH) computational phantoms have been developed at the University of Pernambuco in Recife/Brazil based on polygon mesh surfaces using open source software tools and anatomical atlases. Representing standing adults, FASH2 and MASH2 have organ and tissue masses, body height and body mass adjusted to the anatomical data published by the International Commission on Radiological Protection for the reference male and female adult. For the purposes of absorbed dose calculations the phantoms have been coupled to the EGSnrc Monte Carlo code, which can transport photons, electrons and positrons through arbitrary media. This paper reviews the development of the FASH2 and the MASH2 phantoms and presents dosimetric applications for X-ray diagnosis and for prostate brachytherapy.
Absorbed dose calculations using mesh-based human phantoms and Monte Carlo methods
Energy Technology Data Exchange (ETDEWEB)
Kramer, Richard [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil)
2010-07-01
Health risks attributable to ionizing radiation are considered to be a function of the absorbed dose to radiosensitive organs and tissues of the human body. However, as human tissue cannot express itself in terms of absorbed dose, exposure models have to be used to determine the distribution of absorbed dose throughout the human body. An exposure model, be it physical or virtual, consists of a representation of the human body, called a phantom, plus a method for transporting ionizing radiation through the phantom and measuring or calculating the absorbed dose to organs and tissues of interest. The Female Adult meSH (FASH) and the Male Adult meSH (MASH) virtual phantoms have been developed at the University of Pernambuco in Recife/Brazil based on polygon mesh surfaces using open source software tools. Representing standing adults, FASH and MASH have organ and tissue masses, body height and mass adjusted to the anatomical data published by the International Commission on Radiological Protection for the reference male and female adult. For the purposes of absorbed dose calculations the phantoms have been coupled to the EGSnrc Monte Carlo code, which transports photons, electrons and positrons through arbitrary media. This presentation reports on the development of the FASH and the MASH phantoms and will show dosimetric applications for X-ray diagnosis and for prostate brachytherapy. (author)
Optimization of Control Points Number at Coordinate Measurements based on the Monte-Carlo Method
Korolev, A. A.; Kochetkov, A. V.; Zakharov, O. V.
2018-01-01
Improving the quality of products raises the requirements for the accuracy of the dimensions and shape of the surfaces of workpieces. This, in turn, raises the requirements for the accuracy and productivity of workpiece measurement. Coordinate measuring machines are currently the most effective measuring tools for solving such problems. The article proposes a method for optimizing the number of control points using Monte Carlo simulation. Based on the measurement of a small sample from batches of workpieces, statistical modeling is performed, which allows one to obtain interval estimates of the measurement error. This approach is demonstrated by examples of applications for flatness, cylindricity and sphericity. Four options of uniform and uneven arrangement of control points are considered and their comparison is given. It is revealed that when the number of control points decreases, the arithmetic mean decreases, the standard deviation of the measurement error increases and the probability of a measurement α-error increases. In general, it has been established that it is possible to reduce the number of control points severalfold while maintaining the required measurement accuracy.
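The reported trends (a smaller mean and a larger spread of the estimate as control points are removed) can be reproduced with a toy Monte Carlo. The surface model below (independent Gaussian deviations at the probed points, flatness taken as peak-to-valley) is an invented stand-in for real form error:

```python
import random

def flatness_estimate(n_points, rng):
    """Peak-to-valley of surface deviations sampled at n control points (um)."""
    z = [rng.gauss(0.0, 1.0) for _ in range(n_points)]
    return max(z) - min(z)

def spread(n_points, n_trials, rng):
    """Mean and standard deviation of the flatness estimate over repeated trials."""
    vals = [flatness_estimate(n_points, rng) for _ in range(n_trials)]
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / (len(vals) - 1)
    return mean, var ** 0.5

rng = random.Random(3)
mean_few, sd_few = spread(5, 2000, rng)     # few control points
mean_many, sd_many = spread(50, 2000, rng)  # many control points
```

With fewer probed points the extreme deviations are more likely to be missed, so the flatness estimate is biased low and varies more from part to part, matching the article's finding.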
Monte Carlo efficiency calibration of a neutron generator-based total-body irradiator
International Nuclear Information System (INIS)
Shypailo, R.J.; Ellis, K.J.
2009-01-01
Many body composition measurement systems are calibrated against a single-sized reference phantom. Prompt-gamma neutron activation (PGNA) provides the only direct measure of total body nitrogen (TBN), an index of the body's lean tissue mass. In PGNA systems, body size influences neutron flux attenuation, induced gamma signal distribution, and counting efficiency. Thus, calibration based on a single-sized phantom could result in inaccurate TBN values. We used Monte Carlo simulations (MCNP-5; Los Alamos National Laboratory) in order to map a system's response to the range of body weights (65-160 kg) and body fat distributions (25-60%) in obese humans. Calibration curves were constructed to derive body-size correction factors relative to a standard reference phantom, providing customized adjustments to account for differences in body habitus of obese adults. The use of MCNP-generated calibration curves should allow for a better estimate of the true changes in lean tissue mass that may occur during intervention programs focused only on weight loss. (author)
dsmcFoam+: An OpenFOAM based direct simulation Monte Carlo solver
White, C.; Borg, M. K.; Scanlon, T. J.; Longshaw, S. M.; John, B.; Emerson, D. R.; Reese, J. M.
2018-03-01
dsmcFoam+ is a direct simulation Monte Carlo (DSMC) solver for rarefied gas dynamics, implemented within the OpenFOAM software framework, and parallelised with MPI. It is open-source and released under the GNU General Public License in a publicly available software repository that includes detailed documentation and tutorial DSMC gas flow cases. This release of the code includes many features not found in standard dsmcFoam, such as molecular vibrational and electronic energy modes, chemical reactions, and subsonic pressure boundary conditions. Since dsmcFoam+ is designed entirely within OpenFOAM's C++ object-oriented framework, it benefits from a number of key features: the code emphasises extensibility and flexibility so it is aimed first and foremost as a research tool for DSMC, allowing new models and test cases to be developed and tested rapidly. All DSMC cases are as straightforward as setting up any standard OpenFOAM case, as dsmcFoam+ relies upon the standard OpenFOAM dictionary based directory structure. This ensures that useful pre- and post-processing capabilities provided by OpenFOAM remain available even though the fully Lagrangian nature of a DSMC simulation is not typical of most OpenFOAM applications. We show that dsmcFoam+ compares well to other well-known DSMC codes and to analytical solutions in terms of benchmark results.
A comprehensive revisit of the ρ meson with improved Monte-Carlo based QCD sum rules
Wang, Qi-Nan; Zhang, Zhu-Feng; Steele, T. G.; Jin, Hong-Ying; Huang, Zhuo-Ran
2017-07-01
We improve the Monte-Carlo based QCD sum rules by introducing the rigorous Hölder-inequality-determined sum rule window and a Breit-Wigner type parametrization for the phenomenological spectral function. In this improved sum rule analysis methodology, the sum rule analysis window can be determined without any assumptions on OPE convergence or the QCD continuum. Therefore, an unbiased prediction can be obtained for the phenomenological parameters (the hadronic mass and width, etc.). We test the new approach in the ρ meson channel with re-examination and inclusion of α_s corrections to dimension-4 condensates in the OPE. We obtain results highly consistent with experimental values. We also discuss the possible extension of this method to some other channels. Supported by NSFC (11175153, 11205093, 11347020), the Open Foundation of the Most Important Subjects of Zhejiang Province, and the K. C. Wong Magna Fund in Ningbo University. T. G. Steele is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC). Z. F. Zhang and Z. R. Huang are grateful to the University of Saskatchewan for its warm hospitality.
Voxel-based Monte Carlo simulation of X-ray imaging and spectroscopy experiments
International Nuclear Information System (INIS)
Bottigli, U.; Brunetti, A.; Golosio, B.; Oliva, P.; Stumbo, S.; Vincze, L.; Randaccio, P.; Bleuet, P.; Simionovici, A.; Somogyi, A.
2004-01-01
A Monte Carlo code for the simulation of X-ray imaging and spectroscopy experiments in heterogeneous samples is presented. The energy spectrum, polarization and profile of the incident beam can be defined so that X-ray tube systems as well as synchrotron sources can be simulated. The sample is modeled as a 3D regular grid. The chemical composition and density are given at each point of the grid. Photoelectric absorption, fluorescent emission, elastic and inelastic scattering are included in the simulation. The core of the simulation is a fast routine for the calculation of the path lengths of the photon trajectory intersections with the grid voxels. The voxel representation is particularly useful for samples that cannot be well described by a small set of polyhedra. This is the case for most naturally occurring samples. In such cases, voxel-based simulations are much less expensive in terms of computational cost than simulations on a polygonal representation. The efficient scheme used for calculating the path lengths in the voxels and the use of variance reduction techniques make the code suitable for the detailed simulation of complex experiments on generic samples in a relatively short time. Examples of applications to X-ray imaging and spectroscopy experiments are discussed.
IVF cycle cost estimation using Activity Based Costing and Monte Carlo simulation.
Cassettari, Lucia; Mosca, Marco; Mosca, Roberto; Rolando, Fabio; Costa, Mauro; Pisaturo, Valerio
2016-03-01
The authors present a new methodological approach, in a stochastic regime, to determine the actual costs of a healthcare process. The paper specifically shows the application of the methodology to determine the cost of an assisted reproductive technology (ART) treatment in Italy. The motivation for this research is that a deterministic approach is inadequate for an accurate estimate of the cost of this particular treatment, because the durations of the different activities involved are not fixed but described by frequency distributions. Hence the need to determine, in addition to the mean value of the cost, the interval within which it is expected to vary with a known confidence level. The cost obtained for each type of cycle investigated (in vitro fertilization and embryo transfer with or without intracytoplasmic sperm injection) shows tolerance intervals around the mean value sufficiently narrow to make the data statistically robust and therefore usable as a reference for benchmarking with other countries. From a methodological point of view the approach is rigorous: Activity Based Costing was used to determine the cost of the individual activities of the process, and Monte Carlo simulation, with control of experimental error, was used to construct the tolerance intervals on the final result.
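A minimal sketch of combining Activity Based Costing with Monte Carlo simulation as described above (the activities, duration distributions, and cost rates below are invented for illustration; the paper's actual activity list is not given in the abstract):

```python
import random
import statistics

# Hypothetical process activities: (name, mean minutes, sd minutes, cost rate per minute).
ACTIVITIES = [
    ("consultation",    30.0,  5.0, 2.0),
    ("lab_procedure",   90.0, 20.0, 3.5),
    ("embryo_transfer", 45.0, 10.0, 4.0),
]

def simulate_cycle_cost(n_trials=20000, seed=7):
    """Activity Based Costing under stochastic durations: each trial draws a
    duration for every activity and sums duration * rate; the ensemble gives
    the mean cost and an approximate 95% interval."""
    rng = random.Random(seed)
    costs = []
    for _ in range(n_trials):
        total = 0.0
        for _name, mu, sd, rate in ACTIVITIES:
            duration = max(0.0, rng.gauss(mu, sd))  # durations cannot be negative
            total += duration * rate
        costs.append(total)
    costs.sort()
    mean = statistics.mean(costs)
    lo = costs[int(0.025 * n_trials)]   # empirical 2.5th percentile
    hi = costs[int(0.975 * n_trials)]   # empirical 97.5th percentile
    return mean, lo, hi

mean_cost, lo_cost, hi_cost = simulate_cycle_cost()
```

The interval (lo_cost, hi_cost) plays the role of the tolerance interval around the mean cost that the abstract describes.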
Learning Algorithm of Boltzmann Machine Based on Spatial Monte Carlo Integration Method
Directory of Open Access Journals (Sweden)
Muneki Yasuda
2018-04-01
The machine learning techniques for Markov random fields are fundamental in various fields involving pattern recognition, image processing, sparse modeling, and earth science, and the Boltzmann machine is one of the most important models in Markov random fields. However, the inference and learning problems in the Boltzmann machine are NP-hard. The investigation of an effective learning algorithm for the Boltzmann machine is one of the most important challenges in the field of statistical machine learning. In this paper, we study Boltzmann machine learning based on the (first-order) spatial Monte Carlo integration method, referred to as the 1-SMCI learning method, which was proposed in the author's previous paper. In the first part of this paper, we compare the method with the maximum pseudo-likelihood estimation (MPLE) method using theoretical and numerical approaches, and show that the 1-SMCI learning method is more effective than MPLE. In the latter part, we compare the 1-SMCI learning method with other effective methods, ratio matching and minimum probability flow, using a numerical experiment, and show that the 1-SMCI learning method outperforms them.
Limits on the Efficiency of Event-Based Algorithms for Monte Carlo Neutron Transport
Energy Technology Data Exchange (ETDEWEB)
Romano, Paul K.; Siegel, Andrew R.
2017-04-16
The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC were then used in conjunction with the models to calculate the speedup due to vectorization as a function of two parameters: the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than the vector width in order to achieve a vector efficiency greater than 90%. When the execution times for events are allowed to vary, however, the vector speedup is also limited by differences in execution time for events being carried out in a single event-iteration. For some problems, this implies that vector efficiencies over 50% may not be attainable. While there are many factors impacting the performance of an event-based algorithm that are not captured by our model, it nevertheless provides insights into factors that may be limiting in a real implementation.
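Under the constant-event-time assumption described in the abstract, the dependence of vector efficiency on bank size can be illustrated with a toy simulation (the fixed survival probability, the absence of bank refill, and all names are simplifying assumptions, not the authors' model):

```python
import math
import random

def vector_efficiency(bank_size, vector_width, survive_p=0.9, n_runs=20, seed=3):
    """Toy model of event-based Monte Carlo transport with constant event cost:
    every alive particle undergoes one event per iteration, surviving with
    probability survive_p; alive particles are packed into SIMD vectors of
    vector_width lanes. Efficiency = scalar work / (lanes * vector issues)."""
    rng = random.Random(seed)
    effs = []
    for _ in range(n_runs):
        alive = bank_size
        total_events = 0
        vector_issues = 0
        while alive > 0:
            total_events += alive
            vector_issues += math.ceil(alive / vector_width)
            # Binomial thinning of the bank after this event-iteration.
            alive = sum(1 for _ in range(alive) if rng.random() < survive_p)
        effs.append(total_events / (vector_width * vector_issues))
    return sum(effs) / len(effs)

e_small = vector_efficiency(64, 64)     # bank equal to the vector width
e_big = vector_efficiency(1280, 64)     # bank 20x the vector width
```

With these toy parameters a bank much larger than the vector width keeps most SIMD lanes occupied until the final die-off, mirroring the abstract's observation that efficiency grows with the ratio of bank size to vector width.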
Independent Monte-Carlo dose calculation for MLC based CyberKnife radiotherapy
Mackeprang, P.-H.; Vuong, D.; Volken, W.; Henzen, D.; Schmidhalter, D.; Malthaner, M.; Mueller, S.; Frei, D.; Stampanoni, M. F. M.; Dal Pra, A.; Aebersold, D. M.; Fix, M. K.; Manser, P.
2018-01-01
This work aims to develop, implement and validate a Monte Carlo (MC)-based independent dose calculation (IDC) framework to perform patient-specific quality assurance (QA) for multi-leaf collimator (MLC)-based CyberKnife® (Accuray Inc., Sunnyvale, CA) treatment plans. The IDC framework uses an XML-format treatment plan as exported from the treatment planning system (TPS) and DICOM-format patient CT data, an MC beam model using phase spaces, CyberKnife MLC beam modifier transport using the EGS++ class library, a beam sampling and coordinate transformation engine, and dose scoring using DOSXYZnrc. The framework is validated against dose profiles and depth dose curves of single beams with varying field sizes in a water tank in units of cGy/Monitor Unit and against a 2D dose distribution of a full prostate treatment plan measured with Gafchromic EBT3 (Ashland Advanced Materials, Bridgewater, NJ) film in a homogeneous water-equivalent slab phantom. The film measurement is compared to IDC results by gamma analysis using 2% (global)/2 mm criteria. Further, the dose distribution of the clinical treatment plan in the patient CT is compared to the TPS calculation by gamma analysis using the same criteria. Dose profiles from the IDC calculation in a homogeneous water phantom agree with measurements within 2.3% of the global maximum dose or 1 mm distance-to-agreement for all except the smallest field size. Comparing the film measurement to the calculated dose, 99.9% of all voxels pass gamma analysis; comparing dose calculated by the IDC framework to the TPS-calculated dose for the clinical prostate plan shows a 99.0% passing rate. IDC-calculated dose is found to be up to 5.6% lower than dose calculated by the TPS in this case near metal fiducial markers. An MC-based modular IDC framework was successfully developed, implemented and validated against measurements and is now available to perform patient-specific QA by IDC.
Chi, Yujie; Tian, Zhen; Jia, Xun
2016-08-07
Monte Carlo (MC) particle transport simulation on a graphics processing unit (GPU) platform has been extensively studied recently due to the efficiency advantage achieved via massive parallelization. Almost all of the existing GPU-based MC packages were developed for voxelized geometry, which limits the application scope of these packages. The purpose of this paper is to develop a module to model parametric geometry and integrate it into GPU-based MC simulations. In our module, each continuous region is defined by its bounding surfaces, which are parameterized by quadratic functions. Particle navigation functions in this geometry were developed. The module was incorporated into two previously developed GPU-based MC packages and was tested in two example problems: (1) low-energy photon transport simulation in a brachytherapy case with a shielded cylinder applicator and (2) MeV coupled photon/electron transport simulation in a phantom containing several inserts of different shapes. In both cases, the calculated dose distributions agreed well with those calculated in the corresponding voxelized geometry. The averaged dose differences were 1.03% and 0.29%, respectively. We also used the developed package to perform simulations of a Varian VS 2000 brachytherapy source and generated a phase-space file. The computation time under the parameterized geometry depended on the memory location storing the geometry data. When the data were stored in the GPU's shared memory, the highest computational speed was achieved. Incorporation of parameterized geometry yielded a computation time ~3 times that of the corresponding voxelized geometry. We also developed a strategy to use an auxiliary index array to reduce the frequency of geometry calculations and hence improve efficiency. With this strategy, the computational time ranged from 1.75 to 2.03 times that of the voxelized geometry for coupled photon/electron transport depending on the voxel dimension of the auxiliary index array, and in 0
Unfolding an under-determined neutron spectrum using genetic algorithm based Monte Carlo
International Nuclear Information System (INIS)
Suman, V.; Sarkar, P.K.
2011-01-01
Spallation, in addition to the other photon-neutron reactions in target materials and accelerator components, may produce a huge number of energetic protons, which in turn produce neutrons that contribute the main component of the total dose. For dosimetric purposes in accelerator facilities, detector measurements do not directly provide the actual neutron flux values but a cumulative picture. To obtain the neutron spectrum from the measured data, the response functions of the measuring instrument together with the measurements are used in unfolding techniques, which are frequently employed to recover the hidden spectral information. Here we discuss a genetic algorithm based unfolding technique which is under development. A genetic algorithm is a stochastic method based on natural selection, mimicking the Darwinian principle of survival of the fittest. The method has been tested by unfolding the neutron spectra obtained from a reaction carried out at an accelerator facility, with energetic carbon ions on a thick silver target, along with the corresponding neutron response of a BC501A liquid scintillation detector. The problem dealt with here is under-determined, where the number of measurements is less than the required energy bin information. The results obtained were compared with those from the established unfolding code FERDOR, which unfolds data for completely determined problems. The genetic algorithm based solution shows a reasonable match with the results of FERDOR when the smoothing carried out by Monte Carlo is taken into consideration. This method appears to be a promising candidate for unfolding neutron spectra in both under-determined cases and over-determined cases, where measurements are more numerous. The method also has the advantages of flexibility, computational simplicity, and working well without the need for any initial guess spectrum. (author)
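A minimal genetic-algorithm unfolding sketch in the spirit of the abstract (the selection/crossover/mutation choices and the tiny 2-measurement, 3-bin under-determined example are illustrative assumptions, not the authors' implementation):

```python
import random

def ga_unfold(response, measured, n_bins, pop_size=60, n_gen=300, seed=5):
    """Find a non-negative spectrum phi minimizing the misfit between
    R*phi and the measurements, using a simple genetic algorithm.
    response: list of rows (one per measurement), each of length n_bins."""
    rng = random.Random(seed)

    def misfit(phi):
        err = 0.0
        for row, m in zip(response, measured):
            pred = sum(r * p for r, p in zip(row, phi))
            err += (pred - m) ** 2
        return err

    # Random non-negative initial population of candidate spectra.
    pop = [[rng.random() for _ in range(n_bins)] for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=misfit)
        survivors = pop[: pop_size // 2]          # selection: keep the fittest half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_bins)        # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.7:                # mutation, clipped to stay >= 0
                i = rng.randrange(n_bins)
                child[i] = max(0.0, child[i] + rng.gauss(0.0, 0.05))
            children.append(child)
        pop = survivors + children
    best = min(pop, key=misfit)
    return best, misfit(best)

# Under-determined toy problem: 2 measurements, 3 energy bins.
R = [[1.0, 1.0, 0.0],
     [0.0, 1.0, 1.0]]
m = [0.7, 0.8]
phi, err = ga_unfold(R, m, n_bins=3)
```

Note that no initial guess spectrum is needed: the population starts from random non-negative candidates, matching the advantage claimed in the abstract.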
Liu, Jianqiao; Lu, Yiting; Cui, Xiao; Jin, Guohua; Zhai, Zhaoxia
2016-03-01
The effects of depletion layer width on semiconductor gas sensors were investigated based on the gradient-distributed oxygen vacancy model, which provides numerical descriptions of the sensor properties. The potential barrier height, sensor resistance, and response to target gases were simulated to reveal their dependences on the depletion layer width. According to the simulation, it is possible under certain circumstances to improve the sensor response by enlarging the width of the depletion layer without changing the resistance of the gas sensor. The different behaviours of resistance and response suggest that the design and fabrication of gas-sensing devices could be made more economical. The simulation results were validated by the experimental performance of SnO2 thin film gas sensors prepared by the sol-gel technique. The dependences of the sensor properties on depletion layer width were observed to be in agreement with the simulations.
Wang, T.; Barbero, M.; Berdalovic, I.; Bespin, C.; Bhat, S.; Breugnon, P.; Caicedo, I.; Cardella, R.; Chen, Z.; Degerli, Y.; Egidos, N.; Godiot, S.; Guilloux, F.; Hemperek, T.; Hirono, T.
2017-01-01
Depleted monolithic active pixel sensors (DMAPS), which exploit high voltage and/or high resistivity add-ons of modern CMOS technologies to achieve substantial depletion in the sensing volume, have proven to have high radiation tolerance towards the requirements of ATLAS in the high-luminosity LHC era. Depleted fully monolithic CMOS pixels with fast readout architectures are currently being developed as promising candidates for the outer pixel layers of the future ATLAS Inner Tracker, which w...
International Nuclear Information System (INIS)
Weathers, J.B.; Luck, R.; Weathers, J.W.
2009-01-01
The complexity of mathematical models used by practicing engineers is increasing due to the growing availability of sophisticated mathematical modeling tools and ever-improving computational power. For this reason, the need to define a well-structured process for validating these models against experimental results has become a pressing issue in the engineering community. This validation process is partially characterized by the uncertainties associated with the modeling effort as well as the experimental results. The net impact of the uncertainties on the validation effort is assessed through the 'noise level of the validation procedure', which can be defined as an estimate of the 95% confidence uncertainty bounds for the comparison error between actual experimental results and model-based predictions of the same quantities of interest. Although general descriptions associated with the construction of the noise level using multivariate statistics exist in the literature, a detailed procedure outlining how to account for the systematic and random uncertainties is not available. In this paper, the methodology used to derive the covariance matrix associated with the multivariate normal pdf based on random and systematic uncertainties is examined, and a procedure used to estimate this covariance matrix using Monte Carlo analysis is presented. The covariance matrices are then used to construct approximate 95% confidence constant probability contours associated with comparison error results for a practical example. In addition, the example is used to show the drawbacks of using a first-order sensitivity analysis when nonlinear local sensitivity coefficients exist. Finally, the example is used to show the connection between the noise level of the validation exercise calculated using multivariate and univariate statistics.
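The Monte Carlo estimation of the comparison-error covariance matrix can be sketched as follows (the split into one shared systematic draw plus independent random draws, and all parameter values, are illustrative assumptions, not the paper's procedure):

```python
import random

def comparison_error_cov(n_samples=50000, sys_sd=(0.5, 0.5), rand_sd=(0.3, 0.4),
                         seed=11):
    """Monte Carlo estimate of the covariance matrix of the comparison errors
    (E1, E2) for two quantities of interest: a shared systematic error term
    correlates the two errors, while the random errors are independent."""
    rng = random.Random(seed)
    e1s, e2s = [], []
    for _ in range(n_samples):
        shared = rng.gauss(0.0, 1.0)                      # common systematic draw
        e1s.append(sys_sd[0] * shared + rng.gauss(0.0, rand_sd[0]))
        e2s.append(sys_sd[1] * shared + rng.gauss(0.0, rand_sd[1]))
    n = n_samples
    m1 = sum(e1s) / n
    m2 = sum(e2s) / n
    var1 = sum((x - m1) ** 2 for x in e1s) / (n - 1)
    var2 = sum((x - m2) ** 2 for x in e2s) / (n - 1)
    cov12 = sum((x - m1) * (y - m2) for x, y in zip(e1s, e2s)) / (n - 1)
    return [[var1, cov12], [cov12, var2]]

cov = comparison_error_cov()
```

The off-diagonal term captures exactly the systematic-uncertainty correlation that a univariate treatment would miss; the 95% constant-probability contour then follows from this matrix.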
Directory of Open Access Journals (Sweden)
Vahid Moslemi
2011-03-01
Introduction: In brachytherapy, radioactive sources are placed close to the tumor; therefore, small changes in their positions can cause large changes in the dose distribution. This emphasizes the need for computerized treatment planning. The usual method for treatment planning of cervix brachytherapy uses conventional radiographs in the Manchester system. Nowadays, because of their advantages in locating the source positions and the surrounding tissues, CT and MRI images are replacing conventional radiographs. In this study, we used CT images in Monte Carlo based dose calculation for brachytherapy treatment planning, using interface software to create the geometry file required by the MCNP code. The aim of using the interface software is to facilitate and speed up the geometry set-up for simulations based on the patient's anatomy. This paper examines the feasibility of this method in cervix brachytherapy and assesses its accuracy and speed. Material and Methods: For dosimetric measurements regarding the treatment plan, a pelvic phantom was made from polyethylene in which the treatment applicators could be placed. For simulations using CT images, the phantom was scanned at 120 kVp. Using interface software written in MATLAB, the CT images were converted into an MCNP input file and the simulation was then performed. Results: Using the interface software, the preparation time for the simulations of the applicator and surrounding structures was approximately 3 minutes; the corresponding time needed with conventional MCNP geometry entry was approximately 1 hour. The discrepancy between the simulated and measured doses to point A was 1.7% of the prescribed dose. The corresponding dose differences between the two methods in the rectum and bladder were 3.0% and 3.7% of the prescribed dose, respectively. Comparing the results of simulation using the interface software with those of simulation using the standard MCNP geometry entry showed a less than 1
Monte-Carlo Simulation for PDC-Based Optical CDMA System
Directory of Open Access Journals (Sweden)
FAHIM AZIZ UMRANI
2010-10-01
This paper presents a Monte-Carlo simulation of Optical CDMA (Code Division Multiple Access) systems and analyses their performance in terms of the BER (Bit Error Rate). The spreading sequence chosen for CDMA is Perfect Difference Codes. Furthermore, this paper derives the expressions for the noise variances from first principles to calibrate the noise for both bipolar (electrical domain) and unipolar (optical domain) signalling required for Monte-Carlo simulation. The simulated results conform to the theory and show that receiver gain mismatch and splitter loss at the transceiver degrade the system performance.
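The general shape of such a Monte Carlo BER calibration can be sketched for the simplest bipolar case (this is textbook antipodal signalling in additive white Gaussian noise, not the paper's Perfect Difference Code system):

```python
import math
import random

def ber_monte_carlo(snr_db, n_bits=200000, seed=2):
    """Monte Carlo BER estimate for bipolar (antipodal) signalling in AWGN."""
    rng = random.Random(seed)
    ebn0 = 10.0 ** (snr_db / 10.0)
    sigma = math.sqrt(1.0 / (2.0 * ebn0))   # noise sd for unit-energy bits
    errors = 0
    for _ in range(n_bits):
        bit = rng.choice((-1.0, 1.0))
        received = bit + rng.gauss(0.0, sigma)
        if (received >= 0.0) != (bit > 0.0):   # hard-decision threshold at zero
            errors += 1
    return errors / n_bits

def ber_theory(snr_db):
    """Theoretical BER for antipodal signalling: Q(sqrt(2*Eb/N0))."""
    ebn0 = 10.0 ** (snr_db / 10.0)
    return 0.5 * math.erfc(math.sqrt(ebn0))

sim = ber_monte_carlo(4.0)
th = ber_theory(4.0)
```

As in the paper, the check is that the simulated BER tracks the closed-form curve; impairments such as gain mismatch would then be added to the channel model.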
Directory of Open Access Journals (Sweden)
Joko Siswantoro
2014-11-01
Volume is one of the important issues in the production and processing of food products. Traditionally, volume measurement can be performed using the water displacement method based on Archimedes' principle, but this method is inaccurate and destructive. Computer vision offers an accurate and nondestructive way of measuring the volume of food products. This paper proposes an algorithm for volume measurement of irregularly shaped food products using computer vision based on the Monte Carlo method. Five images of the object were acquired from five different views and then processed to obtain the silhouettes of the object. From the silhouettes, the Monte Carlo method was applied to approximate the volume of the object. The simulation results show that the algorithm produces high accuracy and precision for volume measurement.
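The silhouette-based Monte Carlo volume estimate can be sketched as follows (three orthogonal views instead of the paper's five, and a unit sphere as test object, are simplifying assumptions; note that silhouette intersection reconstructs the visual hull, which for a sphere under three orthogonal views is the Steinmetz tricylinder of volume 8(2 − √2) ≈ 4.686 rather than the sphere volume 4π/3 ≈ 4.189):

```python
import random

def volume_from_silhouettes(inside_silhouette, bounds, n_samples=200000, seed=4):
    """Monte Carlo volume estimate: sample points uniformly in the bounding
    box and count those whose projection lies inside every silhouette.
    inside_silhouette(view, u, v) -> bool for each 2D projection."""
    rng = random.Random(seed)
    (x0, x1), (y0, y1), (z0, z1) = bounds
    box_volume = (x1 - x0) * (y1 - y0) * (z1 - z0)
    hits = 0
    for _ in range(n_samples):
        x = rng.uniform(x0, x1)
        y = rng.uniform(y0, y1)
        z = rng.uniform(z0, z1)
        # Keep the point only if every silhouette contains its projection.
        if (inside_silhouette(0, x, y) and
                inside_silhouette(1, x, z) and
                inside_silhouette(2, y, z)):
            hits += 1
    return box_volume * hits / n_samples

# Unit sphere test object: all three orthogonal silhouettes are unit disks.
def sphere_silhouette(view, u, v):
    return u * u + v * v <= 1.0

vol = volume_from_silhouettes(sphere_silhouette, ((-1, 1), (-1, 1), (-1, 1)))
```

Adding more views (as the paper does with five) shrinks the visual hull toward the true object and reduces the overestimate.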
Li, Yongbao; Tian, Zhen; Shi, Feng; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun
2015-04-07
Intensity-modulated radiation treatment (IMRT) plan optimization needs beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computational speed. However, inaccurate beamlet dose distributions may mislead the optimization process and hinder the resulting plan quality. To solve this problem, the Monte Carlo (MC) simulation method has been used to compute all beamlet doses prior to the optimization step. The conventional approach samples the same number of particles from each beamlet, yet this is not the optimal use of MC in this problem. In fact, there are beamlets that have very small intensities after solving the plan optimization problem. For those beamlets, it may be possible to use fewer particles in dose calculations to increase efficiency. Based on this idea, we have developed a new MC-based IMRT plan optimization framework that iteratively performs MC dose calculation and plan optimization. At each dose calculation step, the particle numbers for beamlets were adjusted based on the beamlet intensities obtained by solving the plan optimization problem in the previous iteration step. We modified a GPU-based MC dose engine to allow simultaneous computation of a large number of beamlet doses. To test the accuracy of our modified dose engine, we compared the dose from a broad beam and the summed beamlet doses in this beam in an inhomogeneous phantom. Agreement within 1% for the maximum difference and 0.55% for the average difference was observed. We then validated the proposed MC-based optimization scheme in one lung IMRT case. It was found that the conventional scheme required 10^6 particles from each beamlet to achieve an optimization result within 3% difference in fluence map and 1% difference in dose of the ground truth. In contrast, the proposed scheme achieved the same level of accuracy with on average 1.2 × 10^5 particles per beamlet. Correspondingly, the computation
Zhang, Dezhong; Liu, Chunyu; Xu, Ruiliang; Yin, Bo; Chen, Yu; Zhang, Xindong; Gao, Fengli; Ruan, Shengping
2017-09-01
A novel dark self-depleting ultraviolet (UV) photodetector based on a TiO2/NiO pn heterojunction was demonstrated and exhibited lower dark current (I_dark) and noise. Both the NiO layer and the Ni/Au composite electrode were fabricated by a smart, one-step oxidation method employed here for the first time in the fabrication of a UV photodetector. In the dark, the depleted pn heterojunction structure effectively reduced the majority carrier density in the TiO2/NiO films, demonstrating a high resistance state and contributing to a lower I_dark of 0.033 nA, two orders of magnitude lower than that of the single-material devices. Under UV illumination, the interface self-depleting effect arising from the dissociation and accumulation of photogenerated carriers was eliminated, ensuring loss-free responsivity (R) and a remarkable specific detectivity (D*) of 1.56 × 10^14 cm Hz^(1/2) W^(-1) for the optimal device. A device with the structure ITO/TiO2/NiO/Au was measured to prove the mechanisms of interface self-depletion in the dark and elimination of the depletion layer under UV illumination. Meanwhile, a shortened decay time was achieved in the pn heterojunction UV photodetector. This suggests that self-depleting devices possess the potential to further enhance photodetection performance.
Optimization of FIBMOS Through 2D Silvaco ATLAS and 2D Monte Carlo Particle-based Device Simulations
Kang, J.; He, X.; Vasileska, D.; Schroder, D. K.
2001-01-01
Focused Ion Beam MOSFETs (FIBMOS) demonstrate large enhancements in core device performance areas such as output resistance, hot electron reliability and voltage stability upon channel length or drain voltage variation. In this work, we describe an optimization technique for FIBMOS threshold voltage characterization using the 2D Silvaco ATLAS simulator. Both ATLAS and 2D Monte Carlo particle-based simulations were used to show that FIBMOS devices exhibit enhanced current drive ...
Test Population Selection from Weibull-Based, Monte Carlo Simulations of Fatigue Life
Vlcek, Brian L.; Zaretsky, Erwin V.; Hendricks, Robert C.
2012-01-01
Fatigue life is probabilistic, not deterministic. Experimentally establishing the fatigue life of materials, components, and systems is both time consuming and costly. As a result, conclusions regarding fatigue life are often inferred from a statistically insufficient number of physical tests. A proposed methodology for comparing life results as a function of variability due to Weibull parameters, variability between successive trials, and variability due to the size of the experimental population is presented. Using Monte Carlo simulation of randomly selected lives from a large Weibull distribution, the variation in the L10 fatigue life of aluminum alloy AL6061 rotating rod fatigue tests was determined as a function of population size. These results were compared to the L10 fatigue lives of small (10 each) populations from AL2024, AL7075 and AL6061. For aluminum alloy AL6061, a simple algebraic relationship was established for the upper and lower L10 fatigue life limits as a function of the number of specimens failed. For most engineering applications where less than 30 percent variability can be tolerated in the maximum and minimum values, at least 30 to 35 test samples are necessary. The variability of test results based on small sample sizes can be greater than any actual differences that exist between materials and can result in erroneous conclusions. The fatigue life of AL2024 is statistically longer than that of AL6061 and AL7075. However, there is no statistical difference between the fatigue lives of AL6061 and AL7075, even though AL7075 had a fatigue life 30 percent greater than AL6061.
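The core of such a study, drawing repeated test populations from a Weibull parent distribution and observing the scatter of the L10 estimate, can be sketched as follows (the shape/scale parameters and the simple empirical-percentile L10 definition are illustrative assumptions, not the paper's values):

```python
import random

def l10_life(sample):
    """Empirical L10 life: the life below which 10% of the sample has failed."""
    s = sorted(sample)
    idx = max(0, int(0.10 * len(s)) - 1)
    return s[idx]

def l10_spread(population_size, shape=2.0, scale=100.0, n_trials=2000, seed=9):
    """Draw repeated test populations from a Weibull parent distribution and
    report the min/max of the L10 estimates across the trials."""
    rng = random.Random(seed)
    l10s = [l10_life([rng.weibullvariate(scale, shape)
                      for _ in range(population_size)])
            for _ in range(n_trials)]
    return min(l10s), max(l10s)

lo_small, hi_small = l10_spread(10)    # small test population
lo_large, hi_large = l10_spread(200)   # large test population
```

The widening of the (min, max) band for small populations is the mechanism behind the abstract's warning that small-sample scatter can exceed real material differences.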
International Nuclear Information System (INIS)
Fonseca, Gabriel Paiva; Yoriyaz, Hélio; Landry, Guillaume; White, Shane; Reniers, Brigitte; Verhaegen, Frank; D’Amours, Michel; Beaulieu, Luc
2014-01-01
Accounting for brachytherapy applicator attenuation is part of the recommendations from the recent report of AAPM Task Group 186. To do so, model based dose calculation algorithms require accurate modelling of the applicator geometry. This can be non-trivial in the case of irregularly shaped applicators such as the Fletcher Williamson gynaecological applicator or balloon applicators with possibly irregular shapes employed in accelerated partial breast irradiation (APBI) performed using electronic brachytherapy sources (EBS). While many of these applicators can be modelled using constructive solid geometry (CSG), the latter may be difficult and time-consuming. Alternatively, these complex geometries can be modelled using tessellated geometries such as tetrahedral meshes (mesh geometries (MG)). Recent versions of Monte Carlo (MC) codes Geant4 and MCNP6 allow for the use of MG. The goal of this work was to model a series of applicators relevant to brachytherapy using MG. Applicators designed for 192 Ir sources and 50 kV EBS were studied; a shielded vaginal applicator, a shielded Fletcher Williamson applicator and an APBI balloon applicator. All applicators were modelled in Geant4 and MCNP6 using MG and CSG for dose calculations. CSG derived dose distributions were considered as reference and used to validate MG models by comparing dose distribution ratios. In general agreement within 1% for the dose calculations was observed for all applicators between MG and CSG and between codes when considering volumes inside the 25% isodose surface. When compared to CSG, MG required longer computation times by a factor of at least 2 for MC simulations using the same code. MCNP6 calculation times were more than ten times shorter than Geant4 in some cases. In conclusion we presented methods allowing for high fidelity modelling with results equivalent to CSG. To the best of our knowledge MG offers the most accurate representation of an irregular APBI balloon applicator. (paper)
Monte Carlo based protocol for cell survival and tumour control probability in BNCT.
Ye, S J
1999-02-01
A mathematical model to calculate the theoretical cell survival probability (nominally, the cell survival fraction) is developed to evaluate preclinical treatment conditions for boron neutron capture therapy (BNCT). A treatment condition is characterized by the neutron beam spectra, single or bilateral exposure, and the choice of boron carrier drug (boronophenylalanine (BPA) or boron sulfhydryl hydride (BSH)). The cell survival probability defined from Poisson statistics is expressed with the cell-killing yield, the 10B(n,alpha)7Li reaction density, and the tolerable neutron fluence. The radiation transport calculation from the neutron source to tumours is carried out using Monte Carlo methods: (i) reactor-based BNCT facility modelling to yield the neutron beam library at an irradiation port; (ii) dosimetry to limit the neutron fluence below a tolerance dose (10.5 Gy-Eq); (iii) calculation of the 10B(n,alpha)7Li reaction density in tumours. A shallow surface tumour could be effectively treated by single exposure producing an average cell survival probability of 10(-3)-10(-5) for probable ranges of the cell-killing yield for the two drugs, while a deep tumour will require bilateral exposure to achieve comparable cell kills at depth. With very pure epithermal beams eliminating thermal, low epithermal and fast neutrons, the cell survival can be decreased by factors of 2-10 compared with the unmodified neutron spectrum. A dominant effect of cell-killing yield on tumour cell survival demonstrates the importance of choice of boron carrier drug. However, these calculations do not indicate an unambiguous preference for one drug, due to the large overlap of tumour cell survival in the probable ranges of the cell-killing yield for the two drugs. The cell survival value averaged over a bulky tumour volume is used to predict the overall BNCT therapeutic efficacy, using a simple model of tumour control probability (TCP).
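The Poisson survival model named in the abstract can be written down directly (the factorization of the mean number of lethal events into yield × reaction density × cell volume is a schematic reading of the abstract, and the units are hypothetical):

```python
import math

def cell_survival(killing_yield, reaction_density, cell_volume):
    """Poisson cell-survival sketch: if lethal events per cell follow a
    Poisson law with mean m = killing_yield * reaction_density * cell_volume,
    the survival probability is P(no lethal event) = exp(-m)."""
    mean_lethal_events = killing_yield * reaction_density * cell_volume
    return math.exp(-mean_lethal_events)
```

This makes the abstract's point about the cell-killing yield explicit: survival depends exponentially on the yield, so the choice of boron carrier drug dominates the predicted cell kill.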
Monte Carlo N-particle simulation of neutron-based sterilisation of anthrax contamination.
Liu, B; Xu, J; Liu, T; Ouyang, X
2012-10-01
To simulate the neutron-based sterilisation of anthrax contamination with the Monte Carlo N-Particle (MCNP) 4C code. Neutrons are elementary particles that have no charge. They are 20 times more effective than electrons or γ-rays in killing anthrax spores on surfaces and inside closed containers. Neutrons emitted from a ²⁵²Cf neutron source are in the 100 keV to 2 MeV energy range. A 2.5 MeV D-D neutron generator can create neutrons at up to 10¹³ n s⁻¹ with current technology. All these enable an effective and low-cost method of killing anthrax spores. There is no effect on neutron energy deposition in the anthrax sample when using a reflector that is thicker than its saturation thickness. Among the three reflecting materials tested in the MCNP simulation, paraffin is the best because it has the thinnest saturation thickness and is easy to machine. The MCNP radiation dose and fluence simulations also showed that the MCNP-simulated neutron fluence needed to kill the anthrax spores agrees very well with previous analytical estimates. The MCNP simulation indicates that a 10 min neutron irradiation from a 0.5 g ²⁵²Cf neutron source or a 1 min neutron irradiation from a 2.5 MeV D-D neutron generator may kill all anthrax spores in a sample. This is a promising result because a 2.5 MeV D-D neutron generator output >10¹³ n s⁻¹ should be attainable in the near future. This indicates that a D-D neutron generator could be used to sterilise anthrax contamination within several seconds.
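The order-of-magnitude reasoning behind such irradiation-time estimates can be illustrated with a bare point-source calculation. The fluence target, source output and distance below are invented for illustration, not values from the study, and scatter and reflectors are ignored:

```python
import math

def irradiation_time(target_fluence, source_output, distance_cm):
    """Seconds needed for an isotropic point source emitting
    `source_output` neutrons per second to deliver `target_fluence`
    (n cm^-2) at `distance_cm`, with no scatter or reflector."""
    flux = source_output / (4.0 * math.pi * distance_cm ** 2)  # n cm^-2 s^-1
    return target_fluence / flux

# Illustrative: a 10^13 n/s generator delivering 10^12 n/cm^2 at 10 cm.
t = irradiation_time(1.0e12, 1.0e13, 10.0)
```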
Hogendoorn, Carmen; Roszczenko-Jasińska, Paula; Martinez-Gomez, N Cecilia; de Graaff, Johann; Grassl, Patrick; Pol, Arjan; Op den Camp, Huub J M; Daumann, Lena J
2018-02-16
Recently, methanotrophic and methylotrophic bacteria were found to utilize rare earth elements (REE). To monitor the REE content in culture media of these bacteria, we have developed a rapid screening method using the Arsenazo III (AS III) dye for spectrophotometric REE detection in the low μM (0.1-10 μM) range. We designed this assay to follow La(III) and Eu(III) depletion from the culture medium by the acidophilic verrucomicrobial methanotroph Methylacidiphilum fumariolicum SolV. The assay can also be modified to screen the uptake of other REE such as Pr(III), or to monitor the depletion of La(III) from growth media in neutrophilic methylotrophs such as Methylobacterium extorquens AM1. The AS III assay presents a convenient and fast detection method for REE levels in culture media and is a sensitive alternative to inductively coupled plasma mass spectrometry (ICP-MS) or atomic absorption spectroscopy (AAS). Importance: REE-dependent bacterial metabolism is a quickly emerging field, and while the importance of REE for both methanotrophic and methylotrophic bacteria is now firmly established, many important questions, such as how these insoluble elements are taken up into cells, are still unanswered. Here, an Arsenazo III dye based assay has been developed for fast, specific and sensitive determination of the REE content in different culture media. This assay presents a useful tool for optimizing cultivation protocols as well as for routine REE monitoring during bacterial growth, without the need for specialized analytical instrumentation. Furthermore, this assay has the potential to promote the discovery of other REE-dependent microorganisms and can help to elucidate the mechanisms of REE acquisition by methanotrophic and methylotrophic bacteria. Copyright © 2018 Hogendoorn et al.
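A spectrophotometric assay of this kind is typically read against a linear calibration curve. The sketch below fits hypothetical absorbance standards by least squares and inverts the fit for an unknown sample; all numbers are invented for illustration:

```python
def fit_line(x, y):
    """Ordinary least-squares fit of y = m*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    m = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return m, my - m * mx

# Hypothetical calibration standards: REE concentration (uM) vs absorbance.
conc = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]
absorbance = [0.01, 0.21, 0.41, 0.61, 0.81, 1.01]

m, b = fit_line(conc, absorbance)
unknown_uM = (0.51 - b) / m  # invert the calibration for a sample reading
```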
International Nuclear Information System (INIS)
Han Jingru; Chen Yixue; Yuan Longjun
2013-01-01
The Monte Carlo (MC) and discrete ordinates (SN) methods are commonly used in the design of radiation shielding. The Monte Carlo method treats the geometry exactly, but is time-consuming for deep-penetration problems. The discrete ordinates method has great computational efficiency, but it is costly in computer memory and suffers from ray effects. Neither the discrete ordinates method nor the Monte Carlo method alone is adequate for shielding calculations of large, complex nuclear facilities. To solve this problem, a bidirectional Monte Carlo/discrete ordinates coupling method is developed. The bidirectional coupling is implemented in an interface program that transfers the particle probability distribution of MC and the angular flux of the discrete ordinates method. The coupling method combines the advantages of MC and SN. Test problems in Cartesian and cylindrical coordinates have been calculated with the coupling method. The results are compared with MCNP and TORT, and satisfactory agreement is obtained, demonstrating the correctness of the program. (authors)
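The interface quantity handed from the Monte Carlo stage to the discrete-ordinates stage is, in essence, a binned particle distribution on a coupling surface. A toy sketch of that tally step (with deliberately simplified, non-physical angular sampling) might look like:

```python
import random

random.seed(1)

def mc_boundary_distribution(n_histories, n_bins):
    """Toy MC stage of a coupled MC/SN scheme: tally direction cosines of
    particles crossing a coupling surface into a binned probability
    distribution that a discrete-ordinates stage could consume as a
    boundary source. The uniform cosine sampling is a placeholder."""
    counts = [0] * n_bins
    for _ in range(n_histories):
        mu = random.random()  # placeholder sampling of an outgoing cosine
        counts[min(int(mu * n_bins), n_bins - 1)] += 1
    return [c / n_histories for c in counts]

pdf = mc_boundary_distribution(100_000, 4)
```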
Monte Carlo capabilities of the SCALE code system
International Nuclear Information System (INIS)
Rearden, B.T.; Petrie, L.M.; Peplow, D.E.; Bekar, K.B.; Wiarda, D.; Celik, C.; Perfetti, C.M.; Ibrahim, A.M.; Dunn, M.E.; Hart, S.W.D.
2013-01-01
SCALE is a widely used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a 'plug-and-play' framework that includes three deterministic and three Monte Carlo radiation transport solvers (KENO, MAVRIC, TSUNAMI) that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE's graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2, to be released in 2014, will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. An overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2. (authors)
Study on adaptive BTT reentry speed depletion guidance law based on BP neural network
Zheng, Zongzhun; Wang, Yongji; Wu, Hao
2007-11-01
Reentry guidance is one of the key technologies in the hypersonic vehicle research field. In addition to the constraints on its final position coordinates, the vehicle must also impact the target from a specified direction with high precision. The adaptability of the guidance law is therefore critical for properly controlling the velocity of a hypersonic vehicle and its impact accuracy under different conditions across a large airspace. In this paper, a new adaptive guidance strategy based on a Back Propagation (BP) neural network is presented for the reentry mission of a generic hypersonic vehicle. Exploiting the self-learning ability of the BP neural network, the guidance law accounts for the influence of large aerodynamic modeling errors, structural errors and other initial disturbances on the flight capability of the vehicle. Consequently, terminal position accuracy and velocity are guaranteed while many constraints are satisfied. Numerical simulation results clearly show that the proposed BP-neural-network-based reentry guidance law is rational and effective.
International Nuclear Information System (INIS)
Karriem, Z.; Ivanov, K.; Zamonsky, O.
2011-01-01
This paper presents work that has been performed to develop an integrated Monte Carlo-deterministic transport methodology in which the two methods make use of exactly the same general geometry and multigroup nuclear data. The envisioned application of this methodology is in reactor lattice physics methods development and shielding calculations. The methodology will be based on the Method of Long Characteristics (MOC) and the Monte Carlo N-Particle transport code MCNP5. Important initial developments pertaining to ray tracing and the development of an MOC flux solver for the proposed methodology are described. Results showing the viability of the methodology are presented for two 2-D general-geometry transport problems. The essential developments presented are the use of MCNP as a geometry-construction and ray-tracing tool for the MOC, verification of the ray-tracing indexing scheme that was developed to represent the MCNP geometry in the MOC, and verification of the prototype 2-D MOC flux solver. (author)
Directory of Open Access Journals (Sweden)
Jia-Cheng Yu
2018-02-01
A three-dimensional topography simulation of deep reactive ion etching (DRIE) is developed, based on the narrow-band level set method for surface evolution and the Monte Carlo method for flux distribution. The advanced level set method is implemented to simulate the time-dependent movement of the etched surface. Meanwhile, accelerated by a ray tracing algorithm, the Monte Carlo method incorporates all dominant physical and chemical mechanisms such as ion-enhanced etching, ballistic transport, ion scattering, and sidewall passivation. Modified models of charged particles and neutral particles are used to determine their contributions to the etching rate. Effects such as scalloping and the lag effect are investigated in simulations and experiments. In addition, quantitative analyses are conducted to measure the simulation error. Finally, this simulator can serve as an accurate prediction tool for some MEMS fabrications.
Barnes, Diana Hart
2000-11-01
Background and pollution trends and cycles of fourteen trace gases over the Northeastern U.S. are inferred from continuous atmospheric observations at the Harvard Forest research station located in Petersham, Massachusetts. This site receives background `clean' air from the northwest (Canada) and `dirty' polluted air from the southwest (New York City-Washington, D.C. corridor). Mixing ratios of gases regulated by the Montreal Protocol or other policies (CO, PCE, CFC11, CFC12, CFC113, CH₃CCl₃, CCl₄, and Halon-1211) and of those not subject to restrictions (H₂, CH₄, CHCl₃, TCE, N₂O, and SF₆) were measured over the three-year period, 1996 to 1998, every 24 minutes by a fully automated gas chromatographic instrument with electron capture detectors. Evidence for polar vortex venting is found consistently in the month of June of the background seasonal cycles. The ratio of CO and PCE enhancements borne on southwesterly winds are in excellent agreement with county-level EPA and sales-based inventories for the New York City-Washington, D.C. region. From this firm footing, we use CO and PCE as reference compounds to determine the urban/industrial source strengths for the other species. A broad historical and geographic study of emissions reveals that the international treaty has by and large been a success. Locally, despite the passing of the 1996 Montreal Protocol ban, only emissions of CFC12 and CH₃CCl₃ are abating. Though source strengths are waning, the sources are not spent and continued releases to the atmosphere may be expected for some years to come. For CH₃CCl₃, whose rate of decline is central to our understanding of atmospheric processes, we estimate that absolute concentrations may persist until around the year 2010. The long-term high frequency time series of hydrogen provided here represents the first such data set of its kind. The H₂ diurnal cycle is established and explained in terms of its sources and sinks. The ratio of H₂ to CO in pollution plumes is
Directory of Open Access Journals (Sweden)
Ihab Abd-Elrahman
Cardiovascular disease is the leading cause of death worldwide, mainly due to an increasing prevalence of atherosclerosis characterized by inflammatory plaques. Plaques with high levels of macrophage infiltration are considered "vulnerable" while those that do not have significant inflammation are considered stable; cathepsin protease activity is highly elevated in macrophages of vulnerable plaques and contributes to plaque instability. Establishing novel tools for non-invasive molecular imaging of macrophages in plaques could aid in preclinical studies and evaluation of therapeutics. Furthermore, compounds that reduce the macrophage content within plaques should ultimately impact care for this disease. We have applied quenched fluorescent cathepsin activity-based probes (ABPs) to a murine atherosclerosis model and evaluated their use for in vivo imaging using fluorescent molecular tomography (FMT), as well as ex vivo fluorescence imaging and fluorescent microscopy. Additionally, freshly dissected human carotid plaques were treated with our potent cathepsin inhibitor and macrophage apoptosis was evaluated by fluorescent microscopy. We demonstrate that our ABPs accurately detect murine atherosclerotic plaques non-invasively, identifying cathepsin activity within plaque macrophages. In addition, our cathepsin inhibitor selectively induced apoptosis in 55%±10% of the macrophages within excised human atherosclerotic plaques. Cathepsin ABPs present a rapid diagnostic tool for macrophage detection in atherosclerotic plaque. Our inhibitor confirms cathepsin targeting as a promising approach to treat atherosclerotic plaque inflammation.
Development of a hybrid multi-scale phantom for Monte-Carlo based internal dosimetry
International Nuclear Information System (INIS)
Marcatili, S.; Villoing, D.; Bardies, M.
2015-01-01
Aim: in recent years, several phantoms have been developed for radiopharmaceutical dosimetry in clinical and preclinical settings. Voxel-based models (Zubal, Max/Fax, ICRP110) were developed to reach a level of realism that could not be achieved by mathematical models. In turn, 'hybrid' models (XCAT, MOBY/ROBY, Mash/Fash) allow a further degree of versatility by offering the possibility to finely tune each model according to various parameters. However, even 'hybrid' models require the generation of a voxel version for Monte Carlo modeling of radiation transport. Since absorbed-dose simulation time is strictly related to the spatial sampling of the geometry, a compromise must be made between phantom realism and simulation speed. This trade-off leads, on the one hand, to an overestimation of the size of small radiosensitive structures such as the skin or the walls of hollow organs, and on the other, to an unnecessarily detailed voxelization of large, homogeneous structures. The aim of this work is to develop a hybrid multi-resolution phantom model for Geant4 and Gate, to better characterize energy deposition in small structures while preserving reasonable computation times. Materials and Methods: we have developed a pipeline for the conversion of pre-existing phantoms into a multi-scale Geant4 model. Meshes of each organ are created from raw binary images of a phantom and then voxelized to the smallest spatial sampling required by the user. The user can then decide to re-sample the internal part of each organ, while leaving a layer of the smallest voxels at the edge of the organ. In this way, the realistic shape of the organ is maintained while reducing the number of voxels in its inner part. For hollow organs, the wall is always modeled using the smallest voxel sampling. This approach allows choosing a different voxel resolution for each organ according to the specific application. Results: preliminary results show that it is possible to
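The core idea of keeping the finest voxels only at organ surfaces can be sketched with a simple boundary-detection pass over a binary mask. This 2D toy (names and structure are our own, not the authors' pipeline) flags the voxels that would retain the finest sampling:

```python
def boundary_voxels(mask):
    """Flag voxels of a binary organ mask that touch the organ surface.
    In a multi-resolution phantom these keep the finest voxel sampling,
    while interior voxels can be merged into coarser ones."""
    rows, cols = len(mask), len(mask[0])
    edge = [[False] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            if not mask[i][j]:
                continue
            # A voxel is on the boundary if any 4-neighbour is outside
            # the grid or outside the organ.
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if not (0 <= ni < rows and 0 <= nj < cols) or not mask[ni][nj]:
                    edge[i][j] = True
    return edge

organ = [[1, 1, 1, 1] for _ in range(4)]  # a 4x4 square "organ"
edge = boundary_voxels(organ)             # border ring flagged, interior not
```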
Development of a Monte-Carlo based method for calculating the effect of stationary fluctuations
DEFF Research Database (Denmark)
Pettersen, E. E.; Demazire, C.; Jareteg, K.
2015-01-01
This paper deals with the development of a novel method for performing Monte Carlo calculations of the effect, on the neutron flux, of stationary fluctuations in macroscopic cross-sections. The basic principle relies on the formulation of two equivalent problems in the frequency domain: one ... The formulation of the equivalent problems nevertheless requires the possibility to modify the macroscopic cross-sections, and we use the work of Kuijper, van der Marck and Hogenbirk to define group-wise macroscopic cross-sections in MCNP [1]. The method is illustrated in this paper at a frequency of 1 Hz, for which only the real ... stationary dynamic calculations, the presented method does not require any modification of the Monte Carlo code.
Directory of Open Access Journals (Sweden)
N Heidarloo
2017-08-01
Intraoperative electron radiotherapy is a radiotherapy method that delivers a high single fraction of radiation dose to the patient in one session during surgery. The beam shaper applicator is an applicator recently employed with this radiotherapy method, and it has considerable application in the treatment of large tumors. In this study, the dosimetric characteristics of the electron beam produced by the LIAC intraoperative radiotherapy accelerator in conjunction with this applicator were evaluated through Monte Carlo simulation with the MCNP code. The results showed that the electron beam produced with the beam shaper applicator has the desirable dosimetric characteristics, so that the applicator can be considered for clinical purposes. Furthermore, the good agreement between the results of simulation and practical dosimetry confirms the applicability of the Monte Carlo method in determining the dosimetric parameters of electron-beam intraoperative radiotherapy.
Simulation based sequential Monte Carlo methods for discretely observed Markov processes
Neal, Peter
2014-01-01
Parameter estimation for discretely observed Markov processes is a challenging problem. However, simulation of Markov processes is straightforward using the Gillespie algorithm. We exploit this ease of simulation to develop an effective sequential Monte Carlo (SMC) algorithm for obtaining samples from the posterior distribution of the parameters. In particular, we introduce two key innovations, coupled simulations, which allow us to study multiple parameter values on the basis of a single sim...
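The Gillespie algorithm that underpins this simulation-based approach is short enough to sketch. The birth-death process below is a stand-in example, not one of the processes studied in the paper:

```python
import random

random.seed(42)

def gillespie_birth_death(birth, death, x0, t_end):
    """Gillespie (stochastic simulation) algorithm for a birth-death
    Markov process: X -> X+1 at rate `birth`, X -> X-1 at rate `death*X`.
    Returns the state at time t_end."""
    t, x = 0.0, x0
    while True:
        total = birth + death * x          # total event rate
        t += random.expovariate(total)     # exponential waiting time
        if t > t_end:
            return x
        if random.random() < birth / total:
            x += 1                         # birth event
        else:
            x -= 1                         # death event

# The stationary distribution is Poisson(birth/death), so the long-run
# mean should be close to 10 for these rates.
samples = [gillespie_birth_death(10.0, 1.0, 0, 50.0) for _ in range(200)]
mean = sum(samples) / len(samples)
```

In an SMC setting, repeated forward simulations like these are weighted against the discrete observations to approximate the posterior over the rate parameters.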
Phase constitution of Ni-based quaternary alloys studied by Monte Carlo simulation
Czech Academy of Sciences Publication Activity Database
Buršík, Jiří
2002-01-01
Vol. 147, No. 1-2 (2002), pp. 162-165. ISSN 0010-4655. [Computational Modeling and Simulation of Complex Systems, Europhysics Conference on Computational Physics (CCP 2001). Aachen, 05.09.2001-08.09.2001] R&D Projects: GA ČR GA202/01/0383. Institutional research plan: CEZ:AV0Z2041904. Keywords: Monte Carlo simulation; ordering; pairwise interaction. Subject RIV: BM - Solid Matter Physics; Magnetism. Impact factor: 1.204, year: 2002
Design and evaluation of a Monte Carlo based model of an orthovoltage treatment system
International Nuclear Information System (INIS)
Penchev, Petar; Maeder, Ulf; Fiebich, Martin; Zink, Klemens; University Hospital Marburg
2015-01-01
The aim of this study was to develop a flexible framework of an orthovoltage treatment system capable of calculating and visualizing dose distributions in different phantoms and CT datasets. The framework provides a complete set of various filters, applicators and X-ray energies and therefore can be adapted to varying studies or be used for educational purposes. A dedicated user friendly graphical interface was developed allowing for easy setup of the simulation parameters and visualization of the results. For the Monte Carlo simulations the EGSnrc Monte Carlo code package was used. Building the geometry was accomplished with the help of the EGSnrc C++ class library. The deposited dose was calculated according to the KERMA approximation using the track-length estimator. The validation against measurements showed a good agreement within 4-5% deviation, down to depths of 20% of the depth dose maximum. Furthermore, to show its capabilities, the validated model was used to calculate the dose distribution on two CT datasets. Typical Monte Carlo calculation time for these simulations was about 10 minutes achieving an average statistical uncertainty of 2% on a standard PC. However, this calculation time depends strongly on the used CT dataset, tube potential, filter material/thickness and applicator size.
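The track-length (KERMA-approximation) estimator mentioned above reduces, per scoring cell, to summed particle path length divided by cell volume. A minimal sketch with invented track lengths and a hypothetical kerma coefficient:

```python
def tracklength_flux(track_lengths_cm, cell_volume_cm3):
    """Track-length estimator of scalar flux: the total particle path
    length deposited in a cell divided by its volume (per source
    particle), in cm^-2 per particle."""
    return sum(track_lengths_cm) / cell_volume_cm3

# Three toy particle tracks crossing a 2 cm^3 scoring cell.
flux = tracklength_flux([1.0, 0.5, 1.5], 2.0)

# KERMA approximation: dose is flux times an energy-dependent kerma
# coefficient; the value here is purely hypothetical.
kerma_coefficient = 3.0e-10  # Gy cm^2, illustrative
dose = flux * kerma_coefficient
```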
Doronin, Alexander; Rushmeier, Holly E.; Meglinski, Igor; Bykov, Alexander V.
2016-03-01
We present a new Monte Carlo based approach for the modelling of Bidirectional Scattering-Surface Reflectance Distribution Function (BSSRDF) for accurate rendering of human skin appearance. The variations of both skin tissues structure and the major chromophores are taken into account correspondingly to the different ethnic and age groups. The computational solution utilizes HTML5, accelerated by the graphics processing units (GPUs), and therefore is convenient for the practical use at the most of modern computer-based devices and operating systems. The results of imitation of human skin reflectance spectra, corresponding skin colours and examples of 3D faces rendering are presented and compared with the results of phantom studies.
Sampling-based nuclear data uncertainty quantification for continuous energy Monte-Carlo codes
International Nuclear Information System (INIS)
Zhu, T.
2015-01-01
Research on the uncertainty of nuclear data is motivated by practical necessity. Nuclear data uncertainties can propagate through nuclear system simulations into operation- and safety-related parameters. The tolerance for uncertainties in nuclear reactor design and operation can affect the economic efficiency of nuclear power, and essentially its sustainability. The goal of the present PhD research is to establish a methodology of nuclear data uncertainty quantification (NDUQ) for MCNPX, the continuous-energy Monte-Carlo (M-C) code. The high fidelity (continuous-energy treatment and flexible geometry modelling) of MCNPX makes it the code of choice for routine criticality safety calculations at PSI/LRS, but also raises challenges for NDUQ by conventional sensitivity/uncertainty (S/U) methods. For example, only recently in 2011 was the capability of calculating continuous-energy k-eff sensitivity to nuclear data demonstrated in certain M-C codes, using the method of iterated fission probability. The methodology developed during this PhD research is fundamentally different from the conventional S/U approach: nuclear data are treated as random variables and sampled according to presumed probability distributions. When sampled nuclear data are used in repeated model calculations, the output variance is attributed to the collective uncertainties of the nuclear data. The NUSS (Nuclear data Uncertainty Stochastic Sampling) tool is based on this sampling approach and implemented to work with MCNPX's ACE format of nuclear data, which also gives NUSS compatibility with the MCNP and SERPENT M-C codes. Multigroup uncertainties are used for the sampling of ACE-formatted pointwise-energy nuclear data in a groupwise manner, due to the more limited quantity and quality of nuclear data uncertainties. Conveniently, the usage of multigroup nuclear data uncertainties allows consistent comparison between NUSS and other methods (both S/U and sampling-based) that employ the same
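The stochastic-sampling idea behind such tools can be illustrated on a toy model: cross-sections are drawn from assumed distributions, the model is evaluated repeatedly, and the spread of outputs is read as the propagated nuclear-data uncertainty. The two-parameter k-infinity model and the 1% relative uncertainties below are purely illustrative:

```python
import random
import statistics

random.seed(0)

def k_inf(nu_sigma_f, sigma_a):
    """Toy infinite-medium multiplication factor: production over
    absorption (one-group, illustrative only)."""
    return nu_sigma_f / sigma_a

# Sample cross-sections from assumed normal distributions (1% rel. std.)
# and propagate them through repeated model evaluations.
samples = []
for _ in range(5000):
    nsf = random.gauss(0.060, 0.060 * 0.01)
    sa = random.gauss(0.050, 0.050 * 0.01)
    samples.append(k_inf(nsf, sa))

mean_k = statistics.mean(samples)
rel_unc = statistics.stdev(samples) / mean_k  # ~sqrt(2)*1% for this toy
```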
Depletion analysis of the UMLRR reactor core using MCNP6
Odera, Dim Udochukwu
Accurate knowledge of the neutron flux and the temporal nuclide inventory in reactor physics calculations is necessary for a variety of applications in nuclear engineering, such as criticality safety, safeguards, and spent fuel storage. The Monte Carlo N-Particle (MCNP6) code, with the integrated depletion code CINDER90, provides a high-fidelity tool that can be used to perform 3D, full-core simulations to evaluate fissile material utilization and nuclide inventories as a function of burnup. The University of Massachusetts Lowell Research Reactor (UMLRR) has previously been modeled with the deterministic code VENTURE and with an older version of MCNP (MCNP5); the MIT-developed MCODE (MCNP ORIGEN DEPLETION CODE) was used to perform some limited depletion calculations. This work chronicles the use of MCNP6, released in June 2013, to perform coupled neutronics and depletion calculations. The results are compared to previously benchmarked results. Furthermore, the code is used to determine the ratio of the fission products ¹³⁴Cs and ¹³⁷Cs (burnup indicators), and the resulting ratio is compared to the burnup of the UMLRR.
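Why the ¹³⁴Cs/¹³⁷Cs ratio works as a burnup indicator can be shown with a toy depletion chain: ¹³⁷Cs scales with total fissions, while ¹³⁴Cs requires an extra neutron capture on fission-product ¹³³Cs, so their ratio grows with fluence. The rates below are arbitrary, and decay is ignored:

```python
def cs_ratio(capture_rate, fission_rate, n_steps, dt):
    """Forward-Euler depletion of a toy chain: fission produces 133Cs and
    137Cs directly; 134Cs builds in only via neutron capture on 133Cs.
    Decay and realistic yields/cross-sections are deliberately ignored."""
    n133 = n134 = n137 = 0.0
    for _ in range(n_steps):
        n133 += (fission_rate - capture_rate * n133) * dt
        n134 += capture_rate * n133 * dt
        n137 += fission_rate * dt
    return n134 / n137

# The indicator ratio increases monotonically with irradiation time.
low_burnup = cs_ratio(1.0e-3, 1.0, 100, 1.0)
high_burnup = cs_ratio(1.0e-3, 1.0, 400, 1.0)
```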
An analytic linear accelerator source model for GPU-based Monte Carlo dose calculations
Tian, Zhen; Li, Yongbao; Folkerts, Michael; Shi, Feng; Jiang, Steve B.; Jia, Xun
2015-10-01
Recently, there has been a lot of research interest in developing fast Monte Carlo (MC) dose calculation methods on graphics processing unit (GPU) platforms. A good linear accelerator (linac) source model is critical for both accuracy and efficiency considerations. In principle, an analytical source model should be more preferred for GPU-based MC dose engines than a phase-space file-based model, in that data loading and CPU-GPU data transfer can be avoided. In this paper, we presented an analytical field-independent source model specifically developed for GPU-based MC dose calculations, associated with a GPU-friendly sampling scheme. A key concept called phase-space-ring (PSR) was proposed. Each PSR contained a group of particles that were of the same type, close in energy and reside in a narrow ring on the phase-space plane located just above the upper jaws. The model parameterized the probability densities of particle location, direction and energy for each primary photon PSR, scattered photon PSR and electron PSR. Models of one 2D Gaussian distribution or multiple Gaussian components were employed to represent the particle direction distributions of these PSRs. A method was developed to analyze a reference phase-space file and derive corresponding model parameters. To efficiently use our model in MC dose calculations on GPU, we proposed a GPU-friendly sampling strategy, which ensured that the particles sampled and transported simultaneously are of the same type and close in energy to alleviate GPU thread divergences. To test the accuracy of our model, dose distributions of a set of open fields in a water phantom were calculated using our source model and compared to those calculated using the reference phase-space files. For the high dose gradient regions, the average distance-to-agreement (DTA) was within 1 mm and the maximum DTA within 2 mm. For relatively low dose gradient regions, the root-mean-square (RMS) dose difference was within 1.1% and the maximum
Houska, Tobias; Multsch, Sebastian; Kraft, Philipp; Frede, Hans-Georg; Breuer, Lutz
2014-05-01
Computer simulations are widely used to support decision making and planning in the agriculture sector. On the one hand, many plant growth models use simplified hydrological processes and structures, e.g. by the use of a small number of soil layers or by the application of simple water flow approaches. On the other hand, in many hydrological models plant growth processes are poorly represented. Hence, fully coupled models with a high degree of process representation would allow a more detailed analysis of the dynamic behaviour of the soil-plant interface. We used the Python programming language to couple two such highly process-oriented independent models and to calibrate both models simultaneously. The Catchment Modelling Framework (CMF) simulated soil hydrology based on the Richards equation and the Van-Genuchten-Mualem retention curve. CMF was coupled with the Plant growth Modelling Framework (PMF), which predicts plant growth on the basis of radiation use efficiency, degree days, water shortage and dynamic root biomass allocation. The Monte Carlo based Generalised Likelihood Uncertainty Estimation (GLUE) method was applied to parameterize the coupled model and to investigate the related uncertainty of model predictions. Overall, 19 model parameters (4 for CMF and 15 for PMF) were analysed through 2 × 10⁶ model runs randomly drawn from an equally distributed parameter space. Three objective functions were used to evaluate the model performance, i.e. coefficient of determination (R²), bias and model efficiency according to Nash-Sutcliffe (NSE). The model was applied to three sites with different management in Muencheberg (Germany) for the simulation of winter wheat (Triticum aestivum L.) in a cross-validation experiment. Field observations for model evaluation included soil water content and the dry matter of roots, storages, stems and leaves. Best parameter sets resulted in NSE of 0.57 for the simulation of soil moisture across all three sites. The shape
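The GLUE procedure described here (random parameter draws from a prior, an NSE-based likelihood, and a behavioural threshold) can be sketched on a toy linear model. The model, priors and threshold below are illustrative, not the CMF/PMF setup:

```python
import random

random.seed(7)

def nse(obs, sim):
    """Nash-Sutcliffe model efficiency: 1 minus residual variance over
    observation variance; 1.0 is a perfect fit."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

# Toy "observations": y = 2x + 1 plus noise, standing in for field data.
xs = [0, 1, 2, 3, 4, 5]
obs = [2.0 * x + 1.0 + random.gauss(0.0, 0.1) for x in xs]

# GLUE: sample parameters from uniform priors, keep behavioural sets.
behavioural = []
for _ in range(2000):
    a = random.uniform(0.0, 4.0)
    b = random.uniform(-2.0, 4.0)
    sim = [a * x + b for x in xs]
    score = nse(obs, sim)
    if score > 0.9:  # behavioural threshold
        behavioural.append((score, a, b))
```

The spread of the retained (behavioural) parameter sets is then read as the parameter uncertainty, rather than reporting a single optimum.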
International Nuclear Information System (INIS)
Stjernberg, J; Antti, M-L; Ion, J C; Lindblom, B
2011-01-01
To investigate the mechanisms underlying the depletion of mullite/corundum-based refractory bricks used in rotary kilns for iron ore pellet production, the reaction mechanisms between scaffold material and refractory bricks have been studied on the laboratory scale. Alkali additions were used to enhance the reaction rates between the materials. The morphological changes and active chemical reactions at the refractory/scaffold material interface in the samples were characterized using scanning electron microscopy (SEM), thermal analysis (TA) and X-ray diffraction (XRD). No reaction products of alkali and hematite (Fe₂O₃) were detected; however, alkali dissolves the mullite in the bricks. Phases such as nepheline (Na₂O·Al₂O₃·2SiO₂), kalsilite (K₂O·Al₂O₃·2SiO₂), leucite (K₂O·Al₂O₃·4SiO₂) and potassium β-alumina (K₂O·11Al₂O₃) were formed as a consequence of reactions between alkali and the bricks.
Mommen, G.P.M.; Waterbeemd, van de B.; Meiring, H.D.; Kersten, G.; Heck, A.J.R.; Jong, de A.P.J.M.
2012-01-01
A positional proteomics strategy for global N-proteome analysis is presented, based on phospho tagging (PTAG) of internal peptides followed by their depletion by titanium dioxide (TiO₂) affinity chromatography. To this end, N-terminal and lysine amino groups are initially completely dimethylated with
Li-hui, Zheng; Xiao-qing, He; Li-xia, Fu; Xiang-chun, Wang
2009-02-01
Water-based micro-bubble drilling fluid, which is used to exploit depleted reservoirs, is a complicated multiphase flow system composed of gas, water, oil, polymer, surfactants and solids. The gas phase is separated from the bulk water by two layers and three membranes. From each gas phase centre to the bulk water, these are the "surface tension reducing membrane", the "high viscosity layer", the "high viscosity fixing membrane", the "compatibility enhancing membrane" and the "concentration transition layer of linear high polymer (LHP) and surfactants". The "surface tension reducing membrane", "high viscosity layer" and "high viscosity fixing membrane" bond closely to pack air, forming an "air-bag"; the "compatibility enhancing membrane" and "concentration transition layer of LHP and surfactants" adsorb outside the "air-bag" to form an "incompact zone". Viewed another way, the "air-bag" and "incompact zone" together compose a micro-bubble. Dynamic changes of the "incompact zone" enable micro-bubbles to exist singly or aggregate together, and lead the whole fluid, which can wet both hydrophilic and hydrophobic surfaces, to possess very high viscosity at an extremely low shear rate yet good fluidity at higher shear rates. When the water-based micro-bubble drilling fluid encounters leakage zones, it automatically regulates the sizes and shapes of the bubbles according to the slot width of a fracture, the height of a cavern or the aperture of openings, or seals them by making use of the high viscosity of the system at very low shear rate. Measurements of the rheological parameters indicate that water-based micro-bubble drilling fluid has very high plastic viscosity, yield point, initial gel, final gel and a high ratio of yield point to plastic viscosity. All of these properties make the multiphase flow system meet the requirements of the petroleum drilling industry. Research on the interface between gas and bulk water of this multiphase flow system can provide information for synthesizing effective agents to
Monte Carlo-based treatment planning system calculation engine for microbeam radiation therapy
Energy Technology Data Exchange (ETDEWEB)
Martinez-Rovira, I.; Sempau, J.; Prezado, Y. [Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, Barcelona E-08028 (Spain) and ID17 Biomedical Beamline, European Synchrotron Radiation Facility (ESRF), 6 rue Jules Horowitz B.P. 220, F-38043 Grenoble Cedex (France); Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, Barcelona E-08028 (Spain); Laboratoire Imagerie et modelisation en neurobiologie et cancerologie, UMR8165, Centre National de la Recherche Scientifique (CNRS), Universites Paris 7 et Paris 11, Bat 440., 15 rue Georges Clemenceau, F-91406 Orsay Cedex (France)
2012-05-15
Purpose: Microbeam radiation therapy (MRT) is a synchrotron radiotherapy technique that explores the limits of the dose-volume effect. Preclinical studies have shown that MRT irradiations (arrays of 25-75 µm wide microbeams spaced 200-400 µm apart) are able to eradicate highly aggressive animal tumor models while healthy tissue is preserved. These promising results have provided the basis for the forthcoming clinical trials at the ID17 Biomedical Beamline of the European Synchrotron Radiation Facility (ESRF). The first step includes irradiation of pets (cats and dogs) as a milestone before treatment of human patients. Within this context, accurate dose calculations are required. The distinct features of both beam generation and irradiation geometry in MRT with respect to conventional techniques require the development of a specific MRT treatment planning system (TPS). In particular, a Monte Carlo (MC)-based calculation engine for the MRT TPS has been developed in this work. Experimental verification in heterogeneous phantoms and optimization of the computation time have also been performed. Methods: The penelope/penEasy MC code was used to compute dose distributions from a realistic beam source model. Experimental verification was carried out by means of radiochromic films placed within heterogeneous slab phantoms. Once validation was completed, dose computations in a virtual model of a patient, reconstructed from computed tomography (CT) images, were performed. To this end, the CT image voxel grid (a few cubic millimeters in volume) was decoupled from the dose bin grid, which has micrometer dimensions in the transversal direction of the microbeams. Optimization of the simulation parameters, the use of variance-reduction (VR) techniques, and other methods, such as parallelization of the simulations, were applied in order to speed up the dose computation. Results: Good agreement between MC simulations and experimental results was achieved, even at
Development of a Monte-Carlo based method for calculating the effect of stationary fluctuations
DEFF Research Database (Denmark)
Pettersen, E. E.; Demazire, C.; Jareteg, K.
2015-01-01
that corresponds to the real part of the neutron balance, and one that corresponds to the imaginary part. The two equivalent problems are in nature similar to two subcritical systems driven by external neutron sources, and can thus be treated as such in a Monte Carlo framework. The definition of these two equivalent problems nevertheless requires the possibility to modify the macroscopic cross-sections, and we use the work of Kuijper, van der Marck and Hogenbirk to define group-wise macroscopic cross-sections in MCNP [1]. The method is illustrated in this paper at a frequency of 1 Hz, for which only the real
International Nuclear Information System (INIS)
Christoforou, Stavros; Hoogenboom, J. Eduard
2011-01-01
A zero-variance based scheme is implemented and tested in the MCNP5 Monte Carlo code. The scheme is applied to a mini-core reactor, using the adjoint function obtained from a deterministic calculation to bias the transport kernels. It is demonstrated that the variance of the k_eff estimate is halved compared to a standard criticality calculation. In addition, the biasing does not affect source distribution convergence of the system. However, since the code lacked optimisations for speed, the higher CPU time cost meant that a corresponding increase in the overall efficiency of the calculation could not be demonstrated. (author)
Depleted uranium management alternatives
Energy Technology Data Exchange (ETDEWEB)
Hertzler, T.J.; Nishimoto, D.D.
1994-08-01
This report evaluates two management alternatives for Department of Energy depleted uranium: continued storage as uranium hexafluoride, and conversion to uranium metal and fabrication into shielding for spent nuclear fuel containers. The results will be used to compare the costs with other alternatives, such as disposal. Cost estimates for the continued storage alternative are based on a life cycle of 27 years, through the year 2020. Cost estimates for the recycle alternative are based on existing conversion process costs and capital costs for fabricating the containers. Additionally, the recycle alternative accounts for costs associated with intermediate product resale and secondary waste disposal for materials generated during the conversion process.
A Monte Carlo and continuum study of mechanical properties of nanoparticle based films
Energy Technology Data Exchange (ETDEWEB)
Ogunsola, Oluwatosin; Ehrman, Sheryl [University of Maryland, Department of Chemical and Biomolecular Engineering, Chemical and Nuclear Engineering Building (United States)], E-mail: sehrman@eng.umd.edu
2008-01-15
A combination Monte Carlo and equivalent-continuum simulation approach was used to investigate the structure-mechanical property relationships of titania nanoparticle deposits. Films of titania composed of nanoparticle aggregates were simulated using a Monte Carlo approach with diffusion-limited aggregation. Each aggregate in the simulation is fractal-like and random in structure. In the film structure, it is assumed that bond strength is a function of distance with two limiting values for the bond strengths: one representing the strong chemical bond between the particles at closest proximity in the aggregate and the other representing the weak van der Waals bond between particles from different aggregates. The Young's modulus of the film is estimated using an equivalent-continuum modeling approach, and the influences of particle diameter (5-100 nm) and aggregate size (3-400 particles per aggregate) on predicted Young's modulus are investigated. The Young's modulus is observed to increase with a decrease in primary particle size and is independent of the size of the aggregates deposited. Decreasing porosity resulted in an increase in Young's modulus as expected from results reported previously in the literature.
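The film structures above are built from diffusion-limited aggregation (DLA). A minimal on-lattice sketch of the idea, with an illustrative grid size and particle count rather than the parameters of the study, launches random walkers toward a seeded cluster and freezes them on contact:

```python
import numpy as np

def dla_aggregate(n_particles=30, size=41, seed=1):
    """Grow a 2-D on-lattice diffusion-limited aggregate: random walkers
    are launched around the cluster and stick when they reach a site
    adjacent to an occupied one."""
    rng = np.random.default_rng(seed)
    grid = np.zeros((size, size), dtype=bool)
    c = size // 2
    grid[c, c] = True                          # seed particle at the centre
    moves = ((1, 0), (-1, 0), (0, 1), (0, -1))
    for _ in range(n_particles - 1):
        stuck = False
        while not stuck:
            # launch from a random angle on a circle around the cluster
            ang = rng.uniform(0.0, 2.0 * np.pi)
            r = c - 5
            x = int(round(c + r * np.cos(ang)))
            y = int(round(c + r * np.sin(ang)))
            while 0 < x < size - 1 and 0 < y < size - 1:
                if any(grid[x + dx, y + dy] for dx, dy in moves):
                    grid[x, y] = True          # stick next to the cluster
                    stuck = True
                    break
                dx, dy = moves[rng.integers(4)]
                x, y = x + dx, y + dy          # unbiased lattice walk
    return grid

agg = dla_aggregate()
print(agg.sum())  # number of deposited particles, including the seed
```

Each resulting cluster is fractal-like and random, as described above; in the study such aggregates are then deposited to form the film whose bonds feed the equivalent-continuum model.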
A Monte Carlo and continuum study of mechanical properties of nanoparticle based films
International Nuclear Information System (INIS)
Ogunsola, Oluwatosin; Ehrman, Sheryl
2008-01-01
A combination Monte Carlo and equivalent-continuum simulation approach was used to investigate the structure-mechanical property relationships of titania nanoparticle deposits. Films of titania composed of nanoparticle aggregates were simulated using a Monte Carlo approach with diffusion-limited aggregation. Each aggregate in the simulation is fractal-like and random in structure. In the film structure, it is assumed that bond strength is a function of distance with two limiting values for the bond strengths: one representing the strong chemical bond between the particles at closest proximity in the aggregate and the other representing the weak van der Waals bond between particles from different aggregates. The Young's modulus of the film is estimated using an equivalent-continuum modeling approach, and the influences of particle diameter (5-100 nm) and aggregate size (3-400 particles per aggregate) on predicted Young's modulus are investigated. The Young's modulus is observed to increase with a decrease in primary particle size and is independent of the size of the aggregates deposited. Decreasing porosity resulted in an increase in Young's modulus as expected from results reported previously in the literature
Energy Technology Data Exchange (ETDEWEB)
Burke, Timothy P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kiedrowski, Brian C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Martin, William R. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-11-19
Kernel Density Estimators (KDEs) are a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations. Kernel density estimators are an alternative to histogram tallies for obtaining global solutions in Monte Carlo tallies. With KDEs, a single event, either a collision or a particle track, can contribute to the score at multiple tally points, with the uncertainty at those points being independent of the desired resolution of the solution. Thus, KDEs show potential for obtaining estimates of a global solution with reduced variance when compared to a histogram. Previously, KDEs have been applied to neutronics for one-group reactor physics problems and fixed-source shielding applications. However, little work has been done on obtaining reaction rates using KDEs. This paper introduces a new form of the mean-free-path (MFP) KDE that is capable of handling general geometries. Furthermore, extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies into the solution. An ad hoc remedy for these inaccuracies is introduced that produces errors smaller than 4% at material interfaces.
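The core idea, that one event contributes a smoothed score to many tally points instead of a single histogram bin, can be illustrated with a one-dimensional Epanechnikov-kernel estimate of a stand-in collision-site density (the exponential below is an illustrative sample distribution, not a transport solution):

```python
import numpy as np

def epanechnikov_kde(samples, x, bandwidth):
    """Kernel density estimate: every sampled event contributes a smooth
    kernel centred on itself, so one event scores at many tally points."""
    u = (x[:, None] - samples[None, :]) / bandwidth
    k = 0.75 * np.maximum(1.0 - u ** 2, 0.0)   # Epanechnikov kernel
    return k.sum(axis=1) / (len(samples) * bandwidth)

rng = np.random.default_rng(0)
# Stand-in "collision sites" following an exp(-x) attenuation profile
samples = rng.exponential(1.0, 20000)
x = np.linspace(0.1, 3.0, 30)                  # tally points
kde = epanechnikov_kde(samples, x, bandwidth=0.2)
# Histogram tally on the same data, for comparison
hist, edges = np.histogram(samples, bins=np.linspace(0, 3, 31), density=True)
true = np.exp(-x)
print(round(float(np.max(np.abs(kde - true))), 3))
```

Unlike the histogram, the KDE can be evaluated at any point, and refining the tally grid does not increase the per-point variance; the boundary bias visible near x = 0 is one of the inaccuracies that corrective schemes address.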
Directory of Open Access Journals (Sweden)
Timothy J Kyng
2014-03-01
The economic valuation of complex financial contracts is often done using Monte Carlo simulation. We show how to implement this approach using Excel. We discuss Monte Carlo evaluation for standard single-asset European options and then demonstrate how the basic ideas may be extended to evaluate options with exotic multi-asset, multi-period features; single-asset option evaluation becomes a special case. We use a typical executive stock option to motivate the discussion, which we analyse using novel theory developed in our previous works. We demonstrate the simulation of the multivariate normal distribution and the multivariate log-normal distribution using the Cholesky square root of a covariance matrix to replicate the correlation structure in the multi-asset, multi-period simulation required for estimating the economic value of the contract. We do this in the standard Black-Scholes framework with constant parameters. Excel implementation provides many pedagogical merits due to its relative transparency and simplicity for students. This approach is also relevant to industry due to the widespread use of Excel by practitioners and for graduates who may desire to work in the finance industry. It allows students to price complex financial contracts for which an analytic approach is intractable.
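The article implements the simulation in Excel; the same Cholesky-based multivariate lognormal sampling can be sketched in Python. The contract below is a plain European call on the average of two assets with illustrative parameters, not the executive stock option analysed in the paper.

```python
import numpy as np

def mc_basket_call(S0, K, r, T, sigma, corr, n_paths=200000, seed=7):
    """Monte Carlo price of a European call on the average of several assets.
    Correlated terminal prices are simulated via the Cholesky factor of the
    correlation matrix, in a constant-parameter Black-Scholes world."""
    rng = np.random.default_rng(seed)
    S0, sigma = np.asarray(S0, float), np.asarray(sigma, float)
    L = np.linalg.cholesky(np.asarray(corr, float))
    z = rng.standard_normal((n_paths, len(S0))) @ L.T   # correlated normals
    ST = S0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * z)
    payoff = np.maximum(ST.mean(axis=1) - K, 0.0)
    disc = np.exp(-r * T)
    price = disc * payoff.mean()
    stderr = disc * payoff.std(ddof=1) / np.sqrt(n_paths)
    return price, stderr

price, se = mc_basket_call(S0=[100, 100], K=100, r=0.05, T=1.0,
                           sigma=[0.2, 0.2], corr=[[1.0, 0.5], [0.5, 1.0]])
print(round(price, 2), round(se, 3))
```

The Cholesky step is exactly what the spreadsheet version computes cell-by-cell; path-dependent or multi-period payoffs only change how `ST` and `payoff` are built.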
Jedrychowski, M.; Bacroix, B.; Salman, O. U.; Tarasiuk, J.; Wronski, S.
2015-08-01
The work focuses on the influence of moderate plastic deformation on the subsequent partial recrystallization of hexagonal zirconium (Zr702). In the considered case, strain-induced boundary migration (SIBM) is assumed to be the dominant recrystallization mechanism. This hypothesis is analyzed and tested in detail using experimental EBSD-OIM data and Monte Carlo computer simulations. An EBSD investigation was performed on zirconium samples, which were channel-die compressed in two perpendicular directions: the normal direction (ND) and the transverse direction (TD) of the initial material sheet. The maximal applied strain was below 17%. Samples were then briefly annealed in order to achieve a partly recrystallized state. The obtained EBSD data were analyzed in terms of texture evolution associated with a microstructural characterization, including kernel average misorientation (KAM), grain orientation spread (GOS), twinning, grain size distributions and a description of grain boundary regions. In parallel, a Monte Carlo Potts model combined with the experimental microstructures was employed in order to verify two main recrystallization scenarios: SIBM-driven growth from deformed sub-grains and classical growth of recrystallization nuclei. It is concluded that the simulation results provided by the SIBM model are in good agreement with the experimental data in terms of texture as well as microstructural evolution.
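The Monte Carlo Potts model used in such studies evolves a lattice of crystallographic orientations by locally reducing grain-boundary energy. A minimal zero-temperature sketch, with an illustrative lattice size and orientation count rather than the experimental microstructures:

```python
import numpy as np

def potts_step(spins, n_attempts, rng, kT=0.0):
    """One batch of Q-state Potts grain-growth trials: flip a site to a
    random neighbour's orientation if that does not raise (or, at kT > 0,
    probabilistically raises) the local boundary energy."""
    n = spins.shape[0]
    for _ in range(n_attempts):
        i, j = rng.integers(n, size=2)
        nbrs = [spins[(i + 1) % n, j], spins[(i - 1) % n, j],
                spins[i, (j + 1) % n], spins[i, (j - 1) % n]]
        new, old = nbrs[rng.integers(4)], spins[i, j]
        # Energy change = change in number of unlike neighbour bonds
        dE = sum(new != v for v in nbrs) - sum(old != v for v in nbrs)
        if dE <= 0 or (kT > 0 and rng.random() < np.exp(-dE / kT)):
            spins[i, j] = new
    return spins

def boundary_energy(spins):
    """Count unlike nearest-neighbour pairs (periodic boundaries)."""
    return int((spins != np.roll(spins, 1, 0)).sum()
               + (spins != np.roll(spins, 1, 1)).sum())

rng = np.random.default_rng(3)
spins = rng.integers(0, 20, size=(40, 40))   # 20 random orientations
e0 = boundary_energy(spins)
for _ in range(30):                          # 30 sweeps of 40*40 attempts
    potts_step(spins, 1600, rng)
print(boundary_energy(spins) < e0)           # coarsening lowers the energy
```

In SIBM-type simulations the initial `spins` array is seeded from measured EBSD orientation maps instead of random integers, and stored deformation energy terms are added to `dE`.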
Gudjonson, Herman; Kats, Mikhail A.; Liu, Kun; Nie, Zhihong; Kumacheva, Eugenia; Capasso, Federico
2014-01-01
Many experimental systems consist of large ensembles of uncoupled or weakly interacting elements operating as a single whole; this is particularly the case for applications in nano-optics and plasmonics, including colloidal solutions, plasmonic or dielectric nanoparticles on a substrate, antenna arrays, and others. In such experiments, measurements of the optical spectra of ensembles will differ from measurements of the independent elements as a result of small variations from element to element (also known as polydispersity) even if these elements are designed to be identical. In particular, sharp spectral features arising from narrow-band resonances will tend to appear broader and can even be washed out completely. Here, we explore this effect of inhomogeneous broadening as it occurs in colloidal nanopolymers comprising self-assembled nanorod chains in solution. Using a technique combining finite-difference time-domain simulations and Monte Carlo sampling, we predict the inhomogeneously broadened optical spectra of these colloidal nanopolymers and observe significant qualitative differences compared with the unbroadened spectra. The approach combining an electromagnetic simulation technique with Monte Carlo sampling is widely applicable for quantifying the effects of inhomogeneous broadening in a variety of physical systems, including those with many degrees of freedom that are otherwise computationally intractable. PMID:24469797
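The effect of polydispersity on ensemble spectra can be reproduced with a simple Monte Carlo average: draw each element's resonance frequency from a distribution and average the individual line shapes. The Lorentzian line and Gaussian polydispersity below are generic stand-ins for the FDTD-computed single-chain spectra of the study:

```python
import numpy as np

def lorentzian(w, w0, gamma):
    """Normalized Lorentzian line shape centred at w0 with half-width gamma."""
    return (gamma / np.pi) / ((w - w0) ** 2 + gamma ** 2)

rng = np.random.default_rng(0)
w = np.linspace(0.8, 1.2, 2001)                # frequency axis (arb. units)

single = lorentzian(w, 1.0, 0.005)             # one "ideal" resonator
# Ensemble: resonance frequency varies element to element (polydispersity)
centres = rng.normal(1.0, 0.02, 2000)          # Monte Carlo draw of centres
ensemble = np.mean([lorentzian(w, c, 0.005) for c in centres], axis=0)

def fwhm(w, y):
    """Full width at half maximum of a unimodal curve on a grid."""
    above = w[y >= y.max() / 2]
    return above[-1] - above[0]

print(round(fwhm(w, single), 4), round(fwhm(w, ensemble), 4))
```

The averaged line is both broader and lower than the single-element resonance, which is the qualitative washing-out of sharp features described above; replacing the analytic Lorentzian with simulated spectra gives the combined FDTD/Monte Carlo approach.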
Energy Technology Data Exchange (ETDEWEB)
Al-Subeihi, Ala' A.A., E-mail: subeihi@yahoo.com [Division of Toxicology, Wageningen University, Tuinlaan 5, 6703 HE Wageningen (Netherlands); BEN-HAYYAN-Aqaba International Laboratories, Aqaba Special Economic Zone Authority (ASEZA), P. O. Box 2565, Aqaba 77110 (Jordan); Alhusainy, Wasma; Kiwamoto, Reiko; Spenkelink, Bert [Division of Toxicology, Wageningen University, Tuinlaan 5, 6703 HE Wageningen (Netherlands); Bladeren, Peter J. van [Division of Toxicology, Wageningen University, Tuinlaan 5, 6703 HE Wageningen (Netherlands); Nestec S.A., Avenue Nestlé 55, 1800 Vevey (Switzerland); Rietjens, Ivonne M.C.M.; Punt, Ans [Division of Toxicology, Wageningen University, Tuinlaan 5, 6703 HE Wageningen (Netherlands)
2015-03-01
The present study aims at predicting the level of formation of the ultimate carcinogenic metabolite of methyleugenol, 1′-sulfooxymethyleugenol, in the human population by taking variability in key bioactivation and detoxification reactions into account using Monte Carlo simulations. Depending on the metabolic route, variation was simulated based on kinetic constants obtained from incubations with a range of individual human liver fractions or by combining kinetic constants obtained for specific isoenzymes with literature-reported human variation in the activity of these enzymes. The results of the study indicate that formation of 1′-sulfooxymethyleugenol is predominantly affected by variation in i) P450 1A2-catalyzed bioactivation of methyleugenol to 1′-hydroxymethyleugenol, ii) P450 2B6-catalyzed epoxidation of methyleugenol, iii) the apparent kinetic constants for oxidation of 1′-hydroxymethyleugenol, and iv) the apparent kinetic constants for sulfation of 1′-hydroxymethyleugenol. Based on the Monte Carlo simulations a so-called chemical-specific adjustment factor (CSAF) for intraspecies variation could be derived by dividing different percentiles by the 50th percentile of the predicted population distribution for 1′-sulfooxymethyleugenol formation. The obtained CSAF value at the 90th percentile was 3.2, indicating that the default uncertainty factor of 3.16 for human variability in kinetics may adequately cover the variation within 90% of the population. Covering 99% of the population requires a larger uncertainty factor of 6.4. In conclusion, the results showed that adequate predictions of interindividual human variation can be made with Monte Carlo based PBK modeling. For methyleugenol this variation was observed to be in line with the default variation generally assumed in risk assessment.
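The CSAF derivation, dividing upper percentiles of the simulated population distribution by its median, is straightforward to sketch. The two lognormal enzyme-activity distributions below are hypothetical placeholders, not the fitted kinetic constants of the study:

```python
import numpy as np

def csaf(samples, percentile=90):
    """Chemical-specific adjustment factor: the ratio of an upper percentile
    of the simulated population distribution to its median."""
    return np.percentile(samples, percentile) / np.percentile(samples, 50)

rng = np.random.default_rng(1)
n = 100000
# Stand-in population model: metabolite formation proportional to the
# product of lognormally varying enzyme activities (hypothetical sigmas).
bioactivation = rng.lognormal(mean=0.0, sigma=0.5, size=n)
sulfation = rng.lognormal(mean=0.0, sigma=0.4, size=n)
formation = bioactivation * sulfation

print(round(csaf(formation, 90), 2), round(csaf(formation, 99), 2))
```

With the study's actual PBK outputs in place of `formation`, the same two lines yield the reported CSAF values of 3.2 (90th percentile) and 6.4 (99th percentile).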
Pande, S.; Shafiei, M.
2016-12-01
Markov chain Monte Carlo (MCMC) methods have been applied in many hydrologic studies to explore posterior parameter distributions within a Bayesian framework. Accurate estimation of posterior parameter distributions is key to reliably estimating marginal likelihood functions and hence measures of Bayesian complexity. This paper introduces an alternative to the well-known random-walk-based MCMC samplers: an Adaptive Kernel Density Independence Sampling based Monte Carlo Sampling (A-KISMCS) method. A-KISMCS uses an independence sampler with Metropolis-Hastings (M-H) updates, so that candidate observations are drawn independently of the current state of the chain, which allows efficient exploration of the target distribution. The bandwidth of the kernel density estimator is also adapted online in order to increase its accuracy and ensure fast convergence to the target distribution. The performance of A-KISMCS is tested on several case studies, including synthetic and real-world cases of hydrological modelling, and compared with Differential Evolution Adaptive Metropolis (DREAM-zs), which is fundamentally based on random-walk sampling with differential evolution. Results show that while DREAM-zs converges to slightly sharper posterior densities, A-KISMCS is slightly more efficient in tracking the mode of the posteriors.
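A minimal one-dimensional sketch of the idea, assuming a Gaussian KDE proposal with Silverman's-rule bandwidth that is refitted periodically to the chain (the target and the adaptation schedule are illustrative, not the published A-KISMCS algorithm):

```python
import numpy as np

def kde_logpdf(x, data, h):
    """Log density of a Gaussian kernel density estimate with bandwidth h."""
    z = (x - data) / h
    return np.log(np.mean(np.exp(-0.5 * z ** 2)) / (h * np.sqrt(2 * np.pi)))

def kde_sample(rng, data, h):
    """Draw one point from the KDE: pick a data point, add Gaussian noise."""
    return data[rng.integers(len(data))] + h * rng.normal()

def kde_independence_sampler(log_target, n_iter=4000, seed=0):
    """Metropolis-Hastings with an independence proposal drawn from a
    Gaussian KDE fitted to past states; the KDE adapts as the chain grows."""
    rng = np.random.default_rng(seed)
    pool = rng.normal(scale=3.0, size=200)        # crude initial proposal pool
    h = 1.06 * pool.std() * len(pool) ** -0.2     # Silverman's rule bandwidth
    chain = [0.0]
    for it in range(n_iter):
        x = chain[-1]
        cand = kde_sample(rng, pool, h)
        # Independence M-H ratio: target ratio times proposal density ratio
        log_a = (log_target(cand) - log_target(x)
                 + kde_logpdf(x, pool, h) - kde_logpdf(cand, pool, h))
        chain.append(cand if np.log(rng.random()) < log_a else x)
        if (it + 1) % 500 == 0:                   # adapt the proposal online
            pool = np.array(chain[len(chain) // 2:])
            h = 1.06 * pool.std() * len(pool) ** -0.2
    return np.array(chain)

log_target = lambda x: -0.5 * ((x - 2.0) / 0.5) ** 2   # N(2, 0.5^2) target
chain = kde_independence_sampler(log_target)
print(round(float(chain[2000:].mean()), 2))
```

Because candidates never depend on the current state, acceptance only hinges on how well the adapted KDE matches the target, which is the efficiency argument made above.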
International Nuclear Information System (INIS)
Roy, Arup Singha; Palani Selvam, T.; Raman, Anand; Raja, V.; Chaudhury, Probal
2014-01-01
Over the years, various types of tritium-in-air monitors have been designed and developed based on different principles; ionization chambers, proportional counters and scintillation detector systems are a few among them. A plastic scintillator based, flow-cell type system was developed for online monitoring of tritium in air. The scintillator mass inside the cell volume that maximizes the response of the detector system must be determined in order to obtain maximum efficiency. The present study aims to optimize the mass of the plastic scintillator film for the flow-cell based tritium monitoring instrument so that maximum efficiency is achieved. The Monte Carlo based EGSnrc code system has been used for this purpose
Li, Pu; Chen, Bing; Li, Zelin; Zheng, Xiao; Wu, Hongjing; Jing, Liang; Lee, Kenneth
2014-09-15
In this paper, a Monte Carlo simulation based two-stage adaptive resonance theory mapping (MC-TSAM) model was developed to classify a given site into distinguished zones representing different levels of offshore Oil Spill Vulnerability Index (OSVI). It consisted of an adaptive resonance theory (ART) module, an ART Mapping module, and a centroid determination module. Monte Carlo simulation was integrated with the TSAM approach to address uncertainties that widely exist in site conditions. The applicability of the proposed model was validated by classifying a large coastal area, which was surrounded by potential oil spill sources, based on 12 features. Statistical analysis of the results indicated that the classification process was affected by multiple features instead of one single feature. The classification results also provided the least or desired number of zones which can sufficiently represent the levels of offshore OSVI in an area under uncertainty and complexity, saving time and budget in spill monitoring and response. Copyright © 2014 Elsevier Ltd. All rights reserved.
Wesseling, Sebastiaan; Joles, Jaap A.; van Goor, Harry; Bluyssen, Hans A.; Kemmeren, Patrick; Holstege, Frank C.; Koomans, Hein A.; Braam, Branko
2007-01-01
Nitric oxide (NO) depletion in rats induces severe endothelial dysfunction within 4 days. Subsequently, hypertension and renal injury develop, which are ameliorated by alpha-tocopherol (VitE) cotreatment. The hypothesis of the present study was that NO synthase (NOS) inhibition induces a renal
PC-based process distribution to solve iterative Monte Carlo simulations in physical dosimetry
International Nuclear Information System (INIS)
Leal, A.; Sanchez-Doblado, F.; Perucha, M.; Rincon, M.; Carrasco, E.; Bernal, C.
2001-01-01
A distribution model to simulate physical dosimetry measurements with Monte Carlo (MC) techniques has been developed. This approach is suited to simulations in which the measurement conditions (and hence the input parameters) change continuously, such as a TPR curve or the estimation of the resolution limit of an optimal densitometer in the case of small field profiles. As a comparison, a high-resolution scan for narrow beams with no iterative process is presented. The model was installed on networked PCs without any resident software. The only requirements for these PCs were a small temporary Linux partition on the hard disk and a network connection to our server PC. (orig.)
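The same farm-out pattern, running each measurement condition as an independent Monte Carlo job on a separate worker, can be sketched with Python's multiprocessing; the pi-estimation "physics" below is a stand-in for the actual dosimetry simulation:

```python
import numpy as np
from multiprocessing import Pool

def one_condition(args):
    """One Monte Carlo run for a single measurement condition, with its own
    seed. Stand-in physics: estimate pi by rejection sampling."""
    seed, n_histories = args
    rng = np.random.default_rng(seed)
    xy = rng.random((n_histories, 2))
    return 4.0 * np.mean(np.sum(xy ** 2, axis=1) <= 1.0)

def run_distributed(n_conditions=8, n_histories=200000):
    """Farm the independent conditions out to worker processes, as the PC
    cluster farms out the points of a TPR curve."""
    jobs = [(seed, n_histories) for seed in range(n_conditions)]
    with Pool() as pool:
        return pool.map(one_condition, jobs)

if __name__ == "__main__":
    results = run_distributed()
    print(round(float(np.mean(results)), 3))
```

Each condition is embarrassingly parallel, so wall-clock time scales with the number of available PCs, which is the point of the distribution model described above.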
Monte Carlo based geometrical model for efficiency calculation of an n-type HPGe detector
Energy Technology Data Exchange (ETDEWEB)
Padilla Cabal, Fatima, E-mail: fpadilla@instec.c [Instituto Superior de Tecnologias y Ciencias Aplicadas, ' Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba); Lopez-Pino, Neivy; Luis Bernal-Castillo, Jose; Martinez-Palenzuela, Yisel; Aguilar-Mena, Jimmy; D' Alessandro, Katia; Arbelo, Yuniesky; Corrales, Yasser; Diaz, Oscar [Instituto Superior de Tecnologias y Ciencias Aplicadas, ' Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba)
2010-12-15
A procedure to optimize the geometrical model of an n-type detector is described. Sixteen lines from seven point sources (²⁴¹Am, ¹³³Ba, ²²Na, ⁶⁰Co, ⁵⁷Co, ¹³⁷Cs and ¹⁵²Eu) placed at three different source-to-detector distances (10, 20 and 30 cm) were used to calibrate a low-background gamma spectrometer between 26 and 1408 keV. Direct Monte Carlo techniques using the MCNPX 2.6 and GEANT4 9.2 codes, and a semi-empirical procedure, were performed to obtain theoretical efficiency curves. Since discrepancies were found between experimental and calculated data using the manufacturer's parameters for the detector, a detailed study of the crystal dimensions and the geometrical configuration was carried out. The relative deviation from experimental data decreased from a mean value of 18% to 4% after the parameters were optimized.
Howitz, S; Schwedas, M; Wiezorek, T; Zink, K
2017-10-12
Reference dosimetry in high-energy photon fields from clinical linear accelerators requires the determination of the beam quality specifier TPR20,10, which characterizes the relative particle flux density of the photon beam. The measurement of TPR20,10 has to be performed in a homogeneous photon beam of size 10 × 10 cm² at a focus-detector distance of 100 cm. These requirements cannot be fulfilled by TomoTherapy treatment units from Accuray. The TomoTherapy unit provides a flattening-filter-free photon fan beam with a maximum field width of 40 cm and field lengths of 1.0 cm, 2.5 cm and 5.0 cm at a focus-isocenter distance of 85 cm. For the determination of the beam quality specifier from measurements under nonstandard reference conditions, Sauer and Palmans proposed experiment-based fit functions. Moreover, Sauer recommends considering the impact of the flattening-filter-free beam on the measured data. To verify these fit functions, in the present study a Monte Carlo based model of the treatment head of a TomoTherapy HD unit was designed and commissioned with existing beam data of our clinical TomoTherapy machine. Depth dose curves and dose profiles agreed within 1.5% between experimental and Monte Carlo based data. Based on the fit functions from Sauer and Palmans, the beam quality specifier TPR20,10 was determined for field sizes 5 × 5 cm², 10 × 5 cm², 20 × 5 cm² and 40 × 5 cm² from both dosimetric measurements and Monte Carlo simulations. The mean over all experimental values resulted in TPR20,10 = 0.635 ± 0.4%. The impact of the non-homogeneous field due to the flattening-filter-free beam was negligible for field sizes below 20 × 5 cm². The beam quality specifier calculated by Monte Carlo simulations was TPR20,10 = 0.628 and TPR20,10 = 0.631 for two different calculation methods. The water-to-air stopping power ratio s_w,a depends directly on the beam quality specifier. The value determined from all experimental TPR20,10 data
Kouznetsov, A.; Cully, C. M.
2017-12-01
During enhanced magnetic activity, large ejections of energetic electrons from the radiation belts are deposited in the upper polar atmosphere, where they play important roles in its physical and chemical processes, including subionospheric VLF signal propagation. Electron deposition can affect D-region ionization, which is estimated from ionization rates derived from energy deposition. We present a model of D-region ion production caused by an arbitrary (in energy and pitch angle) distribution of fast (10 keV - 1 MeV) electrons. The model relies on a set of pre-calculated results obtained using a general Monte Carlo approach with the latest version of the MCNP6 (Monte Carlo N-Particle) code for explicit electron tracking in magnetic fields. By expressing those results as ionization yield functions, the pre-calculated results are extended to cover arbitrary magnetic field inclinations and atmospheric density profiles, allowing ionization rate altitude profiles to be computed between 20 and 200 km at any geographic point of interest and date/time by adopting results from an external atmospheric density model (e.g. NRLMSISE-00). The pre-calculated MCNP6 results are stored in a CDF (Common Data Format) file, and an IDL routine library provides an end-user interface to the model.
International Nuclear Information System (INIS)
Densmore, J.D.; Park, H.; Wollaber, A.B.; Rauenzahn, R.M.; Knoll, D.A.
2015-01-01
We present a moment-based acceleration algorithm applied to Monte Carlo simulation of thermal radiative-transfer problems. Our acceleration algorithm employs a continuum system of moments to accelerate convergence of stiff absorption–emission physics. The combination of energy-conserving tallies and the use of an asymptotic approximation in optically thick regions remedy the difficulties of local energy conservation and mitigation of statistical noise in such regions. We demonstrate the efficiency and accuracy of the developed method. We also compare directly to the standard linearization-based method of Fleck and Cummings [1]. A factor of 40 reduction in total computational time is achieved with the new algorithm for an equivalent (or more accurate) solution as compared with the Fleck–Cummings algorithm
Monte Carlo simulation based study of a proposed multileaf collimator for a telecobalt machine
International Nuclear Information System (INIS)
Sahani, G.; Dash Sharma, P. K.; Hussain, S. A.; Dutt Sharma, Sunil; Sharma, D. N.
2013-01-01
Purpose: The objective of the present work was to propose a design of a secondary multileaf collimator (MLC) for a telecobalt machine and optimize its design features through Monte Carlo simulation. Methods: The proposed MLC design consists of 72 leaves (36 leaf pairs) with additional jaws perpendicular to leaf motion, having the capability of shaping a maximum square field size of 35 × 35 cm². The projected widths at isocenter of each of the central 34 leaf pairs and 2 peripheral leaf pairs are 10 and 5 mm, respectively. The ends of the leaves and the x-jaws were optimized to obtain acceptable values of dosimetric and leakage parameters. The Monte Carlo N-Particle code was used for generating beam profiles and depth dose curves and estimating the leakage radiation through the MLC. A water phantom of dimension 50 × 50 × 40 cm³ with an array of voxels (4 × 0.3 × 0.6 cm³ = 0.72 cm³) was used for the study of the dosimetric and leakage characteristics of the MLC. Output files generated for beam profiles were exported to the PTW radiation field analyzer software through locally developed software for analysis of beam profiles in order to evaluate radiation field width, beam flatness, symmetry, and beam penumbra. Results: The optimized version of the MLC can define radiation fields of up to 35 × 35 cm² within the prescribed tolerance value of 2 mm. The flatness and symmetry were found to be well within the acceptable tolerance value of 3%. The penumbra for a 10 × 10 cm² field size is 10.7 mm, which is less than the generally acceptable value of 12 mm for a telecobalt machine. The maximum and average radiation leakage through the MLC were found to be 0.74% and 0.41%, well below the International Electrotechnical Commission recommended tolerance values of 2% and 0.75%, respectively. The maximum leakage through the leaf ends in the closed condition was observed to be 8.6%, which is less than the values reported for other MLCs designed for medical linear
Sustainable Queuing-Network Design for Airport Security Based on the Monte Carlo Method
Directory of Open Access Journals (Sweden)
Xiangqian Xu
2018-01-01
Full Text Available The design of airport queuing networks is currently a significant research field. Many factors must be considered in order to achieve optimized strategies, including passenger flow volume, boarding time, and the boarding order of passengers. Optimizing these factors leads to the sustainable development of the queuing network, which currently faces a few difficulties. In particular, high variance in checkpoint lines can be extremely costly to passengers, who may arrive unduly early or possibly miss their scheduled flights. In this article, the Monte Carlo method is used to design the queuing network so as to achieve sustainable development. Thereafter, a network diagram is used to determine the critical working point and to design a structurally and functionally sustainable network. Finally, a case study of a sustainable queuing-network design in the airport is conducted to verify the efficiency of the proposed model. Specifically, three sustainable queuing-network design solutions are proposed, all of which not only maintain the same standards of security, but also increase checkpoint throughput and reduce the variance of passenger waiting time.
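The checkpoint queuing simulation described in the abstract can be sketched with a simple Monte Carlo model. All rates, lane counts, and the exponential inter-arrival/service assumptions below are illustrative placeholders, not values from the paper:

```python
import random
import statistics

def simulate_checkpoint(n_passengers=10000, arrival_rate=1.0,
                        service_rate=1.2, n_lanes=2, seed=42):
    """Single-run Monte Carlo of a multi-lane FIFO security checkpoint.

    Passengers arrive with exponential inter-arrival times and join the
    lane that frees up earliest. Returns the mean and variance of the
    waiting times, the quantities the study seeks to reduce.
    """
    rng = random.Random(seed)
    lane_free = [0.0] * n_lanes          # time at which each lane next frees up
    t = 0.0
    waits = []
    for _ in range(n_passengers):
        t += rng.expovariate(arrival_rate)               # arrival time
        lane = min(range(n_lanes), key=lambda i: lane_free[i])
        start = max(t, lane_free[lane])                  # service start
        waits.append(start - t)                          # time spent queuing
        lane_free[lane] = start + rng.expovariate(service_rate)
    return statistics.mean(waits), statistics.variance(waits)

mean_w, var_w = simulate_checkpoint()
print(f"mean wait {mean_w:.2f}, variance {var_w:.2f}")
```

Comparing such runs across alternative lane layouts is one way to rank candidate queuing-network designs by throughput and waiting-time variance.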
Monte Carlo based unit commitment procedures for the deregulated market environment
International Nuclear Information System (INIS)
Granelli, G.P.; Marannino, P.; Montagna, M.; Zanellini, F.
2006-01-01
The unit commitment problem, originally conceived in the framework of short term operation of vertically integrated utilities, needs a thorough re-examination in the light of the ongoing transition towards the open electricity market environment. In this work the problem is re-formulated to adapt unit commitment to the viewpoint of a generation company (GENCO) which is no longer bound to satisfy its load, but is willing to maximize its profits. Moreover, with reference to the present day situation in many countries, the presence of a GENCO (the former monopolist) which is in the position of exerting market power requires a careful analysis to be carried out considering the different perspectives of a price taker and of a price maker GENCO. Unit commitment is thus shown to lead to two distinct, yet slightly different problems. The unavoidable uncertainties in load profile and price behaviour over the time period of interest are also taken into account by means of a Monte Carlo simulation. Both the forecasted loads and prices are handled as random variables with a normal multivariate distribution. The correlation between the random input variables corresponding to successive hours of the day was considered by carrying out a statistical analysis of actual load and price data. The whole procedure was tested making use of reasonable approximations of the actual data of the thermal generation units available to some actual GENCOs operating in Italy. (author)
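The abstract's treatment of hourly loads and prices as a correlated multivariate normal can be illustrated as follows. The forecast means, standard deviations, and the identity correlation used in the demo are placeholder values, not the paper's data:

```python
import numpy as np

def sample_load_price(mean_load, mean_price, corr, sigma_load, sigma_price,
                      n_scenarios=1000, seed=0):
    """Draw correlated hourly load/price scenarios from a multivariate normal.

    mean_load, mean_price: length-H forecast arrays (H hours).
    corr: (2H x 2H) correlation matrix across all load and price variables,
    e.g. estimated from historical hourly data as described in the abstract.
    """
    rng = np.random.default_rng(seed)
    mu = np.concatenate([mean_load, mean_price])
    sig = np.concatenate([sigma_load, sigma_price])
    cov = corr * np.outer(sig, sig)          # covariance from correlation
    draws = rng.multivariate_normal(mu, cov, size=n_scenarios)
    H = len(mean_load)
    return draws[:, :H], draws[:, H:]        # (load scenarios, price scenarios)

H = 24  # one day of hourly values; numbers below are purely illustrative
loads, prices = sample_load_price(
    np.full(H, 500.0), np.full(H, 40.0),
    np.eye(2 * H), np.full(H, 25.0), np.full(H, 5.0))
```

Each sampled scenario would then feed one deterministic unit-commitment solve, and profits are averaged over scenarios.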
Monte Carlo based water/medium stopping-power ratios for various ICRP and ICRU tissues
International Nuclear Information System (INIS)
Fernandez-Varea, Jose M; Carrasco, Pablo; Panettieri, Vanessa; Brualla, Lorenzo
2007-01-01
Water/medium stopping-power ratios, s_w,m, have been calculated for several ICRP and ICRU tissues, namely adipose tissue, brain, cortical bone, liver, lung (deflated and inflated) and spongiosa. The considered clinical beams were 6 and 18 MV x-rays and the field size was 10 × 10 cm². Fluence distributions were scored at a depth of 10 cm using the Monte Carlo code PENELOPE. The collision stopping powers for the studied tissues were evaluated employing the formalism of ICRU Report 37 (1984 Stopping Powers for Electrons and Positrons (Bethesda, MD: ICRU)). The Bragg-Gray values of s_w,m calculated with these ingredients range from about 0.98 (adipose tissue) to nearly 1.14 (cortical bone), displaying a rather small variation with beam quality. Excellent agreement, to within 0.1%, is found with stopping-power ratios reported by Siebers et al (2000a Phys. Med. Biol. 45 983-95) for cortical bone, inflated lung and spongiosa. In the case of cortical bone, s_w,m changes by approximately 2% when either ICRP or ICRU compositions are adopted, whereas the stopping-power ratios of lung, brain and adipose tissue are less sensitive to the selected composition. The mass density of lung also influences the calculated values of s_w,m, reducing them by around 1% (6 MV) and 2% (18 MV) when going from deflated to inflated lung
Radiation dose performance in the triple-source CT based on a Monte Carlo method
Yang, Zhenyu; Zhao, Jun
2012-10-01
A multiple-source structure is promising in the development of computed tomography, for it could effectively eliminate motion artifacts in cardiac scanning and other time-critical implementations requiring high temporal resolution. However, concerns about dose performance overshadow this technique, as few evaluations of the dose performance of multiple-source CT have been reported. Our experiments focus on the dose performance of one specific multiple-source CT geometry, the triple-source CT scanner, whose theory and implementation have already been well established and verified by our previous work. We have modeled the triple-source CT geometry with the EGSnrc Monte Carlo radiation transport code system, and simulated CT examinations of a digital chest phantom with our modified version of the software, using an x-ray spectrum based on the data of a physical tube. A single-source CT geometry is also modeled and tested for evaluation and comparison. The absorbed dose of each organ is calculated according to its real physical characteristics. Results show that the absorbed radiation dose of organs with the triple-source CT is almost equal to that with the single-source CT system. Given its advantage in temporal resolution, the triple-source CT would be a better choice for x-ray cardiac examination.
Accuracy assessment of a new Monte Carlo based burnup computer code
International Nuclear Information System (INIS)
El Bakkari, B.; ElBardouni, T.; Nacir, B.; ElYounoussi, C.; Boulaich, Y.; Meroun, O.; Zoubair, M.; Chakir, E.
2012-01-01
Highlights: ► A new burnup code called BUCAL1 was developed. ► BUCAL1 uses the MCNP tallies directly in the calculation of the isotopic inventories. ► Validation of BUCAL1 was done by code-to-code comparison using the VVER-1000 LEU benchmark assembly. ► Differences from the benchmark values were found to be ±600 pcm for k∞ and ±6% for the isotopic compositions. ► The effect on reactivity due to the burnup of Gd isotopes is well reproduced by BUCAL1. - Abstract: This study aims to test the suitability and accuracy of a new home-made Monte Carlo burnup code, called BUCAL1, by investigating and predicting the neutronic behavior of a “VVER-1000 LEU Assembly Computational Benchmark” at the lattice level. BUCAL1 uses MCNP tally information directly in the computation; this approach allows performing straightforward and accurate calculations without having to use the calculated group fluxes to perform transmutation analysis in a separate code. The ENDF/B-VII evaluated nuclear data library was used in these calculations. Processing of the data library is performed using recent updates of the NJOY99 system. Code-to-code comparisons with the reported OECD/NEA results are presented and analyzed.
Calculation of Credit Valuation Adjustment Based on Least Square Monte Carlo Methods
Directory of Open Access Journals (Sweden)
Qian Liu
2015-01-01
Full Text Available Counterparty credit risk has become one of the highest-profile risks facing participants in the financial markets. Despite this, relatively little is known about how counterparty credit risk is actually priced mathematically. We examine this issue using interest rate swaps. This widely traded financial product allows us to identify well the risk profiles of both institutions and their counterparties. Concretely, the Hull-White model for the rate and a mean-reverting model for the default intensity have proven to correspond with reality and to be well suited for financial institutions. Besides, we find that the least squares Monte Carlo method is quite efficient in the calculation of the credit valuation adjustment (CVA for short), as it avoids the redundant step of generating inner scenarios. As a result, it accelerates the convergence speed of the CVA estimators. In the second part, we propose a new method to calculate bilateral CVA that avoids the double counting found in the existing bibliography, where several copula functions are adopted to describe the dependence of the two first-to-default times.
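The core idea of least squares Monte Carlo for CVA — regressing simulated future values on the current state so that no nested inner scenarios are needed — can be sketched as below. The polynomial basis and the toy short-rate/swap-value model are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def lsmc_expected_exposure(state, values, degree=2):
    """Estimate the conditional expected positive exposure by least-squares
    regression of discounted future values on the simulated state variable.

    state:  (n_paths,) simulated state at time t (e.g. the short rate)
    values: (n_paths,) discounted swap values at time t along each path
    Returns the regressed positive exposure per path, the ingredient that
    enters the CVA integral, without any inner simulation.
    """
    X = np.vander(state, degree + 1)          # polynomial regression basis
    beta, *_ = np.linalg.lstsq(X, values, rcond=None)
    cond_value = X @ beta                     # regressed conditional value
    return np.maximum(cond_value, 0.0)        # positive part for CVA

# Toy data: swap value roughly linear in the rate, plus noise (hypothetical).
rng = np.random.default_rng(1)
r = rng.normal(0.03, 0.01, 5000)
v = 100.0 * (r - 0.03) + rng.normal(0.0, 0.5, 5000)
ee = lsmc_expected_exposure(r, v)
```

Repeating this regression at each exposure date and weighting by default probabilities and discount factors gives a CVA estimator without the quadratic cost of inner scenarios.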
Hrivnacova, I; Berejnov, V V; Brun, R; Carminati, F; Fassò, A; Futo, E; Gheata, A; Caballero, I G; Morsch, Andreas
2003-01-01
The concept of Virtual Monte Carlo (VMC) has been developed by the ALICE Software Project to allow different Monte Carlo simulation programs to run without changing the user code, such as the geometry definition, the detector response simulation or input and output formats. Recently, the VMC classes have been integrated into the ROOT framework, and the other relevant packages have been separated from the AliRoot framework and can be used individually by any other HEP project. The general concept of the VMC and its set of base classes provided in ROOT will be presented. Existing implementations for Geant3, Geant4 and FLUKA and simple examples of usage will be described.
Kohno, R; Hotta, K; Nishioka, S; Matsubara, K; Tansho, R; Suzuki, T
2011-11-21
We implemented the simplified Monte Carlo (SMC) method on a graphics processing unit (GPU) architecture under the compute unified device architecture (CUDA) platform developed by NVIDIA. The GPU-based SMC was clinically applied for four patients with head and neck, lung, or prostate cancer. The results were compared to those obtained by a traditional CPU-based SMC with respect to the computation time and discrepancy. In the CPU- and GPU-based SMC calculations, the estimated mean statistical errors of the calculated doses in the planning target volume region were within 0.5% rms. The dose distributions calculated by the GPU- and CPU-based SMCs were similar, within statistical errors. The GPU-based SMC showed 12.30-16.00 times faster performance than the CPU-based SMC. The computation time per beam arrangement using the GPU-based SMC for the clinical cases ranged from 9 to 67 s. The results demonstrate the successful application of the GPU-based SMC to clinical proton treatment planning.
The MC21 Monte Carlo Transport Code
International Nuclear Information System (INIS)
Sutton TM; Donovan TJ; Trumbull TH; Dobreff PS; Caro E; Griesheimer DP; Tyburski LJ; Carpenter DC; Joo H
2007-01-01
MC21 is a new Monte Carlo neutron and photon transport code currently under joint development at the Knolls Atomic Power Laboratory and the Bettis Atomic Power Laboratory. MC21 is the Monte Carlo transport kernel of the broader Common Monte Carlo Design Tool (CMCDT), which is also currently under development. The vision for CMCDT is to provide an automated, computer-aided modeling and post-processing environment integrated with a Monte Carlo solver that is optimized for reactor analysis. CMCDT represents a strategy to push the Monte Carlo method beyond its traditional role as a benchmarking tool or ''tool of last resort'' and into a dominant design role. This paper describes various aspects of the code, including the neutron physics and nuclear data treatments, the geometry representation, and the tally and depletion capabilities
Bznuni, S A; Zhamkochyan, V M; Polanski, A; Sosnin, A N; Khudaverdyan, A H
2001-01-01
The parameters are studied of a subcritical cascade reactor driven by a proton accelerator and based on a primary lead-bismuth target, a main reactor constructed analogously to the molten salt breeder reactor (MSBR) core, and a booster reactor analogous to the core of the BN-350 liquid-metal-cooled fast breeder reactor (LMFBR). It is shown by means of Monte Carlo modeling that the reactor under study provides safe operation modes (k_eff = 0.94-0.98), is capable of transmuting radioactive nuclear waste effectively, and reduces the requirements on the accelerator beam current by an order of magnitude. Calculations show that the maximal neutron flux is 10^14 cm^-2·s^-1 in the thermal zone and 5.12×10^15 cm^-2·s^-1 in the fast booster zone, at k_eff = 0.98 and a proton beam current I = 2.1 mA.
Two schemes for quantitative photoacoustic tomography based on Monte Carlo simulation
International Nuclear Information System (INIS)
Liu, Yubin; Yuan, Zhen; Jiang, Huabei
2016-01-01
Purpose: The aim of this study was to develop novel methods for photoacoustically determining the optical absorption coefficient of biological tissues using Monte Carlo (MC) simulation. Methods: In this study, the authors propose two quantitative photoacoustic tomography (PAT) methods for mapping the optical absorption coefficient. The reconstruction methods combine conventional PAT with MC simulation in a novel way to determine the optical absorption coefficient of biological tissues or organs. Specifically, the authors’ two schemes were theoretically and experimentally examined using simulations, tissue-mimicking phantoms, ex vivo, and in vivo tests. In particular, the authors explored these methods using several objects with different absorption contrasts embedded in turbid media and by using high-absorption media when the diffusion approximation was not effective at describing the photon transport. Results: The simulations and experimental tests showed that the reconstructions were quantitatively accurate in terms of the locations, sizes, and optical properties of the targets. The positions of the recovered targets were assessed by the property profiles, where the authors discovered that the off-center error was less than 0.1 mm for the circular target. Meanwhile, the sizes and quantitative optical properties of the targets were quantified by estimating the full width at half maximum of the optical absorption property. Interestingly, for the reconstructed sizes, the authors discovered that the errors ranged from 0 for relatively small-size targets to 26% for relatively large-size targets, whereas for the recovered optical properties, the errors ranged from 0% to 12.5% for different cases. Conclusions: The authors found that their methods can quantitatively reconstruct absorbing objects of different sizes and optical contrasts even when the diffusion approximation is unable to accurately describe the photon propagation in biological tissues. In particular, their
GPU-BASED MONTE CARLO DUST RADIATIVE TRANSFER SCHEME APPLIED TO ACTIVE GALACTIC NUCLEI
International Nuclear Information System (INIS)
Heymann, Frank; Siebenmorgen, Ralf
2012-01-01
A three-dimensional parallel Monte Carlo (MC) dust radiative transfer code is presented. To overcome the huge computing-time requirements of MC treatments, the computational power of vectorized hardware is used, utilizing either multi-core computer power or graphics processing units. The approach is a self-consistent way to solve the radiative transfer equation in arbitrary dust configurations. The code calculates the equilibrium temperatures of two populations of large grains and stochastically heated polycyclic aromatic hydrocarbons. Anisotropic scattering is treated by applying the Henyey-Greenstein phase function. The spectral energy distribution (SED) of the object is derived at low spatial resolution by a photon counting procedure and at high spatial resolution by a vectorized ray tracer. The latter allows computation of high signal-to-noise images of the objects at arbitrary frequencies and viewing angles. We test the robustness of our approach against other radiative transfer codes. The SED and dust temperatures of one- and two-dimensional benchmarks are reproduced at high precision. The parallelization capability of various MC algorithms is analyzed and included in our treatment. We utilize the Lucy algorithm for the optically thin case where the Poisson noise is high, the iteration-free Bjorkman and Wood method to reduce the calculation time, and the Fleck and Canfield diffusion approximation for extremely optically thick cells. The code is applied to model the appearance of active galactic nuclei (AGNs) at optical and infrared wavelengths. The AGN torus is clumpy and includes fluffy composite grains of various sizes made up of silicates and carbon. The dependence of the SED on the number of clumps in the torus and the viewing angle is studied. The appearance of the 10 μm silicate features in absorption or emission is discussed. The SED of the radio-loud quasar 3C 249.1 is fit by the AGN model and a cirrus component to account for the far-infrared emission.
Messina, Luca; Castin, Nicolas; Domain, Christophe; Olsson, Pär
2017-02-01
The quality of kinetic Monte Carlo (KMC) simulations of microstructure evolution in alloys relies on the parametrization of point-defect migration rates, which are complex functions of the local chemical composition and can be calculated accurately with ab initio methods. However, constructing reliable models that ensure the best possible transfer of physical information from ab initio to KMC is a challenging task. This work presents an innovative approach, where the transition rates are predicted by artificial neural networks trained on a database of 2000 migration barriers, obtained with density functional theory (DFT) in place of interatomic potentials. The method is tested on copper precipitation in thermally aged iron alloys, by means of a hybrid atomistic-object KMC model. For the object part of the model, the stability and mobility properties of copper-vacancy clusters are analyzed by means of independent atomistic KMC simulations, driven by the same neural networks. The cluster diffusion coefficients and mean free paths are found to increase with size, confirming the dominant role of coarsening of medium- and large-sized clusters in the precipitation kinetics. The evolution under thermal aging is in better agreement with experiments with respect to a previous interatomic-potential model, especially concerning the experiment time scales. However, the model underestimates the solubility of copper in iron due to the excessively high solution energy predicted by the chosen DFT method. Nevertheless, this work proves the capability of neural networks to transfer complex ab initio physical properties to higher-scale models, and facilitates the extension to systems with increasing chemical complexity, setting the ground for reliable microstructure evolution simulations in a wide range of alloys and applications.
Peak Skin and Eye Lens Radiation Dose From Brain Perfusion CT Based on Monte Carlo Simulation
Zhang, Di; Cagnon, Chris H.; Pablo Villablanca, J.; McCollough, Cynthia H.; Cody, Dianna D.; Stevens, Donna M.; Zankl, Maria; Demarco, John J.; Turner, Adam C.; Khatonabadi, Maryam; McNitt-Gray, Michael F.
2014-01-01
OBJECTIVE. The purpose of our study was to accurately estimate the radiation dose to skin and the eye lens from clinical CT brain perfusion studies, investigate how well scanner output (expressed as volume CT dose index [CTDIvol]) matches these estimated doses, and investigate the efficacy of eye lens dose reduction techniques. MATERIALS AND METHODS. Peak skin dose and eye lens dose were estimated using Monte Carlo simulation methods on a voxelized patient model and 64-MDCT scanners from four major manufacturers. A range of clinical protocols was evaluated. CTDIvol for each scanner was obtained from the scanner console. Dose reduction to the eye lens was evaluated for various gantry tilt angles as well as scan locations. RESULTS. Peak skin dose and eye lens dose ranged from 81 mGy to 348 mGy, depending on the scanner and protocol used. Peak skin dose and eye lens dose were observed to be 66–79% and 59–63%, respectively, of the CTDIvol values reported by the scanners. The eye lens dose was significantly reduced when the eye lenses were not directly irradiated. CONCLUSION. CTDIvol should not be interpreted as patient dose; this study has shown it to overestimate dose to the skin or eye lens. These results may be used to provide more accurate estimates of actual dose to ensure that protocols are operated safely below thresholds. Tilting the gantry or moving the scanning region further away from the eyes are effective for reducing lens dose in clinical practice. These actions should be considered when they are consistent with the clinical task and patient anatomy. PMID:22268186
Peak skin and eye lens radiation dose from brain perfusion CT based on Monte Carlo simulation.
Zhang, Di; Cagnon, Chris H; Villablanca, J Pablo; McCollough, Cynthia H; Cody, Dianna D; Stevens, Donna M; Zankl, Maria; Demarco, John J; Turner, Adam C; Khatonabadi, Maryam; McNitt-Gray, Michael F
2012-02-01
The purpose of our study was to accurately estimate the radiation dose to skin and the eye lens from clinical CT brain perfusion studies, investigate how well scanner output (expressed as volume CT dose index [CTDI(vol)]) matches these estimated doses, and investigate the efficacy of eye lens dose reduction techniques. Peak skin dose and eye lens dose were estimated using Monte Carlo simulation methods on a voxelized patient model and 64-MDCT scanners from four major manufacturers. A range of clinical protocols was evaluated. CTDI(vol) for each scanner was obtained from the scanner console. Dose reduction to the eye lens was evaluated for various gantry tilt angles as well as scan locations. Peak skin dose and eye lens dose ranged from 81 mGy to 348 mGy, depending on the scanner and protocol used. Peak skin dose and eye lens dose were observed to be 66-79% and 59-63%, respectively, of the CTDI(vol) values reported by the scanners. The eye lens dose was significantly reduced when the eye lenses were not directly irradiated. CTDI(vol) should not be interpreted as patient dose; this study has shown it to overestimate dose to the skin or eye lens. These results may be used to provide more accurate estimates of actual dose to ensure that protocols are operated safely below thresholds. Tilting the gantry or moving the scanning region further away from the eyes are effective for reducing lens dose in clinical practice. These actions should be considered when they are consistent with the clinical task and patient anatomy.
Monte Carlo-based diode design for correction-less small field dosimetry.
Charles, P H; Crowe, S B; Kairn, T; Knight, R T; Hill, B; Kenny, J; Langton, C M; Trapp, J V
2013-07-07
Due to their small collecting volume, diodes are commonly used in small field dosimetry. However, the relative sensitivity of a diode increases with decreasing small field size. Conversely, small air gaps have been shown to cause a significant decrease in the sensitivity of a detector as the field size is decreased. Therefore, this study uses Monte Carlo simulations to investigate introducing air upstream of diodes such that they measure with a constant sensitivity across all field sizes in small field dosimetry. Varying thicknesses of air were introduced at the upstream end of two commercial diodes (PTW 60016 photon diode and PTW 60017 electron diode), as well as a theoretical unenclosed silicon chip, using field sizes as small as 5 mm × 5 mm. The metric D(w,Q)/D(Det,Q) used in this study represents the ratio of the dose to a point of water to the dose to the diode active volume, for a particular field size and location. The optimal thickness of air required to provide a constant sensitivity across all small field sizes was found by plotting D(w,Q)/D(Det,Q) as a function of introduced air gap size for various field sizes, and finding the intersection point of these plots; that is, the point at which D(w,Q)/D(Det,Q) was constant for all field sizes. The optimal thickness of air was calculated to be 3.3, 1.15 and 0.10 mm for the photon diode, electron diode and unenclosed silicon chip, respectively. The variation in these results was due to the different design of each detector. When calculated with the new diode design incorporating the upstream air gap, k(f(clin),f(msr))(Q(clin),Q(msr)) was equal to unity to within statistical uncertainty (0.5%) for all three diodes. Cross-axis profile measurements were also improved with the new detector design. The upstream air gap could be implemented on the commercial diodes via a cap consisting of the air cavity surrounded by water-equivalent material. The results for the unenclosed silicon chip show that an ideal small
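The analysis step of locating the intersection point of the D(w,Q)/D(Det,Q)-versus-air-gap curves can be illustrated numerically. The gap grid and toy ratio curves below are invented for demonstration and do not reproduce the paper's Monte Carlo data:

```python
import numpy as np

def optimal_air_gap(gaps, ratios):
    """Find the air-gap thickness at which D(w,Q)/D(Det,Q) is most nearly
    constant across field sizes, i.e. the 'intersection point' of the curves.

    gaps:   (n_gaps,) simulated air-gap thicknesses (mm)
    ratios: (n_fields, n_gaps) dose ratio per field size and gap
    Returns the gap (on a dense interpolation grid) minimizing the spread
    between field sizes.
    """
    fine = np.linspace(gaps[0], gaps[-1], 1001)
    interp = np.array([np.interp(fine, gaps, row) for row in ratios])
    spread = interp.max(axis=0) - interp.min(axis=0)   # spread across fields
    return fine[np.argmin(spread)]

gaps = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
# Toy curves for three field sizes, constructed so they all cross near 3 mm.
ratios = np.vstack([1.00 + 0.01 * (gaps - 3.0) * s for s in (0.5, 1.0, 2.0)])
best = optimal_air_gap(gaps, ratios)
```

With real simulation output, `ratios` would hold one row per field size, and the returned gap corresponds to the constant-sensitivity design point.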
Monte Carlo-based development of a shield and total background estimation for the COBRA experiment
International Nuclear Information System (INIS)
Heidrich, Nadine
2014-11-01
The COBRA experiment aims at the measurement of neutrinoless double beta decay and thus at the determination of the effective Majorana mass of the neutrino. To be competitive with other next-generation experiments, the background rate has to be of the order of 10^-3 counts/kg/keV/yr, which is a challenging criterion. This thesis deals with the development of a shield design and the calculation of the expected total background rate for the large-scale COBRA experiment containing 13824 CdZnTe detectors of 6 cm³ each. For the shield development, single-layer and multi-layer shields were investigated and a shield design was optimized with respect to high-energy muon-induced neutrons. The best design was determined to be the combination of 10 cm of boron-doped polyethylene as the outermost layer, 20 cm of lead, and 10 cm of copper as the innermost layer. It showed the best performance regarding neutron attenuation as well as (n, γ) self-shielding effects, leading to a negligible background rate of less than 2×10^-6 counts/kg/keV/yr. Additionally, the shield, with a thickness of 40 cm, is compact and cost-effective. In the next step, the expected total background rate was computed taking into account individual setup parts and various background sources including natural and man-made radioactivity, cosmic-ray-induced background and thermal neutrons. Furthermore, a comparison of measured data from the COBRA demonstrator setup with Monte Carlo data was used to calculate reliable contamination levels of the individual setup parts. The calculation was performed conservatively to prevent an underestimation. In addition, the contributions of the individual detector parts and background sources to the total background rate were investigated. The main portion arises from the Delrin support structure and the Glyptal lacquer, followed by the circuit board of the high-voltage supply. Most background events originate from particles with a quantity of 99% in total. Regarding surface events a contribution of 26
International Nuclear Information System (INIS)
Turner, Adam C.; Zhang Di; Kim, Hyun J.; DeMarco, John J.; Cagnon, Chris H.; Angel, Erin; Cody, Dianna D.; Stevens, Donna M.; Primak, Andrew N.; McCollough, Cynthia H.; McNitt-Gray, Michael F.
2009-01-01
The purpose of this study was to present a method for generating x-ray source models for performing Monte Carlo (MC) radiation dosimetry simulations of multidetector row CT (MDCT) scanners. These so-called "equivalent" source models consist of an energy spectrum and filtration description that are generated based wholly on the measured values and can be used in place of proprietary manufacturer's data for scanner-specific MDCT MC simulations. Required measurements include the half value layers (HVL1 and HVL2) and the bowtie profile (exposure values across the fan beam) for the MDCT scanner of interest. Using these measured values, a method was described (a) to numerically construct a spectrum with the calculated HVLs approximately equal to those measured (equivalent spectrum) and then (b) to determine a filtration scheme (equivalent filter) that attenuates the equivalent spectrum in a similar fashion as the actual filtration attenuates the actual x-ray beam, as measured by the bowtie profile measurements. Using this method, two types of equivalent source models were generated: One using a spectrum based on both HVL1 and HVL2 measurements and its corresponding filtration scheme and the second consisting of a spectrum based only on the measured HVL1 and its corresponding filtration scheme. Finally, a third type of source model was built based on the spectrum and filtration data provided by the scanner's manufacturer. MC simulations using each of these three source model types were evaluated by comparing the accuracy of multiple CT dose index (CTDI) simulations to measured CTDI values for 64-slice scanners from the four major MDCT manufacturers. Comprehensive evaluations were carried out for each scanner using each kVp and bowtie filter combination available. CTDI experiments were performed for both head (16 cm in diameter) and body (32 cm in diameter) CTDI phantoms using both central and peripheral measurement positions. Both equivalent source model types
Turner, Adam C; Zhang, Di; Kim, Hyun J; DeMarco, John J; Cagnon, Chris H; Angel, Erin; Cody, Dianna D; Stevens, Donna M; Primak, Andrew N; McCollough, Cynthia H; McNitt-Gray, Michael F
2009-06-01
The purpose of this study was to present a method for generating x-ray source models for performing Monte Carlo (MC) radiation dosimetry simulations of multidetector row CT (MDCT) scanners. These so-called "equivalent" source models consist of an energy spectrum and filtration description that are generated based wholly on the measured values and can be used in place of proprietary manufacturer's data for scanner-specific MDCT MC simulations. Required measurements include the half value layers (HVL1 and HVL2) and the bowtie profile (exposure values across the fan beam) for the MDCT scanner of interest. Using these measured values, a method was described (a) to numerically construct a spectrum with the calculated HVLs approximately equal to those measured (equivalent spectrum) and then (b) to determine a filtration scheme (equivalent filter) that attenuates the equivalent spectrum in a similar fashion as the actual filtration attenuates the actual x-ray beam, as measured by the bowtie profile measurements. Using this method, two types of equivalent source models were generated: One using a spectrum based on both HVL1 and HVL2 measurements and its corresponding filtration scheme and the second consisting of a spectrum based only on the measured HVL1 and its corresponding filtration scheme. Finally, a third type of source model was built based on the spectrum and filtration data provided by the scanner's manufacturer. MC simulations using each of these three source model types were evaluated by comparing the accuracy of multiple CT dose index (CTDI) simulations to measured CTDI values for 64-slice scanners from the four major MDCT manufacturers. Comprehensive evaluations were carried out for each scanner using each kVp and bowtie filter combination available. CTDI experiments were performed for both head (16 cm in diameter) and body (32 cm in diameter) CTDI phantoms using both central and peripheral measurement positions. Both equivalent source model types result in
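The half-value-layer side of the equivalent-source construction can be illustrated with a small numerical sketch: given a candidate spectrum and aluminum attenuation coefficients, the HVL follows from bisection on the transmitted, energy-weighted fluence. The spectrum and μ values below are toy numbers, and the energy weighting is only a crude kerma proxy (a real calculation would use mass energy-absorption coefficients):

```python
import numpy as np

def hvl_from_spectrum(energies, fluence, mu_al):
    """Compute the first half-value layer (mm Al) of a polyenergetic spectrum
    by bisection on the transmitted, energy-weighted fluence.

    energies: (n,) photon energies; fluence: (n,) relative fluence
    mu_al:    (n,) linear attenuation coefficient of Al per energy (1/mm)
    """
    def transmitted(t):
        # Beer-Lambert attenuation of each spectral bin through t mm of Al
        return np.sum(fluence * energies * np.exp(-mu_al * t))

    target = 0.5 * transmitted(0.0)
    lo, hi = 0.0, 50.0                        # bracketing thicknesses (mm)
    for _ in range(60):                       # bisection to machine precision
        mid = 0.5 * (lo + hi)
        if transmitted(mid) > target:
            lo = mid                          # still above half: go thicker
        else:
            hi = mid
    return 0.5 * (lo + hi)

E = np.array([40.0, 60.0, 80.0, 100.0])      # keV, hypothetical bins
phi = np.array([1.0, 2.0, 2.0, 1.0])          # relative fluence, toy values
mu = np.array([0.15, 0.08, 0.06, 0.05])       # 1/mm, illustrative values
hvl1 = hvl_from_spectrum(E, phi, mu)
```

Constructing an equivalent spectrum then amounts to adjusting candidate spectra until the computed HVL1 (and optionally HVL2, evaluated behind one HVL1 of Al) matches the measured values.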
Wang, T.; Barbero, M.; Berdalovic, I.; Bespin, C.; Bhat, S.; Breugnon, P.; Caicedo, I.; Cardella, R.; Chen, Z.; Degerli, Y.; Egidos, N.; Godiot, S.; Guilloux, F.; Hemperek, T.; Hirono, T.; Krüger, H.; Kugathasan, T.; Hügging, F.; Marin Tobon, C. A.; Moustakas, K.; Pangaud, P.; Schwemling, P.; Pernegger, H.; Pohl, D.-L.; Rozanov, A.; Rymaszewski, P.; Snoeys, W.; Wermes, N.
2018-03-01
Depleted monolithic active pixel sensors (DMAPS), which exploit high-voltage and/or high-resistivity add-ons of modern CMOS technologies to achieve substantial depletion in the sensing volume, have proven to achieve the high radiation tolerance required by ATLAS in the high-luminosity LHC era. DMAPS integrating fast readout architectures are currently being developed as promising candidates for the outer pixel layers of the future ATLAS Inner Tracker, which will be installed during the phase II upgrade of ATLAS around the year 2025. In this work, two DMAPS prototype designs, named LF-Monopix and TJ-Monopix, are presented. LF-Monopix was fabricated in the LFoundry 150 nm CMOS technology, and TJ-Monopix has been designed in the TowerJazz 180 nm CMOS technology. Both chips employ the same readout architecture, i.e. the column drain architecture, whereas different sensor implementation concepts are pursued. The paper gives a joint description of the two prototypes, so that their technical differences and challenges can be addressed in direct comparison. First measurement results for LF-Monopix will also be shown, demonstrating for the first time a fully functional fast-readout DMAPS prototype implemented in the LFoundry technology.
Bergaoui, K; Reguigui, N; Gary, C K; Brown, C; Cremer, J T; Vainionpaa, J H; Piestrup, M A
2014-12-01
An explosive detection system based on a deuterium-deuterium (D-D) neutron generator has been simulated using the Monte Carlo N-Particle transport code (MCNP5). Nuclear-based explosive detection methods can detect explosives by identifying their elemental components, especially nitrogen. Thermal neutron capture reactions have been used for detecting the prompt gamma emission (10.82 MeV) following radiative neutron capture by ¹⁴N nuclei. The explosive detection system was built around a fully high-voltage-shielded, axial D-D neutron generator with a radio frequency (RF) driven ion source and a nominal yield of about 10¹⁰ fast neutrons per second (E = 2.5 MeV). Polyethylene and paraffin were used as moderators, with borated polyethylene and lead as neutron and gamma-ray shielding, respectively. The shape and thickness of the moderators and shields were optimized to produce the highest thermal neutron flux at the position of the explosive and the minimum total dose at the outer surfaces of the explosive detection system walls. In addition, simulation of the response functions of NaI-, BGO-, and LaBr3-based γ-ray detectors to different explosives is described. Copyright © 2014 Elsevier Ltd. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Kanjilal, Oindrila, E-mail: oindrila@civil.iisc.ernet.in; Manohar, C.S., E-mail: manohar@civil.iisc.ernet.in
2017-07-15
The study considers the problem of simulation-based time-variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of the calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations. - Highlights: • The distance-minimizing control forces minimize a bound on the sampling variance. • Establishing Girsanov controls via solution of a two-point boundary value problem. • Girsanov controls via Volterra's series representation for the transfer functions.
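The variance-reduction idea behind distance-minimizing controls can be seen in a much simpler static setting: importance sampling of a Gaussian tail probability with the proposal density shifted to the FORM design point, reweighting each sample by the likelihood ratio. This is only a toy analogue of the paper's Girsanov construction for dynamical systems; the target event and all numbers are illustrative.

```python
import math
import random

def failure_prob_is(b, n=200_000, seed=1):
    """Estimate P(X > b) for X ~ N(0,1) by sampling from N(b,1) and
    reweighting with the likelihood ratio N(0,1)/N(b,1)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(b, 1.0)               # proposal centred at design point b
        if x > b:
            total += math.exp(0.5 * b * b - b * x)  # exact density ratio
    return total / n

est = failure_prob_is(4.0)
exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))  # true standard-normal tail
print(est, exact)
```

Crude Monte Carlo would need hundreds of millions of samples to resolve a probability of order 3e-5 this accurately; centering the proposal at the most likely failure point (the FORM idea) concentrates samples where failures occur.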
Monte Carlo - Advances and Challenges
International Nuclear Information System (INIS)
Brown, Forrest B.; Mosteller, Russell D.; Martin, William R.
2008-01-01
Abstract only, full text follows: With ever-faster computers and mature Monte Carlo production codes, there has been tremendous growth in the application of Monte Carlo methods to the analysis of reactor physics and reactor systems. In the past, Monte Carlo methods were used primarily for calculating k_eff of a critical system. More recently, Monte Carlo methods have been increasingly used for determining reactor power distributions and many design parameters, such as β_eff, l_eff, τ, reactivity coefficients, Doppler defect, dominance ratio, etc. These advanced applications of Monte Carlo methods are now becoming common rather than merely feasible, but they bring new challenges to both developers and users: convergence of 3D power distributions must be assured; confidence interval bias must be eliminated; iterated fission probabilities are required, rather than single-generation probabilities; temperature effects including Doppler and feedback must be represented; isotopic depletion and fission product buildup must be modeled. This workshop focuses on recent advances in Monte Carlo methods and their application to reactor physics problems, and on the resulting challenges faced by code developers and users. The workshop is partly tutorial, partly a review of the current state of the art, and partly a discussion of future work that is needed. It should benefit both novice and expert Monte Carlo developers and users. In each of the topic areas, we provide an overview of needs, perspective on past and current methods, a review of recent work, and discussion of further research and capabilities that are required. Electronic copies of all workshop presentations and material will be available. The workshop is structured as 2 morning and 2 afternoon segments: - Criticality Calculations I - convergence diagnostics, acceleration methods, confidence intervals, and the iterated fission probability, - Criticality Calculations II - reactor kinetics parameters, dominance ratio, temperature
Lysak, Y. V.; Klimanov, V. A.; Narkevich, B. Ya
2017-01-01
One of the most difficult problems of modern radionuclide therapy (RNT) is control of the absorbed dose in the pathological volume. This research presents a new approach based on estimating the accumulated activity of the radiopharmaceutical (RP) in the tumor volume from planar scintigraphic images of the patient, with the radiation transport calculated by the Monte Carlo method, including absorption and scattering in the biological tissues of the patient and in the elements of the gamma camera itself. To obtain the data, we modeled gamma-camera scintigraphy of a vial containing the activity of RP administered to the patient, with the vial placed at a certain distance from the collimator, and performed a similar study in identical geometry with the same activity of radiopharmaceutical in the pathological target in the body of the patient. For correct calculation results, an adapted Fisher-Snyder human phantom was simulated in the MCNP program. Within our technique, calculations were performed for different sizes of pathological targets and various tumor depths inside the patient's body, using radiopharmaceuticals based on mixed β-γ-emitting (131I, 177Lu) and pure β-emitting (89Sr, 90Y) therapeutic radionuclides. The presented method can be used to implement, in clinical practice, estimation of absorbed doses in regions of interest from planar scintigraphy of the patient with sufficient accuracy.
Energy Technology Data Exchange (ETDEWEB)
Moskvin, V; Pirlepesov, F; Tsiamas, P; Axente, M; Lukose, R; Zhao, L; Farr, J [St. Jude Children’s Hospital, Memphis, TN (United States); Shin, J [Massachusetts General Hospital, Brookline, MA (United States)
2016-06-15
Purpose: This study provides an overview of the design and commissioning of the Monte Carlo (MC) model of the spot-scanning proton therapy nozzle and its implementation for the patient plan simulation. Methods: The Hitachi PROBEAT V scanning nozzle was simulated based on vendor specifications using the TOPAS extension of Geant4 code. FLUKA MC simulation was also utilized to provide supporting data for the main simulation. Validation of the MC model was performed using vendor provided data and measurements collected during acceptance/commissioning of the proton therapy machine. Actual patient plans using CT based treatment geometry were simulated and compared to the dose distributions produced by the treatment planning system (Varian Eclipse 13.6), and patient quality assurance measurements. In-house MATLAB scripts are used for converting DICOM data into TOPAS input files. Results: Comparison analysis of integrated depth doses (IDDs), therapeutic ranges (R90), and spot shape/sizes at different distances from the isocenter, indicate good agreement between MC and measurements. R90 agreement is within 0.15 mm across all energy tunes. IDDs and spot shapes/sizes differences are within statistical error of simulation (less than 1.5%). The MC simulated data, validated with physical measurements, were used for the commissioning of the treatment planning system. Patient geometry simulations were conducted based on the Eclipse produced DICOM plans. Conclusion: The treatment nozzle and standard option beam model were implemented in the TOPAS framework to simulate a highly conformal discrete spot-scanning proton beam system.
Kano, Masayuki; Nagao, Hiromichi; Nagata, Kenji; Ito, Shin-ichi; Sakai, Shin'ichi; Nakagawa, Shigeki; Hori, Muneo; Hirata, Naoshi
2017-04-01
Earthquakes sometimes cause serious disasters not only directly by ground motion itself but also secondarily by infrastructure damage, particularly in densely populated urban areas. To reduce these secondary disasters, it is important to rapidly evaluate seismic hazards by analyzing the seismic responses of individual structures due to the input ground motions. Such input motions are estimated utilizing an array of seismometers that are distributed more sparsely than the structures. We propose a methodology that integrates physics-based and data-driven approaches in order to obtain the seismic wavefield to be input into seismic response analysis. This study adopts the replica exchange Monte Carlo (REMC) method, which is one of the Markov chain Monte Carlo (MCMC) methods, for the estimation of the seismic wavefield together with one-dimensional local subsurface structure and source information. Numerical tests show that the REMC method is able to search the parameters related to the source and the local subsurface structure in broader parameter space than the Metropolis method, which is an ordinary MCMC method. The REMC method well reproduces the seismic wavefield consistent with the true one. In contrast, the ordinary kriging, which is a classical data-driven interpolation method for spatial data, is hardly able to reproduce the true wavefield even at low frequencies. This indicates that it is essential to take both physics-based and data-driven approaches into consideration for seismic wavefield imaging. Then the REMC method is applied to the actual waveforms observed by a dense seismic array MeSO-net (Metropolitan Seismic Observation network), in which 296 accelerometers are continuously in operation with several kilometer intervals in the Tokyo metropolitan area, Japan. The estimated wavefield within a frequency band of 0.10-0.20 Hz is absolutely consistent with the observed waveforms. Further investigation suggests that the seismic wavefield is successfully
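The replica exchange (parallel tempering) mechanics used in this study can be sketched on a toy bimodal target: two Metropolis chains run at different inverse temperatures, and periodic swap moves let the cold chain inherit the hot chain's broader exploration. Nothing here is specific to the seismic wavefield problem; the target density, temperatures, and step sizes are illustrative.

```python
import math
import random

def log_target(x):
    # bimodal toy target: mixture of N(-3,1) and N(3,1), up to a constant
    return math.log(math.exp(-0.5 * (x + 3.0) ** 2)
                    + math.exp(-0.5 * (x - 3.0) ** 2))

def remc(n_steps=20_000, betas=(1.0, 0.2), seed=7):
    rng = random.Random(seed)
    x = [0.0, 0.0]                 # one state per replica
    samples = []
    for step in range(n_steps):
        for i, beta in enumerate(betas):   # Metropolis update per replica
            prop = x[i] + rng.gauss(0.0, 1.0)
            if math.log(rng.random()) < beta * (log_target(prop) - log_target(x[i])):
                x[i] = prop
        if step % 10 == 0:                 # attempt a replica swap
            d = (betas[0] - betas[1]) * (log_target(x[1]) - log_target(x[0]))
            if math.log(rng.random()) < d:
                x[0], x[1] = x[1], x[0]
        samples.append(x[0])               # keep only the beta = 1 chain
    return samples

s = remc()
frac_neg = sum(1 for v in s if v < 0) / len(s)
print(frac_neg)
```

A single Metropolis chain at beta = 1 tends to stay trapped in one mode of this target; the swap moves are what let the cold chain visit both, which is the broader-search property the abstract attributes to REMC over the Metropolis method.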
Hansen, T. M.; Cordua, K. S.
2017-12-01
Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, based on for example, complex geostatistical models and non-linear forward models can be considered using such methods. However, Monte Carlo methods may be associated with huge computational costs that, in practice, limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical forward response of some earth model has to be evaluated. Here, it is suggested to replace a numerical complex evaluation of the forward problem, with a trained neural network that can be evaluated very fast. This will introduce a modeling error that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first arrival traveltime inversion of crosshole ground penetrating radar data. An accurate forward model, based on 2-D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the accurate and computationally expensive forward model, and also considerably faster and more accurate (i.e. with better resolution), than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of non-linear and non-Gaussian inverse problems that have to be solved using Monte Carlo sampling techniques.
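The surrogate-with-quantified-modeling-error idea can be sketched in a few lines: an "expensive" forward model is replaced by a cheap approximation, the approximation error is characterized as Gaussian over the prior, and that error variance is added to the likelihood used in Metropolis sampling. The polynomial stand-in below replaces the trained neural network of the paper, and all models and numbers are illustrative assumptions.

```python
import math
import random

def g(m):          # "expensive" forward model (toy)
    return m ** 3 + m

def g_app(m):      # fast surrogate standing in for the trained network
    return m ** 3 + 0.9 * m

rng = random.Random(0)
# quantify the modeling error d = g - g_app over the prior range [-1, 1]
d = [g(m) - g_app(m) for m in (rng.uniform(-1.0, 1.0) for _ in range(5000))]
mu_d = sum(d) / len(d)
var_d = sum((v - mu_d) ** 2 for v in d) / len(d)

sigma_obs2 = 0.05 ** 2
d_obs = g(0.5)     # synthetic noise-free observation; true model is m = 0.5

def log_like(m):
    # total variance = observation noise + quantified modeling error
    r = d_obs - (g_app(m) + mu_d)
    return -0.5 * r * r / (sigma_obs2 + var_d)

m, chain = 0.0, []
for _ in range(20_000):            # Metropolis sampling using only g_app
    prop = m + rng.gauss(0.0, 0.3)
    if abs(prop) <= 1.0 and math.log(rng.random()) < log_like(prop) - log_like(m):
        m = prop
    chain.append(m)
mean = sum(chain) / len(chain)
print(mean)
```

Every likelihood evaluation in the sampler calls only the cheap surrogate, which is where the paper's three-orders-of-magnitude speed-up comes from; folding var_d into the likelihood keeps the posterior honest about the surrogate's imperfection.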
Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun
2017-01-07
Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6 ± 15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size.
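The intensity-proportional particle allocation at the heart of the adaptive particle sampling scheme can be sketched as plain CDF sampling over spots: the MC history budget is distributed so that high-intensity spots receive proportionally more particles and zero-weight spots receive none. The actual library runs on GPU and is coupled to the optimizer; this stand-alone sketch, with made-up spot intensities, only illustrates the sampling rule.

```python
import random

def allocate_particles(intensities, n_particles, seed=11):
    """Assign each MC history to a spot with probability proportional
    to the spot's current optimized intensity."""
    rng = random.Random(seed)
    total = sum(intensities)
    cdf, acc = [], 0.0
    for w in intensities:
        acc += w / total
        cdf.append(acc)
    cdf[-1] = 1.0                       # guard against float rounding
    counts = [0] * len(intensities)
    for _ in range(n_particles):        # draw each history's source spot
        u = rng.random()
        for i, c in enumerate(cdf):
            if u <= c:
                counts[i] += 1
                break
    return counts

counts = allocate_particles([5.0, 1.0, 0.0, 4.0], 10_000)
print(counts)
```

Compared with looping the dose engine over every spot uniformly, this concentrates the expensive transport work on the spots that actually matter after each optimization iteration, which is the source of the reported efficiency gain.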
International Nuclear Information System (INIS)
Ureba, A.; Palma, B. A.; Leal, A.
2011-01-01
To develop a more time-efficient optimization method based on linear programming, designed to implement a multiobjective penalty function that also permits a solution of simultaneous integrated boost situations considering two target volumes at once.
International Nuclear Information System (INIS)
Simakov, S.P.; Fischer, U.; Moellendorff, U. von; Schmuck, I.; Konobeev, A.Yu.; Korovin, Yu.A.; Pereslavtsev, P.
2002-01-01
A newly developed computational procedure is presented for the generation of d-Li source neutrons in Monte Carlo transport calculations based on the use of evaluated double-differential d+⁶,⁷Li cross section data. A new code, McDeLicious, was developed as an extension to MCNP4C to enable neutronics design calculations for the d-Li based IFMIF neutron source making use of the evaluated deuteron data files. The McDeLicious code was checked against available experimental data and calculation results of McDeLi and MCNPX, both of which use built-in analytical models for the Li(d, xn) reaction. It is shown that McDeLicious along with the newly evaluated d+⁶,⁷Li data is superior in predicting the characteristics of the d-Li neutron source. As this approach makes use of tabulated Li(d, xn) cross sections, the accuracy of the IFMIF d-Li neutron source term can be steadily improved with more advanced and validated data.
Stamenkovic, Dragan D.; Popovic, Vladimir M.
2015-02-01
Warranty is a powerful marketing tool, but it always involves additional costs to the manufacturer. In order to reduce these costs and make use of warranty's marketing potential, the manufacturer needs to master the techniques for warranty cost prediction according to the reliability characteristics of the product. In this paper a combined free-replacement and pro-rata warranty policy is analysed as the warranty model for one type of light bulb. Since operating conditions have a great impact on product reliability, they need to be considered in such an analysis. A neural network model is used to predict light bulb reliability characteristics based on data from tests of light bulbs in various operating conditions. Compared with a linear regression model used in the literature for similar tasks, the neural network model proved to be a more accurate method for such prediction. Reliability parameters obtained in this way are later used in Monte Carlo simulation to predict the times to failure needed for warranty cost calculation. The results of the analysis make it possible for the manufacturer to choose the optimal warranty policy based on expected product operating conditions. In this way, the manufacturer can lower the costs and increase the profit.
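The warranty-cost step of such an analysis can be sketched with a one-failure approximation (ignoring renewals after a replacement): sample Weibull times to failure, charge the full price inside the free-replacement window, and a linearly decreasing rebate inside the pro-rata window. The Weibull parameters, window lengths, and price below are illustrative, not the paper's light-bulb data.

```python
import math
import random

def warranty_cost(shape=2.0, scale=1.5, w1=0.5, w2=1.0, price=4.0,
                  n=100_000, seed=3):
    """Expected per-unit cost of a combined free-replacement (t <= w1)
    and pro-rata (w1 < t <= w2) warranty, one-failure approximation."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        # Weibull time to failure via inverse-transform sampling
        t = scale * (-math.log(1.0 - rng.random())) ** (1.0 / shape)
        if t <= w1:
            total += price                          # free replacement
        elif t <= w2:
            total += price * (w2 - t) / (w2 - w1)   # pro-rata refund
    return total / n

cost = warranty_cost()
print(cost)
```

Plugging in reliability parameters predicted per operating condition (here, the Weibull shape and scale) and sweeping w1 and w2 is how a manufacturer can compare candidate policies against their expected cost.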
Gonzalez Torres, Maria Jose; Henniger, Jürgen
2018-01-01
In order to expand the Monte Carlo transport program AMOS to particle therapy applications, the ion module is being developed in the radiation physics group (ASP) at TU Dresden. This module simulates the three main interactions of ions in matter for the therapy energy range: elastic scattering, inelastic collisions and nuclear reactions. The simulation of the elastic scattering is based on the Binary Collision Approximation and that of the inelastic collisions on the Bethe-Bloch theory. The nuclear reactions, which are the focus of the module, are implemented according to a probabilistic model developed in the group. The model uses probability density functions to sample the occurrence of a nuclear reaction given the initial energy of the projectile particle, as well as the energy at which this reaction will take place. The particle is transported until the reaction energy is reached and then the nuclear reaction is simulated. This approach allows a fast evaluation of the nuclear reactions. The theory and application of the proposed model will be addressed in this presentation. The results of the simulation of a proton beam colliding with tissue will also be presented. Copyright © 2017.
Shypailo, R. J.; Ellis, K. J.
2011-05-01
During construction of the whole body counter (WBC) at the Children's Nutrition Research Center (CNRC), efficiency calibration was needed to translate acquired counts of 40K to actual grams of potassium for measurement of total body potassium (TBK) in a diverse subject population. The MCNP Monte Carlo n-particle simulation program was used to describe the WBC (54 detectors plus shielding), test individual detector counting response, and create a series of virtual anthropomorphic phantoms based on national reference anthropometric data. Each phantom included an outer layer of adipose tissue and an inner core of lean tissue. Phantoms were designed for both genders representing ages 3.5 to 18.5 years with body sizes from the 5th to the 95th percentile based on body weight. In addition, a spherical surface source surrounding the WBC was modeled in order to measure the effects of subject mass on room background interference. Individual detector measurements showed good agreement with the MCNP model. The background source model came close to agreement with empirical measurements, but showed a trend deviating from unity with increasing subject size. Results from the MCNP simulation of the CNRC WBC agreed well with empirical measurements using BOMAB phantoms. Individual detector efficiency corrections were used to improve the accuracy of the model. Nonlinear multiple regression efficiency calibration equations were derived for each gender. Room background correction is critical in improving the accuracy of the WBC calibration.
Garcia, Marie-Paule; Villoing, Daphnée; McKay, Erin; Ferrer, Ludovic; Cremonesi, Marta; Botta, Francesca; Ferrari, Mahila; Bardiès, Manuel
2015-12-01
The TestDose platform was developed to generate scintigraphic imaging protocols and associated dosimetry by Monte Carlo modeling. TestDose is part of a broader project (www.dositest.com) whose aim is to identify the biases induced by different clinical dosimetry protocols. The TestDose software allows handling the whole pipeline from virtual patient generation to resulting planar and SPECT images and dosimetry calculations. The originality of the approach relies on the implementation of functional segmentation for the anthropomorphic model representing a virtual patient. Two anthropomorphic models are currently available: 4D XCAT and ICRP 110. A pharmacokinetic model describes the biodistribution of a given radiopharmaceutical in each defined compartment at various time-points. The Monte Carlo simulation toolkit GATE offers the possibility to accurately simulate scintigraphic images and absorbed doses in volumes of interest. The TestDose platform relies on GATE to reproduce precisely any imaging protocol and to provide reference dosimetry. For image generation, TestDose stores the user's imaging requirements and automatically generates command files used as input for GATE. Each compartment is simulated only once and the resulting output is weighted using pharmacokinetic data. Resulting compartment projections are aggregated to obtain the final image. For dosimetry computation, emission data are stored in the platform database and relevant GATE input files are generated for the virtual patient model and associated pharmacokinetics. Two samples of software runs are given to demonstrate the potential of TestDose. A clinical imaging protocol for the Octreoscan™ therapeutic treatment was implemented using the 4D XCAT model. Whole-body "step and shoot" acquisitions at different times postinjection and one SPECT acquisition were generated within reasonable computation times. Based on the same Octreoscan™ kinetics, a dosimetry computation performed on the ICRP 110
Energy Technology Data Exchange (ETDEWEB)
Li, Y [Tsinghua University, Beijing, Beijing (China); UT Southwestern Medical Center, Dallas, TX (United States); Tian, Z; Jiang, S; Jia, X [UT Southwestern Medical Center, Dallas, TX (United States); Song, T [Southern Medical University, Guangzhou, Guangdong (China); UT Southwestern Medical Center, Dallas, TX (United States); Wu, Z; Liu, Y [Tsinghua University, Beijing, Beijing (China)
2015-06-15
Purpose: Intensity-modulated proton therapy (IMPT) is increasingly used in proton therapy. For IMPT optimization, Monte Carlo (MC) is desired for spot dose calculations because of its high accuracy, especially in cases with a high level of heterogeneity. It is also preferred in biological optimization problems due to its capability of computing quantities related to biological effects. However, MC simulation is typically too slow to be used for this purpose. Although GPU-based MC engines have become available, the achieved efficiency is still not ideal. The purpose of this work is to develop a new optimization scheme to include GPU-based MC into IMPT. Methods: A conventional approach using MC in IMPT simply calls the MC dose engine repeatedly for each spot's dose calculation. However, this is not optimal, because of the unnecessary computations on spots that turn out to have very small weights after solving the optimization problem. GPU-memory writing conflicts occurring at small beam sizes also reduce computational efficiency. To solve these problems, we developed a new framework that iteratively performs MC dose calculations and plan optimizations. At each dose calculation step, the particles were sampled from all spots together with the Metropolis algorithm, such that the particle number is proportional to the latest optimized spot intensity. Simultaneously transporting particles from multiple spots also mitigated the memory writing conflict problem. Results: We have validated the proposed MC-based optimization scheme in one prostate case. The total computation time of our method was ∼5-6 min on one NVIDIA GPU card, including both spot dose calculation and plan optimization, whereas a conventional method naively using the same GPU-based MC engine was ∼3 times slower. Conclusion: A fast GPU-based MC dose calculation method along with a novel optimization workflow is developed. The high efficiency makes it attractive for clinical
International Nuclear Information System (INIS)
Li, Y; Tian, Z; Jiang, S; Jia, X; Song, T; Wu, Z; Liu, Y
2015-01-01
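The sampling step this abstract describes, drawing particle numbers in proportion to the latest optimized spot intensities so that low-weight spots waste almost no dose calculations, can be sketched as follows. This is an illustrative sketch, not the authors' GPU code; the weights are hypothetical:

```python
import random

def allocate_particles(spot_weights, n_particles, rng=random.Random(0)):
    """Draw a per-spot particle budget proportional to the spot weights
    (inverse-CDF sampling of a categorical distribution)."""
    total = sum(spot_weights)
    counts = [0] * len(spot_weights)
    for _ in range(n_particles):
        u = rng.random() * total
        acc = 0.0
        for i, w in enumerate(spot_weights):
            acc += w
            if u <= acc:
                counts[i] += 1
                break
        else:
            counts[-1] += 1  # guard against floating-point round-off
    return counts

# spots with near-zero optimized weight receive almost no particles
counts = allocate_particles([5.0, 0.01, 3.0], 10000)
```

In the paper's framework the budget would be re-drawn after each plan optimization step, so the transport effort tracks the evolving spot intensities.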
JMCT Monte Carlo Simulation Analysis of BEAVRS and SG-III Shielding
Li, Deng; Gang, Li; Baoyin, Zhang; Danhua, Shangguan; Yan, Ma; Zehua, Hu; Yuanguang, Fu; Rui, Li; Dunfu, Shi; Xiaoli, Hu; Wei, Wang
2017-09-01
JMCT is a general-purpose Monte Carlo neutron-photon-electron (or coupled neutron/photon/electron) transport code with continuous-energy and multigroup modes. The code has almost all the functions of a general Monte Carlo code, including various variance reduction techniques, multi-level parallel computation with MPI and OpenMP, domain decomposition and on-the-fly Doppler broadening. In particular, JMCT supports depletion calculations with the TTA and CRAM methods. Input uses CAD modelling and the calculated results use visual output. The limits on geometry zones, materials, tallies, depletion zones, memory and the period of the random number generator are large enough to suit a wide range of problems. This paper describes the application of the JMCT Monte Carlo code to the simulation of the BEAVRS and SG-III shielding models. For the BEAVRS model, the JMCT results at HZP status agree closely with MC21, OpenMC and experiment. We also performed a coupled neutron transport and depletion calculation at full power; results for ten depletion steps were obtained, with more than 1.5 million depletion regions and more than 120,000 processors used. Because the calculation is not coupled to thermal hydraulics, the result is only for reference. Finally, we performed detailed modelling of the Chinese SG-III laser facility, in which the irregular geometry bodies exceed 10,000. The flux distribution of the radiation shielding was obtained with a mesh tally for the deuterium-tritium fusion reaction. The high fidelity of JMCT has been demonstrated.
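TTA-style depletion solvers evaluate linear transmutation chains analytically via the Bateman equations. A minimal two-nuclide sketch (this is the textbook Bateman solution with made-up decay constants, not JMCT's implementation):

```python
import math

def bateman_two(n1_0, lam1, lam2, t):
    """Analytic solution of the linear chain N1 -> N2 -> (out), with
    dN1/dt = -lam1*N1 and dN2/dt = lam1*N1 - lam2*N2, N2(0) = 0.
    Requires lam1 != lam2."""
    n1 = n1_0 * math.exp(-lam1 * t)
    n2 = n1_0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))
    return n1, n2

n1, n2 = bateman_two(1.0, 0.1, 0.02, 5.0)
```

A full TTA implementation enumerates every trajectory through the decay/transmutation network and sums such closed-form terms, which is why it stays exact for arbitrarily stiff chains.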
Energy Technology Data Exchange (ETDEWEB)
Kotalczyk, G., E-mail: Gregor.Kotalczyk@uni-due.de; Kruis, F.E.
2017-07-01
Monte Carlo simulations based on weighted simulation particles can solve a variety of population balance problems and allow thus to formulate a solution-framework for many chemical engineering processes. This study presents a novel concept for the calculation of coagulation rates of weighted Monte Carlo particles by introducing a family of transformations to non-weighted Monte Carlo particles. The tuning of the accuracy (named ‘stochastic resolution’ in this paper) of those transformations allows the construction of a constant-number coagulation scheme. Furthermore, a parallel algorithm for the inclusion of newly formed Monte Carlo particles due to nucleation is presented in the scope of a constant-number scheme: the low-weight merging. This technique is found to create significantly less statistical simulation noise than the conventional technique (named ‘random removal’ in this paper). Both concepts are combined into a single GPU-based simulation method which is validated by comparison with the discrete-sectional simulation technique. Two test models describing a constant-rate nucleation coupled to a simultaneous coagulation in 1) the free-molecular regime or 2) the continuum regime are simulated for this purpose.
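The 'low-weight merging' idea, freeing a particle slot for a freshly nucleated Monte Carlo particle by combining the two lowest-weight simulation particles, can be sketched as below. The merge rule shown (combined weight, size inherited with probability proportional to weight, so total weight is conserved exactly and total mass in expectation) is a common weighted-particle scheme and only an assumption about the paper's exact formulation:

```python
import random

def merge_lowest_weight(particles, rng=random.Random(1)):
    """Merge the two lowest-weight (weight, size) particles into one,
    freeing a slot for a nucleated particle in a constant-number scheme."""
    particles = sorted(particles, key=lambda p: p[0])
    (w1, v1), (w2, v2) = particles[0], particles[1]
    w = w1 + w2
    # inherit one of the two sizes with probability proportional to weight
    v = v1 if rng.random() < w1 / w else v2
    return [(w, v)] + particles[2:]

pop = [(0.1, 1.0), (0.2, 2.0), (5.0, 3.0)]
merged = merge_lowest_weight(pop)
```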
Multi-period mean–variance portfolio optimization based on Monte-Carlo simulation
F. Cong (Fei); C.W. Oosterlee (Cornelis)
2016-01-01
We propose a simulation-based approach for solving the constrained dynamic mean–variance portfolio management problem. For this dynamic optimization problem, we first consider a sub-optimal strategy, called the multi-stage strategy, which can be utilized in a forward fashion. Then,
Report: GPU Based Massive Parallel Kawasaki Kinetics In Monte Carlo Modelling of Lipid Microdomains
Lis, M.; Pintal, L.
2013-01-01
This paper introduces a novel method for the simulation of lipid biomembranes based on the Metropolis-Hastings algorithm and the computational power of graphics processing units. The method gives up to a 55-fold computational speed-up in comparison to classical computations. An extensive study of the algorithm's correctness is provided, and analyses of the simulation results and of results obtained with classical simulation methodologies are presented.
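Kawasaki kinetics, the dynamics named in the title, updates a lattice by exchanging neighbouring sites rather than flipping them, so the lipid composition is conserved. A minimal 1D Ising sketch with Metropolis acceptance (illustrative parameters, not the paper's lattice or Hamiltonian):

```python
import math
import random

def kawasaki_sweep(spins, beta, J=1.0, rng=random.Random(2)):
    """One Monte Carlo sweep of Kawasaki (spin-exchange) dynamics on a
    1D periodic Ising chain; swaps conserve the composition."""
    n = len(spins)

    def local_energy(i):
        return -J * spins[i] * (spins[(i - 1) % n] + spins[(i + 1) % n])

    for _ in range(n):
        i = rng.randrange(n)
        j = (i + 1) % n
        if spins[i] == spins[j]:
            continue  # exchanging identical sites changes nothing
        before = local_energy(i) + local_energy(j)
        spins[i], spins[j] = spins[j], spins[i]
        d_e = local_energy(i) + local_energy(j) - before
        if d_e > 0 and rng.random() >= math.exp(-beta * d_e):
            spins[i], spins[j] = spins[j], spins[i]  # reject: swap back
    return spins

chain = [1] * 8 + [-1] * 8
kawasaki_sweep(chain, beta=0.5)
```

The conservation law is what distinguishes Kawasaki from Glauber dynamics and what makes it suitable for phase-separating mixtures such as lipid microdomains.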
Monte Carlo efficiency calibration of a neutron generator-based total-body irradiator
The increasing prevalence of obesity world-wide has focused attention on the need for accurate body composition assessments, especially of large subjects. However, many body composition measurement systems are calibrated against a single-sized phantom, often based on the standard Reference Man mode...
Shafiei, M.; Gharari, S.; Pande, S.; Bhulai, S.
2014-01-01
Posterior sampling methods are increasingly being used to describe parameter and model predictive uncertainty in hydrologic modelling. This paper proposes an alternative to random walk chains (such as DREAM-zs). We propose a sampler based on independence chains with an embedded feature of
Cully, William P.L.; Cotton, Simon L.; Scanlon, W.G.
2012-01-01
The range of potential applications for indoor and campus based personnel localisation has led researchers to create a wide spectrum of different algorithmic approaches and systems. However, the majority of the proposed systems overlook the unique radio environment presented by the human body
Wormhole Hamiltonian Monte Carlo
Lan, S; Streets, J; Shahbaba, B
2014-01-01
Copyright © 2014, Association for the Advancement of Artificial Intelligence. In machine learning and statistics, probabilistic inference involving multimodal distributions is quite difficult. This is especially true in high dimensional problems, where most existing algorithms cannot easily move from one mode to another. To address this issue, we propose a novel Bayesian inference approach based on Markov Chain Monte Carlo. Our method can effectively sample from multimodal distributions, espe...
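As a baseline for the multimodality problem this abstract describes, a plain random-walk Metropolis sampler on a well-separated bimodal target illustrates why ordinary MCMC stays trapped in one mode (illustrative code, not the authors' wormhole method):

```python
import math
import random

def rw_metropolis(logp, x0, steps, scale, rng=random.Random(3)):
    """Plain random-walk Metropolis; returns the chain of samples."""
    x, lp = x0, logp(x0)
    chain = [x]
    for _ in range(steps):
        y = x + rng.gauss(0.0, scale)
        lq = logp(y)
        if lq >= lp or rng.random() < math.exp(lq - lp):
            x, lp = y, lq
        chain.append(x)
    return chain

def log_bimodal(x, mu=10.0):
    """Equal-weight mixture of two unit Gaussians at +/- mu."""
    return math.log(math.exp(-0.5 * (x - mu) ** 2)
                    + math.exp(-0.5 * (x + mu) ** 2) + 1e-300)

# started in the right mode, a small-step sampler essentially never
# crosses the low-probability region between the modes
chain = rw_metropolis(log_bimodal, x0=10.0, steps=20000, scale=0.5)
frac_right = sum(1 for x in chain if x > 0) / len(chain)
```

A correct sampler should spend about half its time in each mode; methods like the paper's add transitions that tunnel between modes instead of diffusing through the barrier.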
Directory of Open Access Journals (Sweden)
Fabiana Caló
2017-01-01
In areas where groundwater overexploitation occurs, land subsidence triggered by aquifer compaction is observed, resulting in high socio-economic impacts for the affected communities. In this paper, we focus on the Konya region, one of the leading economic centers in the agricultural and industrial sectors in Turkey. We present a multi-source data approach aimed at investigating the complex and fragile environment of this area, which is heavily affected by groundwater drawdown and ground subsidence. In particular, in order to analyze the spatial and temporal pattern of the subsidence process, we use the Small BAseline Subset DInSAR technique to process two datasets of ENVISAT SAR images spanning the 2002–2010 period. The produced ground deformation maps and associated time series allow us to detect a wide land subsidence extending for about 1200 km² and measure vertical displacements reaching up to 10 cm in the observed time interval. DInSAR results, complemented with climatic, stratigraphic and piezometric data as well as with land-cover change information, allow us to give more insight into the impact of climate change and human activities on groundwater resource depletion and land subsidence.
International Nuclear Information System (INIS)
Sheu, R.J.; Chang, J.S.; Liu, Y.-W. H.
2011-01-01
Molten-Salt Reactors (MSRs) represent one of the selected categories in the GEN-IV program. This type of reactor is distinguished by the use of liquid fuel circulating in and out of the core, which makes online refueling and salt processing possible. However, this operating characteristic also complicates the modeling and simulation of reactor core behaviour using conventional neutronic codes. The TRITON sequence in the SCALE6 code system has been designed to provide the combined capabilities of problem-dependent cross-section processing and rigorous treatment of neutron transport, coupled with ORIGEN-S depletion calculations. In order to accommodate the simulation of a dynamic refueling and processing scheme, an in-house program, REFRESH, together with a run script were developed to carry out a series of stepwise TRITON calculations, which makes it easier to analyze the neutronic properties and performance of an MSR core design. As a demonstration and cross-check, we have applied this method to reexamine the conceptual design of the Molten Salt Actinide Recycler & Transmuter (MOSART). This paper summarizes the development of the method and preliminary results of its application to MOSART. (author)
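The stepwise driver idea, running a depletion step and then editing the salt composition (poison removal, refueling) before the next step, can be sketched with a toy one-group burnup model. All rates, yields and removal fractions below are hypothetical; REFRESH itself edits TRITON inputs rather than doing the arithmetic directly:

```python
import math

def deplete_step(fuel, poison, dt, rate=0.01, y=0.06):
    """Toy depletion step: fuel burns exponentially and a lumped
    fission-product poison builds up from the burned amount."""
    burned = fuel * (1.0 - math.exp(-rate * dt))
    return fuel - burned, poison + y * burned

def run_with_processing(n_steps, dt, removal=0.5, feed=True):
    """Stepwise driver in the spirit of REFRESH: between depletion
    steps, the composition is edited to model online salt processing
    and refueling before the next transport/depletion call."""
    fuel, poison = 1.0, 0.0
    for _ in range(n_steps):
        fuel, poison = deplete_step(fuel, poison, dt)
        poison *= (1.0 - removal)   # salt processing removes poisons
        if feed:
            fuel = 1.0              # online refueling tops fuel back up
    return fuel, poison

fuel, poison = run_with_processing(10, dt=1.0)
```

With processing the poison inventory saturates at a low equilibrium value instead of accumulating, which is the neutronic advantage of the circulating-fuel scheme.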
International Nuclear Information System (INIS)
Atitoaie, Alexandru; Tanasa, Radu; Enachescu, Cristian
2012-01-01
Spin crossover compounds are photo-magnetic bistable molecular magnets with two states in thermodynamic competition: the diamagnetic low-spin state and the paramagnetic high-spin state. The thermal transition between the two states is often accompanied by a wide hysteresis, a premise for the possible application of these materials as recording media. In this paper we study the influence of the system's size on the thermal hysteresis loops using Monte Carlo simulations based on an Arrhenius dynamics applied to an Ising-like model with long- and short-range interactions. We show that using appropriate boundary conditions it is possible to reproduce the drop of hysteresis width with decreasing particle size, the hysteresis shift towards lower temperatures and the incomplete transition, as in the available experimental data. The case of larger systems composed of several sublattices is treated as well, reproducing the experimentally observed shrinkage of the hysteresis loop's width. - Highlights: ► A study concerning size effects in spin crossover nanoparticles hysteresis is presented. ► An Ising-like model with short- and long-range interactions and Arrhenius dynamics is employed. ► In open boundary systems the hysteresis width decreases with particle size. ► With an appropriate environment, the hysteresis loop is shifted towards lower temperature and the transition is incomplete.
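The Arrhenius dynamics used in such simulations can be illustrated on a single two-state (low-spin/high-spin) site: transitions cross an activation barrier, with rates that satisfy detailed balance for the Boltzmann distribution over the gap. The gap, barrier and attempt-frequency values are hypothetical and the interactions of the paper's Ising-like model are omitted:

```python
import math
import random

def simulate_two_state(T, steps, d_e=1.0, e_a=2.0, nu=0.2,
                       rng=random.Random(4)):
    """Arrhenius dynamics between a low-spin (0) and high-spin (1)
    state separated by energy gap d_e with activation barrier e_a.
    Per-step probabilities nu*exp(-(e_a +/- d_e/2)/T) satisfy detailed
    balance, so the time-averaged HS fraction -> 1/(1 + exp(d_e/T))."""
    p_up = nu * math.exp(-(e_a + d_e / 2.0) / T)  # LS -> HS (uphill)
    p_dn = nu * math.exp(-(e_a - d_e / 2.0) / T)  # HS -> LS (downhill)
    s, hs_time = 0, 0
    for _ in range(steps):
        if s == 0 and rng.random() < p_up:
            s = 1
        elif s == 1 and rng.random() < p_dn:
            s = 0
        hs_time += s
    return hs_time / steps

hs_fraction = simulate_two_state(T=1.0, steps=200000)
```

In the paper's model the effective gap of each site is shifted by interactions with neighbouring sites, which is what produces cooperativity and hysteresis.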
Li, Jiahao; Klee Barillas, Joaquin; Guenther, Clemens; Danzer, Michael A.
2014-02-01
Battery state monitoring is one of the key techniques in battery management systems e.g. in electric vehicles. An accurate estimation can help to improve the system performance and to prolong the battery remaining useful life. Main challenges for the state estimation for LiFePO4 batteries are the flat characteristic of open-circuit-voltage over battery state of charge (SOC) and the existence of hysteresis phenomena. Classical estimation approaches like Kalman filtering show limitations to handle nonlinear and non-Gaussian error distribution problems. In addition, uncertainties in the battery model parameters must be taken into account to describe the battery degradation. In this paper, a novel model-based method combining a Sequential Monte Carlo filter with adaptive control to determine the cell SOC and its electric impedance is presented. The applicability of this dual estimator is verified using measurement data acquired from a commercial LiFePO4 cell. Due to a better handling of the hysteresis problem, results show the benefits of the proposed method against the estimation with an Extended Kalman filter.
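The Sequential Monte Carlo (particle) filter at the core of the method can be sketched on a toy SOC model with a flat, slightly nonlinear voltage observation. The OCV curve, noise levels and the pure-random-walk state model below are illustrative assumptions, not the cell model or the adaptive impedance estimation of the paper:

```python
import math
import random

def particle_filter(observations, n_particles=500, q=0.01, r=0.05,
                    rng=random.Random(5)):
    """Bootstrap SMC filter for a toy model:
       soc_k = soc_{k-1} + process noise      (hidden state)
       v_k   = h(soc_k) + measurement noise   (observed voltage)"""
    def h(soc):  # flat, mildly nonlinear OCV-like curve (hypothetical)
        return 3.2 + 0.5 * soc + 0.1 * math.tanh(6.0 * (soc - 0.5))

    particles = [rng.random() for _ in range(n_particles)]
    estimates = []
    for v in observations:
        # propagate with process noise, clipped to the physical range
        particles = [min(1.0, max(0.0, p + rng.gauss(0.0, q)))
                     for p in particles]
        # weight by the Gaussian measurement likelihood
        weights = [math.exp(-0.5 * ((v - h(p)) / r) ** 2) for p in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # multinomial resampling
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates

true_soc = 0.7
obs = [3.2 + 0.5 * true_soc + 0.1 * math.tanh(6.0 * (true_soc - 0.5))] * 30
est = particle_filter(obs)
```

Because the particles represent an arbitrary posterior, this handles the non-Gaussian errors and hysteresis effects that limit Kalman-type filters on LiFePO4 cells.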
International Nuclear Information System (INIS)
Aslam; Prestwich, W.V.; McNeill, F.E.; Waker, A.J.
2006-01-01
The neutron irradiation facility developed at the McMaster University 3 MV Van de Graaff accelerator was employed to assess the in vivo elemental content of aluminum and manganese in human hands. These measurements were carried out to monitor long-term exposure to these potentially toxic trace elements through hand bone levels. The dose equivalent delivered to a patient during the irradiation procedure is the limiting factor for IVNAA measurements. This article describes a method to estimate the average radiation dose equivalent delivered to the patient's hand during irradiation. The computational method described in this work augments the dose measurements carried out earlier [Arnold et al., 2002. Med. Phys. 29(11), 2718-2724]. This method employs a Monte Carlo simulation of the hand irradiation facility using MCNP4B. Based on the estimated dose equivalents received by the patient's hand, the proposed irradiation procedure for the IVNAA measurement of manganese in human hands [Arnold et al., 2002. Med. Phys. 29(11), 2718-2724] with normal (1 ppm) and elevated manganese content can be carried out with a reasonably low dose of 31 mSv to the hand. Sixty-three percent of the total dose equivalent is delivered by the non-useful fast group (>10 keV); filtering this neutron group from the beam will further decrease the dose equivalent to the patient's hand.
Energy Technology Data Exchange (ETDEWEB)
Czarnecki, D; Voigts-Rhetz, P von; Shishechian, D Uchimura [Technische Hochschule Mittelhessen - University of Applied Sciences, Giessen (Germany); Zink, K [Technische Hochschule Mittelhessen - University of Applied Sciences, Giessen (Germany); Germany and Department of Radiotherapy and Radiooncology, University Medical Center Giessen-Marburg, Marburg (Germany)
2015-06-15
Purpose: To develop a fast and accurate calculation model to reconstruct the applied photon fluence of an external photon radiation therapy treatment from an image recorded by an electronic portal imaging device (EPID). Methods: To reconstruct the initial photon fluence, the 2D EPID image was corrected for scatter from the patient/phantom and the EPID to generate the transmitted primary photon fluence. This was done by an iterative deconvolution using precalculated point spread functions (PSF). The transmitted primary photon fluence was then backprojected through the patient/phantom geometry, considering linear attenuation, to recover the initial photon fluence applied for the treatment. The calculation model was verified using Monte Carlo simulations performed with the EGSnrc code system. EPID images were produced by calculating the dose deposition in the EPID from a 6 MV photon beam irradiating a water phantom with air and bone inhomogeneities and the ICRP anthropomorphic voxel phantom. Results: The initial photon fluence was reconstructed using a single PSF and position-dependent PSFs which depend on the radiological thickness of the irradiated object. Applying position-dependent point spread functions, the mean uncertainty of the reconstructed initial photon fluence could be reduced from 1.13% to 0.13%. Conclusion: This study presents a calculation model for fluence reconstruction from EPID images. The results show a clear advantage when position-dependent PSFs are used for the iterative reconstruction. The basic framework of a reconstruction method was established, and further evaluations must be made in an experimental study.
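The iterative deconvolution step, solving "measured = primary + scatter(primary)" for the primary fluence by fixed-point iteration, can be sketched in 1D. The kernel and profile are toy values (the paper's PSFs are 2D and radiological-thickness dependent); the iteration converges when the scatter fraction (kernel sum) is below 1:

```python
def convolve(signal, kernel):
    """'Same'-size 1D convolution with a short symmetric kernel."""
    k = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        s = 0.0
        for j, w in enumerate(kernel):
            idx = i + j - k
            if 0 <= idx < len(signal):
                s += w * signal[idx]
        out.append(s)
    return out

def deconvolve_primary(measured, scatter_kernel, iterations=50):
    """Fixed-point iteration p <- measured - K*p, whose limit solves
    measured = p + K*p, i.e. removes the modeled scatter."""
    primary = measured[:]
    for _ in range(iterations):
        scatter = convolve(primary, scatter_kernel)
        primary = [m - s for m, s in zip(measured, scatter)]
    return primary

true_primary = [0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0]
kernel = [0.1, 0.1, 0.1]  # 30% total scatter fraction (hypothetical)
measured = [p + s for p, s in zip(true_primary, convolve(true_primary, kernel))]
recovered = deconvolve_primary(measured, kernel)
```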
Saghamanesh, S.; Aghamiri, S. M.; Kamali-Asl, A.; Yashiro, W.
2017-09-01
An important challenge in real-world biomedical applications of x-ray phase contrast imaging (XPCI) techniques is the efficient use of the photon flux generated by an incoherent and polychromatic x-ray source. This efficiency can directly influence dose and exposure time and ideally should not affect the superior contrast and sensitivity of XPCI. In this paper, we present a quantitative evaluation of the photon detection efficiency of two laboratory-based XPCI methods, grating interferometry (GI) and coded-aperture (CA). We adopt a Monte Carlo approach to simulate existing prototypes of those systems, tailored for mammography applications. Our simulations were validated by means of a simple experiment performed on a CA XPCI system. Our results show that the fraction of detected photons in the standard energy range of mammography are about 1.4% and 10% for the GI and CA techniques, respectively. The simulations indicate that the design of the optical components plays an important role in the higher efficiency of CA compared to the GI method. It is shown that the use of lower absorbing materials as the substrates for GI gratings can improve its flux efficiency by up to four times. Along similar lines, we also show that an optimized and compact configuration of GI could lead to a 3.5 times higher fraction of detected counts compared to a standard and non-optimised GI implementation.
International Nuclear Information System (INIS)
Arreola V, G.; Vazquez R, R.; Guzman A, J. R.
2012-10-01
In this work a comparative analysis of the results for neutron dispersion in a non-multiplying semi-infinite medium is presented. One boundary of this medium is located at the origin of coordinates, where there is also a neutron source in beam form, i.e., μο=1. The neutron dispersion is studied with the statistical Monte Carlo method and through one-dimensional, one-energy-group transport theory. The application of transport theory gives a semi-analytic solution for this problem, while the statistical solution for the flux was obtained with the MCNPX code. Dispersion in light water and heavy water was studied. A first notable result is that both methods locate the maximum of the neutron distribution at less than two transport mean free paths for heavy water, while for light water it is at less than ten transport mean free paths; the differences between the two methods are larger for the light-water case. A second notable result is that the two distributions behave similarly at small distances (in transport mean free paths), while at large distances the transport-theory solution tends to an asymptotic value and the statistical solution tends to zero. The existence of a low-energy neutron current directed toward the source is demonstrated, opposite in sense to the high-energy neutron current coming from the source itself. (Author)
International Nuclear Information System (INIS)
Yorulmaz, N.; Bozkurt, A.
2009-01-01
In nuclear medicine applications, the aim is to obtain diagnostic information about the organs and tissues of the patient with the help of some radiopharmaceuticals administered to him/her. Because some organs of the patient other than those under investigation will also be exposed to the radiation, it is important for radiation risk assessment to know how much radiation is received by the vital or radio-sensitive organs or tissues. In this study, an image-based body model created from the realistic images of a human is used together with the Monte Carlo code MCNP to compute the radiation doses absorbed by organs and tissues for some nuclear medicine procedures at gamma energies of 0.01, 0.015, 0.02, 0.03, 0.05, 0.1, 0.2, 0.5 and 1 MeV. Later, these values are used in conjunction with radiation weighting factors and organ weighting factors to estimate the effective dose for each diagnostic application.
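The final step described here, combining MC-computed organ doses with radiation and tissue weighting factors, is a weighted sum: E = Σ_T w_T · w_R · D_T. A minimal sketch; the organ list, dose values and weighting factors below are illustrative (for photons w_R = 1, and actual tissue weights should be taken from ICRP recommendations):

```python
def effective_dose(organ_doses_gy, w_r, w_t):
    """E = sum_T w_T * H_T with H_T = w_R * D_T for a single radiation
    type. Units: D_T in Gy -> H_T in Sv -> E in Sv."""
    return sum(w_t[t] * w_r * d for t, d in organ_doses_gy.items())

# hypothetical absorbed doses from one simulated procedure
doses = {"lung": 2e-3, "liver": 1e-3, "thyroid": 0.5e-3}
wt = {"lung": 0.12, "liver": 0.04, "thyroid": 0.04}  # illustrative values
e = effective_dose(doses, w_r=1.0, w_t=wt)
```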
Scalability tests of R-GMA based Grid job monitoring system for CMS Monte Carlo data production
Bonacorsi, D; Field, L; Fisher, S; Grandi, C; Hobson, P R; Kyberd, P; MacEvoy, B; Nebrensky, J J; Tallini, H; Traylen, S
2004-01-01
High Energy Physics experiments such as CMS (Compact Muon Solenoid) at the Large Hadron Collider have unprecedented, large-scale data processing computing requirements, with data accumulating at around 1 Gbyte/s. The Grid distributed computing paradigm has been chosen as the solution to provide the requisite computing power. The demanding nature of CMS software and computing requirements, such as the production of large quantities of Monte Carlo simulated data, makes them an ideal test case for the Grid and a major driver for the development of Grid technologies. One important challenge when using the Grid for large-scale data analysis is the ability to monitor the large numbers of jobs that are being executed simultaneously at multiple remote sites. R-GMA is a monitoring and information management service for distributed resources based on the Grid Monitoring Architecture of the Global Grid Forum. In this paper we report on the first measurements of R-GMA as part of a monitoring architecture to be used for b...
Babilas, Rafał; Łukowiec, Dariusz; Temleitner, Laszlo
2017-01-01
The structure of a multicomponent metallic glass, Mg65Cu20Y10Ni5, was investigated by the combined methods of neutron diffraction (ND), reverse Monte Carlo modeling (RMC) and high-resolution transmission electron microscopy (HRTEM). The RMC method, based on the results of ND measurements, was used to develop a realistic structure model of the quaternary alloy in the glassy state. The calculated model consists of a random packing structure of atoms in which some ordered regions can be identified. The amorphous structure was also described by peak values of partial pair correlation functions and coordination numbers, which illustrated some types of cluster packing. The N = 9 clusters correspond to tri-capped trigonal prisms, which are one of Bernal's canonical clusters, and atomic clusters with N = 6 and N = 12 are suitable for octahedral and icosahedral atomic configurations. The nanocrystalline character of the alloy after annealing was also studied by HRTEM. Selected HRTEM images of the nanocrystalline regions were also processed by inverse Fourier transform analysis. The high-angle annular dark-field (HAADF) technique was used to determine phase separation in the studied glass after heat treatment. The HAADF mode allows for the observation of randomly distributed, dark contrast regions of about 4-6 nm. The interplanar spacing identified for the orthorhombic Mg2Cu crystalline phase is similar to the value of the first coordination shell radius from the short-range order.
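Reverse Monte Carlo refines an atomic configuration by proposing random moves and accepting them according to the change in the chi-squared misfit against the measured data, with worsening moves accepted with probability exp(-Δχ²/2). A toy 1D sketch of that acceptance rule (fitting a vector to a target "measurement"; a real RMC code moves atoms and recomputes a structure factor):

```python
import math
import random

def rmc_fit(target, n_moves=20000, sigma2=0.01, rng=random.Random(6)):
    """Toy reverse Monte Carlo: adjust 'model' to match 'target' using
    the RMC acceptance probability exp(-(chi2_new - chi2_old)/2)."""
    model = [0.0] * len(target)

    def chi2(m):
        return sum((a - b) ** 2 for a, b in zip(m, target)) / sigma2

    c = chi2(model)
    for _ in range(n_moves):
        i = rng.randrange(len(model))
        old = model[i]
        model[i] += rng.gauss(0.0, 0.1)   # propose a random move
        c_new = chi2(model)
        if c_new > c and rng.random() >= math.exp(-(c_new - c) / 2.0):
            model[i] = old                # reject the worsening move
        else:
            c = c_new                     # accept
    return model, c

target = [0.2, 0.5, 0.9, 0.5, 0.2]
model, c = rmc_fit(target)
```

At equilibrium the model fluctuates about the data within the assumed experimental uncertainty, which is why RMC yields configurations consistent with, rather than uniquely determined by, the diffraction data.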
International Nuclear Information System (INIS)
Larraga-Gutierrez, J. M.; Garcia-Garduno, O. A.; Hernandez-Bojorquez, M.; Galvan de la Cruz, O. O.; Ballesteros-Zebadua, P.
2010-01-01
This work presents the beam data commissioning and dose calculation validation of the first Monte Carlo (MC) based treatment planning system (TPS) installed in Mexico. According to the manufacturer specifications, the beam data commissioning needed for this model includes several in-air and water profiles, depth dose curves, head-scatter factors and output factors (6x6, 12x12, 18x18, 24x24, 42x42, 60x60, 80x80 and 100x100 mm²). Radiographic and radiochromic films, diodes and ionization chambers were used for data acquisition. MC dose calculations in a water phantom were used to validate the MC simulations through comparisons with measured data. Gamma index criteria of 2%/2 mm were used to evaluate the accuracy of the MC calculations. MC calculated data show an excellent agreement for field sizes from 18x18 to 100x100 mm². Gamma analysis shows that on average 95% and 100% of the data pass the gamma index criteria for these fields, respectively. For smaller fields (12x12 and 6x6 mm²) only 92% of the data meet the criteria. Total scatter factors show a good agreement except for the smallest field, which shows an error of 4.7%. MC dose calculations are accurate and precise for clinical treatment planning down to a field size of 18x18 mm²; special care must be taken for smaller fields.
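The 2%/2 mm gamma criterion used in this validation combines a dose-difference and a distance-to-agreement tolerance: each reference point passes if some evaluated point lies within the combined ellipse. A 1D sketch on a toy step profile (uniform 1 mm grid assumed; clinical implementations interpolate and work in 2D/3D):

```python
import math

def gamma_index(ref, ref_x, eval_dose, eval_x, dd=0.02, dta=2.0):
    """1D gamma analysis: per reference point, the minimum over
    evaluated points of the combined dose-difference (fraction dd of
    the reference maximum) / distance (dta, mm) metric; gamma <= 1
    is a pass."""
    d_norm = dd * max(ref)
    gammas = []
    for dr, xr in zip(ref, ref_x):
        g2 = min(((de - dr) / d_norm) ** 2 + ((xe - xr) / dta) ** 2
                 for de, xe in zip(eval_dose, eval_x))
        gammas.append(math.sqrt(g2))
    return gammas

x = [float(i) for i in range(11)]          # 1 mm spacing
ref = [100.0] * 5 + [50.0] + [0.0] * 5     # step-like reference profile
meas = [101.0] * 5 + [52.0] + [1.0] * 5    # slightly offset measurement
gammas = gamma_index(ref, x, meas, x)
pass_rate = sum(1 for g in gammas if g <= 1.0) / len(gammas)
```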
Energy Technology Data Exchange (ETDEWEB)
Brunetti, Antonio; Golosio, Bruno [Universita degli Studi di Sassari, Dipartimento di Scienze Politiche, Scienze della Comunicazione e Ingegneria dell' Informazione, Sassari (Italy); Melis, Maria Grazia [Universita degli Studi di Sassari, Dipartimento di Storia, Scienze dell' Uomo e della Formazione, Sassari (Italy); Mura, Stefania [Universita degli Studi di Sassari, Dipartimento di Agraria e Nucleo di Ricerca sulla Desertificazione, Sassari (Italy)
2014-11-08
X-ray fluorescence (XRF) is a well-known nondestructive technique. It is also applied to multilayer characterization, thanks to its ability to estimate both the composition and the thickness of the layers. Several kinds of cultural heritage samples can be considered complex multilayers, such as paintings, decorated objects or some types of metallic samples. Furthermore, they often have rough surfaces, which makes a precise determination of the structure and composition harder. The standard quantitative XRF approach does not take this aspect into account. In this paper, we propose a novel approach based on the combined use of X-ray measurements performed with a polychromatic beam and Monte Carlo simulations. All the information contained in an X-ray spectrum is used. This approach allows obtaining a very good estimate of the sample contents, both in terms of chemical elements and material thickness, and in this sense represents an extension of the capabilities of XRF measurements. Some examples are examined and discussed. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Adams, M.B. [United States Dept. of Agriculture Forest Service, Parsons, WV (United States)]; Burger, J.A. [Virginia Tech University, Blacksburg, VA (United States)]
2010-07-01
This study assessed the hypothesis that soil base cation depletion is an effect of acidic deposition in forests of the central Appalachians. The effects of experimentally induced base cation depletion were evaluated in relation to long-term soil productivity and the sustainability of forest stands. Whole-tree harvesting was conducted along with the removal of dead wood and litter in order to remove all aboveground nutrients. Ammonium sulfate fertilizer was added at annual rates of 40.6 kg S/ha and 35.4 kg N/ha in order to increase the leaching of calcium (Ca) and magnesium (Mg) from the soil. A randomized complete block design was used with 4 or 5 treatment applications in a mixed hardwood experimental forest located in West Virginia and in a cherry-maple forest located in a national forest in West Virginia. Soils were sampled over a 10-year period. The study showed that significant changes in soil Mg, N and some other nutrients occurred over time. However, biomass did not differ significantly among the different treatment options used.
Core physics design calculation of mini-type fast reactor based on Monte Carlo method
International Nuclear Information System (INIS)
He Keyu; Han Weishi
2007-01-01
An accurate physics calculation model has been set up for the mini-type sodium-cooled fast reactor (MFR) based on the MCNP-4C code, and a detailed calculation of its critical physics characteristics, neutron flux distribution, power distribution and reactivity control has been carried out. The results indicate that the basic physics characteristics of the MFR can satisfy the requirements and objectives of the core design. The power density and neutron flux distributions are symmetrical and reasonable. The control system is able to maintain a reliable reactivity balance efficiently and meets the requirements for long-term operation. (authors)
Integrated Building Energy Design of a Danish Office Building Based on Monte Carlo Simulation Method
DEFF Research Database (Denmark)
Sørensen, Mathias Juul; Myhre, Sindre Hammer; Hansen, Kasper Kingo
2017-01-01
The focus on reducing buildings' energy consumption is gradually increasing, and the optimization of a building's performance and maximizing its potential leads to great challenges between architects and engineers. In this study, we collaborate with a group of architects on a design project of a new office building located in Aarhus, Denmark. Building geometry, floor plans and employee schedules were obtained from the architects, which is the basis for this study. This study aims to simplify the iterative design process that is based on the traditional trial-and-error method in the late design phases...
International Nuclear Information System (INIS)
Perrot, Y.
2011-01-01
Radiation therapy treatment planning requires accurate determination of absorbed dose in the patient. Monte Carlo simulation is the most accurate method for solving the transport problem of particles in matter. This thesis is the first study dealing with the validation of the Monte Carlo simulation platform GATE (GEANT4 Application for Tomographic Emission), based on GEANT4 (Geometry And Tracking) libraries, for the computation of absorbed dose deposited by electron beams. This thesis aims at demonstrating that GATE/GEANT4 calculations are able to reach treatment planning requirements in situations where analytical algorithms are not satisfactory. The goal is to prove that GATE/GEANT4 is useful for treatment planning using electrons and competes with well validated Monte Carlo codes. This is demonstrated by the simulations with GATE/GEANT4 of realistic electron beams and electron sources used for external radiation therapy or targeted radiation therapy. The computed absorbed dose distributions are in agreement with experimental measurements and/or calculations from other Monte Carlo codes. Furthermore, guidelines are proposed to fix the physics parameters of the GATE/GEANT4 simulations in order to ensure the accuracy of absorbed dose calculations according to radiation therapy requirements. (author)
An automated Monte-Carlo based method for the calculation of cascade summing factors
Jackson, M. J.; Britton, R.; Davies, A. V.; McLarty, J. L.; Goodwin, M.
2016-10-01
A versatile method has been developed to calculate cascade summing factors for use in quantitative gamma-spectrometry analysis procedures. The proposed method is based solely on Evaluated Nuclear Structure Data File (ENSDF) nuclear data, an X-ray energy library, and accurate efficiency characterisations for single detector counting geometries. The algorithm, which accounts for γ-γ, γ-X, γ-511 and γ-e- coincidences, can be applied to any design of gamma spectrometer and can be expanded to incorporate any number of nuclides. Efficiency characterisations can be derived from measured or mathematically modelled functions, and can accommodate both point and volumetric source types. The calculated results are shown to be consistent with an industry standard gamma-spectrometry software package. Additional benefits including calculation of cascade summing factors for all gamma and X-ray emissions, not just the major emission lines, are also highlighted.
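As a hedged illustration of the kind of correction such an algorithm evaluates, the sketch below encodes the first-order γ-γ "summing-out" factor: counts are lost from a photopeak whenever a coincident photon is also detected. The emission probability and efficiency are invented placeholders, not ENSDF data.

```python
# Toy first-order gamma-gamma summing-out correction. The coincidence
# probability and detector efficiency are illustrative values only.

def summing_out_factor(p_coincident, eff_total_coincident):
    """Fraction of photopeak counts surviving coincidence summing.

    p_coincident: probability that the gamma of interest is emitted in
        coincidence with a second photon.
    eff_total_coincident: total (peak + Compton) efficiency for
        detecting that coincident photon.
    """
    return 1.0 - p_coincident * eff_total_coincident

factor = summing_out_factor(0.8, 0.15)   # 1 - 0.8 * 0.15 = 0.88
corrected_counts = 1000.0 / factor       # summing-corrected peak area
```

A full implementation would sum such terms over every cascade path in the ENSDF level scheme, including γ-X and γ-511 coincidences.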
Addressing Ozone Layer Depletion
Access information on EPA's efforts to address ozone layer depletion through regulations, collaborations with stakeholders, international treaties, partnerships with the private sector, and enforcement actions under Title VI of the Clean Air Act.
International Nuclear Information System (INIS)
Yeh, Chi-Yuan; Tung, Chuan-Jung; Chao, Tsi-Chain; Lin, Mu-Han; Lee, Chung-Chi
2014-01-01
The purpose of this study was to examine the dose distribution of a skull base tumor and surrounding critical structures in response to high-dose intensity-modulated radiosurgery (IMRS) with Monte Carlo (MC) simulation using a dual-resolution sandwich phantom. The measurement-based Monte Carlo (MBMC) method (Lin et al., 2009) was adopted for the study. The major components of the MBMC technique involve (1) the BEAMnrc code for beam transport through the treatment head of a Varian 21EX linear accelerator, (2) the DOSXYZnrc code for patient dose simulation and (3) an EPID-measured efficiency map which describes the non-uniform fluence distribution of the IMRS treatment beam. For the simulated case, five isocentric 6 MV photon beams were designed to deliver a total dose of 1200 cGy in two fractions to the skull base tumor. A sandwich phantom for the MBMC simulation was created based on the patient's CT scan of a skull base tumor [gross tumor volume (GTV) = 8.4 cm³] near the right 8th cranial nerve. The phantom, containing a 1.2-cm-thick skull base region, had a voxel resolution of 0.05×0.05×0.1 cm³ and was sandwiched in between 0.05×0.05×0.3 cm³ slices of a head phantom. A coarser 0.2×0.2×0.3 cm³ single-resolution (SR) phantom was also created for comparison with the sandwich phantom. A particle history of 3×10⁸ for each beam was used for simulations of both the SR and the sandwich phantoms to achieve a statistical uncertainty of <2%. Our study showed that the planning target volume (PTV) receiving at least 95% of the prescribed dose (VPTV95) was 96.9%, 96.7% and 99.9% for the TPS, SR, and sandwich phantom, respectively. The maximum and mean doses to large organs such as the PTV, brain stem, and parotid gland for the TPS, SR and sandwich MC simulations did not show any significant difference; however, significant dose differences were observed for very small structures like the right 8th cranial nerve, right cochlea, right malleus and right semicircular
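The VPTV95 coverage metric quoted above is simple to state precisely: the percentage of PTV voxels receiving at least 95% of the prescription. A minimal sketch, with invented voxel doses rather than data from the study:

```python
def v_ptv95(ptv_doses_cgy, prescribed_cgy):
    """Percentage of PTV voxels receiving at least 95% of the
    prescribed dose (the VPTV95 metric). Inputs are illustrative
    voxel dose samples, not patient data."""
    threshold = 0.95 * prescribed_cgy
    covered = sum(1 for d in ptv_doses_cgy if d >= threshold)
    return 100.0 * covered / len(ptv_doses_cgy)

# Four toy PTV voxels against the 1200 cGy prescription (threshold 1140 cGy):
coverage = v_ptv95([1210.0, 1150.0, 1198.0, 1050.0], 1200.0)  # 75.0
```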
Simulation of Ni-63 based nuclear micro battery using Monte Carlo modeling
International Nuclear Information System (INIS)
Kim, Tae Ho; Kim, Ji Hyun
2013-01-01
Radioisotope batteries have an energy density 100-10000 times greater than chemical batteries. Li-ion batteries, moreover, have fundamental problems such as short lifetimes and the need for a recharging system, and existing batteries are difficult to operate inside the human body, in national defense armaments, or in space environments. With advances in semiconductor processes and materials technology, micro devices have become far more integrated, and it is expected that, based on new semiconductor technology, the conversion efficiency of betavoltaic batteries will be greatly increased. Furthermore, the emitted beta particles cannot penetrate human skin, so such a battery is safer than a Li battery, which carries a risk of explosion. In other words, interest in radioisotope batteries is growing because they are applicable as power sources for artificial internal organs without recharge or replacement, micro sensors deployed in arctic and other special environments, small military equipment, and the space industry. However, there are not enough data on the beta particle fluence from the radioisotope source used in a nuclear battery. The beta particle fluence directly influences battery efficiency and is strongly affected by the radioisotope source thickness because of the self-absorption effect. Therefore, in this article, we present a basic design of a Ni-63 nuclear battery together with simulation data on the beta particle fluence for various thicknesses of the radioisotope source and designs of the battery.
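The self-absorption effect mentioned above can be illustrated with a textbook one-dimensional slab approximation: the beta current escaping one face of a uniform source layer saturates as the layer grows thicker than the beta range. This is only a hedged sketch of the physics, not the paper's MCNP model, and the attenuation coefficient used is arbitrary, not real Ni-63 data.

```python
import math

def surface_beta_flux(volumetric_source, mu, thickness_cm):
    """1D slab estimate of betas escaping one face of a self-absorbing
    source layer: J = (S / (2*mu)) * (1 - exp(-mu * t)).
    S: emission rate per unit volume; mu: effective attenuation
    coefficient (1/cm, illustrative); t: layer thickness (cm)."""
    return volumetric_source / (2.0 * mu) * (1.0 - math.exp(-mu * thickness_cm))

escape_thin = surface_beta_flux(1.0, 100.0, 0.001)   # far from saturation
escape_thick = surface_beta_flux(1.0, 100.0, 1.0)    # saturated near S/(2*mu)
```

The saturation behavior is why making the Ni-63 layer ever thicker stops paying off: beyond a few mean free paths the extra activity is absorbed inside the source itself.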
Li, Dong; Chen, Bin; Ran, Wei Yu; Wang, Guo Xiang; Wu, Wen Juan
2015-09-01
The voxel-based Monte Carlo method (VMC) is now a gold standard in the simulation of light propagation in turbid media. For complex tissue structures, however, the computational cost will be higher when small voxels are used to improve smoothness of tissue interfaces and a large number of photons are used to obtain accurate results. To reduce computational cost, criteria were proposed to determine the voxel size and photon number in 3-dimensional VMC simulations with acceptable accuracy and computation time. The selection of the voxel size can be expressed as a function of tissue geometry and optical properties. The photon number should be at least 5 times the total voxel number. These criteria are further applied in developing a photon ray splitting scheme with local grid refinement to reduce the computational cost of a nonuniform tissue structure with significantly varying optical properties. In the proposed technique, a nonuniform refined grid system is used, where fine grids are used for tissue with high absorption and complex geometry, and coarse grids are used for the remainder. In this technique, the total photon number is selected based on the voxel size of the coarse grid. Furthermore, the photon-splitting scheme is developed to satisfy the statistical accuracy requirement for the dense grid area. Results show that the local grid refinement technique with the photon ray splitting scheme can accelerate the computation by 7.6 times (reducing time consumption from 17.5 to 2.3 h) in the simulation of laser light energy deposition in skin tissue that contains port wine stain lesions.
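The quantitative rule stated above (photon number at least five times the total voxel number) can be encoded directly; the grid size used in the example is arbitrary:

```python
def minimum_photon_number(nx, ny, nz, factor=5):
    """Stated criterion: the photon number should be at least `factor`
    (5, per the study) times the total voxel number of the grid."""
    return factor * nx * ny * nz

# e.g. a 100x100x100 voxel grid calls for at least 5 million photons
n_photons = minimum_photon_number(100, 100, 100)
```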
Qin, Nan; Shen, Chenyang; Tsai, Min-Yu; Pinto, Marco; Tian, Zhen; Dedes, Georgios; Pompos, Arnold; Jiang, Steve B; Parodi, Katia; Jia, Xun
2018-01-01
One of the major benefits of carbon ion therapy is enhanced biological effectiveness in the Bragg peak region. For intensity modulated carbon ion therapy (IMCT), it is desirable to use Monte Carlo (MC) methods to compute the properties of each pencil beam spot for treatment planning, because of their accuracy in modeling physics processes and estimating biological effects. We previously developed goCMC, a graphics processing unit (GPU)-oriented MC engine for carbon ion therapy. The purpose of the present study was to build a biological treatment plan optimization system using goCMC. The repair-misrepair-fixation model was implemented to compute the spatial distribution of linear-quadratic model parameters for each spot. A treatment plan optimization module was developed to minimize the difference between the prescribed and actual biological effect. We used a gradient-based algorithm to solve the optimization problem. The system was embedded in the Varian Eclipse treatment planning system under a client-server architecture to achieve a user-friendly planning environment. We tested the system with a 1-dimensional homogeneous water case and three 3-dimensional patient cases. Our system generated treatment plans with biological spread-out Bragg peaks covering the targeted regions and sparing critical structures. Using 4 NVidia GTX 1080 GPUs, the total computation time, including spot simulation, optimization, and final dose calculation, was 0.6 hours for the prostate case (8282 spots), 0.2 hours for the pancreas case (3795 spots), and 0.3 hours for the brain case (6724 spots). The computation time was dominated by MC spot simulation. We built a biological treatment plan optimization system for IMCT that performs simulations using a fast MC engine, goCMC. To the best of our knowledge, this is the first time that full MC-based IMCT inverse planning has been achieved in a clinically viable time frame. Copyright © 2017 Elsevier Inc. All rights reserved.
Use of Monte Carlo analysis in a risk-based prioritization of toxic constituents in house dust.
Ginsberg, Gary L; Belleggia, Giuliana
2017-12-01
Many chemicals have been detected in house dust at levels of potential health concern for the general public, and particularly for young children. House dust is also an indicator of chemicals present in consumer products and the built environment that may constitute a health risk. The current analysis compiles a database of recent house dust concentrations from the United States and Canada, focusing upon semi-volatile constituents. Seven constituents from the phthalate and flame retardant categories were selected for risk-based screening and prioritization: diethylhexyl phthalate (DEHP), butyl benzyl phthalate (BBzP), diisononyl phthalate (DINP), a pentabrominated diphenyl ether congener (BDE-99), hexabromocyclododecane (HBCDD), tris(1,3-dichloro-2-propyl) phosphate (TDCIPP) and tris(2-chloroethyl) phosphate (TCEP). Monte Carlo analysis was used to represent the variability in house dust concentration as well as the uncertainty in the toxicology database in the estimation of children's exposure and risk. Constituents were prioritized based upon the percentage of the distribution of risk results for cancer and non-cancer endpoints that exceeded a hazard quotient (HQ) of 1. The greatest percent HQ exceedances were for DEHP (cancer and non-cancer), BDE-99 (non-cancer) and TDCIPP (cancer). Current uses and the potential for reducing levels of these constituents in house dust are discussed. Exposure and risk for other phthalates and flame retardants in house dust may increase if they are used to substitute for these prioritized constituents. Therefore, alternative assessment and green chemistry solutions are important elements in decreasing children's exposure to chemicals of concern in the indoor environment. Copyright © 2017 Elsevier Ltd. All rights reserved.
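The prioritization metric described above (the fraction of the Monte Carlo risk distribution exceeding HQ = 1) can be sketched as follows. Every number in this sketch, including the lognormal dust-concentration parameters, the child ingestion rate, body weight, and reference dose, is an illustrative placeholder, not a value from the study.

```python
import random

def hq_exceedance_fraction(n_trials=50_000, seed=1):
    """Fraction of Monte Carlo trials in which the hazard quotient
    (HQ = estimated dose / reference dose) exceeds 1.
    All exposure parameters below are assumed, illustrative values."""
    rng = random.Random(seed)
    ingestion_mg_dust_per_day = 50.0   # child dust ingestion (assumed)
    body_weight_kg = 15.0              # young child (assumed)
    rfd_mg_per_kg_day = 1e-4           # placeholder reference dose
    exceedances = 0
    for _ in range(n_trials):
        conc_mg_per_kg = rng.lognormvariate(3.0, 1.0)  # dust concentration
        # mg chemical/day = (mg/kg dust) * (kg dust ingested/day)
        dose = conc_mg_per_kg * ingestion_mg_dust_per_day * 1e-6 / body_weight_kg
        if dose / rfd_mg_per_kg_day > 1.0:
            exceedances += 1
    return exceedances / n_trials

frac_hq_above_1 = hq_exceedance_fraction()
```

Ranking constituents by this exceedance fraction, rather than by a single point estimate, is what lets the variability in dust concentration and the uncertainty in toxicity values both enter the prioritization.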
Monte Carlo simulation of moderator and reflector in coal analyzer based on a D-T neutron generator.
Shan, Qing; Chu, Shengnan; Jia, Wenbao
2015-11-01
Coal is one of the most popular fuels in the world. The use of coal not only produces carbon dioxide, but also contributes to environmental pollution by heavy metals. In a prompt gamma-ray neutron activation analysis (PGNAA)-based coal analyzer, the characteristic gamma rays of C and O are mainly induced by fast neutrons, whereas thermal neutrons can be used to induce the characteristic gamma rays of H, Si, and heavy metals. Therefore, appropriate thermal and fast neutron populations are beneficial in improving the measurement accuracy of heavy metals while ensuring that the measurement accuracy of the main elements meets the requirements of the industry. Once the required yield of the deuterium-tritium (d-T) neutron generator is determined, appropriate thermal and fast neutrons can be obtained by optimizing the neutron source term. In this article, the Monte Carlo N-Particle (MCNP) Transport Code and Evaluated Nuclear Data File (ENDF) database are used to optimize the neutron source term in a PGNAA-based coal analyzer, including the material and shape of the moderator and neutron reflector. The optimization has two targets: (1) the ratio of thermal to fast neutrons is 1:1 and (2) the total neutron flux in the sample from the optimized neutron source increases by at least 100% compared with the initial one. The simulation results show that the total neutron flux in the sample increases by 102%, 102%, 85%, 72%, and 62% with Pb, Bi, Nb, W, and Be reflectors, respectively. Maximum optimization of the targets is achieved when the moderator is a 3-cm-thick lead layer coupled with a 3-cm-thick high-density polyethylene (HDPE) layer, and the neutron reflector is a 27-cm-thick hemispherical lead layer. Copyright © 2015 Elsevier Ltd. All rights reserved.
Image quality assessment of LaBr3-based whole-body 3D PET scanners: a Monte Carlo evaluation
International Nuclear Information System (INIS)
Surti, S; Karp, J S; Muehllehner, G
2004-01-01
The main thrust of this work is the investigation and design of a whole-body PET scanner based on new lanthanum bromide scintillators. We use Monte Carlo simulations to generate data for a 3D PET scanner based on LaBr3 detectors, and to assess the count-rate capability and the reconstructed image quality of phantoms with hot and cold spheres using contrast and noise parameters. Previously we have shown that LaBr3 has very high light output, excellent energy resolution and fast timing properties, which can lead to the design of a time-of-flight (TOF) whole-body PET camera. The data presented here illustrate the performance of LaBr3 without the additional benefit of TOF information, although our intention is to develop a scanner with TOF measurement capability. The only drawbacks of LaBr3 are the lower stopping power and photo-fraction, which affect both sensitivity and spatial resolution. However, in 3D PET imaging, where energy resolution is very important for reducing scattered coincidences in the reconstructed image, the image quality attained in a non-TOF LaBr3 scanner can potentially equal or surpass that achieved with other high-sensitivity scanners. Our results show that there is a gain in NEC arising from the reduced scatter and random fractions in a LaBr3 scanner. The reconstructed image resolution is slightly worse than with a high-Z scintillator, but at increased count-rates, reduced pulse pileup leads to an image resolution similar to that of LSO. Image quality simulations predict reduced contrast for small hot spheres compared to an LSO scanner, but improved noise characteristics at similar clinical activity levels.
Nguyen, Phuong Hoa; Hofmann, Karl R.; Paasch, Gernot
2003-07-01
In a previous article [J. Appl. Phys. 92, 5359 (2002)], we presented a combination of a full-band Monte Carlo method using an advanced band structure and a variable Brillouin zone discretization, with phonon scattering rates based on the screened pseudopotential considering the positions of the atoms in the elementary cell. To make the method suitable for sufficiently fast applications, such as device simulations, the simplest wave-number-dependent approximation was introduced. It contains an average of the cell structure factor, and only two fit parameters: the acoustic and the optical deformation potentials. As the pseudopotential, the Ashcroft model potential is chosen, and screening is taken into account using the Lindhard dielectric function. In the present article, based on a study of the influence of the two deformation potentials on the electron and hole drift velocities in Si and Ge, we show how to select the deformation potentials. Depending on the targeted agreement with experimental results, the pairs of deformation potentials for electrons and holes can be used uniformly for a wide temperature range or separately for different temperatures. For Ge, we achieve remarkable quantitative agreement with the temperature, field, and orientation dependencies of experimental electron and hole drift velocities in the wide temperature range from 77 to 300 K with a single set of the two deformation potentials for each carrier type. A detailed comparative simulation of the transport properties in Ge and Si at different temperatures is presented, which comprises the steady-state dependence of the drift velocity on the electric field, the low-field mobility, and transient transport. Peculiarities of the drift velocity-field dependencies, such as the anisotropy and a negative differential mobility, are discussed in terms of the different band structures in connection with the field dependence of the simulated distribution functions. For doped materials, ionized
Jiang, Xu; Deng, Yong; Luo, Zhaoyang; Luo, Qingming
2015-10-05
The excessive time required by fluorescence diffuse optical tomography (fDOT) image reconstruction based on the path-history fluorescence Monte Carlo model is its primary limiting factor. Herein, we present a method that accelerates fDOT image reconstruction. We employ a three-level parallel architecture comprising multiple nodes in a cluster, multiple cores in the central processing unit (CPU), and multiple streaming multiprocessors in the graphics processing unit (GPU). Different GPU memories are selectively used, the data-writing time is effectively eliminated, and the data transport per iteration is minimized. Simulation experiments demonstrated that this method can utilize general-purpose computing platforms to efficiently implement and accelerate fDOT image reconstruction, thus providing a practical means of using the path-history-based fluorescence Monte Carlo model for fDOT imaging.
Energy Technology Data Exchange (ETDEWEB)
Kim, Kyungsang; Ye, Jong Chul, E-mail: jong.ye@kaist.ac.kr [Bio Imaging and Signal Processing Laboratory, Department of Bio and Brain Engineering, KAIST 291, Daehak-ro, Yuseong-gu, Daejeon 34141 (Korea, Republic of); Lee, Taewon; Cho, Seungryong [Medical Imaging and Radiotherapeutics Laboratory, Department of Nuclear and Quantum Engineering, KAIST 291, Daehak-ro, Yuseong-gu, Daejeon 34141 (Korea, Republic of); Seong, Younghun; Lee, Jongha; Jang, Kwang Eun [Samsung Advanced Institute of Technology, Samsung Electronics, 130, Samsung-ro, Yeongtong-gu, Suwon-si, Gyeonggi-do, 443-803 (Korea, Republic of); Choi, Jaegu; Choi, Young Wook [Korea Electrotechnology Research Institute (KERI), 111, Hanggaul-ro, Sangnok-gu, Ansan-si, Gyeonggi-do, 426-170 (Korea, Republic of); Kim, Hak Hee; Shin, Hee Jung; Cha, Joo Hee [Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympic-ro, 43-gil, Songpa-gu, Seoul, 138-736 (Korea, Republic of)
2015-09-15
Purpose: In digital breast tomosynthesis (DBT), scatter correction is highly desirable, as it improves image quality at low doses. Because the DBT detector panel is typically stationary during the source rotation, antiscatter grids are not generally compatible with DBT; thus, a software-based scatter correction is required. This work proposes a fully iterative scatter correction method that uses a novel fast Monte Carlo simulation (MCS) with a tissue-composition ratio estimation technique for DBT imaging. Methods: To apply MCS to scatter estimation, the material composition in each voxel should be known. To overcome the lack of prior accurate knowledge of tissue composition for DBT, a tissue-composition ratio is estimated based on the observation that the breast tissues are principally composed of adipose and glandular tissues. Using this approximation, the composition ratio can be estimated from the reconstructed attenuation coefficients, and the scatter distribution can then be estimated by MCS using the composition ratio. The scatter estimation and image reconstruction procedures can be performed iteratively until an acceptable accuracy is achieved. For practical use, (i) the authors have implemented a fast MCS using a graphics processing unit (GPU), (ii) the MCS is simplified to transport only x-rays in the energy range of 10–50 keV, modeling Rayleigh and Compton scattering and the photoelectric effect using the tissue-composition ratio of adipose and glandular tissues, and (iii) downsampling is used because the scatter distribution varies rather smoothly. Results: The authors have demonstrated that the proposed method can accurately estimate the scatter distribution, and that the contrast-to-noise ratio of the final reconstructed image is significantly improved. The authors validated the performance of the MCS by changing the tissue thickness, composition ratio, and x-ray energy. The authors confirmed that the tissue-composition ratio estimation was quite
International Nuclear Information System (INIS)
Vithayasrichareon, Peerapat; MacGill, Iain F.
2012-01-01
This paper presents a novel decision-support tool for assessing future generation portfolios in an increasingly uncertain electricity industry. The tool combines optimal generation mix concepts with Monte Carlo simulation and portfolio analysis techniques to determine expected overall industry costs, associated cost uncertainty, and expected CO2 emissions for different generation portfolio mixes. The tool can incorporate complex and correlated probability distributions for estimated future fossil-fuel costs, carbon prices, plant investment costs, and demand, including price elasticity impacts. The intent of this tool is to facilitate risk-weighted generation investment and associated policy decision-making given uncertainties facing the electricity industry. Applications of this tool are demonstrated through a case study of an electricity industry with coal, CCGT, and OCGT facing future uncertainties. Results highlight some significant generation investment challenges, including the impacts of uncertain and correlated carbon and fossil-fuel prices, the role of future demand changes in response to electricity prices, and the impact of construction cost uncertainties on capital-intensive generation. The tool can incorporate virtually any type of input probability distribution and support sophisticated risk assessments of different portfolios, including downside economic risks. It can also assess portfolios against multi-criterion objectives such as greenhouse emissions as well as overall industry costs. - Highlights: ► Presents a decision-support tool to assist generation investment and policy making under uncertainty. ► Generation portfolios are assessed based on their expected costs, risks, and CO2 emissions. ► There is a tradeoff among the expected costs, risks, and CO2 emissions of generation portfolios. ► Investment challenges include the economic impact of uncertainties and the effect of price elasticity. ► CO2 emissions reduction depends on the mix of
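The core Monte Carlo idea here (sample uncertain fuel and carbon prices, then report the expected cost and cost spread of a portfolio mix) can be sketched in a few lines. All prices, heat-rate terms, and emission factors below are invented placeholders, prices are sampled independently rather than with the correlated distributions the tool supports, and the technology names are only illustrative.

```python
import random
import statistics

def simulate_portfolio(weights, n_trials=20_000, seed=7):
    """Expected cost and cost spread ($/MWh) of a generation portfolio
    under fuel- and carbon-price uncertainty. All cost parameters are
    illustrative, uncalibrated placeholders."""
    rng = random.Random(seed)
    costs = []
    for _ in range(n_trials):
        carbon = max(0.0, rng.gauss(25.0, 10.0))   # $/tCO2 (assumed)
        gas = max(1.0, rng.gauss(8.0, 2.0))        # $/GJ (assumed)
        unit_cost = {                               # $/MWh per technology
            "coal": 25.0 + 0.90 * carbon,
            "ccgt": 12.0 + 7.2 * gas + 0.37 * carbon,
            "ocgt": 15.0 + 10.5 * gas + 0.55 * carbon,
        }
        costs.append(sum(w * unit_cost[t] for t, w in weights.items()))
    return statistics.mean(costs), statistics.stdev(costs)

mean_cost, cost_sd = simulate_portfolio({"coal": 0.5, "ccgt": 0.4, "ocgt": 0.1})
```

Sweeping `weights` over candidate mixes and plotting mean cost against standard deviation yields the cost-risk tradeoff frontier the paper describes.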
Monte Carlo-based evaluation of S-values in mouse models for positron-emitting radionuclides
Xie, Tianwu; Zaidi, Habib
2013-01-01
In addition to being a powerful clinical tool, positron emission tomography (PET) is also used in small laboratory animal research to visualize and track certain molecular processes associated with diseases such as cancer, heart disease and neurological disorders in living small animal models of disease. However, the dosimetric characteristics of small animal PET imaging are usually overlooked, though the radiation dose may not be negligible. In this work, we constructed 17 mouse models of different body mass and size based on the realistic four-dimensional MOBY mouse model. Particle (photon, electron and positron) transport using the Monte Carlo method was performed to calculate the absorbed fractions and S-values for eight positron-emitting radionuclides (C-11, N-13, O-15, F-18, Cu-64, Ga-68, Y-86 and I-124). Among these radionuclides, O-15 emits positrons with high energy and frequency and produces the highest self-absorbed S-values in each organ, while Y-86 emits γ-rays with high energy and frequency, which results in the highest cross-absorbed S-values for non-neighbouring organs. Differences between S-values for self-irradiated organs were between 2% and 3% per gram of difference in body weight for most organs. For organs irradiating other organs outside the splanchnocoele (i.e. brain, testis and bladder), differences between S-values were lower than 1%/g. These results can be used to assess variations in small animal dosimetry as a function of total-body mass. The generated database of S-values for various radionuclides can be used in the assessment of radiation dose to mice from different radiotracers in small animal PET experiments, thus offering quantitative figures for comparative dosimetry research in small animal models.
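For readers unfamiliar with S-values, the underlying MIRD formalism is compact: the dose rate per unit activity is the mean energy emitted per decay times the absorbed fraction, divided by the target mass. The sketch below uses rounded textbook F-18 emission numbers and an arbitrary 1 g organ; it is not the paper's MOBY-based result.

```python
MEV_TO_J = 1.602176634e-13  # conversion factor, MeV to joules

def s_value_gy_per_decay(emissions, absorbed_fraction, target_mass_kg):
    """MIRD-style S-value: S = (sum_i y_i * E_i) * phi / m.
    emissions: list of (yield per decay, mean energy in MeV) pairs.
    absorbed_fraction: fraction phi of emitted energy absorbed in target.
    """
    energy_j_per_decay = sum(y * e_mev * MEV_TO_J for y, e_mev in emissions)
    return energy_j_per_decay * absorbed_fraction / target_mass_kg

# F-18 positron branch: ~0.97 positrons/decay at ~0.25 MeV mean energy,
# assumed fully self-absorbed (phi = 1) in a hypothetical 1 g source organ.
s_self = s_value_gy_per_decay([(0.97, 0.25)], 1.0, 1e-3)
```

The Monte Carlo transport in the paper is what supplies the absorbed fractions `phi`, including the cross-organ terms this closed-form sketch omits.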
Barbeiro, A R; Ureba, A; Baeza, J A; Linares, R; Perucha, M; Jiménez-Ortega, E; Velázquez, S; Mateos, J C; Leal, A
2016-01-01
A model based on a specific phantom, called QuAArC, has been designed for the evaluation of planning and verification systems for complex radiotherapy treatments, such as volumetric modulated arc therapy (VMAT). This model uses the high accuracy provided by the Monte Carlo (MC) simulation of log files and allows experimental feedback from the high spatial resolution of films hosted in QuAArC. This cylindrical phantom was specifically designed to host films rolled at different radial distances, able to take into account the entrance fluence and the 3D dose distribution. Ionization chamber measurements are also included in the feedback process for absolute dose considerations. In this way, automated MC simulation of treatment log files is implemented to calculate the actual delivery geometries, while the monitor units are experimentally adjusted to reconstruct the dose-volume histogram (DVH) on the patient CT. Prostate and head-and-neck clinical cases, previously planned with the Monaco and Pinnacle treatment planning systems and verified with two different commercial systems (Delta4 and COMPASS), were selected in order to test the operational feasibility of the proposed model. The proper operation of the feedback procedure was demonstrated by the high agreement achieved between reconstructed dose distributions and the film measurements (global gamma passing rates > 90% for the 2%/2 mm criteria). The necessary discretization level of the log file for dose calculation and the potential mismatch between calculated control points and the detection grid in the verification process were discussed. Besides the effect of the dose calculation accuracy of the analytic algorithm implemented in treatment planning systems for a dynamic technique, the importance of the detection density level and its location in the VMAT-specific phantom for obtaining a more reliable DVH on the patient CT was also discussed. The proposed model also showed enough robustness and efficiency to be considered as a pre
A Comparative Depletion Analysis using MCNP6 and REBUS-3 for Advanced SFR Burner Core
Energy Technology Data Exchange (ETDEWEB)
You, Wu Seung; Hong, Ser Gi [Kyung Hee University, Yongin (Korea, Republic of)
2016-05-15
In this paper, we evaluated the accuracy of fast reactor design codes by comparing MCNP6-based Monte Carlo simulation with REBUS-3-based nodal transport calculations for the initial cycle of an advanced uranium-free fueled SFR burner core having large heterogeneities. It was shown that the nodal diffusion calculation in REBUS-3 gave a large difference in initial k-effective of 2132 pcm when compared with the MCNP6 depletion calculation using a heterogeneous model. Validation of the code system for fast reactor design is one of the important research topics. In our previous studies, depletion analysis and physics parameter evaluation of the fast reactor core were performed with the REBUS-3 and DIF3D codes, respectively. In particular, the depletion analysis was done with lumped fission products. However, it is necessary to verify the accuracy of these calculation methodologies using Monte Carlo neutron transport calculation coupled with explicit treatment of fission products. In this study, the accuracy of fast reactor design codes and procedures was evaluated using the MCNP6 code and VARIANT nodal transport calculations for the initial cycle of an advanced sodium-cooled burner core loaded with uranium-free fuels. It was concluded that the REBUS-3 nodal diffusion option cannot be used to accurately perform the depletion calculations, and that the VARIANT nodal transport or VARIANT SP3 options are required for this kind of heterogeneous burner core loaded with uranium-free fuel. The control rod worths with the nodal diffusion and transport options were estimated with discrepancies of less than 12%, while these methods gave large discrepancies of 12.2% and 16.9%, respectively, for the sodium void worth at BOC. It is considered that these large discrepancies in sodium void worth result from inaccurate treatment of the spectrum change in the multi-group cross sections.
A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC).
Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B; Jia, Xun
2015-10-07
Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, a lot of effort has been made to realize fast MC dose calculation on graphics processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia's CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon-electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783-97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence can be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in the EGSnrc MC package and PENELOPE's random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogeneous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48-0.53% for the electron beam cases and 0.15-0.17% for the photon beam cases. In terms of efficiency, goMC was ~4-16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was validated by
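The comparison metric quoted above (average dose difference restricted to voxels receiving more than 10% of the maximum dose) is easy to make precise. One common convention, assumed here, normalizes each voxel difference by the maximum dose; the tiny grids in the example are invented, not goMC/gDPM data.

```python
def mean_dose_difference_pct(dose_ref, dose_test, threshold_frac=0.10):
    """Average absolute dose difference, as a percent of the maximum
    dose, over voxels receiving more than `threshold_frac` of the
    maximum dose. Normalization convention is an assumption."""
    dmax = max(dose_ref)
    cutoff = threshold_frac * dmax
    diffs = [abs(a - b) / dmax * 100.0
             for a, b in zip(dose_ref, dose_test) if a > cutoff]
    return sum(diffs) / len(diffs)

# Three toy voxels; the third falls below the 10% cutoff and is excluded.
diff_pct = mean_dose_difference_pct([1.0, 0.5, 0.05], [0.99, 0.51, 0.05])
```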
Wan Chan Tseung, H; Ma, J; Beltran, C
2015-06-01
Very fast Monte Carlo (MC) simulations of proton transport have been implemented recently on graphics processing units (GPUs). However, these MCs usually use simplified models for nonelastic proton-nucleus interactions. Our primary goal is to build a GPU-based proton transport MC with detailed modeling of elastic and nonelastic proton-nucleus collisions. Using the CUDA framework, the authors implemented GPU kernels for the following tasks: (1) simulation of beam spots from our possible scanning nozzle configurations, (2) proton propagation through CT geometry, taking into account nuclear elastic scattering, multiple scattering, and energy loss straggling, (3) modeling of the intranuclear cascade stage of nonelastic interactions when they occur, (4) simulation of nuclear evaporation, and (5) statistical error estimates on the dose. To validate our MC, the authors performed (1) secondary particle yield calculations in proton collisions with therapeutically relevant nuclei, (2) dose calculations in homogeneous phantoms, (3) recalculations of complex head and neck treatment plans from a commercially available treatment planning system, and compared with Geant4.9.6p2/TOPAS. Yields, energy, and angular distributions of secondaries from nonelastic collisions on various nuclei are in good agreement with the Geant4.9.6p2 Bertini and Binary cascade models. The 3D-gamma pass rate at 2%/2 mm for treatment plan simulations is typically 98%. The net computational time on a NVIDIA GTX680 card, including all CPU-GPU data transfers, is ~20 s for 1×10⁷ proton histories. Our GPU-based MC is the first of its kind to include a detailed nuclear model to handle nonelastic interactions of protons with any nucleus. Dosimetric calculations are in very good agreement with Geant4.9.6p2/TOPAS. Our MC is being integrated into a framework to perform fast routine clinical QA of pencil-beam based treatment plans, and is being used as the dose calculation engine in a clinically
Al-Subeihi, A.A.; Alhusainy, W.; Kiwamoto, R.; Spenkelink, A.; van Bladeren, P.J.; Rietjens, I.M.C.M.; Punt, A.
2015-01-01
The present study aims at predicting the level of formation of the ultimate carcinogenic metabolite of methyleugenol, 1'-sulfooxymethyleugenol, in the human population by taking variability in key bioactivation and detoxification reactions into account using Monte Carlo simulations. Depending on the
Analysis of the use of the databases of the Carlos III University of Madrid library
Directory of Open Access Journals (Sweden)
Suárez Balseiro, Carlos
2001-03-01
In this paper we identify and show, through the application of statistical techniques and unidimensional and multidimensional indicators, some of the characteristics revealed by the user community of the Carlos III University of Madrid in their use of databases. The information for this study has been obtained from the analysis of the accesses made through the Access to Databases Service of the Carlos III University library, paying special attention to the behaviour of the teaching departments during the 1995-1998 period. The evolution in the use of online databases is assessed, and the indicators that typify the trends and patterns of use by those users are analysed. Some criteria on the interaction between the user, new technologies and electronic sources of information in university libraries are also presented.
International Nuclear Information System (INIS)
Becker, N.M.; Vanta, E.B.
1995-01-01
Hydrologic investigations on depleted uranium fate and transport associated with dynamic testing activities were instituted in the 1980's at Los Alamos National Laboratory and Eglin Air Force Base. At Los Alamos, extensive field watershed investigations of soil, sediment, and especially runoff water were conducted. Eglin conducted field investigations and runoff studies similar to those at Los Alamos at former and active test ranges. Laboratory experiments complemented the field investigations at both installations. Mass balance calculations were performed to quantify the mass of expended uranium which had transported away from firing sites. At Los Alamos, it is estimated that more than 90 percent of the uranium still remains in close proximity to firing sites, which has been corroborated by independent calculations. At Eglin, we estimate that 90 to 95 percent of the uranium remains at test ranges. These data demonstrate that uranium moves slowly via surface water, in both semi-arid (Los Alamos) and humid (Eglin) environments
Energy Technology Data Exchange (ETDEWEB)
Becker, N.M. [Los Alamos National Lab., NM (United States); Vanta, E.B. [Wright Laboratory Armament Directorate, Eglin Air Force Base, FL (United States)
1995-05-01
International Nuclear Information System (INIS)
Yang, Ching-Ching; Chan, Kai-Chieh
2013-06-01
Small animal PET allows qualitative assessment and quantitative measurement of biochemical processes in vivo, but the accuracy and reproducibility of imaging results can be affected by several parameters. The first aim of this study was to investigate the performance of different CT-based attenuation correction strategies and assess the resulting impact on PET images. The absorbed dose in different tissues caused by scanning procedures was also discussed, to minimize biologic damage generated by radiation exposure due to PET/CT scanning. A small animal PET/CT system was modeled based on Monte Carlo simulation to generate imaging results and dose distribution. Three energy mapping methods, including the bilinear scaling method, the dual-energy method, and the hybrid method, which combines the kVp conversion and the dual-energy method, were investigated comparatively by assessing the accuracy of estimating the linear attenuation coefficient at 511 keV and the bias introduced into PET quantification results due to CT-based attenuation correction. Our results showed that the hybrid method outperformed the bilinear scaling method, while the dual-energy method achieved the highest accuracy among the three energy mapping methods. Overall, the accuracy of PET quantification results has a similar trend to that of the estimation of linear attenuation coefficients, whereas the differences between the three methods are more obvious in the estimation of linear attenuation coefficients than in the PET quantification results. With regard to radiation exposure from CT, the absorbed dose ranged between 7.29-45.58 mGy for the 50-kVp scan and between 6.61-39.28 mGy for the 80-kVp scan. For an 18F radioactivity concentration of 1.86×10^5 Bq/ml, the PET absorbed dose was around 24 cGy for a tumor with a target-to-background ratio of 8. The radiation levels for CT scans are not lethal to the animal, but concurrent use of PET in a longitudinal study can increase the risk of biological effects. The
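The bilinear scaling method named above maps CT numbers to 511 keV linear attenuation coefficients with two linear branches: an air-water mixture branch below 0 HU and a shallower water-bone branch above. A hedged sketch; the attenuation coefficients and the 1000 HU bone anchor are generic illustrative values, not the calibration used in the study:

```python
MU_WATER_511 = 0.096  # cm^-1, approx. water attenuation at 511 keV
MU_BONE_511 = 0.172   # cm^-1, assumed value for cortical bone

def bilinear_mu_511(hu):
    """Bilinear scaling of a CT number (HU) to mu at 511 keV."""
    if hu <= 0:
        # linear between air (-1000 HU -> 0) and water (0 HU)
        return MU_WATER_511 * (1.0 + hu / 1000.0)
    # water-bone branch, anchored so 1000 HU maps to the bone value
    return MU_WATER_511 + hu * (MU_BONE_511 - MU_WATER_511) / 1000.0
```

The dual-energy method replaces the fixed second branch with a per-voxel decomposition from two CT scans, which is why it resolves the bone/contrast ambiguity that limits the bilinear approach.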
Energy Technology Data Exchange (ETDEWEB)
Hunt, J.G. [Institute of Radiation Protection and Dosimetry, Av. Salvador Allende s/n, Recreio, Rio de Janeiro, CEP 22780-160 (Brazil); Watchman, C.J. [Department of Radiation Oncology, University of Arizona, Tucson, AZ, 85721 (United States); Bolch, W.E. [Department of Nuclear and Radiological Engineering, University of Florida, Gainesville, FL, 32611 (United States); Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611 (United States)
2007-07-01
Absorbed fraction (AF) calculations to the human skeletal tissues due to alpha particles are of interest to the internal dosimetry of occupationally exposed workers and members of the public. The transport of alpha particles through the skeletal tissue is complicated by the detailed and complex microscopic histology of the skeleton. In this study, both Monte Carlo and chord-based techniques were applied to the transport of alpha particles through 3-D micro-CT images of the skeletal microstructure of trabecular spongiosa. The Monte Carlo program used was 'Visual Monte Carlo-VMC'. VMC simulates the emission of the alpha particles and their subsequent energy deposition track. The second method applied to alpha transport is the chord-based technique, which randomly generates chord lengths across bone trabeculae and the marrow cavities via alternate and uniform sampling of their cumulative density functions. This paper compares the AF of energy to two radiosensitive skeletal tissues, active marrow and shallow active marrow, obtained with these two techniques. (authors)
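The chord-based technique described above can be caricatured with a toy alternate-sampling loop. This sketch assumes exponential chord-length distributions (sampled by inverse CDF via `expovariate`) and a constant-LET straight track; the mean chord lengths, alpha energy and LET are invented round numbers, not the study's micro-CT-derived distributions:

```python
import random

def alpha_marrow_fraction(energy_mev, let_mev_per_um,
                          mean_trab_um=200.0, mean_marrow_um=500.0,
                          seed=1):
    """Alternate chord-length sampling across bone trabeculae and
    marrow cavities; returns the fraction of the alpha track length
    (proportional to energy under constant LET) spent in marrow."""
    rng = random.Random(seed)
    track_um = energy_mev / let_mev_per_um  # crude constant-LET range
    in_marrow = rng.random() < 0.5          # random starting region
    remaining, marrow_path = track_um, 0.0
    while remaining > 0.0:
        mean = mean_marrow_um if in_marrow else mean_trab_um
        chord = rng.expovariate(1.0 / mean)  # inverse-CDF chord sample
        step = min(chord, remaining)
        if in_marrow:
            marrow_path += step
        remaining -= step
        in_marrow = not in_marrow            # alternate regions
    return marrow_path / track_um

frac = alpha_marrow_fraction(energy_mev=6.0, let_mev_per_um=0.1)
```

Averaging this fraction over many emissions gives a chord-based absorbed-fraction estimate to compare against the image-based Monte Carlo transport.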
International Nuclear Information System (INIS)
Verma, Amit K.; Anilkumar, S.; Narayani, K.; Babu, D.A.R.; Sharma, D.N.
2012-01-01
The gamma ray spectrometry technique is commonly used for the assessment of radioactivity in environmental matrices like water, soil, vegetation etc. The detector system used for a gamma ray spectrometer should be calibrated for each geometry considered and for different gamma energies. It is very difficult to have radionuclide standards covering all photon energies, and it is also not feasible to make standard geometries for common applications. So there is a need to develop computational techniques to determine the absolute efficiencies of these detectors for practical geometries and for the energies of common radioactive sources. A Monte Carlo based simulation method is proposed to study the response of the detector for various energies and geometries. From the simulated spectrum it is possible to calculate the efficiency at the gamma energy for the particular geometry modeled. The efficiency calculated by this method has to be validated experimentally using standard sources in laboratory conditions for selected geometries. For the present work, simulation studies were undertaken for the 3″ × 3″ NaI(Tl) detector based gamma spectrometry system set up in our laboratory. In order to see the effectiveness of the method for low level radioactivity measurement, it was planned to use a low-activity standard source of 40K in cylindrical container geometry. A suitable detector and geometry model was developed using a KCl standard of the same size, composition, density and radionuclide content. The simulation data generated were compared with the experimental spectral data taken for counting periods of 20000 sec and 50000 sec. The peak areas obtained from the simulated spectra were compared with those of the experimental spectral data. It was observed that the count rate (9.03 cps) in the simulated peak area is in agreement with the experimental peak area count rates (8.44 cps and 8.4 cps). The efficiency of the detector calculated by this method offers an alternative
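The absolute full-energy-peak efficiency follows directly from the peak count rate and the source gamma emission rate. A sketch using the count rates quoted in the abstract; the 40K source activity (5 kBq) is an assumed placeholder, since the actual activity is not given, and 0.107 is the approximate 1460.8 keV gamma yield per 40K decay:

```python
def fullenergy_peak_efficiency(peak_counts, live_time_s,
                               activity_bq, gamma_yield):
    """Absolute full-energy-peak efficiency: net peak count rate
    divided by the gamma emission rate of the source."""
    count_rate = peak_counts / live_time_s
    emission_rate = activity_bq * gamma_yield
    return count_rate / emission_rate

# 9.03 cps (simulated) and 8.44 cps (measured) over a 20000 s count;
# the 5000 Bq activity is an assumed illustrative value
eff_sim = fullenergy_peak_efficiency(9.03 * 20000, 20000.0, 5000.0, 0.107)
eff_exp = fullenergy_peak_efficiency(8.44 * 20000, 20000.0, 5000.0, 0.107)
rel_diff = abs(eff_sim - eff_exp) / eff_exp  # ~7% simulated vs measured
```

The ~7% relative difference between the simulated and measured rates is the figure of merit for accepting the simulated efficiency as a calibration substitute.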
TH-A-18C-04: Ultrafast Cone-Beam CT Scatter Correction with GPU-Based Monte Carlo Simulation
Energy Technology Data Exchange (ETDEWEB)
Xu, Y [UT Southwestern Medical Center, Dallas, TX (United States); Southern Medical University, Guangzhou (China); Bai, T [UT Southwestern Medical Center, Dallas, TX (United States); Xi' an Jiaotong University, Xi' an (China); Yan, H; Ouyang, L; Wang, J; Pompos, A; Jiang, S; Jia, X [UT Southwestern Medical Center, Dallas, TX (United States); Zhou, L [Southern Medical University, Guangzhou (China)
2014-06-15
Purpose: Scatter artifacts severely degrade image quality of cone-beam CT (CBCT). We present an ultrafast scatter correction framework using GPU-based Monte Carlo (MC) simulation and a prior patient CT image, aiming to automatically finish the whole process, including both scatter correction and reconstruction, within 30 seconds. Methods: The method consists of six steps: 1) FDK reconstruction using raw projection data; 2) rigid registration of the planning CT to the FDK results; 3) MC scatter calculation at sparse view angles using the planning CT; 4) interpolation of the calculated scatter signals to other angles; 5) removal of scatter from the raw projections; 6) FDK reconstruction using the scatter-corrected projections. In addition to using the GPU to accelerate MC photon simulations, we also use a small number of photons and a down-sampled CT image in simulation to further reduce computation time. A novel denoising algorithm is used to eliminate MC scatter noise caused by low photon numbers. The method is validated on head-and-neck cases with simulated and clinical data. Results: We have studied the impacts of photon histories and volume down-sampling factors on the accuracy of scatter estimation. Fourier analysis was conducted to show that scatter images calculated at 31 angles are sufficient to restore those at all angles with <0.1% error. For the simulated case with a resolution of 512×512×100, we simulated 10M photons per angle. The total computation time is 23.77 seconds on a Nvidia GTX Titan GPU. The scatter-induced shading/cupping artifacts are substantially reduced, and the average HU error of a region-of-interest is reduced from 75.9 to 19.0 HU. Similar results were found for a real patient case. Conclusion: A practical ultrafast MC-based CBCT scatter correction scheme is developed. The whole process of scatter correction and reconstruction is accomplished within 30 seconds. This study is supported in part by NIH (1R01CA154747-01), The Core Technology Research
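Steps 3-5 above hinge on computing scatter at sparse view angles and interpolating to all views before subtraction. A minimal NumPy sketch of that interpolation, exploiting the slow angular variation of scatter; the angle counts, detector size and synthetic scatter signal are invented, and the real method operates on 2D projections with an added denoising step:

```python
import numpy as np

def scatter_correct(raw_projections, angles_deg,
                    sparse_angles_deg, sparse_scatter):
    """Interpolate MC scatter estimates from sparse view angles to all
    projection angles (periodic over 360 deg), then subtract them."""
    scatter_all = np.empty_like(raw_projections)
    for pix in range(raw_projections.shape[1]):
        scatter_all[:, pix] = np.interp(
            angles_deg, sparse_angles_deg, sparse_scatter[:, pix],
            period=360.0)
    return raw_projections - scatter_all

angles = np.arange(0.0, 360.0, 1.0)                   # 360 views
sparse = np.linspace(0.0, 360.0, 31, endpoint=False)  # ~31 MC angles
# synthetic, slowly varying scatter signal on a 4-pixel detector row
true_scatter = 5.0 + np.sin(np.radians(sparse))[:, None] * np.ones((31, 4))
raw = 100.0 + np.zeros((360, 4))
corrected = scatter_correct(raw, angles, sparse, true_scatter)
```

Because scatter is low-frequency in the angular direction, 31 MC angles suffice to reconstruct all 360, which is exactly what the Fourier argument in the abstract formalizes.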
Nguyen, Phuong Hoa; Hofmann, Karl R.; Paasch, Gernot
2002-11-01
In advanced full-band Monte Carlo (MC) models, the Nordheim approximation with a spherical Wigner-Seitz cell for a lattice with two atoms per elementary cell is still common, and in the most detailed work on silicon by Kunikiyo et al. [J. Appl. Phys. 74, 297 (1994)], the atomic positions in the cell have been incorrectly introduced in the phonon scattering rates. In this article the correct expressions for the phonon scattering rates based on the screened pseudopotential are formulated for the case of several atoms per unit cell. Furthermore, the simplest wave number dependent approximation is introduced, which contains an average of the cell structure factor and the acoustic and the optical deformation potentials as two parameters to be fitted. While the band structure is determined by the pseudopotential at the reciprocal lattice vectors, the phonon scattering rates are essentially determined by wave numbers below the smallest reciprocal lattice vector. Thus, in the phonon scattering rates, the pseudopotential form factor is modeled by the simple Ashcroft model potential, in contrast to the full band structure, which is calculated using a nonlocal pseudopotential scheme. The parameter in the Ashcroft model potential is determined using a method based on the equilibrium condition. For the screening of the pseudopotential form factor, the Lindhard dielectric function is used. Compared to the Nordheim approximation with a spherical Wigner-Seitz cell, the approximation results in up to 10% lower phonon scattering rates. Examples from a detailed comparison of the influence of the two deformation potentials on the electron and hole drift velocities are presented for Ge and Si at different temperatures. The results are prerequisite for a well-founded choice of the two deformation potentials as fit parameters and they provide an explanation of the differences between the two materials, the origin of the anisotropy of the drift velocities, and the origin of the dent in
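The screened form factor described here, an Ashcroft empty-core potential divided by the static Lindhard dielectric function, is compact enough to write down in Hartree atomic units. The silicon-like parameters below (valence Z = 4, Fermi wavevector, core radius) are assumed round numbers for illustration, not the fitted values of the article:

```python
import math

def lindhard_eps(q, kf):
    """Static Lindhard dielectric function (Hartree atomic units)."""
    x = q / (2.0 * kf)
    ktf2 = 4.0 * kf / math.pi  # Thomas-Fermi wavevector squared
    if abs(x - 1.0) < 1e-12:
        f = 0.5                # removable singularity at x = 1
    else:
        f = 0.5 + (1.0 - x * x) / (4.0 * x) * math.log(abs((1.0 + x) / (1.0 - x)))
    return 1.0 + ktf2 / (q * q) * f

def screened_ashcroft(q, z_val, rc, kf):
    """Ashcroft empty-core form factor -4*pi*Z*cos(q*rc)/q^2,
    screened by the Lindhard dielectric function."""
    bare = -4.0 * math.pi * z_val * math.cos(q * rc) / (q * q)
    return bare / lindhard_eps(q, kf)

kf_si, rc_si = 0.96, 1.12  # assumed Si-like values (atomic units)
v_long = screened_ashcroft(0.01, 4.0, rc_si, kf_si)   # -> -4*pi*Z/k_TF^2
q_node = math.pi / (2.0 * rc_si)                      # empty-core node
v_node = screened_ashcroft(q_node, 4.0, rc_si, kf_si)
```

The screened potential tends to the finite Thomas-Fermi limit as q goes to 0 and vanishes at q = pi/(2*rc), the node that makes the empty-core form useful for fitting low-q scattering.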
Monte Carlo techniques in radiation therapy
Verhaegen, Frank
2013-01-01
Modern cancer treatment relies on Monte Carlo simulations to help radiotherapists and clinical physicists better understand and compute radiation dose from imaging devices as well as exploit four-dimensional imaging data. With Monte Carlo-based treatment planning tools now available from commercial vendors, a complete transition to Monte Carlo-based dose calculation methods in radiotherapy could likely take place in the next decade. Monte Carlo Techniques in Radiation Therapy explores the use of Monte Carlo methods for modeling various features of internal and external radiation sources, including light ion beams. The book, the first of its kind, addresses applications of the Monte Carlo particle transport simulation technique in radiation therapy, mainly focusing on external beam radiotherapy and brachytherapy. It presents the mathematical and technical aspects of the methods in particle transport simulations. The book also discusses the modeling of medical linacs and other irradiation devices; issues specific...
te Brake, B.; Hanssen, R.F.; van der Ploeg, M.J.; de Rooij, G.H.
2013-01-01
Satellite-based radar interferometry is a technique capable of measuring small surface elevation changes at large scales and with a high resolution. In vadose zone hydrology, it has been recognized for a long time that surface elevation changes due to swell and shrinkage of clayey soils can serve as
Abraham, Reggie
2018-04-01
This article revisits Donald Capps's book The Depleted Self (The depleted self: sin in a narcissistic age. Fortress Press, Minneapolis, 1993), which grew out of his 1990 Schaff Lectures at Pittsburgh Theological Seminary. In these lectures Capps proposed that the theology of guilt had dominated much of post-Reformation discourse. But with the growing prevalence of the narcissistic personality in the late twentieth century, the theology of guilt no longer adequately expressed humanity's sense of "wrongness" before God. Late twentieth-century persons sense this disjunction between God and self through shame dynamics. Narcissists are not "full" of themselves, as popular perspectives might indicate. Instead, they are empty, depleted selves. Psychologists suggest this stems from lack of emotional stimulation and the absence of mirroring in the early stages of life. The narcissist's search for attention and affirmation takes craving, paranoid, manipulative, or phallic forms and is essentially a desperate attempt to fill the internal emptiness. Capps suggests that two narratives from the Gospels are helpful here: the story of the woman with the alabaster jar and the story of Jesus's dialogue with Mary and John at Calvary. These stories provide us with clues as to how depleted selves experienced mirroring and the potential for internal peace in community with Jesus.
Coldiron, B M
1996-03-01
Stratospheric ozone depletion due to chlorofluorocarbons and increased ultraviolet radiation penetration has long been predicted. This review aims to determine whether predictions of ozone depletion are correct and, if so, the significance of this depletion, based on a review of the English literature regarding ozone depletion and solar ultraviolet radiation. The ozone layer is showing definite thinning. Recently, significantly increased ultraviolet radiation transmission has been detected at ground level at several metering stations. It appears that man-made aerosols (air pollution) block increased UVB transmission in urban areas. Recent satellite measurements of stratospheric fluorine levels more directly implicate chlorofluorocarbons as a major source of catalytic stratospheric chlorine, although natural sources may account for up to 40% of stratospheric chlorine. Stratospheric chlorine concentrations, and resultant increased ozone destruction, will be enhanced for at least the next 70 years. The potential for increased transmission of ultraviolet radiation will exist for the next several hundred years. While little damage due to increased ultraviolet radiation has occurred so far, the potential for long-term problems is great.
Variational Monte Carlo Technique
Indian Academy of Sciences (India)
ias
Resonance, August 2014, General Article. Variational Monte Carlo Technique: Ground State Energies of Quantum Mechanical Systems. Sukanta Deb. Keywords: variational methods, Monte Carlo techniques, harmonic oscillators, quantum mechanical systems. Sukanta Deb is an Assistant Professor in the
Indian Academy of Sciences (India)
Keywords: Gibbs sampling, Markov chain Monte Carlo, Bayesian inference, stationary distribution, convergence, image restoration. Arnab Chakraborty. We describe the mathematics behind the Markov chain Monte Carlo method of ...
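Gibbs sampling, the first technique in the keywords above, draws from a joint distribution by cycling through its full conditionals. A minimal, self-contained example for a standard bivariate normal with correlation rho, where both full conditionals are univariate normals (all parameters are invented for illustration):

```python
import random

def gibbs_bivariate_normal(rho, n_samples, burn_in=500, seed=42):
    """Gibbs sampler for a standard bivariate normal with correlation
    rho: x | y ~ N(rho*y, 1-rho^2) and y | x ~ N(rho*x, 1-rho^2)."""
    rng = random.Random(seed)
    sd = (1.0 - rho * rho) ** 0.5
    x = y = 0.0
    samples = []
    for i in range(burn_in + n_samples):
        x = rng.gauss(rho * y, sd)  # draw x from its full conditional
        y = rng.gauss(rho * x, sd)  # draw y from its full conditional
        if i >= burn_in:
            samples.append((x, y))
    return samples

draws = gibbs_bivariate_normal(rho=0.8, n_samples=20000)
mean_x = sum(d[0] for d in draws) / len(draws)
corr_num = sum(d[0] * d[1] for d in draws) / len(draws)  # estimates rho
```

After burn-in the chain's stationary distribution is the target joint, so the empirical mean approaches 0 and the empirical E[xy] approaches rho, illustrating the convergence notion in the keywords.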
Ngaile, J. E.; Msaki, P. K.; Kazema, R. R.
2018-04-01
Contrast investigations of hysterosalpingography (HSG) and retrograde urethrography (RUG) fluoroscopy procedures remain the dominant diagnostic tools for the investigation of infertility in females and urethral strictures in males, respectively, owing to the scarcity and high cost of services of alternative diagnostic technologies. In light of the radiological risks associated with contrast based investigations of the genitourinary tract systems, there is a need to assess the magnitude of the radiation burden imparted to patients undergoing HSG and RUG fluoroscopy procedures in Tanzania. The air kerma area product (KAP), fluoroscopy time, number of images, organ dose and effective dose to patients undergoing HSG and RUG procedures were obtained from four hospitals. The KAP was measured using a flat transmission ionization chamber, while the organ and effective doses were estimated using knowledge of the patient characteristics, patient related exposure parameters, geometry of examination, KAP and Monte Carlo calculations (PCXMC). The median values of KAP for the HSG and RUG were 2.2 Gy cm² and 3.3 Gy cm², respectively. The median organ doses in the present study for the ovaries, urinary bladder and uterus for the HSG procedures were 1.0 mGy, 4.0 mGy and 1.6 mGy, respectively, while those for the urinary bladder and testes in the RUG procedures were 3.4 mGy and 5.9 mGy, respectively. The median values of effective doses for the HSG and RUG procedures were 0.65 mSv and 0.59 mSv, respectively. The median values of effective dose per hospital for the HSG and RUG procedures had a range of 1.6-2.8 mSv and 1.9-5.6 mSv, respectively, while the overall differences between individual effective doses across the four hospitals varied by factors of up to 22.0 and 46.7, respectively, for the HSG and RUG procedures. The proposed diagnostic reference levels (DRLs) for the HSG and RUG were, for KAP, 2.8 Gy cm² and 3.9 Gy cm²; for fluoroscopy time, 0.8 min and 0.9 min; and for number of images, 5 and 4
Cassola, V. F.; Kramer, R.; Brayner, C.; Khoury, H. J.
2010-08-01
Does the posture of a patient have an effect on the organ and tissue absorbed doses caused by x-ray examinations? This study aims to find the answer to this question, based on Monte Carlo (MC) simulations of commonly performed x-ray examinations using adult phantoms modelled to represent humans in standing as well as in the supine posture. The recently published FASH (female adult mesh) and MASH (male adult mesh) phantoms have the standing posture. In a first step, both phantoms were updated with respect to their anatomy: glandular tissue was separated from adipose tissue in the breasts, visceral fat was separated from subcutaneous fat, cartilage was segmented in ears, nose and around the thyroid, and the mass of the right lung is now 15% greater than the left lung. The updated versions are called FASH2_sta and MASH2_sta (sta = standing). Taking into account the gravitational effects on organ position and fat distribution, supine versions of the FASH2 and the MASH2 phantoms have been developed in this study and called FASH2_sup and MASH2_sup. MC simulations of external whole-body exposure to monoenergetic photons and partial-body exposure to x-rays have been made with the standing and supine FASH2 and MASH2 phantoms. For external whole-body exposure for AP and PA projection with photon energies above 30 keV, the effective dose did not change by more than 5% when the posture changed from standing to supine or vice versa. Apart from that, the supine posture is quite rare in occupational radiation protection from whole-body exposure. However, in the x-ray diagnosis supine posture is frequently used for patients submitted to examinations. Changes of organ absorbed doses up to 60% were found for simulations of chest and abdomen radiographs if the posture changed from standing to supine or vice versa. A further increase of differences between posture-specific organ and tissue absorbed doses with increasing whole-body mass is to be expected.
Lee, Whanhee; Kim, Ho; Hwang, Sunghee; Zanobetti, Antonella; Schwartz, Joel D; Chung, Yeonseung
2017-09-07
Rich literature has reported that there exists a nonlinear association between temperature and mortality. One important feature in the temperature-mortality association is the minimum mortality temperature (MMT). The commonly used approach for estimating the MMT is to determine the MMT as the temperature at which mortality is minimized in the estimated temperature-mortality association curve. Also, an approximate bootstrap approach was proposed to calculate the standard errors and the confidence interval for the MMT. However, the statistical properties of these methods were not fully studied. Our research assessed the statistical properties of the previously proposed methods in various types of the temperature-mortality association. We also suggested an alternative approach to provide a point and an interval estimates for the MMT, which improve upon the previous approach if some prior knowledge is available on the MMT. We compare the previous and alternative methods through a simulation study and an application. In addition, as the MMT is often used as a reference temperature to calculate the cold- and heat-related relative risk (RR), we examined how the uncertainty in the MMT affects the estimation of the RRs. The previously proposed method of estimating the MMT as a point (indicated as Argmin2) may increase bias or mean squared error in some types of temperature-mortality association. The approximate bootstrap method to calculate the confidence interval (indicated as Empirical1) performs properly achieving near 95% coverage but the length can be unnecessarily extremely large in some types of the association. We showed that an alternative approach (indicated as Empirical2), which can be applied if some prior knowledge is available on the MMT, works better reducing the bias and the mean squared error in point estimation and achieving near 95% coverage while shortening the length of the interval estimates. The Monte Carlo simulation-based approach to estimate the
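The "Argmin" point estimate and the bootstrap interval discussed above can be sketched on synthetic data: fit a curve to log-mortality, take the grid temperature minimising it as the MMT, and resample to get an interval. This sketch uses a simple quadratic fit on invented Poisson data with a true MMT of 22 C, not the spline exposure-response models of the actual study:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic daily data: U-shaped log-mortality, minimum at 22 C
temp = rng.uniform(0.0, 35.0, size=2000)
deaths = rng.poisson(50.0 * np.exp(0.002 * (temp - 22.0) ** 2))

def estimate_mmt(t, d, grid=np.linspace(0.0, 35.0, 351)):
    """Fit a quadratic to log-mortality and return the grid
    temperature minimising it (the 'Argmin' estimator)."""
    coef = np.polyfit(t, np.log(d + 0.5), 2)
    return grid[np.argmin(np.polyval(coef, grid))]

mmt_hat = estimate_mmt(temp, deaths)

# Nonparametric bootstrap for an interval estimate of the MMT
boot = []
for _ in range(200):
    idx = rng.integers(0, len(temp), len(temp))
    boot.append(estimate_mmt(temp[idx], deaths[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
```

When the curve is flat near its minimum, the argmin is poorly identified and the bootstrap interval stretches, which is the pathology the alternative estimators in the abstract are designed to tame.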
SU-E-T-416: Experimental Evaluation of a Commercial GPU-Based Monte Carlo Dose Calculation Algorithm
Energy Technology Data Exchange (ETDEWEB)
Paudel, M R; Beachey, D J; Sarfehnia, A; Sahgal, A; Keller, B [Sunnybrook Odette Cancer Center, Toronto, ON (Canada); University of Toronto, Department of Radiation Oncology, Toronto, ON (Canada); Kim, A; Ahmad, S [Sunnybrook Odette Cancer Center, Toronto, ON (Canada)
2015-06-15
Purpose: A new commercial GPU-based Monte Carlo dose calculation algorithm (GPUMCD) developed by the vendor Elekta™ to be used in the Monaco Treatment Planning System (TPS) is capable of modeling dose for both a standard linear accelerator and for an Elekta MRI-linear accelerator (modeling magnetic field effects). We are evaluating this algorithm in two parts: commissioning the algorithm for an Elekta Agility linear accelerator (the focus of this work) and evaluating the algorithm's ability to model magnetic field effects for an MRI-linear accelerator. Methods: A beam model was developed in the Monaco TPS (v.5.09.06) using the commissioned beam data for a 6MV Agility linac. A heterogeneous phantom representing tumor-in-lung, lung, bone-in-tissue, and prosthetic was designed/built. Dose calculations in Monaco were done using the current clinical algorithm (XVMC) and the new GPUMCD algorithm (1 mm³ voxel size, 0.5% statistical uncertainty) and in the Pinnacle TPS using the collapsed cone convolution (CCC) algorithm. These were compared with the measured doses using an ionization chamber (A1SL) and Gafchromic EBT3 films for 2×2 cm², 5×5 cm², and 10×10 cm² field sizes. Results: The calculated central axis percentage depth doses (PDDs) in homogeneous solid water were within 2% compared to measurements for XVMC and GPUMCD. For tumor-in-lung and lung phantoms, doses calculated by all of the algorithms were within the experimental uncertainty of the measurements (±2% in the homogeneous phantom and ±3% for the tumor-in-lung or lung phantoms), except for the 2×2 cm² field size where only the CCC algorithm differs from film by 5% in the lung region. The analysis for the bone-in-tissue and prosthetic phantoms is ongoing. Conclusion: The new GPUMCD algorithm calculated dose comparable to both the XVMC algorithm and to measurements in both a homogeneous solid water medium and the heterogeneous phantom representing lung or tumor-in-lung for 2×2 cm
Mastrogiuseppe, M.; Hayes, A. G.; Poggiali, V.; Lunine, J. I.; Lorenz, R. D.; Seu, R.; Le Gall, A.; Notarnicola, C.; Mitchell, K. L.; Malaska, M.; Birch, S. P. D.
2018-01-01
Recently, the Cassini RADAR was used to sound hydrocarbon lakes and seas on Saturn's moon Titan. Since the initial discovery of echoes from the seabed of Ligeia Mare, the second largest liquid body on Titan, a dedicated radar processing chain has been developed to retrieve liquid depth and microwave absorptivity information from RADAR altimetry of Titan's lakes and seas. Herein, we apply this processing chain to altimetry data acquired over southern Ontario Lacus during Titan fly-by T49 in December 2008. The new signal processing chain adopts super resolution techniques and dedicated taper functions to reveal the presence of reflection from Ontario's lakebed. Unfortunately, the extracted waveforms from T49 are often distorted due to signal saturation, owing to the extraordinarily strong specular reflections from the smooth lake surface. This distortion is a function of the saturation level and can introduce artifacts, such as signal precursors, which complicate data interpretation. We use a radar altimetry simulator to retrieve information from the saturated bursts and determine the liquid depth and loss tangent of Ontario Lacus. Received waveforms are represented using a two-layer model, where Cassini raw radar data are simulated in order to reproduce the effects of receiver saturation. A Monte Carlo based approach along with a simulated waveform look-up table is used to retrieve parameters that are given as inputs to a parametric model which constrains radio absorption of Ontario Lacus and retrieves information about the dielectric properties of the liquid. We retrieve a maximum depth of 50 m along the radar transect and a best-fit specific attenuation of the liquid equal to 0.2 ± 0.09 dB m⁻¹ that, when converted into loss tangent, gives tan δ = (7 ± 3) × 10⁻⁵. When combined with laboratory measured cryogenic liquid alkane dielectric properties and the variable solubility of nitrogen in ethane-methane mixtures, the best-fit loss tangent is consistent with a
Energy Technology Data Exchange (ETDEWEB)
Cassola, V F; Kramer, R; Brayner, C; Khoury, H J, E-mail: rkramer@uol.com.b [Department of Nuclear Energy, Federal University of Pernambuco, Avenida Prof. Luiz Freire, 1000, CEP 50740-540, Recife (Brazil)
2010-08-07
Fully Depleted Charge-Coupled Devices
Energy Technology Data Exchange (ETDEWEB)
Holland, Stephen E.
2006-05-15
We have developed fully depleted, back-illuminated CCDs that build upon earlier research and development efforts directed towards technology development of silicon-strip detectors used in high-energy-physics experiments. The CCDs are fabricated on the same type of high-resistivity, float-zone-refined silicon that is used for strip detectors. The use of high-resistivity substrates allows for thick depletion regions, on the order of 200-300 um, with corresponding high detection efficiency for near-infrared and soft x-ray photons. We compare the fully depleted CCD to the p-i-n diode upon which it is based, and describe the use of fully depleted CCDs in astronomical and x-ray imaging applications.
Energy Technology Data Exchange (ETDEWEB)
Wuerl, Matthias
2016-08-01
Matthias Wuerl presents two essential steps to implement offline PET monitoring of proton dose delivery at a clinical facility, namely the setting up of an accurate Monte Carlo model of the clinical beamline and the experimental validation of positron emitter production cross-sections. In the first part, the field size dependence of the dose output is described for scanned proton beams. Both the Monte Carlo and an analytical computational beam model were able to accurately predict target dose, while the latter tends to overestimate dose in normal tissue. In the second part, the author presents PET measurements of different phantom materials, which were activated by the proton beam. The results indicate that for an irradiation with a high number of protons for the sake of good statistics, dead time losses of the PET scanner may become important and lead to an underestimation of positron-emitter production yields.
International Nuclear Information System (INIS)
Noh, Yeelyong; Chang, Kwangpil; Seo, Yutaek; Chang, Daejun
2014-01-01
This study proposes a new methodology that combines dynamic process simulation (DPS) and Monte Carlo simulation (MCS) to determine the design pressure of fuel storage tanks on LNG-fueled ships. Because the pressure of such tanks varies with time, DPS is employed to predict the pressure profile. Though equipment failure and subsequent repair affect transient pressure development, it is difficult to implement these features directly in the process simulation due to the randomness of the failure. To predict the pressure behavior realistically, MCS is combined with DPS. In MCS, discrete events are generated to create a lifetime scenario for a system. The combination of MCS with long-term DPS reveals the frequency of the exceedance pressure. The exceedance curve of the pressure provides risk-based information for determining the design pressure based on risk acceptance criteria, which may vary with different points of view. - Highlights: • The realistic operation scenario of the LNG FGS system is estimated by MCS. • In repeated MCS trials, the availability of the FGS system is evaluated. • The realistic pressure profile is obtained by the proposed methodology. • The exceedance curve provides risk-based information for determining design pressure
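The DPS/MCS coupling described in this record can be sketched as a toy Monte Carlo loop. Everything below (the one-line "pressure model", failure rate, lifetime length, candidate pressure levels) is invented for illustration only; the actual study runs a full dynamic process simulation of the LNG fuel gas supply system in place of `simulate_lifetime`.

```python
import random

def simulate_lifetime(rng, years=20, failure_rate=0.1, p_normal=4.0, p_max_rise=3.5):
    """Toy stand-in for the dynamic process simulation (DPS) step: return
    the peak tank pressure (bar) seen over one simulated ship lifetime,
    with randomly sampled equipment failures driving pressure excursions."""
    peak = p_normal
    for _ in range(years):
        if rng.random() < failure_rate:   # a failure event occurs this year
            peak = max(peak, p_normal + rng.uniform(0.0, p_max_rise))
    return peak

def exceedance_curve(trials=20000, levels=(4.5, 5.0, 5.5, 6.0, 6.5, 7.0), seed=0):
    """Monte Carlo step: repeat lifetime simulations and estimate how often
    each candidate design pressure would be exceeded."""
    rng = random.Random(seed)
    peaks = [simulate_lifetime(rng) for _ in range(trials)]
    return {p: sum(x > p for x in peaks) / trials for p in levels}

curve = exceedance_curve()
# The design pressure is then chosen where the exceedance frequency
# drops below the applicable risk acceptance criterion.
```

By construction the estimated exceedance frequency decreases monotonically with the candidate pressure level, which is what makes the curve usable against a risk acceptance criterion.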
Monte Carlo simulation of ordering in fcc-based stoichiometric A3B and AB solid solutions
Czech Academy of Sciences Publication Activity Database
Buršík, Jiří
2002-01-01
Roč. 324, 1-2 (2002), s. 16-22 ISSN 0921-5093. [International Symposium on Plasticity of Metals and Alloys /8./. Praha, 04.09.2000-07.09.2000] R&D Projects: GA ČR GA106/99/1176 Institutional research plan: CEZ:AV0Z2041904 Keywords : short range order * pairwise interaction * Monte Carlo simulation Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 1.107, year: 2002
Energy Technology Data Exchange (ETDEWEB)
Del Nero, Renata Aline; Yoriyaz, Hélio [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Nakandakari, Marcos Vinicius Nakaoka, E-mail: hyoriyaz@ipen.br, E-mail: marcos.sake@gmail.com [Hospital Beneficência Portuguesa de São Paulo, SP (Brazil)
2017-07-01
The Monte Carlo method for radiation transport has been adapted for medical physics applications. More specifically, it has received increasing attention in clinical treatment planning with the development of more efficient computer simulation techniques. In Monte Carlo modeling of linear accelerators, the phase space data file (phsp) is widely used. However, obtaining precise results requires detailed information about the accelerator's head, and the supplier commonly does not provide all the necessary data. An alternative to the phsp is the Virtual Source Model (VSM). This alternative approach presents many advantages for clinical Monte Carlo applications. It is the most efficient method for particle generation and can provide an accuracy similar to that obtained when the phsp is used. This research proposes a VSM simulation with the use of a Virtual Flattening Filter (VFF) for the calculation of profiles and percent depth doses. Two different sizes of open fields (40 x 40 cm² and 40√2 x 40√2 cm²) were used and two different source-to-surface distances (SSD) were applied: the standard 100 cm and a custom SSD of 370 cm, which is applied in radiotherapy treatments of total body irradiation. The data generated by the simulation were analyzed and compared with experimental data to validate the VSM. The current model is easy to build and test. (author)
International Nuclear Information System (INIS)
Kasselmann, S.; Scholthaus, S.; Rössel, C.; Allelein, H.-J.
2014-01-01
The problem of calculating the time-dependent amounts of nuclides in a coupled system, especially when exposed to a neutron flux, is well known and has been addressed by a number of computer codes. These codes cover a broad spectrum of applications and are based on comprehensive validation work, and they are therefore justifiably renowned among their users. However, due to their long development history, they lack a modern interface, which impedes a fast and robust internal coupling to other codes applied in the field of nuclear reactor physics. Therefore a project has been initiated to develop a new object-oriented nuclide transmutation code. It comprises an innovative solver based on graph theory, which exploits the topology of nuclide chains. This makes it possible to always deal with the smallest nuclide system relevant to the problem of interest. Highest priority has been given to the existence of a generic software interface as well as easy handling by making use of XML files for input and output. In this paper we report on the status of the code development and present first benchmark results, which prove the applicability of the selected approach. (author)
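The two ideas in this record — exploiting the graph topology of nuclide chains to isolate the smallest relevant system, and then solving that system — can be sketched minimally. The graph, decay constants and inventory below are invented; a real transmutation code also handles branching ratios and neutron-induced reactions, not just a linear decay chain with distinct decay constants.

```python
import math

def reachable(graph, start):
    """Restrict the nuclide graph to what is reachable from `start`, so
    that only the smallest relevant system needs to be solved."""
    seen, stack = [], [start]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.append(n)
            stack.extend(graph.get(n, []))
    return seen

def bateman(lambdas, n0, t):
    """Analytic Bateman solution N_i(t) for a linear decay chain
    A1 -> A2 -> ... with distinct decay constants, starting from n0
    atoms of A1 and none of the daughters."""
    out = []
    for i in range(len(lambdas)):
        lam = lambdas[: i + 1]
        prefac = n0 * math.prod(lam[:-1])          # lambda_1 * ... * lambda_(i-1)
        out.append(prefac * sum(
            math.exp(-lj * t)
            / math.prod(lk - lj for lk in lam if lk != lj)
            for lj in lam))
    return out

# Invented chain A -> B -> C; X -> Y is present in the data but irrelevant to A
graph = {"A": ["B"], "B": ["C"], "X": ["Y"]}
chain = bateman_input = reachable(graph, "A")       # ['A', 'B', 'C']
n = bateman([0.10, 0.05, 0.01], 1.0e6, t=10.0)      # atoms of A, B, C at t = 10 s
```

The point of the `reachable` step is exactly what the abstract claims for the graph-based solver: nuclides outside the chain of interest (here X and Y) never enter the linear system that is solved.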
International Nuclear Information System (INIS)
Leege, P.F.A. de; Li, J.M.; Kloosterman, J.L.
1995-04-01
This users' guide describes the functionality and the required input of the SAS6 code sequence, which can be used to perform burnup and criticality calculations based on functional modules from the SCALE-4 code system and libraries. The input file for the SAS6 control module is very similar to that of the other SAS and CSAS control modules available in the SCALE-4 system. In particular, the geometry input of SAS6 is quite similar to that of SAS2H. However, the functionality of SAS6 differs from that of SAS2H. The geometry of the reactor lattice can be treated in more detail because use is made of the two-dimensional lattice code WIMS-D/IRI (an adapted version of WIMS-D/4) instead of the one-dimensional transport code XSDRNPM-S. Also, the neutron absorption and production rates of nuclides not explicitly specified in the input can be accounted for by six pseudo nuclides. Furthermore, the centre pin can be subdivided into at most 10 zones to improve the burnup calculation of the centre pin and to obtain more accurate k-infinite values for the assembly. The time step specification is also more flexible than in the SAS2H sequence. (orig.)
Loladze, Irakli
2014-05-07
Mineral malnutrition stemming from undiversified plant-based diets is a top global challenge. In C3 plants (e.g., rice, wheat), elevated concentrations of atmospheric carbon dioxide (eCO2) reduce protein and nitrogen concentrations, and can increase the total non-structural carbohydrates (TNC; mainly starch and sugars). However, contradictory findings have obscured the effect of eCO2 on the ionome, the mineral and trace-element composition of plants. Consequently, CO2-induced shifts in plant quality have been ignored in the estimation of the impact of global change on humans. This study shows that eCO2 reduces the overall mineral concentrations (-8%, 95% confidence interval: -9.1 to -6.9) and increases the ratio of carbon to minerals in C3 plants. The meta-analysis of 7761 observations, including 2264 observations at state-of-the-art FACE centers, covers 130 species/cultivars. The attained statistical power reveals that the shift is systemic and global. Its potential to exacerbate the prevalence of 'hidden hunger' and obesity is discussed. DOI: http://dx.doi.org/10.7554/eLife.02245.001. Copyright © 2014, Loladze.
Strikwold, Marije; Spenkelink, Bert; Woutersen, Ruud A; Rietjens, Ivonne M C M; Punt, Ans
2017-06-01
With our recently developed in vitro physiologically based kinetic (PBK) modelling approach, we could extrapolate in vitro toxicity data to human toxicity values applying PBK-based reverse dosimetry. Ideally, information on kinetic differences among human individuals within a population should be considered. In the present study, we demonstrated a modelling approach that integrated in vitro toxicity data, PBK modelling and Monte Carlo simulations to obtain insight into interindividual human kinetic variation and to derive chemical-specific adjustment factors (CSAFs) for phenol-induced developmental toxicity. The present study revealed that UGT1A6 is the primary enzyme responsible for the glucuronidation of phenol in humans, followed by UGT1A9. Monte Carlo simulations were performed taking into account interindividual variation in glucuronidation by these specific UGTs and in the oral absorption coefficient. Linking Monte Carlo simulations with PBK modelling, the population variability in the maximum plasma concentration of phenol could be predicted. This approach provided a CSAF for interindividual variation of 2.0, which covers the 99th percentile of the population and is lower than the default safety factor of 3.16 for interindividual human kinetic differences. Dividing the dose-response curve data obtained with in vitro PBK-based reverse dosimetry by the CSAF provided a dose-response curve that reflects the consequences of interindividual variability in phenol kinetics for the developmental toxicity of phenol. The strength of the presented approach is that it provides insight into the effect of interindividual variation in kinetics on phenol-induced developmental toxicity, based on only in vitro and in silico testing. © The Author 2017. Published by Oxford University Press on behalf of the Society of Toxicology. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
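The Monte Carlo step of such a PBK-based analysis — sample kinetic parameters per individual, push each sample through the model, and take a percentile-to-median ratio as the CSAF — can be sketched with a deliberately crude surrogate. The one-line `peak_concentration` function and both lognormal distributions are invented stand-ins, not the authors' PBK model or fitted UGT parameters.

```python
import math
import random
import statistics

def peak_concentration(dose, k_abs, clearance):
    """Toy surrogate for the PBK model output: peak plasma concentration
    rises with absorption and falls with metabolic clearance. A real PBK
    model solves compartment mass balances instead."""
    return dose * k_abs / (k_abs + clearance)

rng = random.Random(1)
cmax = []
for _ in range(10000):
    # Interindividual variation: clearance (UGT activity) and oral
    # absorption sampled from lognormal distributions (invented parameters)
    cl = rng.lognormvariate(math.log(10.0), 0.4)
    ka = rng.lognormvariate(math.log(1.0), 0.2)
    cmax.append(peak_concentration(100.0, ka, cl))

cmax.sort()
median = statistics.median(cmax)
p99 = cmax[int(0.99 * len(cmax))]
csaf = p99 / median   # 99th-percentile individual relative to the median one
```

The resulting ratio plays the role of the chemical-specific adjustment factor: it replaces the generic 3.16 default with a value derived from the simulated spread in kinetics.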
DEFF Research Database (Denmark)
Klösgen, Beate; Bruun, Sara; Hansen, Søren
The presence of a depletion layer of water along extended hydrophobic interfaces, and a possibly related formation of nanobubbles, is an ongoing discussion. The phenomenon was initially reported when we, years ago, chose thick films (~300-400 Å) of polystyrene as cushions between a crystalline ... carrier and biomimetic membranes deposited thereupon and exposed to bulk water. While monitoring the sequential build-up of the sandwiched composite structure by continuous neutron reflectivity experiments, the formation of an unexpected additional layer was detected (1). Located at the polystyrene surface ... in between the polymer cushion and bulk water, the layer was attributed to water of reduced density and was called "depletion layer". Impurities or preparative artefacts were excluded as its origin. Later on, the formation of nanobubbles from this vapour-like water phase was initiated by tipping the surface ...
Capital expenditure and depletion
International Nuclear Information System (INIS)
Rech, O.; Saniere, A.
2003-01-01
In the future, the increase in oil demand will be covered for the most part by non-conventional oils, but conventional sources will continue to represent a preponderant share of the world oil supply. Their depletion represents a complex challenge involving technological, economic and political factors. At the same time, there is reason for concern about the decrease in exploration budgets at the major oil companies. (author)
Depleted uranium: A DOE management guide
International Nuclear Information System (INIS)
1995-10-01
The U.S. Department of Energy (DOE) has a management challenge and financial liability in the form of 50,000 cylinders containing 555,000 metric tons of depleted uranium hexafluoride (UF6) that are stored at the gaseous diffusion plants. The annual storage and maintenance cost is approximately $10 million. This report summarizes several studies undertaken by the DOE Office of Technology Development (OTD) to evaluate options for long-term depleted uranium management. Based on studies conducted to date, the most likely use of the depleted uranium is for shielding of spent nuclear fuel (SNF) or vitrified high-level waste (HLW) containers. The alternative to finding a use for the depleted uranium is disposal as a radioactive waste. Estimated disposal costs, utilizing existing technologies, range between $3.8 and $11.3 billion, depending on factors such as the applicability of the Resource Conservation and Recovery Act (RCRA) and the location of the disposal site. The cost of recycling the depleted uranium in concrete-based shielding in SNF/HLW containers, although substantial, is comparable to or less than the cost of disposal. Consequently, the case can be made that if DOE invests in developing depleted uranium shielded containers instead of disposal, a long-term solution to the UF6 problem is attained at comparable or lower cost than disposal as a waste. Two concepts for depleted uranium storage casks were considered in these studies. The first is based on standard fabrication concepts previously developed for depleted uranium metal. The second converts the UF6 to an oxide aggregate that is used in concrete to make dry storage casks.
Fully Depleted Charge-Coupled Devices
International Nuclear Information System (INIS)
Holland, Stephen E.
2006-01-01
We have developed fully depleted, back-illuminated CCDs that build upon earlier research and development efforts directed towards technology development of silicon-strip detectors used in high-energy-physics experiments. The CCDs are fabricated on the same type of high-resistivity, float-zone-refined silicon that is used for strip detectors. The use of high-resistivity substrates allows for thick depletion regions, on the order of 200-300 um, with corresponding high detection efficiency for near-infrared and soft x-ray photons. We compare the fully depleted CCD to the p-i-n diode upon which it is based, and describe the use of fully depleted CCDs in astronomical and x-ray imaging applications
Guberina, Nika; Suntharalingam, Saravanabavaan; Naßenstein, Kai; Forsting, Michael; Theysohn, Jens; Wetter, Axel; Ringelstein, Adrian
2018-03-01
Background: The importance of monitoring the radiation dose received by the human body during computed tomography (CT) examinations is not negligible. Several dose-monitoring software tools have emerged in order to monitor and control dose distribution during CT examinations. Some software tools incorporate Monte Carlo simulation (MCS) and allow calculation of effective dose and organ dose apart from standard dose descriptors. Purpose: To verify the results of a dose-monitoring software tool based on MCS in the assessment of effective and organ doses in thoracic CT protocols. Material and Methods: Phantom measurements were performed with thermoluminescent dosimeters (TLD LiF:Mg,Ti) using two different thoracic CT protocols of the clinical routine: (I) standard CT thorax (CTT); and (II) CTT with high-pitch mode, P = 3.2. Radiation doses estimated with MCS and measured with TLDs were compared. Results: Inter-modality comparison showed an excellent correlation between MCS-simulated and TLD-measured doses ((I) after localizer correction r = 0.81; (II) r = 0.87). The following effective and organ doses were determined: (I) (a) effective dose = MCS 1.2 mSv, TLD 1.3 mSv; (b) thyroid gland = MCS 2.8 mGy, TLD 2.5 mGy; (c) thymus = MCS 3.1 mGy, TLD 2.5 mGy; (d) bone marrow = MCS 0.8 mGy, TLD 0.9 mGy; (e) breast = MCS 2.5 mGy, TLD 2.2 mGy; (f) lung = MCS 2.8 mGy, TLD 2.7 mGy; (II) (a) effective dose = MCS 0.6 mSv, TLD 0.7 mSv; (b) thyroid gland = MCS 1.4 mGy, TLD 1.8 mGy; (c) thymus = MCS 1.4 mGy, TLD 1.8 mGy; (d) bone marrow = MCS 0.4 mGy, TLD 0.5 mGy; (e) breast = MCS 1.1 mGy, TLD 1.1 mGy; (f) lung = MCS 1.2 mGy, TLD 1.3 mGy. Conclusion: Overall, in thoracic CT protocols, organ doses simulated by the dose-monitoring software tool were consistent with those measured by TLDs. Despite some challenges, the dose-monitoring software was capable of an accurate dose calculation.
Ozone-depleting Substances (ODS)
U.S. Environmental Protection Agency — This site includes all of the ozone-depleting substances (ODS) recognized by the Montreal Protocol. The data include ozone depletion potentials (ODP), global warming...
Energy Technology Data Exchange (ETDEWEB)
García, Marcos Fernández; Sánchez, Javier González; Echeverría, Richard Jaramillo [Instituto de Física de Cantabria (CSIC-UC), Avda. los Castros s/n, E-39005 Santander (Spain)]; Moll, Michael [CERN, Organisation européenne pour la recherche nucléaire, CH-1211 Genève 23 (Switzerland)]; Santos, Raúl Montero [SGIker Laser Facility, UPV/EHU, Sarriena, s/n - 48940 Leioa-Bizkaia (Spain)]; Moya, David [Instituto de Física de Cantabria (CSIC-UC), Avda. los Castros s/n, E-39005 Santander (Spain)]; Pinto, Rogelio Palomo [Departamento de Ingeniería Electrónica, Escuela Superior de Ingenieros, Universidad de Sevilla (Spain)]; Vila, Iván [Instituto de Física de Cantabria (CSIC-UC), Avda. los Castros s/n, E-39005 Santander (Spain)]
2017-02-11
For the first time, the deep n-well (DNW) depletion space of a High Voltage CMOS sensor has been characterized using a Transient Current Technique based on the simultaneous absorption of two photons. This novel approach has made it possible to resolve the DNW implant boundaries and therefore to accurately determine the real depleted volume and the effective doping concentration of the substrate. The unprecedented spatial resolution of this new method comes from the fact that measurable free-carrier generation in two-photon mode only occurs in a micrometric-scale voxel around the focus of the beam. Real three-dimensional spatial resolution is achieved by scanning the beam focus within the sample.
García, Marcos Fernández; Echeverría, Richard Jaramillo; Moll, Michael; Santos, Raúl Montero; Moya, David; Pinto, Rogelio Palomo; Vila, Iván
2016-01-01
For the first time, the deep n-well (DNW) depletion space of a High Voltage CMOS sensor has been characterized using a Transient Current Technique based on the simultaneous absorption of two photons. This novel approach has made it possible to resolve the DNW implant boundaries and therefore to accurately determine the real depleted volume and the effective doping concentration of the substrate. The unprecedented spatial resolution of this new method comes from the fact that measurable free-carrier generation in two-photon mode only occurs in a micrometric-scale voxel around the focus of the beam. Real three-dimensional spatial resolution is achieved by scanning the beam focus within the sample.
Status of Monte Carlo dose planning
International Nuclear Information System (INIS)
Mackie, T.R.
1995-01-01
Monte Carlo simulation will become increasingly important for treatment planning for radiotherapy. The EGS4 Monte Carlo system, a general particle transport system, has been used most often for simulation tasks in radiotherapy, although ETRAN/ITS and MCNP have also been used. Monte Carlo treatment planning requires that the beam characteristics, such as the energy spectrum and angular distribution of particles emerging from clinical accelerators, be accurately represented. An EGS4 Monte Carlo code, called BEAM, was developed by the OMEGA Project (a collaboration between the University of Wisconsin and the National Research Council of Canada) to transport particles through linear accelerator heads. This information was used as input to simulate the passage of particles through CT-based representations of phantoms or patients using both an EGS4 code (DOSXYZ) and the macro Monte Carlo (MMC) method. Monte Carlo computed 3-D electron beam dose distributions compare well to measurements obtained in simple and complex heterogeneous phantoms. The present drawback with most Monte Carlo codes is that simulation times are longer than those of most non-stochastic dose computation algorithms. This is especially true for photon dose planning. In the future, dedicated Monte Carlo treatment planning systems like Peregrine (from Lawrence Livermore National Laboratory), which will be capable of computing the dose from all beam types, or the Macro Monte Carlo (MMC) system, which is an order of magnitude faster than other algorithms, may dominate the field.
Borrmann, Robin
2010-01-01
This article examines whether the use of Depleted Uranium (DU) munitions can be considered illegal under current public international law. The analysis covers the law of arms control and focuses in particular on international humanitarian law. The article argues that DU ammunition cannot be addressed adequately under existing treaty based weapon bans, such as the Chemical Weapons Convention, due to the fact that DU does not meet the criteria required to trigger the applicability of those treaties. Furthermore, it is argued that continuing uncertainties regarding the effects of DU munitions impedes a reliable review of the legality of their use under various principles of international law, including the prohibition on employing indiscriminate weapons; the prohibition on weapons that are intended, or may be expected, to cause widespread, long-term and severe damage to the natural environment; and the prohibition on causing unnecessary suffering or superfluous injury. All of these principles require complete knowledge of the effects of the weapon in question. Nevertheless, the author argues that the same uncertainty places restrictions on the use of DU under the precautionary principle. The paper concludes with an examination of whether or not there is a need for--and if so whether there is a possibility of achieving--a Convention that comprehensively outlaws the use, transfer and stockpiling of DU weapons, as proposed by some non-governmental organisations (NGOs).
ALEPH 1.1.2: A Monte Carlo burn-up code
International Nuclear Information System (INIS)
Haeck, W.; Verboomen, B.
2006-01-01
In the last 40 years, Monte Carlo particle transport has been applied to a multitude of problems, such as shielding, medical applications and various types of nuclear reactors. The success of the Monte Carlo method is mainly based on its broad application area, on its ability to handle nuclear data not only in its most basic but also in its most complex form (namely continuous-energy cross sections, complex interaction laws, detailed energy-angle correlations, multi-particle physics, etc.), and on its capability of modeling geometries from simple 1D to complex 3D. There is also a current trend in Monte Carlo applications toward highly detailed 3D calculations (for instance voxel-based medical applications), something for which deterministic codes are neither suited nor competitive in computational time and precision. Apart from all these fields where Monte Carlo particle transport has been applied successfully, there is at least one area where Monte Carlo has had limited success, namely burn-up and activation calculations, where the time parameter is added to the problem. The concept of Monte Carlo burn-up consists of coupling a Monte Carlo code to a burn-up module to improve the accuracy of depletion and activation calculations. For every time step the Monte Carlo code provides reaction rates to the burn-up module, which returns new material compositions to the Monte Carlo code. So if static Monte Carlo particle transport is slow, then Monte Carlo particle transport with burn-up will be even slower, as calculations have to be performed for every time step in the problem. The computational cost of performing accurate Monte Carlo calculations is, however, continuously reduced thanks to improvements made in the basic Monte Carlo algorithms, to the development of variance reduction techniques, and to developments in computer architecture (more powerful processors, and the so-called brute-force approach through parallel processors and networked systems).
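The transport/depletion coupling loop described in this record can be sketched in a few lines. The one-group cross sections, flux, inventory and pure-exponential depletion below are invented simplifications: a real coupled code (such as the one described) tallies reaction rates from the Monte Carlo transport solution and solves the full depletion/activation system, including production of daughters.

```python
import math

def transport_solve(composition, flux=1.0e14):
    """Stand-in for the Monte Carlo transport step: return an absorption
    rate per atom (1/s) for each nuclide from toy one-group cross
    sections (cm^2, invented values)."""
    sigma = {"U235": 680.0e-24, "U238": 2.7e-24}
    return {n: flux * sigma.get(n, 0.0) for n in composition}

def depletion_solve(composition, rates, dt):
    """Stand-in for the burn-up module: pure exponential depletion over
    the time step, ignoring production of daughter nuclides."""
    return {n: amt * math.exp(-rates[n] * dt) for n, amt in composition.items()}

composition = {"U235": 1.0e24, "U238": 9.0e24}   # toy atom inventory
for _ in range(5):                                # one transport solve per time step
    rates = transport_solve(composition)          # transport -> reaction rates
    composition = depletion_solve(composition, rates, dt=86400.0)
```

The loop makes the cost argument of the abstract concrete: every time step requires its own (expensive) transport solve, which is why Monte Carlo burn-up is so much slower than a single static transport calculation.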
Kolb, Max; Pereira de Oliveira, Luis; Verstraete, Jan
2013-03-01
A novel kinetic modeling strategy for refining processes for heavy petroleum fractions is proposed. The approach makes it possible to overcome the notorious lack of molecular detail in describing petroleum fractions. The simulation of the reaction process consists of a two-step procedure. In the first step, a mixture of molecules representing the feedstock of the process is generated via two successive molecular reconstruction algorithms. The first algorithm, termed stochastic reconstruction, generates an equimolar set of molecules with the appropriate analytical properties via a Monte Carlo method. The second algorithm, called reconstruction by entropy maximization, adjusts the molar fractions of the generated molecules in order to further improve the properties of the mixture. In the second step, a kinetic Monte Carlo method is used to simulate the effect of the refining reactions on the previously generated set of molecules. The full two-step methodology has been applied to the hydrotreating of LCO gas oils and to the hydrocracking of vacuum residues of different origins (e.g. Athabasca).
Energy Technology Data Exchange (ETDEWEB)
Taleei, R; Qin, N; Jiang, S [UT Southwestern Medical Center, Dallas, TX (United States); Peeler, C [UT MD Anderson Cancer Center, Houston, TX (United States); Jia, X [The University of Texas Southwestern Medical Ctr, Dallas, TX (United States)
2016-06-15
Purpose: Biological treatment plan optimization is of great interest for proton therapy. It requires extensive Monte Carlo (MC) simulations to compute physical dose and biological quantities. Recently, a gPMC package was developed for rapid MC dose calculations on a GPU platform. This work investigated its suitability for proton therapy biological optimization in terms of accuracy and efficiency. Methods: We performed simulations of a proton pencil beam with energies of 75, 150 and 225 MeV in a homogeneous water phantom using gPMC and FLUKA. Physical dose and energy spectra for each ion type on the central beam axis were scored. Relative Biological Effectiveness (RBE) was calculated using repair-misrepair-fixation model. Microdosimetry calculations were performed using Monte Carlo Damage Simulation (MCDS). Results: Ranges computed by the two codes agreed within 1 mm. Physical dose difference was less than 2.5 % at the Bragg peak. RBE-weighted dose agreed within 5 % at the Bragg peak. Differences in microdosimetric quantities such as dose average lineal energy transfer and specific energy were < 10%. The simulation time per source particle with FLUKA was 0.0018 sec, while gPMC was ∼ 600 times faster. Conclusion: Physical dose computed by FLUKA and gPMC were in a good agreement. The RBE differences along the central axis were small, and RBE-weighted dose difference was found to be acceptable. The combined accuracy and efficiency makes gPMC suitable for proton therapy biological optimization.
Fast sequential Monte Carlo methods for counting and optimization
Rubinstein, Reuven Y; Vaisman, Radislav
2013-01-01
A comprehensive account of the theory and application of Monte Carlo methods. Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the
Directory of Open Access Journals (Sweden)
J. D. Rösevall
2007-01-01
The objective of this study is to demonstrate how polar ozone depletion can be mapped and quantified by assimilating ozone data from satellites into the wind-driven transport model DIAMOND (Dynamical Isentropic Assimilation Model for OdiN Data). By assimilating a large set of satellite data into a transport model, ozone fields can be built up that are less noisy than the individual satellite ozone profiles. The transported fields can subsequently be compared to later sets of incoming satellite data so that the rates and geographical distribution of ozone depletion can be determined. By tracing the amounts of solar irradiation received by different air parcels in a transport model, it is furthermore possible to study the photolytic reactions that destroy ozone. In this study, the destruction of ozone that took place in the Antarctic winter of 2003 and in the Arctic winter of 2002/2003 has been examined by assimilating ozone data from the ENVISAT/MIPAS and Odin/SMR satellite instruments. Large-scale depletion of ozone was observed in the Antarctic polar vortex of 2003 when sunlight returned after the polar night. By mid-October, ENVISAT/MIPAS data indicate vortex ozone depletion in the ranges 80–100% and 70–90% on the 425 and 475 K potential temperature levels respectively, while the Odin/SMR data indicate depletion in the ranges 70–90% and 50–70%. The discrepancy between the two instruments has been attributed to systematic errors in the Odin/SMR data. Assimilated fields of ENVISAT/MIPAS data indicate ozone depletion in the range 10–20% on the 475 K potential temperature level (~19 km altitude) in the central regions of the 2002/2003 Arctic polar vortex. Assimilated fields of Odin/SMR data, on the other hand, indicate ozone depletion in the range 20–30%.
Consequences of biome depletion
International Nuclear Information System (INIS)
Salvucci, Emiliano
2013-01-01
The human microbiome is an integral part of the superorganism together with its host, and they have co-evolved since the early days of the existence of the human species. The modification of the microbiome as a result of changes in the food and social habits of human beings throughout their life history has led to the emergence of many diseases. In contrast with the Darwinian view of nature as selfishness and competition, new holistic approaches are rising. Under these views, the reconstitution of the microbiome comes out as a fundamental therapy for emerging diseases related to biome depletion.
Dunn, William L
2012-01-01
Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle problem" ...
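The record above mentions the famous Buffon's needle problem, the classic introductory Monte Carlo example: a needle of length l dropped on a floor ruled with parallel lines a distance d apart (l ≤ d) crosses a line with probability 2l/(πd), which can be inverted to estimate π. A minimal sketch:

```python
import math
import random

def buffon_pi(trials=200000, needle=1.0, spacing=2.0, seed=42):
    """Estimate pi via Buffon's needle: the crossing probability for a
    needle of length l on lines spaced d apart (l <= d) is 2l/(pi*d)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        y = rng.uniform(0.0, spacing / 2.0)       # centre distance to nearest line
        theta = rng.uniform(0.0, math.pi / 2.0)   # acute angle to the lines
        if y <= (needle / 2.0) * math.sin(theta):
            hits += 1
    return 2.0 * needle * trials / (spacing * hits)

pi_estimate = buffon_pi()   # close to math.pi for large trial counts
```

As with all Monte Carlo estimators, the error shrinks only as the inverse square root of the number of trials, which is why the estimate is crude even after hundreds of thousands of drops.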
Directory of Open Access Journals (Sweden)
Bardenet Rémi
2013-07-01
Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that allow one to compute these integrals numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among which rejection sampling, importance sampling and Markov chain Monte Carlo (MCMC) methods. We give intuition on the theoretical justification of the algorithms as well as practical advice, trying to relate both. We discuss the application of Monte Carlo in experimental physics, and point to landmarks in the literature for the curious reader.
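The MCMC family reviewed above can be illustrated with a minimal random-walk Metropolis sketch. This is a generic textbook version, not the authors' code; the target (a standard normal "posterior") and all names are illustrative:

```python
import math
import random

def metropolis(log_target, x0, n_samples, step=1.0, rng=random):
    """Random-walk Metropolis: the long-run histogram of the returned
    samples follows the density exp(log_target)."""
    x, lp = x0, log_target(x0)
    samples = []
    for _ in range(n_samples):
        xp = x + rng.gauss(0.0, step)          # symmetric proposal
        lpp = log_target(xp)
        if math.log(rng.random()) < lpp - lp:  # accept with prob min(1, ratio)
            x, lp = xp, lpp
        samples.append(x)                      # rejected moves repeat x
    return samples

# Posterior expectation E[x] under an assumed standard-normal posterior
random.seed(0)
draws = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_samples=20000)
posterior_mean = sum(draws) / len(draws)
```

The chain is correlated, so the effective sample size is smaller than `n_samples`; in practice one would also discard a burn-in prefix.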
A GPU-based large-scale Monte Carlo simulation method for systems with long-range interactions
Liang, Yihao; Xing, Xiangjun; Li, Yaohang
2017-06-01
In this work we present an efficient implementation of Canonical Monte Carlo simulation for Coulomb many body systems on graphics processing units (GPU). Our method takes advantage of the GPU Single Instruction, Multiple Data (SIMD) architectures, and adopts the sequential updating scheme of Metropolis algorithm. It makes no approximation in the computation of energy, and reaches a remarkable 440-fold speedup, compared with the serial implementation on CPU. We further use this method to simulate primitive model electrolytes, and measure very precisely all ion-ion pair correlation functions at high concentrations. From these data, we extract the renormalized Debye length, renormalized valences of constituent ions, and renormalized dielectric constants. These results demonstrate unequivocally physics beyond the classical Poisson-Boltzmann theory.
Energy Technology Data Exchange (ETDEWEB)
Meshkian, Mohsen, E-mail: mohsenm@ethz.ch
2016-02-01
Neutron radiography is rapidly expanding as a method for the non-destructive screening of materials. There are various parameters to be studied for optimising imaging screens and image quality for different fast-neutron radiography systems. Herein, a Geant4 Monte Carlo simulation is employed to evaluate the response of a fast-neutron radiography system using a {sup 252}Cf neutron source. The neutron radiography system is comprised of a moderator as the neutron-to-proton converter with suspended silver-activated zinc sulphide (ZnS(Ag)) as the phosphor material. The neutron-induced protons deposit energy in the phosphor, which consequently emits scintillation light. Further, radiographs are obtained by simulating the overall radiography system including source and sample. Two different standard samples are used to evaluate the quality of the radiographs.
Asadi, Somayeh; Vaez-zadeh, Mehdi; Masoudi, S Farhad; Rahmani, Faezeh; Knaup, Courtney; Meigooni, Ali S
2015-09-08
The effects of gold nanoparticles (GNPs) on 125I brachytherapy dose enhancement in choroidal melanoma are examined using the Monte Carlo simulation technique. Usually, Monte Carlo ophthalmic brachytherapy dosimetry is performed in a water phantom. Here, however, the composition of the human eye has been considered instead of water. Both human eye and water phantoms have been simulated with the MCNP5 code. These simulations were performed for a fully loaded 16 mm COMS eye plaque containing 13 125I seeds. The dose delivered to the tumor and normal tissues has been calculated in both phantoms with and without GNPs. Normally, the radiation therapy of cancer patients is designed to deliver a required dose to the tumor while sparing the surrounding normal tissues. However, as normal and cancerous cells absorb dose in an almost identical fashion, the normal tissues absorb a radiation dose during the treatment time. The use of GNPs in combination with radiotherapy in the treatment of a tumor decreases the dose absorbed by normal tissues. The results indicate that the dose to the tumor in an eyeball implanted with a COMS plaque increases with increasing GNP concentration inside the target. Therefore, the required irradiation time for tumors in the eye is decreased by adding the GNPs prior to treatment. As a result, the dose to normal tissues decreases when the irradiation time is reduced. Furthermore, a comparison between the simulated data in an eye phantom made of water and an eye phantom made of human eye composition, in the presence of GNPs, shows the significance of utilizing the composition of the eye in ophthalmic brachytherapy dosimetry. Also, defining the eye composition instead of water leads to more accurate calculations of GNP radiation effects in ophthalmic brachytherapy dosimetry.
Energy Technology Data Exchange (ETDEWEB)
Feck, Norbert; Wagner, Hermann-Josef [Bochum Univ. (Germany). Lehrstuhl fuer Energiesysteme und Energiewirtschaft
2008-07-01
Uncertainties in the data of life-cycle assessments of new energy systems can be considered by a Monte Carlo simulation. These uncertain data are represented by probability distributions, after which a stochastic simulation is performed. The article presents the results of such a simulation for a geothermal heat plant using the hot-dry-rock technology. (orig.)
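The procedure the abstract describes, sampling each uncertain input from its assigned distribution and re-running the model, can be sketched as follows. The toy life-cycle figure (emissions = energy use × emission factor) and all distribution parameters are illustrative assumptions, not data from the article:

```python
import random
import statistics

def propagate(model, dists, n=50000, rng=random):
    """Monte Carlo uncertainty propagation: sample each uncertain input
    from its probability distribution and collect the model outputs."""
    return [model({k: d(rng) for k, d in dists.items()}) for _ in range(n)]

# Hypothetical uncertain inputs for a life-cycle emissions figure
dists = {
    "energy_use": lambda r: r.gauss(100.0, 10.0),      # kWh, assumed normal
    "emission_factor": lambda r: r.uniform(0.4, 0.6),  # kg CO2/kWh, assumed uniform
}
random.seed(1)
out = propagate(lambda p: p["energy_use"] * p["emission_factor"], dists)
mean, sd = statistics.mean(out), statistics.stdev(out)
```

The resulting output distribution (here summarized by its mean and standard deviation) quantifies how input uncertainty carries through to the assessment result.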
Microscopic to macroscopic depletion model development for FORMOSA-P
International Nuclear Information System (INIS)
Noh, J.M.; Turinsky, P.J.; Sarsour, H.N.
1996-01-01
Microscopic depletion has been gaining popularity with regard to employment in reactor core nodal calculations, mainly attributed to the superiority of microscopic depletion in treating spectral history effects during depletion. Another trend is the employment of loading pattern optimization computer codes in support of reload core design. Use of such optimization codes has significantly reduced design efforts to optimize reload core loading patterns associated with increasingly complicated lattice designs. A microscopic depletion model has been developed for the FORMOSA-P pressurized water reactor (PWR) loading pattern optimization code. This was done for both fidelity improvements and to make FORMOSA-P compatible with microscopic-based nuclear design methods. Needless to say, microscopic depletion requires more computational effort compared with macroscopic depletion. This implies that microscopic depletion may be computationally restrictive if employed during the loading pattern optimization calculation because many loading patterns are examined during the course of an optimization search. Therefore, the microscopic depletion model developed here uses combined models of microscopic and macroscopic depletion. This is done by first performing microscopic depletions for a subset of possible loading patterns from which 'collapsed' macroscopic cross sections are obtained. The collapsed macroscopic cross sections inherently incorporate spectral history effects. Subsequently, the optimization calculations are done using the collapsed macroscopic cross sections. Using this approach allows maintenance of microscopic depletion level accuracy without substantial additional computing resources
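The collapsing step described above, forming macroscopic cross sections from microscopic ones and then flux-weighting them into fewer groups, can be sketched numerically. This is a generic two-group illustration with made-up number densities, cross sections, and spectrum, not FORMOSA-P data:

```python
def macroscopic(number_densities, micro_xs):
    """Sigma = sum_i N_i * sigma_i: combine per-nuclide microscopic cross
    sections (barns) with number densities (atoms per barn-cm)."""
    return sum(number_densities[i] * micro_xs[i] for i in number_densities)

def collapse(sigma_g, flux_g):
    """Flux-weighted one-group collapse of groupwise macroscopic cross
    sections; the weights carry the spectral (history) information."""
    return sum(s * f for s, f in zip(sigma_g, flux_g)) / sum(flux_g)

# Two nuclides, two energy groups -- all numbers are illustrative
N = {"U235": 1.0e-3, "U238": 2.2e-2}                    # atoms/(barn*cm)
sigma_a = {"U235": [10.0, 100.0], "U238": [1.0, 2.0]}   # barns per group
Sigma_g = [macroscopic(N, {nuc: s[g] for nuc, s in sigma_a.items()})
           for g in range(2)]
Sigma_1g = collapse(Sigma_g, flux_g=[0.8, 0.2])         # assumed spectrum
```

Because the weights come from a microscopic depletion of representative loading patterns, the collapsed constants inherit the spectral history effects while keeping the optimization-loop evaluations cheap.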
Bence, Melinda; Jankovics, Ferenc; Lukácsovich, Tamás; Erdélyi, Miklós
2017-04-01
Inducible protein degradation techniques have considerable advantages over classical genetic approaches, which generate loss-of-function phenotypes at the gene or mRNA level. The plant-derived auxin-inducible degradation system (AID) is a promising technique which enables the degradation of target proteins tagged with the AID motif in nonplant cells. Here, we present a detailed characterization of this method employed during the adult oogenesis of Drosophila. Furthermore, with the help of CRISPR/Cas9-based genome editing, we improve the utility of the AID system in the conditional elimination of endogenously expressed proteins. We demonstrate that the AID system induces efficient and reversible protein depletion of maternally provided proteins both in the ovary and the early embryo. Moreover, the AID system provides a fine spatiotemporal control of protein degradation and allows for the generation of different levels of protein knockdown in a well-regulated manner. These features of the AID system enable the unraveling of the discrete phenotypes of genes with highly complex functions. We utilized this system to generate a conditional loss-of-function allele which allows for the specific degradation of the Vasa protein without affecting its alternative splice variant (solo) and the vasa intronic gene (vig). With the help of this special allele, we demonstrate that dramatic decrease of Vasa protein in the vitellarium does not influence the completion of oogenesis as well as the establishment of proper anteroposterior and dorsoventral polarity in the developing oocyte. Our study suggests that both the localization and the translation of gurken mRNA in the vitellarium is independent from Vasa. © 2017 Federation of European Biochemical Societies.
Chuang, Ching-Cheng; Lee, Chia-Yen; Chen, Chung-Ming; Hsieh, Yao-Sheng; Liu, Tsan-Chi; Sun, Chia-Wei
2012-05-01
This study proposed diffuser-aided diffuse optical imaging (DADOI) as a new approach to improve the performance of the conventional diffuse optical tomography (DOT) approach for breast imaging. The 3-D breast model for Monte Carlo simulation is remodeled from a clinical MRI image. The modified Beer-Lambert law is adopted with the DADOI approach to substitute for the complex inverse-problem algorithms for mapping of the spatial distribution, and the depth information is obtained based on time-of-flight estimation. The simulation results demonstrate that the time-resolved Monte Carlo method is capable of performing source-detector separation analysis. The dynamics of photon migration with various source-detector separations are analyzed for the characterization of breast tissue and estimation of the optode arrangement. The source-detector separations should be less than 4 cm for breast imaging in a DOT system. Meanwhile, the feasibility of DADOI was demonstrated in this study. In the results, the DADOI approach can provide better imaging contrast and faster imaging than conventional DOT measurement. The DADOI approach possesses great potential for detecting breast tumors at an early stage and for chemotherapy monitoring, which implies good feasibility for clinical application.
International Nuclear Information System (INIS)
Nagaya, Yasunobu; Okumura, Keisuke; Mori, Takamasa; Nakagawa, Masayuki
2005-06-01
In order to realize fast and accurate Monte Carlo simulation of neutron and photon transport problems, two vectorized Monte Carlo codes MVP and GMVP have been developed at JAERI. MVP is based on the continuous energy model and GMVP is on the multigroup model. Compared with conventional scalar codes, these codes achieve higher computation speed by a factor of 10 or more on vector super-computers. Both codes have sufficient functions for production use by adopting accurate physics model, geometry description capability and variance reduction techniques. The first version of the codes was released in 1994. They have been extensively improved and new functions have been implemented. The major improvements and new functions are (1) capability to treat the scattering model expressed with File 6 of the ENDF-6 format, (2) time-dependent tallies, (3) reaction rate calculation with the pointwise response function, (4) flexible source specification, (5) continuous-energy calculation at arbitrary temperatures, (6) estimation of real variances in eigenvalue problems, (7) point detector and surface crossing estimators, (8) statistical geometry model, (9) function of reactor noise analysis (simulation of the Feynman-α experiment), (10) arbitrary shaped lattice boundary, (11) periodic boundary condition, (12) parallelization with standard libraries (MPI, PVM), (13) supporting many platforms, etc. This report describes the physical model, geometry description method used in the codes, new functions and how to use them. (author)
11th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing
Nuyens, Dirk
2016-01-01
This book presents the refereed proceedings of the Eleventh International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing that was held at the University of Leuven (Belgium) in April 2014. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems, arising, in particular, in finance, statistics and computer graphics.
Compressible ferrimagnetism in the depleted periodic Anderson model
Costa, N. C.; Araújo, M. V.; Lima, J. P.; Paiva, T.; dos Santos, R. R.; Scalettar, R. T.
2018-02-01
Tight-binding Hamiltonians with single and multiple orbitals exhibit an intriguing array of magnetic phase transitions. In most cases the spin ordered phases are insulating, while the disordered phases may be either metallic or insulating. In this paper we report a determinant quantum Monte Carlo study of interacting electrons in a geometry which can be regarded as a two-dimensional periodic Anderson model with depleted interacting (f) orbitals. For a single depletion, we observe an enhancement of antiferromagnetic correlations and formation of localized states. For half of the f orbitals regularly depleted, the system exhibits a ferrimagnetic ground state. We obtain a quantitative determination of the nature of magnetic order, which we discuss in the context of Tsunetsugu's theorem, and show that, although the dc conductivity indicates insulating behavior at half filling, the compressibility remains finite.
International Nuclear Information System (INIS)
Hussein, A.S.
2005-01-01
Depleted uranium (DU) is the waste product of uranium enrichment from the manufacturing of fuel rods for nuclear reactors in nuclear power plants and nuclear-powered ships. DU may also result from the reprocessing of spent nuclear reactor fuel. Potentially, DU has both chemical and radiological toxicity, the two important target organs being the kidneys and the lungs. DU is made into a metal and, due to its availability, low price, high specific weight, density and melting point, as well as its pyrophoricity, it has a wide range of civilian and military applications. Following the use of DU in recent years, reports have appeared in the press on health hazards that are alleged to be due to DU. In this paper, the properties, applications, and potential environmental and health effects of DU are briefly reviewed.
Is Monte Carlo embarrassingly parallel?
International Nuclear Information System (INIS)
Hoogenboom, J. E.
2012-01-01
Monte Carlo is often stated to be embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup, and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendezvous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Other time losses in the parallel calculation are also identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
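The per-cycle rendezvous the abstract identifies can be sketched with a toy cycle-based Monte Carlo driver: workers run their batches independently, but every cycle ends with a blocking tally merge before the next cycle can start. This stands in for the fission-source gather; the pi-estimation tally and all parameters are illustrative:

```python
import concurrent.futures
import random

def cycle_batch(seed, n_histories):
    """One worker's share of a cycle: a toy hit tally (quarter-circle
    sampling) standing in for fission-source histories."""
    rng = random.Random(seed)
    return sum(rng.random() ** 2 + rng.random() ** 2 < 1.0
               for _ in range(n_histories))

def run(n_cycles=10, n_workers=4, histories=20000):
    total_hits = total_n = 0
    with concurrent.futures.ThreadPoolExecutor(n_workers) as pool:
        for cycle in range(n_cycles):
            futures = [pool.submit(cycle_batch, 1000 * cycle + w, histories)
                       for w in range(n_workers)]
            # rendezvous point: every worker must finish before the
            # cycle tallies are merged and the next cycle is launched
            total_hits += sum(f.result() for f in futures)
            total_n += n_workers * histories
    return 4.0 * total_hits / total_n

pi_est = run()
```

The `f.result()` barrier is the synchronization cost: with many processors, the slowest worker of each cycle sets the pace, which is one reason speedup saturates.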
Directory of Open Access Journals (Sweden)
Khan Hamda
2017-01-01
This paper carries out a Monte Carlo simulation of a landmine detection system, using the MCNP5 code, for the detection of concealed explosives such as trinitrotoluene and cyclonite. In portable field detectors, the signal strength of backscattered neutrons and gamma rays from thermal neutron activation is sensitive to a number of parameters such as the mass of explosive, depth of concealment, neutron moderation, background soil composition, soil porosity, soil moisture, multiple scattering in the background material, and configuration of the detection system. In this work, a detection system, with BF3 detectors for neutrons and a sodium iodide scintillator for γ-rays, is modeled to investigate the neutron signal-to-noise ratio and to obtain an empirical formula for the photon production rate Ri(n,γ) = Sf Gf Mf(d,m) from radiative capture reactions in the constituent nuclides of trinitrotoluene. This formula can be used for the efficient landmine detection of explosives in quantities as small as ~200 g of trinitrotoluene concealed at depths down to about 15 cm. The empirical formula can be embedded in a field-programmable gate array on a field-portable explosives sensor for efficient online detection.
Directory of Open Access Journals (Sweden)
Song Hyun Kim
2015-08-01
The chord length sampling method in Monte Carlo simulations is used to model spherical particles in a stochastic medium with a random sampling technique. It has received attention due to its high calculation efficiency as well as user convenience; however, a technical issue regarding the boundary effect has been noted. In this study, after analyzing the distribution characteristics of spherical particles using an explicit method, an alternative chord length sampling method is proposed. In addition, for modeling in finite media, a correction method for the boundary effect is proposed. Using the proposed method, sample probability distributions and relative errors were estimated and compared with those calculated by the explicit method. The results show that the reconstruction ability and modeling accuracy of the particle probability distribution with the proposed method were considerably high. Also, from the local packing fraction results, the proposed method can successfully solve the boundary effect problem. It is expected that the proposed method can contribute to increasing the modeling accuracy in stochastic media.
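The basic ingredient of chord length sampling, drawing random chords through a sphere, can be illustrated with a minimal sketch. This shows only the textbook mu-randomness chord distribution (mean chord 4R/3 by Cauchy's formula), not the paper's proposed correction:

```python
import math
import random

def sample_chord(radius, rng=random):
    """Sample one mu-randomness chord through a sphere of given radius:
    the impact parameter b has density 2b/R^2 on [0, R], and the chord
    length is then 2*sqrt(R^2 - b^2)."""
    b = radius * math.sqrt(rng.random())     # inverse-CDF sample of b
    return 2.0 * math.sqrt(radius * radius - b * b)

random.seed(2)
R = 1.0
chords = [sample_chord(R) for _ in range(200000)]
mean_chord = sum(chords) / len(chords)       # theory: 4R/3
```

In a stochastic-media simulation such chords set the distances between sphere entries along a neutron track; the boundary effect the paper addresses arises because this distribution is distorted near the edges of a finite medium.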
Otsuki, Soichi
2018-04-01
Polarimetric imaging of absorbing, strongly scattering, or birefringent inclusions is investigated in a negligibly absorbing, moderately scattering, and isotropic slab medium. It was proved that the reduced effective scattering Mueller matrix is exactly calculated from experimental or simulated raw matrices even if the medium is anisotropic and/or heterogeneous, or the outgoing light beam exits obliquely to the normal of the slab surface. The calculation also gives a reasonable approximation of the reduced matrix using a light beam with a finite diameter for illumination. The reduced matrix was calculated using a Monte Carlo simulation and was factorized in two dimensions by the Lu-Chipman polar decomposition. The intensity of backscattered light shows clear and modestly clear differences for absorbing and strongly scattering inclusions, respectively, whereas it shows no difference for birefringent inclusions. Conversely, some polarization parameters, for example the selective depolarization coefficients, exhibit only a slight difference for the absorbing inclusions, whereas they show a clear difference for the strongly scattering or birefringent inclusions. Moreover, these quantities become larger with increasing difference in the optical properties of the inclusions relative to the surrounding medium. However, it is difficult to recognize inclusions buried at depths greater than 3 mm below the surface. Thus, the present technique can detect the approximate shape and size of these inclusions and, considering the depth at which the inclusions lie, estimate their optical properties. This study reveals the possibility of the polarization-sensitive imaging of turbid inhomogeneous media using a pencil beam for illumination.
Directory of Open Access Journals (Sweden)
J. Tonttila
2013-08-01
A new method for parameterizing the subgrid variations of vertical velocity and cloud droplet number concentration (CDNC is presented for general circulation models (GCMs. These parameterizations build on top of existing parameterizations that create stochastic subgrid cloud columns inside the GCM grid cells, which can be employed by the Monte Carlo independent column approximation approach for radiative transfer. The new model version adds a description for vertical velocity in individual subgrid columns, which can be used to compute cloud activation and the subgrid distribution of the number of cloud droplets explicitly. Autoconversion is also treated explicitly in the subcolumn space. This provides a consistent way of simulating the cloud radiative effects with two-moment cloud microphysical properties defined at subgrid scale. The primary impact of the new parameterizations is to decrease the CDNC over polluted continents, while over the oceans the impact is smaller. Moreover, the lower CDNC induces a stronger autoconversion of cloud water to rain. The strongest reduction in CDNC and cloud water content over the continental areas promotes weaker shortwave cloud radiative effects (SW CREs even after retuning the model. However, compared to the reference simulation, a slightly stronger SW CRE is seen e.g. over mid-latitude oceans, where CDNC remains similar to the reference simulation, and the in-cloud liquid water content is slightly increased after retuning the model.
The Toxicity of Depleted Uranium
Briner, Wayne
2010-01-01
Depleted uranium (DU) is an emerging environmental pollutant that is introduced into the environment primarily by military activity. While depleted uranium is less radioactive than natural uranium, it still retains all the chemical toxicity associated with the original element. In large doses the kidney is the target organ for the acute chemical toxicity of this metal, producing potentially lethal tubular necrosis. In contrast, chronic low dose exposure to depleted uranium may not produce a c...
Groundwater depletion embedded in international food trade
Dalin, Carole; Wada, Yoshihide; Kastner, Thomas; Puma, Michael J.
2017-03-01
Recent hydrological modelling and Earth observations have located and quantified alarming rates of groundwater depletion worldwide. This depletion is primarily due to water withdrawals for irrigation, but its connection with the main driver of irrigation, global food consumption, has not yet been explored. Here we show that approximately eleven per cent of non-renewable groundwater use for irrigation is embedded in international food trade, of which two-thirds are exported by Pakistan, the USA and India alone. Our quantification of groundwater depletion embedded in the world’s food trade is based on a combination of global, crop-specific estimates of non-renewable groundwater abstraction and international food trade data. A vast majority of the world’s population lives in countries sourcing nearly all their staple crop imports from partners who deplete groundwater to produce these crops, highlighting risks for global food and water security. Some countries, such as the USA, Mexico, Iran and China, are particularly exposed to these risks because they both produce and import food irrigated from rapidly depleting aquifers. Our results could help to improve the sustainability of global food production and groundwater resource management by identifying priority regions and agricultural products at risk as well as the end consumers of these products.
DEFF Research Database (Denmark)
Strunk, Astrid; Knudsen, Mads Faurschou; Larsen, Nicolaj Krog
We investigate the landscape history in eastern and western Greenland by applying a novel Markov Chain Monte Carlo (MCMC) inversion approach to the existing 10Be-26Al data from these regions. The new MCMC approach allows us to constrain the most likely landscape history based on comparisons between simulated and measured cosmogenic nuclide concentrations. It is a fundamental assumption of the model approach that the exposure history at the site/location can be divided into two distinct regimes: i) interglacial periods characterized by zero shielding due to overlying ice and a uniform interglacial erosion rate, and ii) glacial periods characterized by 100% shielding and a uniform glacial erosion rate. We incorporate the exposure/burial history in the model framework by applying a threshold value to the global marine benthic d18O record and include the threshold value as a free model parameter, hereby taking...
International Nuclear Information System (INIS)
Burlon, Alejandro A.; Kreiner, Andres J.; Minsky, Daniel; Valda, Alejandro A.; Somacal, Hector R.
2003-01-01
In order to optimize a neutron production target for accelerator-based boron neutron capture therapy (AB-BNCT) a Monte Carlo Neutron and Photon (MCNP) investigation has been performed. Neutron fields from a thick LiF target (with both a D2O-graphite and an Al/AlF3-graphite moderator/reflector assembly) were evaluated along the centerline in a head phantom. The target neutron beam was simulated from the 7Li(p,n)7Be nuclear reaction for 1.89, 2.0 and 2.3 MeV protons. The results show that it is more advantageous to irradiate the target with near-resonance-energy protons (2.3 MeV) because of the high neutron yield at this energy. On the other hand, the Al/AlF3-graphite assembly exhibits a more efficient performance than D2O. (author)
International Nuclear Information System (INIS)
Aldana, S; García-Fernández, P; Jiménez-Molinos, F; Gómez-Campos, F; Roldán, J B; Rodríguez-Fernández, Alberto; Romero-Zaliz, R; González, M B; Campabadal, F
2017-01-01
A new RRAM simulation tool based on a 3D kinetic Monte Carlo algorithm has been implemented. The redox reactions and migration of cations are developed taking into consideration the temperature and electric potential 3D distributions within the device dielectric at each simulation time step. The filamentary conduction has been described by obtaining the percolation paths formed by metallic atoms. Ni/HfO2/Si-n+ unipolar devices have been fabricated and measured. The different experimental characteristics of the devices under study have been reproduced with accuracy by means of simulations. The main physical variables can be extracted at any simulation time to clarify the physics behind resistive switching; in particular, the final conductive filament shape can be studied in detail. (paper)
Song, Sangha; Elgezua, Inko; Kobayashi, Yo; Fujie, Masakatsu G
2013-01-01
In biomedicine, Monte Carlo (MC) simulation is commonly used to simulate light diffusion in tissue. However, most previous studies did not consider a radial-beam LED as the light source. Therefore, we considered the characteristics of a radial-beam LED and applied them in MC simulation as the light source. In this paper, we consider three characteristics of a radial-beam LED. The first is the initial launch area of photons. The second is the incident angle of a photon at the initial photon launching area. The third is the refraction effect according to the contact area between the LED and a turbid medium. For verification of the MC simulation, we compared simulation and experimental results. The average correlation coefficient between the simulation and experimental results is 0.9954. Through this study, we show an effective method to simulate light diffusion in tissue with the characteristics of a radial-beam LED based on MC simulation.
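The first two source characteristics, an extended launch area and a distribution of incident angles, can be sketched as a photon-launch routine. The paper does not specify its angular model, so a Lambertian (cosine-weighted) distribution and a uniform emitting disk are assumed here purely for illustration:

```python
import math
import random

def launch_photon(led_radius, rng=random):
    """Initial position and polar angle for an extended LED source:
    position uniform over the emitting disk (r = R*sqrt(u)), angle
    cosine-weighted, pdf ~ cos(theta)*sin(theta) (Lambertian, assumed)."""
    r = led_radius * math.sqrt(rng.random())      # uniform over disk area
    phi = 2.0 * math.pi * rng.random()
    theta = math.asin(math.sqrt(rng.random()))    # inverse CDF of sin^2(theta)
    return (r * math.cos(phi), r * math.sin(phi)), theta

random.seed(3)
cos_thetas = [math.cos(launch_photon(1.0)[1]) for _ in range(100000)]
mean_cos = sum(cos_thetas) / len(cos_thetas)      # theory: 2/3 for Lambertian
```

A full implementation would additionally refract each launched direction at the LED-medium interface (the paper's third characteristic) via Snell's law before starting the random walk.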
International Nuclear Information System (INIS)
Vincent, E.; Becquart, C.S.; Domain, C.
2007-01-01
The embrittlement of pressure vessel steels under radiation has been long ago correlated with the presence of Cu solutes. Other solutes such as Ni, Mn and Si are now suspected to contribute also to the embrittlement. The interactions of these solutes with radiation induced point defects thus need to be characterized properly in order to understand the elementary mechanisms behind the formation of the clusters formed upon radiation. Ab initio calculations based on the density functional theory have been performed to determine the interactions of point defects with solute atoms in dilute FeX alloys (X = Cu, Mn, Ni or Si) in order to build a database used to parameterise an atomic kinetic Monte Carlo model. Some results of irradiation damage in dilute Fe-CuNiMnSi alloys obtained with this model are presented
Vincent, E.; Becquart, C. S.; Domain, C.
2007-02-01
The embrittlement of pressure vessel steels under radiation has been long ago correlated with the presence of Cu solutes. Other solutes such as Ni, Mn and Si are now suspected to contribute also to the embrittlement. The interactions of these solutes with radiation induced point defects thus need to be characterized properly in order to understand the elementary mechanisms behind the formation of the clusters formed upon radiation. Ab initio calculations based on the density functional theory have been performed to determine the interactions of point defects with solute atoms in dilute FeX alloys (X = Cu, Mn, Ni or Si) in order to build a database used to parameterise an atomic kinetic Monte Carlo model. Some results of irradiation damage in dilute Fe-CuNiMnSi alloys obtained with this model are presented.
Analytical Modeling of Unsteady Aluminum Depletion in Thermal Barrier Coatings
YEŞİLATA, Bülent
2014-01-01
The oxidation behavior of thermal barrier coatings (TBCs) in aircraft turbines is studied. A simple, unsteady and one-dimensional, diffusion model based on aluminum depletion from a bond-coat to form an oxide layer of Al2O3 is introduced. The model is employed for a case study with currently available experimental data. The diffusion coefficient of the depleted aluminum in the alloy, the concentration profiles at different oxidation times, and the thickness of Al-depleted region are...
International Nuclear Information System (INIS)
Padurariu, Leontin; Enachescu, Cristian; Mitoseriu, Liliana
2011-01-01
The properties induced by the M4+ addition (M = Zr, Sn, Hf) in BaMxTi1-xO3 solid solutions have been described on the basis of a 2D Ising-like network and Monte Carlo calculations, in which BaMO3 randomly distributed unit cells were considered as being non-ferroelectric. The polarization versus temperature dependences when increasing the M4+ concentration (x) showed a continuous reduction of the remanent polarization and of the critical temperature corresponding to the ferroelectric-paraelectric transition and a modification from a first-order to a second-order phase transition with a broad temperature range over which the transition takes place, as commonly reported for relaxors. The model also describes the system's tendency to reduce the polar clusters' average size while increasing their stability in time at higher temperatures above the Curie range, when a ferroelectric-relaxor crossover is induced by increasing the substitution (x). The equilibrium micropolar states during the polarization reversal process while describing the P(E) loops were comparatively monitored for the ferroelectric (x = 0) and relaxor (x = 0.3) states. Polarization reversal in relaxor compositions proceeds by the growth of several nucleated domains (the 'labyrinthine domain pattern') instead of the large-scale domain formation typical for the ferroelectric state. The spatial and temporal evolution of the polar clusters in BaMxTi1-xO3 solid solutions at various x has also been described by the correlation length and correlation time. As expected for the ferroelectric-relaxor crossover characterized by a progressively increasing degree of disorder, local fluctuations cause a reducing correlation time when the substitution degree increases, at a given temperature. The correlation time around the Curie temperature increases, reflecting the increasing stability in time of some polar nanoregions in relaxors in comparison with ferroelectrics, which was experimentally proved in
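The dilution idea, an Ising-like lattice in which a fraction x of randomly chosen cells is non-ferroelectric, can be sketched with a plain 2D Ising Metropolis model where substituted cells carry no dipole. This is a generic sketch with nearest-neighbor coupling J = 1 and illustrative parameters, not the paper's actual Hamiltonian:

```python
import math
import random

def sweep(spins, active, L, T, rng):
    """One Metropolis sweep of an L x L Ising lattice; substituted
    (non-ferroelectric) cells are frozen at spin 0."""
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        if not active[i][j]:
            continue                       # substituted cell: no dipole
        nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2.0 * spins[i][j] * nb        # energy cost of flipping (J = 1)
        if dE <= 0.0 or rng.random() < math.exp(-dE / T):
            spins[i][j] = -spins[i][j]

def magnetization(L, x, T, sweeps=200, seed=4):
    """Average |polarization| per active cell after equilibration,
    with a fraction x of cells randomly substituted."""
    rng = random.Random(seed)
    active = [[rng.random() >= x for _ in range(L)] for _ in range(L)]
    spins = [[1 if active[i][j] else 0 for j in range(L)] for i in range(L)]
    for _ in range(sweeps):
        sweep(spins, active, L, T, rng)
    n_active = sum(map(sum, active)) or 1
    return abs(sum(map(sum, spins))) / n_active

m = magnetization(L=16, x=0.0, T=1.0)   # T well below the 2D Ising Tc ~ 2.27
```

Raising x in this sketch weakens the effective coupling network, which is the mechanism behind the reduced remanent polarization and transition temperature the abstract reports.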
Energy Technology Data Exchange (ETDEWEB)
Dong, Han; Sharma, Diksha; Badano, Aldo, E-mail: aldo.badano@fda.hhs.gov [Division of Imaging, Diagnostics, and Software Reliability, Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, Maryland 20993 (United States)
2014-12-15
Purpose: Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridMANTIS, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load between dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webMANTIS and visualMANTIS, to facilitate the setup of computational experiments via hybridMANTIS. Methods: The visualization tools visualMANTIS and webMANTIS enable the user to control simulation properties through a user interface. In the case of webMANTIS, control via a web browser allows access through mobile devices such as smartphones or tablets. webMANTIS acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. Results: The output consists of the point response, pulse-height spectrum, and optical transport statistics generated by hybridMANTIS. Users can download the output images and statistics as a zip file for future reference. In addition, webMANTIS provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns and allows the user to trace the history of the optical photons. Conclusions: The visualization tools visualMANTIS and webMANTIS provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers while allowing users to save simulation parameters and results from prior experiments. The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying
Hobbs, R F; Jentzen, W; Bockisch, A; Sgouros, G
2013-03-01
Salivary gland toxicity is of concern in radioiodine treatment of thyroid cancer. Toxicity is often observed even though the estimated radiation absorbed dose (AD) values are below expected toxicity thresholds. Monte Carlo-based voxelized 3-dimensional radiobiological dosimetry (3D-RD) calculations of the salivary glands from eight metastatic thyroid cancer patients treated with 131I are presented with the objective of resolving this discrepancy. GEANT4 Monte Carlo simulations were performed for 131I, based on pretherapeutic 124I PET/CT imaging corrected for partial volume effect, and the results were scaled to the therapeutic administered activities. For patients with external regions of high uptake proximal to the salivary glands, such as thyroid remnants or lymph node metastases, separate simulations were run to quantify the AD contributions from both (A) the salivary glands themselves and (B) the external proximal region of high uptake (present for five patients). The contribution from the whole body outside the field of view was also estimated using modeling. Voxelized and average ADs and biological effective doses (BEDs) were calculated. The estimated average therapeutic ADs were 2.26 Gy considering all contributions and 1.94 Gy from the self-dose component only. The average contribution from the external region of high uptake was 0.54 Gy. This difference was more pronounced for the submandibular glands (2.64 versus 2.10 Gy) than for the parotid glands (1.88 versus 1.78 Gy). The BED values (2.41 Gy on average) were only 6.6% higher than the ADs. The external sources of activity contribute significantly to the salivary gland AD; however, neither this contribution nor the radiobiological effect quantified by the BED is in itself sufficient to explain the clinically observed toxicity.
Energy Technology Data Exchange (ETDEWEB)
Cramer, S.N.
1984-01-01
The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.
Energy Technology Data Exchange (ETDEWEB)
Geramifar, P. [Faculty of Physics and Nuclear Engineering, Amir Kabir University of Technology (Tehran Polytechnic), Tehran (Iran, Islamic Republic of); Research Center for Science and Technology in Medicine, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Research Institute for Nuclear Medicine, Tehran University of Medical Sciences, Shariati Hospital, Tehran (Iran, Islamic Republic of); Ay, M.R., E-mail: mohammadreza_ay@tums.ac.ir [Research Center for Science and Technology in Medicine, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Research Institute for Nuclear Medicine, Tehran University of Medical Sciences, Shariati Hospital, Tehran (Iran, Islamic Republic of); Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Shamsaie Zafarghandi, M. [Faculty of Physics and Nuclear Engineering, Amir Kabir University of Technology (Tehran Polytechnic), Tehran (Iran, Islamic Republic of); Sarkar, S. [Research Center for Science and Technology in Medicine, Tehran University of Medical Sciences, Tehran (Iran, Islamic Republic of); Research Institute for Nuclear Medicine, Tehran University of Medical Sciences, Shariati Hospital, Tehran (Iran, Islamic Republic of); Loudos, G. [Department of Medical Instruments Technology, Technological Educational Institute, Athens (Greece); Rahmim, A. [Department of Radiology, School of Medicine, Johns Hopkins University, Baltimore (United States); Department of Electrical and Computer Engineering, School of Engineering, Johns Hopkins University, Baltimore (United States)
2011-06-11
The advent of fast scintillators with high light yield and/or stopping power, along with advances in photomultiplier tubes and electronics, has rekindled interest in time-of-flight (TOF) PET. Because the potential performance improvements offered by TOF PET are substantial, efforts to improve PET timing should prove very fruitful. In this study, we performed Monte Carlo simulations to explore what gains in PET performance could be achieved if the coincidence resolving time (CRT) in the LYSO-based PET component of the Discovery RX PET/CT scanner were improved. For this purpose, the GATE Monte Carlo package was utilized, providing the ability to model and characterize various physical phenomena in PET imaging. For the present investigation, count rate performance and signal-to-noise ratio (SNR) values at different activity concentrations were simulated for coincidence timing windows of 4, 5.85, 6, 6.5, 8, 10 and 12 ns and for CRTs of 100-900 ps FWHM in 50 ps FWHM increments using the NEMA scatter phantom. Strong evidence supporting the robustness of the simulations was found, as observed in the good agreement between measured and simulated data when estimating axial sensitivity, axial and transaxial detection position, gamma non-collinearity angle distribution and positron annihilation distance. In the non-TOF context, the results show that the random event rate can be reduced by using narrower coincidence timing window widths, demonstrating considerable enhancements in the peak noise equivalent count rate (NECR) performance. The peak NECR increased by approximately 50% when utilizing the coincidence window width of 4 ns. At the same time, utilization of TOF information resulted in improved NECR and SNR with a dramatic reduction of random coincidences as a function of CRT. For example, with a CRT of 500 ps FWHM, a factor of 2.3 reduction in random rates, a factor of 1.5 increase in NECR and a factor of 2.1 improvement in SNR is
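The reduction of random coincidences with narrower timing windows follows directly from the standard randoms estimate R = 2·tau·S1·S2 for a detector pair with singles rates S1 and S2. The sketch below is a back-of-envelope check of that linear scaling, not a GATE simulation; the singles rates are hypothetical round numbers, not values from the paper.

```python
def randoms_rate(tau_s, singles1_hz, singles2_hz):
    """Standard estimate of the random-coincidence rate for a detector
    pair: R = 2 * tau * S1 * S2, with tau the coincidence window (s)."""
    return 2.0 * tau_s * singles1_hz * singles2_hz

# Illustrative singles rates (hypothetical, not from the study)
s1 = s2 = 1.0e6  # 1 Mcps per detector bank

wide = randoms_rate(12e-9, s1, s2)    # 12 ns window
narrow = randoms_rate(4e-9, s1, s2)   # 4 ns window
print(wide / narrow)  # randoms scale linearly with the window width
```

Shrinking the window from 12 ns to 4 ns cuts the randoms rate threefold, which is the mechanism behind the NECR gains reported for the narrower window.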
Torres Berdeguez, Mirta Bárbara; Thomas, Sylvia; Rafful, Patricia; Arruda Sanchez, Tiago; Medeiros Oliveira Ramos, Susie; Souza Albernaz, Marta; Vasconcellos de Sá, Lidia; Lopes de Souza, Sergio Augusto; Mas Milian, Felix; Silva, Ademir Xavier da
2017-07-01
Recently, there has been a growing interest in a methodology for dose planning in radiosynoviorthesis to substitute fixed activity. Clinical practice based on fixed activity frequently does not embrace radiopharmaceutical dose optimization in patients. The aim of this paper is to propose and discuss a dose planning methodology considering the radiological findings of interest obtained by three-dimensional magnetic resonance imaging combined with Monte Carlo simulation in radiosynoviorthesis treatment applied to hemophilic arthropathy. The parameters analyzed were: surface area of the synovial membrane (synovial size), synovial thickness and joint effusion obtained by 3D MRI of nine knees from nine patients on a SIEMENS AVANTO 1.5 T scanner using a knee coil. The 3D Slicer software performed both the semiautomatic segmentation and quantitation of these radiological findings. The quantitation methodology was validated with a Lucite phantom imaged by 3D MRI. The study used the Monte Carlo N-Particle eXtended code version 2.6 for calculating the S-values required to set up the injected activity to deliver a 100 Gy absorbed dose at a determined synovial thickness. The radionuclides assessed were 90Y, 32P, 188Re, 186Re, 153Sm, and 177Lu, and the present study shows their effective treatment ranges. The quantitation methodology was successfully tested, with an error below 5% for different materials. The calculated S-values could provide data on the activity to be injected into the joint, assuming no extra-articular leakage from the joint cavity. Calculation of the effective treatment range could assist with the therapeutic decision, with an optimized protocol for dose prescription in radiosynoviorthesis. Using the 3D Slicer software, this study focused on segmentation and quantitation of radiological features such as joint effusion, synovial size, and thickness, all obtained by 3D MRI in patients' knees with hemophilic arthropathy. The combination of synovial size and thickness with the parameters obtained by Monte Carlo
Depletable resources and the economy
Heijman, W.J.M.
1991-01-01
The subject of this thesis is the depletion of scarce resources. The main question to be answered is how to avoid future resource crises. After dealing with the complex relation between nature and economics, three important concepts in relation with resource depletion are discussed: steady state,
The Toxicity of Depleted Uranium
Directory of Open Access Journals (Sweden)
Wayne Briner
2010-01-01
Depleted uranium (DU) is an emerging environmental pollutant that is introduced into the environment primarily by military activity. While depleted uranium is less radioactive than natural uranium, it still retains all the chemical toxicity associated with the original element. In large doses the kidney is the target organ for the acute chemical toxicity of this metal, producing potentially lethal tubular necrosis. In contrast, chronic low-dose exposure to depleted uranium may not produce a clear and defined set of symptoms. Chronic low-dose, or subacute, exposure to depleted uranium alters the appearance of milestones in developing organisms. Adult animals that were exposed to depleted uranium during development display persistent alterations in behavior, even after cessation of depleted uranium exposure. Adult animals exposed to depleted uranium demonstrate altered behaviors and a variety of alterations to brain chemistry. Despite its reduced level of radioactivity, evidence continues to accumulate that depleted uranium, if ingested, may pose a radiologic hazard. The current state of knowledge concerning DU is discussed.
Wormhole Hamiltonian Monte Carlo
Lan, Shiwei; Streets, Jeffrey; Shahbaba, Babak
2015-01-01
In machine learning and statistics, probabilistic inference involving multimodal distributions is quite difficult. This is especially true in high dimensional problems, where most existing algorithms cannot easily move from one mode to another. To address this issue, we propose a novel Bayesian inference approach based on Markov Chain Monte Carlo. Our method can effectively sample from multimodal distributions, especially when the dimension is high and the modes are isolated. To this end, it exploits and modifies the Riemannian geometric properties of the target distribution to create wormholes connecting modes in order to facilitate moving between them. Further, our proposed method uses the regeneration technique in order to adapt the algorithm by identifying new modes and updating the network of wormholes without affecting the stationary distribution. To find new modes, as opposed to rediscovering those previously identified, we employ a novel mode searching algorithm that explores a residual energy function obtained by subtracting an approximate Gaussian mixture density (based on previously discovered modes) from the target density function. PMID:25861551
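The mode-trapping problem that Wormhole Hamiltonian Monte Carlo targets is easy to reproduce: a random-walk Metropolis sampler on a well-separated Gaussian mixture essentially never crosses the low-density valley between modes. The sketch below is not the authors' Riemannian wormhole construction; it only illustrates the failure mode and shows how adding any symmetric between-mode proposal (here a simple reflection jump, an assumption of ours) restores mixing.

```python
import math
import random

def log_target(x, modes=(-10.0, 10.0), sigma=1.0):
    """Log-density (up to a constant) of an equal-weight two-Gaussian
    mixture with well-separated modes, the hard case for local samplers."""
    s = sum(math.exp(-0.5 * ((x - m) / sigma) ** 2) for m in modes)
    return math.log(s) if s > 0.0 else -1.0e300

def metropolis(n, step, jump_prob=0.0, seed=0):
    """Random-walk Metropolis started at the left mode; with probability
    jump_prob propose a reflection y ~ N(-x, 1), which is a symmetric
    'mode jump' move (a simplification, not the wormhole construction)."""
    rng = random.Random(seed)
    x, out = -10.0, []
    for _ in range(n):
        if rng.random() < jump_prob:
            y = -x + rng.gauss(0.0, 1.0)   # symmetric: density depends on x + y
        else:
            y = x + rng.gauss(0.0, step)
        # standard Metropolis accept/reject
        if math.log(rng.random() + 1e-300) < log_target(y) - log_target(x):
            x = y
        out.append(x)
    return out
```

With jump_prob = 0 the chain stays trapped near the mode it started in; with a small jump probability it visits both modes, which is the behavior the wormhole mechanism achieves in a principled, geometry-aware way.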
Variational Monte Carlo Technique
Indian Academy of Sciences (India)
Variational Monte Carlo Technique: Ground State Energies of Quantum Mechanical Systems. Sukanta Deb. General Article, Resonance – Journal of Science Education, Volume 19, Issue 8, August 2014, pp. 713-739.
International Nuclear Information System (INIS)
Petrizzi, L.; Batistoni, P.; Migliori, S.; Chen, Y.; Fischer, U.; Pereslavtsev, P.; Loughlin, M.; Secco, A.
2003-01-01
In deuterium-deuterium (D-D) and deuterium-tritium (D-T) fusion plasmas, neutrons are produced, causing activation of JET machine components. For safe operation and maintenance it is important to be able to predict the induced activation and the resulting shutdown dose rates. This requires a suitable system of codes which is capable of simulating both the neutron-induced material activation during operation and the decay gamma radiation transport after shutdown in the proper 3-D geometry. Two methodologies to calculate the dose rate in fusion devices have been developed recently and applied to fusion machines, both using the MCNP Monte Carlo code. FZK has developed a more classical approach, the rigorous two-step (R2S) system, in which MCNP is coupled to the FISPACT inventory code with automated routing. ENEA, in collaboration with the ITER Team, has developed an alternative approach, the direct one-step (D1S) method. Neutron and decay gamma transport are handled in one single MCNP run, using an ad hoc cross section library. The intention was to tightly couple the neutron-induced production of a radioisotope and the emission of its decay gammas for an accurate spatial distribution and a reliable calculated statistical error. The two methods have been used by the two Associations to calculate the dose rate at five positions of the JET machine, two inside the vacuum chamber and three outside, at cooling times between 1 second and 1 year after shutdown. The same MCNP model and irradiation conditions have been assumed. The exercise has been proposed and financed in the frame of the Fusion Technological Program of the JET machine. The aim is to supply the designers with the most reliable tool and data to calculate the dose rate on fusion machines. Results showed good agreement: the differences range between 5% and 35%. The next step to be considered in 2003 will be an exercise in which the comparison will be done with dose-rate data from JET taken during and
International Nuclear Information System (INIS)
Bieda, Bogusław
2014-01-01
The purpose of the paper is to present the results of the application of a stochastic approach based on Monte Carlo (MC) simulation to life cycle inventory (LCI) data of the Mittal Steel Poland (MSP) complex in Kraków, Poland. In order to assess the uncertainty, the software CrystalBall® (CB), which is associated with a Microsoft® Excel spreadsheet model, is used. The framework of the study was originally carried out for 2005. The total production of steel, coke, pig iron, sinter, slabs from continuous steel casting (CSC), sheets from the hot rolling mill (HRM) and blast furnace gas, collected in 2005 from MSP, was analyzed and used for MC simulation of the LCI model. In order to describe the random nature of all main products used in this study, a normal distribution has been applied. The results of the simulation (10,000 trials) performed with the use of CB consist of frequency charts and statistical reports. The results of this study can be used as the first step in performing a full LCA analysis in the steel industry. Further, it is concluded that the stochastic approach is a powerful method for quantifying parameter uncertainty in LCA/LCI studies and it can be applied to any steel industry. The results obtained from this study can help practitioners and decision-makers in steel production management. - Highlights: • The benefits of Monte Carlo simulation are examined. • The normal probability distribution is studied. • LCI data on the Mittal Steel Poland (MSP) complex in Kraków, Poland date back to 2005. • This is the first assessment of the LCI uncertainties in the Polish steel industry
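The Crystal Ball workflow described above, drawing normally distributed inputs, evaluating the inventory model, and summarizing the trials, can be sketched in a few lines. The model structure, means, standard deviations, and emission factors below are hypothetical placeholders for illustration, not MSP data.

```python
import random
import statistics

def mc_inventory(n_trials=10_000, seed=0):
    """Propagate normal input uncertainty through a toy LCI model:
    total CO2 = steel * ef_steel + coke * ef_coke.
    All production figures and emission factors are hypothetical."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_trials):
        steel = rng.gauss(5.0e6, 0.2e6)   # t/yr, mean and sd (assumed)
        coke = rng.gauss(1.2e6, 0.1e6)    # t/yr (assumed)
        totals.append(steel * 1.8 + coke * 3.1)  # emission factors, t CO2/t
    return statistics.mean(totals), statistics.stdev(totals)

mean, sd = mc_inventory()
print(f"total CO2: {mean:.3e} +/- {sd:.3e} t/yr")
```

The returned mean and standard deviation are the numerical analogue of the frequency charts and statistical reports that CB produces; a full study would replace the toy model with the plant's LCI flows.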
International Nuclear Information System (INIS)
Bakht, M.K.; Haddadi, A.; Sadeghi, M.; Ahmadi, S.J.; Sadjadi, S.S.; Tenreiro, C.
2013-01-01
Previously, a promising β−-emitting praseodymium-142 glass seed was proposed for brachytherapy of prostate cancer. In accordance with the previous study, a 142Pr capillary tube-based radioactive implant (CTRI) was suggested as a source with a new structure to enhance the application of β−-emitting radioisotopes such as 142Pr in brachytherapy. Praseodymium oxide powder was encapsulated in a glass capillary tube. Then, a thin and flexible fluorinated ethylene propylene Teflon layer sealed the capillary tube. The source was activated in the Tehran Research Reactor by the 141Pr(n,γ)142Pr reaction. Measurements of the dosimetric parameters were performed using GafChromic radiochromic film. In addition, the dose rate distribution of the 142Pr CTRI was calculated by modeling the 142Pr source in a water phantom using the Monte Carlo N-Particle Transport (MCNP5) Code. The active source was unreactive and did not leak in water. In comparison with the earlier proposed 142Pr seed, the suggested source showed similar desirable dosimetric characteristics. Moreover, the 142Pr CTRI production procedure may be technically and economically more feasible. The mass of praseodymium in the CTRI structure could be greater than that of the 142Pr glass seed; therefore, the required irradiation time and the neutron flux could be reduced. A 142Pr CTRI was proposed for brachytherapy of prostate cancer. Dosimetric calculations by experimental measurements and Monte Carlo simulation were performed to fulfill the requirements according to the American Association of Physicists in Medicine recommendations before the clinical use of new brachytherapy sources. The characteristics of the suggested source were compared with those of the previously proposed 142Pr glass seed. (author)
Enhancements in Continuous-Energy Monte Carlo Capabilities for SCALE 6.2
Energy Technology Data Exchange (ETDEWEB)
Rearden, Bradley T [ORNL; Petrie Jr, Lester M [ORNL; Peplow, Douglas E. [ORNL; Bekar, Kursat B [ORNL; Wiarda, Dorothea [ORNL; Celik, Cihangir [ORNL; Perfetti, Christopher M [ORNL; Dunn, Michael E [ORNL
2014-01-01
SCALE is a widely used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, industry, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a plug-and-play framework that includes three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE's graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 provides several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, sensitivity and uncertainty analysis, and improved fidelity in nuclear data libraries. A brief overview of SCALE capabilities is provided with emphasis on new features for SCALE 6.2.
Wu, Kui; Li, Guangjun; Bai, Sen
2012-06-01
This paper investigates how different energies impact the accuracy of the X-ray Voxel Monte Carlo (XVMC) algorithm when it is applied for dose calculation in kilovoltage cone-beam CT (kV-CBCT) images. The CIRS model 062 phantom was used to calibrate the CT number-relative electron density table of the CT and CBCT images. CT and CBCT scans were performed with a simulation model of a human head-and-neck placed in the same position to simulate locally advanced nasopharyngeal carcinoma. 6 MV and 15 MV photons were selected in the Monaco TPS to design intensity-modulated radiotherapy (IMRT) plans. The XVMC algorithm was selected for dose calculation; the calculation results were then compared and the impact of energy on the calculation accuracy was analyzed. The comparison of dose volume histograms (DVHs), doses received by targets and organs at risk, and the conformity index and uniformity index of targets indicates high agreement between the CT-based and CBCT-based plans. More evaluation indicators show higher accuracy when 15 MV photons were selected for dose calculation. Gamma index analysis with a criterion of 2 mm/2% and a threshold of 10% was used for comparison of dose distributions. The average pass rate of each plane was 99.3% ± 0.47% for 6 MV and 99.4% ± 0.44% for 15 MV. CBCT images after calibration have high dose calculation accuracy, which is higher when 15 MV photons are selected.
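For reference, the gamma-index comparison used above combines a distance-to-agreement (DTA) criterion with a dose-difference criterion: a reference point passes if some evaluated point satisfies sqrt((Δr/DTA)² + (ΔD/ΔD_max)²) ≤ 1. A minimal 1-D global-gamma sketch with the same 2 mm/2% criterion and 10% low-dose threshold might look as follows; the exhaustive search over points is chosen for clarity, not speed.

```python
import math

def gamma_pass_rate(ref, ev, spacing_mm, dta_mm=2.0, dd_frac=0.02, thresh=0.10):
    """1-D global gamma analysis: `ref` and `ev` are dose samples on the
    same uniform grid with `spacing_mm` between points. Reference points
    below `thresh` of the reference maximum are skipped (dose threshold).
    Returns the fraction of evaluated reference points with gamma <= 1."""
    dmax = max(ref)
    passed = total = 0
    for i, dr in enumerate(ref):
        if dr < thresh * dmax:
            continue                      # below the low-dose threshold
        total += 1
        # gamma^2 minimized over all evaluated points
        g2 = min(((i - j) * spacing_mm / dta_mm) ** 2 +
                 ((de - dr) / (dd_frac * dmax)) ** 2
                 for j, de in enumerate(ev))
        if math.sqrt(g2) <= 1.0:
            passed += 1
    return passed / total if total else 1.0
```

Identical distributions give a 100% pass rate, while a uniform dose error far beyond 2% of the maximum fails everywhere; clinical implementations work on 2-D/3-D grids with interpolation, but the criterion is the same.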
Adaptive Multilevel Monte Carlo Simulation
Hoel, H
2011-08-23
This work generalizes a multilevel forward Euler Monte Carlo method introduced by Michael B. Giles (Michael Giles. Oper. Res. 56(3):607-617, 2008) for the approximation of expected values depending on the solution to an Itô stochastic differential equation. That work proposed and analyzed a forward Euler multilevel Monte Carlo method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a standard, single-level, forward Euler Monte Carlo method. This work introduces an adaptive hierarchy of non-uniform time discretizations, generated by an adaptive algorithm introduced in (Anna Dzougoutov et al. Adaptive Monte Carlo algorithms for stopped diffusion. In Multiscale methods in science and engineering, volume 44 of Lect. Notes Comput. Sci. Eng., pages 59-88. Springer, Berlin, 2005; Kyoung-Sook Moon et al. Stoch. Anal. Appl. 23(3):511-558, 2005; Kyoung-Sook Moon et al. An adaptive algorithm for ordinary, stochastic and partial differential equations. In Recent advances in adaptive computation, volume 383 of Contemp. Math., pages 325-343. Amer. Math. Soc., Providence, RI, 2005). This form of the adaptive algorithm generates stochastic, path-dependent time steps and is based on a posteriori error expansions first developed in (Anders Szepessy et al. Comm. Pure Appl. Math. 54(10):1169-1214, 2001). Our numerical results for a stopped diffusion problem exhibit savings in the computational cost to achieve an accuracy of O(TOL), from O(TOL^-3) using a single-level version of the adaptive algorithm to O((TOL^-1 log(TOL))^2).
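The multilevel construction can be made concrete with a minimal, non-adaptive estimator in the spirit of Giles' method, here for E[X_T] of a geometric Brownian motion discretized by forward Euler: level l uses 2^l uniform steps, and coarse/fine paths are coupled by summing the fine Brownian increments. The fixed sample allocation and parameters are illustrative assumptions; the paper's adaptive, non-uniform time stepping is not reproduced.

```python
import math
import random

def mlmc_gbm(levels=4, n0=2000, T=1.0, x0=1.0, mu=0.05, sigma=0.2, seed=0):
    """Multilevel Monte Carlo estimate of E[X_T] for
    dX = mu*X dt + sigma*X dW, using forward Euler with 2**l steps on
    level l and a crude, fixed per-level sample allocation."""
    rng = random.Random(seed)

    def euler_pair(l):
        """One coupled (fine, coarse) Euler path at level l; the coarse
        path uses the summed fine Brownian increments."""
        nf = 2 ** l
        dt = T / nf
        xf = xc = x0
        dws = []
        for k in range(nf):
            dw = rng.gauss(0.0, math.sqrt(dt))
            xf += mu * xf * dt + sigma * xf * dw
            dws.append(dw)
            if l > 0 and k % 2 == 1:   # one coarse step per two fine steps
                xc += mu * xc * 2 * dt + sigma * xc * (dws[-2] + dws[-1])
        return xf, (xc if l > 0 else None)

    est = 0.0
    for l in range(levels):
        n = max(n0 // 2 ** l, 100)     # halve the samples per level (assumed)
        acc = 0.0
        for _ in range(n):
            xf, xc = euler_pair(l)
            acc += xf if xc is None else xf - xc  # telescoping correction
        est += acc / n
    return est
```

Because the level corrections xf - xc have small variance, most samples can be spent on the cheap coarse level, which is the source of the cost reduction relative to a single-level estimator; the exact value here is E[X_T] = x0·exp(mu·T).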
Directory of Open Access Journals (Sweden)
P. Li
2013-01-01
The growth of global population and economy continually increases waste volumes and consequently creates challenges for handling and disposing of solid wastes. This becomes more challenging in mixed rural-urban areas (i.e., areas of mixed land use for rural and urban purposes) where both agricultural waste (e.g., manure) and municipal solid waste are generated. The efficiency and confidence of decisions in current management practices rely significantly on accurate information and subjective judgments, which are usually compromised by uncertainties. This study proposed a resource-oriented solid waste management system for mixed rural-urban areas. The system is featured by a novel Monte Carlo simulation-based fuzzy programming approach. The developed system was tested on a real-world case with consideration of various resource-oriented treatment technologies and the associated uncertainties. The modeling results indicated that the community-based bio-coal and household-based CH4 facilities were necessary and would become predominant in the waste management system. The 95% confidence intervals of waste loadings to the CH4 and bio-coal facilities were [387, 450] and [178, 215] tonne/day (mixed flow), respectively. In general, the developed system has a high capability of supporting solid waste management for mixed rural-urban areas in a cost-efficient and sustainable manner under uncertainty.
Liu, Qinming; Dong, Ming; Peng, Ying
2012-10-01
Health prognosis of equipment is considered a key process of the condition-based maintenance strategy. It helps reduce the related risks and maintenance costs of equipment and improve its availability, reliability and security. However, equipment often operates under dynamically changing operational and environmental conditions, and its lifetime is generally described by monitored nonlinear time-series data. Equipment is subject to high levels of uncertainty and unpredictability, so effective methods for its online health prognosis are still needed. This paper addresses prognostic methods based on a hidden semi-Markov model (HSMM) using the sequential Monte Carlo (SMC) method. The HSMM is applied to obtain the transition probabilities among health states and the state durations. The SMC method is adopted to describe the probability relationships between health states and the monitored observations of the equipment. This paper proposes a novel multi-step-ahead health recognition algorithm based on the joint probability distribution to recognize the health states of equipment and their change points. A new online health prognostic method is also developed to estimate the residual useful lifetime (RUL) of equipment. At the end of the paper, a real case study is used to demonstrate the performance and potential applications of the proposed methods for online health prognosis of equipment.
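As a generic illustration of the SMC machinery such methods build on, here is a minimal bootstrap particle filter on a toy linear-Gaussian state-space model. The model, noise levels and the `bootstrap_pf` helper are illustrative assumptions, not the paper's HSMM-specific algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_pf(obs, n_particles=2000, q=0.1, r=0.5):
    """Minimal bootstrap (sequential Monte Carlo) filter for the toy model
    x_t = 0.9*x_(t-1) + N(0, q^2),  y_t = x_t + N(0, r^2).
    Generic SMC machinery only -- not the paper's HSMM formulation."""
    x = rng.normal(0.0, 1.0, n_particles)
    means = []
    for y in obs:
        x = 0.9 * x + rng.normal(0.0, q, n_particles)     # propagate particles
        w = np.exp(-0.5 * ((y - x) / r) ** 2)             # likelihood weights
        w /= w.sum()
        means.append(np.sum(w * x))                       # weighted posterior mean
        x = x[rng.choice(n_particles, n_particles, p=w)]  # multinomial resampling
    return np.array(means)

# Simulate the toy model, then filter the noisy observations.
true_x = np.zeros(50)
for t in range(1, 50):
    true_x[t] = 0.9 * true_x[t - 1] + rng.normal(0.0, 0.1)
obs = true_x + rng.normal(0.0, 0.5, 50)
est = bootstrap_pf(obs)
```

The filtered estimate should track the hidden state more closely than the raw observations, which is the property prognosis methods exploit when inferring health states from noisy condition data.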
Monte Carlo methods for particle transport
Haghighat, Alireza
2015-01-01
The Monte Carlo method has become the de facto standard in radiation transport. Although powerful, if not understood and used appropriately, the method can give misleading results. Monte Carlo Methods for Particle Transport teaches appropriate use of the Monte Carlo method, explaining the method's fundamental concepts as well as its limitations. Concise yet comprehensive, this well-organized text: * Introduces the particle importance equation and its use for variance reduction * Describes general and particle-transport-specific variance reduction techniques * Presents particle transport eigenvalue issues and methodologies to address these issues * Explores advanced formulations based on the author's research activities * Discusses parallel processing concepts and factors affecting parallel performance Featuring illustrative examples, mathematical derivations, computer algorithms, and homework problems, Monte Carlo Methods for Particle Transport provides nuclear engineers and scientists with a practical guide ...
Monte Carlo codes and Monte Carlo simulator program
International Nuclear Information System (INIS)
Higuchi, Kenji; Asai, Kiyoshi; Suganuma, Masayuki.
1990-03-01
Four typical Monte Carlo codes, KENO-IV, MORSE, MCNP and VIM, have been vectorized on the VP-100 at the Computing Center, JAERI. The problems in vector processing of Monte Carlo codes on vector processors have become clear through this work. As a result, it is recognized that there are difficulties in obtaining good performance from vector processing of Monte Carlo codes. A Monte Carlo computing machine, which processes Monte Carlo codes with high performance, has been under development at our Computing Center since 1987. The concept of the Monte Carlo computing machine and its performance have been investigated and estimated by using a software simulator. In this report, the problems in vectorization of Monte Carlo codes, the Monte Carlo pipelines proposed to mitigate these difficulties, and the results of the performance estimation of the Monte Carlo computing machine by the simulator are described. (author)
SU-E-T-29: A Web Application for GPU-Based Monte Carlo IMRT/VMAT QA with Delivered Dose Verification
International Nuclear Information System (INIS)
Folkerts, M; Graves, Y; Tian, Z; Gu, X; Jia, X; Jiang, S
2014-01-01
Purpose: To enable an existing web application for GPU-based Monte Carlo (MC) 3D dosimetry quality assurance (QA) to compute "delivered dose" from linac logfile data. Methods: We added significant features to an IMRT/VMAT QA web application which is based on existing technologies (HTML5, Python, and Django). This tool interfaces with Python, C-code libraries, and command-line-based GPU applications to perform MC-based IMRT/VMAT QA. The web app automates many complicated aspects of interfacing clinical DICOM and logfile data with cutting-edge GPU software to run an MC dose calculation. The resultant web app is powerful, easy to use, and able to re-compute both plan dose (from DICOM data) and delivered dose (from logfile data). Both dynalog and trajectorylog file formats are supported. Users upload zipped DICOM RP, CT, and RD data and set the expected statistical uncertainty for the MC dose calculation. A 3D gamma index map, 3D dose distribution, gamma histogram, dosimetric statistics, and DVH curves are displayed to the user. Additionally, the user may upload the delivery logfile data from the linac to compute a "delivered dose" calculation and corresponding gamma tests. A comprehensive PDF QA report summarizing the results can also be downloaded. Results: We successfully improved a web app for a GPU-based QA tool that consists of logfile parsing, fluence map generation, CT image processing, GPU-based MC dose calculation, gamma index calculation, and DVH calculation. The result is an IMRT and VMAT QA tool that conducts an independent dose calculation for a given treatment plan and delivery log file. The system takes both DICOM data and logfile data to compute plan dose and delivered dose, respectively. Conclusion: We successfully improved a GPU-based MC QA tool to allow for logfile dose calculation. The high efficiency and accessibility will greatly facilitate IMRT and VMAT QA
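The gamma-index comparison used by such QA tools can be illustrated with a minimal 1-D global gamma analysis. The dose profiles and the `gamma_pass_rate` helper are illustrative assumptions (the web app computes full 3-D gamma maps on GPU):

```python
import numpy as np

def gamma_pass_rate(ref, eval_, spacing_mm, dd=0.02, dta_mm=2.0, thresh=0.10):
    """1-D global gamma analysis (sketch): for each reference point above the
    low-dose threshold, gamma = min over evaluated points of
    sqrt((dose diff / dose criterion)^2 + (distance / DTA)^2); pass if <= 1."""
    x = np.arange(len(ref)) * spacing_mm
    dd_norm = dd * ref.max()                 # global dose normalization
    gammas = []
    for i in np.flatnonzero(ref >= thresh * ref.max()):
        ddose = (eval_ - ref[i]) / dd_norm
        dist = (x - x[i]) / dta_mm
        gammas.append(np.sqrt(ddose ** 2 + dist ** 2).min())
    return (np.array(gammas) <= 1.0).mean()

ref = np.exp(-((np.arange(100) - 50) / 15.0) ** 2)      # toy dose profile
shifted = np.exp(-((np.arange(100) - 51) / 15.0) ** 2)  # 1 mm-shifted copy
print(gamma_pass_rate(ref, shifted, spacing_mm=1.0))    # -> 1.0
```

A 1 mm spatial shift passes the 2 mm/2% criterion everywhere, which is why small delivery deviations captured in logfiles can still yield the high pass rates reported in such QA workflows.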
2009-01-01
Carlo Rubbia turned 75 on March 31, and CERN held a symposium to mark his birthday and pay tribute to his impressive contribution to both CERN and science. Carlo Rubbia, 4th from right, together with the speakers at the symposium.On 7 April CERN hosted a celebration marking Carlo Rubbia’s 75th birthday and 25 years since he was awarded the Nobel Prize for Physics. "Today we will celebrate 100 years of Carlo Rubbia" joked CERN’s Director-General, Rolf Heuer in his opening speech, "75 years of his age and 25 years of the Nobel Prize." Rubbia received the Nobel Prize along with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. During the symposium, which was held in the Main Auditorium, several eminent speakers gave lectures on areas of science to which Carlo Rubbia made decisive contributions. Among those who spoke were Michel Spiro, Director of the French National Insti...
Running Out Of and Into Oil. Analyzing Global Oil Depletion and Transition Through 2050
Energy Technology Data Exchange (ETDEWEB)
Greene, David L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Hopson, Janet L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Li, Jia [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2003-10-01
This report presents a risk analysis of world conventional oil resource production, depletion, expansion, and a possible transition to unconventional oil resources such as oil sands, heavy oil and shale oil over the period 2000 to 2050. Risk analysis uses Monte Carlo simulation methods to produce a probability distribution of outcomes rather than a single value.
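The kind of Monte Carlo risk analysis described, producing a probability distribution of outcomes rather than a single value, can be sketched as follows. The resource and demand-growth numbers are rough illustrative assumptions, not the report's calibrated inputs:

```python
import numpy as np

rng = np.random.default_rng(42)

def depletion_year_samples(n=100_000):
    """Toy Monte Carlo risk analysis: sample an uncertain ultimate recoverable
    resource and demand growth rate, then solve for the year in which
    cumulative production exhausts the resource. All numbers are rough
    illustrative assumptions, not the report's calibrated inputs."""
    urr = rng.triangular(2.0e12, 3.0e12, 4.0e12, n)   # barrels remaining, assumed
    growth = rng.triangular(0.005, 0.018, 0.030, n)   # annual growth rate, assumed
    prod0 = 28e9                                      # barrels/yr around 2000, approx
    # Cumulative production prod0*((1+g)^t - 1)/g = urr  =>  solve for t.
    t = np.log1p(urr * growth / prod0) / np.log1p(growth)
    return 2000 + t

samples = depletion_year_samples()
print(np.percentile(samples, [5, 50, 95]).round(1))   # 5th/50th/95th percentiles
```

Reporting percentiles of the sampled outcome, rather than one deterministic date, is the essential difference between this style of risk analysis and a single-value forecast.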
Haynes, Ashleigh; Kemps, Eva; Moffitt, Robyn
2016-11-01
The process model proposes that the ego depletion effect is due to (a) an increase in motivation toward indulgence, and (b) a decrease in motivation to control behaviour following an initial act of self-control. In contrast, the reflective-impulsive model predicts that ego depletion results in behaviour that is more consistent with desires, and less consistent with motivations, rather than influencing the strength of desires and motivations. The current study sought to test these alternative accounts of the relationships between ego depletion, motivation, desire, and self-control. One hundred and fifty-six undergraduate women were randomised to complete a depleting e-crossing task or a non-depleting task, followed by a lab-based measure of snack intake, and self-report measures of motivation and desire strength. In partial support of the process model, ego depletion was related to higher intake, but only indirectly via the influence of lowered motivation. Motivation was more strongly predictive of intake for those in the non-depletion condition, providing partial support for the reflective-impulsive model. Ego depletion did not affect desire, nor did depletion moderate the effect of desire on intake, indicating that desire may be an appropriate target for reducing unhealthy behaviour across situations where self-control resources vary. © 2016 The International Association of Applied Psychology.
Abhinav, S.; Manohar, C. S.
2018-03-01
The problem of combined state and parameter estimation in nonlinear state space models, based on Bayesian filtering methods, is considered. A novel approach, which combines Rao-Blackwellized particle filters for state estimation with Markov chain Monte Carlo (MCMC) simulations for parameter identification, is proposed. In order to ensure successful performance of the MCMC samplers, in situations involving large amount of dynamic measurement data and (or) low measurement noise, the study employs a modified measurement model combined with an importance sampling based correction. The parameters of the process noise covariance matrix are also included as quantities to be identified. The study employs the Rao-Blackwellization step at two stages: one, associated with the state estimation problem in the particle filtering step, and, secondly, in the evaluation of the ratio of likelihoods in the MCMC run. The satisfactory performance of the proposed method is illustrated on three dynamical systems: (a) a computational model of a nonlinear beam-moving oscillator system, (b) a laboratory scale beam traversed by a loaded trolley, and (c) an earthquake shake table study on a bending-torsion coupled nonlinear frame subjected to uniaxial support motion.
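The MCMC ingredient of such combined schemes can be illustrated with a minimal random-walk Metropolis-Hastings sampler on a toy one-parameter posterior. This is generic machinery, not the paper's Rao-Blackwellized formulation:

```python
import numpy as np

rng = np.random.default_rng(7)

data = rng.normal(2.0, 1.0, 100)   # observations with an unknown mean

def log_post(theta):
    """Unnormalized log-posterior: flat prior, unit observation noise."""
    return -0.5 * np.sum((data - theta) ** 2)

def metropolis(n_iter=20000, step=0.3):
    """Random-walk Metropolis-Hastings over the scalar parameter theta."""
    theta = 0.0
    lp = log_post(theta)
    chain = []
    for _ in range(n_iter):
        prop = theta + rng.normal(0.0, step)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
            theta, lp = prop, lp_prop
        chain.append(theta)
    return np.array(chain[5000:])                  # discard burn-in

chain = metropolis()
print(round(chain.mean(), 2))   # posterior mean should sit near data.mean()
```

In the paper's setting the scalar likelihood above is replaced by a marginal likelihood evaluated with Rao-Blackwellized particle filters, but the accept/reject structure is the same.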
Kussmann, Jörg; Ochsenfeld, Christian
2008-04-07
A reformulation of the fixed-node diffusion quantum Monte Carlo method (FN-DQMC) in terms of the N-particle density matrix is presented, which allows us to reduce the computational effort to linear for the evaluation of the local energy. The reformulation is based on our recently introduced density matrix-based approach for a linear-scaling variational QMC method [J. Kussmann et al., Phys. Rev. B 75, 165107 (2007)]. However, within the latter approach of using the positive semi-definite N-particle trial density ρ_T^N(R) = |Ψ_T(R)|², the nodal information of the trial function is lost. Therefore, a straightforward application to the FN-DQMC method is not possible, in which the sign of the trial function is usually traced in order to confine the random walkers to their nodal pockets. As a solution, we reformulate the FN-DQMC approach in terms of off-diagonal elements of the N-particle density matrix ρ_T^N(R; R'), so that the nodal information of the trial density matrix is obtained. Besides all-electron moves, a scheme to perform single-electron moves within N-PDM QMC is described in detail. The efficiency of our method is illustrated for exemplary calculations.
Siegal, Mark L; Masel, Joanna
2012-01-01
Hsp90 reveals phenotypic variation in the laboratory, but is Hsp90 depletion important in the wild? Recent work from Chen and Wagner in BMC Evolutionary Biology has discovered a naturally occurring Drosophila allele that downregulates Hsp90, creating sensitivity to cryptic genetic variation. Laboratory studies suggest that the exact magnitude of Hsp90 downregulation is important. Extreme Hsp90 depletion might reactivate transposable elements and/or induce aneuploidy, in addition to r...
International Nuclear Information System (INIS)
Yeh, C.Y.; Tung, C.J.; Lee, C.C.; Lin, M.H.; Chao, T.C.
2014-01-01
Measurement-based Monte Carlo (MBMC) simulation using a high-definition (HD) phantom was used to evaluate the dose distribution in nasopharyngeal cancer (NPC) patients treated with intensity modulated radiation therapy (IMRT). Around the nasopharyngeal cavity there exist many small-volume organs-at-risk (OARs), such as the optic nerves, auditory nerves, cochlea, and semicircular canals, which necessitate the use of a high-definition phantom for accurate and correct dose evaluation. The aim of this research was to study the advantages of using an HD phantom for MBMC simulation in NPC patients treated with IMRT. The MBMC simulation in this study was based on the IMRT treatment plans of three NPC patients generated by the anisotropic analytical algorithm (AAA) of the Eclipse treatment planning system (Varian Medical Systems, Palo Alto, CA, USA) using a calculation grid of 2 mm². The NPC tumor was treated to a cumulative dose of 7000 cGy in 35 fractions using the shrinking-field sequential IMRT (SIMRT) method. The BEAMnrc MC code was used to simulate a Varian EX21 linear accelerator treatment head. The HD phantom contained 0.5 × 0.5 × 1 mm³ voxels for the nasopharyngeal area and 0.5 × 0.5 × 3 mm³ voxels for the rest of the head area. An efficiency map was obtained for the amorphous silicon aS1000 electronic portal imaging device (EPID) to adjust the weighting of each particle in the phase-space file for each IMRT beam. Our analysis revealed that small-volume organs such as the eighth cranial nerve, semicircular canal, cochlea and external auditory canal showed an absolute dose difference of ≥200 cGy, while the dose difference for larger organs such as the parotid glands and tumor was negligible for the MBMC simulation using the HD phantom. The HD phantom was found to be suitable for Monte Carlo dose volume analysis of small-volume organs. - Highlights: • HD dose evaluation for IMRT of NPC patients has been verified by the MC method. • MC results shows
東條, 匡志; tojo, masashi
2007-01-01
In this study, a BWR core calculation method is developed. A continuous-energy Monte Carlo burn-up calculation code is newly applied to BWR assembly calculations at the production level. The applicability of the new calculation method is verified through tracking calculations of a commercial BWR. The mechanisms and quantitative effects of error propagation, spatial discretization, and the temperature distribution in the fuel pellet on Monte Carlo burn-up calculations are clari...
[Acute tryptophan depletion in eating disorders].
Díaz-Marsa, M; Lozano, C; Herranz, A S; Asensio-Vegas, M J; Martín, O; Revert, L; Saiz-Ruiz, J; Carrasco, J L
2006-01-01
This work describes the rationale justifying the use of the acute tryptophan depletion technique in eating disorders (ED), and the methods and design used in our studies. The tryptophan depletion technique has been described and used safely in previous studies, and it makes it possible to evaluate brain serotonin activity. It is therefore used in the investigation of hypotheses on serotonergic deficiency in eating disorders. Furthermore, given the relationship between dysfunctions of serotonin activity and impulsive symptoms, the technique may be useful in the biological differentiation of the different subtypes of ED, namely restrictive and bulimic. 57 female patients with DSM-IV eating disorders and 20 female controls were investigated with the tryptophan depletion test. A tryptophan-free amino acid solution was administered orally to patients and controls after a two-day low-tryptophan diet. Free plasma tryptophan was measured two and five hours after administration of the drink. Eating and emotional responses were measured with specific scales for five hours following the depletion. A study of basic personality characteristics and impulsivity traits was also performed. The relationship of the response to the test with the different clinical subtypes and with the temperamental and impulsive characteristics of the patients was studied. In the global sample, the test was effective in considerably reducing plasma tryptophan from baseline levels (76%) within five hours. The test was well tolerated and no severe adverse effects were reported. Two patients withdrew from the test due to gastric intolerance. The tryptophan depletion test could be of value in studying the involvement of serotonin deficits in the symptomatology and pathophysiology of eating disorders.
Pain, F.; Dhenain, M.; Gurden, H.; Routier, A. L.; Lefebvre, F.; Mastrippolito, R.; Lanièce, P.
2008-10-01
The β-microprobe is a simple and versatile technique complementary to small-animal positron emission tomography (PET). It relies on local measurements of the concentration of positron-labeled molecules. So far, it has been successfully used in anesthetized rats for pharmacokinetics experiments and for the study of brain energetic metabolism. However, the ability of the technique to provide accurate quantitative measurements using 18F, 11C and 15O tracers is likely to suffer from the contribution of the 511 keV gamma-ray background to the signal and from the contribution of positrons from brain loci surrounding the locus of interest. The aim of the present paper is to provide a method of evaluating several parameters which are supposed to affect the quantification of recordings performed in vivo with this methodology. We have developed realistic voxelized phantoms of the rat whole body and brain, and used them as input geometries for Monte Carlo simulations of previous β-microprobe reports. In the context of realistic experiments (binding of 11C-raclopride to D2 dopaminergic receptors in the striatum; local glucose metabolic rate measurement with 18F-FDG; and H2 15O blood flow measurements in the somatosensory cortex), we have calculated the detection efficiencies and the corresponding contribution of 511 keV gammas from peripheral organ accumulation. We confirmed that the 511 keV gamma background does not impair quantification. To evaluate the contribution of positrons from adjacent structures, we have developed β-Assistant, a program based on a rat brain voxelized atlas and matrices of local detection efficiencies calculated by Monte Carlo simulations for several probe geometries. This program was used to calculate the 'apparent sensitivity' of the probe for each brain structure included in the detection volume. For a given localization of a probe within the brain, this allows us to quantify the different sources of beta signal. Finally, since stereotaxic accuracy is
Energy Technology Data Exchange (ETDEWEB)
Chibani, O; Tahanout, F; Ma, C [Fox Chase Cancer Center, Philadelphia, PA (United States)
2016-06-15
Purpose: To commission a new MLC model for the GEPTS Monte Carlo system. The model is based on the concept of leaf and interleaf effective densities. Methods: GEPTS is a Monte Carlo system to be used for external beam planning verification. GEPTS incorporates detailed photon and electron transport algorithms (Med. Phys. 29, 2002, 835). A new GEPTS model for the Varian Millennium MLC is presented. The model accounts for: 1) thick (1 cm) and thin (0.5 cm) leaves, 2) the tongue-and-groove design, 3) high-transmission (HT) and low-transmission (LT) interleaves, and 4) the rounded leaf end. Leaf (and interleaf) height is set equal to 6 cm. Instead of modeling air gaps, screw holes, and complex leaf heads, "effective densities" are assigned to: 1) thin leaves, 2) thick leaves, 3) HT interleaves, and 4) LT interleaves. Results: The new MLC model is used to calculate dose profiles for closed-MLC and tongue-and-groove fields at 5 cm depth for 6, 10 and 15 MV Varian beams. Calculations are compared with 1) pinpoint ionization chamber transmission ratios and 2) EBT3 radiochromic films. Pinpoint readings were acquired beneath thick and thin leaves, and beneath HT and LT interleaves. The best fit of measured dose profiles was obtained for the following parameters: thick-leaf density = 16.1 g/cc, thin-leaf density = 17.2 g/cc, HT interleaf density = 12.4 g/cc, LT interleaf density = 14.3 g/cc, and interleaf thickness = 1.1 mm. The attached figures show the comparison of calculated and measured transmission ratios for the three energies. Note this is the only study where transmission profiles are compared with measurements for three different energies. Conclusion: The new MLC model reproduces transmission measurements within 0.1%. The next step is to implement the MLC model for real plans and quantify the improvement in dose calculation accuracy gained using this model for IMRT plans with high modulation factors.
Depleted uranium plasma reduction system study
International Nuclear Information System (INIS)
Rekemeyer, P.; Feizollahi, F.; Quapp, W.J.; Brown, B.W.
1994-12-01
A system life-cycle cost study was conducted of a preliminary design concept for a plasma reduction process for converting depleted uranium to uranium metal and anhydrous HF. The plasma-based process is expected to offer significant economic and environmental advantages over present technology. Depleted uranium is currently stored in the form of solid UF6, of which approximately 575,000 metric tons is stored at three locations in the U.S. The proposed system is preconceptual in nature, but includes all necessary processing equipment and facilities to perform the process. The study has identified a total processing cost of approximately $3.00/kg of UF6 processed. Based on the results of this study, the development of a laboratory-scale system (1 kg/h throughput of UF6) is warranted. Further scaling of the process to pilot scale will be determined after laboratory testing is complete.
Sui, Xiaonan; Zhou, Weibiao
2014-04-01
The non-isothermal degradation of two cyanidin-based anthocyanins, cyanidin-3-glucoside and cyanidin-3-rutinoside, was investigated in an aqueous system within the temperature range from 100 to 165 °C, and the degradation kinetics was modelled using Monte Carlo simulation. The two anthocyanins showed different stability, with cyanidin-3-glucoside exhibiting a higher degradation rate than cyanidin-3-rutinoside. The derived degradation rate constants at the reference temperature of 132.5 °C were 0.0047 and 0.0023 s⁻¹, with activation energies of 87 and 104 kJ/mol, for cyanidin-3-glucoside and cyanidin-3-rutinoside, respectively. The antioxidant capacity of the thermally processed anthocyanin solutions was measured using ABTS and DPPH assays. Results showed that the antioxidant capacity of the samples remained at the same level during the thermal treatment process under various conditions, i.e., there was no significant difference (P>0.05) in antioxidant capacity amongst the samples despite their significantly different contents of the two anthocyanins. Copyright © 2013 Elsevier Ltd. All rights reserved.
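The reported first-order kinetics can be turned into a small worked example: anchoring an Arrhenius model at the reference temperature with the abstract's values for cyanidin-3-glucoside (0.0047 s⁻¹ at 132.5 °C, Ea = 87 kJ/mol) and propagating parameter uncertainty by Monte Carlo. The uncertainty magnitudes below are illustrative assumptions, not values from the paper:

```python
import numpy as np

R = 8.314                    # gas constant, J/(mol*K)
T_REF = 132.5 + 273.15       # reference temperature from the abstract, K

def retained_fraction(temp_c, minutes, n=50_000, seed=3):
    """Monte Carlo propagation of parameter uncertainty through first-order
    Arrhenius kinetics, C/C0 = exp(-k*t), anchored at the abstract's values
    for cyanidin-3-glucoside (k_ref = 0.0047 1/s, Ea = 87 kJ/mol).
    The uncertainty magnitudes are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    k_ref = rng.normal(0.0047, 0.0002, n)   # assumed spread on rate constant
    ea = rng.normal(87e3, 3e3, n)           # assumed spread on activation energy
    T = temp_c + 273.15
    k = k_ref * np.exp(-ea / R * (1.0 / T - 1.0 / T_REF))
    frac = np.exp(-k * minutes * 60.0)
    return frac.mean(), np.percentile(frac, [2.5, 97.5])

mean, ci = retained_fraction(100.0, 10.0)   # fraction left after 10 min at 100 C
print(round(mean, 2), ci.round(2))
```

Sampling the kinetic parameters and pushing each draw through the rate law is the essential Monte Carlo step; the output is a distribution of retained fractions rather than a single point estimate.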
Directory of Open Access Journals (Sweden)
Chalinee Thanasupsombat
2018-01-01
The quality of images obtained from cone-beam computed tomography (CBCT) is important in diagnosis and treatment planning for dental and maxillofacial applications. However, X-ray scattering inside a human head is one of the main factors that degrade image quality, especially in CBCT systems with a wide-angle cone-beam X-ray source and a large-area detector. In this study, the X-ray scattering distribution within a standard head phantom was estimated using the Monte Carlo method based on Geant4. Because the low-frequency scattering signals vary little, the scattering signals from the head phantom can serve as simple predetermined scattering signals for a patient's head and be subtracted from the projection data for scatter reduction. The results showed higher contrast and fewer cupping artifacts in the reconstructed images of the head phantom and real patients. Furthermore, the same simulated scattering signals can also be applied to higher-resolution projection data.
Sahu, Nityananda; Gadre, Shridhar R; Rakshit, Avijit; Bandyopadhyay, Pradipta; Miliordos, Evangelos; Xantheas, Sotiris S
2014-10-28
We report new global minimum candidate structures for the (H2O)25 cluster that are lower in energy than the ones reported previously and correspond to hydrogen bonded networks with 42 hydrogen bonds and an interior, fully coordinated water molecule. These were obtained as a result of a hierarchical approach based on initial Monte Carlo Temperature Basin Paving sampling of the cluster's Potential Energy Surface with the Effective Fragment Potential, subsequent geometry optimization using the Molecular Tailoring Approach with the fragments treated at the second order Møller-Plesset (MP2) perturbation (MTA-MP2) and final refinement of the entire cluster at the MP2 level of theory. The MTA-MP2 optimized cluster geometries, constructed from the fragments, were found to be within <0.5 kcal/mol from the minimum geometries obtained from the MP2 optimization of the entire (H2O)25 cluster. In addition, the grafting of the MTA-MP2 energies yields electronic energies that are within <0.3 kcal/mol from the MP2 energies of the entire cluster while preserving their energy rank order. Finally, the MTA-MP2 approach was found to reproduce the MP2 harmonic vibrational frequencies, constructed from the fragments, quite accurately when compared to the MP2 ones of the entire cluster in both the HOH bending and the OH stretching regions of the spectra.
International Nuclear Information System (INIS)
Vincent, E.; Domain, C.; Vincent, E.; Becquart, C.S.
2008-01-01
Full text of publication follows. The embrittlement and hardening of pressure vessel steels under irradiation has been correlated with the presence of solutes such as Cu, Ni, Mn and Si. Indeed, it has been observed that under irradiation these solutes tend to gather into more or less dilute clusters. The interactions of these solutes with radiation-induced point defects thus need to be characterised properly in order to understand the elementary mechanisms behind the formation of these clusters. Ab initio calculations based on density functional theory have been performed to determine the interactions of point defects (vacancies as well as interstitials) with solute atoms in dilute FeX alloys (X = Cu, Mn, Ni or Si) in order to build a database used to parameterize an atomic kinetic Monte Carlo model. The model has been applied to simulate thermal ageing as well as irradiation conditions in dilute Fe-CuNiMnSi alloys. Results obtained with this model will be presented. (authors)
Directory of Open Access Journals (Sweden)
Boris Fomin
2012-10-01
Full Text Available This paper presents a new version of a radiative transfer model called the Fast Line-by-Line Model (FLBLM), which is based on the Line-by-Line (LbL) and Monte Carlo (MC) methods and rigorously treats particulate and molecular scattering alongside absorption. The advantage of this model lies in its use of the line-by-line approach, which allows high-resolution spectra to be computed quite quickly. We have extended the model to take into account the polarization state of light and carried out some validations by comparison against benchmark results. FLBLM calculates the Stokes parameter spectra of shortwave radiation in vertically inhomogeneous atmospheres. This update makes the model applicable to the assessment of cloud and aerosol influence on radiances as measured by SW high-resolution polarization spectrometers. In sample results we demonstrate that the high-resolution spectra of the Stokes parameters contain more detailed information about clouds and aerosols than medium- and low-resolution spectra in which lines are not resolved. The presented model is fast enough for many practical applications (e.g., validations) and may be useful especially for remote sensing. FLBLM is suitable for the development of reliable techniques for retrieving optical and microphysical properties of clouds and aerosols from high-resolution satellite data.
Shao, Dongguo; Yang, Haidong; Xiao, Yi; Liu, Biyu
2014-01-01
A new method is proposed based on the finite difference method (FDM), differential evolution algorithm and Markov Chain Monte Carlo (MCMC) simulation to identify water quality model parameters of an open channel in a long distance water transfer project. Firstly, this parameter identification problem is considered as a Bayesian estimation problem and the forward numerical model is solved by FDM, and the posterior probability density function of the parameters is deduced. Then these parameters are estimated using a sampling method with differential evolution algorithm and MCMC simulation. Finally this proposed method is compared with FDM-MCMC by a twin experiment. The results show that the proposed method can be used to identify water quality model parameters of an open channel in a long distance water transfer project under different scenarios better with fewer iterations, higher reliability and anti-noise capability compared with FDM-MCMC. Therefore, it provides a new idea and method to solve the traceability problem in sudden water pollution accidents.
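The Bayesian sampling step at the heart of such a scheme can be illustrated with a toy example (a sketch only, not the authors' FDM-based water quality model): a random-walk Metropolis sampler recovering the rate parameter of a hypothetical first-order pollutant-decay model from noisy synthetic data. All names and numerical values here are illustrative assumptions.

```python
import math
import random

def metropolis(loglike, x0, n_steps=5000, step=0.05, seed=2):
    """Random-walk Metropolis sampler for a scalar parameter."""
    rng = random.Random(seed)
    x, lp = x0, loglike(x0)
    samples = []
    for _ in range(n_steps):
        xp = x + rng.gauss(0.0, step)
        lpp = loglike(xp)
        # accept with probability min(1, exp(lpp - lp))
        if math.log(rng.random() + 1e-300) < lpp - lp:
            x, lp = xp, lpp
        samples.append(x)
    return samples

# Hypothetical forward model: first-order decay C(t) = C0 * exp(-k t)
true_k, C0, sigma = 0.3, 10.0, 0.2
data_rng = random.Random(0)
data = [(t, C0 * math.exp(-true_k * t) + data_rng.gauss(0.0, sigma))
        for t in range(10)]

def loglike(k):
    if k <= 0.0:                      # flat prior restricted to k > 0
        return -math.inf
    return -sum((c - C0 * math.exp(-k * t)) ** 2
                for t, c in data) / (2.0 * sigma ** 2)

samples = metropolis(loglike, x0=0.5)
post_mean = sum(samples[1000:]) / len(samples[1000:])  # discard burn-in
```

The posterior mean should land close to the true rate 0.3; in the full method the differential evolution step would supply good starting points and proposals for such a chain.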
Kadoura, Ahmad Salim
2013-06-01
In this work, a method to estimate solid elemental sulfur solubility in pure and gas mixtures using Monte Carlo (MC) molecular simulation is proposed. This method is based on Isobaric-Isothermal (NPT) ensemble and the Widom insertion technique for the gas phase and a continuum model for the solid phase. This method avoids the difficulty of having to deal with high rejection rates that are usually encountered when simulating using Gibbs ensemble. The application of this method is tested with a system made of pure hydrogen sulfide gas (H2S) and solid elemental sulfur. However, this technique may be used for other solid-vapor systems provided the fugacity of the solid phase is known (e.g., through experimental work). Given solid fugacity at the desired pressure and temperature, the mole fraction of the solid dissolved in gas that would be in chemical equilibrium with the solid phase might be obtained. In other words a set of MC molecular simulation experiments is conducted on a single box given the pressure and temperature and for different mole fractions of the solute. The fugacity of the gas mixture is determined using the Widom insertion method and is compared with that predetermined for the solid phase until one finds the mole fraction which achieves the required fugacity. In this work, several examples of MC have been conducted and compared with experimental data. The Lennard-Jones parameters related to the sulfur molecule model (ɛ, σ) have been optimized to achieve better match with the experimental work.
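The Widom insertion step can be sketched in a few lines (a toy illustration, not the authors' simulation setup): the excess chemical potential of a dilute Lennard-Jones gas is estimated as mu_ex = -kT ln⟨exp(-dU/kT)⟩ over random test-particle insertions. Using ideal-gas snapshots instead of properly sampled NPT configurations is an extra simplifying assumption, defensible only at low density.

```python
import math
import random

def lj(r2):
    """Lennard-Jones pair energy (epsilon = sigma = 1) from squared distance."""
    inv6 = 1.0 / r2 ** 3
    return 4.0 * (inv6 * inv6 - inv6)

def widom_mu_excess(n_part=20, box=10.0, kT=2.0, n_insert=20000, seed=3):
    """Estimate the excess chemical potential by Widom test-particle
    insertion: mu_ex = -kT * ln < exp(-dU / kT) >. Configurations are
    drawn as ideal-gas snapshots (low-density toy approximation)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_insert):
        coords = [(rng.random() * box, rng.random() * box, rng.random() * box)
                  for _ in range(n_part)]
        tx, ty, tz = rng.random() * box, rng.random() * box, rng.random() * box
        dU = 0.0
        for (x, y, z) in coords:
            # minimum-image convention for the periodic box
            dx = (tx - x) - box * round((tx - x) / box)
            dy = (ty - y) - box * round((ty - y) / box)
            dz = (tz - z) - box * round((tz - z) / box)
            r2 = dx * dx + dy * dy + dz * dz
            dU += lj(max(r2, 1e-12))
        acc += math.exp(-dU / kT)
    return -kT * math.log(acc / n_insert)

mu_ex = widom_mu_excess()   # small in magnitude at this low density
```

In the paper's method, the gas-phase fugacity obtained this way is iterated against the known solid-phase fugacity until the equilibrium solute mole fraction is found.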
Successful vectorization - reactor physics Monte Carlo code
International Nuclear Information System (INIS)
Martin, W.R.
1989-01-01
Most particle transport Monte Carlo codes in use today are based on the "history-based" algorithm, wherein one particle history at a time is simulated. Unfortunately, the history-based approach (present in all Monte Carlo codes until recent years) is inherently scalar and cannot be vectorized. In particular, the history-based algorithm cannot take advantage of vector architectures, which characterize the largest and fastest computers of the current time, vector supercomputers such as the Cray X/MP or IBM 3090/600. However, substantial progress has been made in recent years in developing and implementing a vectorized Monte Carlo algorithm. This algorithm follows portions of many particle histories at the same time and forms the basis for all successful vectorized Monte Carlo codes in use today. This paper describes the basic vectorized algorithm along with descriptions of several variations that have been developed by different researchers for specific applications. These applications have been mainly in the areas of neutron transport in nuclear reactor and shielding analysis and photon transport in fusion plasmas. The relative merits of the various approaches will be discussed and the present status of known vectorization efforts will be summarized along with available timing results, including results from the successful vectorization of 3-D general geometry, continuous energy Monte Carlo. (orig.)
Variational Monte Carlo Technique
Indian Academy of Sciences (India)
ias
In quantum mechanics, MC methods are used to simulate many-particle systems using random sampling. See D Ceperley, G V Chester and M H Kalos, Monte Carlo simulation of a many-fermion study, Physical Review.
Indian Academy of Sciences (India)
Markov Chain Monte Carlo - Examples. Arnab Chakraborty. General Article, Resonance – Journal of Science Education, Volume 7, Issue 3, March 2002, pp. 25-34. Permanent link: https://www.ias.ac.in/article/fulltext/reso/007/03/0025-0034
Leonardo Rossi
Carlo Caso (1940 - 2007) Our friend and colleague Carlo Caso passed away on July 7th, after several months of courageous fight against cancer. Carlo spent most of his scientific career at CERN, taking an active part in the experimental programme of the laboratory. His long and fruitful involvement in particle physics started in the sixties, in the Genoa group led by G. Tomasini. He then made several experiments using the CERN liquid hydrogen bubble chambers -first the 2000HBC and later BEBC- to study various facets of the production and decay of meson and baryon resonances. He later made his own group and joined the NA27 Collaboration to exploit the EHS Spectrometer with a rapid cycling bubble chamber as vertex detector. Amongst their many achievements, they were the first to measure, with excellent precision, the lifetime of the charmed D mesons. At the start of the LEP era, Carlo and his group moved to the DELPHI experiment, participating in the construction and running of the HPC electromagnetic c...
International Nuclear Information System (INIS)
Jin, L; Eldib, A; Li, J; Price, R; Ma, C
2015-01-01
Purpose: Uneven nose surfaces, air cavities underneath, and the use of bolus present complexity and dose uncertainty when using a single electron energy beam to plan treatments of nose skin with a pencil beam-based planning system. This work demonstrates more accurate dose calculation and more optimal planning using energy- and intensity-modulated electron radiotherapy (MERT) delivered with a pMLC. Methods: An in-house developed Monte Carlo (MC)-based dose calculation/optimization planning system was employed for treatment planning. Phase space data (6, 9, 12 and 15 MeV) were used as an input source for MC dose calculations for the linac. To reduce the scatter-caused penumbra, a short SSD (61 cm) was used. Our previous work demonstrated good agreement in percentage depth dose and off-axis dose between calculations and film measurements for various field sizes. A MERT plan was generated for treating the nose skin using a patient geometry, and a dose volume histogram (DVH) was obtained. The work also shows a comparison of 2D dose distributions between a clinically used conventional single-electron-energy plan and the MERT plan. Results: The MERT plan resulted in improved target dose coverage as compared to the conventional plan, which demonstrated a target dose deficit at the field edge. The conventional plan showed higher dose to normal tissue underneath the nose skin, while the MERT plan resulted in improved conformity and thus reduced normal tissue dose. Conclusion: This preliminary work illustrates that MC-based MERT planning is a promising technique for treating nose skin, not only providing more accurate dose calculation, but also offering improved target dose coverage and conformity. In addition, this technique may eliminate the necessity of bolus, which often produces dose delivery uncertainty due to the air gaps that may exist between the bolus and the skin.
Energy Technology Data Exchange (ETDEWEB)
Guberina, Nika; Suntharalingam, Saravanabavaan; Nassenstein, Kai; Forsting, Michael; Theysohn, Jens; Wetter, Axel; Ringelstein, Adrian [University Hospital Essen, Institute of Diagnostic and Interventional Radiology and Neuroradiology, Essen (Germany)
2016-10-15
The aim of this study was to verify the results of a dose monitoring software tool based on Monte Carlo Simulation (MCS) in the assessment of eye lens doses for cranial CT scans. In cooperation with the Federal Office for Radiation Protection (Neuherberg, Germany), phantom measurements were performed with thermoluminescence dosimeters (TLD LiF:Mg,Ti) using cranial CT protocols: (I) CT angiography; (II) unenhanced cranial CT scans with gantry angulation at a single source CT scanner; and (III) unenhanced cranial CT scans without gantry angulation at a dual source CT scanner. Eye lens doses calculated by the dose monitoring tool based on MCS and assessed with TLDs were compared. Eye lens doses are summarized as follows: (I) CT angiography (a) MCS 7 mSv, (b) TLD 5 mSv; (II) unenhanced cranial CT scan with gantry angulation, (c) MCS 45 mSv, (d) TLD 5 mSv; (III) unenhanced cranial CT scan without gantry angulation (e) MCS 38 mSv, (f) TLD 35 mSv. Intermodality comparison shows an inaccurate calculation of eye lens doses in unenhanced cranial CT protocols at the single source CT scanner due to the disregard of gantry angulation. On the contrary, the dose monitoring tool showed an accurate calculation of eye lens doses at the dual source CT scanner without gantry angulation and for CT angiography examinations. The dose monitoring software tool based on MCS gave accurate estimates of eye lens doses in cranial CT protocols. However, knowledge of protocol- and software-specific influences is crucial for correct assessment of eye lens doses in routine clinical use. (orig.)
Usmani, Muhammad Nauman; Takegawa, Hideki; Takashina, Masaaki; Numasaki, Hodaka; Suga, Masaki; Anetai, Yusuke; Kurosu, Keita; Koizumi, Masahiko; Teshima, Teruki
2014-11-01
Technical developments in radiotherapy (RT) have created a need for systematic quality assurance (QA) to ensure that clinical institutions deliver prescribed radiation doses consistent with the requirements of clinical protocols. For QA, an ideal dose verification system should be independent of the treatment-planning system (TPS). This paper describes the development and reproducibility evaluation of a Monte Carlo (MC)-based standard LINAC model as a preliminary requirement for independent verification of dose distributions. The BEAMnrc MC code is used for characterization of the 6-, 10- and 15-MV photon beams for a wide range of field sizes. The modeling of the LINAC head components is based on the specifications provided by the manufacturer. MC dose distributions are tuned to match Varian Golden Beam Data (GBD). For the reproducibility evaluation, calculated beam data were compared with beam data measured at individual institutions. For all energies and field sizes, the MC and GBD agreed to within 1.0% for percentage depth doses (PDDs), 1.5% for beam profiles and 1.2% for total scatter factors (Scp). The reproducibility evaluation showed that the maximum average local differences were 1.3% and 2.5% for PDDs and beam profiles, respectively. MC and the institutions' mean Scp values agreed to within 2.0%. An MC-based standard LINAC model developed to independently verify dose distributions for QA of multi-institutional clinical trials and routine clinical practice has proven to be highly accurate and reproducible, and can thus help ensure that prescribed doses delivered are consistent with the requirements of clinical protocols. © The Author 2014. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.
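The reproducibility checks described above amount to comparing point-wise local percentage differences against a tolerance. A minimal sketch of that comparison logic, with hypothetical dose values rather than the actual Golden Beam Data:

```python
def local_percent_diff(calc, ref):
    """Point-wise local percentage differences between a calculated and a
    reference dose array (reference values must be nonzero)."""
    return [100.0 * abs(c - r) / r for c, r in zip(calc, ref)]

def passes_tolerance(calc, ref, tol_percent):
    """True if every local difference is within the stated tolerance."""
    return max(local_percent_diff(calc, ref)) <= tol_percent

# Hypothetical PDD samples (percent dose vs. depth), for illustration only
ref_pdd = [100.0, 98.2, 95.1, 90.4, 84.9]
calc_pdd = [100.3, 98.0, 95.6, 90.1, 85.2]
ok = passes_tolerance(calc_pdd, ref_pdd, tol_percent=1.0)  # 1.0% PDD criterion
```

The same pattern applies with the looser criteria quoted for beam profiles (1.5%) and total scatter factors (1.2%).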
Monte Carlo Techniques for Nuclear Systems - Theory Lectures
Energy Technology Data Exchange (ETDEWEB)
Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Methods, Codes, and Applications Group; Univ. of New Mexico, Albuquerque, NM (United States). Nuclear Engineering Dept.
2016-11-29
These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations
Monte Carlo-based validation of the ENDF/MC2-II/SDX cell homogenization path
International Nuclear Information System (INIS)
Wade, D.C.
1979-04-01
The results are presented of a program of validation of the unit cell homogenization prescriptions and codes used for the analysis of Zero Power Reactor (ZPR) fast breeder reactor critical experiments. The ZPR drawer loading patterns comprise both plate-type and pin-calandria-type unit cells. A prescription is used to convert the three-dimensional physical geometry of the drawer loadings into one-dimensional calculational models. The ETOE-II/MC2-II/SDX code sequence is used to transform ENDF/B basic nuclear data into unit cell average broad group cross sections based on the 1D models. Cell average, broad group anisotropic diffusion coefficients are generated using the methods of Benoist or of Gelbard. The resulting broad (approx. 10 to 30) group parameters are used in multigroup diffusion and Sn transport calculations of full core XY or RZ models which employ smeared atom densities to represent the contents of the unit cells.
Discrete diffusion Monte Carlo for frequency-dependent radiative transfer
Energy Technology Data Exchange (ETDEWEB)
Densmore, Jeffrey D [Los Alamos National Laboratory]; Thompson, Kelly G [Los Alamos National Laboratory]; Urbatsch, Todd J [Los Alamos National Laboratory]
2010-11-17
Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations. In this paper, we develop an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold. Above this threshold we employ standard Monte Carlo. With a frequency-dependent test problem, we confirm the increased efficiency of our new DDMC technique.
Chang, Shu-Jun; Hsu, Jui-Ting; Hung, Shih-Yen; Liu, Yan-Lin; Jiang, Shiang-Huei; Wu, Jay
2017-05-01
Reference phantoms are widely applied to evaluate the radiation dose for external exposure. However, the frequently used reference phantoms are based on Caucasians. Dose estimation for Asians using a Caucasian phantom can result in significant errors. This study recruited 40 volunteers whose body sizes are close to the average Taiwanese population. Magnetic resonance imaging was performed to obtain the organ volumes for construction of the Taiwanese reference man (TRM) and Taiwanese reference woman (TRW). The dose conversion coefficients (DCC) resulting from photon beams in anterior-posterior, posterior-anterior, right-lateral, left-lateral, and isotropic irradiation geometries were estimated. In the anterior-posterior geometry, the mean DCC differences among organs between the TRM and ORNL phantom at 0.1, 1, and 10 MeV were 7.3%, 5.8%, and 5.2%, respectively. For the TRW, the mean differences from the ORNL phantom at the three energies were 10.6%, 7.4%, and 8.3%. The DCCs of the Taiwanese reference phantoms and the ORNL phantom presented similar trends in the other geometries. The torso size of the phantom and the mass and geometric location of the organ have a significant influence on the DCC. The Taiwanese reference phantoms can be used to establish dose guidelines and regulations for radiation protection from external exposure.
Adinehvand, Karim; Rahatabad, Fereidoun Nowshiravan
2018-06-01
Calculation of 3D dose distributions during radiotherapy and nuclear medicine helps with better treatment planning for sensitive organs such as the ovaries and uterus. In this research, we investigate two groups of normoxic polymer gel dosimeters, based on methacrylic acid (MAGIC and MAGICAUG) and polyacrylamide (PAGATUG and PAGATAUG), for brachytherapy, nuclear medicine and tele-therapy applications. These polymer gel dosimeters are compared with soft tissue when irradiated by photons of different energies in therapeutic applications. The comparison was simulated with the Monte Carlo-based MCNPX code. The female ORNL phantom was used to model the critical organs: kidneys, ovaries and uterus. The right kidney was taken as the source of irradiation, and the other two organs were exposed to this irradiation. The effective atomic numbers of soft tissue, MAGIC, MAGICAUG, PAGATUG and PAGATAUG are 6.86, 7.07, 6.95, 7.28 and 7.07, respectively. Results show that the polymer gel dosimeters are comparable to soft tissue for use in nuclear medicine and tele-therapy. The differences in dose response between the gel dosimeters and soft tissue are less than 4.1%, 22.6% and 71.9% for tele-therapy, nuclear medicine and brachytherapy, respectively. The results confirmed that gel dosimeters are a good choice for the ovaries and uterus in nuclear medicine and tele-therapy. Despite the slight difference between the effective atomic numbers of these polymer gels and soft tissue, the gels are not suitable for brachytherapy, since photon interactions at low brachytherapy energies depend strongly on atomic number: the photoelectric cross section is highly sensitive to atomic number, whereas the Compton cross section is much less so. Therefore, polymer gel dosimeters are not a good soft tissue substitute in brachytherapy. Copyright © 2018 Elsevier B.V. All rights reserved.
Directory of Open Access Journals (Sweden)
Siegal Mark L
2012-02-01
Full Text Available Abstract Hsp90 reveals phenotypic variation in the laboratory, but is Hsp90 depletion important in the wild? Recent work from Chen and Wagner in BMC Evolutionary Biology has discovered a naturally occurring Drosophila allele that downregulates Hsp90, creating sensitivity to cryptic genetic variation. Laboratory studies suggest that the exact magnitude of Hsp90 downregulation is important. Extreme Hsp90 depletion might reactivate transposable elements and/or induce aneuploidy, in addition to revealing cryptic genetic variation. See research article http://www.biomedcentral.com/1471-2148/12/25
Improvements in EBR-2 core depletion calculations
International Nuclear Information System (INIS)
Finck, P.J.; Hill, R.N.; Sakamoto, S.
1991-01-01
The need for accurate core depletion calculations in Experimental Breeder Reactor No. 2 (EBR-2) is discussed. Because of the unique physics characteristics of EBR-2, it is difficult to obtain accurate and computationally efficient multigroup flux predictions. This paper describes the effect of various conventional and higher order schemes for group constant generation and for flux computations; results indicate that higher-order methods are required, particularly in the outer regions (i.e. the radial blanket). A methodology based on Nodal Equivalence Theory (N.E.T.) is developed which allows retention of the accuracy of a higher order solution with the computational efficiency of a few group nodal diffusion solution. The application of this methodology to three-dimensional EBR-2 flux predictions is demonstrated; this improved methodology allows accurate core depletion calculations at reasonable cost. 13 refs., 4 figs., 3 tabs
Health and environmental impact of depleted uranium
International Nuclear Information System (INIS)
Furitsu, Katsumi
2010-01-01
Depleted uranium (DU) is 'nuclear waste' produced from the enrichment process; it is mostly made up of 238U and is depleted in the fissionable isotope 235U compared to natural uranium (NU). Depleted uranium has about 60% of the radioactivity of natural uranium. Depleted uranium and natural uranium are identical in terms of chemical toxicity. Uranium's high density gives depleted uranium shells increased range and penetrative power. This density, combined with uranium's pyrophoric nature, results in a high-energy kinetic weapon that can punch and burn through armour plating. Striking a hard target, depleted uranium munitions create extremely high temperatures. The uranium immediately burns and vaporizes into an aerosol, which is easily diffused in the environment. People can inhale the micro-particles of uranium oxide in such an aerosol and absorb them mainly through the lungs. Depleted uranium has both radiological and chemical toxicity, and a possible synergistic effect of the two kinds of toxicity has also been pointed out. Animal and cellular studies have reported carcinogenic, neurotoxic, immunotoxic and other effects of depleted uranium, including damage to the reproductive system and foetus. In addition, health effects of micro/nano-particles similar in size to the depleted uranium aerosols produced by uranium weapons have been reported. Aerosolized DU dust can easily spread beyond the battlefield into civilian areas, sometimes even crossing international borders. Therefore, not only military personnel but also civilians can be exposed. The contamination continues after the cessation of hostilities. Taking these aspects into account, DU weapons are illegal under international humanitarian law and are considered among the inhumane weapons of 'indiscriminate destruction'. The international community is now discussing the prohibition of DU weapons based on the 'precautionary principle'. The 1991 Gulf War is reportedly the first
Kalos, Melvin H
2008-01-01
This introduction to Monte Carlo methods seeks to identify and study the unifying elements that underlie their effective application. Initial chapters provide a short treatment of the probability and statistics needed as background, enabling those without experience in Monte Carlo techniques to apply these ideas to their research.The book focuses on two basic themes: The first is the importance of random walks as they occur both in natural stochastic systems and in their relationship to integral and differential equations. The second theme is that of variance reduction in general and importance sampling in particular as a technique for efficient use of the methods. Random walks are introduced with an elementary example in which the modeling of radiation transport arises directly from a schematic probabilistic description of the interaction of radiation with matter. Building on this example, the relationship between random walks and integral equations is outlined
Directory of Open Access Journals (Sweden)
Pedro Medina Avendaño
1981-01-01
Full Text Available Carlos Vega Duarte had the simplicity of elemental and pure beings. His heart was as clean as alluvial gold. His direct, colloquial manner revealed a Santanderean without contamination, who loved the gleam of weapons and was dazzled by the sparkle of perfect phrases.
Energy Technology Data Exchange (ETDEWEB)
Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-16
This is a powerpoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
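The π example mentioned in the outline is the classic first Monte Carlo exercise; a minimal version:

```python
import random

def estimate_pi(n_samples, seed=0):
    """Estimate pi by sampling points uniformly in the unit square and
    counting the fraction falling inside the quarter circle x^2 + y^2 <= 1."""
    rng = random.Random(seed)
    inside = sum(1 for _ in range(n_samples)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / n_samples

pi_hat = estimate_pi(100_000)   # close to 3.14
```

The Law of Large Numbers guarantees convergence to π, and the Central Limit Theorem gives the characteristic 1/sqrt(N) shrinkage of the statistical error, exactly the two results the lecture covers.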
International Nuclear Information System (INIS)
Creutz, M.
1986-01-01
The author discusses a recently developed algorithm for simulating statistical systems. The procedure interpolates between molecular dynamics methods and canonical Monte Carlo. The primary advantages are extremely fast simulations of discrete systems such as the Ising model and a relative insensitivity to random number quality. A variation of the algorithm gives rise to a deterministic dynamics for Ising spins. This model may be useful for high speed simulation of non-equilibrium phenomena
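The algorithm described is Creutz's microcanonical "demon" method; a compact sketch for the 2-D Ising model (lattice size and initial demon energy are illustrative choices):

```python
def creutz_demon_ising(L=20, sweeps=200, demon_energy=16):
    """Microcanonical 'demon' simulation of the 2-D Ising model (J = 1).
    A demon with non-negative energy sweeps the lattice; a spin flip is
    accepted whenever the demon can absorb or supply the energy change,
    so the total (system + demon) energy is exactly conserved and no
    random numbers are needed for the accept/reject decision."""
    spins = [[1] * L for _ in range(L)]
    Ed = demon_energy
    trace = []
    for _ in range(sweeps):
        for i in range(L):
            for j in range(L):
                nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j] +
                      spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
                dE = 2 * spins[i][j] * nn   # cost of flipping spin (i, j)
                if Ed - dE >= 0:            # demon pays (or absorbs) dE
                    spins[i][j] *= -1
                    Ed -= dE
                trace.append(Ed)
    mean_Ed = sum(trace) / len(trace)
    return spins, Ed, mean_Ed
```

Because the demon's energy is exponentially distributed, its average provides a temperature estimate; for the 2-D Ising model, where energy changes come in quanta of 4, kT ≈ 4/ln(1 + 4/⟨Ed⟩). This illustrates both advantages mentioned in the abstract: fast discrete updates and insensitivity to random number quality.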
Schmier, Jordana K; Lau, Edmund C; Patel, Jasmine D; Klenk, Juergen A; Greenspon, Arnold J
2017-11-01
The effects of device and patient characteristics on health and economic outcomes in patients with cardiac implantable electronic devices (CIEDs) are unclear. Modeling can estimate costs and outcomes for patients with CIEDs under a variety of scenarios, varying battery longevity, comorbidities, and care settings. The objective of this analysis was to compare changes in patient outcomes and payer costs attributable to increases in battery life of implantable cardiac defibrillators (ICDs) and cardiac resynchronization therapy defibrillators (CRT-D). We developed a Monte Carlo Markov model simulation to follow patients through primary implant, postoperative maintenance, generator replacement, and revision states. Patients were simulated in 3-month increments for 15 years or until death. Key variables included Charlson Comorbidity Index, CIED type, legacy versus extended battery longevity, mortality rates (procedure and all-cause), infection and non-infectious complication rates, and care settings. Costs included procedure-related (facility and professional), maintenance, and infections and non-infectious complications, all derived from Medicare data (2004-2014, 5% sample). Outcomes included counts of battery replacements, revisions, infections and non-infectious complications, and discounted (3%) costs and life years. An increase in battery longevity in ICDs yielded reductions in numbers of revisions (by 23%), battery changes (by 44%), infections (by 23%), non-infectious complications (by 10%), and total costs per patient (by 9%). Analogous reductions for CRT-Ds were 23% (revisions), 32% (battery changes), 22% (infections), 8% (complications), and 10% (costs). Based on modeling results, as battery longevity increases, patients experience fewer adverse outcomes and healthcare costs are reduced. Understanding the magnitude of the cost benefit of extended battery life can inform budgeting and planning decisions by healthcare providers and insurers.
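The state-transition simulation described above can be sketched as a small Monte Carlo cohort model. All transition probabilities below are hypothetical placeholders, not the Medicare-derived rates used in the study; the sketch only reproduces the structural point that longer battery life means fewer generator replacements.

```python
import random

def simulate_cohort(n_patients=10_000, years=15, battery_life_years=8.0,
                    p_death_per_cycle=0.01, seed=5):
    """Monte Carlo Markov cohort sketch: each patient moves through
    3-month maintenance cycles, with per-cycle chances of death and of
    generator (battery) replacement. Returns mean replacements/patient."""
    rng = random.Random(seed)
    cycles = years * 4                            # 3-month increments
    p_replace = 1.0 / (battery_life_years * 4.0)  # ~1 replacement per battery life
    replacements = 0
    for _ in range(n_patients):
        for _ in range(cycles):
            if rng.random() < p_death_per_cycle:
                break                             # death state is absorbing
            if rng.random() < p_replace:
                replacements += 1
    return replacements / n_patients

short_batt = simulate_cohort(battery_life_years=6.0)
long_batt = simulate_cohort(battery_life_years=10.0)
# extended battery longevity should yield fewer replacements per patient
```

The full model layers costs, comorbidity-dependent mortality, infections and care settings on top of this same cycle structure.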
International Nuclear Information System (INIS)
Setiani, Tia Dwi; Suprijadi; Haryanto, Freddy
2016-01-01
Monte Carlo (MC) is one of the most powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the codes based on the MC algorithm that is widely used for radiographic image simulation is MC-GPU, a code developed by Andreu Badal. This study aimed to investigate the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of radiographic images and a comparison of image quality resulting from simulation on the GPU and CPU are evaluated in this paper. The simulations were run on a CPU in serial condition and on two GPUs with 384 cores and 2304 cores. In the GPU simulations, each core calculates one photon, so a large number of photons are calculated simultaneously. Results show that simulation times on the GPU were significantly shorter than on the CPU. Simulations on the 2304-core GPU were performed about 64 to 114 times faster than on the CPU, while simulations on the 384-core GPU were performed about 20 to 31 times faster than on a single CPU core. Another result shows that optimum image quality was obtained with at least 10^8 photon histories and energies from 60 keV to 90 keV. Analyzed by a statistical approach, the quality of the GPU and CPU images is essentially the same.
Cros, Maria; Joemai, Raoul M. S.; Geleijns, Jacob; Molina, Diego; Salvadó, Marçal
2017-08-01
This study aims to develop and test software for assessing and reporting doses for standard patients undergoing computed tomography (CT) examinations in a 320 detector-row cone-beam scanner. The software, called SimDoseCT, is based on the Monte Carlo (MC) simulation code, which was developed to calculate organ doses and effective doses in ICRP anthropomorphic adult reference computational phantoms for acquisitions with the Aquilion ONE CT scanner (Toshiba). MC simulation was validated by comparing CTDI measurements within standard CT dose phantoms with results from simulation under the same conditions. SimDoseCT consists of a graphical user interface connected to a MySQL database, which contains the look-up-tables that were generated with MC simulations for volumetric acquisitions at different scan positions along the phantom using any tube voltage, bow tie filter, focal spot and nine different beam widths. Two different methods were developed to estimate organ doses and effective doses from acquisitions using other available beam widths in the scanner. A correction factor was used to estimate doses in helical acquisitions. Hence, the user can select any available protocol in the Aquilion ONE scanner for a standard adult male or female and obtain the dose results through the software interface. Agreement within 9% between CTDI measurements and simulations allowed the validation of the MC program. Additionally, the algorithm for dose reporting in SimDoseCT was validated by comparing dose results from this tool with those obtained from MC simulations for three volumetric acquisitions (head, thorax and abdomen). The comparison was repeated using eight different collimations and also for another collimation in a helical abdomen examination. The results showed differences of 0.1 mSv or less for absolute dose in most organs and also in the effective dose calculation. The software provides a suitable tool for dose assessment in standard adult patients undergoing CT examinations.
Impact of mineral resource depletion
CSIR Research Space (South Africa)
Brent, AC
2006-09-01
In a letter to the editor, the authors comment on BA Steen's article "Abiotic Resource Depletion: different perceptions of the problem with mineral deposits", published in the special issue of the International Journal of Life Cycle Assessment...
Energy Technology Data Exchange (ETDEWEB)
Brockway, D.; Soran, P.; Whalen, P.
1985-01-01
A Monte Carlo algorithm to efficiently calculate static alpha eigenvalues, N = n·e^(αt), for supercritical systems has been developed and tested. A direct Monte Carlo approach to calculating a static alpha is to simply follow the buildup in time of neutrons in a supercritical system and evaluate the logarithmic derivative of the neutron population with respect to time. This procedure is expensive, and the solution is very noisy and almost useless for a system near critical. The modified approach is to convert the time-dependent problem to a static α-eigenvalue problem and regress α on solutions of a k-eigenvalue problem. In practice, this procedure is much more efficient than the direct calculation, and produces much more accurate results. Because the Monte Carlo codes are intrinsically three-dimensional and use elaborate continuous-energy cross sections, this technique is now used as a standard for evaluating other calculational techniques in odd geometries or with group cross sections.
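The alpha-from-k idea can be illustrated on a toy one-group infinite-medium model, where adding a time-absorption term α/v to the absorption cross section makes the static alpha the root of k(α) = 1. The cross sections and neutron speed below are invented placeholders, and the bisection search stands in for the regression on k-eigenvalue solutions described in the abstract.

```python
# Toy illustration of the alpha-from-k idea in a one-group infinite medium:
# k(alpha) = nu*Sigma_f / (Sigma_a + alpha/v), i.e. the time-absorption term
# alpha/v is folded into absorption, and the static alpha solves k(alpha) = 1.
# All constants are invented placeholders.

NU_SIGMA_F = 0.055   # production cross section (1/cm), placeholder
SIGMA_A = 0.050      # absorption cross section (1/cm), placeholder
SPEED = 2.2e5        # one-group neutron speed (cm/s), placeholder

def k_of_alpha(alpha):
    return NU_SIGMA_F / (SIGMA_A + alpha / SPEED)

def solve_alpha(lo=-1e4, hi=1e4, tol=1e-6):
    """Bisection on k(alpha) = 1; k decreases monotonically as alpha grows."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if k_of_alpha(mid) > 1.0:
            lo = mid          # still supercritical: alpha must be larger
        else:
            hi = mid
    return 0.5 * (lo + hi)

alpha = solve_alpha()
# In this toy model the root is available analytically for comparison:
analytic = SPEED * (NU_SIGMA_F - SIGMA_A)   # alpha = v*(nu*Sigma_f - Sigma_a)
```

A positive alpha here corresponds to a supercritical medium (k at α = 0 exceeds 1), matching the supercritical systems targeted by the abstract.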
Directory of Open Access Journals (Sweden)
Charlie Samuya Veric
2001-12-01
The importance of Carlos Bulosan in Filipino and Filipino-American radical history and literature is indisputable. His eminence spans the Pacific, and he is known, diversely, as a radical poet, fictionist, novelist, and labor organizer. Author of the canonical America Is in the Heart, Bulosan is celebrated for chronicling the conditions in America in his time, such as racism and unemployment. In the history of criticism on Bulosan's life and work, however, there is an undeclared general consensus that views Bulosan and his work as coherent permanent texts of radicalism and anti-imperialism. Central to the existence of such a tradition of critical reception are the generations of critics who, in more ways than one, control the discourse on and of Carlos Bulosan. This essay inquires into the sphere of the critical reception that orders, for our time and for the time ahead, the reading and interpretation of Bulosan. What eye and seeing, the essay asks, determine the perception of Bulosan as the angel of radicalism? What is obscured in constructing Bulosan as an immutable figure of the political? What light does the reader conceive when the personal is brought into the open and situated against the political? The essay explores the answers to these questions in Bulosan's loving letters to various friends, strangers, and white American women. The presence of these interrogations, the essay believes, will secure ultimately the continuing importance of Carlos Bulosan to radical literature and history.
2009-01-01
On 7 April CERN will be holding a symposium to mark the 75th birthday of Carlo Rubbia, who shared the 1984 Nobel Prize for Physics with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. Following a presentation by Rolf Heuer, lectures will be given by eminent speakers on areas of science to which Carlo Rubbia has made decisive contributions. Michel Spiro, Director of the French National Institute of Nuclear and Particle Physics (IN2P3) of the CNRS, Lyn Evans, sLHC Project Leader, and Alan Astbury of the TRIUMF Laboratory will talk about the physics of the weak interaction and the discovery of the W and Z bosons. Former CERN Director-General Herwig Schopper will lecture on CERN’s accelerators from LEP to the LHC. Giovanni Bignami, former President of the Italian Space Agency and Professor at the IUSS School for Advanced Studies in Pavia will speak about his work with Carlo Rubbia. Finally, Hans Joachim Sch...
2009-01-01
On 7 April CERN will be holding a symposium to mark the 75th birthday of Carlo Rubbia, who shared the 1984 Nobel Prize for Physics with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. Following a presentation by Rolf Heuer, lectures will be given by eminent speakers on areas of science to which Carlo Rubbia has made decisive contributions. Michel Spiro, Director of the French National Institute of Nuclear and Particle Physics (IN2P3) of the CNRS, Lyn Evans, sLHC Project Leader, and Alan Astbury of the TRIUMF Laboratory will talk about the physics of the weak interaction and the discovery of the W and Z bosons. Former CERN Director-General Herwig Schopper will lecture on CERN’s accelerators from LEP to the LHC. Giovanni Bignami, former President of the Italian Space Agency, will speak about his work with Carlo Rubbia. Finally, Hans Joachim Schellnhuber of the Potsdam Institute for Climate Research and Sven Kul...
International Nuclear Information System (INIS)
Rodrigues, A; Wu, Q; Sawkey, D
2016-01-01
Purpose: DEAR is a radiation therapy technique utilizing synchronized motion of gantry and couch during delivery to optimize dose distribution homogeneity and penumbra for treatment of superficial disease. Dose calculation for DEAR is not yet supported by commercial TPSs. The purpose of this study is to demonstrate the feasibility of using a web-based Monte Carlo (MC) simulation tool (VirtuaLinac) to calculate dose distributions for a DEAR delivery. Methods: MC simulations were run through VirtuaLinac, which is based on the GEANT4 platform. VirtuaLinac utilizes detailed linac head geometry and material models, validated phase space files, and a voxelized phantom. The input was expanded to include an XML file for simulation of varying mechanical axes as a function of MU. A DEAR XML plan was generated, used in the MC simulation, and delivered on a TrueBeam in Developer Mode. Radiographic film wrapped on a cylindrical phantom (12.5 cm radius) measured dose at a depth of 1.5 cm, which was compared to the simulation results. Results: A DEAR plan was simulated using an energy of 6 MeV and a 3×10 cm² cut-out in a 15×15 cm² applicator for a delivery of a 90° arc. The resulting data were found to provide qualitative and quantitative evidence that the simulation platform could be used as the basis for DEAR dose calculations. The resulting unwrapped 2D dose distributions agreed well in the cross-plane direction along the arc, with field sizes of 18.4 and 18.2 cm and penumbrae of 1.9 and 2.0 cm for measurements and simulations, respectively. Conclusion: Preliminary feasibility of a DEAR delivery using a web-based MC simulation platform has been demonstrated. This tool will benefit treatment planning for DEAR as a benchmark for developing other model-based algorithms, allowing efficient optimization of trajectories, and quality assurance of plans without the need for extensive measurements.
It is chloride depletion alkalosis, not contraction alkalosis.
Luke, Robert G; Galla, John H
2012-02-01
Maintenance of metabolic alkalosis generated by chloride depletion is often attributed to volume contraction. In balance and clearance studies in rats and humans, we showed that chloride repletion in the face of persisting alkali loading, volume contraction, and potassium and sodium depletion completely corrects alkalosis by a renal mechanism. Nephron segment studies strongly suggest the corrective response is orchestrated in the collecting duct, which has several transporters integral to acid-base regulation, the most important of which is pendrin, a luminal Cl⁻/HCO₃⁻ exchanger. Chloride depletion alkalosis should replace the notion of contraction alkalosis.
International Nuclear Information System (INIS)
Titt, U.; Newhauser, W. D.
2005-01-01
Proton therapy facilities are shielded to limit the amount of secondary radiation to which patients, occupational workers and members of the general public are exposed. The most commonly applied shielding design methods for proton therapy facilities comprise semi-empirical and analytical methods to estimate the neutron dose equivalent. This study compares the results of these methods with a detailed simulation of a proton therapy facility by using the Monte Carlo technique. A comparison of neutron dose equivalent values predicted by the various methods reveals the superior accuracy of the Monte Carlo predictions in locations where the calculations converge. However, the reliability of the overall shielding design increases if simulation results for which solutions have not converged (e.g. owing to too few particle histories) can be excluded and deterministic models used at those locations. Criteria to accept or reject Monte Carlo calculations in such complex structures are not well understood. An optimum rejection criterion would allow all converging solutions of the Monte Carlo simulation to be taken into account and reject all solutions with uncertainties larger than the design safety margins. In this study, an optimum rejection criterion of 10% was found. The mean ratio was 26; 62% of all receptor locations showed a ratio between 0.9 and 10, and 92% were between 1 and 100. (authors)
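The acceptance rule discussed above (keep Monte Carlo tallies whose statistical uncertainty is acceptable, fall back to a deterministic model elsewhere) can be sketched as follows. The tally values, locations, and the 10% cutoff applied to relative errors are placeholders for illustration.

```python
# Sketch of the hybrid acceptance rule: keep Monte Carlo tally results whose
# relative statistical uncertainty is below a cutoff (10% here), and
# substitute a deterministic-model estimate elsewhere. All values invented.

REL_ERR_CUTOFF = 0.10

def blend(mc_tallies, deterministic):
    """mc_tallies: {location: (dose, rel_err)}; deterministic: {location: dose}."""
    result = {}
    for loc, (dose, rel_err) in mc_tallies.items():
        if rel_err <= REL_ERR_CUTOFF:
            result[loc] = ("MC", dose)             # converged: trust the MC tally
        else:
            result[loc] = ("deterministic", deterministic[loc])  # fall back
    return result

# Hypothetical receptor locations with (dose equivalent, relative error):
mc = {"maze": (1.2e-3, 0.04), "roof": (5.0e-5, 0.35), "door": (8.1e-4, 0.09)}
det = {"maze": 1.5e-3, "roof": 9.0e-5, "door": 1.0e-3}
blended = blend(mc, det)
```

The "roof" tally, with 35% relative error, is rejected and replaced by the deterministic estimate, mirroring the paper's treatment of unconverged locations.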
Partridge, D.G.; Vrugt, J.A.; Tunved, P.; Ekman, A.M.L.; Struthers, H.; Sorooshian, A.
2012-01-01
This paper presents a novel approach to investigating cloud-aerosol interactions by coupling a Markov chain Monte Carlo (MCMC) algorithm to a pseudo-adiabatic cloud parcel model. Despite the number of numerical cloud-aerosol sensitivity studies previously conducted, few have used statistical analysis
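A minimal sketch of the coupling idea: a Metropolis MCMC sampler that treats the parcel model as a black-box forward model. The one-parameter linear "model", the pseudo-observation, and all tuning constants are invented stand-ins, not the paper's cloud parcel physics.

```python
import random, math

# Minimal Metropolis MCMC sketch: the sampler only ever calls the forward
# model, so any parcel model could be dropped in. Everything numeric here
# is an invented placeholder.

def forward_model(theta):
    return 2.0 * theta + 1.0          # stand-in for the cloud parcel model

OBS, OBS_SIGMA = 5.0, 0.5             # pseudo-observation and its uncertainty

def log_post(theta):
    # Flat prior, Gaussian likelihood around the pseudo-observation.
    resid = (forward_model(theta) - OBS) / OBS_SIGMA
    return -0.5 * resid * resid

def metropolis(n_steps=20000, step=0.5, seed=7):
    rng = random.Random(seed)
    theta, lp = 0.0, log_post(0.0)
    chain = []
    for _ in range(n_steps):
        prop = theta + rng.gauss(0.0, step)        # random-walk proposal
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:  # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain.append(theta)
    return chain

chain = metropolis()
posterior_mean = sum(chain[5000:]) / len(chain[5000:])  # discard burn-in
```

Here the posterior concentrates near theta = 2, the value at which the stand-in model reproduces the pseudo-observation.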
Energy Technology Data Exchange (ETDEWEB)
Mei, Donghai; Neurock, Matthew; Smith, C Michael
2009-10-22
The kinetics for the selective hydrogenation of acetylene-ethylene mixtures over model Pd(111) and bimetallic Pd-Ag alloy surfaces were examined using first-principles-based kinetic Monte Carlo (KMC) simulations to elucidate the effects of alloying as well as process conditions (temperature and hydrogen partial pressure). The mechanisms that control the selective and unselective routes, which included hydrogenation, dehydrogenation and C-C bond breaking pathways, were analyzed using first-principles density functional theory (DFT) calculations. The results were used to construct an intrinsic kinetic database that was used in a variable time step kinetic Monte Carlo simulation to follow the kinetics and the molecular transformations in the selective hydrogenation of acetylene-ethylene feeds over Pd and Pd-Ag surfaces. The lateral interactions between coadsorbates that occur through-surface and through-space were estimated using DFT-parameterized bond order conservation and van der Waals interaction models, respectively. The simulation results show that the rate of acetylene hydrogenation as well as the ethylene selectivity increase with temperature over both the Pd(111) and the Pd-Ag/Pd(111) alloy surfaces. The selective hydrogenation of acetylene to ethylene proceeds via the formation of a vinyl intermediate. The unselective formation of ethane is the result of the over-hydrogenation of ethylene as well as over-hydrogenation of vinyl to form ethylidene. Ethylidene further hydrogenates to form ethane and dehydrogenates to form ethylidyne. While ethylidyne is not reactive, it can block adsorption sites, which limits the availability of hydrogen on the surface and thus acts to enhance the selectivity. Alloying Ag into the Pd surface decreases the overall rate but increases the ethylene selectivity significantly by promoting the selective hydrogenation of vinyl to ethylene and concomitantly suppressing the unselective path involving the hydrogenation of vinyl to ethylidene.
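The variable-time-step KMC scheme mentioned above can be sketched as a Gillespie-type walk over the competing pathways named in the abstract (vinyl to ethylene versus vinyl to ethylidene to ethane). The rate constants are invented placeholders, not the DFT-derived values, so the resulting selectivity is purely illustrative.

```python
import random, math

# Gillespie-type (variable time step) KMC sketch of the competing pathways:
# acetylene -> vinyl, then vinyl -> ethylene (selective) versus
# vinyl -> ethylidene -> ethane (unselective). Rates are invented.

RATES = {
    ("acetylene", "vinyl"): 1.0,
    ("vinyl", "ethylene"): 0.8,      # selective branch
    ("vinyl", "ethylidene"): 0.2,    # unselective branch
    ("ethylidene", "ethane"): 1.0,
}

def kmc_run(n_molecules=5000, seed=3):
    rng = random.Random(seed)
    counts = {"ethylene": 0, "ethane": 0}
    for _ in range(n_molecules):
        species, t = "acetylene", 0.0
        while species not in counts:             # walk until a terminal product
            channels = [(dst, k) for (src, dst), k in RATES.items() if src == species]
            total = sum(k for _, k in channels)
            t += -math.log(rng.random()) / total  # exponential waiting time
            r = rng.random() * total              # pick a channel by its rate
            for dst, k in channels:
                r -= k
                if r <= 0.0:
                    species = dst
                    break
        counts[species] += 1
    return counts

counts = kmc_run()
selectivity = counts["ethylene"] / (counts["ethylene"] + counts["ethane"])
```

With these placeholder rates, the only branching point is at vinyl, so the ethylene selectivity fluctuates statistically around the branching ratio 0.8.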
Petroccia, Heather; Mendenhall, Nancy; Liu, Chihray; Hammer, Clifford; Culberson, Wesley; Thar, Tim; Mitchell, Tom; Li, Zuofeng; Bolch, Wesley
2017-08-01
Historical radiotherapy treatment plans lack the 3D image sets required for estimating mean organ doses to patients. Alternatively, Monte Carlo-based models of radiotherapy devices coupled with whole-body computational phantoms can permit estimates of historical in-field and out-of-field organ doses as needed for studies associating radiation exposure and late tissue toxicities. In recreating historical patient treatments with 60Co-based systems, the major components to be modeled include the source capsule, surrounding shielding layers, collimators (both fixed and adjustable), and trimmers as needed to vary field size. In this study, a computational model and experimental validation of the Theratron T-1000 are presented. Model validation is based upon in-field commissioning data collected at the University of Florida, published out-of-field data from the British Journal of Radiology (BJR) Supplement 25, and out-of-field measurements performed at the University of Wisconsin's Accredited Dosimetry Calibration Laboratory (UWADCL). The computational model of the Theratron T-1000 agrees with central axis percentage depth dose data to within 2% for 6 × 6 to 30 × 30 cm² fields. Out-of-field doses were found to vary between 0.6% and 2.4% of central axis dose at 10 cm from the field edge and between 0.42% and 0.97% of central axis dose at 20 cm from the field edge, all at 5 cm depth. Absolute and relative differences between computed and measured out-of-field doses varied between ±2.5% and ±100%, respectively, at distances up to 60 cm from the central axis. The source-term model was subsequently combined with patient-morphometry matched computational hybrid phantoms as a method for estimating in-field and out-of-field organ doses for patients treated for Hodgkin's Lymphoma. By changing field size and position, and adding patient-specific field shaping blocks, more complex historical treatment set-ups can be recreated, particularly those
Monte carlo analysis of multicolour LED light engine
DEFF Research Database (Denmark)
Chakrabarti, Maumita; Thorseth, Anders; Jepsen, Jørgen
2015-01-01
A new Monte Carlo simulation is presented as a tool for analysing colour feedback systems, used here to analyse the colour uncertainties and achievable stability in a multicolour dynamic LED system. The Monte Carlo analysis is based on an experimental investigation of a multicolour LED...
New Approaches and Applications for Monte Carlo Perturbation Theory
Energy Technology Data Exchange (ETDEWEB)
Aufiero, Manuele; Bidaud, Adrien; Kotlyar, Dan; Leppänen, Jaakko; Palmiotti, Giuseppe; Salvatores, Massimo; Sen, Sonat; Shwageraus, Eugene; Fratoni, Massimiliano
2017-02-01
This paper presents some of the recent and new advancements in the extension of Monte Carlo Perturbation Theory methodologies and applications. In particular, the discussed problems involve burnup calculation, perturbation calculations based on continuous energy functions, and Monte Carlo Perturbation Theory in loosely coupled systems.
A keff calculation method by Monte Carlo
International Nuclear Information System (INIS)
Shen, H; Wang, K.
2008-01-01
The effective multiplication factor (keff) is defined as the ratio between the numbers of neutrons in successive generations; this definition is adopted by most Monte Carlo codes (e.g. MCNP). Alternatively, it can be thought of as the ratio of the neutron generation rate to the sum of the leakage rate and the absorption rate, which should exclude the effect of neutron reactions such as (n, 2n) and (n, 3n). This article discusses the Monte Carlo method for keff calculation based on the second definition. A new code has been developed and the results are presented. (author)
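The two definitions can be checked against each other on toy tally numbers. The rates below are invented placeholders; when the losses tallied over a generation balance that generation's population, the two ratios coincide.

```python
# Toy numbers illustrating the two keff definitions discussed above.
# All rates are invented placeholder tallies from a hypothetical run.

production = 1.02e6    # neutrons produced by fission per generation
absorption = 8.0e5     # neutrons absorbed in that generation
leakage = 2.0e5        # neutrons leaking out in that generation

# Second definition: production rate over (leakage + absorption) rates.
k_eff_balance = production / (leakage + absorption)

# First definition: ratio of successive generation populations. In this toy
# balance, generation n is exactly the neutrons later lost to absorption
# plus leakage, and generation n+1 is the production tally.
generation_n, generation_np1 = 1.0e6, 1.02e6
k_eff_generation = generation_np1 / generation_n
```

Both ratios give keff = 1.02 here; in a real tally the two estimators differ statistically, and reactions like (n, 2n) must be booked consistently on the production side rather than as (negative) absorption.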
Adiabatic optimization versus diffusion Monte Carlo methods
Jarret, Michael; Jordan, Stephen P.; Lackey, Brad
2016-10-01
Most experimental and theoretical studies of adiabatic optimization use stoquastic Hamiltonians, whose ground states are expressible using only real nonnegative amplitudes. This raises a question as to whether classical Monte Carlo methods can simulate stoquastic adiabatic algorithms with polynomial overhead. Here we analyze diffusion Monte Carlo algorithms. We argue that, based on differences between L1- and L2-normalized states, these algorithms suffer from certain obstructions preventing them from efficiently simulating stoquastic adiabatic evolution in generality. In practice however, we obtain good performance by introducing a method that we call Substochastic Monte Carlo. In fact, our simulations are good classical optimization algorithms in their own right, competitive with the best previously known heuristic solvers for MAX-k-SAT at k = 2, 3, 4.
Shell model the Monte Carlo way
International Nuclear Information System (INIS)
Ormand, W.E.
1995-01-01
The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined
Off-diagonal expansion quantum Monte Carlo.
Albash, Tameem; Wagenbreth, Gene; Hen, Itay
2017-12-01
We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.
Sikora, M; Dohm, O; Alber, M
2007-08-07
A dedicated, efficient Monte Carlo (MC) accelerator head model for intensity modulated stereotactic radiosurgery treatment planning is needed to afford a highly accurate simulation of tiny IMRT fields. A virtual source model (VSM) of a mini multi-leaf collimator (MLC) (the Elekta Beam Modulator (EBM)) is presented, allowing efficient generation of particles even for small fields. The VSM of the EBM is based on a previously published virtual photon energy fluence model (VEF) (Fippel et al 2003 Med. Phys. 30 301) commissioned with large field measurements in air and in water. The original commissioning procedure of the VEF, based on large field measurements only, leads to inaccuracies for small fields. In order to improve the VSM, it was necessary to change the VEF model by developing (1) a method to determine the primary photon source diameter, relevant for output factor calculations, (2) a model of the influence of the flattening filter on the secondary photon spectrum and (3) a more realistic primary photon spectrum. The VSM model is used to generate the source phase space data above the mini-MLC. Later the particles are transmitted through the mini-MLC by a passive filter function which significantly speeds up the generation of the phase space data after the mini-MLC, used for calculation of the dose distribution in the patient. The improved VSM model was commissioned for 6 and 15 MV beams. The results of MC simulation are in very good agreement with measurements. A local difference of less than 2% between the MC simulation and the diamond detector measurement of the output factors in water was achieved. The X, Y and Z profiles measured in water with an ion chamber (V = 0.125 cm³) and a diamond detector were used to validate the models. An overall agreement of 2%/2 mm for high dose regions and 3%/2 mm in low dose regions between measurement and MC simulation for field sizes from 0.8 × 0.8 cm² to 16 × 21 cm² was achieved. An IMRT plan film verification
Energy Technology Data Exchange (ETDEWEB)
Wan Chan Tseung, H; Ma, J; Beltran, C [Mayo Clinic, Rochester, MN (United States)
2014-06-15
Purpose: To build a GPU-based Monte Carlo (MC) simulation of proton transport with detailed modeling of elastic and non-elastic (NE) proton-nucleus interactions, for use in a very fast and cost-effective proton therapy treatment plan verification system. Methods: Using the CUDA framework, we implemented kernels for the following tasks: (1) Simulation of beam spots from our possible scanning nozzle configurations, (2) Proton propagation through CT geometry, taking into account nuclear elastic and multiple scattering, as well as energy straggling, (3) Bertini-style modeling of the intranuclear cascade stage of NE interactions, and (4) Simulation of nuclear evaporation. To validate our MC, we performed: (1) Secondary particle yield calculations in NE collisions with therapeutically-relevant nuclei, (2) Pencil-beam dose calculations in homogeneous phantoms, (3) A large number of treatment plan dose recalculations, and compared with Geant4.9.6p2/TOPAS. A workflow was devised for calculating plans from a commercially available treatment planning system, with scripts for reading DICOM files and generating inputs for our MC. Results: Yields, energy and angular distributions of secondaries from NE collisions on various nuclei are in good agreement with the Geant4.9.6p2 Bertini and Binary cascade models. The 3D-gamma pass rate at 2%/2 mm for 70–230 MeV pencil-beam dose distributions in water, soft tissue, bone and Ti phantoms is 100%. The pass rate at 2%/2 mm for treatment plan calculations is typically above 98%. The net computational time on a NVIDIA GTX680 card, including all CPU-GPU data transfers, is around 20 s for 1×10^7 proton histories. Conclusion: Our GPU-based proton transport MC is the first of its kind to include a detailed nuclear model to handle NE interactions on any nucleus. Dosimetric calculations demonstrate very good agreement with Geant4.9.6p2/TOPAS. Our MC is being integrated into a framework to perform fast routine clinical QA of pencil
Acceleration of the MCNP branch of the OCTOPUS depletion code system
International Nuclear Information System (INIS)
Pijlgroms, B.J.; Hogenbirk, A.; Oppe, J.
1998-09-01
OCTOPUS depletion calculations using the 3D Monte Carlo spectrum code MCNP (Monte Carlo Code for Neutron and Photon Transport) require much computing time. In a former implementation, the time required by OCTOPUS to perform multi-zone calculations increased roughly in proportion to the number of burnable zones. By using a different method, the situation has improved considerably. In the new implementation described here, the dependence of the computing time on the number of zones has been moved from the MCNP code to a faster postprocessing code. This reduces the overall computing time substantially. 11 refs
International Nuclear Information System (INIS)
Talley, T.L.; Evans, F.
1988-01-01
Prior work demonstrated the importance of nuclear scattering to fusion product energy deposition in hot plasmas. This suggests careful examination of nuclear physics details in burning plasma simulations. An existing Monte Carlo fast ion transport code is being expanded to be a test bed for this examination. An initial extension, the energy deposition of fast alpha particles in a hot deuterium plasma, is reported. The deposition times and deposition ranges are modified by allowing nuclear scattering. Up to 10% of the initial alpha particle energy is carried to greater ranges and times by the more mobile recoil deuterons. 4 refs., 5 figs., 2 tabs
ALEPH: An optimal approach to Monte Carlo burn-up
International Nuclear Information System (INIS)
Verboomen, B.
2007-01-01
The incentive of creating Monte Carlo burn-up codes arises from their ability to provide the most accurate locally dependent spectra and flux values in realistic 3D geometries of any type. These capabilities, linked with the ability to handle nuclear data not only in its most basic but also most complex form (namely continuous energy cross sections, detailed energy-angle correlations, multi-particle physics, etc.), could make Monte Carlo burn-up codes very powerful, especially for hybrid and advanced nuclear systems (like for instance Accelerator Driven Systems). Still, such Monte Carlo burn-up codes have had limited success mainly due to the rather long CPU time required to carry out very detailed and accurate calculations, even with modern computer technology. To work around this issue, users often have to reduce the number of nuclides in the evolution chains or to consider either longer irradiation time steps and/or larger spatial burn-up cells, jeopardizing the accuracy of the calculation in all cases. There should always be a balance between accuracy and what is (reasonably) achievable. So when the Monte Carlo simulation time is as low as possible and if calculating the cross sections and flux values required for the depletion calculation takes little or no extra time compared to this simulation time, then we can actually be as accurate as we want. That is the optimum situation for Monte Carlo burn-up calculations. The ultimate goal of this work is to provide the Monte Carlo community with an efficient, flexible and easy to use alternative for Monte Carlo burn-up and activation calculations, which is what we did with ALEPH. ALEPH is a Monte Carlo burn-up code that uses ORIGEN 2.2 as a depletion module and any version of MCNP or MCNPX as the transport module. For now, ALEPH has been limited to updating microscopic cross section data only. By providing an easy to understand user interface, we also take away the burden from the user. For the user, it is as if he is
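The transport-depletion alternation that such codes automate can be sketched schematically. The "transport" step below is a stand-in that just renormalizes a one-group flux to constant power, and the "depletion" step solves dN/dt = -σφN for a single nuclide; all cross sections, fluxes, and densities are invented placeholders rather than MCNP/ORIGEN quantities.

```python
import math

# Schematic of a coupled transport/depletion loop: alternate a flux solve
# with a depletion solve, updating number densities between burnup steps.
# All constants are invented placeholders.

SIGMA_A = 5.0e-23       # one-group absorption cross section (cm^2), placeholder
FLUX_0 = 3.0e14         # flux at the initial density (n/cm^2/s), placeholder
N0 = 1.0e21             # initial fissile number density (1/cm^3), placeholder

def transport_step(n_density):
    """Stand-in for the transport solve: the flux rises as the fissile
    density burns out, keeping sigma*phi*N (i.e. power) roughly constant."""
    return FLUX_0 * N0 / n_density

def depletion_step(n_density, flux, dt):
    """Stand-in for the depletion solve: dN/dt = -sigma*phi*N over one step."""
    return n_density * math.exp(-SIGMA_A * flux * dt)

dt = 30 * 24 * 3600           # 30-day burnup steps, in seconds
n, history = N0, [N0]
for _ in range(12):           # one year of burnup
    flux = transport_step(n)            # 1) transport with current densities
    n = depletion_step(n, flux, dt)     # 2) deplete with the updated flux
    history.append(n)                   # 3) feed densities back to transport
burnup_fraction = 1.0 - n / N0
```

The loop highlights the cost trade-off the abstract describes: each pass through `transport_step` is where a real code spends its Monte Carlo time, so cheap in-line cross-section and flux extraction is what makes fine burnup steps affordable.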
Comparative Analysis of VERA Depletion Problems
International Nuclear Information System (INIS)
Park, Jinsu; Kim, Wonkyeong; Choi, Sooyoung; Lee, Hyunsuk; Lee, Deokjung
2016-01-01
Each code has its own depletion solver, which can produce different depletion calculation results. In order to produce reference solutions for a depletion-calculation comparison, sensitivity studies should first be performed for each depletion solver. Sensitivity tests for the burnup interval, the number of depletion zones, and the recoverable energy per fission (Q-value) were performed in this paper. For the comparison of depletion results, the multiplication factors are usually compared as a function of burnup. In this study, new comparison methods are introduced that use the number density of an isotope or element, and cumulative flux instead of burnup. Optimum depletion calculation options are determined through the sensitivity study of the burnup intervals and the number of depletion intra-zones. Because depletion with the CRAM solver performs well for large burnup intervals, a smaller number of burnup steps can be used to produce converged solutions. The depletion intra-zone sensitivity was found to be only pin-type dependent: 1 and 10 depletion intra-zones are required for the normal UO2 pin and the gadolinia rod, respectively, to obtain the reference solutions. When the optimized depletion calculation options are used, differences in Q-values are found to be the main cause of the differences between solutions. The new comparison methods allow consistent code-to-code comparisons even when different kappa libraries are used in the depletion calculations
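The burnup-interval sensitivity study described in this abstract can be illustrated with a toy solver. The sketch below is not the CRAM solver: it uses explicit Euler on a single nuclide (a hypothetical stand-in) purely to show how refining the burnup interval drives the end-of-step number density toward a converged reference solution.

```python
import math

def deplete_euler(n0, rate, t_end, nsteps):
    """Explicit-Euler depletion of a single nuclide, dn/dt = -rate * n.
    A toy solver used only to illustrate burnup-interval sensitivity."""
    n, dt = n0, t_end / nsteps
    for _ in range(nsteps):
        n -= rate * n * dt
    return n

def step_sensitivity(n0=1.0, rate=1e-2, t_end=100.0):
    """Refine the burnup interval and record the relative error against
    the analytic solution -- the 'converged solution' criterion the
    sensitivity study relies on."""
    exact = n0 * math.exp(-rate * t_end)
    return {nsteps: abs(deplete_euler(n0, rate, t_end, nsteps) - exact) / exact
            for nsteps in (1, 10, 100, 1000)}
```

A solver that stays accurate for large intervals (as CRAM does) lets the study use far fewer burnup steps than this first-order scheme would need for the same tolerance.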
Su, Lin; Yang, Youming; Bednarz, Bryan; Sterpin, Edmond; Du, Xining; Liu, Tianyu; Ji, Wei; Xu, X. George
2014-01-01
Purpose: Using graphics processing unit (GPU) hardware technology, an extremely fast Monte Carlo (MC) code, ARCHERRT, is developed for radiation dose calculations in radiation therapy. This paper describes the detailed software development and testing for three clinical TomoTherapy® cases: prostate, lung, and head & neck. Methods: To obtain clinically relevant dose distributions, phase space files (PSFs) created from optimized radiation therapy treatment plan fluence maps were used as the input to ARCHERRT. Patient-specific phantoms were constructed from patient CT images. Batch simulations were employed to facilitate the time-consuming task of loading large PSFs, and to improve the estimation of statistical uncertainty. Furthermore, two different Woodcock tracking algorithms were implemented and their relative performance was compared. The dose curves of an Elekta accelerator PSF incident on a homogeneous water phantom were benchmarked against DOSXYZnrc. For each of the treatment cases, dose volume histograms and isodose maps were produced from ARCHERRT and the general-purpose code GEANT4. Gamma index analysis was performed to evaluate the similarity of voxel doses obtained from these two codes. The hardware accelerators used in this study are one NVIDIA K20 GPU, one NVIDIA K40 GPU, and six NVIDIA M2090 GPUs. In addition, to make a fairer comparison of CPU and GPU performance, a multithreaded CPU code was developed using OpenMP and tested on an Intel E5-2620 CPU. Results: For the water phantom, the depth dose curve and dose profiles from ARCHERRT agree well with DOSXYZnrc. For the clinical cases, results from ARCHERRT are compared with those from GEANT4 and good agreement is observed. The gamma index test is performed for voxels whose dose is greater than 10% of the maximum dose. For 2%/2 mm criteria, the passing rates for the prostate, lung, and head & neck cases are 99.7%, 98.5%, and 97.2%, respectively. Due to the specific architecture of the GPU, modified
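The gamma index analysis mentioned in this abstract can be sketched in one dimension. The function below is a simplified global gamma computation under the 2%/2 mm criteria and 10% dose threshold quoted above; the array names and sampling are illustrative assumptions, not the actual implementation used in the paper.

```python
import numpy as np

def gamma_pass_rate(dose_eval, dose_ref, x, dd=0.02, dta=2.0, threshold=0.1):
    """Simplified 1-D global gamma analysis (2%/2 mm by default).
    dose_eval and dose_ref are dose arrays sampled at positions x (mm);
    only voxels above `threshold` of the maximum reference dose count."""
    dmax = dose_ref.max()
    mask = dose_ref > threshold * dmax
    gammas = []
    for i in np.where(mask)[0]:
        # gamma(i) = min over reference points of the combined
        # dose-difference / distance-to-agreement metric
        dist2 = ((x - x[i]) / dta) ** 2
        dose2 = ((dose_eval[i] - dose_ref) / (dd * dmax)) ** 2
        gammas.append(np.sqrt(dist2 + dose2).min())
    return (np.array(gammas) <= 1.0).mean()
```

A voxel passes when some reference point lies within the combined dose/distance ellipsoid (gamma ≤ 1); the passing rates reported above are this fraction over all above-threshold voxels.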
Energy Technology Data Exchange (ETDEWEB)
Barrera, C A; Moran, M J
2007-08-21
The Neutron Imaging System (NIS) is one of seven ignition target diagnostics under development for the National Ignition Facility. The NIS is required to record hot-spot (13-15 MeV) and downscattered (6-10 MeV) images with a resolution of 10 microns and a signal-to-noise ratio (SNR) of 10 at the 20% contour. The NIS is a valuable diagnostic since the downscattered neutrons reveal the spatial distribution of the cold fuel during an ignition attempt, providing important information in the case of a failed implosion. The present study explores the parameter space of several line-of-sight (LOS) configurations that could serve as the basis for the final design. Six commercially available organic scintillators were experimentally characterized for their light emission decay profile and neutron sensitivity. The samples showed a long lived decay component that makes direct recording of a downscattered image impossible. The two best candidates for the NIS detector material are: EJ232 (BC422) plastic fibers or capillaries filled with EJ399B. A Monte Carlo-based end-to-end model of the NIS was developed to study the imaging capabilities of several LOS configurations and verify that the recovered sources meet the design requirements. The model includes accurate neutron source distributions, aperture geometries (square pinhole, triangular wedge, mini-penumbral, annular and penumbral), their point spread functions, and a pixelated scintillator detector. The modeling results show that a useful downscattered image can be obtained by recording the primary peak and the downscattered images, and then subtracting a decayed version of the former from the latter. The difference images need to be deconvolved in order to obtain accurate source distributions. The images are processed using a frequency-space modified-regularization algorithm and low-pass filtering. The resolution and SNR of these sources are quantified by using two surrogate sources. The simulations show that all LOS
International Nuclear Information System (INIS)
Chetty, Indrin J.; Curran, Bruce; Cygler, Joanna E.; DeMarco, John J.; Ezzell, Gary; Faddegon, Bruce A.; Kawrakow, Iwan; Keall, Paul J.; Liu, Helen; Ma, C.-M. Charlie; Rogers, D. W. O.; Seuntjens, Jan; Sheikh-Bagheri, Daryoush; Siebers, Jeffrey V.
2007-01-01
The Monte Carlo (MC) method has been shown through many research studies to calculate accurate dose distributions for clinical radiotherapy, particularly in heterogeneous patient tissues where the effects of electron transport cannot be accurately handled with conventional, deterministic dose algorithms. Despite its proven accuracy and the potential for improved dose distributions to influence treatment outcomes, the long calculation times previously associated with MC simulation rendered this method impractical for routine clinical treatment planning. However, the development of faster codes optimized for radiotherapy calculations and improvements in computer processor technology have substantially reduced calculation times to, in some instances, within minutes on a single processor. These advances have motivated several major treatment planning system vendors to embark upon the path of MC techniques. Several commercial vendors have already released or are currently in the process of releasing MC algorithms for photon and/or electron beam treatment planning. Consequently, the accessibility and use of MC treatment planning algorithms may well become widespread in the radiotherapy community. With MC simulation, dose is computed stochastically using first principles; this method is therefore quite different from conventional dose algorithms. Issues such as statistical uncertainties, the use of variance reduction techniques, the ability to account for geometric details in the accelerator treatment head simulation, and other features, are all unique components of a MC treatment planning algorithm. Successful implementation by the clinical physicist of such a system will require an understanding of the basic principles of MC techniques. The purpose of this report, while providing education and review on the use of MC simulation in radiotherapy planning, is to set out, for both users and developers, the salient issues associated with clinical implementation and
Coudin, D; Marinello, G
1998-03-01
Recently, new backscatter factors (BSFs) for low-energy x rays, derived from Monte Carlo calculations, have been recommended in the UK code of practice for kilovoltage dosimetry published by the IPEMB. As these data, presented as a function of half-value layer (HVL), do not take account of the variation of the x-ray spectra for a given HVL, we undertook an experimental study to determine BSFs for the beam qualities provided by a Darpac 2000 therapy unit. A thermoluminescent detector, Li2B4O7:Cu, and parallel-plate ion chambers specially designed for low-energy x-ray dosimetry were used. The results show very good agreement between the TLD measurements and the Monte Carlo calculations, confirming values obtained by other authors with lithium borate TLD. In contrast, the results obtained with plane-parallel ion chambers show discrepancies of up to 9%, which are discussed.
International Nuclear Information System (INIS)
Nagaya, Yasunobu; Okumura, Keisuke; Sakurai, Takeshi; Mori, Takamasa
2017-03-01
In order to realize fast and accurate Monte Carlo simulation of neutron and photon transport problems, two Monte Carlo codes MVP (continuous-energy method) and GMVP (multigroup method) have been developed at Japan Atomic Energy Agency. The codes have adopted a vectorized algorithm and have been developed for vector-type supercomputers. They also support parallel processing with a standard parallelization library MPI and thus a speed-up of Monte Carlo calculations can be achieved on general computing platforms. The first and second versions of the codes were released in 1994 and 2005, respectively. They have been extensively improved and new capabilities have been implemented. The major improvements and new capabilities are as follows: (1) perturbation calculation for effective multiplication factor, (2) exact resonant elastic scattering model, (3) calculation of reactor kinetics parameters, (4) photo-nuclear model, (5) simulation of delayed neutrons, (6) generation of group constants. This report describes the physical model, geometry description method used in the codes, new capabilities and input instructions. (author)
Monte Carlo simulation of experiments
International Nuclear Information System (INIS)
Opat, G.I.
1977-07-01
An outline of the technique of computer simulation of particle-physics experiments by the Monte Carlo method is presented. Useful special-purpose subprograms are listed and described. At each stage the discussion is made concrete by direct reference to the program SIMUL8 and its variant MONTE-PION, written to assist in the analysis of the radiative decay experiments μ⁺ → e⁺ νₑ ν̄ γ and π⁺ → e⁺ νₑ γ, respectively. These experiments were based on the use of two large sodium iodide crystals, TINA and MINA, as e and γ detectors. Instructions for the use of SIMUL8 and MONTE-PION are given. (author)
Energy Technology Data Exchange (ETDEWEB)
Ureba, A.; Palma, B. A.; Leal, A.
2011-07-01
To develop a more time-efficient optimization method, based on linear programming, that implements a multi-objective penalty function and also permits an integrated solution of simultaneous-boost situations considering two target volumes at once.
Energy Technology Data Exchange (ETDEWEB)
Arreola V, G. [IPN, Escuela Superior de Fisica y Matematicas, Posgrado en Ciencias Fisicomatematicas, area en Ingenieria Nuclear, Unidad Profesional Adolfo Lopez Mateos, Edificio 9, Col. San Pedro Zacatenco, 07730 Mexico D. F. (Mexico); Vazquez R, R.; Guzman A, J. R., E-mail: energia.arreola.uam@gmail.com [Universidad Autonoma Metropolitana, Unidad Iztapalapa, Area de Ingenieria en Recursos Energeticos, Av. San Rafael Atlixco 186, Col. Vicentina, 09340 Mexico D. F. (Mexico)
2012-10-15
In this work a comparative analysis of the results for neutron dispersion in a non-multiplying semi-infinite medium is presented. One boundary of this medium is located at the origin of coordinates, where there is also a neutron source in beam form, i.e., μ₀ = 1. The neutron dispersion is studied with the statistical Monte Carlo method and through one-dimensional, one-energy-group transport theory. The application of transport theory gives a semi-analytic solution for this problem, while the statistical solution for the flux was obtained with the MCNPX code. Dispersion in light water and in heavy water was studied. A first remarkable result is that both methods locate the maximum of the neutron distribution at less than two transport mean free paths for heavy water, while for light water it lies at less than ten transport mean free paths; the differences between the two methods are larger in the light-water case. A second remarkable result is that the two distributions behave similarly at small mean free paths, while at large mean free paths the transport-theory solution tends to an asymptotic value and the statistical solution tends to zero. The existence of a low-energy neutron current directed toward the source is demonstrated, opposite in direction to the high-energy neutron current coming from the source itself. (Author)
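The one-group Monte Carlo side of the comparison in this abstract can be sketched with a simple random walk. The code below is a hypothetical illustration, not MCNPX: a beam source (μ₀ = 1) enters a semi-infinite isotropically scattering medium at x = 0, with distances measured in mean free paths and a survival probability c per collision.

```python
import math
import random

def collision_depths(n_histories=20000, c=0.9, seed=1):
    """One-group Monte Carlo random walk in a semi-infinite medium:
    a normally incident beam at x = 0, isotropic scattering with
    survival probability c, path lengths in mean free paths.
    Returns the depth of every collision (a crude collision-density tally)."""
    rng = random.Random(seed)
    depths = []
    for _ in range(n_histories):
        x, mu = 0.0, 1.0                     # beam source: mu0 = 1
        while True:
            x += mu * (-math.log(rng.random()))  # sample free flight
            if x < 0.0:
                break                        # leaked back out of the medium
            depths.append(x)
            if rng.random() > c:
                break                        # absorbed
            mu = 2.0 * rng.random() - 1.0    # isotropic re-emission
    return depths
```

Histogramming these depths gives the statistical flux shape whose maximum and deep-penetration tail are compared against the transport-theory solution in the abstract.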
GPT-Free Sensitivity Analysis for Reactor Depletion and Analysis
Kennedy, Christopher Brandon
model (ROM) error. When building a subspace using the GPT-Free approach, the reduction error can be selected based on an error tolerance for generic flux response-integrals. The GPT-Free approach then solves the fundamental adjoint equation with randomly generated sets of input parameters. Using properties from linear algebra, the fundamental k-eigenvalue sensitivities, spanned by the various randomly generated models, can be related to response sensitivity profiles by a change of basis. These sensitivity profiles are the first-order derivatives of responses with respect to input parameters. The quality of the basis is evaluated using the kappa-metric, developed from Wilks' order statistics, on the user-defined response functionals that involve the flux state-space. Because the kappa-metric is formed from Wilks' order statistics, a probability-confidence interval can be established around the reduction error based on user-defined responses such as fuel-flux, max-flux error, or other generic inner products requiring the flux. In general, the GPT-Free approach will produce a ROM with a quantifiable, user-specified reduction error. This dissertation demonstrates the GPT-Free approach for steady-state and depletion reactor calculations modeled by SCALE6, an analysis tool developed by Oak Ridge National Laboratory. Future work includes the development of GPT-Free for new Monte Carlo methods where the fundamental adjoint is available. Additionally, the approach in this dissertation examines only the first derivatives of responses, the response sensitivity profile; extension and/or generalization of the GPT-Free approach to higher-order response sensitivity profiles is a natural area for future research.
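The core linear-algebra step described above — spanning sensitivities with randomly generated model evaluations and orthogonalizing them into a reduced basis — can be sketched generically. The function name, the callback interface, and the SVD-based truncation are illustrative assumptions, not the SCALE6/GPT-Free implementation.

```python
import numpy as np

def randomized_basis(apply_model, n_params, n_samples, tol=1e-8, seed=0):
    """Randomized construction of a reduced subspace, in the spirit of the
    GPT-Free approach: evaluate the model's sensitivity vector at random
    input points, then orthogonalize the snapshots to find the subspace
    they span. apply_model(p) must return a sensitivity vector for the
    parameter vector p (a hypothetical interface)."""
    rng = np.random.default_rng(seed)
    snapshots = np.column_stack(
        [apply_model(rng.standard_normal(n_params)) for _ in range(n_samples)]
    )
    # SVD-based orthogonalization; singular values below the relative
    # tolerance are discarded, playing the role of the user-specified
    # reduction error.
    u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    rank = int(np.sum(s > tol * s[0]))
    return u[:, :rank]
```

For a model whose sensitivities live in a low-dimensional subspace, a handful of random evaluations recovers that subspace exactly, which is what makes the ROM error quantifiable.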
Uranium, depleted uranium, biological effects
International Nuclear Information System (INIS)
2001-01-01
Physicists, chemists and biologists at the CEA are developing scientific programs on the properties and uses of ionizing radiation. Since the CEA was created in 1945, a great deal of research has been carried out on the properties of natural, enriched and depleted uranium in cooperation with university laboratories and CNRS. There is a great deal of available data about uranium; thousands of analyses have been published in international reviews over more than 40 years. This presentation on uranium is a very brief summary of all these studies. (author)
Ebert, David D; Berking, Matthias; Thiart, Hanne; Riper, Heleen; Laferton, Johannes A C; Cuijpers, Pim; Sieland, Bernhard; Lehr, Dirk
2015-12-01
This randomized controlled trial evaluated the efficacy of an Internet-based intervention, which aimed to improve recovery from work-related strain in teachers with sleeping problems and work-related rumination. In addition, mechanisms of change were also investigated. A sample of 128 teachers with elevated symptoms of insomnia (Insomnia Severity Index [ISI] ≥ 15) and work-related rumination (Cognitive Irritation Scale ≥ 15) was assigned to either an Internet-based recovery training (intervention condition [IC]) or to a waitlist control condition (CC). The IC consisted of 6 Internet-based sessions that aimed to promote healthy restorative behavior. Self-report data were assessed at baseline and again after 8 weeks. Additionally, a sleep diary was used starting 1 week before baseline and ending 1 week after postassessment. The primary outcome was insomnia severity. Secondary outcomes included perseverative cognitions (i.e., work-related rumination and worrying), a range of recovery measures and depression. An extended 6-month follow-up was assessed in the IC only. A serial multiple mediator analysis was carried out to investigate mechanisms of change. IC participants displayed a significantly greater reduction in insomnia severity (d = 1.37, 95% confidence interval: 0.99-1.77) than did participants of the CC. The IC was also superior with regard to changes in all investigated secondary outcomes. Effects were maintained until a naturalistic 6-month follow-up. Effects on insomnia severity were mediated by both a reduction in perseverative cognitions and sleep effort. Additionally, a greater increase in number of recovery activities per week was found to be associated with lower perseverative cognitions that in turn led to a greater reduction in insomnia severity. This study provides evidence for the efficacy of an unguided, Internet-based occupational recovery training and provided first evidence for a number of assumed mechanisms of change. (PsycINFO Database
Monte Carlo Methods in Physics
International Nuclear Information System (INIS)
Santoso, B.
1997-01-01
The method of Monte Carlo integration is reviewed briefly and some of its applications in physics are explained. A numerical experiment on the random-number generators used in Monte Carlo techniques is carried out to show the behavior of the randomness of the various methods of generating them. To account for the weight function involved in the Monte Carlo method, the Metropolis method is used. From the results of the experiment, one can see that there are no regular patterns in the numbers generated, showing that the generators are reasonably good, while the experimental results show a statistical distribution obeying the expected statistical law. Some applications of the Monte Carlo method in physics are then given. The physical problems are chosen such that the models have available solutions, either exact or approximate, against which the Monte Carlo calculations can be compared. The comparisons show that, for the models considered, good agreement is obtained
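The kind of comparison the abstract describes — a Monte Carlo estimate checked against a model with a known solution — can be shown in a few lines. This is a plain (unweighted) Monte Carlo integrator with a one-standard-deviation error estimate; the function name and test integral are illustrative choices.

```python
import math
import random

def mc_integrate(f, a, b, n=100000, seed=42):
    """Plain Monte Carlo estimate of the integral of f over [a, b],
    returning (estimate, one-sigma statistical uncertainty)."""
    rng = random.Random(seed)
    samples = [f(a + (b - a) * rng.random()) for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return (b - a) * mean, (b - a) * math.sqrt(var / n)

# A model with an exact solution, as the review suggests:
# the integral of sin(x) on [0, pi] equals 2.
est, err = mc_integrate(math.sin, 0.0, math.pi, n=200000)
```

For weighted sampling problems the uniform draw above would be replaced by Metropolis sampling from the weight function, as the abstract notes.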
Iwata, Hiromitsu; Ishikura, Satoshi; Murai, Taro; Iwabuchi, Michio; Inoue, Mitsuhiro; Tatewaki, Koshi; Ohta, Seiji; Yokota, Naoki; Shibamoto, Yuta
2017-08-01
In this phase I/II study, we assessed the safety and initial efficacy of stereotactic body radiotherapy (SBRT) for lung tumors with real-time tumor tracking using CyberKnife based on the Monte Carlo algorithm. Study subjects had histologically confirmed primary non-small-cell lung cancer staged as T1a-T2aN0M0 and pulmonary oligometastasis. The primary endpoint was the incidence of Grade ≥3 radiation pneumonitis (RP) within 180 days of the start of SBRT. The secondary endpoint was local control and overall survival rates. Five patients were initially enrolled at level 1 [50 Gy/4 fractions (Fr)]; during the observation period, level 0 (45 Gy/4 Fr) was opened. The dose was escalated to the next level when grade ≥3 RP was observed in 0 out of 5 or 1 out of 10 patients. Virtual quality assurance planning was performed for 60 Gy/4 Fr; however, dose constraints for the organs at risk did not appear to be within acceptable ranges. Therefore, level 2 (55 Gy/4 Fr) was regarded as the upper limit. After the recommended dose (RD) was established, 15 additional patients were enrolled at the RD. The prescribed dose was normalized at the 95% volume border of the planning target volume based on the Monte Carlo algorithm. Between September 2011 and September 2015, 40 patients (primary 30; metastasis 10) were enrolled. Five patients were enrolled at level 0, 15 at level 1, and 20 at level 2. Only one grade 3 RP was observed at level 1. Two-year local control and overall survival rates were 98 and 81%, respectively. The RD was 55 Gy/4 Fr. SBRT with real-time tumor tracking using CyberKnife based on the Monte Carlo algorithm was tolerated well and appeared to be effective for solitary lung tumors.
Rodriguez, M.; Brualla, L.
2018-04-01
Monte Carlo simulation of radiation transport is computationally demanding if reasonably low statistical uncertainties of the estimated quantities are to be obtained. Therefore, it can benefit to a large extent from high-performance computing. This work is aimed at assessing the performance of the first generation of the many-integrated-core (MIC) Xeon Phi coprocessor with respect to that of a CPU consisting of two 12-core Xeon processors in Monte Carlo simulation of coupled electron-photon showers. The comparison was made twofold: first, through a suite of basic tests, including parallel versions of the random number generators Mersenne Twister and a modified implementation of RANECU, intended to establish a baseline comparison between the two devices; secondly, through the pDPM code developed in this work. pDPM is a parallel version of the Dose Planning Method (DPM) program for fast Monte Carlo simulation of radiation transport in voxelized geometries. A variety of techniques aimed at obtaining large scalability on the Xeon Phi were implemented in pDPM. Maximum scalabilities of 84.2× and 107.5× were obtained on the Xeon Phi for simulations of electron and photon beams, respectively. Nevertheless, in none of the tests involving radiation transport did the Xeon Phi perform better than the CPU. The disadvantage of the Xeon Phi with respect to the CPU owes to the low performance of its single core: a single core of the Xeon Phi was more than 10 times less efficient than a single core of the CPU for all radiation transport simulations.
Coded aperture optimization using Monte Carlo simulations
International Nuclear Information System (INIS)
Martineau, A.; Rocchisani, J.M.; Moretti, J.L.
2010-01-01
Coded apertures using Uniformly Redundant Arrays (URA) have been unsuccessfully evaluated for two-dimensional and three-dimensional imaging in Nuclear Medicine. The images reconstructed from coded projections contain artifacts and suffer from poor spatial resolution in the longitudinal direction. We introduce a Maximum-Likelihood Expectation-Maximization (MLEM) algorithm for three-dimensional coded aperture imaging which uses a projection matrix calculated by Monte Carlo simulations. The aim of the algorithm is to reduce artifacts and improve the three-dimensional spatial resolution in the reconstructed images. Firstly, we present the validation of GATE (Geant4 Application for Emission Tomography) for Monte Carlo simulations of a coded mask installed on a clinical gamma camera. The coded mask modelling was validated by comparison between experimental and simulated data in terms of energy spectra, sensitivity and spatial resolution. In the second part of the study, we use the validated model to calculate the projection ma
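The MLEM algorithm introduced in this abstract has a compact general form, x⁽ᵏ⁺¹⁾ = (x⁽ᵏ⁾ / Aᵀ1) ⊙ Aᵀ(p / A x⁽ᵏ⁾). The sketch below is a generic dense-matrix MLEM iteration in which the system matrix stands in for the Monte Carlo-computed projection matrix described above; it is an illustration, not the authors' code.

```python
import numpy as np

def mlem(projections, system_matrix, n_iter=50):
    """MLEM reconstruction: multiplicative update of a nonnegative
    source estimate x so that A @ x matches the measured projections p.
    A is the (Monte Carlo-estimated, here generic) projection matrix."""
    A = system_matrix
    x = np.ones(A.shape[1])          # flat nonnegative starting image
    sens = A.sum(axis=0)             # sensitivity image, A^T 1
    for _ in range(n_iter):
        est = A @ x                  # forward projection
        est = np.where(est == 0.0, 1e-12, est)  # avoid division by zero
        x *= (A.T @ (projections / est)) / sens
    return x
```

Because the update is multiplicative, the estimate stays nonnegative at every iteration, which is one reason MLEM suppresses the reconstruction artifacts that plague linear decoding of coded-aperture projections.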