Validation of a Monte Carlo Based Depletion Methodology Using HFIR Post-Irradiation Measurements
Energy Technology Data Exchange (ETDEWEB)
Chandler, David [ORNL]; Maldonado, G. Ivan [ORNL]; Primm, Trent [ORNL]
2009-11-01
Post-irradiation uranium isotopic atomic densities within the core of the High Flux Isotope Reactor (HFIR) were calculated and compared to uranium mass spectrographic data measured in the late 1960s and early 1970s [1]. This study was performed in order to validate a Monte Carlo based depletion methodology for calculating the burnup-dependent nuclide inventory, specifically the post-irradiation uranium
Tippayakul, Chanatip
The main objective of this research is to develop a practical fuel management system for the Pennsylvania State University Breazeale research reactor (PSBR) based on several advanced Monte Carlo coupled depletion methodologies. The research involved two major activities: the development of models and methods, and the analysis and validation of those models and methods. The starting point was the use of the earlier developed fuel management tool, TRIGSIM, to create the Monte Carlo model of core loading 51 (end of the core loading). Comparing the normalized power results of the Monte Carlo model to those of the current fuel management system (using HELIOS/ADMARC-H) showed that they agreed reasonably well (within 2%--3% difference on average). Moreover, the reactivity of some fuel elements was calculated with the Monte Carlo model and compared with measured data; the fuel element reactivity results of the Monte Carlo model were likewise in good agreement with the measurements. However, the subsequent analysis of the conversion from core loading 51 to core loading 52 using TRIGSIM showed quite significant differences in individual control rod worths between the Monte Carlo model and the current methodology model. The differences were mainly caused by inconsistent absorber atomic number densities between the two models. Hence, the model of the first operating core (core loading 2) was revised in light of new information about the absorber atomic densities in order to validate the Monte Carlo model against the measured data. With the revised Monte Carlo model, the results agreed better with the measured data. Although TRIGSIM showed good modeling capabilities, its accuracy could be further improved by adopting more advanced algorithms; an upgrade of TRIGSIM was therefore planned. The first task of the upgrade involved improving the temperature modeling capability. The new TRIGSIM was
Development of Monte Carlo depletion code MCDEP
Energy Technology Data Exchange (ETDEWEB)
Kim, K. S.; Kim, K. Y.; Lee, J. C.; Ji, S. K. [KAERI, Taejon (Korea, Republic of)
2003-07-01
Monte Carlo neutron transport calculations have been used to obtain reference solutions in reactor physics analysis. The typical and widely used Monte Carlo transport code is MCNP (Monte Carlo N-Particle Transport Code), developed at Los Alamos National Laboratory. A drawback of Monte Carlo transport codes is their lack of capability for depletion and temperature-dependent calculations. In this research we developed MCDEP (Monte Carlo Depletion Code Package), which adds depletion capability to MCNP. The package integrates MCNP with the depletion module of ORIGEN-2 using the matrix exponential method, and it automates the alternating MCNP and depletion calculations so that users need only prepare the initial MCNP and MCDEP inputs. Depletion chains were simplified for computing-time efficiency and for the treatment of short-lived nuclides without cross section data. The MCDEP results showed that the reactivity and pin power distributions for PWR fuel pins and assemblies are consistent with those of CASMO-3 and HELIOS.
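The matrix exponential depletion step that MCDEP borrows from ORIGEN-2 can be sketched for a toy three-nuclide chain; the chain, rates, and step length below are illustrative assumptions, not ORIGEN-2 data. The Bateman system dN/dt = A·N is advanced over one burnup step as N(t+dt) = exp(A·dt)·N(t):

```python
import math

# Toy chain: nuclide 0 --capture--> nuclide 1 --decay--> nuclide 2
# (rates and step length are assumed values, not ORIGEN-2 data).

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(A, terms=30):
    """Taylor-series matrix exponential (adequate here since ||A*dt|| is small)."""
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    power = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(1, terms):
        power = mat_mul(power, A)
        for i in range(n):
            for j in range(n):
                result[i][j] += power[i][j] / math.factorial(k)
    return result

sigma_phi = 1e-8          # capture rate sigma*flux [1/s] (assumed value)
lam = 1e-6                # decay constant of nuclide 1 [1/s] (assumed value)
dt = 30 * 24 * 3600.0     # one 30-day depletion step

A = [[-sigma_phi * dt, 0.0,       0.0],
     [ sigma_phi * dt, -lam * dt, 0.0],
     [ 0.0,             lam * dt, 0.0]]

N0 = [1.0e24, 0.0, 0.0]                        # initial atom densities
E = expm(A)
N1 = [sum(E[i][j] * N0[j] for j in range(3)) for i in range(3)]

assert abs(sum(N1) - sum(N0)) < 1e12           # closed chain conserves atoms
assert N1[0] < N0[0]                           # parent nuclide is depleted
```

In a production code such as ORIGEN-2 the matrix covers hundreds of nuclides and the exponential is evaluated with more careful numerics, but the per-step structure is the same.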
MCOR - Monte Carlo depletion code for reference LWR calculations
Energy Technology Data Exchange (ETDEWEB)
Puente Espel, Federico, E-mail: fup104@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Tippayakul, Chanatip, E-mail: cut110@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Ivanov, Kostadin, E-mail: kni1@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Misu, Stefan, E-mail: Stefan.Misu@areva.com [AREVA, AREVA NP GmbH, Erlangen (Germany)
2011-04-15
Research highlights: > Introduction of a reference Monte Carlo based depletion code with extended capabilities. > Verification and validation results for MCOR. > Utilization of MCOR for benchmarking deterministic lattice physics (spectral) codes. - Abstract: The MCOR (MCnp-kORigen) code system is a Monte Carlo based depletion system for reference fuel assembly and core calculations. MCOR is designed as an interfacing code that provides depletion capability to the LANL Monte Carlo code by coupling MCNP5 with the AREVA NP depletion code KORIGEN; the physics of both codes is left unchanged. The MCOR code system has been maintained and continuously enhanced since it was initially developed and validated. The coupling was verified by evaluating MCOR against similarly sophisticated code systems such as MONTEBURNS, OCTOPUS and TRIPOLI-PEPIN. After its validation, MCOR was further improved with important features. It offers several valuable capabilities, such as: (a) a predictor-corrector depletion algorithm; (b) utilization of KORIGEN as the depletion module; (c) individual depletion calculation of each burnup zone (no burnup zone grouping is required, which is particularly important for modeling gadolinium rings); and (d) on-line burnup cross-section generation by the Monte Carlo calculation for 88 isotopes, with the KORIGEN libraries for typical PWR and BWR spectra used for the remaining isotopes. Beyond these capabilities, the newest enhancements of MCOR include the ability to execute the MCNP5 calculation in sequential or parallel mode, a user-friendly automatic restart capability, a modified burnup step size evaluation, and a post-processor and test matrix, to name the most important. The article describes the capabilities of the MCOR code system, from its design and development to its latest improvements and further ameliorations. Additionally
Monte carlo depletion analysis of SMART core by MCNAP code
Energy Technology Data Exchange (ETDEWEB)
Jung, Jong Sung; Sim, Hyung Jin; Kim, Chang Hyo [Seoul National Univ., Seoul (Korea, Republic of); Lee, Jung Chan; Ji, Sung Kyun [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)
2001-05-01
A depletion analysis of SMART, a small-sized advanced integral PWR under development by KAERI, is conducted using the Monte Carlo (MC) depletion analysis program MCNAP. The results are compared with those of the CASMO-3/MASTER nuclear analysis. The difference between MASTER and MCNAP in the k{sub eff} prediction is about 600 pcm at BOC and becomes smaller as the core burnup increases. The maximum difference between the two predictions of the fuel assembly (FA) normalized power distribution is about 6.6% radially and 14.5% axially, but the differences are observed to lie within the standard deviation of the MC estimates.
Energy Technology Data Exchange (ETDEWEB)
Liang, Jingang; Wang, Kan; Qiu, Yishu [Dept. of Engineering Physics, LiuQing Building, Tsinghua University, Beijing (China); Chai, Xiao Ming; Qiang, Sheng Long [Science and Technology on Reactor System Design Technology Laboratory, Nuclear Power Institute of China, Chengdu (China)
2016-06-15
Because of prohibitive data storage requirements in large-scale simulations, memory is an obstacle for Monte Carlo (MC) codes in accomplishing pin-wise three-dimensional (3D) full-core calculations, particularly for whole-core depletion analyses. Various kinds of data are evaluated and the total memory requirements are quantified based on the Reactor Monte Carlo (RMC) code, showing that tally data, material data, and isotope densities in depletion are the three major parts of memory storage. The domain decomposition method is investigated as a means of saving memory, by dividing the spatial geometry into domains that are simulated separately by parallel processors. For valid particle tracking during transport simulations, particles need to be communicated between domains. For efficiency, an asynchronous particle communication algorithm is designed and implemented. Furthermore, we couple the domain decomposition method with the MC burnup process, using a consistent domain partition in both the transport and depletion modules. A numerical test of 3D full-core burnup calculations is carried out, indicating that the RMC code, with the domain decomposition method, is capable of pin-wise full-core burnup calculations with millions of depletion regions.
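The domain decomposition idea can be sketched on a toy 1-D problem; everything below (geometry, flight kernel, absorption probability) is an assumed stand-in, not RMC code. The geometry [0, 2) is split into two domains at x = 1.0, and a particle leaving its domain is queued for the neighbour rather than lost:

```python
import random

# Minimal 1-D sketch of domain-decomposed particle tracking.
random.seed(7)

BOUNDS = {0: (0.0, 1.0), 1: (1.0, 2.0)}  # two spatial domains

def fly(x):
    """Toy flight kernel: random step with a slight rightward drift."""
    return x + random.uniform(-0.2, 0.3)

def run(n_particles=200):
    queues = {0: [0.5] * n_particles, 1: []}  # all histories start in domain 0
    absorbed = escaped = 0
    # Repeat until every domain's queue is empty -- this loop stands in
    # for the asynchronous particle communication between processors.
    while queues[0] or queues[1]:
        for dom in (0, 1):
            pending, queues[dom] = queues[dom], []
            lo, hi = BOUNDS[dom]
            for x in pending:
                while lo <= x < hi:
                    x = fly(x)
                    if random.random() < 0.1:   # local absorption
                        absorbed += 1
                        break
                else:
                    if 0.0 <= x < 2.0:          # crossed into the other domain
                        queues[1 - dom].append(x)
                    else:
                        escaped += 1            # left the whole geometry
    return absorbed, escaped

absorbed, escaped = run()
assert absorbed + escaped == 200                # every history is accounted for
```

In the real algorithm each domain lives on its own processor and the hand-offs are asynchronous messages, which is what makes the memory savings compatible with efficient parallel tracking.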
Fensin, Michael Lorne
and supported radiation transport code, for further development of a Monte Carlo-linked depletion methodology which is essential to the future development of advanced reactor technologies that exceed the limitations of current deterministic based methods.
Institute of Scientific and Technical Information of China (English)
XIAO Chang-Ming; GUO Ji-Yuan; HU Ping
2006-01-01
According to the acceptance ratio method, the influence on the depletion interactions between a large sphere and a plate of another closely placed large sphere is studied by Monte Carlo simulation. The numerical results show that both the depletion potential and the depletion force are affected by the presence of the closely placed large sphere: the closer the large sphere is placed to them, the larger the influence. Furthermore, the influence on the depletion interactions from the other large sphere is more sensitive to the angle than to the distance.
Progress on burnup calculation methods coupling Monte Carlo and depletion codes
Energy Technology Data Exchange (ETDEWEB)
Leszczynski, Francisco [Comision Nacional de Energia Atomica, San Carlos de Bariloche, RN (Argentina). Centro Atomico Bariloche]. E-mail: lesinki@cab.cnea.gob.ar
2005-07-01
Several methods of burnup calculation coupling Monte Carlo and depletion codes, investigated and applied by the author in recent years, are described here, and some benchmark results and future possibilities are also analyzed. The methods are: depletion calculations at the cell level with WIMS or other cell codes, using the resulting concentrations of fission products, poisons and actinides in a Monte Carlo calculation for fixed burnup distributions obtained from diffusion codes; the same, but with a method coupling Monte Carlo (MCNP) and a depletion code (ORIGEN) at the cell level to obtain the nuclide concentrations to be used in a full-reactor calculation with a Monte Carlo code; and full calculation of the system with Monte Carlo and depletion codes over several steps. All these methods were applied to different research reactor problems, and some comparisons with experimental results from regular lattices were performed. In this work a summary of these studies is presented, and the advantages and problems found are discussed. A brief description of the methods adopted and of the MCQ system for coupling the MCNP and ORIGEN codes is also included. (author)
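The third scheme, stepwise coupling of transport and depletion over the whole system, reduces to a simple alternating loop. A hedged sketch follows; the two solvers are toy stand-ins (assumed cross sections and a caricature of self-shielding), not calls to MCNP or ORIGEN:

```python
import math

def transport(conc):
    """Stand-in for the Monte Carlo transport step: returns a one-group flux."""
    return 1e14 / (1.0 + 1e-24 * conc["U235"])   # toy self-shielding effect

def deplete(conc, flux, dt, sigma_f=5.8e-22):
    """Stand-in for the depletion step: burns U235, builds fission products."""
    burned = conc["U235"] * (1.0 - math.exp(-sigma_f * flux * dt))
    conc["U235"] -= burned
    conc["FP"] += 2.0 * burned                   # two fragments per fission
    return conc

conc = {"U235": 1.0e21, "FP": 0.0}               # assumed initial inventory
dt = 10 * 24 * 3600.0                            # 10-day burnup steps
for _ in range(5):
    phi = transport(conc)                        # transport with current inventory
    conc = deplete(conc, phi, dt)                # deplete with the updated flux

assert conc["U235"] < 1.0e21 and conc["FP"] > 0.0
```

The essential point of the coupled scheme is visible even in this caricature: the flux used in each depletion step is recomputed from the inventory left by the previous one.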
ORPHEE research reactor: 3D core depletion calculation using Monte-Carlo code TRIPOLI-4®
Damian, F.; Brun, E.
2014-06-01
ORPHEE is a research reactor located at CEA Saclay that aims at producing neutron beams for experiments. It is a pool-type heavy-water reactor whose core is cooled by light water, with a thermal power of 14 MW. The ORPHEE core is 90 cm in height with a cross section of 27x27 cm2. It is loaded with eight fuel assemblies characterized by varying numbers of fuel plates. The fuel plates are composed of aluminium and High Enriched Uranium (HEU). It is a once-through core with a fuel cycle length of approximately 100 Equivalent Full Power Days (EFPD) and a maximum burnup of 40%. Various analyses under way at CEA concern the determination of the core neutronic parameters during irradiation. Given the geometrical complexity of the core and the quasi-absence of thermal feedback in nominal operation, the 3D core depletion calculations are performed using the Monte-Carlo code TRIPOLI-4® [1,2,3]. A preliminary validation of the depletion calculation was performed on a 2D core configuration by comparison with the deterministic transport code APOLLO2 [4]. The analysis showed the reliability of TRIPOLI-4® in calculating a complex core configuration using a large number of depleting regions with a high level of confidence.
Qin, Jianguo; Liu, Rong; Zhu, Tonghua; Zhang, Xinwei; Ye, Bangjiao
2015-01-01
To overcome the problem of inefficient computing time and unreliable results in MCNP5 calculation, a two-step method is adopted to calculate the energy deposition of prompt gamma-rays in detectors for depleted uranium spherical shells under D-T neutrons irradiation. In the first step, the gamma-ray spectrum for energy below 7 MeV is calculated by MCNP5 code; secondly, the electron recoil spectrum in a BC501A liquid scintillator detector is simulated based on EGSnrc Monte Carlo Code with the gamma-ray spectrum from the first step as input. The comparison of calculated results with experimental ones shows that the simulations agree well with experiment in the energy region 0.4-3 MeV for the prompt gamma-ray spectrum and below 4 MeVee for the electron recoil spectrum. The reliability of the two-step method in this work is validated.
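The chained two-step idea can be illustrated on a minimal toy problem, with a flat stand-in spectrum and single-Compton-scatter detector physics replacing the MCNP5 and EGSnrc models (all numbers below are assumptions):

```python
import random

# Step 1 would supply a gamma-ray spectrum; here a flat 1-3 MeV stand-in.
# Step 2 converts sampled photons into recoil-electron energies via Compton
# kinematics (isotropic cos-theta, a simplification of Klein-Nishina).
random.seed(5)
MEC2 = 0.511  # electron rest energy [MeV]

def compton_electron(e_gamma):
    """Sample one recoil-electron energy for a photon of energy e_gamma."""
    cos_t = random.uniform(-1.0, 1.0)
    e_scat = e_gamma / (1.0 + (e_gamma / MEC2) * (1.0 - cos_t))
    return e_gamma - e_scat

recoils = []
for _ in range(10000):
    e_g = random.uniform(1.0, 3.0)       # draw from the step-1 spectrum
    recoils.append(compton_electron(e_g))

# Recoil energies are bounded by the Compton edge of the hardest photon.
edge = 3.0 * (2 * 3.0 / MEC2) / (1 + 2 * 3.0 / MEC2)
assert 0.0 <= min(recoils) and max(recoils) <= edge
```

The real second step (EGSnrc) tracks multiple scattering, escape, and detector resolution, but the spectrum-in, recoil-spectrum-out structure is the same.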
Accelerated GPU based SPECT Monte Carlo simulations
Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris
2016-06-01
Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT), as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99mTc, 111In and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator, respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE Infinia II 3/8" gamma camera. Both simulation platforms yielded similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between the GATE and GGEMS platforms, derived from the same cylindrical phantom acquisition, was between 18 and 27 for the different radioisotopes. Moreover, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency.
CERN Summer Student Report 2016 Monte Carlo Data Base Improvement
Caciulescu, Alexandru Razvan
2016-01-01
During my Summer Student project I worked on improving the Monte Carlo Data Base and MonALISA services for the ALICE Collaboration. The project included learning the infrastructure for tracking and monitoring of the Monte Carlo productions as well as developing a new RESTful API for seamless integration with the JIRA issue tracking framework.
Jian-Guo, Qin; Cai-Feng, Lai; Rong, Liu; Tong-Hua, Zhu; Xin-Wei, Zhang; Bang-Jiao, Ye
2016-03-01
To overcome the problem of inefficient computing time and unreliable results in MCNP5 calculation, a two-step method is adopted to calculate the energy deposition of prompt γ-rays in detectors for depleted uranium spherical shells under D-T neutron irradiation. In the first step, the γ-ray spectrum for energy below 7 MeV is calculated by MCNP5 code; secondly, the electron recoil spectrum in a BC501A liquid scintillator detector is simulated based on EGSnrc Monte Carlo Code with the γ-ray spectrum from the first step as input. The comparison of calculated results with experimental ones shows that the simulations agree well with experiment in the energy region 0.4-3 MeV for the prompt γ-ray spectrum and below 4 MeVee for the electron recoil spectrum. The reliability of the two-step method in this work is validated. Supported by the National Natural Science Foundation of China (91226104) and National Special Magnetic Confinement Fusion Energy Research, China (2015GB108001)
Rundel, R. D.; Butler, D. M.; Stolarski, R. S.
1978-01-01
The paper discusses the development of a concise stratospheric model which uses iteration to obtain coupling between interacting species. The one-dimensional, steady-state, diurnally-averaged model generates diffusion equations with appropriate sources and sinks for species odd oxygen, H2O, H2, CO, N2O, odd nitrogen, CH4, CH3Cl, CCl4, CF2Cl2, CFCl3, and odd chlorine. The model evaluates steady-state perturbations caused by injections of chlorine and NO(x) and may be used to predict ozone depletion. The model is used in a Monte Carlo study of the propagation of reaction-rate imprecisions by calculating an ozone perturbation caused by the addition of chlorine. Since the model is sensitive to only 10 of the more than 50 reaction rates considered, only about 1000 Monte Carlo cases are required to span the space of possible results.
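The Monte Carlo propagation of reaction-rate imprecision described above can be sketched as follows; the response function, nominal rates, and uncertainty factors below are toy assumptions, not values from the 1-D stratospheric model:

```python
import math, random

# Each rate k_i is sampled lognormally with an uncertainty factor f_i,
# and the output perturbation is recomputed for every trial.
random.seed(0)

nominal = [1e-11, 3e-12, 5e-13]     # assumed nominal rate constants
factors = [1.3, 2.0, 1.5]           # assumed 1-sigma uncertainty factors

def ozone_perturbation(k):
    """Toy response: perturbation scales with a ratio of competing rates."""
    return 100.0 * k[0] * k[2] / (k[0] * k[2] + k[1] * 1e-13)

samples = []
for _ in range(1000):               # ~1000 cases, as in the abstract
    k = [k0 * math.exp(random.gauss(0.0, math.log(f)))
         for k0, f in zip(nominal, factors)]
    samples.append(ozone_perturbation(k))

mean = sum(samples) / len(samples)
spread = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
assert 0.0 < mean < 100.0 and spread > 0.0
```

The width of the resulting sample distribution is the propagated imprecision; because only a handful of rates dominate the sensitivity, a modest number of cases suffices to span the space of possible results.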
Depleting methyl bromide residues in soil by reaction with bases
Despite generally being considered the most effective soil fumigant, methyl bromide (MeBr) use is being phased out because its emissions from soil can lead to stratospheric ozone depletion. However, a large amount is still currently used due to Critical Use Exemptions. As strategies for reducing the...
DEFF Research Database (Denmark)
Stenbæk, D S; Einarsdottir, H S; Goregliad-Fjaellingsdal, T;
2016-01-01
Acute Tryptophan Depletion (ATD) is a dietary method used to modulate central 5-HT to study the effects of temporarily reduced 5-HT synthesis. The aim of this study is to evaluate a novel method of ATD using a gelatin-based collagen peptide (CP) mixture. We administered CP-Trp or CP+Trp mixtures ...... effects of CP-Trp compared to CP+Trp were observed. The transient increase in plasma Trp after CP+Trp may impair comparison to the CP-Trp and we therefore recommend in future studies to use a smaller dose of Trp supplement to the CP mixture....
Satellite-based estimates of groundwater depletion in India
Rodell, M.; Velicogna, I; Famiglietti, JS
2009-01-01
Groundwater is a primary source of fresh water in many parts of the world. Some regions are becoming overly dependent on it, consuming groundwater faster than it is naturally replenished and causing water tables to decline unremittingly. Indirect evidence suggests that this is the case in northwest India, but there has been no regional assessment of the rate of groundwater depletion. Here we use terrestrial water storage-change observations from the NASA Gravity Recovery and Climate Experiment ...
Clinical dosimetry in photon radiotherapy. A Monte Carlo based investigation
International Nuclear Information System (INIS)
Practical clinical dosimetry is a fundamental step within the radiation therapy process and aims at quantifying the absorbed radiation dose within a 1-2% uncertainty. To achieve this level of accuracy, corrections are needed for the calibrated, air-filled ionization chambers used for dose measurement. The correction procedures are based on the Spencer-Attix cavity theory and are defined in current dosimetry protocols. Energy-dependent corrections for deviations from the calibration beams account for the changed ionization chamber response in the treatment beam. The corrections applied are usually based on semi-analytical models or measurements and are generally hard to determine, since their magnitude is only a few percent or even less. Furthermore, the corrections are defined for fixed geometrical reference conditions and do not apply to non-reference conditions in modern radiotherapy applications. The stochastic Monte Carlo method for the simulation of radiation transport is becoming a valuable tool in the field of Medical Physics. As a suitable tool for calculating these corrections with high accuracy, such simulations enable the investigation of ionization chambers under various conditions. The aim of this work is the consistent investigation of ionization chamber dosimetry in photon radiation therapy using Monte Carlo methods. Monte Carlo systems now exist that in principle enable the accurate calculation of ionization chamber response. Still, their direct use for studies of this type is limited by the long calculation times needed for a meaningful result with the small statistical uncertainty inherent to every Monte Carlo result. Besides heavy use of computer hardware, variance reduction techniques can be applied to reduce the required calculation time. Methods for increasing the efficiency of the simulations were developed and incorporated into a modern, established Monte Carlo simulation environment.
Quantitative Monte Carlo-based holmium-166 SPECT reconstruction
Energy Technology Data Exchange (ETDEWEB)
Elschot, Mattijs; Smits, Maarten L. J.; Nijsen, Johannes F. W.; Lam, Marnix G. E. H.; Zonnenberg, Bernard A.; Bosch, Maurice A. A. J. van den; Jong, Hugo W. A. M. de [Department of Radiology and Nuclear Medicine, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX Utrecht (Netherlands); Viergever, Max A. [Image Sciences Institute, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX Utrecht (Netherlands)
2013-11-15
Purpose: Quantitative imaging of the radionuclide distribution is of increasing interest for microsphere radioembolization (RE) of liver malignancies, to aid treatment planning and dosimetry. For this purpose, holmium-166 ({sup 166}Ho) microspheres have been developed, which can be visualized with a gamma camera. The objective of this work is to develop and evaluate a new reconstruction method for quantitative {sup 166}Ho SPECT, including Monte Carlo-based modeling of photon contributions from the full energy spectrum.Methods: A fast Monte Carlo (MC) simulator was developed for simulation of {sup 166}Ho projection images and incorporated in a statistical reconstruction algorithm (SPECT-fMC). Photon scatter and attenuation for all photons sampled from the full {sup 166}Ho energy spectrum were modeled during reconstruction by Monte Carlo simulations. The energy- and distance-dependent collimator-detector response was modeled using precalculated convolution kernels. Phantom experiments were performed to quantitatively evaluate image contrast, image noise, count errors, and activity recovery coefficients (ARCs) of SPECT-fMC in comparison with those of an energy window-based method for correction of down-scattered high-energy photons (SPECT-DSW) and a previously presented hybrid method that combines MC simulation of photopeak scatter with energy window-based estimation of down-scattered high-energy contributions (SPECT-ppMC+DSW). Additionally, the impact of SPECT-fMC on whole-body recovered activities (A{sup est}) and estimated radiation absorbed doses was evaluated using clinical SPECT data of six {sup 166}Ho RE patients.Results: At the same noise level, SPECT-fMC images showed substantially higher contrast than SPECT-DSW and SPECT-ppMC+DSW in spheres ≥17 mm in diameter. The count error was reduced from 29% (SPECT-DSW) and 25% (SPECT-ppMC+DSW) to 12% (SPECT-fMC). ARCs in five spherical volumes of 1.96–106.21 ml were improved from 32%–63% (SPECT-DSW) and 50%–80
GPU-Monte Carlo based fast IMRT plan optimization
Directory of Open Access Journals (Sweden)
Yongbao Li
2014-03-01
Purpose: Intensity-modulated radiation treatment (IMRT) plan optimization needs pre-calculated beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computation speed. However, inaccurate beamlet dose distributions, particularly in cases with high levels of inhomogeneity, may mislead the optimization and hinder the resulting plan quality. It is desirable to use Monte Carlo (MC) methods for beamlet dose calculations, yet the long computational time of repeated dose calculations for a large number of beamlets prevents this application. Our objective is to integrate a GPU-based MC dose engine into lung IMRT optimization using a novel two-step workflow. Methods: A GPU-based MC code, gDPM, is used. Each particle is tagged with the index of the beamlet from which the source particle originates, and deposited dose is stored separately per beamlet based on this index. Due to the limited GPU memory size, a pyramid space is allocated for each beamlet, and dose outside this space is neglected. A two-step optimization workflow is proposed for fast MC-based optimization. In the first step, a rough dose calculation is conducted with only a small number of particles per beamlet, and plan optimization follows to obtain an approximate fluence map. In the second step, more accurate beamlet doses are calculated, with the number of particles sampled for a beamlet proportional to the intensity determined previously; a second round of optimization is then conducted, yielding the final result. Results: For a lung case with 5317 beamlets, 10^5 particles per beamlet in the first round and 10^8 particles per beam in the second round are enough to obtain a good plan quality. The total simulation time is 96.4 sec. Conclusion: A fast GPU-based MC dose calculation method along with a novel two-step optimization workflow has been developed. The high efficiency allows the use of MC for IMRT optimizations.
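The two-step workflow can be caricatured on a toy one-dimensional problem; the dose model, "optimizer", and particle budgets below are hypothetical stand-ins, not gDPM:

```python
import random

# Step 1: rough per-beamlet doses with few histories, then a fluence map.
# Step 2: history budget proportional to the optimized intensities.
random.seed(3)

N_BEAMLETS, TARGET = 4, 10.0

def mc_dose(beamlet, n_particles):
    """Toy MC estimate of dose per unit fluence for one beamlet."""
    hits = sum(random.random() < 0.2 + 0.1 * beamlet for _ in range(n_particles))
    return hits / n_particles

def optimize(dose_per_fluence):
    """Trivial 'optimization': fluence inversely proportional to dose."""
    return [TARGET / (N_BEAMLETS * d) for d in dose_per_fluence]

# Step 1: rough doses, approximate fluence map.
rough = [mc_dose(b, 100) for b in range(N_BEAMLETS)]
fluence = optimize(rough)

# Step 2: budget proportional to intensity, then the final optimization.
total_budget = 100_000
budget = [max(1, int(total_budget * f / sum(fluence))) for f in fluence]
accurate = [mc_dose(b, n) for b, n in zip(range(N_BEAMLETS), budget)]
final_fluence = optimize(accurate)

total_dose = sum(d * f for d, f in zip(accurate, final_fluence))
assert abs(total_dose - TARGET) < 1e-6
```

The payoff of the real scheme is the same as here: expensive histories are concentrated on the beamlets whose intensities actually matter to the plan.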
GPU based Monte Carlo for PET image reconstruction: parameter optimization
International Nuclear Information System (INIS)
This paper presents the optimization of a fully Monte Carlo (MC) based iterative image reconstruction of Positron Emission Tomography (PET) measurements. With our MC reconstruction method all the physical effects in a PET system are taken into account, thus superior image quality is achieved in exchange for increased computational effort. The method is feasible because we utilize the enormous processing power of Graphical Processing Units (GPUs) to solve the inherently parallel problem of photon transport. The MC approach regards the simulated positron decays as samples in the mathematical sums required by the iterative reconstruction algorithm, so to complement the fast architecture, our optimization work focuses on the number of simulated positron decays required to obtain sufficient image quality. We have achieved significant results in determining the optimal number of samples for arbitrary measurement data; this allows the best image quality to be achieved with the least possible computational effort. Based on this research, recommendations can be given for the effective partitioning of computational effort across the iterations in time-limited reconstructions. (author)
The New MCNP6 Depletion Capability
Energy Technology Data Exchange (ETDEWEB)
Fensin, Michael Lorne [Los Alamos National Laboratory; James, Michael R. [Los Alamos National Laboratory; Hendricks, John S. [Los Alamos National Laboratory; Goorley, John T. [Los Alamos National Laboratory
2012-06-19
The first MCNP based inline Monte Carlo depletion capability was officially released from the Radiation Safety Information and Computational Center as MCNPX 2.6.0. Both the MCNP5 and MCNPX codes have historically provided a successful combinatorial geometry based, continuous energy, Monte Carlo radiation transport solution for advanced reactor modeling and simulation. However, due to separate development pathways, useful simulation capabilities were dispersed between both codes and not unified in a single technology. MCNP6, the next evolution in the MCNP suite of codes, now combines the capability of both simulation tools, as well as providing new advanced technology, in a single radiation transport code. We describe here the new capabilities of the MCNP6 depletion code dating from the official RSICC release MCNPX 2.6.0, reported previously, to the now current state of MCNP6. NEA/OECD benchmark results are also reported. The MCNP6 depletion capability enhancements beyond MCNPX 2.6.0 reported here include: (1) new performance enhancing parallel architecture that implements both shared and distributed memory constructs; (2) enhanced memory management that maximizes calculation fidelity; and (3) improved burnup physics for better nuclide prediction. MCNP6 depletion enables complete, relatively easy-to-use depletion calculations in a single Monte Carlo code. The enhancements described here help provide a powerful capability as well as dictate a path forward for future development to improve the usefulness of the technology.
Image based Monte Carlo Modeling for Computational Phantom
Cheng, Mengyun; Wang, Wen; Zhao, Kai; Fan, Yanchang; Long, Pengcheng; Wu, Yican
2014-06-01
The evaluation of the effects of ionizing radiation and the risk of radiation exposure to the human body has become one of the most important issues in the radiation protection and radiotherapy fields, as it helps avoid unnecessary radiation and decrease harm to the human body. In order to accurately evaluate the dose to the human body, it is necessary to construct realistic computational phantoms. However, manual description and verification of models for Monte Carlo (MC) simulation are tedious, error-prone and time-consuming; in addition, it is difficult to locate and fix geometry errors, and difficult to describe material information and assign it to cells. MCAM (CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport Simulation) was developed as an interface program to achieve both CAD- and image-based automatic modeling by the FDS Team (Advanced Nuclear Energy Research Team, http://www.fds.org.cn). The advanced version (Version 6) of MCAM can automatically convert CT/segmented sectioned images into computational phantoms such as MCNP models. The image-based automatic modeling program (MCAM 6.0) has been tested with several medical image and sectioned image sets, and it has been applied in the construction of Rad-HUMAN. Following manual segmentation and 3D reconstruction, a whole-body computational phantom of a Chinese adult female, called Rad-HUMAN, was created using MCAM 6.0 from sectioned images of a Chinese visible human dataset. Rad-HUMAN contains 46 organs/tissues and faithfully represents the average anatomical characteristics of the Chinese female. The dose conversion coefficients (Dt/Ka) from kerma free-in-air to absorbed dose of Rad-HUMAN were calculated. Rad-HUMAN can be applied to predict and evaluate dose distributions in a Treatment Planning System (TPS), as well as radiation exposure of the human body in radiation protection.
Dieudonne, Cyril; Dumonteil, Eric; Malvagi, Fausto; M'Backé Diop, Cheikh
2014-06-01
For several years, Monte Carlo burnup/depletion codes have appeared which couple Monte Carlo codes, to simulate the neutron transport, with deterministic methods, which handle the medium depletion due to the neutron flux. Solving the Boltzmann and Bateman equations in such a way makes it possible to track fine 3-dimensional effects and to get rid of the multi-group hypotheses made by deterministic solvers. The counterpart is the prohibitive calculation time due to the Monte Carlo solver being called at each time step. In this paper we present a methodology to avoid these repetitive and time-expensive Monte Carlo simulations and to replace them by perturbation calculations: indeed, the different burnup steps may be seen as perturbations of the isotopic concentrations of an initial Monte Carlo simulation. First, we present this method and provide details on the perturbative technique used, namely correlated sampling. Second, the implementation of this method in the TRIPOLI-4® code is discussed, as well as the precise calculation scheme able to bring an important speed-up of the depletion calculation. Finally, this technique is used to calculate the depletion of a PWR-like assembly, studied at the beginning of its cycle. After validating the method against a reference calculation, we show that it can speed up standard Monte Carlo depletion codes by nearly an order of magnitude.
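A minimal sketch of the correlated-sampling idea may clarify the abstract above (this is an illustration only, not the TRIPOLI-4 implementation): one set of samples drawn from an unperturbed density is reused to estimate a perturbed expectation through likelihood-ratio weights, analogous to reusing a single transport simulation across perturbed burnup steps. The exponential densities and rate values are assumptions chosen for illustration.

```python
import math
import random

# Toy correlated sampling: estimate E[X] under an unperturbed rate lam0
# and, from the SAME samples, under a perturbed rate lam1, by weighting
# each sample with the likelihood ratio of the two densities.
def correlated_estimates(n, lam0, lam1, rng):
    base, pert = 0.0, 0.0
    for _ in range(n):
        x = rng.expovariate(lam0)  # sample from the unperturbed density
        # likelihood-ratio weight: perturbed pdf / unperturbed pdf
        w = (lam1 * math.exp(-lam1 * x)) / (lam0 * math.exp(-lam0 * x))
        base += x
        pert += w * x              # reweighted (correlated) score
    return base / n, pert / n

rng = random.Random(0)
m0, m1 = correlated_estimates(200_000, 1.0, 1.2, rng)
# Expected values: 1/1.0 = 1.0 (unperturbed) and 1/1.2 ≈ 0.833 (perturbed)
```

Because both estimates share the same random samples, their difference has much lower variance than two independent runs, which is the property the depletion scheme exploits.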
Monte Carlo-based simulation of dynamic jaws tomotherapy
International Nuclear Information System (INIS)
Purpose: Original TomoTherapy systems may involve a trade-off between conformity and treatment speed, the user being limited to three slice widths (1.0, 2.5, and 5.0 cm). This could be overcome by allowing the jaws to define arbitrary fields, including very small slice widths (<1 cm), which are challenging for a beam model. The aim of this work was to incorporate the dynamic jaws feature into a Monte Carlo (MC) model called TomoPen, based on the MC code PENELOPE, previously validated for the original TomoTherapy system. Methods: To keep the general structure of TomoPen and its efficiency, the simulation strategy introduces several techniques: (1) weight modifiers to account for any jaw settings using only the 5 cm phase-space file; (2) a simplified MC based model called FastStatic to compute the modifiers faster than pure MC; (3) actual simulation of dynamic jaws. Weight modifiers computed with both FastStatic and pure MC were compared. Dynamic jaws simulations were compared with the convolution/superposition (C/S) of TomoTherapy in the "cheese" phantom for a plan with two targets longitudinally separated by a gap of 3 cm. Optimization was performed in two modes: asymmetric jaws-constant couch speed ("running start stop," RSS) and symmetric jaws-variable couch speed ("symmetric running start stop," SRSS). Measurements with EDR2 films were also performed for RSS for the formal validation of TomoPen with dynamic jaws. Results: Weight modifiers computed with FastStatic were equivalent to pure MC within statistical uncertainties (0.5% for three standard deviations). Excellent agreement was achieved between TomoPen and C/S for both asymmetric jaw opening/constant couch speed and symmetric jaw opening/variable couch speed, with deviations well within 2%/2 mm. For the RSS procedure, agreement between C/S and measurements was within 2%/2 mm for 95% of the points and 3%/3 mm for 98% of the points, where dose is greater than 30% of the prescription dose (gamma analysis
MCHITS: Monte Carlo based Method for Hyperlink Induced Topic Search on Networks
Directory of Open Access Journals (Sweden)
Zhaoyan Jin
2013-10-01
Hyperlink Induced Topic Search (HITS) is among the most authoritative and most widely used personalized ranking algorithms on networks. The HITS algorithm ranks nodes on networks by power iteration, which has a high computational cost. This paper models the HITS algorithm with the Monte Carlo method and proposes Monte Carlo based algorithms for the HITS computation. Theoretical analysis and experiments show that Monte Carlo based approximate computation of the HITS ranking substantially reduces computing resources while maintaining high accuracy, and is significantly better than related works.
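The Monte Carlo approximation of HITS can be sketched roughly as follows (a hypothetical mini-example, not the paper's algorithm or data): instead of full power iteration, authority scores are approximated by tallying the endpoints of short alternating random walks that repeatedly follow an out-link and then an in-link.

```python
import random

# Hypothetical toy graph; edges (hub -> authority) are illustrative only.
edges = [("a", "b"), ("a", "c"), ("d", "c"), ("d", "b"), ("e", "c")]
out_links, in_links = {}, {}
for u, v in edges:
    out_links.setdefault(u, []).append(v)
    in_links.setdefault(v, []).append(u)

def mc_authority(n_walks, walk_len, rng):
    """Estimate authority scores by endpoint counts of alternating walks."""
    counts = {}
    hubs = list(out_links)
    for _ in range(n_walks):
        node = rng.choice(hubs)
        for _ in range(walk_len):
            node = rng.choice(out_links[node])  # hub -> authority step
            auth = node
            node = rng.choice(in_links[node])   # authority -> hub step
        counts[auth] = counts.get(auth, 0) + 1
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

rng = random.Random(0)
scores = mc_authority(20_000, 3, rng)
# "c" has the most in-links, so it should receive the highest score
```

Each walk costs O(walk_len) regardless of network size, which is where the resource savings over full power iteration come from.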
An empirical formula based on Monte Carlo simulation for diffuse reflectance from turbid media
Gnanatheepam, Einstein; Aruna, Prakasa Rao; Ganesan, Singaravelu
2016-03-01
Diffuse reflectance spectroscopy has been widely used in diagnostic oncology and in the characterization of laser-irradiated tissue. However, an accurate yet simple analytical equation for estimating diffuse reflectance from turbid media still does not exist. In this work, a diffuse reflectance lookup table for a range of tissue optical properties was generated using Monte Carlo simulation. Based on the generated Monte Carlo lookup table, an empirical formula for diffuse reflectance was developed using a surface fitting method. The variance between the Monte Carlo lookup table surface and the surface obtained from the proposed empirical formula is less than 1%. The proposed empirical formula may be used for modeling diffuse reflectance from tissue.
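The lookup-table-plus-surface-fit workflow can be sketched as follows. Note the assumptions: the tabulated reflectance here comes from a placeholder closed-form stand-in, not a real Monte Carlo run, and the polynomial surface form is illustrative, not the paper's empirical formula.

```python
import numpy as np

# Step 1: tabulate reflectance R over a grid of optical properties.
mua = np.linspace(0.01, 1.0, 20)   # absorption coefficient [1/mm]
mus = np.linspace(0.5, 5.0, 20)    # reduced scattering coefficient [1/mm]
A, S = np.meshgrid(mua, mus)
R = S / (S + 8.0 * A)              # placeholder model standing in for MC data

# Step 2: fit an empirical surface R ~ c0 + c1*mu_a + c2*mu_s' + c3*mu_a*mu_s'
# by linear least squares over the whole lookup table.
X = np.column_stack([np.ones(A.size), A.ravel(), S.ravel(), (A * S).ravel()])
coef, *_ = np.linalg.lstsq(X, R.ravel(), rcond=None)
resid = R.ravel() - X @ coef       # pointwise deviation of fit from table
```

Once fitted, evaluating the polynomial replaces a table interpolation, which is the convenience the empirical formula provides.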
Low loss and high speed silicon optical modulator based on a lateral carrier depletion structure.
Marris-Morini, Delphine; Vivien, Laurent; Fédéli, Jean Marc; Cassan, Eric; Lyan, Philippe; Laval, Suzanne
2008-01-01
A high speed and low loss silicon optical modulator based on carrier depletion has been made using an original structure consisting of a p-doped slit embedded in the intrinsic region of a lateral pin diode. This design allows a good overlap between the optical mode and carrier density variations. Insertion loss of 5 dB has been measured with a contrast ratio of 14 dB for a 3 dB bandwidth of 10 GHz. PMID:18521165
Implementation of a Monte Carlo based inverse planning model for clinical IMRT with MCNP code
He, Tongming Tony
In IMRT inverse planning, inaccurate dose calculations and limitations in optimization algorithms introduce both systematic and convergence errors into treatment plans. The goal of this work is to practically implement a Monte Carlo based inverse planning model for clinical IMRT. The intention is to minimize both types of error in inverse planning and obtain treatment plans with better clinical accuracy than non-Monte Carlo based systems. The strategy is to calculate the dose matrices of small beamlets with a Monte Carlo based method. Beamlet intensities are then optimized on the calculated dose data using an optimization algorithm that is capable of escaping local minima and prevents possible premature convergence. The MCNP 4B Monte Carlo code is improved to perform fast particle transport and dose tallying in lattice cells by adopting a selective transport and tallying algorithm. Efficient dose matrix calculation for small beamlets is made possible by adopting a scheme that allows concurrent calculation of multiple beamlets of a single port. A finite-sized point source (FSPS) beam model is introduced for easy and accurate beam modeling. A DVH based objective function and a parallel-platform based algorithm are developed for the optimization of intensities. The calculation accuracy of the improved MCNP code and the FSPS beam model is validated by dose measurements in phantoms. Agreement better than 1.5% or 0.2 cm has been achieved. Applications of the implemented model to clinical cases of brain, head/neck, lung, spine, pancreas and prostate have demonstrated the feasibility and capability of Monte Carlo based inverse planning for clinical IMRT. Dose distributions of selected treatment plans from a commercial non-Monte Carlo based system are evaluated in comparison with Monte Carlo based calculations. Systematic errors of up to 12% in tumor doses and up to 17% in critical structure doses have been observed. The clinical importance of Monte Carlo based
Application of Photon Transport Monte Carlo Module with GPU-based Parallel System
Energy Technology Data Exchange (ETDEWEB)
Park, Chang Je [Sejong University, Seoul (Korea, Republic of); Shon, Heejeong [Golden Eng. Co. LTD, Seoul (Korea, Republic of); Lee, Donghak [CoCo Link Inc., Seoul (Korea, Republic of)
2015-05-15
In general, it takes a lot of computing time to get reliable results from Monte Carlo simulations, especially for deep penetration problems with a thick shielding medium. To mitigate this weakness of Monte Carlo methods, many variance reduction algorithms have been proposed, including geometry splitting and Russian roulette, weight windows, exponential transform, and forced collision. At the same time, advanced computing hardware such as GPU (Graphics Processing Unit)-based parallel machines is used to obtain better performance from the Monte Carlo simulation. A GPU is much easier to access and to manage than a CPU cluster system, and it has also become less expensive thanks to advances in computer technology. Accordingly, many engineering fields have adopted GPU-based massively parallel computation techniques, including the GPU-based photon transport Monte Carlo method applied here. It provides an almost 30-fold speedup without any optimization, and an almost 200-fold speedup is expected with a fully supported GPU system. It is expected that a GPU system with an advanced parallelization algorithm will contribute successfully to the development of Monte Carlo modules that require quick and accurate simulations.
A Monte-Carlo-Based Network Method for Source Positioning in Bioluminescence Tomography
Zhun Xu; Xiaolei Song; Xiaomeng Zhang; Jing Bai
2007-01-01
We present an approach based on an improved Levenberg-Marquardt (LM) backpropagation (BP) neural network algorithm to estimate the light source position in bioluminescent imaging. For solving the forward problem, the table-based random sampling algorithm (TBRS), a fast Monte Carlo simulation method ...
Energy Technology Data Exchange (ETDEWEB)
Ondis, L.A., II; Tyburski, L.J.; Moskowitz, B.S.
2000-03-01
The RCP01 Monte Carlo program is used to analyze many geometries of interest in nuclear design and analysis of light water moderated reactors such as the core in its pressure vessel with complex piping arrangement, fuel storage arrays, shipping and container arrangements, and neutron detector configurations. Written in FORTRAN and in use on a variety of computers, it is capable of estimating steady state neutron or photon reaction rates and neutron multiplication factors. The energy range covered in neutron calculations is that relevant to the fission process and subsequent slowing-down and thermalization, i.e., 20 MeV to 0 eV. The same energy range is covered for photon calculations.
Monte-Carlo based prediction of radiochromic film response for hadrontherapy dosimetry
International Nuclear Information System (INIS)
A model has been developed to calculate the response of MD-55-V2 radiochromic film to ion irradiation. This model is based on the photon film response and on film saturation by high local energy deposition computed by Monte Carlo simulation. We have studied the response of the film to photon irradiation and propose a calculation method for hadron beams.
Shape based Monte Carlo code for light transport in complex heterogeneous tissues
Margallo-Balbás, E.; French, P.J.
2007-01-01
A Monte Carlo code for the calculation of light transport in heterogeneous scattering media is presented together with its validation. Triangle meshes are used to define the interfaces between different materials, in contrast with techniques based on individual volume elements. This approach allows
MOx benchmark calculations by deterministic and Monte Carlo codes
International Nuclear Information System (INIS)
Highlights: ► MOx based depletion calculation. ► Methodology to create continuous-energy pseudo cross sections for a lump of minor fission products. ► Mass inventory comparison between deterministic and Monte Carlo codes. ► Higher deviations were found for several isotopes. - Abstract: A depletion calculation benchmark devoted to MOx fuel is an ongoing objective of the OECD/NEA WPRS, following the study of depletion calculations concerning UOx fuels. The objective of the proposed benchmark is to compare existing depletion calculations obtained with various codes and data libraries applied to fuel and back-end cycle configurations. In the present work, the deterministic code NEWT/ORIGEN-S of the SCALE6 code package and the Monte Carlo based code MONTEBURNS2.0 were used to calculate the masses of inventory isotopes. The methodology for applying MONTEBURNS2.0 to this benchmark is also presented. The results from both codes were then compared.
A lattice-based Monte Carlo evaluation of Canada Deuterium Uranium-6 safety parameters
Energy Technology Data Exchange (ETDEWEB)
Kim, Yong Hee; Hartanto, Donny; Kim, Woo Song [Dept. of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon (Korea, Republic of)
2016-06-15
Important safety parameters such as the fuel temperature coefficient (FTC) and the power coefficient of reactivity (PCR) of the CANada Deuterium Uranium (CANDU-6) reactor have been evaluated using the Monte Carlo method. For accurate analysis of the parameters, the Doppler broadening rejection correction scheme was implemented in the MCNPX code to account for the thermal motion of the heavy uranium-238 nucleus in the neutron-U scattering reactions. In this work, a standard fuel lattice has been modeled and the fuel is depleted using MCNPX. The FTC value is evaluated for several burnup points including the mid-burnup representing a near-equilibrium core. The Doppler effect has been evaluated using several cross-section libraries such as ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1.1, and JENDL-4.0. The PCR value is also evaluated at mid-burnup conditions to characterize the safety features of an equilibrium CANDU-6 reactor. To improve the reliability of the Monte Carlo calculations, we considered a huge number of neutron histories in this work and the standard deviation of the k-infinity values is only 0.5-1 pcm.
Jeraj, Robert; Keall, Paul
2000-12-01
The effect of the statistical uncertainty, or noise, in inverse treatment planning for intensity modulated radiotherapy (IMRT) based on Monte Carlo dose calculation was studied. Sets of Monte Carlo beamlets were calculated to give uncertainties at Dmax ranging from 0.2% to 4% for a lung tumour plan. The weights of these beamlets were optimized using a previously described procedure based on a simulated annealing optimization algorithm. Several different objective functions were used. It was determined that the use of Monte Carlo dose calculation in inverse treatment planning introduces two errors in the calculated plan. In addition to the statistical error due to the statistical uncertainty of the Monte Carlo calculation, a noise convergence error also appears. For the statistical error it was determined that apparently successfully optimized plans with a noisy dose calculation (3% 1σ at Dmax ), which satisfied the required uniformity of the dose within the tumour, showed as much as 7% underdose when recalculated with a noise-free dose calculation. The statistical error is larger towards the tumour and is only weakly dependent on the choice of objective function. The noise convergence error appears because the optimum weights are determined using a noisy calculation, which is different from the optimum weights determined for a noise-free calculation. Unlike the statistical error, the noise convergence error is generally larger outside the tumour, is case dependent and strongly depends on the required objectives.
Monte Carlo vs. Pencil Beam based optimization of stereotactic lung IMRT
Weinmann Martin; Söhn Matthias; Muzik Jan; Sikora Marcin; Alber Markus
2009-01-01
Abstract Background The purpose of the present study is to compare finite size pencil beam (fsPB) and Monte Carlo (MC) based optimization of lung intensity-modulated stereotactic radiotherapy (lung IMSRT). Materials and methods A fsPB and a MC algorithm as implemented in a biological IMRT planning system were validated by film measurements in a static lung phantom. Then, they were applied for static lung IMSRT planning based on three different geometrical patient models (one phase static CT, ...
Development of the point-depletion code DEPTH
International Nuclear Information System (INIS)
Highlights: ► The DEPTH code has been developed for large-scale depletion systems. ► DEPTH uses a data library which is convenient to couple with MC codes. ► TTA and matrix exponential methods are implemented and compared. ► DEPTH is able to calculate integral quantities based on the matrix inverse. ► Code-to-code comparisons prove the accuracy and efficiency of DEPTH. -- Abstract: Burnup analysis is an important aspect of reactor physics, which is generally done by coupling transport calculations with point-depletion calculations. DEPTH is a newly developed point-depletion code for handling large burnup depletion systems and detailed depletion chains. For better coupling with Monte Carlo transport codes, DEPTH uses data libraries based on the combination of ORIGEN-2 and ORIGEN-S and allows users to assign problem-dependent libraries for each depletion step. DEPTH implements various algorithms for treating stiff depletion systems, including the Transmutation Trajectory Analysis (TTA), the Chebyshev Rational Approximation Method (CRAM), the Quadrature-based Rational Approximation Method (QRAM) and the Laguerre Polynomial Approximation Method (LPAM). Three different modes are supported by DEPTH to execute decay, constant-flux and constant-power calculations. In addition to obtaining instantaneous quantities such as radioactivity, decay heat and reaction rates, DEPTH is able to calculate integral quantities with a time-integrated solver. Calculations compared with ORIGEN-2 prove the validity of DEPTH for point-depletion calculations. The accuracy and efficiency of the depletion algorithms are also discussed. In addition, an actual pin-cell burnup case is calculated to illustrate the DEPTH code performance in coupling with the RMC Monte Carlo code.
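The core of any point-depletion solver is the matrix form of the Bateman equations, N(t) = exp(Mt) N(0). A minimal sketch for a hypothetical two-nuclide chain is shown below; the decay constant is an illustrative value, and the truncated Taylor series stands in for the production-grade TTA/CRAM solvers named in the abstract (which are designed for stiff, large systems).

```python
import numpy as np

# Hypothetical chain A -> B (B stable). The Bateman matrix M couples the
# nuclide densities: dN/dt = M N, solved as N(t) = expm(M t) N(0).
lam = 0.1  # decay constant of A [1/s], illustrative
M = np.array([[-lam, 0.0],
              [ lam, 0.0]])

def expm_taylor(A, terms=30):
    """Truncated Taylor series for the matrix exponential. Adequate for
    this small, non-stiff example; DEPTH-class codes use TTA/CRAM instead."""
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

t = 5.0
N0 = np.array([1.0, 0.0])        # start with pure A
N = expm_taylor(M * t) @ N0
# Analytic Bateman solution for comparison
NA = np.exp(-lam * t)            # remaining A
NB = 1.0 - NA                    # accumulated B (mass conservation)
```

For a realistic depletion chain M has thousands of nuclides and eigenvalues spanning many orders of magnitude, which is why the rational-approximation methods listed above are needed in place of a plain Taylor series.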
Effects of CT based Voxel Phantoms on Dose Distribution Calculated with Monte Carlo Method
Chen, Chaobin; Huang, Qunying; Wu, Yican
2005-04-01
A few CT-based voxel phantoms were produced to investigate the sensitivity of Monte Carlo simulations of x-ray beam and electron beam to the proportions of elements and the mass densities of the materials used to express the patient's anatomical structure. The human body can be well outlined by air, lung, adipose, muscle, soft bone and hard bone to calculate the dose distribution with Monte Carlo method. The effects of the calibration curves established by using various CT scanners are not clinically significant based on our investigation. The deviation from the values of cumulative dose volume histogram derived from CT-based voxel phantoms is less than 1% for the given target.
Valence-dependent influence of serotonin depletion on model-based choice strategy.
Worbe, Y; Palminteri, S; Savulich, G; Daw, N D; Fernandez-Egea, E; Robbins, T W; Voon, V
2016-05-01
Human decision-making arises from both reflective and reflexive mechanisms, which underpin goal-directed and habitual behavioural control. Computationally, these two systems of behavioural control have been described by different learning algorithms, model-based and model-free learning, respectively. Here, we investigated the effect of diminished serotonin (5-hydroxytryptamine) neurotransmission using dietary tryptophan depletion (TD) in healthy volunteers on the performance of a two-stage decision-making task, which allows discrimination between model-free and model-based behavioural strategies. A novel version of the task was used, which not only examined choice balance for monetary reward but also for punishment (monetary loss). TD impaired goal-directed (model-based) behaviour in the reward condition, but promoted it under punishment. This effect on appetitive and aversive goal-directed behaviour is likely mediated by alteration of the average reward representation produced by TD, which is consistent with previous studies. Overall, the major implication of this study is that serotonin differentially affects goal-directed learning as a function of affective valence. These findings are relevant for a further understanding of psychiatric disorders associated with breakdown of goal-directed behavioural control such as obsessive-compulsive disorders or addictions. PMID:25869808
Simulation model based on Monte Carlo method for traffic assignment in local area road network
Institute of Scientific and Technical Information of China (English)
Yuchuan DU; Yuanjing GENG; Lijun SUN
2009-01-01
For a local area road network, the available traffic data are the flow volumes at key intersections, not the complete OD matrix. Considering the characteristics and the data availability of a local area road network, a new model for traffic assignment based on Monte Carlo simulation of intersection turning movements is provided in this paper. Because of its good stability over time, the turning ratio is adopted as the key parameter of this model. The formulation of the local area road network assignment problem is proposed under the assumption of random turning behavior. The traffic assignment model based on the Monte Carlo method has been used in traffic analysis for an actual urban road network. Comparison of surveyed traffic flow data with the flows determined by the model verifies the applicability and validity of the proposed methodology.
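The turning-ratio sampling idea can be sketched as follows (the node names, network topology and ratios are illustrative assumptions, not data from the paper): each simulated vehicle is propagated through the network by randomly sampling its turning movement at every intersection according to the observed turning ratios, and link flows are accumulated as counts.

```python
import random

# Toy network: at node A, 60% of vehicles turn toward B and 40% toward C.
turning_ratios = {
    "A": {"B": 0.6, "C": 0.4},
    "B": {"D": 1.0},
    "C": {"D": 1.0},
}

def assign(origin, dest, n_vehicles, rng):
    """Monte Carlo assignment: sample each vehicle's turns, tally link flows."""
    link_flow = {}
    for _ in range(n_vehicles):
        node = origin
        while node != dest:
            nxt_nodes = list(turning_ratios[node])
            weights = [turning_ratios[node][n] for n in nxt_nodes]
            nxt = rng.choices(nxt_nodes, weights=weights)[0]
            link_flow[(node, nxt)] = link_flow.get((node, nxt), 0) + 1
            node = nxt
    return link_flow

rng = random.Random(0)
flows = assign("A", "D", 10_000, rng)
# Flow on A->B should be close to 60% of the 10,000 simulated vehicles
```

Flow conservation holds by construction: every vehicle leaving A arrives at D, so the sampled link flows are consistent with the intersection counts the ratios were derived from.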
Laser-based detection and tracking moving objects using data-driven Markov chain Monte Carlo
Vu, Trung-Dung; Aycard, Olivier
2009-01-01
We present a method for simultaneous detection and tracking of moving objects from a moving vehicle equipped with a single-layer laser scanner. A model-based approach is introduced to interpret the laser measurement sequence by hypotheses of moving object trajectories over a sliding window of time. Knowledge of various aspects, including the object model, measurement model and motion model, is integrated in one theoretically sound Bayesian framework. The data-driven Markov chain Monte Carlo (DDMCMC) tech...
Hu, Xingzhi; Chen, Xiaoqian; Parks, Geoffrey T.; Yao, Wen
2016-10-01
Ever-increasing demands of uncertainty-based design, analysis, and optimization in aerospace vehicles motivate the development of Monte Carlo methods with wide adaptability and high accuracy. This paper presents a comprehensive review of typical improved Monte Carlo methods and summarizes their characteristics to aid the uncertainty-based multidisciplinary design optimization (UMDO). Among them, Bayesian inference aims to tackle the problems with the availability of prior information like measurement data. Importance sampling (IS) settles the inconvenient sampling and difficult propagation through the incorporation of an intermediate importance distribution or sequential distributions. Optimized Latin hypercube sampling (OLHS) is a stratified sampling approach to achieving better space-filling and non-collapsing characteristics. Meta-modeling approximation based on Monte Carlo saves the computational cost by using cheap meta-models for the output response. All the reviewed methods are illustrated by corresponding aerospace applications, which are compared to show their techniques and usefulness in UMDO, thus providing a beneficial reference for future theoretical and applied research.
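Importance sampling, one of the improved Monte Carlo methods reviewed above, can be illustrated with a small self-contained example (the target probability and proposal shift are illustrative choices, not from the review): a rare tail probability of a standard normal variable is estimated by sampling from a distribution shifted toward the tail and correcting with likelihood-ratio weights.

```python
import math
import random

def phi(x, mu=0.0):
    """Normal pdf with unit variance, centered at mu."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

def is_estimate(n, rng, mu=4.0, thresh=4.0):
    """Importance-sampling estimate of P(X > thresh) for X ~ N(0,1),
    drawing from the shifted proposal N(mu,1) and reweighting."""
    total = 0.0
    for _ in range(n):
        x = rng.gauss(mu, 1.0)            # sample the importance density
        if x > thresh:
            total += phi(x) / phi(x, mu)  # likelihood-ratio weight
    return total / n

rng = random.Random(1)
p = is_estimate(100_000, rng)
# True value: 1 - Phi(4) ≈ 3.17e-5; naive MC would see almost no hits here
```

With the proposal centered at the threshold, roughly half the samples land in the rare region, so the weighted estimator reaches percent-level relative error where naive sampling would need billions of draws.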
Development of a space radiation Monte Carlo computer simulation based on the FLUKA and ROOT codes
Pinsky, L; Ferrari, A; Sala, P; Carminati, F; Brun, R
2001-01-01
This NASA funded project is proceeding to develop a Monte Carlo-based computer simulation of the radiation environment in space. With actual funding only initially in place at the end of May 2000, the study is still in the early stage of development. The general tasks have been identified and personnel have been selected. The code to be assembled will be based upon two major existing software packages. The radiation transport simulation will be accomplished by updating the FLUKA Monte Carlo program, and the user interface will employ the ROOT software being developed at CERN. The end-product will be a Monte Carlo-based code which will complement the existing analytic codes such as BRYNTRN/HZETRN presently used by NASA to evaluate the effects of radiation shielding in space. The planned code will possess the ability to evaluate the radiation environment for spacecraft and habitats in Earth orbit, in interplanetary space, on the lunar surface, or on a planetary surface such as Mars. Furthermore, it will be usef...
ERSN-OpenMC, a Java-based GUI for OpenMC Monte Carlo code
Directory of Open Access Journals (Sweden)
Jaafar EL Bakkali
2016-07-01
OpenMC is a new Monte Carlo particle transport simulation code focused on solving two types of neutronics problems: k-eigenvalue criticality fission source problems and external fixed fission source problems. OpenMC does not have a graphical user interface; our Java-based application, ERSN-OpenMC, provides one. The main feature of this application is to give users an easy-to-use and flexible graphical interface to build better and faster simulations with less effort and great reliability. Additionally, this graphical tool was developed with several features, such as the ability to automate the building process of the OpenMC code and related libraries, and users are given the freedom to customize their installation of this Monte Carlo code. A full description of the ERSN-OpenMC application is presented in this paper.
Pair correlations in iron-based superconductors: Quantum Monte Carlo study
Energy Technology Data Exchange (ETDEWEB)
Kashurnikov, V.A.; Krasavin, A.V., E-mail: avkrasavin@gmail.com
2014-08-01
A new generalized quantum continuous-time world-line Monte Carlo algorithm was developed to calculate pair correlation functions for two-dimensional FeAs clusters modeling iron-based superconductors using a two-orbital model. The data obtained for clusters with sizes up to 10×10 FeAs cells favor the possibility of an effective attraction between charge carriers corresponding to the A1g symmetry at some interaction parameters. The analysis of pair correlations depending on the cluster size, temperature, interaction, and the type of symmetry of the order parameter is carried out. - Highlights: • A new generalized quantum continuous-time world-line Monte Carlo algorithm is developed. • Pair correlation functions for two-dimensional FeAs clusters are calculated. • Parameters of the two-orbital model corresponding to attraction of carriers are defined.
Electron density of states of Fe-based superconductors: Quantum trajectory Monte Carlo method
Kashurnikov, V. A.; Krasavin, A. V.; Zhumagulov, Ya. V.
2016-03-01
The spectral and total electron densities of states in two-dimensional FeAs clusters, which simulate iron-based superconductors, have been calculated using the generalized quantum Monte Carlo algorithm within the full two-orbital model. Spectra have been reconstructed by solving the integral equation relating the Matsubara Green's function and spectral density by the method combining the gradient descent and Monte Carlo algorithms. The calculations have been performed for clusters with dimensions up to 10 × 10 FeAs cells. The profiles of the Fermi surface for the entire Brillouin zone have been presented in the quasiparticle approximation. Data for the total density of states near the Fermi level have been obtained. The effect of the interaction parameter, size of the cluster, and temperature on the spectrum of excitations has been studied.
Polarization imaging of multiply-scattered radiation based on integral-vector Monte Carlo method
International Nuclear Information System (INIS)
A new integral-vector Monte Carlo method (IVMCM) is developed to analyze the transfer of polarized radiation in 3D multiple-scattering particle-laden media. The method is based on a 'successive order of scattering series' expression of the integral formulation of the vector radiative transfer equation (VRTE), which allows efficient statistical tools to be applied to improve the convergence of Monte Carlo calculations of integrals. After validation against reference results in plane-parallel layer backscattering configurations, the model is applied to a cubic container filled with uniformly distributed monodispersed particles and irradiated by a monochromatic narrow collimated beam. 2D lateral images of effective Mueller matrix elements are calculated for spherical and fractal aggregate particles. Detailed analysis of multiple scattering regimes, which give very similar results for unpolarized radiation transfer, identifies the sensitivity of polarization imaging to particle size and morphology.
GPU-accelerated Monte Carlo simulation of particle coagulation based on the inverse method
Wei, J.; Kruis, F. E.
2013-09-01
Simulating particle coagulation using Monte Carlo methods is in general a challenging computational task due to its numerical complexity and computing cost. Currently, the lowest computing costs are obtained by employing a graphics processing unit (GPU), originally developed for speeding up graphics processing in the consumer market. In this article we present a GPU implementation of a Monte Carlo method based on the inverse scheme for simulating particle coagulation. The abundant data parallelism embedded within the Monte Carlo method is explained, as it allows an efficient parallelization of the MC code on the GPU. Furthermore, the computational accuracy of the MC on the GPU was validated against a benchmark, a CPU-based discrete-sectional method. To evaluate the performance gains of the GPU, the computing time on the GPU was compared against that of its sequential counterpart on the CPU. The measured speedups show that the GPU can accelerate the execution of the MC code by a factor of 10-100, depending on the chosen number of simulation particles. The algorithm shows a linear dependence of computing time on the number of simulation particles, which is a remarkable result in view of the O(n²) dependence of the coagulation process.
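The inverse scheme at the heart of the method can be sketched serially in a few lines; the additive kernel, initial particle volumes and event loop below are illustrative assumptions (the paper's contribution is mapping the O(n²) pair enumeration onto GPU threads, which this CPU sketch omits):

```python
import random
import bisect

def coagulation_step(volumes, kernel=lambda a, b: a + b):
    """One Monte Carlo coagulation event chosen by the inverse method:
    build the cumulative distribution of pair rates, then invert a
    single uniform random number to select the coagulating pair."""
    pairs, cum, total = [], [], 0.0
    n = len(volumes)
    for i in range(n):
        for j in range(i + 1, n):
            total += kernel(volumes[i], volumes[j])
            pairs.append((i, j))
            cum.append(total)
    u = random.random() * total
    i, j = pairs[bisect.bisect_left(cum, u)]
    merged = volumes[i] + volumes[j]  # mass is conserved on merge
    return [v for k, v in enumerate(volumes) if k not in (i, j)] + [merged]

random.seed(1)
vols = [1.0] * 8          # monodisperse start, illustrative
for _ in range(3):
    vols = coagulation_step(vols)
```

Each event removes exactly one particle, so three events on eight unit particles leave five particles with the total volume unchanged.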
Directory of Open Access Journals (Sweden)
Anuradha Banerjee
2013-09-01
Full Text Available A mobile ad hoc network is an infrastructure-less network, where nodes are free to move independently in any direction. The nodes have limited battery power; hence we require efficient balancing techniques (based on energy depletion or expected residual lifetime, whichever is applicable under specific circumstances) to reduce overload on the nodes, wherever possible, to enhance their lifetime and network performance. This kind of balance among network nodes increases the average lifetime of nodes and reduces network partitioning caused by excessive exhaustion of nodes. In this paper, we propose an alternative-node based balancing method (ANB) that channels the forwarding load of a node to some other less exhausted alternative node, provided that the alternative node is capable of handling the extra load. This greatly reduces the number of link breakages and also the number of route-requests flooded into the network to repair broken links. This, in turn, improves the data packet delivery ratio of the underlying routing protocol as well as the average node lifetime.
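A minimal sketch of the alternative-node selection rule described above, assuming a hypothetical (name, residual energy, can-handle-load) node representation and a fixed energy threshold rather than the paper's exact metrics:

```python
def select_forwarder(preferred, neighbors, threshold=0.3):
    """Pick a forwarding node: fall back to the least-exhausted capable
    alternative when the preferred node's residual energy is low.
    Nodes are (name, residual_energy, can_handle_load) tuples -- a
    hypothetical representation, not the paper's exact model."""
    name, energy, _ = preferred
    if energy >= threshold:
        return name
    # candidate alternatives: capable of the extra load, less exhausted
    candidates = [n for n in neighbors if n[2] and n[1] > energy]
    if candidates:
        return max(candidates, key=lambda n: n[1])[0]
    return name  # no capable alternative: keep the original route

choice = select_forwarder(("A", 0.2, True),
                          [("B", 0.6, True), ("C", 0.9, False)])
```

Here node A is below the threshold, C has the most energy but cannot handle the load, so the load is channeled to B.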
Espel, Federico Puente
The main objective of this PhD research is to develop a high-accuracy modeling tool using a Monte Carlo based coupled system. The presented research comprises the development of models to include the thermal-hydraulic feedback in the Monte Carlo method and speed-up mechanisms to accelerate the Monte Carlo criticality calculation. Presently, deterministic codes based on the diffusion approximation of the Boltzmann transport equation, coupled with channel-based (or sub-channel based) thermal-hydraulic codes, carry out the three-dimensional (3-D) reactor core calculations of Light Water Reactors (LWRs). These deterministic codes utilize homogenized nuclear data (normally over large spatial zones consisting of a fuel assembly or parts of a fuel assembly, and in the best case over small spatial zones consisting of a pin cell), which are functionalized in terms of thermal-hydraulic feedback parameters (in the form of off-line pre-generated cross-section libraries). High-accuracy modeling is required for advanced nuclear reactor core designs that present increased geometry complexity and material heterogeneity. Such high-fidelity methods take advantage of the recent progress in computation technology and couple neutron transport solutions with thermal-hydraulic feedback models on the pin or even sub-pin level (in terms of spatial scale). The continuous-energy Monte Carlo method is well suited for solving such core environments with a detailed representation of the complicated 3-D problem. The major advantages of the Monte Carlo method over deterministic methods are the continuous energy treatment and the exact 3-D geometry modeling. However, the Monte Carlo method requires vast computational time. Interest in Monte Carlo methods has increased thanks to the improved capabilities of high-performance computers. Coupled Monte Carlo calculations can serve as reference solutions for verifying high-fidelity coupled deterministic neutron transport methods
Comparing analytical and Monte Carlo optical diffusion models in phosphor-based X-ray detectors
Kalyvas, N.; Liaparinos, P.
2014-03-01
Luminescent materials are employed as radiation-to-light converters in detectors of medical imaging systems, often referred to as phosphor screens. Several processes affect the light transfer properties of phosphors, amongst the most important of which is the attenuation of light (absorption and scattering). Light attenuation can be described either through "diffusion" theory in analytical models or "quantum" theory in Monte Carlo methods. Although analytical methods, based on photon diffusion equations, have been preferentially employed to investigate optical diffusion in the past, Monte Carlo simulation models can overcome several of the analytical modelling assumptions. The present study aimed to compare both methodologies and investigate the dependence of the analytical model's optical parameters on particle size. It was found that the optical photon attenuation coefficients calculated by analytical modeling decrease with particle size (in the region 1-12 μm). In addition, for particle sizes smaller than 6 μm there is no simultaneous agreement of the theoretical modulation transfer function and light escape values with the Monte Carlo data.
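The Monte Carlo side of such a comparison reduces to a photon random walk through the screen; a deliberately crude 1-D version with illustrative attenuation coefficients (not fitted phosphor data) might look like:

```python
import random
import math

def light_escape_fraction(n_photons, thickness, mu_abs, mu_sca):
    """Toy Monte Carlo estimate of the fraction of optical photons that
    escape a phosphor layer (1-D slab, isotropic scattering). mu_abs and
    mu_sca are illustrative attenuation coefficients, not fitted values."""
    mu_tot = mu_abs + mu_sca
    escaped = 0
    for _ in range(n_photons):
        z, direction = 0.0, 1.0           # photon created at entrance face
        while True:
            step = -math.log(random.random()) / mu_tot  # free path
            z += direction * step
            if z >= thickness:
                escaped += 1
                break
            if z <= 0.0:
                break                      # lost through the entrance face
            if random.random() < mu_abs / mu_tot:
                break                      # absorbed
            direction = random.choice((-1.0, 1.0))  # 1-D isotropic scatter
    return escaped / n_photons

random.seed(0)
frac = light_escape_fraction(2000, thickness=0.01, mu_abs=5.0, mu_sca=50.0)
```

A diffusion-theory model would instead predict the same escape fraction analytically from the two coefficients; comparing the two as the grain size (and hence the coefficients) varies is the essence of the study.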
Energy Technology Data Exchange (ETDEWEB)
Murata, Isao [Osaka Univ., Suita (Japan); Mori, Takamasa; Nakagawa, Masayuki; Itakura, Hirofumi
1996-03-01
A method to calculate the neutronics parameters of a core composed of randomly distributed spherical fuels has been developed based on a statistical geometry model with a continuous-energy Monte Carlo method. This method was implemented in the general-purpose Monte Carlo code MCNP, and a new code, MCNP-CFP, has been developed. This paper describes the model and method, how to use them, and the validation results. In the Monte Carlo calculation, the location of a spherical fuel is sampled probabilistically along the particle flight path from the spatial probability distribution of spherical fuels, called the nearest neighbor distribution (NND). This sampling method was validated through the following two comparisons: (1) calculations of the inventory of coated fuel particles (CFPs) in a fuel compact by both the track length estimator and the direct evaluation method, and (2) criticality calculations for ordered packed geometries. The method was also confirmed by applying it to an analysis of the critical assembly experiment at VHTRC. The method established in the present study is unique in that it provides a probabilistic model of a geometry with a great number of randomly distributed spherical fuels. With the speed-up offered by vector or parallel computation in the future, it is expected to be widely used in the calculation of nuclear reactor cores, especially HTGR cores. (author).
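The NND sampling idea can be illustrated with the ideal (Poisson) nearest-neighbour distribution, whose CDF inverts in closed form; the real code uses the NND of a physically packed bed, so this closed-form density is a simplifying assumption:

```python
import random
import math

def sample_nnd(rho):
    """Sample a nearest-neighbour centre distance for randomly (Poisson)
    distributed spheres of number density rho, by inverting the CDF of
    P(r) = 4*pi*rho*r^2 * exp(-(4/3)*pi*rho*r^3). This ideal random-
    dispersion NND is a simplification of the packed-bed NND used in
    MCNP-CFP."""
    u = random.random()
    return (-3.0 * math.log(u) / (4.0 * math.pi * rho)) ** (1.0 / 3.0)

random.seed(2)
rho = 1000.0                    # spheres per unit volume (illustrative)
samples = [sample_nnd(rho) for _ in range(5000)]
mean_r = sum(samples) / len(samples)
```

During transport, each such draw places the next candidate sphere centre along the flight path, so the geometry is realized on the fly instead of being stored explicitly.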
State-of-the-art in Comprehensive Cascade Control Approach through Monte-Carlo Based Representation
Directory of Open Access Journals (Sweden)
A.H. Mazinan
2015-10-01
Full Text Available This research develops a comprehensive cascade control approach in the area of spacecraft, where a Monte-Carlo based representation is employed to reflect the state of the art. Conventional methods lack sufficient merit to handle such a process under control consistently once realistic variations of a number of system parameters are taken into account. The new insights offered here are valuable for improving the performance of a class of spacecraft, and the acquired results are applicable in both industrial and academic environments. In brief, a double closed loop combines a quaternion-based control approach with an Euler-based control approach to handle the three-axis rotational angles and their rates synchronously, in association with pulse modulation analysis and control allocation, while the dynamics and kinematics of the system under control are analyzed. A series of experiments is carried out to assess the performance of the approach, in which the aforementioned Monte-Carlo based representation is used to verify the investigated outcomes.
Pair correlation functions of FeAs-based superconductors: Quantum Monte Carlo study
Kashurnikov, V. A.; Krasavin, A. V.
2015-01-01
The new generalized quantum continuous-time world-line Monte Carlo algorithm was developed to calculate pair correlation functions for two-dimensional FeAs clusters modeling iron-based superconductors within the framework of the two-orbital model. The analysis of pair correlations as a function of cluster size, temperature, interaction strength, and the symmetry type of the order parameter is carried out. The data obtained for clusters with sizes up to 10×10 FeAs cells favor the possibility of an effective charge-carrier attraction corresponding to the A1g symmetry at some parameters of interaction.
Monte Carlo tests of the Rasch model based on scalability coefficients
DEFF Research Database (Denmark)
Christensen, Karl Bang; Kreiner, Svend
2010-01-01
For item responses fitting the Rasch model, the assumptions underlying the Mokken model of double monotonicity are met. This makes non-parametric item response theory a natural starting-point for Rasch item analysis. This paper studies scalability coefficients based on Loevinger's H coefficient...... that summarizes the number of Guttman errors in the data matrix. These coefficients are shown to yield efficient tests of the Rasch model using p-values computed using Markov chain Monte Carlo methods. The power of the tests of unequal item discrimination, and their ability to distinguish between local dependence...
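A Monte Carlo p-value of this general kind can be sketched as follows; note this toy version counts Guttman errors directly and shuffles item columns independently, which is simpler than (and not equivalent to) the Markov chain sampler conditioned on both margins that the paper uses:

```python
import random

def guttman_errors(data):
    """Count Guttman errors in a binary person-by-item matrix: with items
    ordered from easiest to hardest, an error is a correct response to a
    harder item paired with an incorrect response to an easier one."""
    n_items = len(data[0])
    order = sorted(range(n_items), key=lambda j: -sum(row[j] for row in data))
    errors = 0
    for row in data:
        r = [row[j] for j in order]
        for easy in range(n_items):
            for hard in range(easy + 1, n_items):
                if r[hard] == 1 and r[easy] == 0:
                    errors += 1
    return errors

def mc_p_value(data, n_sim=200):
    """Monte Carlo p-value: shuffle each item's responses independently
    (fixing item totals, breaking person structure) and compare the
    simulated error counts to the observed one."""
    obs = guttman_errors(data)
    cols = list(zip(*data))
    count = 0
    for _ in range(n_sim):
        shuffled = [list(c) for c in cols]
        for c in shuffled:
            random.shuffle(c)
        sim = [list(row) for row in zip(*shuffled)]
        if guttman_errors(sim) <= obs:   # few errors = Guttman-like data
            count += 1
    return (count + 1) / (n_sim + 1)

random.seed(3)
data = [[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]]  # perfect Guttman pattern
p = mc_p_value(data)
```

The `(count + 1)/(n_sim + 1)` correction keeps the Monte Carlo p-value strictly positive, a standard convention for simulation-based tests.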
International Nuclear Information System (INIS)
The channel capacity of ocean water is limited by propagation distance and optical properties. Previous studies of this problem are based on water-tank experiments with different amounts of Maalox antacid. However, the propagation distance is limited by the experimental set-up, and the optical properties differ from those of ocean water. Therefore, the experimental results are not accurate enough for the physical design of underwater wireless communication links. This letter develops a Monte Carlo model to study the channel capacity of underwater optical communications. Moreover, this model can flexibly configure various parameters of the transmitter, receiver and channel, and is suitable for the physical design of underwater optical communication links. (paper)
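The core of such a channel model is a photon-by-photon survival simulation; the absorption and scattering coefficients and the crude "lost from the beam" scattering model below are illustrative assumptions, not ocean measurements:

```python
import random
import math

def received_fraction(n_photons, distance, absorption, scattering,
                      forward_prob=0.9):
    """Toy Monte Carlo of an underwater optical link: photons undergo
    exponential interaction lengths; each interaction is either an
    absorption or a scattering event, and a scattered photon stays in
    the beam with probability forward_prob. Coefficients are
    illustrative, not measured ocean-water values."""
    c = absorption + scattering      # total attenuation coefficient
    received = 0
    for _ in range(n_photons):
        z, alive = 0.0, True
        while alive and z < distance:
            z += -math.log(random.random()) / c
            if z >= distance:
                break
            if random.random() < absorption / c:
                alive = False        # absorbed
            elif random.random() > forward_prob:
                alive = False        # scattered out of the beam
        if alive and z >= distance:
            received += 1
    return received / n_photons

random.seed(4)
frac = received_fraction(5000, distance=10.0, absorption=0.05, scattering=0.2)
```

The received power fraction feeds into a signal-to-noise estimate, from which channel capacity follows for a chosen modulation scheme.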
A new Monte-Carlo based simulation for the CryoEDM experiment
Raso-Barnett, Matthew
2015-01-01
This thesis presents a new Monte-Carlo based simulation of the physics of ultra-cold neutrons (UCN) in complex geometries and its application to the CryoEDM experiment. It includes a detailed description of the design and performance of this simulation, along with its use in a project to study the magnetic depolarisation time of UCN within the apparatus due to magnetic impurities in the measurement cell, which is a crucial parameter in the sensitivity of a neutron electric dipole moment (nEDM) ...
PeneloPET, a Monte Carlo PET simulation tool based on PENELOPE: features and validation
Energy Technology Data Exchange (ETDEWEB)
Espana, S; Herraiz, J L; Vicente, E; Udias, J M [Grupo de Fisica Nuclear, Departmento de Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid, Madrid (Spain); Vaquero, J J; Desco, M [Unidad de Medicina y CirugIa Experimental, Hospital General Universitario Gregorio Maranon, Madrid (Spain)], E-mail: jose@nuc2.fis.ucm.es
2009-03-21
Monte Carlo simulations play an important role in positron emission tomography (PET) imaging, as an essential tool for the research and development of new scanners and for advanced image reconstruction. PeneloPET, a PET-dedicated Monte Carlo tool, is presented and validated in this work. PeneloPET is based on PENELOPE, a Monte Carlo code for the simulation of the transport in matter of electrons, positrons and photons, with energies from a few hundred eV to 1 GeV. PENELOPE is robust, fast and very accurate, but it may be unfriendly to people not acquainted with the FORTRAN programming language. PeneloPET is an easy-to-use application which allows comprehensive simulations of PET systems within PENELOPE. Complex and realistic simulations can be set by modifying a few simple input text files. Different levels of output data are available for analysis, from sinogram and lines-of-response (LORs) histogramming to fully detailed list mode. These data can be further exploited with the preferred programming language, including ROOT. PeneloPET simulates PET systems based on crystal array blocks coupled to photodetectors and allows the user to define radioactive sources, detectors, shielding and other parts of the scanner. The acquisition chain is simulated in high level detail; for instance, the electronic processing can include pile-up rejection mechanisms and time stamping of events, if desired. This paper describes PeneloPET and shows the results of extensive validations and comparisons of simulations against real measurements from commercial acquisition systems. PeneloPET is being extensively employed to improve the image quality of commercial PET systems and for the development of new ones.
GPU-based Monte Carlo radiotherapy dose calculation using phase-space sources
Townson, Reid; Tian, Zhen; Graves, Yan Jiang; Zavgorodni, Sergei; Jiang, Steve B
2013-01-01
A novel phase-space source implementation has been designed for GPU-based Monte Carlo dose calculation engines. Due to the parallelized nature of GPU hardware, it is essential to simultaneously transport particles of the same type and similar energies but separated spatially to yield a high efficiency. We present three methods for phase-space implementation that have been integrated into the most recent version of the GPU-based Monte Carlo radiotherapy dose calculation package gDPM v3.0. The first method is to sequentially read particles from a patient-dependent phase-space and sort them on-the-fly based on particle type and energy. The second method supplements this with a simple secondary collimator model and fluence map implementation so that patient-independent phase-space sources can be used. Finally, as the third method (called the phase-space-let, or PSL, method) we introduce a novel strategy to pre-process patient-independent phase-spaces and bin particles by type, energy and position. Position bins l...
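The phase-space-let idea (pre-binning phase-space records by particle type, energy and position so a GPU batch transports similar particles together) can be sketched with a hypothetical (type, energy, lateral position) record layout, which is a simplification of a real phase-space file format:

```python
from collections import defaultdict

def bin_phase_space(particles, e_bin_width=1.0, pos_bin_width=2.0):
    """Group phase-space records into 'phase-space-lets' keyed by particle
    type, energy bin and lateral position bin, so that a GPU batch can
    transport particles of the same type and similar energy together.
    The (type, E, x) record layout is a hypothetical simplification."""
    psls = defaultdict(list)
    for ptype, energy, x in particles:
        key = (ptype, int(energy // e_bin_width), int(x // pos_bin_width))
        psls[key].append((ptype, energy, x))
    return dict(psls)

records = [("photon", 0.5, 1.0), ("photon", 0.7, 1.5),
           ("electron", 0.6, 1.2), ("photon", 2.4, -3.0)]
psls = bin_phase_space(records)
```

At simulation time, each phase-space-let is dispatched as one batch, so GPU threads in a warp follow similar physics paths and divergence stays low.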
Research on Reliability Modelling Method of Machining Center Based on Monte Carlo Simulation
Directory of Open Access Journals (Sweden)
Chuanhai Chen
2013-03-01
Full Text Available The aim of this study is to obtain the reliability of a series system and analyze the reliability of a machining center. A modified reliability modelling method based on Monte Carlo simulation for series systems is therefore proposed. The reliability function built by the classical statistical method, based on the assumption that machine tools are repaired to an as-good-as-new condition, may be biased in real cases. The reliability functions of the subsystems are established respectively, and the reliability model is then built according to the reliability block diagram. The fitted reliability function of the machine tools is established using sample failure data generated by Monte Carlo simulation, whose inverse reliability function is solved by a linearization technique based on radial basis functions. Finally, an example of the machining center is presented using the proposed method to show its potential application. The analysis results show that the proposed method provides a more accurate reliability model than the conventional method.
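The series-system step reduces to sampling one failure time per subsystem and taking the minimum: the system survives past time t only if every subsystem does. Weibull subsystem models with made-up parameters serve as a sketch (not fitted machining-center data):

```python
import random
import math

def series_reliability(t, subsystem_samplers, n_sim=20000):
    """Monte Carlo reliability of a series system at time t: draw one
    failure time per subsystem, the system fails at the minimum, and
    reliability is the fraction of runs surviving past t."""
    survived = 0
    for _ in range(n_sim):
        t_fail = min(sampler() for sampler in subsystem_samplers)
        if t_fail > t:
            survived += 1
    return survived / n_sim

random.seed(5)
# Weibull failure-time sampler via inverse transform; parameters illustrative
weibull = lambda scale, shape: (
    lambda: scale * (-math.log(random.random())) ** (1.0 / shape))
subsystems = [weibull(800.0, 1.5), weibull(1200.0, 2.0), weibull(600.0, 1.2)]
r_100 = series_reliability(100.0, subsystems)
```

For these parameters the exact series reliability at t = 100 is exp(-[(100/800)^1.5 + (100/1200)^2 + (100/600)^1.2]) ≈ 0.846, which the simulation should approach as n_sim grows.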
Application of backtracking algorithm to depletion calculations
International Nuclear Information System (INIS)
Based on the theory of the linear chain method for analytical depletion calculations, the burn-up matrix is decoupled by a divide-and-conquer strategy and linear chains with the Markov property are formed. The density, activity and decay heat of every nuclide in a chain can be calculated by analytical solutions. Every possible reaction path of a nuclide must be considered during linear chain construction, so an algorithm is needed that covers all reaction paths of the nuclide and searches the paths automatically according to the problem description and precision restrictions. Through analysis and comparison of several searching algorithms, the backtracking algorithm was selected to search and calculate the linear chains using the depth-first search (DFS) method. The depletion program can thus solve depletion problems adaptively and with high fidelity. The solution space and time complexity of the program were analyzed. The newly developed depletion program was coupled with the Monte Carlo program MCMG-II to calculate the benchmark burn-up problem of the first core of the China Experimental Fast Reactor (CEFR). The initial verification and validation of the program were performed through this calculation. (author)
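The chain-construction step can be sketched as a depth-first search with backtracking, pruning a path once its cumulative branching fraction falls below a precision cutoff; the three-nuclide graph and branch fractions below are illustrative, not evaluated nuclear data:

```python
def linear_chains(graph, start, cutoff=1e-3):
    """Enumerate linear depletion chains by depth-first search with
    backtracking: follow every reaction/decay path from `start`, pruning
    a path once its cumulative branching fraction drops below `cutoff`."""
    chains = []

    def dfs(node, path, fraction):
        successors = graph.get(node, [])
        if not successors:
            chains.append((tuple(path), fraction))   # terminal nuclide
            return
        for child, branch in successors:
            f = fraction * branch
            if f < cutoff:
                chains.append((tuple(path), fraction))  # prune, keep stub
                continue
            dfs(child, path + [child], f)

    dfs(start, [start], 1.0)
    return chains

# hypothetical reaction/decay graph with branch fractions
graph = {
    "U238":  [("U239", 1.0)],
    "U239":  [("Np239", 1.0)],
    "Np239": [("Pu239", 0.999), ("U239m", 0.001)],
}
chains = linear_chains(graph, "U238", cutoff=0.01)
```

Each emitted chain is Markovian (each nuclide feeds only the next), so the Bateman equations along it admit the analytical solutions the method relies on.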
Energy Technology Data Exchange (ETDEWEB)
Ureba, A.; Pereira-Barbeiro, A. R.; Jimenez-Ortega, E.; Baeza, J. A.; Salguero, F. J.; Leal, A.
2013-07-01
The use of Monte Carlo (MC) methods has been shown to improve the accuracy of dose calculation compared to the analytical algorithms installed in commercial treatment planning systems, especially in the non-standard situations typical of complex techniques such as IMRT and VMAT. Our treatment planning system, CARMEN, is based on full simulation of the beam transport both in the accelerator head and in the patient, with the simulation designed for efficient operation in terms of the accuracy of the estimate and the required computation times. (Author)
Comment on "A study on tetrahedron-based inhomogeneous Monte-Carlo optical simulation".
Fang, Qianqian
2011-01-01
The Monte Carlo (MC) method is a popular approach to modeling photon propagation inside general turbid media, such as human tissue. Progress had been made in the past year with the independent proposals of two mesh-based Monte Carlo methods employing ray-tracing techniques. Both methods have shown improvements in accuracy and efficiency in modeling complex domains. A recent paper by Shen and Wang [Biomed. Opt. Express 2, 44 (2011)] reported preliminary results towards the cross-validation of the two mesh-based MC algorithms and software implementations, showing a 3-6 fold speed difference between the two software packages. In this comment, we share our views on unbiased software comparisons and discuss additional issues such as the use of pre-computed data, interpolation strategies, impact of compiler settings, use of Russian roulette, memory cost and potential pitfalls in measuring algorithm performance. Despite key differences between the two algorithms in handling of non-tetrahedral meshes, we found that they share similar structure and performance for tetrahedral meshes. A significant fraction of the observed speed differences in the mentioned article was the result of inconsistent use of compilers and libraries. PMID:21559136
Lin, N.-H.; Saxena, V. K.
1992-01-01
The physical characteristics of the Antarctic stratospheric aerosol are investigated via a comprehensive analysis of the SAGE II data during the most severe ozone depletion episode of October 1987. The aerosol size distribution is found to be bimodal in several instances using the randomized minimization search technique, which suggests that the distribution of a single mode may be used to fit the data in the retrieved size range only at the expense of resolution for the larger particles. On average, in the region below 18 km, a wavelike perturbation with the upstream tilting for the parameters of mass loading, total number, and surface area concentration is found to be located just above the region of the most severe ozone depletion.
Pedestrian counting with grid-based binary sensors based on Monte Carlo method
Fujii, Shuto; Taniguchi, Yoshiaki; Hasegawa, Go; Matsuoka, Morito
2014-01-01
In this paper, we propose a method for estimating the number of pedestrians walking in opposite directions, as in cases of a shopping street or a sidewalk in a downtown area. The proposed method utilizes a compound-eye sensor constructed by placing two binary sensors along the pedestrians' movement direction and multiple binary sensors perpendicular to the pedestrians' movement direction. A number of Monte Carlo simulations about the movement of pedestrians are con...
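The direction-inference part of such a compound-eye sensor, together with a Monte Carlo pass over simulated pedestrians, can be sketched as follows; the sensor layout, event timings and 50/50 direction split are all hypothetical:

```python
import random

def classify_direction(events):
    """Infer a pedestrian's walking direction from the firing order of two
    binary sensors A and B placed along the walkway: A before B means
    left-to-right ('right'), B before A the opposite."""
    first_sensor = events[0][0]
    return "right" if first_sensor == "A" else "left"

def simulate_counts(n_pedestrians, p_right=0.5, seed=6):
    """Monte Carlo pass: draw pedestrian directions, generate the binary
    sensor events they would trigger, then count each direction from the
    events alone."""
    rng = random.Random(seed)
    counts = {"right": 0, "left": 0}
    for _ in range(n_pedestrians):
        going_right = rng.random() < p_right
        events = ([("A", 0.0), ("B", 0.3)] if going_right
                  else [("B", 0.0), ("A", 0.3)])
        counts[classify_direction(events)] += 1
    return counts

counts = simulate_counts(1000)
```

In the paper the Monte Carlo simulations additionally model overlapping pedestrians and sensor noise, which this idealized sketch omits.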
Inverse treatment planning for radiation therapy based on fast Monte Carlo dose calculation
International Nuclear Information System (INIS)
An inverse treatment planning system based on fast Monte Carlo (MC) dose calculation is presented. It allows optimisation of intensity-modulated dose distributions in 15 to 60 minutes on present-day personal computers. If a multi-processor machine is available, parallel simulation of particle histories is also possible, leading to further reductions in calculation time. The optimisation process is divided into two stages. The first stage produces fluence profiles based on pencil beam (PB) dose calculation. The second stage starts with MC verification and post-optimisation of the PB dose and fluence distributions. Because of the potential to accurately model beam modifiers, MC based inverse planning systems are able to optimise compensator thicknesses and leaf trajectories instead of intensity profiles only. The corresponding techniques, whose implementation is the subject of future work, are also presented here. (orig.)
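The first (pencil-beam) stage is essentially a constrained least-squares fluence optimisation; a tiny projected-gradient sketch with a hypothetical 3×3 dose-deposition matrix and target (not a clinical configuration):

```python
# Hypothetical pencil-beam dose-deposition matrix: dose[i] = sum_j D[i][j]*w[j]
D = [[1.0, 0.2, 0.0],
     [0.3, 1.0, 0.3],
     [0.0, 0.2, 1.0]]
target = [2.0, 1.0, 2.0]

def optimise_fluence(D, target, iters=500, lr=0.1):
    """First-stage optimisation: projected gradient descent on the
    quadratic objective ||D w - target||^2 with non-negative beamlet
    weights w. In the paper this PB solution is then verified and
    post-optimised with Monte Carlo; only the PB stage is shown here."""
    n_vox, n = len(D), len(D[0])
    w = [0.0] * n
    for _ in range(iters):
        dose = [sum(D[i][j] * w[j] for j in range(n)) for i in range(n_vox)]
        resid = [d - t for d, t in zip(dose, target)]
        grad = [2 * sum(D[i][j] * resid[i] for i in range(n_vox))
                for j in range(n)]
        w = [max(0.0, wj - lr * gj) for wj, gj in zip(w, grad)]  # project
    return w

w = optimise_fluence(D, target)
dose = [sum(D[i][j] * w[j] for j in range(3)) for i in range(3)]
```

The middle target of 1.0 is infeasible with non-negative weights here, so the optimiser settles on the least-squares compromise, which is exactly the behaviour the MC post-optimisation stage would then refine against accurately computed dose.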
GPU-based high performance Monte Carlo simulation in neutron transport
Energy Technology Data Exchange (ETDEWEB)
Heimlich, Adino; Mol, Antonio C.A.; Pereira, Claudio M.N.A. [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil). Lab. de Inteligencia Artificial Aplicada], e-mail: cmnap@ien.gov.br
2009-07-01
Graphics Processing Units (GPUs) are high-performance co-processors originally intended to improve the use and quality of computer graphics applications. Once researchers and practitioners realized the potential of using GPUs for general-purpose computing, their application was extended to fields beyond the scope of computer graphics. The main objective of this work is to evaluate the impact of using GPUs in neutron transport simulation by the Monte Carlo method. To accomplish that, GPU- and CPU-based (single and multicore) approaches were developed and applied to a simple but time-consuming problem. Comparisons demonstrated that the GPU-based approach is about 15 times faster than a parallel 8-core CPU-based approach also developed in this work. (author)
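The history-level data parallelism that makes Monte Carlo transport GPU-friendly can be illustrated with a vectorized (NumPy) formulation of a simple 1-D slab transmission benchmark, in which all neutron histories advance in lock-step as array operations, the same structure a GPU kernel would exploit; cross-sections are illustrative:

```python
import numpy as np

def slab_transmission_batch(n, thickness, sigma_t, sigma_s, seed=7):
    """Data-parallel 1-D slab benchmark: mono-directional neutrons enter
    at z=0; flight lengths are exponential in sigma_t, each collision is
    a scatter (direction resampled to +/-1) with probability
    sigma_s/sigma_t, otherwise an absorption. All histories advance
    together, mirroring one-thread-per-history GPU execution."""
    rng = np.random.default_rng(seed)
    z = np.zeros(n)
    mu = np.ones(n)
    alive = np.ones(n, dtype=bool)
    transmitted = np.zeros(n, dtype=bool)
    while alive.any():
        step = -np.log(rng.random(n)) / sigma_t
        z[alive] += mu[alive] * step[alive]
        out_front = alive & (z >= thickness)
        transmitted |= out_front
        alive &= ~out_front & (z > 0.0)      # drop leaks front and back
        scatter = rng.random(n) < sigma_s / sigma_t
        absorbed = alive & ~scatter
        alive &= ~absorbed
        mu[alive] = rng.choice([-1.0, 1.0], size=alive.sum())
    return transmitted.mean()

frac = slab_transmission_batch(20000, thickness=2.0, sigma_t=1.0, sigma_s=0.5)
```

A scalar loop over histories computes the same answer; the array version simply exposes the independence of histories, which is what the GPU port exploits for its speedup.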
IMPROVED ALGORITHM FOR ROAD REGION SEGMENTATION BASED ON SEQUENTIAL MONTE-CARLO ESTIMATION
Directory of Open Access Journals (Sweden)
Zdenek Prochazka
2014-12-01
Full Text Available In recent years, many researchers and car makers have put intensive effort into the development of autonomous driving systems. Since visual information is the main modality used by human drivers, a camera mounted on a moving platform is a very important kind of sensor, and various computer vision algorithms for handling the vehicle's surroundings are under intensive research. Our final goal is to develop a vision-based lane detection system able to handle various types of road shapes, working on both structured and unstructured roads, ideally in the presence of shadows. This paper presents a modified road region segmentation algorithm based on sequential Monte-Carlo estimation. A detailed description of the algorithm is given, and evaluation results show that the proposed algorithm outperforms the segmentation algorithm developed as part of our previous work, as well as a conventional algorithm based on colour histograms.
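Sequential Monte-Carlo estimation here means a particle filter; a generic scalar-state update (tracking, say, the lateral image position of a road edge across frames) looks like the sketch below, which is a textbook SMC step and not the paper's full segmentation pipeline:

```python
import random
import math

def particle_filter_step(particles, weights, measurement,
                         motion_std=2.0, meas_std=5.0):
    """One sequential Monte Carlo update for a scalar state: diffuse the
    particles with a random-walk motion model, reweight by a Gaussian
    likelihood of the new measurement, then resample."""
    moved = [p + random.gauss(0.0, motion_std) for p in particles]
    w = [wi * math.exp(-0.5 * ((m - measurement) / meas_std) ** 2)
         for wi, m in zip(weights, moved)]
    total = sum(w) or 1.0
    w = [wi / total for wi in w]
    resampled = random.choices(moved, weights=w, k=len(moved))
    return resampled, [1.0 / len(moved)] * len(moved)

random.seed(8)
particles = [random.uniform(0.0, 100.0) for _ in range(300)]  # flat prior
weights = [1.0 / 300] * 300
for z in (40.0, 42.0, 41.0):              # noisy edge detections per frame
    particles, weights = particle_filter_step(particles, weights, z)
estimate = sum(particles) / len(particles)
```

After a few frames the particle cloud concentrates near the measurements, giving a temporally smoothed boundary estimate that is robust to single-frame segmentation noise.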
Adjoint-based uncertainty quantification and sensitivity analysis for reactor depletion calculations
Stripling, Hayes Franklin
Depletion calculations for nuclear reactors model the dynamic coupling between the material composition and neutron flux and help predict reactor performance and safety characteristics. In order to be trusted as reliable predictive tools and inputs to licensing and operational decisions, the simulations must include an accurate and holistic quantification of errors and uncertainties in their outputs. Uncertainty quantification is a formidable challenge in large, realistic reactor models because of the large number of unknowns and myriad sources of uncertainty and error. We present a framework for performing efficient uncertainty quantification in depletion problems using an adjoint approach, with emphasis on high-fidelity calculations using advanced massively parallel computing architectures. This approach calls for a solution to two systems of equations: (a) the forward, engineering system that models the reactor, and (b) the adjoint system, which is mathematically related to but different from the forward system. We use the solutions of these systems to produce sensitivity and error estimates at a cost that does not grow rapidly with the number of uncertain inputs. We present the framework in a general fashion and apply it to both the source-driven and k-eigenvalue forms of the depletion equations. We describe the implementation and verification of solvers for the forward and adjoint equations in the PDT code, and we test the algorithms on realistic reactor analysis problems. We demonstrate a new approach for reducing the memory and I/O demands on the host machine, which can be overwhelming for typical adjoint algorithms. Our conclusion is that adjoint depletion calculations using full transport solutions are not only computationally tractable, they are the most attractive option for performing uncertainty quantification on high-fidelity reactor analysis problems.
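The forward/adjoint structure can be demonstrated on a toy two-nuclide discrete depletion model: one forward sweep stores the states, one backward (adjoint) sweep accumulates the sensitivity of a quantity of interest, and a finite difference checks the result. All rates and the parameter s (a hypothetical capture rate) are made up for illustration:

```python
# Discrete adjoint for a two-nuclide chain x_{k+1} = M(s) x_k with
# quantity of interest J = c . x_N. The adjoint recursion yields dJ/ds
# for the cost of one backward sweep, independent of the number of
# uncertain parameters -- the key economy of the adjoint approach.
def step_matrix(s, dt=0.1):
    # explicit-Euler transition: nuclide 0 destroyed at rate s,
    # feeding nuclide 1, which decays at a fixed rate 0.2
    return [[1.0 - s * dt, 0.0],
            [s * dt, 1.0 - 0.2 * dt]]

def dM_ds(dt=0.1):
    return [[-dt, 0.0], [dt, 0.0]]

def matvec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

def matTvec(M, v):  # transpose multiply, used by the adjoint sweep
    return [M[0][0]*v[0] + M[1][0]*v[1], M[0][1]*v[0] + M[1][1]*v[1]]

def sensitivity(s, x0, c, n_steps=50):
    M, dM = step_matrix(s), dM_ds()
    xs = [x0]                        # forward sweep, storing states
    for _ in range(n_steps):
        xs.append(matvec(M, xs[-1]))
    lam, dJ = list(c), 0.0           # lam starts as lambda_N = c
    for k in range(n_steps - 1, -1, -1):
        dMx = matvec(dM, xs[k])
        dJ += lam[0]*dMx[0] + lam[1]*dMx[1]   # lambda_{k+1} . dM x_k
        lam = matTvec(M, lam)                 # lambda_k = M^T lambda_{k+1}
    return xs[-1], dJ

x0, c, s = [1.0, 0.0], [0.0, 1.0], 0.3
xN, dJ_adj = sensitivity(s, x0, c)
# central finite-difference check of the adjoint derivative
eps = 1e-6
dJ_fd = (sensitivity(s + eps, x0, c)[0][1]
         - sensitivity(s - eps, x0, c)[0][1]) / (2 * eps)
```

The agreement between `dJ_adj` and `dJ_fd` verifies the adjoint machinery; the memory cost of storing `xs` is exactly the burden the checkpointing and I/O-reduction techniques in the thesis address at scale.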
Monte Carlo calculation based on hydrogen composition of the tissue for MV photon radiotherapy.
Demol, Benjamin; Viard, Romain; Reynaert, Nick
2015-01-01
The purpose of this study was to demonstrate that Monte Carlo treatment planning systems require tissue characterization (density and composition) as a function of CT number. A discrete set of tissue classes with a specific composition is introduced. In the current work we demonstrate that, for megavoltage photon radiotherapy, only the hydrogen content of the different tissues is of interest. This conclusion might have an impact on MRI-based dose calculations and on MVCT calibration using tissue substitutes. A stoichiometric calibration was performed, grouping tissues with similar atomic composition into 15 dosimetrically equivalent subsets. To demonstrate the importance of hydrogen, a new scheme was derived, with correct hydrogen content, complemented by oxygen (all elements differing from hydrogen are replaced by oxygen). Mass attenuation coefficients and mass stopping powers for this scheme were calculated and compared to the original scheme. Twenty-five CyberKnife treatment plans were recalculated by an in-house developed Monte Carlo system using tissue density and hydrogen content derived from the CT images. The results were compared to Monte Carlo simulations using the original stoichiometric calibration. Between 300 keV and 3 MeV, the relative difference of mass attenuation coefficients is under 1% within all subsets. Between 10 keV and 20 MeV, the relative difference of mass stopping powers goes up to 5% in hard bone and remains below 2% for all other tissue subsets. Dose-volume histograms (DVHs) of the treatment plans present no visual difference between the two schemes. Relative differences of dose indexes D98, D95, D50, D05, D02, and Dmean were analyzed and a distribution centered around zero and of standard deviation below 2% (3 σ) was established. On the other hand, once the hydrogen content is slightly modified, important dose differences are obtained. Monte Carlo dose planning in the field of megavoltage photon radiotherapy is fully achievable using
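The hydrogen argument can be checked with a back-of-the-envelope calculation: in the Compton-dominated megavoltage regime the mass attenuation coefficient of an element scales roughly with its electron density Z/A, which is ~1 for hydrogen and ~0.5 for everything else. The soft-tissue composition below is an illustrative mixture, not ICRU or NIST data:

```python
# Approximate Z/A ratios (proportional to the Compton mass attenuation
# coefficient); illustrative values, not NIST tables.
Z_OVER_A = {"H": 0.9921, "C": 0.4995, "N": 0.4998, "O": 0.5000,
            "P": 0.4843, "Ca": 0.4990}

def relative_mu(composition):
    """Mixture Z/A (proportional to the Compton mass attenuation
    coefficient) from elemental mass fractions."""
    return sum(w * Z_OVER_A[el] for el, w in composition.items())

# illustrative muscle-like mixture (mass fractions sum to 1)
muscle = {"H": 0.102, "C": 0.143, "N": 0.034, "O": 0.710,
          "P": 0.002, "Ca": 0.009}
# hydrogen-plus-oxygen surrogate: same H fraction, remainder all oxygen
surrogate = {"H": 0.102, "O": 0.898}

mu_full = relative_mu(muscle)
mu_surr = relative_mu(surrogate)
rel_diff = abs(mu_full - mu_surr) / mu_full
```

Because every non-hydrogen element has nearly the same Z/A, replacing them all with oxygen changes the mixture coefficient by a small fraction of a percent, which is the mechanism behind the sub-1% agreement reported in the abstract.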
Energy Technology Data Exchange (ETDEWEB)
Zhu Feng [State Key Laboratory for Physical Chemistry of Solid Surfaces and Department of Chemistry, College of Chemistry and Chemical Engineering, Xiamen University, Xiamen, Fujian 361005 (China); Yan Jiawei, E-mail: jwyan@xmu.edu.cn [State Key Laboratory for Physical Chemistry of Solid Surfaces and Department of Chemistry, College of Chemistry and Chemical Engineering, Xiamen University, Xiamen, Fujian 361005 (China); Lu Miao [Pen-Tung Sah Micro-Nano Technology Research Center, Xiamen University, Xiamen, Fujian 361005 (China); Zhou Yongliang; Yang Yang; Mao Bingwei [State Key Laboratory for Physical Chemistry of Solid Surfaces and Department of Chemistry, College of Chemistry and Chemical Engineering, Xiamen University, Xiamen, Fujian 361005 (China)
2011-10-01
Highlights: > A novel strategy based on a combination of interferent depletion and redox cycling is proposed for plane-recessed microdisk array electrodes. > The strategy breaks the restriction of selectively detecting a species that exhibits a reversible reaction in a mixture with one that exhibits an irreversible reaction. > The electrodes enhance the current signal by redox cycling. > The electrodes work regardless of the reversibility of the interfering species. - Abstract: The fabrication, characterization and application of plane-recessed microdisk array electrodes for selective detection are demonstrated. The electrodes, fabricated by lithographic microfabrication technology, are composed of a planar film electrode and a 32 x 32 recessed microdisk array electrode. Unlike the commonly used redox cycling operating mode for array configurations such as interdigitated array electrodes, a novel strategy based on a combination of interferent depletion and redox cycling is proposed for electrodes with an appropriate configuration. The planar film electrode (the plane electrode) is used to deplete the interferent in the diffusion layer. The recessed microdisk array electrode (the microdisk array), located within the diffusion layer of the plane electrode, detects the target analyte in the interferent-depleted diffusion layer. In addition, the microdisk array overcomes the disadvantage of the low current signal of a single microelectrode. Moreover, the current signal of a target analyte that undergoes reversible electron transfer can be enhanced by the redox cycling between the plane electrode and the microdisk array. Based on the above working principle, the plane-recessed microdisk array electrodes overcome the restriction of selectively detecting a species that exhibits a reversible reaction in a mixture with one that exhibits an irreversible reaction, which is a limitation of the single redox cycling operating mode. The advantages of the
Fission yield calculation using toy model based on Monte Carlo simulation
Energy Technology Data Exchange (ETDEWEB)
Jubaidah, E-mail: jubaidah@student.itb.ac.id [Nuclear Physics and Biophysics Division, Department of Physics, Bandung Institute of Technology. Jl. Ganesa No. 10 Bandung – West Java, Indonesia 40132 (Indonesia); Physics Department, Faculty of Mathematics and Natural Science – State University of Medan. Jl. Willem Iskandar Pasar V Medan Estate – North Sumatera, Indonesia 20221 (Indonesia); Kurniadi, Rizal, E-mail: rijalk@fi.itb.ac.id [Nuclear Physics and Biophysics Division, Department of Physics, Bandung Institute of Technology. Jl. Ganesa No. 10 Bandung – West Java, Indonesia 40132 (Indonesia)
2015-09-30
The toy model is a new approach to predicting fission yield distributions. It treats the nucleus as an elastic toy consisting of marbles, where the number of marbles represents the number of nucleons, A. This toy nucleus is able to imitate the properties of a real nucleus. In this research, the toy nucleons are influenced only by a central force. A heavy toy nucleus induced by a toy nucleon splits into two fragments; these two fission fragments constitute the fission yield. In this research, energy entanglement is neglected. The fission process in the toy model is illustrated by two Gaussian curves intersecting each other. Five Gaussian parameters are used: the scission point of the two curves (R{sub c}), the means of the left and right curves (μ{sub L} and μ{sub R}), and the deviations of the left and right curves (σ{sub L} and σ{sub R}). The fission yield distribution is analyzed by Monte Carlo simulation. The results show that variation in σ or µ can significantly shift the average frequency of asymmetric fission yields and also varies the range of the fission yield probability distribution. In addition, variation in the iteration coefficient only changes the frequency of fission yields. The Monte Carlo simulation for fission yield calculation using the toy model successfully indicates the same tendency as experimental results, where the average light fission yield is in the range of 90
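The two-intersecting-Gaussians picture above can be caricatured as a small Monte Carlo sampler (a hedged sketch only; the mass number and Gaussian parameters below are arbitrary placeholders, not those fitted in the study, and the right curve is fixed here by mass conservation rather than sampled independently):

```python
import random

# Toy fission-yield sampler: each fission event draws a light-fragment
# mass from the left Gaussian and pairs it with A - m_light for the
# heavy fragment, mimicking the two intersecting Gaussian curves.
def sample_yields(n_events, A=236, mu_L=95.0, sigma_L=6.0, seed=1):
    rng = random.Random(seed)
    light, heavy = [], []
    for _ in range(n_events):
        m = rng.gauss(mu_L, sigma_L)   # sample from the left curve
        light.append(m)
        heavy.append(A - m)            # mass conservation fixes the partner
    return light, heavy

light, heavy = sample_yields(10000)
print(sum(light) / len(light))  # close to mu_L for large n_events
```

Varying `sigma_L` or `mu_L` in this sketch shifts the spread and location of the asymmetric yield peaks, which is the qualitative behavior the abstract reports.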
Fission yield calculation using toy model based on Monte Carlo simulation
International Nuclear Information System (INIS)
The toy model is a new approach to predicting fission yield distributions. It treats the nucleus as an elastic toy consisting of marbles, where the number of marbles represents the number of nucleons, A. This toy nucleus is able to imitate the properties of a real nucleus. In this research, the toy nucleons are influenced only by a central force. A heavy toy nucleus induced by a toy nucleon splits into two fragments; these two fission fragments constitute the fission yield. In this research, energy entanglement is neglected. The fission process in the toy model is illustrated by two Gaussian curves intersecting each other. Five Gaussian parameters are used: the scission point of the two curves (Rc), the means of the left and right curves (μL and μR), and the deviations of the left and right curves (σL and σR). The fission yield distribution is analyzed by Monte Carlo simulation. The results show that variation in σ or µ can significantly shift the average frequency of asymmetric fission yields and also varies the range of the fission yield probability distribution. In addition, variation in the iteration coefficient only changes the frequency of fission yields. The Monte Carlo simulation for fission yield calculation using the toy model successfully indicates the same tendency as experimental results, where the average light fission yield is in the range of 90
International Nuclear Information System (INIS)
After an accidental release of radionuclides to the inhabited environment, external gamma irradiation from deposited radioactivity contributes significantly to the radiation exposure of the population for extended periods. Evaluating this exposure pathway requires three main model components: (i) calculation of the air kerma value per photon emitted per unit source area, based on Monte Carlo (MC) simulations; (ii) a description of the distribution and dynamics of radionuclides on the diverse urban surfaces; and (iii) combination of all these elements in a relevant urban model to calculate the resulting doses according to the actual scenario. This paper provides an overview of the different approaches to calculating photon transport in urban areas and of several published dose calculation codes. Two types of Monte Carlo simulation are presented, using the global and the local approaches to photon transport. Moreover, two different philosophies of dose calculation, the 'location factor method' and a combination of the relative contamination of surfaces with air kerma values, are described. The main features of six codes (ECOSYS, EDEM2M, EXPURT, PARATI, TEMAS, URGENT) are highlighted, together with a short intercomparison of model features
A Strategy for Finding the Optimal Scale of Plant Core Collection Based on Monte Carlo Simulation
Directory of Open Access Journals (Sweden)
Jiancheng Wang
2014-01-01
Full Text Available Core collection is an ideal resource for genome-wide association studies (GWAS). A subcore collection is a subset of a core collection. A strategy was proposed for finding the optimal sampling percentage for a plant subcore collection based on Monte Carlo simulation. A cotton germplasm group of 168 accessions with 20 quantitative traits was used to construct subcore collections. A mixed linear model approach was used to eliminate the environment effect and the GE (genotype × environment) effect. The least distance stepwise sampling (LDSS) method, combining 6 commonly used genetic distances and the unweighted pair-group average (UPGMA) clustering method, was adopted to construct subcore collections. A homogeneous population assessment method was adopted to assess the validity of 7 evaluating parameters of the subcore collection. Monte Carlo simulation was conducted on the sampling percentage, the number of traits, and the evaluating parameters. A new method for "distilling free-form natural laws from experimental data" was adopted to find the best formula for determining the optimal sampling percentages. The results showed that the coincidence rate of range (CR) was the most valid evaluating parameter and is suitable to serve as a threshold for finding the optimal sampling percentage. Principal component analysis showed that subcore collections constructed with the optimal sampling percentages calculated by the present strategy were well representative.
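The coincidence rate of range (CR) used as the threshold parameter above is conventionally the mean, over traits, of the ratio of a trait's range in the subset to its range in the full collection. A minimal sketch (the two trait matrices are hypothetical, and this may simplify the exact formula used in the study):

```python
# CR = (100 / n_traits) * sum_t (range of trait t in subset) /
#                                (range of trait t in full collection)
def coincidence_rate_of_range(full, subset):
    """full, subset: lists of per-accession trait-value rows."""
    n_traits = len(full[0])
    ratios = []
    for t in range(n_traits):
        col_full = [row[t] for row in full]
        col_sub = [row[t] for row in subset]
        ratios.append((max(col_sub) - min(col_sub)) /
                      (max(col_full) - min(col_full)))
    return 100.0 * sum(ratios) / n_traits  # percent

# Three accessions, two traits; the subset keeps the full range of trait 1
# but only half the range of trait 2.
full = [[1.0, 10.0], [5.0, 30.0], [9.0, 50.0]]
subset = [[1.0, 10.0], [9.0, 30.0]]
print(coincidence_rate_of_range(full, subset))  # 75.0
```

A CR close to 100% indicates the subcore preserves the trait ranges of the whole collection, which is why it can serve as a sampling-percentage threshold.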
Lechtman, E; Mashouf, S; Chattopadhyay, N; Keller, B M; Lai, P; Cai, Z; Reilly, R M; Pignol, J-P
2013-05-21
Radiosensitization using gold nanoparticles (AuNPs) has been shown to vary widely with cell line, irradiation energy, AuNP size, concentration and intracellular localization. We developed a Monte Carlo-based AuNP radiosensitization predictive model (ARP), which takes into account the detailed energy deposition at the nano-scale. This model was compared to experimental cell survival and macroscopic dose enhancement predictions. PC-3 prostate cancer cell survival was characterized after irradiation using a 300 kVp photon source with and without AuNPs present in the cell culture media. Detailed Monte Carlo simulations were conducted, producing individual tracks of photoelectric products escaping AuNPs and energy deposition was scored in nano-scale voxels in a model cell nucleus. Cell survival in our predictive model was calculated by integrating the radiation induced lethal event density over the nucleus volume. Experimental AuNP radiosensitization was observed with a sensitizer enhancement ratio (SER) of 1.21 ± 0.13. SERs estimated using the ARP model and the macroscopic enhancement model were 1.20 ± 0.12 and 1.07 ± 0.10 respectively. In the hypothetical case of AuNPs localized within the nucleus, the ARP model predicted a SER of 1.29 ± 0.13, demonstrating the influence of AuNP intracellular localization on radiosensitization.
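The survival calculation described above, integrating the radiation-induced lethal event density over the nucleus volume, can be caricatured as follows (an illustrative sketch only; the voxel densities and volume are hypothetical numbers, not values from the ARP model):

```python
import math

# Cell survival from a voxelized lethal-event density: survival
# probability is exp(-N_lethal), where N_lethal is the integral
# (here a discrete sum) of lethal event density over nucleus voxels.
def survival(lethal_density, voxel_volume):
    n_lethal = sum(lethal_density) * voxel_volume
    return math.exp(-n_lethal)

# Hypothetical densities (lethal events per um^3) over four 1 um^3 voxels.
print(survival([0.2, 0.1, 0.05, 0.15], 1.0))
```

Because the exponent sums over nano-scale voxels, localizing AuNPs closer to the nucleus raises the local lethal event density and hence the predicted sensitizer enhancement, consistent with the hypothetical intranuclear case reported above.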
Sign Learning Kink-based (SiLK) Quantum Monte Carlo for molecular systems
International Nuclear Information System (INIS)
The Sign Learning Kink (SiLK) based Quantum Monte Carlo (QMC) method is used to calculate the ab initio ground state energies for multiple geometries of the H2O, N2, and F2 molecules. The method is based on Feynman’s path integral formulation of quantum mechanics and has two stages. The first stage is called the learning stage and reduces the well-known QMC minus sign problem by optimizing the linear combinations of Slater determinants which are used in the second stage, a conventional QMC simulation. The method is tested using different vector spaces and compared to the results of other quantum chemical methods and to exact diagonalization. Our findings demonstrate that the SiLK method is accurate and reduces or eliminates the minus sign problem
Simulation of Cone Beam CT System Based on Monte Carlo Method
Wang, Yu; Cao, Ruifen; Hu, Liqin; Li, Bingbing
2014-01-01
Adaptive Radiation Therapy (ART) was developed from Image-Guided Radiation Therapy (IGRT) and is the trend in photon radiation therapy. To make better use of Cone Beam CT (CBCT) images for ART, a CBCT system model was established based on a Monte Carlo program and validated against measurement. The BEAMnrc program was adopted to model the kV x-ray tube. Both ISOURCE-13 and ISOURCE-24 were chosen to simulate the path of the beam particles. The measured Percentage Depth Dose (PDD) and lateral dose profiles under 1 cm of water were compared with the dose calculated by the DOSXYZnrc program. The calculated PDD agreed to better than 1% within a depth of 10 cm, and more than 85% of the points on the calculated lateral dose profiles were within 2%. The validated CBCT system model helps to improve CBCT image quality for dose verification in ART and to assess the concomitant dose risk of CBCT imaging.
Sign learning kink-based (SiLK) quantum Monte Carlo for molecular systems
Ma, Xiaoyao; Loffler, Frank; Kowalski, Karol; Bhaskaran-Nair, Kiran; Jarrell, Mark; Moreno, Juana
2015-01-01
The Sign Learning Kink (SiLK) based Quantum Monte Carlo (QMC) method is used to calculate the ab initio ground state energies for multiple geometries of the H$_{2}$O, N$_2$, and F$_2$ molecules. The method is based on Feynman's path integral formulation of quantum mechanics and has two stages. The first stage is called the learning stage and reduces the well-known QMC minus sign problem by optimizing the linear combinations of Slater determinants which are used in the second stage, a conventional QMC simulation. The method is tested using different vector spaces and compared to the results of other quantum chemical methods and to exact diagonalization. Our findings demonstrate that the SiLK method is accurate and reduces or eliminates the minus sign problem.
Sign Learning Kink-based (SiLK) Quantum Monte Carlo for molecular systems.
Ma, Xiaoyao; Hall, Randall W; Löffler, Frank; Kowalski, Karol; Bhaskaran-Nair, Kiran; Jarrell, Mark; Moreno, Juana
2016-01-01
The Sign Learning Kink (SiLK) based Quantum Monte Carlo (QMC) method is used to calculate the ab initio ground state energies for multiple geometries of the H2O, N2, and F2 molecules. The method is based on Feynman's path integral formulation of quantum mechanics and has two stages. The first stage is called the learning stage and reduces the well-known QMC minus sign problem by optimizing the linear combinations of Slater determinants which are used in the second stage, a conventional QMC simulation. The method is tested using different vector spaces and compared to the results of other quantum chemical methods and to exact diagonalization. Our findings demonstrate that the SiLK method is accurate and reduces or eliminates the minus sign problem. PMID:26747795
Sign Learning Kink-based (SiLK) Quantum Monte Carlo for molecular systems
Ma, Xiaoyao; Hall, Randall W.; Löffler, Frank; Kowalski, Karol; Bhaskaran-Nair, Kiran; Jarrell, Mark; Moreno, Juana
2016-01-01
The Sign Learning Kink (SiLK) based Quantum Monte Carlo (QMC) method is used to calculate the ab initio ground state energies for multiple geometries of the H2O, N2, and F2 molecules. The method is based on Feynman's path integral formulation of quantum mechanics and has two stages. The first stage is called the learning stage and reduces the well-known QMC minus sign problem by optimizing the linear combinations of Slater determinants which are used in the second stage, a conventional QMC simulation. The method is tested using different vector spaces and compared to the results of other quantum chemical methods and to exact diagonalization. Our findings demonstrate that the SiLK method is accurate and reduces or eliminates the minus sign problem.
Sign learning kink-based (SiLK) quantum Monte Carlo for molecular systems
Energy Technology Data Exchange (ETDEWEB)
Ma, Xiaoyao; Hall, Randall W.; Loffler, Frank; Kowalski, Karol; Bhaskaran-Nair, Kiran; Jarrell, Mark; Moreno, Juana
2016-01-07
The Sign Learning Kink (SiLK) based Quantum Monte Carlo (QMC) method is used to calculate the ab initio ground state energies for multiple geometries of the H2O, N2, and F2 molecules. The method is based on Feynman’s path integral formulation of quantum mechanics and has two stages. The first stage is called the learning stage and reduces the well-known QMC minus sign problem by optimizing the linear combinations of Slater determinants which are used in the second stage, a conventional QMC simulation. The method is tested using different vector spaces and compared to the results of other quantum chemical methods and to exact diagonalization. Our findings demonstrate that the SiLK method is accurate and reduces or eliminates the minus sign problem.
Sign Learning Kink-based (SiLK) Quantum Monte Carlo for molecular systems
Energy Technology Data Exchange (ETDEWEB)
Ma, Xiaoyao [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana 70803 (United States); Hall, Randall W. [Department of Natural Sciences and Mathematics, Dominican University of California, San Rafael, California 94901 (United States); Department of Chemistry, Louisiana State University, Baton Rouge, Louisiana 70803 (United States); Löffler, Frank [Center for Computation and Technology, Louisiana State University, Baton Rouge, Louisiana 70803 (United States); Kowalski, Karol [William R. Wiley Environmental Molecular Sciences Laboratory, Battelle, Pacific Northwest National Laboratory, Richland, Washington 99352 (United States); Bhaskaran-Nair, Kiran; Jarrell, Mark; Moreno, Juana [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana 70803 (United States); Center for Computation and Technology, Louisiana State University, Baton Rouge, Louisiana 70803 (United States)
2016-01-07
The Sign Learning Kink (SiLK) based Quantum Monte Carlo (QMC) method is used to calculate the ab initio ground state energies for multiple geometries of the H{sub 2}O, N{sub 2}, and F{sub 2} molecules. The method is based on Feynman’s path integral formulation of quantum mechanics and has two stages. The first stage is called the learning stage and reduces the well-known QMC minus sign problem by optimizing the linear combinations of Slater determinants which are used in the second stage, a conventional QMC simulation. The method is tested using different vector spaces and compared to the results of other quantum chemical methods and to exact diagonalization. Our findings demonstrate that the SiLK method is accurate and reduces or eliminates the minus sign problem.
Study of CANDU thorium-based fuel cycles by deterministic and Monte Carlo methods
International Nuclear Information System (INIS)
In the framework of the Generation IV forum, there is a renewal of interest in self-sustainable thorium fuel cycles applied to various concepts such as Molten Salt Reactors [1, 2] or High Temperature Reactors [3, 4]. Precise evaluations of the U-233 production potential relying on existing reactors such as PWRs [5] or CANDUs [6] are hence necessary. As a consequence of its design (online refueling and D2O moderator in a thermal spectrum), the CANDU reactor has moreover an excellent neutron economy and consequently a high fissile conversion ratio [7]. For these reasons, we try here, with a shorter term view, to re-evaluate the economic competitiveness of once-through thorium-based fuel cycles in CANDU [8]. Two simulation tools are used: the deterministic Canadian cell code DRAGON [9] and MURE [10], a C++ tool for reactor evolution calculations based on the Monte Carlo code MCNP [11]. (authors)
Quasi-Monte Carlo Simulation-Based SFEM for Slope Reliability Analysis
Institute of Scientific and Technical Information of China (English)
Yu Yuzhen; Xie Liquan; Zhang Bingyin
2005-01-01
Considering the stochastic spatial variation of geotechnical parameters over the slope, a Stochastic Finite Element Method (SFEM) is established based on the combination of the Shear Strength Reduction (SSR) concept and quasi-Monte Carlo simulation. The shear strength reduction FEM is superior to the slice method based on limit equilibrium theory in many ways, so it is more powerful for assessing the reliability of global slope stability when combined with probability theory. To illustrate the performance of the proposed method, it is applied to an example of a simple slope. The simulation results show that the proposed method is effective for performing reliability analysis of global slope stability without presupposing a potential slip surface.
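The pairing of quasi-Monte Carlo sampling with a limit-state check can be sketched as below. This is illustrative only: a Halton low-discrepancy sequence and a toy linear limit-state function in two hypothetical soil parameters stand in for the SSR finite-element model, and all numerical values are invented placeholders:

```python
import statistics

def halton(i, base):
    """i-th element (1-indexed) of the van der Corput sequence in `base`."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def failure_probability(n=4096):
    nd = statistics.NormalDist()
    failures = 0
    for i in range(1, n + 1):
        u1, u2 = halton(i, 2), halton(i, 3)   # low-discrepancy pair in (0,1)^2
        c = 10.0 + 2.0 * nd.inv_cdf(u1)       # cohesion (kPa), hypothetical
        t = 0.35 + 0.05 * nd.inv_cdf(u2)      # tan(friction angle), hypothetical
        if c + 20.0 * t < 14.0:               # toy limit-state: resistance < demand
            failures += 1
    return failures / n

print(failure_probability())
```

Compared with pseudo-random sampling, the low-discrepancy points cover the parameter space more evenly, which is the usual motivation for quasi-Monte Carlo in reliability analysis where each sample (here a cheap formula, in the paper a full SSR FEM run) is expensive.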
GGEMS-Brachy: GPU GEant4-based Monte Carlo simulation for brachytherapy applications
International Nuclear Information System (INIS)
In brachytherapy, plans are routinely calculated using the AAPM TG43 formalism, which considers the patient as a simple water object. Accurate modeling of the physical processes accounting for patient heterogeneity using Monte Carlo simulation (MCS) methods is currently too time-consuming and computationally demanding to be routinely used. In this work we implemented and evaluated an accurate and fast MCS on Graphics Processing Units (GPU) for brachytherapy low dose rate (LDR) applications. A previously proposed Geant4-based MCS framework implemented on GPU (GGEMS) was extended to include a hybrid GPU navigator, allowing navigation within voxelized patient-specific images and analytically modeled 125I seeds used in LDR brachytherapy. In addition, dose scoring based on a track length estimator, including uncertainty calculations, was incorporated. The implemented GGEMS-brachy platform was validated by comparison with Geant4 simulations and reference datasets. Finally, a comparative dosimetry study based on the current clinical standard (TG43) and the proposed platform was performed on twelve prostate cancer patients undergoing LDR brachytherapy. Considering patient 3D CT volumes of 400 × 250 × 65 voxels and an average of 58 implanted seeds, the mean run time of a patient dosimetry study for a 2% dose uncertainty was 9.35 s (≈500 ms per 10^6 simulated particles) and 2.5 s when using one and four GPUs, respectively. The performance of the proposed GGEMS-brachy platform allows envisaging the use of Monte Carlo simulation-based dosimetry studies in brachytherapy compatible with clinical practice. Although the proposed platform was evaluated for prostate cancer, it is equally applicable to other LDR brachytherapy clinical applications. Future extensions will allow its application to high dose rate brachytherapy. (paper)
MaGe - a Geant4-based Monte Carlo framework for low-background experiments
Chan, Yuen-Dat; Henning, Reyco; Gehman, Victor M; Johnson, Rob A; Jordan, David V; Kazkaz, Kareem; Knapp, Markus; Kroninger, Kevin; Lenz, Daniel; Liu, Jing; Liu, Xiang; Marino, Michael G; Mokhtarani, Akbar; Pandola, Luciano; Schubert, Alexis G; Tomei, Claudia
2008-01-01
A Monte Carlo framework, MaGe, has been developed based on the Geant4 simulation toolkit. Its purpose is to simulate physics processes in low-energy and low-background radiation detectors, specifically for the Majorana and Gerda $^{76}$Ge neutrinoless double-beta decay experiments. This jointly-developed tool is also used to verify the simulation of physics processes relevant to other low-background experiments in Geant4. The MaGe framework contains simulations of prototype experiments and test stands, and is easily extended to incorporate new geometries and configurations while still using the same verified physics processes, tunings, and code framework. This reduces duplication of efforts and improves the robustness of and confidence in the simulation output.
Auxiliary-field based trial wave functions in quantum Monte Carlo simulations
Chang, Chia-Chen; Rubenstein, Brenda; Morales, Miguel
We propose a simple scheme for generating correlated multi-determinant trial wave functions for quantum Monte Carlo algorithms. The method is based on the Hubbard-Stratonovich transformation, which decouples a two-body Jastrow-type correlator into one-body projectors coupled to auxiliary fields. We apply the technique to generate stochastic representations of the Gutzwiller wave function, and present benchmark results for the ground state energy of the Hubbard model in one dimension. Extensions of the proposed scheme to chemical systems will also be discussed. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344, 15-ERD-013.
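The decoupling step described above rests on the standard Gaussian (Hubbard-Stratonovich) identity; schematically (a sketch of the textbook identity, not the authors' exact operator form):

```latex
% Hubbard-Stratonovich identity: a quadratic form in the exponent is
% traded for a linear coupling to an auxiliary field \phi:
e^{\tfrac{1}{2}\lambda \hat{A}^{2}}
  \;=\; \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} d\phi\;
        e^{-\tfrac{1}{2}\phi^{2}}\, e^{\sqrt{\lambda}\,\phi\,\hat{A}} .
% A two-body Jastrow-type correlator e^{\lambda \hat{A}^2 / 2} thus becomes
% an integral over one-body projectors e^{\sqrt{\lambda}\,\phi\,\hat{A}},
% which can be sampled stochastically over the auxiliary field \phi.
```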
Calculation and analysis of heat source of PWR assemblies based on Monte Carlo method
International Nuclear Information System (INIS)
When fission occurs in the nuclear fuel of a reactor core, it releases numerous neutrons and γ rays, which deposit energy in the fuel components and produce effects such as thermal stress and radiation damage that influence the safe operation of the reactor. The three-dimensional Monte Carlo transport program MCNP, with continuous-energy cross-section libraries based on the ENDF/B series, was used to calculate the heat rate of the heat source in reference assemblies of a PWR loaded in an 18-month short refueling cycle mode, and to obtain precise values for the control rod, thimble plug, and new Gd-bearing burnable poison rod, so as to provide a basis for reactor design and safety verification. (authors)
Monte Carlo-based Noise Compensation in Coil Intensity Corrected Endorectal MRI
Lui, Dorothy; Haider, Masoom; Wong, Alexander
2015-01-01
Background: Prostate cancer is one of the most common forms of cancer found in males making early diagnosis important. Magnetic resonance imaging (MRI) has been useful in visualizing and localizing tumor candidates and with the use of endorectal coils (ERC), the signal-to-noise ratio (SNR) can be improved. The coils introduce intensity inhomogeneities and the surface coil intensity correction built into MRI scanners is used to reduce these inhomogeneities. However, the correction typically performed at the MRI scanner level leads to noise amplification and noise level variations. Methods: In this study, we introduce a new Monte Carlo-based noise compensation approach for coil intensity corrected endorectal MRI which allows for effective noise compensation and preservation of details within the prostate. The approach accounts for the ERC SNR profile via a spatially-adaptive noise model for correcting non-stationary noise variations. Such a method is useful particularly for improving the image quality of coil i...
GPU-based fast Monte Carlo simulation for radiotherapy dose calculation
Jia, Xun; Graves, Yan Jiang; Folkerts, Michael; Jiang, Steve B
2011-01-01
Monte Carlo (MC) simulation is commonly considered to be the most accurate dose calculation method in radiotherapy. However, its efficiency still requires improvement for many routine clinical applications. In this paper, we present our recent progress towards the development of a GPU-based MC dose calculation package, gDPM v2.0. It utilizes the parallel computation ability of a GPU to achieve high efficiency, while maintaining the same particle transport physics as in the original DPM code and hence the same level of simulation accuracy. In GPU computing, divergence of execution paths between threads can considerably reduce the efficiency. Since photons and electrons undergo different physics and hence follow different execution paths, we use a simulation scheme in which photon transport and electron transport are separated to partially relieve the thread divergence issue. A high performance random number generator and hardware linear interpolation are also utilized. We have also developed various components to hand...
International Nuclear Information System (INIS)
Internal radiation dose calculation based on Chinese models is important in nuclear medicine. Most existing models are based on the physical and anatomical data of Caucasians, whose anatomical structure and physiological parameters differ considerably from those of the Chinese population, which may significantly affect internal dose estimates. It is therefore necessary to establish a model based on Chinese ethnic characteristics and apply it to radiation dosimetry calculations. In this study, a voxel model was established based on the high-resolution Visible Chinese Human (VCH). The transport of photons and electrons was simulated using the MCNPX Monte Carlo code. Absorbed fractions (AF) and specific absorbed fractions (SAF) were calculated, and S-factors and mean absorbed doses for organs with 99mTc located in the liver were also obtained. In comparison with the VIP-Man and MIRD models, discrepancies were found to be correlated with racial and anatomical differences in organ mass and inter-organ distance. Internal dosimetry data based on other models previously applied to the Chinese adult population can thus be replaced with Chinese-specific data. The results provide a reference for nuclear medicine, such as dose verification after surgery and potential radiation evaluation for radionuclides in preclinical research, etc. (authors)
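The dosimetric chain mentioned above (AF → SAF → S-factor) follows the usual MIRD-style definitions; a minimal sketch with invented placeholder numbers (the real calculation sums over the full emission spectrum and all source-target organ pairs):

```python
# MIRD-style chain: SAF = AF / target mass; the S-factor is the sum over
# emission types of (mean energy emitted per decay) * SAF.
# All numerical values below are hypothetical placeholders.
def saf(absorbed_fraction, target_mass_kg):
    return absorbed_fraction / target_mass_kg  # kg^-1

def s_factor(emissions, target_mass_kg):
    """emissions: list of (energy in J per decay, absorbed fraction) pairs."""
    return sum(e * saf(af, target_mass_kg) for e, af in emissions)  # Gy/decay

# Two hypothetical emission components and a 1.8 kg target organ (liver-like).
print(s_factor([(2.25e-14, 0.4), (1.0e-14, 0.9)], 1.8))
```

Because SAF divides by target mass and AF depends on inter-organ distance, racial differences in organ mass and geometry propagate directly into the S-factors, which is the discrepancy pattern the abstract reports.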
A CAD based automatic modeling method for primitive solid based Monte Carlo calculation geometry
International Nuclear Information System (INIS)
The Multi-Physics Coupling Analysis Modeling Program (MCAM), developed by the FDS Team, China, is an advanced modeling tool aimed at solving the modeling challenges of multi-physics coupling simulation. An automatic modeling method for SuperMC, the Super Monte Carlo Calculation Program for Nuclear and Radiation Process, was recently developed and integrated into MCAM5.2. This method can convert bidirectionally between a CAD model and a SuperMC input file. When converting from a CAD model to a SuperMC model, the CAD model is decomposed into a set of convex solids, and the corresponding SuperMC convex basic solids are generated and output. When converting from a SuperMC model to a CAD model, the basic primitive solids are created and the related operations are performed according to the SuperMC model. The method was benchmarked with the ITER Benchmark model. The results showed that the method is correct and effective. (author)
Directory of Open Access Journals (Sweden)
Yu Hyeong
2010-12-01
Full Text Available Abstract Background The removal of high-abundance proteins from plasma is an efficient approach for investigating flow-through proteins in biomarker discovery studies. Most depletion methods are based on multiple immunoaffinity methods available commercially, including LC columns and spin columns. Despite its usefulness, high-abundance depletion has an intrinsic problem, the sponge effect, which should be assessed during depletion experiments. Concurrently, the yield of depletion of high-abundance proteins must be monitored during use of the depletion column. To date, there is no reasonable technique for measuring the recovery of flow-through proteins after depletion and assessing the capacity for capture of high-abundance proteins. Results In this study, we developed a method for measuring the recovery yields of a multiple affinity removal system column easily and rapidly, using enhanced green fluorescent protein as an indicator of flow-through proteins. We also monitored the capture efficiency through depletion of a high-abundance protein, albumin labeled with fluorescein isothiocyanate. Conclusion This simple method can be applied easily to common high-abundance protein depletion methods, effectively reducing experimental variation in biomarker discovery studies.
International Nuclear Information System (INIS)
We present a new Monte Carlo method based upon the theoretical proposal of Claverie and Soto. By contrast with other Quantum Monte Carlo methods used so far, the present approach uses a pure diffusion process without any branching. The many-fermion problem (with the specific constraint due to the Pauli principle) receives a natural solution in the framework of this method: in particular, there is neither the fixed-node approximation not the nodal release problem which occur in other approaches (see, e.g., Ref. 8 for a recent account). We give some numerical results concerning simple systems in order to illustrate the numerical feasibility of the proposed algorithm
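The flavor of a branching-free pure diffusion walk can be sketched for a 1D harmonic oscillator (a loose illustration of drifted diffusion with weight accumulation in place of branching, not the Claverie-Soto algorithm itself; the trial-function parameter and run lengths are arbitrary choices):

```python
import math, random

# Pure diffusion Monte Carlo without branching: walkers follow the drifted
# diffusion of a trial function psi_T(x) = exp(-a x^2 / 2) for the 1D
# harmonic oscillator (hbar = m = omega = 1, exact ground state E0 = 0.5),
# and each walker carries a multiplicative weight instead of being
# replicated or killed.
def pure_diffusion_energy(a=0.8, n_walkers=2000, n_steps=400, dt=0.01, seed=7):
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n_walkers)]
    ws = [1.0] * n_walkers
    e_local = lambda x: a / 2.0 + 0.5 * (1.0 - a * a) * x * x  # local energy
    for _ in range(n_steps):
        for i in range(n_walkers):
            drift = -a * xs[i]  # grad ln psi_T
            xs[i] += drift * dt + rng.gauss(0.0, math.sqrt(dt))
            # Weight update replaces birth/death branching entirely:
            ws[i] *= math.exp(-dt * (e_local(xs[i]) - 0.5))  # 0.5 = reference
    return sum(w * e_local(x) for w, x in zip(ws, xs)) / sum(ws)

print(pure_diffusion_energy())  # should approach the exact value 0.5
```

Carrying weights avoids the population-control machinery of branching algorithms, at the cost of growing weight variance for long projection times; the abstract's point is that the fermion sign constraint also finds a natural place in this pure-diffusion setting.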
International Nuclear Information System (INIS)
Pollutant nitrogen deposition effects on soil and foliar element concentrations were investigated in acidic and limestone grasslands, located in one of the most nitrogen- and acid-rain-polluted regions of the UK, using plots treated for 8-10 years with 35-140 kg N ha-1 y-1 as NH4NO3. Historic data suggest both grasslands have acidified over the past 50 years. The nitrogen deposition treatments caused the grassland soils to lose 23-35% of their total available bases (Ca, Mg, K, and Na), and the soils became acidified by 0.2-0.4 pH units. Aluminium, iron and manganese were mobilised and taken up by limestone grassland forbs, and were translocated down the acid grassland soil profile. Mineral nitrogen availability increased in both grasslands and many species showed foliar N enrichment. This study provides the first definitive evidence that nitrogen deposition depletes base cations from grassland soils. The resulting acidification, metal mobilisation and eutrophication are implicated in driving floristic changes. - Nitrogen deposition causes base cation depletion, acidification and eutrophication of semi-natural grassland soils
Development of a GPU-based Monte Carlo dose calculation code for coupled electron-photon transport
Jia, Xun; Sempau, Josep; Choi, Dongju; Majumdar, Amitava; Jiang, Steve B
2009-01-01
Monte Carlo simulation is the most accurate method for absorbed dose calculations in radiotherapy. Its efficiency still requires improvement for routine clinical applications, especially for online adaptive radiotherapy. In this paper, we report our recent development of a GPU-based Monte Carlo dose calculation code for coupled electron-photon transport. We have implemented the Dose Planning Method (DPM) Monte Carlo dose calculation package (Sempau et al, Phys. Med. Biol., 45(2000)2263-2291) on GPU architecture under the CUDA platform. The implementation has been tested against the original sequential DPM code on CPU in two cases. Our results demonstrate the adequate accuracy of the GPU implementation for both electron and photon beams in the radiotherapy energy range. Speed-up factors of 4.5 and 5.5 have been observed for the electron and photon testing cases, respectively, using an NVIDIA Tesla C1060 GPU card against a 2.27 GHz Intel Xeon CPU.
Monte Carlo vs. Pencil Beam based optimization of stereotactic lung IMRT
Directory of Open Access Journals (Sweden)
Weinmann Martin
2009-12-01
Background: The purpose of the present study is to compare finite size pencil beam (fsPB) and Monte Carlo (MC) based optimization of lung intensity-modulated stereotactic radiotherapy (lung IMSRT). Materials and methods: A fsPB and a MC algorithm, as implemented in a biological IMRT planning system, were validated by film measurements in a static lung phantom. They were then applied to static lung IMSRT planning based on three different geometrical patient models (one-phase static CT, density-overwrite one-phase static CT, average CT) of the same patient. Both 6 and 15 MV beam energies were used. The resulting treatment plans were compared by how well they fulfilled the prescribed optimization constraints, both for the dose distributions calculated on the static patient models and for the accumulated dose, recalculated with MC on each of 8 CTs of a 4DCT set. Results: In the phantom measurements, the MC dose engine showed discrepancies ... Conclusions: It is feasible to employ the MC dose engine for optimization of lung IMSRT and the plans are superior to fsPB. Use of static patient models introduces a bias in the MC dose distribution compared to the 4D MC recalculated dose, but this bias is predictable, and therefore MC based optimization on static patient models is considered safe.
A new method for RGB to CIELAB color space transformation based on Markov chain Monte Carlo
Chen, Yajun; Liu, Ding; Liang, Junli
2013-10-01
During printing quality inspection, the measurement of color error is an important task. However, the RGB color space is device-dependent; RGB colors captured by a CCD camera must usually be transformed into the CIELAB color space, which is perceptually uniform and device-independent. To cope with this problem, a Markov chain Monte Carlo (MCMC) based algorithm for the RGB to CIELAB color space transformation is proposed in this paper. Firstly, modeling color targets and testing color targets are established, used in the modeling and performance testing processes, respectively. Secondly, we derive a Bayesian model for estimating the coefficients of a polynomial that describes the relation between the RGB and CIELAB color spaces. Thirdly, a Markov chain is set up based on the Gibbs sampling algorithm (one of the MCMC algorithms) to estimate the coefficients of the polynomial. Finally, the color difference of the testing color targets is computed to evaluate the performance of the proposed method. The experimental results showed that the nonlinear polynomial regression based on the MCMC algorithm is effective; its performance is similar to that of the least-squares approach, and it can accurately model the RGB to CIELAB color space conversion and guarantee the color error evaluation for a printing quality inspection system.
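The Bayesian polynomial fit described in this abstract can be illustrated with a short sketch. The paper uses Gibbs sampling on a higher-order polynomial; the toy version below is only a stand-in, using a first-order polynomial and random-walk Metropolis (a simpler MCMC variant) on synthetic data in place of measured color targets.

```python
import numpy as np

def fit_poly_mcmc(rgb, lab, n_iter=30000, step=0.1, sigma=1.0, seed=0):
    """Random-walk Metropolis sketch for a linear-in-coefficients mapping
    lab ~ [1, R, G, B] @ beta. Simplified stand-in for the Gibbs sampler
    on a higher-order polynomial described in the abstract."""
    rng = np.random.default_rng(seed)
    X = np.column_stack([np.ones(len(rgb)), rgb])   # design matrix with intercept
    def log_post(b):                                # flat prior + Gaussian likelihood
        r = lab - X @ b
        return -0.5 * float(r @ r) / sigma**2
    beta = np.zeros(X.shape[1])
    lp = log_post(beta)
    samples = []
    for _ in range(n_iter):
        prop = beta + step * rng.standard_normal(beta.shape)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:     # Metropolis accept/reject
            beta, lp = prop, lp_prop
        samples.append(beta.copy())
    return np.mean(samples[n_iter // 2:], axis=0)   # posterior mean after burn-in
```

In practice the posterior mean of the coefficients plays the role of the fitted transformation, evaluated afterwards via color differences on the held-out testing targets.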
Nanoscale Field Effect Optical Modulators Based on Depletion of Epsilon-Near-Zero Films
Lu, Zhaolin; Shi, Kaifeng
2015-01-01
The field effect in metal-oxide-semiconductor (MOS) capacitors plays a key role in field-effect transistors (FETs), which are the fundamental building blocks of modern digital integrated circuits. Recent works show that the field effect can also be used to make optical/plasmonic modulators. In this paper, we report field effect electro-absorption modulators (FEOMs), each made of an ultrathin epsilon-near-zero (ENZ) film, as the active material, sandwiched in a silicon or plasmonic waveguide. Without a bias, the ENZ film maximizes the attenuation of the waveguides and the modulators work in the OFF state; contrariwise, depletion of the carriers in the ENZ film greatly reduces the attenuation and the modulators work in the ON state. A double-capacitor gating scheme is used to enhance the modulation by the field effect. According to our simulations, an extinction ratio of up to 3.44 dB can be achieved in a 500-nm-long Si waveguide with an insertion loss of only 0.71 dB (85.0%); an extinction ratio of up to 7.86 dB can be achieved...
Ultrafast cone-beam CT scatter correction with GPU-based Monte Carlo simulation
Directory of Open Access Journals (Sweden)
Yuan Xu
2014-03-01
Purpose: Scatter artifacts severely degrade the image quality of cone-beam CT (CBCT). We present an ultrafast scatter correction framework using GPU-based Monte Carlo (MC) simulation and a prior patient CT image, aiming to finish the whole process, including both scatter correction and reconstruction, automatically within 30 seconds. Methods: The method consists of six steps: (1) FDK reconstruction using the raw projection data; (2) rigid registration of the planning CT to the FDK results; (3) MC scatter calculation at sparse view angles using the planning CT; (4) interpolation of the calculated scatter signals to the other angles; (5) removal of scatter from the raw projections; (6) FDK reconstruction using the scatter-corrected projections. In addition to using the GPU to accelerate MC photon simulations, we also use a small number of photons and a down-sampled CT image in the simulation to further reduce computation time. A novel denoising algorithm is used to eliminate the MC noise, caused by the low photon numbers, from the simulated scatter images. The method is validated on one simulated head-and-neck case with 364 projection angles. Results: We have examined the variation of the scatter signal among projection angles using Fourier analysis. It is found that scatter images at 31 angles are sufficient to restore those at all angles with < 0.1% error. For the simulated patient case with a resolution of 512 × 512 × 100, we simulated 5 × 10^6 photons per angle. The total computation time is 20.52 seconds on an Nvidia GTX Titan GPU, and the time at each step is 2.53, 0.64, 14.78, 0.13, 0.19, and 2.25 seconds, respectively. The scatter-induced shading/cupping artifacts are substantially reduced, and the average HU error of a region-of-interest is reduced from 75.9 to 19.0 HU. Conclusion: A practical ultrafast MC-based CBCT scatter correction scheme is developed. It accomplishes the whole procedure of scatter correction and reconstruction within 30 seconds.
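Steps 4-5 of the pipeline above, interpolating the sparsely sampled scatter estimates to all view angles and subtracting them from the raw projections, can be sketched as follows. For brevity the scatter estimate is reduced to one scalar per angle, whereas the real method interpolates whole scatter images.

```python
import numpy as np

def correct_scatter(raw, all_angles, sparse_angles, sparse_scatter):
    """Interpolate per-angle scatter estimates (computed by MC at sparse
    view angles) to all projection angles, periodically over 360 degrees,
    then subtract them from the raw projections."""
    s = np.interp(all_angles, sparse_angles, sparse_scatter, period=360.0)
    return raw - s
```

The periodic interpolation matters because the gantry angle wraps around; `np.interp` handles this with its `period` argument.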
Development of an unstructured mesh based geometry model in the Serpent 2 Monte Carlo code
International Nuclear Information System (INIS)
This paper presents a new unstructured mesh based geometry type, developed in the Serpent 2 Monte Carlo code as a by-product of another study related to multi-physics applications and coupling to CFD codes. The new geometry type is intended for the modeling of complicated and irregular objects, which are not easily constructed using the conventional CSG based approach. The capability is put to the test by modeling the 'Stanford Critical Bunny' – a variation of a well-known 3D test case for methods used in the world of computer graphics. The results show that the geometry routine in Serpent 2 can handle the unstructured mesh, and that the use of delta-tracking results in a considerable reduction in the overall calculation time as the geometry is refined. The methodology is still very much under development, with the final goal of implementing a geometry routine capable of reading standardized geometry formats used by 3D design and imaging tools in industry and medical physics. (author)
Energy Technology Data Exchange (ETDEWEB)
Baba, Justin S [ORNL]; John, Dwayne O [ORNL]; Koju, Vijay [ORNL]
2015-01-01
The propagation of light in turbid media is an active area of research with relevance to numerous investigational fields, e.g., biomedical diagnostics and therapeutics. The statistical random-walk nature of photon propagation through turbid media is ideal for computation-based modeling and simulation. Ready access to supercomputing resources provides a means for attaining brute force solutions to stochastic light-matter interactions entailing scattering, by facilitating timely propagation of sufficient (>10 million) photons while tracking characteristic parameters based on the incorporated physics of the problem. One such model that works well for isotropic but fails for anisotropic scatter, which is the case for many biomedical sample scattering problems, is the diffusion approximation. In this report, we address this by utilizing Berry phase (BP) evolution as a means for capturing anisotropic scattering characteristics of samples in the preceding depth where the diffusion approximation fails. We extend the polarization-sensitive Monte Carlo method of Ramella-Roman et al. [1] to include the computationally intensive tracking of photon trajectory in addition to polarization state at every scattering event. To speed up the computations, which entail the appropriate rotations of reference frames, the code was parallelized using OpenMP. The results presented reveal that BP is strongly correlated to the photon penetration depth, thus potentiating the possibility of polarimetric depth-resolved characterization of highly scattering samples, e.g., biological tissues.
Saha, Krishnendu; Straus, Kenneth J; Chen, Yu; Glick, Stephen J
2014-08-28
To maximize sensitivity, it is desirable that ring Positron Emission Tomography (PET) systems dedicated to imaging the breast have a small bore. Unfortunately, due to parallax error, this causes substantial degradation in spatial resolution for objects near the periphery of the breast. In this work, a framework for computing and incorporating an accurate system matrix into iterative reconstruction is presented in an effort to reduce spatial resolution degradation towards the periphery of the breast. The GATE Monte Carlo simulation software was utilized to accurately model the system matrix for a breast PET system. A strategy for increasing the count statistics in the system matrix computation and for reducing the system-element storage space was used: only a subset of matrix elements was calculated, and the rest were estimated by exploiting the geometric symmetry of the cylindrical scanner. To implement this strategy, polar voxel basis functions were used to represent the object, resulting in a block-circulant system matrix. Simulation studies using a breast PET scanner model with ring geometry demonstrated improved contrast at a 45% reduced noise level and a 1.5- to 3-fold improvement in resolution performance when compared to MLEM reconstruction using a simple line-integral model. The GATE-based system matrix reconstruction technique promises to improve resolution and noise performance and to reduce image distortion at the FOV periphery compared to line-integral-based system matrix reconstruction.
The development of GPU-based parallel PRNG for Monte Carlo applications in CUDA Fortran
Directory of Open Access Journals (Sweden)
Kargaran, Hamed; Minuchehr, Abdolhamid; Zolfaghari, Ahmad
2016-04-01
The implementation of Monte Carlo simulation in CUDA Fortran requires fast random number generation with good statistical properties on the GPU. In this study, a GPU-based parallel pseudo-random number generator (GPPRNG) is proposed for use in high-performance computing systems. According to the type of GPU memory usage, the GPU scheme is divided into two work modes, GLOBAL_MODE and SHARED_MODE. To generate parallel random numbers based on the independent sequence method, a combination of the middle-square method and a chaotic map, along with the Xorshift PRNG, is employed. Implementation of the developed PPRNG on a single GPU showed speedups of 150x and 470x (with respect to the speed of the PRNG on a single CPU core) for GLOBAL_MODE and SHARED_MODE, respectively. To evaluate the accuracy of the developed GPPRNG, its performance was compared to that of some other commonly available PRNGs, such as those of MATLAB, FORTRAN, and the Miller-Park algorithm, through specific standard tests. The results of this comparison showed that the GPPRNG developed in this study can be used as a fast and accurate tool for computational science applications.
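One ingredient of the combined generator described above, the Xorshift PRNG, is simple enough to sketch on its own. The shift triple below is Marsaglia's classic 13/7/17 choice, an assumption since the abstract does not state the parameters, and the middle-square and chaotic-map components are omitted.

```python
def xorshift64(seed):
    """Marsaglia-style xorshift64 generator, yielding uniform floats in
    [0, 1). The state is kept to 64 bits with explicit masking, which a
    CUDA Fortran implementation would get for free from integer types."""
    state = seed & 0xFFFFFFFFFFFFFFFF
    assert state != 0, "xorshift requires a nonzero seed"
    while True:
        state ^= (state << 13) & 0xFFFFFFFFFFFFFFFF
        state ^= state >> 7
        state ^= (state << 17) & 0xFFFFFFFFFFFFFFFF
        yield state / 2**64        # scale 64-bit integer to [0, 1)
```

In the independent-sequence approach each GPU thread would run its own such generator from a distinct, well-separated seed.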
TH-E-BRE-08: GPU-Monte Carlo Based Fast IMRT Plan Optimization
Energy Technology Data Exchange (ETDEWEB)
Li, Y; Tian, Z; Shi, F; Jiang, S; Jia, X [The University of Texas Southwestern Medical Ctr, Dallas, TX (United States)]
2014-06-15
Purpose: Intensity-modulated radiation treatment (IMRT) plan optimization needs pre-calculated beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computation speed. However, inaccurate beamlet dose distributions, particularly in cases with high levels of inhomogeneity, may mislead the optimization, hindering the resulting plan quality. It is desirable to use Monte Carlo (MC) methods for beamlet dose calculations. Yet the long computational time of repeated dose calculations for a large number of beamlets prevents this application. Our objective is to integrate a GPU-based MC dose engine into lung IMRT optimization using a novel two-step workflow. Methods: A GPU-based MC code, gDPM, is used. Each particle is tagged with the index of the beamlet from which the source particle originates, and deposited doses are stored separately per beamlet based on this index. Due to the limited GPU memory size, a pyramid space is allocated for each beamlet, and dose outside that space is neglected. A two-step optimization workflow is proposed for fast MC-based optimization. In the first step, rough beamlet dose calculations are conducted with only a small number of particles per beamlet, and plan optimization follows to obtain an approximate fluence map. In the second step, more accurate beamlet doses are calculated, where the number of particles sampled for a beamlet is proportional to the intensity determined previously. A second round of optimization is conducted, yielding the final result. Results: For a lung case with 5317 beamlets, 10^5 particles per beamlet in the first round and 10^8 particles per beam in the second round are enough to obtain good plan quality. The total simulation time is 96.4 sec. Conclusion: A fast GPU-based MC dose calculation method along with a novel two-step optimization workflow has been developed. The high efficiency allows the use of MC for IMRT optimization.
Institute of Scientific and Technical Information of China (English)
Xu Xiao-Bo; Zhang He-Ming; Hu Hui-Yong; Ma Jian-Li; Xu Li-Jun
2011-01-01
The base-collector depletion capacitance for vertical SiGe npn heterojunction bipolar transistors (HBTs) on silicon-on-insulator (SOI) is split into vertical and lateral parts. This paper proposes a novel analytical depletion capacitance model of this structure for the first time. A large discrepancy is predicted when the present model is compared with the conventional depletion model: the capacitance decreases with increasing reverse collector-base bias and shows a kink as the reverse collector-base bias reaches the effective vertical punch-through voltage, which differs with the collector doping concentration, consistent with measurement results. The model can be employed for a fast evaluation of the depletion capacitance of an SOI SiGe HBT and has useful applications in the design and simulation of high-performance SiGe circuits and devices.
A GPU-based Monte Carlo dose calculation code for photon transport in a voxel phantom
International Nuclear Information System (INIS)
As the most accurate method to estimate absorbed dose in radiotherapy, the Monte Carlo method has been widely used in radiotherapy treatment planning. Nevertheless, its efficiency can be improved for clinical routine applications. In this paper, we present the CUBMC code, a GPU-based MC photon transport algorithm for dose calculation under the Compute Unified Device Architecture platform. The simulation of physical events is based on the algorithm used in Penelope, and the cross-section table used is the one generated by the Material routine, also present in the Penelope code. Photons are transported in voxel-based geometries with different compositions. To demonstrate the capabilities of the algorithm developed in the present work, four 128 x 128 x 128 voxel phantoms have been considered: the first composed of a homogeneous water-based medium, the second of bone, the third of lung, and the fourth of a heterogeneous bone-and-vacuum geometry. Simulations were done considering a 6 MeV monoenergetic photon point source. Two distinct approaches were used for transport simulation. The first forces the photon to stop at every voxel frontier; the second is the Woodcock method, in which a stop at the frontier is considered only when the material changes along the photon's travel line. Dose calculations using these methods are compared for validation with the Penelope and MCNP5 codes. Speed-up factors are compared using an NVidia GTX 560-Ti GPU card against a 2.27 GHz Intel Xeon CPU processor. (Author)
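The second transport mode mentioned above, the Woodcock (delta-tracking) method, can be sketched in one dimension: flights are sampled against a majorant attenuation coefficient, and a collision is accepted as real with probability mu(x)/mu_max, so voxel frontiers never need to be crossed explicitly. This is a generic illustration, not the CUBMC implementation.

```python
import numpy as np

def woodcock_free_path(x0, mu_of, mu_max, rng):
    """Sample the position of the next real collision by Woodcock tracking.
    `mu_of` maps position to the local attenuation coefficient, and mu_max
    must bound it everywhere (the majorant)."""
    x = x0
    while True:
        x += -np.log(rng.random()) / mu_max        # flight against the majorant
        if rng.random() < mu_of(x) / mu_max:       # real collision, not virtual
            return x
```

In a homogeneous medium the sampled free path should average 1/mu regardless of the majorant chosen, which makes a convenient sanity check.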
International Nuclear Information System (INIS)
This study aims to utilize a measurement-based Monte Carlo (MBMC) method to evaluate the accuracy of dose distributions calculated using the Eclipse radiotherapy treatment planning system (TPS) based on the anisotropic analytical algorithm. Dose distributions were calculated for the nasopharyngeal carcinoma (NPC) patients treated with the intensity modulated radiotherapy (IMRT). Ten NPC IMRT plans were evaluated by comparing their dose distributions with those obtained from the in-house MBMC programs for the same CT images and beam geometry. To reconstruct the fluence distribution of the IMRT field, an efficiency map was obtained by dividing the energy fluence of the intensity modulated field by that of the open field, both acquired from an aS1000 electronic portal imaging device. The integrated image of the non-gated mode was used to acquire the full dose distribution delivered during the IMRT treatment. This efficiency map redistributed the particle weightings of the open field phase-space file for IMRT applications. Dose differences were observed in the tumor and air cavity boundary. The mean difference between MBMC and TPS in terms of the planning target volume coverage was 0.6% (range: 0.0–2.3%). The mean difference for the conformity index was 0.01 (range: 0.0–0.01). In conclusion, the MBMC method serves as an independent IMRT dose verification tool in a clinical setting. - Highlights: ► The patient-based Monte Carlo method serves as a reference standard to verify IMRT doses. ► 3D Dose distributions for NPC patients have been verified by the Monte Carlo method. ► Doses predicted by the Monte Carlo method matched closely with those by the TPS. ► The Monte Carlo method predicted a higher mean dose to the middle ears than the TPS. ► Critical organ doses should be confirmed to avoid overdose to normal organs
DEFF Research Database (Denmark)
Klösgen, Beate; Bruun, Sara; Hansen, Søren;
The presence of a depletion layer of water along extended hydrophobic interfaces, and a possibly related formation of nanobubbles, is an ongoing discussion. The phenomenon was initially reported when we, years ago, chose thick films (~300-400 Å) of polystyrene as cushions between a crystalline ... with an AFM (2). The intuitive explanation for the depletion is based on "hydrophobic mismatch" between the obviously hydrophilic bulk phase of water next to the hydrophobic polymer. It would thus be an intrinsic property of all interfaces between non-matching materials. The detailed physical interaction path ...
Kudrolli, Haris A.
2001-04-01
A three dimensional (3D) reconstruction procedure for Positron Emission Tomography (PET) based on inverse Monte Carlo analysis is presented. PET is a medical imaging modality which employs a positron-emitting radio-tracer to give functional images of an organ's metabolic activity. This makes PET an invaluable tool in the detection of cancer and for in-vivo biochemical measurements. There are a number of analytical and iterative algorithms for image reconstruction of PET data. Analytical algorithms are computationally fast, but the assumptions intrinsic in the line-integral model limit their accuracy. Iterative algorithms can apply accurate models for reconstruction and give improvements in image quality, but at an increased computational cost. These algorithms require the explicit calculation of the system response matrix, which may not be easy to calculate. This matrix gives the probability that a photon emitted from a certain source element will be detected in a particular detector line of response. The "Three Dimensional Stochastic Sampling" (SS3D) procedure implements iterative algorithms in a manner that does not require the explicit calculation of the system response matrix. It uses Monte Carlo techniques to simulate the process of photon emission from a source distribution and interaction with the detector. This technique has the advantage of being able to model complex detector systems and also take into account the physics of gamma ray interaction within the source and detector systems, which leads to an accurate image estimate. A series of simulation studies was conducted to validate the method using the Maximum Likelihood - Expectation Maximization (ML-EM) algorithm. The accuracy of the reconstructed images was improved by using an algorithm that required a priori knowledge of the source distribution. Means to reduce the computational time for reconstruction were explored by using parallel processors and algorithms that had faster convergence rates.
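The ML-EM algorithm named above has a compact multiplicative update. The sketch below uses an explicit dense system matrix rather than the Monte Carlo sampled response of SS3D, purely to illustrate the iteration.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Standard ML-EM iteration for emission tomography:
        x <- x * A.T(y / (A x)) / (A.T 1)
    A has shape (lines-of-response, voxels), y holds measured counts."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])                 # sensitivity image A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)         # measured / estimated projections
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

With noiseless, consistent data the iteration converges to the exact activity distribution, which is the behavior exercised below.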
A global reaction route mapping-based kinetic Monte Carlo algorithm.
Mitchell, Izaac; Irle, Stephan; Page, Alister J
2016-07-14
We propose a new on-the-fly kinetic Monte Carlo (KMC) method that is based on exhaustive potential energy surface searching carried out with the global reaction route mapping (GRRM) algorithm. Starting from any given equilibrium state, this GRRM-KMC algorithm performs a one-step GRRM search to identify all surrounding transition states. Intrinsic reaction coordinate pathways are then calculated to identify potential subsequent equilibrium states. Harmonic transition state theory is used to calculate rate constants for all potential pathways, before a standard KMC accept/reject selection is performed. The selected pathway is then used to propagate the system forward in time, which is calculated on the basis of 1st order kinetics. The GRRM-KMC algorithm is validated here in two challenging contexts: intramolecular proton transfer in malonaldehyde and surface carbon diffusion on an iron nanoparticle. We demonstrate that in both cases the GRRM-KMC method is capable of reproducing the 1st order kinetics observed during independent quantum chemical molecular dynamics simulations using the density-functional tight-binding potential. PMID:27421395
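The KMC selection and time-propagation step described above, a rate-proportional pathway choice followed by an exponential time increment consistent with 1st-order kinetics, can be sketched as:

```python
import numpy as np

def kmc_step(rates, rng):
    """One standard KMC step: choose pathway i with probability
    k_i / k_tot, then advance time by dt = -ln(u) / k_tot. In GRRM-KMC
    the rates would come from harmonic transition state theory applied
    to the transition states found by the one-step GRRM search."""
    k_tot = float(np.sum(rates))
    i = int(np.searchsorted(np.cumsum(rates) / k_tot, rng.random()))
    dt = -np.log(rng.random()) / k_tot
    return i, dt
```

Over many steps the pathway frequencies and the mean waiting time recover the underlying first-order kinetics.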
Monte Carlo simulation of novel breast imaging modalities based on coherent x-ray scattering
Ghammraoui, Bahaa; Badal, Andreu
2014-07-01
We present upgraded versions of MC-GPU and penEasy_Imaging, two open-source Monte Carlo codes for the simulation of radiographic projections and CT, that have been extended and validated to account for the effect of molecular interference in coherent x-ray scatter. The codes were first validated by comparing simulated and measured energy-dispersive x-ray diffraction (EDXRD) spectra. A second validation was performed by evaluating the rejection factor of a focused anti-scatter grid. To exemplify the capabilities of the new codes, the modified MC-GPU code was used to examine the possibility of characterizing breast tissue composition and microcalcifications in a volume of interest inside a whole breast phantom using EDXRD, and to simulate a coherent scatter computed tomography (CSCT) system based on a first-generation CT acquisition geometry. It was confirmed that EDXRD and CSCT have the potential to characterize tissue composition inside a whole breast. The GPU-accelerated code was able to simulate, in just a few hours, a complete CSCT acquisition composed of 9758 independent pencil-beam projections. In summary, it has been shown that the presented software can be used for fast and accurate simulation of novel breast imaging modalities relying on scattering measurements and therefore can assist in the characterization and optimization of promising modalities currently under development.
IVF cycle cost estimation using Activity Based Costing and Monte Carlo simulation.
Cassettari, Lucia; Mosca, Marco; Mosca, Roberto; Rolando, Fabio; Costa, Mauro; Pisaturo, Valerio
2016-03-01
The authors present a new methodological approach, in a stochastic regime, to determine the actual costs of a healthcare process. The paper specifically shows the application of the methodology to the determination of the cost of an assisted reproductive technology (ART) treatment in Italy. The motivation for this research is that a deterministic regime is inadequate for an accurate estimate of the cost of this particular treatment: the durations of the different activities involved are not fixed and are described by means of frequency distributions. Hence the need to determine, in addition to the mean value of the cost, the interval within which it can be expected to vary with a known confidence level. Consequently, the cost obtained for each type of cycle investigated (in vitro fertilization and embryo transfer with or without intracytoplasmic sperm injection) shows tolerance intervals around the mean value sufficiently narrow as to make the data obtained statistically robust, and therefore usable also as a reference for any benchmark with other countries. From a methodological point of view, the approach was rigorous: both Activity Based Costing, to determine the cost of the individual activities of the process, and Monte Carlo simulation, with control of experimental error, to construct the tolerance intervals on the final result, were used. PMID:24752546
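The combination of Activity Based Costing with Monte Carlo simulation can be sketched as below. The triangular duration distributions and the activity figures are illustrative assumptions, not the paper's fitted frequency distributions or data.

```python
import numpy as np

def cycle_cost_mc(activities, n_runs=20000, conf=0.95, seed=0):
    """Each activity carries an hourly cost rate and a stochastic duration
    (triangular here, as a stand-in for empirical distributions). Returns
    the mean total cost and a central interval at the given level."""
    rng = np.random.default_rng(seed)
    total = np.zeros(n_runs)
    for rate, (lo, mode, hi) in activities:
        total += rate * rng.triangular(lo, mode, hi, size=n_runs)
    a = (1.0 - conf) / 2.0
    return total.mean(), (np.quantile(total, a), np.quantile(total, 1.0 - a))

# hypothetical activities: (cost rate per hour, (min, mode, max) duration in hours)
acts = [(120.0, (0.5, 1.0, 2.0)), (80.0, (1.0, 1.5, 3.0))]
```

The interval reported alongside the mean is exactly the kind of tolerance band around the cost that the abstract argues a deterministic costing cannot provide.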
Monte Carlo based time-domain Hspice noise simulation for CSA-CRRC circuit
International Nuclear Information System (INIS)
We present a time-domain Monte Carlo based Hspice noise simulation for a charge-sensitive preamplifier-CRRC (CSA-CRRC) circuit with random amplitude piecewise noise waveform. The amplitude distribution of thermal noise is modeled with Gaussian random number. For 1/f noise, its amplitude distribution is modeled with several low-pass filters with thermal noise generators. These time-domain noise sources are connected in parallel with the drain and source nodes of the CMOS input transistor of CSA. The Hspice simulation of the CSA-CRRC circuit with these noise sources yielded ENC values at the output node of the shaper for thermal and 1/f noise of 47e- and 732e-, respectively. ENC values calculated from the frequency-domain transfer function and its integration are 44e- and 882e-, respectively. The values for Hspice simulation are similar to those for frequency-domain calculation. A test chip was designed and fabricated for this study. The measured ENC value was 904 e-. This study shows that the time-domain noise modeling is valid and the transient Hspice noise simulation can be an effective tool for low-noise circuit design
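The time-domain noise sources described above can be sketched as follows; the Gaussian-amplitude thermal source and the 1/f approximation via summed low-pass-filtered white generators follow the abstract, but the filter count and decade spacing here are illustrative assumptions:

```python
import random

def thermal_noise_samples(n, sigma, rng=random):
    """White (thermal) noise: i.i.d. Gaussian amplitudes per time step."""
    return [rng.gauss(0.0, sigma) for _ in range(n)]

def pink_noise_samples(n, sigma, n_filters=5, rng=random):
    """Approximate 1/f noise by summing the outputs of several one-pole
    low-pass filters driven by independent thermal (white) generators,
    with corner frequencies spaced one decade apart."""
    states = [0.0] * n_filters
    alphas = [10.0 ** (-k) for k in range(n_filters)]  # decade-spaced poles
    out = []
    for _ in range(n):
        total = 0.0
        for k in range(n_filters):
            # One-pole low-pass update: y += a * (x - y)
            states[k] += alphas[k] * (rng.gauss(0.0, sigma) - states[k])
            total += states[k]
        out.append(total)
    return out
```

In the circuit simulation these piecewise waveforms are attached as sources at the drain and source nodes of the CSA input transistor; here they are just generated as sample sequences.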
Monte Carlo efficiency calibration of a neutron generator-based total-body irradiator
International Nuclear Information System (INIS)
Many body composition measurement systems are calibrated against a single-sized reference phantom. Prompt-gamma neutron activation (PGNA) provides the only direct measure of total body nitrogen (TBN), an index of the body's lean tissue mass. In PGNA systems, body size influences neutron flux attenuation, induced gamma signal distribution, and counting efficiency. Thus, calibration based on a single-sized phantom could result in inaccurate TBN values. We used Monte Carlo simulations (MCNP-5; Los Alamos National Laboratory) in order to map a system's response to the range of body weights (65-160 kg) and body fat distributions (25-60%) in obese humans. Calibration curves were constructed to derive body-size correction factors relative to a standard reference phantom, providing customized adjustments to account for differences in body habitus of obese adults. The use of MCNP-generated calibration curves should allow for a better estimate of the true changes in lean tissue mass that may occur during intervention programs focused only on weight loss. (author)
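Applying such a calibration curve amounts to interpolating a body-size correction factor between the MC-simulated phantoms; a minimal sketch (the calibration points used below are hypothetical, not the system's actual data):

```python
def body_size_correction(weight_kg, calibration):
    """Linearly interpolate an MC-generated calibration curve.

    calibration: (weight_kg, counting_efficiency_relative_to_reference) pairs.
    Returns the multiplicative correction factor applied to measured counts.
    """
    pts = sorted(calibration)
    if weight_kg <= pts[0][0]:
        return 1.0 / pts[0][1]
    if weight_kg >= pts[-1][0]:
        return 1.0 / pts[-1][1]
    for (w0, e0), (w1, e1) in zip(pts, pts[1:]):
        if w0 <= weight_kg <= w1:
            frac = (weight_kg - w0) / (w1 - w0)
            eff = e0 + frac * (e1 - e0)
            return 1.0 / eff
    return 1.0 / pts[-1][1]  # fallback; unreachable with sorted input
```

A subject whose habitus attenuates the signal (efficiency below 1 relative to the reference phantom) gets a correction factor above 1, recovering an unbiased TBN estimate.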
Monte Carlo vs. Pencil Beam based optimization of stereotactic lung IMRT
International Nuclear Information System (INIS)
The purpose of the present study is to compare finite size pencil beam (fsPB) and Monte Carlo (MC) based optimization of lung intensity-modulated stereotactic radiotherapy (lung IMSRT). A fsPB and a MC algorithm as implemented in a biological IMRT planning system were validated by film measurements in a static lung phantom. Then, they were applied for static lung IMSRT planning based on three different geometrical patient models (one phase static CT, density overwrite one phase static CT, average CT) of the same patient. Both 6 and 15 MV beam energies were used. The resulting treatment plans were compared by how well they fulfilled the prescribed optimization constraints both for the dose distributions calculated on the static patient models and for the accumulated dose, recalculated with MC on each of 8 CTs of a 4DCT set. In the phantom measurements, the MC dose engine showed discrepancies < 2%, while the fsPB dose engine showed discrepancies of up to 8% in the presence of lateral electron disequilibrium in the target. In the patient plan optimization, this translates into violations of organ at risk constraints and unpredictable target doses for the fsPB optimized plans. For the 4D MC recalculated dose distribution, MC optimized plans always underestimate the target doses, but the organ at risk doses were comparable. The results depend on the static patient model, and the smallest discrepancy was found for the MC optimized plan on the density overwrite one phase static CT model. It is feasible to employ the MC dose engine for optimization of lung IMSRT and the plans are superior to fsPB. Use of static patient models introduces a bias in the MC dose distribution compared to the 4D MC recalculated dose, but this bias is predictable and therefore MC based optimization on static patient models is considered safe
Chi, Yujie; Tian, Zhen; Jia, Xun
2016-08-01
Monte Carlo (MC) particle transport simulation on a graphics-processing unit (GPU) platform has been extensively studied recently due to the efficiency advantage achieved via massive parallelization. Almost all of the existing GPU-based MC packages were developed for voxelized geometry. This limited application scope of these packages. The purpose of this paper is to develop a module to model parametric geometry and integrate it in GPU-based MC simulations. In our module, each continuous region was defined by its bounding surfaces that were parameterized by quadratic functions. Particle navigation functions in this geometry were developed. The module was incorporated to two previously developed GPU-based MC packages and was tested in two example problems: (1) low energy photon transport simulation in a brachytherapy case with a shielded cylinder applicator and (2) MeV coupled photon/electron transport simulation in a phantom containing several inserts of different shapes. In both cases, the calculated dose distributions agreed well with those calculated in the corresponding voxelized geometry. The averaged dose differences were 1.03% and 0.29%, respectively. We also used the developed package to perform simulations of a Varian VS 2000 brachytherapy source and generated a phase-space file. The computation time under the parameterized geometry depended on the memory location storing the geometry data. When the data was stored in GPU's shared memory, the highest computational speed was achieved. Incorporation of parameterized geometry yielded a computation time that was ~3 times of that in the corresponding voxelized geometry. We also developed a strategy to use an auxiliary index array to reduce frequency of geometry calculations and hence improve efficiency. With this strategy, the computational time ranged in 1.75-2.03 times of the voxelized geometry for coupled photon/electron transport depending on the voxel dimension of the auxiliary index array, and in 0
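The basic navigation primitive in such parameterized geometry is the distance from a particle position to a quadratic bounding surface along its direction of flight; a CPU-side sketch (assuming a symmetric quadric matrix; the GPU implementation is necessarily more involved):

```python
import math

def distance_to_quadric(pos, direction, quadric):
    """Smallest positive distance from pos along direction to the surface
    x^T A x + b.x + c = 0, with quadric = (A (symmetric 3x3), b (3,), c).

    Returns math.inf when the ray never reaches the surface.
    """
    A, b, c = quadric
    def dot(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))
    def matvec(M, v):
        return [dot(row, v) for row in M]
    Ad = matvec(A, direction)
    Ap = matvec(A, pos)
    # Substituting p + t*d into the surface equation gives a*t^2 + bt*t + ct = 0.
    a = dot(direction, Ad)
    bt = 2.0 * dot(pos, Ad) + dot(b, direction)
    ct = dot(pos, Ap) + dot(b, pos) + c
    eps = 1e-12
    if abs(a) < eps:  # equation degenerates to linear in t
        if abs(bt) < eps:
            return math.inf
        t = -ct / bt
        return t if t > eps else math.inf
    disc = bt * bt - 4.0 * a * ct
    if disc < 0.0:
        return math.inf
    sq = math.sqrt(disc)
    for t in sorted([(-bt - sq) / (2.0 * a), (-bt + sq) / (2.0 * a)]):
        if t > eps:
            return t
    return math.inf
```

During transport, the minimum such distance over a region's bounding surfaces is compared with the sampled distance to the next physics interaction to decide whether the particle crosses into a neighboring region.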
Eutrophication of mangroves linked to depletion of foliar and soil base cations.
Fauzi, Anas; Skidmore, Andrew K; Heitkönig, Ignas M A; van Gils, Hein; Schlerf, Martin
2014-12-01
There is growing concern that increasing eutrophication causes degradation of coastal ecosystems. Studies in terrestrial ecosystems have shown that increasing the concentration of nitrogen in soils contributes to the acidification process, which leads to leaching of base cations. To test the effects of eutrophication on the availability of base cations in mangroves, we compared paired leaf and soil nutrient levels sampled in Nypa fruticans and Rhizophora spp. on a severely disturbed, i.e. nutrient loaded, site (Mahakam delta) with samples from an undisturbed, near-pristine site (Berau delta) in East Kalimantan, Indonesia. The findings indicate that under pristine conditions, the availability of base cations in mangrove soils is determined largely by salinity. Anthropogenic disturbances on the Mahakam site have resulted in eutrophication, which is related to lower levels of foliar and soil base cations. Path analysis suggests that increasing soil nitrogen reduces soil pH, which in turn reduces the levels of foliar and soil base cations in mangroves.
Directory of Open Access Journals (Sweden)
Vahid Moslemi
2011-03-01
Introduction: In brachytherapy, radioactive sources are placed close to the tumor; therefore, small changes in their positions can cause large changes in the dose distribution. This emphasizes the need for computerized treatment planning. The usual method for treatment planning of cervix brachytherapy uses conventional radiographs in the Manchester system. Nowadays, because of their advantages in locating the source positions and the surrounding tissues, CT and MRI images are replacing conventional radiographs. In this study, we used CT images in Monte Carlo based dose calculation for brachytherapy treatment planning, using an interface software to create the geometry file required in the MCNP code. The aim of using the interface software is to facilitate and speed up the geometry set-up for simulations based on the patient's anatomy. This paper examines the feasibility of this method in cervix brachytherapy and assesses its accuracy and speed. Material and Methods: For dosimetric measurements regarding the treatment plan, a pelvic phantom was made from polyethylene in which the treatment applicators could be placed. For simulations using CT images, the phantom was scanned at 120 kVp. Using an interface software written in MATLAB, the CT images were converted into an MCNP input file and the simulation was then performed. Results: Using the interface software, preparation time for the simulations of the applicator and surrounding structures was approximately 3 minutes; the corresponding time needed in the conventional MCNP geometry entry being approximately 1 hour. The discrepancy in the simulated and measured doses to point A was 1.7% of the prescribed dose. The corresponding dose differences between the two methods in rectum and bladder were 3.0% and 3.7% of the prescribed dose, respectively. Comparing the results of simulation using the interface software with those of simulation using the standard MCNP geometry entry showed a less than 1
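The interface step — mapping CT numbers to materials and emitting an MCNP-style lattice fill — can be sketched as follows; the HU thresholds and the FILL-card layout here are simplified illustrations, not the actual MATLAB interface software:

```python
def hu_to_material(hu):
    """Map a CT number (Hounsfield units) to a material universe index
    via simple thresholds (illustrative cut-points, not a clinical
    calibration)."""
    if hu < -400:
        return 1  # air
    if hu < 100:
        return 2  # soft tissue / polyethylene-like
    return 3      # bone or applicator

def ct_to_mcnp_lattice(ct_slice):
    """Convert a 2D CT slice (HU values) into an MCNP-style lattice FILL
    entry listing one universe number per voxel, row-major."""
    fills = [str(hu_to_material(hu)) for row in ct_slice for hu in row]
    return "FILL " + " ".join(fills)
```

Automating this conversion is what shrinks the geometry set-up from roughly an hour of manual MCNP entry to a few minutes, as reported above.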
Li, Yongbao; Tian, Zhen; Shi, Feng; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun
2015-04-01
Intensity-modulated radiation treatment (IMRT) plan optimization needs beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computational speed. However, inaccurate beamlet dose distributions may mislead the optimization process and hinder the resulting plan quality. To solve this problem, the Monte Carlo (MC) simulation method has been used to compute all beamlet doses prior to the optimization step. The conventional approach samples the same number of particles from each beamlet. Yet this is not the optimal use of MC in this problem. In fact, there are beamlets that have very small intensities after solving the plan optimization problem. For those beamlets, it may be possible to use fewer particles in dose calculations to increase efficiency. Based on this idea, we have developed a new MC-based IMRT plan optimization framework that iteratively performs MC dose calculation and plan optimization. At each dose calculation step, the particle numbers for beamlets were adjusted based on the beamlet intensities obtained through solving the plan optimization problem in the last iteration step. We modified a GPU-based MC dose engine to allow simultaneous computations of a large number of beamlet doses. To test the accuracy of our modified dose engine, we compared the dose from a broad beam and the summed beamlet doses in this beam in an inhomogeneous phantom. Agreement within 1% for the maximum difference and 0.55% for the average difference was observed. We then validated the proposed MC-based optimization schemes in one lung IMRT case. It was found that the conventional scheme required 10^6 particles from each beamlet to achieve an optimization result that was within 3% in fluence map and 1% in dose of the ground truth. In contrast, the proposed scheme achieved the same level of accuracy with on average 1.2 × 10^5 particles per beamlet. Correspondingly, the computation time
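The key idea — reallocating the MC particle budget in proportion to the beamlet intensities from the previous optimization iteration — can be sketched as follows (the floor fraction that keeps low-weight beamlets minimally sampled is an illustrative assumption):

```python
def allocate_particles(intensities, total_particles, floor_fraction=0.01):
    """Distribute an MC particle budget across beamlets in proportion to
    their current optimized intensities, with a small per-beamlet floor so
    that low-weight beamlets still receive some sampling.
    """
    n = len(intensities)
    total_intensity = sum(intensities)
    floor = int(floor_fraction * total_particles / n)
    counts = []
    for w in intensities:
        share = w / total_intensity if total_intensity > 0 else 1.0 / n
        counts.append(max(floor, int(share * total_particles)))
    return counts
```

Beamlets that the optimizer has effectively switched off get only the floor allocation, which is where the roughly order-of-magnitude particle saving reported above comes from.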
A GPU-accelerated and Monte Carlo-based intensity modulated proton therapy optimization system
International Nuclear Information System (INIS)
Purpose: Conventional spot scanning intensity modulated proton therapy (IMPT) treatment planning systems (TPSs) optimize proton spot weights based on analytical dose calculations. These analytical dose calculations have been shown to have severe limitations in heterogeneous materials. Monte Carlo (MC) methods do not have these limitations; however, MC-based systems have been of limited clinical use due to the large number of beam spots in IMPT and the extremely long calculation time of traditional MC techniques. In this work, the authors present a clinically applicable IMPT TPS that utilizes a very fast MC calculation. Methods: An in-house graphics processing unit (GPU)-based MC dose calculation engine was employed to generate the dose influence map for each proton spot. With the MC generated influence map, a modified least-squares optimization method was used to achieve the desired dose volume histograms (DVHs). The intrinsic CT image resolution was adopted for voxelization in simulation and optimization to preserve spatial resolution. The optimizations were computed on a multi-GPU framework to mitigate the memory limitation issues for the large dose influence maps that resulted from maintaining the intrinsic CT resolution. The effects of tail cutoff and starting condition were studied and minimized in this work. Results: For relatively large and complex three-field head and neck cases, i.e., >100 000 spots with a target volume of ∼1000 cm3 and multiple surrounding critical structures, the optimization together with the initial MC dose influence map calculation was done in a clinically viable time frame (less than 30 min) on a GPU cluster consisting of 24 Nvidia GeForce GTX Titan cards. The in-house MC TPS plans were comparable to a commercial TPS plans based on DVH comparisons. Conclusions: A MC-based treatment planning system was developed. The treatment planning can be performed in a clinically viable time frame on a hardware system costing around 45 000
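The modified least-squares optimization can be illustrated as projected gradient descent on nonnegative spot weights (a minimal sketch under that assumption; the clinical system's objective with DVH constraints and its multi-GPU implementation are far more elaborate):

```python
def optimize_spot_weights(influence, prescription, n_iter=500):
    """Projected-gradient least squares for proton spot weights.

    influence[i][j]: dose to voxel i per unit weight of spot j.
    Minimizes ||D w - p||^2 subject to w >= 0.
    """
    n_spot = len(influence[0])
    w = [0.0] * n_spot
    # Safe step size: the gradient's Lipschitz constant is bounded by
    # 2 * ||D||_F^2.
    frob2 = sum(v * v for row in influence for v in row)
    step = 1.0 / (2.0 * frob2)
    for _ in range(n_iter):
        residual = [sum(d * wj for d, wj in zip(row, w)) - p
                    for row, p in zip(influence, prescription)]
        grad = [2.0 * sum(influence[i][j] * residual[i]
                          for i in range(len(influence)))
                for j in range(n_spot)]
        # Gradient step followed by projection onto w >= 0.
        w = [max(0.0, wj - step * g) for wj, g in zip(w, grad)]
    return w
```

The MC-generated dose influence map plays the role of the matrix D here; at clinical scale (>100 000 spots, intrinsic CT resolution) the same iteration is carried out on GPUs.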
A GPU-accelerated and Monte Carlo-based intensity modulated proton therapy optimization system
Energy Technology Data Exchange (ETDEWEB)
Ma, Jiasen, E-mail: ma.jiasen@mayo.edu; Beltran, Chris; Seum Wan Chan Tseung, Hok; Herman, Michael G. [Department of Radiation Oncology, Division of Medical Physics, Mayo Clinic, 200 First Street Southwest, Rochester, Minnesota 55905 (United States)
2014-12-15
Purpose: Conventional spot scanning intensity modulated proton therapy (IMPT) treatment planning systems (TPSs) optimize proton spot weights based on analytical dose calculations. These analytical dose calculations have been shown to have severe limitations in heterogeneous materials. Monte Carlo (MC) methods do not have these limitations; however, MC-based systems have been of limited clinical use due to the large number of beam spots in IMPT and the extremely long calculation time of traditional MC techniques. In this work, the authors present a clinically applicable IMPT TPS that utilizes a very fast MC calculation. Methods: An in-house graphics processing unit (GPU)-based MC dose calculation engine was employed to generate the dose influence map for each proton spot. With the MC generated influence map, a modified least-squares optimization method was used to achieve the desired dose volume histograms (DVHs). The intrinsic CT image resolution was adopted for voxelization in simulation and optimization to preserve spatial resolution. The optimizations were computed on a multi-GPU framework to mitigate the memory limitation issues for the large dose influence maps that resulted from maintaining the intrinsic CT resolution. The effects of tail cutoff and starting condition were studied and minimized in this work. Results: For relatively large and complex three-field head and neck cases, i.e., >100 000 spots with a target volume of ∼1000 cm^3 and multiple surrounding critical structures, the optimization together with the initial MC dose influence map calculation was done in a clinically viable time frame (less than 30 min) on a GPU cluster consisting of 24 Nvidia GeForce GTX Titan cards. The in-house MC TPS plans were comparable to commercial TPS plans based on DVH comparisons. Conclusions: A MC-based treatment planning system was developed. The treatment planning can be performed in a clinically viable time frame on a hardware system costing around 45
Monte-Carlo simulation of an ultra small-angle neutron scattering instrument based on Soller slits
Energy Technology Data Exchange (ETDEWEB)
Rieker, T. [Univ. of New Mexico, Albuquerque, NM (United States); Hubbard, P. [Sandia National Labs., Albuquerque, NM (United States)
1997-09-01
Monte Carlo simulations are used to investigate an ultra small-angle neutron scattering instrument for use at a pulsed source based on a Soller slit collimator and analyzer. The simulations show that for a q_min of ~1e-4 Å^-1 (15 Å neutrons) a few tenths of a percent of the incident flux is transmitted through both collimators at q=0.
GPU-based fast Monte Carlo dose calculation for proton therapy
Jia, Xun; Schümann, Jan; Paganetti, Harald; Jiang, Steve B.
2012-12-01
Accurate radiation dose calculation is essential for successful proton radiotherapy. Monte Carlo (MC) simulation is considered to be the most accurate method. However, the long computation time limits it from routine clinical applications. Recently, graphics processing units (GPUs) have been widely used to accelerate computationally intensive tasks in radiotherapy. We have developed a fast MC dose calculation package, gPMC, for proton dose calculation on a GPU. In gPMC, proton transport is modeled by the class II condensed history simulation scheme with a continuous slowing down approximation. Ionization, elastic and inelastic proton nucleus interactions are considered. Energy straggling and multiple scattering are modeled. Secondary electrons are not transported and their energies are locally deposited. After an inelastic nuclear interaction event, a variety of products are generated using an empirical model. Among them, charged nuclear fragments are terminated with energy locally deposited. Secondary protons are stored in a stack and transported after finishing transport of the primary protons, while secondary neutral particles are neglected. gPMC is implemented on the GPU under the CUDA platform. We have validated gPMC using the TOPAS/Geant4 MC code as the gold standard. For various cases including homogeneous and inhomogeneous phantoms as well as a patient case, good agreements between gPMC and TOPAS/Geant4 are observed. The gamma passing rate for the 2%/2 mm criterion is over 98.7% in the region with dose greater than 10% maximum dose in all cases, excluding low-density air regions. With gPMC it takes only 6-22 s to simulate 10 million source protons to achieve ˜1% relative statistical uncertainty, depending on the phantoms and energy. This is an extremely high efficiency compared to the computational time of tens of CPU hours for TOPAS/Geant4. Our fast GPU-based code can thus facilitate the routine use of MC dose calculation in proton therapy.
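The 2%/2 mm gamma criterion used to validate gPMC against TOPAS/Geant4 can be sketched in one dimension as follows (a global gamma analysis on equally spaced points; the actual comparison is performed on 3D dose grids):

```python
def gamma_pass_rate(dose_eval, dose_ref, spacing_mm,
                    dose_crit=0.02, dist_crit_mm=2.0):
    """1D global gamma analysis (2%/2 mm by default) on equally spaced
    dose points. Returns the fraction of evaluated points with gamma <= 1.
    """
    d_max = max(dose_ref)  # global normalization for the dose criterion
    n_pass = 0
    for i, de in enumerate(dose_eval):
        best = float("inf")
        for j, dr in enumerate(dose_ref):
            dd = (de - dr) / (dose_crit * d_max)
            dx = (i - j) * spacing_mm / dist_crit_mm
            best = min(best, dd * dd + dx * dx)
        if best <= 1.0:
            n_pass += 1
    return n_pass / len(dose_eval)
```

In practice the search over reference points is restricted to a neighborhood of a few distance criteria, and points below a low-dose threshold (such as the 10%-of-maximum cut mentioned above) are excluded.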
Cell death following BNCT: a theoretical approach based on Monte Carlo simulations.
Ballarini, F; Bakeine, J; Bortolussi, S; Bruschi, P; Cansolino, L; Clerici, A M; Ferrari, C; Protti, N; Stella, S; Zonta, A; Zonta, C; Altieri, S
2011-12-01
In parallel to boron measurements and animal studies, investigations on radiation-induced cell death are also in progress in Pavia, with the aim of better characterisation of the effects of a BNCT treatment down to the cellular level. Such studies are being carried out not only experimentally but also theoretically, based on a mechanistic model and a Monte Carlo code. The model assumes that: (1) only clustered DNA strand breaks can lead to chromosome aberrations; (2) only chromosome fragments within a certain threshold distance can undergo misrejoining; (3) the so-called "lethal aberrations" (dicentrics, rings and large deletions) lead to cell death. After applying the model to normal cells exposed to monochromatic fields of different radiation types, the irradiation section of the code was purposely extended to mimic cell exposure to the mixed radiation field produced by the (10)B(n,α)(7)Li reaction, which gives rise to alpha particles and Li ions of short range and high biological effectiveness, and by the (14)N(n,p)(14)C reaction, which produces 0.58 MeV protons. Very good agreement between model predictions and literature data was found for human and animal cells exposed to X- or gamma-rays, protons and alpha particles, thus allowing the model to be validated for cell death induced by monochromatic radiation fields. The model predictions also showed good agreement with experimental data obtained by our group by exposing DHD cells to thermal neutrons in the TRIGA Mark II reactor of the University of Pavia; this allowed the model to be validated also for a BNCT exposure scenario, providing a useful predictive tool to bridge the gap between irradiation and cell death. PMID: 21481595
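Under the assumption that lethal aberrations per cell are Poisson-distributed and that a single one is sufficient for cell death, the surviving fraction is simply P(N = 0) = exp(-λ); a sketch of that relation with a Monte Carlo cross-check (the Poisson assumption is standard in aberration-based survival models, not a detail given in this abstract):

```python
import math
import random

def survival_fraction(mean_lethal):
    """Lethal-aberration model: a cell dies if it carries at least one
    lethal aberration (dicentric, ring or large deletion), so with a
    Poisson-distributed count the surviving fraction is exp(-lambda)."""
    return math.exp(-mean_lethal)

def poisson_sample(lam, rng=random):
    """Knuth's multiplicative Poisson sampler."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def survival_fraction_mc(mean_lethal, n_cells=100000, rng=random):
    """Monte Carlo estimate of the same quantity, for cross-checking."""
    survivors = sum(1 for _ in range(n_cells)
                    if poisson_sample(mean_lethal, rng) == 0)
    return survivors / n_cells
```

For a mixed BNCT field, λ is the sum of the mean lethal-aberration yields contributed by each radiation component (boron capture products, nitrogen capture protons, photons).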
Ding, George X.; Duggan, Dennis M.; Coffey, Charles W.; Shokrani, Parvaneh; Cygler, Joanna E.
2006-06-01
The purpose of this study is to present our experience of commissioning, testing and use of the first commercial macro Monte Carlo based dose calculation algorithm for electron beam treatment planning, and to investigate new issues regarding dose reporting (dose-to-water versus dose-to-medium) as well as the statistical uncertainties that arise in patient dose calculations when Monte Carlo based systems are used. All phantoms studied were obtained by CT scan. The calculated dose distributions and monitor units were validated against measurements with film and ionization chambers in phantoms containing two-dimensional (2D) and three-dimensional (3D) type low- and high-density inhomogeneities at different source-to-surface distances. Beam energies ranged from 6 to 18 MeV. New required experimental input data for commissioning are presented. The result of validation shows an excellent agreement between calculated and measured dose distributions. The calculated monitor units were within 2% of measured values except in the case of a 6 MeV beam and small cutout fields at extended SSDs (>110 cm). The investigation of the new issue of dose reporting demonstrates differences of up to 4% for lung and 12% for bone when 'dose-to-medium' is calculated and reported instead of 'dose-to-water' as done in a conventional system. The accuracy of the Monte Carlo calculation is shown to be clinically acceptable even for very complex 3D-type inhomogeneities. As Monte Carlo based treatment planning systems begin to enter clinical practice, new issues, such as dose reporting and statistical variations, may be clinically significant. Therefore it is imperative that a consistent approach to dose reporting is used.
Monte Carlo-based QA for IMRT of head and neck cancers
Tang, F.; Sham, J.; Ma, C.-M.; Li, J.-S.
2007-06-01
It is well-known that the presence of a large air cavity in a dense medium (or patient) introduces significant electronic disequilibrium when irradiated with a megavoltage X-ray field. This condition may be worsened by the possible use of tiny beamlets in intensity-modulated radiation therapy (IMRT). Commercial treatment planning systems (TPSs), in particular those based on the pencil-beam method, do not provide accurate dose computation for the lungs and other cavity-laden body sites such as the head and neck. In this paper we present the use of the Monte Carlo (MC) technique for dose re-calculation of IMRT of head and neck cancers. In our clinic, a turn-key software system is set up for MC calculation and comparison with TPS-calculated treatment plans as part of the quality assurance (QA) programme for IMRT delivery. A set of 10 off-the-shelf PCs is employed as the MC calculation engine, with treatment plan parameters imported from the TPS via a graphical user interface (GUI) which also provides a platform for launching remote MC simulation and subsequent dose comparison with the TPS. The TPS-segmented intensity maps are used as input for the simulation, hence skipping the time-consuming simulation of the multi-leaf collimator (MLC). The primary objective of this approach is to assess the accuracy of the TPS calculations in the presence of air cavities in the head and neck, whereas the accuracy of leaf segmentation is verified by fluence measurement using a fluoroscopic camera-based imaging device. This measurement can also validate the correct transfer of intensity maps to the record and verify system. Comparisons between TPS and MC calculations of 6 MV IMRT for typical head and neck treatments reveal regional consistency in dose distribution except at and around the sinuses, where our pencil-beam-based TPS sometimes over-predicts the dose by up to 10%, depending on the size of the cavities. In addition, dose re-buildup of up to 4% is observed at the posterior nasopharyngeal
Monte Carlo-based multiphysics coupling analysis of x-ray pulsar telescope
Li, Liansheng; Deng, Loulou; Mei, Zhiwu; Zuo, Fuchang; Zhou, Hao
2015-10-01
The X-ray pulsar telescope (XPT) is a complex optical payload involving optical, mechanical, electrical and thermal disciplines. Multiphysics coupling analysis (MCA) plays an important role in improving its in-orbit performance. However, conventional MCA methods encounter two serious problems when dealing with the XPT. One is that neither the energy nor the reflectivity information of the X-rays can be taken into consideration, which misrepresents the essential physics of the XPT. The other is that coupling data cannot be transferred automatically among the different disciplines, leading to computational inefficiency and high design cost. Therefore, a new MCA method for the XPT is proposed based on the Monte Carlo method and total-reflection theory. The main idea, procedures and operational steps of the proposed method are addressed in detail. First, the method takes both the energy and the reflectivity information of the X-rays into consideration simultaneously, and formulates the thermal-structural coupling equation and the multiphysics coupling analysis model based on the finite element method; thermal-structural coupling analyses under different working conditions are then carried out. Second, the mirror deformations are obtained using a construction geometry function; a polynomial function is fitted to the deformed mirror and the fitting error is evaluated. Third, the focusing performance of the XPT is evaluated by the RMS of the dispersion spot. Finally, a Wolter-I XPT is taken as an example to verify the proposed MCA method. The simulation results show that the thermal-structural coupling deformation is larger than the others, and the law governing the effect of each deformation on the focusing performance is obtained: the focusing performances under thermal-structural, thermal and structural deformations degraded by 30.01%, 14.35% and 7.85%, respectively, with dispersion-spot RMS values of 2.9143 mm, 2.2038 mm and 2.1311 mm. As a result, the validity of the proposed method is verified through
The information-based complexity of approximation problem by adaptive Monte Carlo methods
Institute of Scientific and Technical Information of China (English)
2008-01-01
In this paper, we study the information-based complexity of the approximation problem on the multivariate Sobolev space with bounded mixed derivative MW^r_{p,α}(T^d), 1 < p < ∞, in the norm of L_q(T^d), 1 < q < ∞, by adaptive Monte Carlo methods. Applying the discretization technique and some properties of the pseudo-s-scale, we determine the exact asymptotic orders of this problem.
International Nuclear Information System (INIS)
Geometry navigation plays the most fundamental role in Monte Carlo particle transport simulation. It is mainly responsible for locating the geometry volume in which a particle resides and for computing the distance to the volume boundary along the particle trajectory during each particle history. Geometry navigation directly affects the run-time performance of Monte Carlo particle transport simulation, especially for large-scale complicated systems. Two geometry acceleration algorithms, an automatic neighbor-search algorithm and an oriented-bounding-box algorithm, are presented for improving geometry navigation performance. The algorithms have been implemented in the Super Monte Carlo Calculation Program for Nuclear and Radiation Process (SuperMC) version 2.0. The FDS-II and ITER benchmark models have been tested to highlight the efficiency gains that can be achieved by using the acceleration algorithms. The exact gains may be problem dependent, but testing results showed that the runtime of a Monte Carlo simulation can be reduced considerably, by 50%-60%, with the proposed acceleration algorithms. (author)
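A bounding-box pre-test is the heart of the second acceleration idea above: before computing the exact distance to a volume's bounding surfaces, a cheap ray-box test can rule the volume out entirely. The sketch below is a generic axis-aligned slab test in Python, not SuperMC's actual implementation:

```python
import math

def ray_aabb_distance(origin, direction, box_min, box_max):
    """Slab test: distance along a ray to an axis-aligned bounding box.

    Returns None when the ray misses the box, in which case the expensive
    exact boundary computation for the enclosed volume can be skipped.
    """
    t_near, t_far = -math.inf, math.inf
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:              # ray parallel to this pair of slabs
            if o < lo or o > hi:
                return None
        else:
            t1, t2 = (lo - o) / d, (hi - o) / d
            t_near = max(t_near, min(t1, t2))
            t_far = min(t_far, max(t1, t2))
            if t_near > t_far:          # slab intervals do not overlap
                return None
    return max(t_near, 0.0) if t_far >= 0.0 else None
```

A navigator would run this test against each candidate volume (or an oriented variant of it, after transforming the ray into the box frame) and fall through to the exact surface equations only when a finite distance is returned.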
van der Graaf, E. R.; Limburg, J.; Koomans, R. L.; Tijs, M.
2011-01-01
The calibration of scintillation detectors for gamma radiation in a well-characterized setup can be transferred to other geometries by using Monte Carlo simulations to account for the differences between the calibration geometry and the other geometry. In this study a calibration facility was used that is const
Energy Technology Data Exchange (ETDEWEB)
Sihler, Holger [Institute of Environmental Physics, University of Heidelberg (Germany); Max-Planck-Institute for Chemistry, Mainz (Germany); Friess, Udo; Platt, Ulrich [Institute of Environmental Physics, University of Heidelberg (Germany); Wagner, Thomas [Max-Planck-Institute for Chemistry, Mainz (Germany)
2010-07-01
Bromine monoxide (BrO) radicals are known to play an important role in the chemistry of the springtime polar troposphere. Their release by halogen activation processes leads to the almost complete destruction of near-surface ozone during ozone depletion events (ODEs). In order to improve our understanding of the halogen activation processes in three dimensions, we combine active and passive ground-based and satellite-borne measurements of BrO radicals. While satellites cannot resolve the vertical distribution and have rather coarse horizontal resolution, they may provide information on the large-scale horizontal distribution. Information on the spatial variability within a satellite pixel may be derived from our combined ground-based instrumentation. Simultaneous passive multi-axis differential optical absorption spectroscopy (MAX-DOAS) and active long-path DOAS (LP-DOAS) measurements were conducted during the jointly organised OASIS campaign in Barrow, Alaska, in spring 2009 within the scope of the International Polar Year (IPY). Ground-based measurements are compared to BrO column densities measured by GOME-2 in order to arrive at a conclusive picture of the spatial pattern of bromine activation.
Directory of Open Access Journals (Sweden)
Xueli Chen
2010-01-01
During the past decade, the Monte Carlo method has found wide application in optical imaging for simulating the photon transport process inside tissues. However, the method has not yet been effectively extended to the simulation of free-space photon transport. In this paper, a uniform framework for noncontact optical imaging is proposed based on the Monte Carlo method, consisting of the simulation of photon transport both in tissues and in free space. Specifically, the simplification theory of lens systems is utilized to model the camera lens equipped in the optical imaging system, and the Monte Carlo method is employed to describe the energy transformation from the tissue surface to the CCD camera. The focusing effect of the camera lens is also considered, to establish the relationship of corresponding points between the tissue surface and the CCD camera. Furthermore, a parallel version of the framework is realized, making the simulation much more convenient and efficient. The feasibility of the uniform framework and the effectiveness of the parallel version are demonstrated with a cylindrical phantom based on real experimental results.
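The in-tissue half of such a framework rests on the standard photon random walk: exponentially distributed free paths, partial absorption at each interaction, and rescattering. The depth-only Python sketch below is a deliberate simplification (isotropic scattering, arbitrary coefficients), not the authors' code:

```python
import math, random

def simulate_photon(mu_a, mu_s, n_photons=10000, seed=1):
    """Minimal depth-only photon random walk with isotropic scattering.

    Returns the fraction of launched photon weight absorbed in the medium;
    mu_a and mu_s are absorption/scattering coefficients (1/mm).
    """
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    absorbed = 0.0
    for _ in range(n_photons):
        z, cos_t, weight = 0.0, 1.0, 1.0
        while weight > 1e-4:
            step = -math.log(rng.random()) / mu_t   # exponential free path
            z += step * cos_t
            if z < 0.0:                             # escaped at the surface
                break
            absorbed += weight * (mu_a / mu_t)      # partial absorption
            weight *= mu_s / mu_t
            cos_t = 2.0 * rng.random() - 1.0        # isotropic rescatter
    return absorbed / n_photons
```

The free-space half of the framework would then propagate the weight leaving the surface through the lens model toward the CCD.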
Dual-energy CT-based material extraction for tissue segmentation in Monte Carlo dose calculations
Bazalova, Magdalena; Carrier, Jean-François; Beaulieu, Luc; Verhaegen, Frank
2008-05-01
Monte Carlo (MC) dose calculations are performed on patient geometries derived from computed tomography (CT) images. For most available MC codes, the Hounsfield units (HU) in each voxel of a CT image have to be converted into mass density (ρ) and material type. This is typically done with a (HU; ρ) calibration curve, which may lead to mis-assignment of media. In this work, an improved material segmentation using dual-energy CT-based material extraction is presented. For this purpose, the differences in the extracted effective atomic numbers Z and the relative electron densities ρe of each voxel are used. Dual-energy CT material extraction based on parametrization of the linear attenuation coefficient was performed for 17 tissue-equivalent inserts inside a solid water phantom. Scans of the phantom were acquired at 100 kVp and 140 kVp, from which the Z and ρe values of each insert were derived. The mean errors on the Z and ρe extraction were 2.8% and 1.8%, respectively. Phantom dose calculations were performed for 250 kVp and 18 MV photon beams and an 18 MeV electron beam in the EGSnrc/DOSXYZnrc code. Two material assignments were used: the conventional (HU; ρ) segmentation and the novel (HU; ρ, Z) dual-energy CT tissue segmentation. The dose calculation errors using the conventional tissue segmentation were as high as 17% in a mis-assigned soft bone tissue-equivalent material for the 250 kVp photon beam. Similarly, the errors for the 18 MeV electron beam and the 18 MV photon beam were up to 6% and 3% in some mis-assigned media. The assignment of all tissue-equivalent inserts was accurate using the novel dual-energy CT material assignment. As a result, the dose calculation errors were below 1% in all beam arrangements. Comparable improvement in dose calculation accuracy is expected for human tissues. The dual-energy tissue segmentation offers significantly higher accuracy than the conventional single-energy segmentation.
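The extraction step can be illustrated with the usual two-measurement algebra: if the linear attenuation coefficient is parametrized as μ(E) = ρe·(a(E) + b(E)·Z^m), scans at two energies give two equations that can be solved for ρe and Z. The Python sketch below uses invented basis coefficients and exponent purely for illustration; real values come from calibration fits to the phantom inserts:

```python
def extract_rho_z(mu_low, mu_high, a_low, b_low, a_high, b_high, m=3.3):
    """Solve the two-energy parametrisation mu(E) = rho_e*(a(E) + b(E)*Z**m)
    for (rho_e, Z).  The basis coefficients a(E), b(E) and exponent m are
    placeholders, not fitted values.
    """
    # The ratio of the two measurements eliminates rho_e, leaving a single
    # linear equation in Z**m:
    #   mu_low/mu_high = (a_low + b_low*Z**m) / (a_high + b_high*Z**m)
    r = mu_low / mu_high
    zm = (r * a_high - a_low) / (b_low - r * b_high)
    z = zm ** (1.0 / m)
    rho_e = mu_low / (a_low + b_low * zm)
    return rho_e, z
```

Round-tripping synthetic (ρe, Z) values through the forward model and back recovers them exactly, which is a quick sanity check before fitting real insert data.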
Uncertainties in Monte Carlo-based absorbed dose calculations for an experimental benchmark
International Nuclear Information System (INIS)
There is a need to verify the accuracy of general-purpose Monte Carlo codes like EGSnrc, which are commonly employed for investigations of dosimetric problems in radiation therapy. A number of experimental benchmarks have been published that compare calculated values of absorbed dose to experimentally determined values. However, there is a lack of absolute benchmarks, i.e. benchmarks without any normalization involved, which may cause some quantities to cancel. Therefore, at the Physikalisch-Technische Bundesanstalt a benchmark experiment was performed which aimed at the absolute verification of radiation transport calculations for dosimetry in radiation therapy. A thimble-type ionization chamber in a solid phantom was irradiated by high-energy bremsstrahlung, and the mean absorbed dose in the sensitive volume was measured per electron incident on the target. The characteristics of the accelerator and the experimental setup were precisely determined, and the results of a corresponding Monte Carlo simulation with EGSnrc are presented in this study. For a meaningful comparison, an analysis of the uncertainty of the Monte Carlo simulation is necessary. In this study, uncertainties with regard to the simulation geometry, the radiation source, transport options of the Monte Carlo code and specific interaction cross sections are investigated, applying the general methodology of the Guide to the expression of uncertainty in measurement. Besides studying the general influence of changes in the transport options of the EGSnrc code, uncertainties are analyzed by first estimating the sensitivity coefficients of the various input quantities. Secondly, standard uncertainties are assigned to each quantity that is known from the experiment, e.g. uncertainties for geometric dimensions. Data for more fundamental quantities such as photon cross sections and the I-value of electron stopping powers are taken from the literature. The significant uncertainty contributions are identified as
Uncertainties in Monte Carlo-based absorbed dose calculations for an experimental benchmark.
Renner, F; Wulff, J; Kapsch, R-P; Zink, K
2015-10-01
Vrshek-Schallhorn, Suzanne; Wahlstrom, Dustin; White, Tonya; Luciana, Monica
2013-04-01
Despite interest in dopamine's role in emotion-based decision-making, few reports of the effects of dopamine manipulations are available in this area in humans. This study investigates dopamine's role in emotion-based decision-making through a common measure of this construct, the Iowa Gambling Task (IGT), using Acute Tyrosine Phenylalanine Depletion (ATPD). In a between-subjects design, 40 healthy adults were randomized to receive either an ATPD beverage or a balanced amino acid beverage (a control) prior to completing the IGT, as well as pre- and post-manipulation blood draws for the neurohormone prolactin. Together with conventional IGT performance metrics, choice selections and response latencies were examined separately for good and bad choices before and after several key punishment events. Changes in response latencies were also used to predict total task performance. Prolactin levels increased significantly in the ATPD group but not in the control group. However, no significant group differences in performance metrics were detected, nor were there sex differences in outcome measures. Nonetheless, the balanced group's bad-deck latencies speeded up across the task, while the ATPD group's latencies remained adaptively hesitant. Additionally, modulation of latencies to the bad decks predicted the total score for the ATPD group only. One interpretation is that ATPD subtly attenuated reward salience and altered the approach by which individuals achieved successful performance, without resulting in frank group differences in task performance.
Miller, A C; Blakely, W F; Livengood, D; Whittaker, T; Xu, J.; Ejnik, J W; Hamilton, M. M.; Parlette, E; John, T S; Gerstenberg, H M; Hsu, H
1998-01-01
Depleted uranium (DU) is a dense heavy metal used primarily in military applications. Although the health effects of occupational uranium exposure are well known, limited data exist regarding the long-term health effects of internalized DU in humans. We established an in vitro cellular model to study DU exposure. Microdosimetric assessment, determined using a Monte Carlo computer simulation based on measured intracellular and extracellular uranium levels, showed that few (0.0014%) cell nuclei...
The probability distribution of the predicted CFM-induced ozone depletion. [Chlorofluoromethanes]
Ehhalt, D. H.; Chang, J. S.; Butler, D. M.
1979-01-01
It is argued from the central limit theorem that the uncertainty in model-predicted changes of the ozone column density is best represented by a normal probability density distribution. This conclusion is validated by comparison with a probability distribution generated by a Monte Carlo technique. In the case of the CFM-induced ozone depletion, and based on the estimated uncertainties in the reaction rate coefficients alone, the relative mean standard deviation of this normal distribution is estimated to be 0.29.
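The comparison made here can be reproduced in miniature: sample the rate coefficients from lognormal distributions, push them through a depletion model, and check how close the resulting spread is to a normal distribution. The model below is a toy stand-in, and all numbers are illustrative assumptions, not the paper's photochemistry:

```python
import math, random, statistics

def depletion_model(k):
    """Toy stand-in for the photochemical model: predicted depletion grows
    with the product/ratio of several rate coefficients (illustrative only)."""
    return 10.0 * math.log(1.0 + k[0] * k[1] / k[2])

def monte_carlo_depletion(n=20000, sigma=0.29, seed=7):
    """Propagate lognormal rate-coefficient uncertainties and summarise the
    spread of the predicted depletion."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        k = [math.exp(rng.gauss(0.0, sigma)) for _ in range(3)]
        samples.append(depletion_model(k))
    mu = statistics.fmean(samples)
    sd = statistics.stdev(samples)
    # Fraction of samples within one sigma; ~0.68 supports near-normality.
    frac = sum(1 for s in samples if abs(s - mu) <= sd) / n
    return mu, sd, frac
```

Because the model response is smooth and the input uncertainties are moderate, the one-sigma fraction lands near the Gaussian value of 0.68, which is the central-limit-theorem argument in miniature.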
Kumada, H; Saito, K; Nakamura, T; Sakae, T; Sakurai, H; Matsumura, A; Ono, K
2011-12-01
Treatment planning for boron neutron capture therapy generally utilizes Monte Carlo methods for calculation of the dose distribution. The new treatment planning system JCDS-FX employs the multi-purpose Monte Carlo code PHITS to calculate the dose distribution. JCDS-FX makes it possible to build a precise voxel model consisting of pixel-based voxel cells on the scale of 0.4 × 0.4 × 2.0 mm³ per voxel in order to perform high-accuracy dose estimation, e.g. for the purpose of calculating the dose distribution in a human body. However, the miniaturization of the voxel size increases calculation time considerably. The aim of this study is to investigate sophisticated modeling methods which can perform Monte Carlo calculations for human geometry efficiently. Thus, we devised a new voxel modeling method, the "Multistep Lattice-Voxel method," which can configure a voxel model that combines different voxel sizes by utilizing the lattice function repeatedly. To verify the performance of calculations with this modeling method, several calculations for human geometry were carried out. The results demonstrated that the Multistep Lattice-Voxel method enabled the precise voxel model to reduce calculation time substantially while maintaining high-accuracy dose estimation.
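The multistep idea of mixing voxel sizes can be pictured as a coarse grid whose selected cells carry a finer sub-lattice, so point location stays cheap everywhere except where resolution is actually needed. The two-level Python sketch below is a hypothetical illustration, not the JCDS-FX data structure:

```python
def make_multistep_lattice(coarse_shape, fine_cells, fine_factor=5):
    """Coarse voxel grid in which selected cells carry a finer sub-lattice,
    mimicking the multistep idea of mixing voxel sizes."""
    return {
        "shape": coarse_shape,
        "fine": {c: [[0] * fine_factor for _ in range(fine_factor)]
                 for c in fine_cells},   # refined cells only
        "factor": fine_factor,
    }

def locate(lattice, x, y, cell_size=1.0):
    """Map a point to its voxel: coarse index alone, or coarse index plus a
    sub-index when the coarse cell is refined."""
    i, j = int(x // cell_size), int(y // cell_size)
    if (i, j) in lattice["fine"]:
        f = lattice["factor"]
        si = int((x - i * cell_size) / cell_size * f)
        sj = int((y - j * cell_size) / cell_size * f)
        return (i, j, si, sj)
    return (i, j)
```

Only the refined cells pay the memory and lookup cost of the fine resolution; the rest of the geometry is tracked at the coarse step.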
Using a Monte-Carlo-based approach to evaluate the uncertainty on fringe projection technique
Molimard, Jérôme
2013-01-01
A complete uncertainty analysis of a given fringe projection set-up has been performed using a Monte Carlo approach. In particular, the calibration procedure is taken into account. Two applications are given: at the macroscopic scale, phase noise is predominant, whilst at the microscopic scale both phase noise and calibration errors are important. Finally, the uncertainty found at the macroscopic scale is close to that of some experimental tests (~100 µm).
A Monte Carlo method based on antithetic variates for network reliability computations
El Khadiri, Mohamed; Rubino, Gerardo
1992-01-01
The exact evaluation of the usual reliability measures of communication networks is seriously limited by the excessive computational time usually needed to obtain them. In the general case, the computation of almost all the interesting reliability metrics is an NP-hard problem. An alternative approach is to estimate them by means of a Monte Carlo simulation, which makes it possible to deal with larger models than those that can be evaluated exactly. In this paper, we propose an algorithm much more per...
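Antithetic variates pair each uniform draw U with its mirror 1 − U so that the two resulting reliability indicators are negatively correlated, reducing the variance of their average. The self-contained Python sketch below applies this to a small five-edge bridge network; it is an illustrative example, not the authors' algorithm:

```python
import random

EDGES = [("s", "a"), ("s", "b"), ("a", "b"), ("a", "t"), ("b", "t")]

def connected(up):
    """Search from s using only operational edges; True if t is reached."""
    frontier, seen = ["s"], {"s"}
    while frontier:
        node = frontier.pop()
        for (u, v), ok in zip(EDGES, up):
            if not ok:
                continue
            for x, y in ((u, v), (v, u)):
                if x == node and y not in seen:
                    seen.add(y)
                    frontier.append(y)
    return "t" in seen

def reliability_antithetic(p=0.9, n_pairs=5000, seed=3):
    """Antithetic estimator: each uniform vector U is paired with 1-U, and
    the two (negatively correlated) connectivity indicators are averaged."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_pairs):
        u = [rng.random() for _ in EDGES]
        total += connected([x < p for x in u])          # state from U
        total += connected([1.0 - x < p for x in u])    # state from 1-U
    return total / (2 * n_pairs)
```

For this bridge network with edge reliability p = 0.9, the exact two-terminal reliability is about 0.978, and the antithetic estimate lands close to it with fewer samples than crude Monte Carlo would need.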
Doronin, Alexander; Rushmeier, Holly E.; Meglinski, Igor; Bykov, Alexander V.
2016-03-01
We present a new Monte Carlo based approach for modelling the Bidirectional Scattering-Surface Reflectance Distribution Function (BSSRDF) for accurate rendering of human skin appearance. The variations of both skin tissue structure and the major chromophores are taken into account according to the different ethnic and age groups. The computational solution utilizes HTML5, accelerated by graphics processing units (GPUs), and is therefore convenient for practical use on most modern computer-based devices and operating systems. Simulated human skin reflectance spectra, the corresponding skin colours and examples of 3D face rendering are presented and compared with the results of phantom studies.
Energy Technology Data Exchange (ETDEWEB)
Rivard, Mark J.; Melhus, Christopher S.; Granero, Domingo; Perez-Calatayud, Jose; Ballester, Facundo [Department of Radiation Oncology, Tufts University School of Medicine, Boston, Massachusetts 02111 (United States); Radiation Oncology Department, Physics Section, "La Fe" University Hospital, Avenida Campanar 21, E-46009 Valencia (Spain); Department of Atomic, Molecular, and Nuclear Physics, University of Valencia, C/Dr. Moliner 50, E-46100 Burjassot, Spain and IFIC (University of Valencia-CSIC), C/Dr. Moliner 50, E-46100 Burjassot (Spain)]
2009-06-15
Certain brachytherapy dose distributions, such as those for LDR prostate implants, are readily modeled by treatment planning systems (TPS) that use the superposition principle of individual seed dose distributions to calculate the total dose distribution. However, dose distributions for brachytherapy treatments using high-Z shields or having significant material heterogeneities are not currently well modeled using conventional TPS. The purpose of this study is to establish a new treatment planning technique (Tufts technique) that could be applied in some clinical situations where the conventional approach is not acceptable and dose distributions present cylindrical symmetry. Dose distributions from complex brachytherapy source configurations determined with Monte Carlo methods were used as input data. These source distributions included the 2 and 3 cm diameter Valencia skin applicators from Nucletron, 4-8 cm diameter AccuBoost peripheral breast brachytherapy applicators from Advanced Radiation Therapy, and a 16 mm COMS-based eye plaque using ¹⁰³Pd, ¹²⁵I, and ¹³¹Cs seeds. Radial dose functions and 2D anisotropy functions were obtained by positioning the coordinate system origin along the dose distribution cylindrical axis of symmetry. Origin:tissue distance and active length were chosen to minimize TPS interpolation errors. Dosimetry parameters were entered into the PINNACLE TPS, and dose distributions were subsequently calculated and compared to the original Monte Carlo-derived dose distributions. The new planning technique was able to reproduce brachytherapy dose distributions for all three applicator types, producing dosimetric agreement typically within 2% when compared with Monte Carlo-derived dose distributions. Agreement between Monte Carlo-derived and planned dose distributions improved as the spatial resolution of the fitted dosimetry parameters improved. For agreement within 5% throughout the clinical volume, spatial resolution of
An analytic linear accelerator source model for GPU-based Monte Carlo dose calculations
Tian, Zhen; Li, Yongbao; Folkerts, Michael; Shi, Feng; Jiang, Steve B.; Jia, Xun
2015-10-01
Recently, there has been a lot of research interest in developing fast Monte Carlo (MC) dose calculation methods on graphics processing unit (GPU) platforms. A good linear accelerator (linac) source model is critical for both accuracy and efficiency. In principle, an analytical source model is preferable to a phase-space file-based model for GPU-based MC dose engines, in that data loading and CPU-GPU data transfer can be avoided. In this paper, we present an analytical, field-independent source model specifically developed for GPU-based MC dose calculations, together with a GPU-friendly sampling scheme. A key concept called the phase-space ring (PSR) is proposed. Each PSR contains a group of particles that are of the same type, are close in energy and reside in a narrow ring on the phase-space plane located just above the upper jaws. The model parameterizes the probability densities of particle location, direction and energy for each primary-photon PSR, scattered-photon PSR and electron PSR. Either a single 2D Gaussian distribution or a mixture of several Gaussian components is employed to represent the particle direction distributions of these PSRs. A method was developed to analyze a reference phase-space file and derive the corresponding model parameters. To use our model efficiently in MC dose calculations on a GPU, we propose a GPU-friendly sampling strategy which ensures that the particles sampled and transported simultaneously are of the same type and close in energy, alleviating GPU thread divergence. To test the accuracy of our model, dose distributions of a set of open fields in a water phantom were calculated using our source model and compared to those calculated using the reference phase-space files. For the high-dose-gradient regions, the average distance-to-agreement (DTA) was within 1 mm and the maximum DTA within 2 mm. For relatively low-dose-gradient regions, the root-mean-square (RMS) dose difference was within 1.1% and the maximum
Houska, Tobias; Multsch, Sebastian; Kraft, Philipp; Frede, Hans-Georg; Breuer, Lutz
2014-05-01
Computer simulations are widely used to support decision making and planning in the agricultural sector. On the one hand, many plant growth models use simplified hydrological processes and structures, e.g. a small number of soil layers or simple water-flow approaches. On the other hand, plant growth processes are poorly represented in many hydrological models. Hence, fully coupled models with a high degree of process representation would allow a more detailed analysis of the dynamic behaviour of the soil-plant interface. We used the Python programming language to couple two such highly process-oriented independent models and to calibrate both models simultaneously. The Catchment Modelling Framework (CMF) simulated soil hydrology based on the Richards equation and the Van Genuchten-Mualem retention curve. CMF was coupled with the Plant growth Modelling Framework (PMF), which predicts plant growth on the basis of radiation use efficiency, degree days, water shortage and dynamic root biomass allocation. The Monte Carlo based Generalised Likelihood Uncertainty Estimation (GLUE) method was applied to parameterize the coupled model and to investigate the associated uncertainty of the model predictions. Overall, 19 model parameters (4 for CMF and 15 for PMF) were analysed through 2 × 10⁶ model runs randomly drawn from a uniformly distributed parameter space. Three objective functions were used to evaluate the model performance, i.e. the coefficient of determination (R²), the bias and the Nash-Sutcliffe model efficiency (NSE). The model was applied to three sites with different management in Muencheberg (Germany) for the simulation of winter wheat (Triticum aestivum L.) in a cross-validation experiment. Field observations for model evaluation included soil water content and the dry matter of roots, storage organs, stems and leaves. The best parameter sets resulted in an NSE of 0.57 for the simulation of soil moisture across all three sites. The shape
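The GLUE loop itself is short: draw parameter sets uniformly, score each run with a likelihood measure such as NSE, and keep the "behavioural" sets above a threshold. The Python sketch below substitutes a toy linear model for CMF/PMF; the bounds, threshold and data are invented for illustration:

```python
import random

def nse(obs, sim):
    """Nash-Sutcliffe efficiency of a simulated vs an observed series."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

def glue(model, bounds, obs, n_runs=2000, threshold=0.5, seed=11):
    """Minimal GLUE loop: uniform parameter sampling, NSE as the informal
    likelihood, and behavioural sets retained above a threshold."""
    rng = random.Random(seed)
    behavioural = []
    for _ in range(n_runs):
        theta = [rng.uniform(lo, hi) for lo, hi in bounds]
        score = nse(obs, model(theta))
        if score > threshold:
            behavioural.append((score, theta))
    return behavioural

# Toy model y(t) = a*t + b; "observations" generated with a=2, b=1.
obs = [2 * t + 1 for t in range(10)]
sets = glue(lambda th: [th[0] * t + th[1] for t in range(10)],
            [(0.0, 4.0), (0.0, 2.0)], obs)
```

The retained scores can then be normalised into weights to produce the prediction uncertainty bands typical of GLUE studies.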
Mommen, G.P.M.; Waterbeemd, van de B.; Meiring, H.D.; Kersten, G.; Heck, A.J.R.; Jong, de A.P.J.M.
2012-01-01
A positional proteomics strategy for global N-proteome analysis is presented, based on phospho tagging (PTAG) of internal peptides followed by their depletion by titanium dioxide (TiO2) affinity chromatography. To this end, N-terminal and lysine amino groups are first completely dimethylated with formald
Monte Carlo-based treatment planning system calculation engine for microbeam radiation therapy
Energy Technology Data Exchange (ETDEWEB)
Martinez-Rovira, I.; Sempau, J.; Prezado, Y. [Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, Barcelona E-08028 (Spain) and ID17 Biomedical Beamline, European Synchrotron Radiation Facility (ESRF), 6 rue Jules Horowitz B.P. 220, F-38043 Grenoble Cedex (France); Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, Barcelona E-08028 (Spain); Laboratoire Imagerie et modelisation en neurobiologie et cancerologie, UMR8165, Centre National de la Recherche Scientifique (CNRS), Universites Paris 7 et Paris 11, Bat 440., 15 rue Georges Clemenceau, F-91406 Orsay Cedex (France)
2012-05-15
Purpose: Microbeam radiation therapy (MRT) is a synchrotron radiotherapy technique that explores the limits of the dose-volume effect. Preclinical studies have shown that MRT irradiations (arrays of 25-75 µm-wide microbeams spaced by 200-400 µm) are able to eradicate highly aggressive animal tumor models while healthy tissue is preserved. These promising results have provided the basis for the forthcoming clinical trials at the ID17 Biomedical Beamline of the European Synchrotron Radiation Facility (ESRF). The first step includes irradiation of pets (cats and dogs) as a milestone before treatment of human patients. Within this context, accurate dose calculations are required. The distinct features of both beam generation and irradiation geometry in MRT with respect to conventional techniques require the development of a specific MRT treatment planning system (TPS). In particular, a Monte Carlo (MC)-based calculation engine for the MRT TPS has been developed in this work. Experimental verification in heterogeneous phantoms and optimization of the computation time have also been performed. Methods: The penelope/penEasy MC code was used to compute dose distributions from a realistic beam source model. Experimental verification was carried out by means of radiochromic films placed within heterogeneous slab-phantoms. Once validation was completed, dose computations in a virtual model of a patient, reconstructed from computed tomography (CT) images, were performed. To this end, decoupling of the CT image voxel grid (a few cubic millimeter volume) to the dose bin grid, which has micrometer dimensions in the transversal direction of the microbeams, was performed. Optimization of the simulation parameters, the use of variance-reduction (VR) techniques, and other methods, such as the parallelization of the simulations, were applied in order to speed up the dose computation. Results: Good agreement between MC simulations and experimental results was achieved, even at
Generation of scintigraphic images in a virtual dosimetry trial based on Monte Carlo modelling
International Nuclear Information System (INIS)
Aim: the purpose of dosimetry calculations in therapeutic nuclear medicine is to maximize the tumour absorbed dose while minimizing normal tissue toxicities. However, a wide heterogeneity of dosimetric approaches is observed: there is no standardized dosimetric protocol to date. The DosiTest project (www.dositest.com) intends to identify critical steps in the dosimetry chain by implementing clinical dosimetry in different nuclear medicine departments, on scintigraphic images generated by Monte Carlo simulation from the same virtual patient. This study aims at presenting the different steps contributing to image generation, following the imaging protocol of a given participating centre, Milan's European Institute of Oncology (IEO). Materials and methods: the chosen clinical application is that of 111In-pentetreotide (Octreoscan™). Pharmacokinetic data from the literature are used to derive a compartmental model. The kinetic rates between the 6 compartments (liver, spleen, kidneys, blood, urine, remainder of body) were obtained from WinSaam [3]; thus the activity in each compartment is known at any time point. The TestDose [1] software (the computing architecture of DosiTest) implements the NURBS-based phantom NCAT-WB [2] to generate anatomical data for the virtual patient. The IEO gamma camera was modelled with GATE [4] v6.2. Scintigraphic images were simulated for each compartment, and the resulting projections were weighted by the respective pharmacokinetics of each compartment. The final step consisted in aggregating the compartments to generate the resulting image. Results: following IEO's imaging protocol, planar and tomographic image simulations were generated at various time points. Computation times (on a 480-virtual-core computing cluster) for 'step and shoot' whole-body simulations (5 steps/time point) with acceptable statistics were: 10 days for extra-vascular fluid, 28 h for blood, 12 h for liver, 7 h for kidneys, and 1-2 h for
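The pharmacokinetic step, in which the activity of every compartment must be known at any time point, amounts to integrating a linear compartment model under radioactive decay. The forward-Euler sketch below uses invented transfer rates and a reduced set of compartments; only the 111In half-life (~67.3 h) is a real figure:

```python
import math

def simulate_compartments(rates, decay_const, a0, t_end=24.0, dt=0.01):
    """Forward-Euler integration of a linear compartment model.

    rates[(i, j)] is the transfer rate (1/h) from compartment i to j;
    decay_const applies the physical decay of the radionuclide to every
    compartment.  The rate values used below are illustrative, not
    Octreoscan kinetic data.
    """
    a = list(a0)
    for _ in range(int(t_end / dt)):
        flow = [0.0] * len(a)
        for (i, j), k in rates.items():
            moved = k * a[i] * dt
            flow[i] -= moved
            flow[j] += moved
        a = [(x + f) * math.exp(-decay_const * dt) for x, f in zip(a, flow)]
    return a

# Blood (0) feeding liver (1) and kidneys (2); 111In half-life ~67.3 h.
rates = {(0, 1): 0.30, (0, 2): 0.15}
decay = math.log(2) / 67.3
final = simulate_compartments(rates, decay, [1.0, 0.0, 0.0])
```

Because the transfers only move activity between compartments, the total activity after 24 h equals the initial activity times the physical decay factor, which is a convenient correctness check.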
Radiative characteristics of depleted uranium bombs and protection against them
International Nuclear Information System (INIS)
Based on the development of depleted uranium bombs described in the first part, the radiative characteristics and mechanisms of depleted uranium bombs are analyzed with emphasis. Protection against depleted uranium bombs is then discussed in more depth.
Charek, Daniel B; Meyer, Gregory J; Mihura, Joni L
2016-10-01
We investigated the impact of ego depletion on selected Rorschach cognitive processing variables and self-reported affect states. Research indicates acts of effortful self-regulation transiently deplete a finite pool of cognitive resources, impairing performance on subsequent tasks requiring self-regulation. We predicted that relative to controls, ego-depleted participants' Rorschach protocols would have more spontaneous reactivity to color, less cognitive sophistication, and more frequent logical lapses in visualization, whereas self-reports would reflect greater fatigue and less attentiveness. The hypotheses were partially supported; despite a surprising absence of self-reported differences, ego-depleted participants had Rorschach protocols with lower scores on two variables indicative of sophisticated combinatory thinking, as well as higher levels of color receptivity; they also had lower scores on a composite variable computed across all hypothesized markers of complexity. In addition, self-reported achievement striving moderated the effect of the experimental manipulation on color receptivity, and in the Depletion condition it was associated with greater attentiveness to the tasks, more color reactivity, and less global synthetic processing. Results are discussed with an emphasis on the response process, methodological limitations and strengths, implications for calculating refined Rorschach scores, and the value of using multiple methods in research and experimental paradigms to validate assessment measures. PMID:26002059
International Nuclear Information System (INIS)
This paper presents an unstructured mesh based multi-physics interface implemented in the Serpent 2 Monte Carlo code, for the purpose of coupling the neutronics solution to component-scale thermal hydraulics calculations, such as computational fluid dynamics (CFD). The work continues the development of a multi-physics coupling scheme, which relies on the separation of state-point information from the geometry input, and the capability to handle temperature and density distributions by a rejection sampling algorithm. The new interface type is demonstrated by a simplified molten-salt reactor test case, using a thermal hydraulics solution provided by the CFD solver in OpenFOAM. (author)
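The rejection-sampling capability mentioned above is in the spirit of Woodcock (delta) tracking: tentative collision sites are sampled from a constant majorant cross section and accepted with probability σ(x)/σ_max, so continuously varying temperature and density distributions need no subdivision into constant-property cells. The Python sketch below is a generic illustration, not Serpent's implementation:

```python
import math, random

def delta_track(sigma_of_x, sigma_max, x0=0.0, rng=None):
    """Woodcock-style rejection sampling of a collision site in a medium
    whose total cross-section varies continuously with position.

    Flights are sampled from the constant majorant sigma_max; a tentative
    collision is accepted with probability sigma(x)/sigma_max.
    """
    rng = rng or random.Random(5)
    x = x0
    while True:
        x += -math.log(rng.random()) / sigma_max   # majorant flight
        if rng.random() < sigma_of_x(x) / sigma_max:
            return x                               # real collision accepted

def mean_free_path(sigma_of_x, sigma_max, n=20000, seed=5):
    """Monte Carlo estimate of the mean distance to the first collision."""
    rng = random.Random(seed)
    return sum(delta_track(sigma_of_x, sigma_max, rng=rng)
               for _ in range(n)) / n
```

A useful property to verify: for a constant cross section the estimated mean free path is 1/σ regardless of how loose the majorant is, because rejection exactly compensates for the shortened majorant flights.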
Sampling-Based Nuclear Data Uncertainty Quantification for Continuous Energy Monte Carlo Codes
Zhu, Ting
2015-01-01
The goal of the present PhD research is to establish a methodology of nuclear data uncertainty quantification (NDUQ) for MCNPX, the continuous-energy Monte Carlo (M-C) code. The high fidelity (continuous-energy treatment and flexible geometry modelling) of MCNPX makes it the code of choice for routine criticality safety calculations at PSI/LRS, but it also raises challenges for NDUQ by conventional sensitivity/uncertainty (S/U) methods. The methodology developed during this PhD research is fundamentally ...
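To make the sampling-based NDUQ idea concrete, here is a minimal sketch in Python: nuclear data (here a single pair of one-group cross-sections, with invented nominal values and relative uncertainties standing in for an evaluated covariance) are sampled repeatedly and propagated through a model (here a trivial k-infinity formula rather than MCNPX) to obtain an output uncertainty.

```python
import random
import statistics

def k_inf(nu_sigma_f, sigma_a):
    # Infinite-medium multiplication factor for a one-group model.
    return nu_sigma_f / sigma_a

def sampled_k_dist(n_samples, seed=1):
    rng = random.Random(seed)
    ks = []
    for _ in range(n_samples):
        # Perturb nominal cross-sections with assumed 2% and 1% relative
        # standard deviations, mimicking draws from a nuclear-data covariance.
        nsf = rng.gauss(0.0050, 0.0050 * 0.02)
        sa = rng.gauss(0.0049, 0.0049 * 0.01)
        ks.append(k_inf(nsf, sa))
    return ks

ks = sampled_k_dist(5000)
mean_k = statistics.fmean(ks)
rel_unc = statistics.stdev(ks) / mean_k   # propagated relative uncertainty
print(round(mean_k, 3), round(rel_unc, 3))
```

With independent inputs the propagated relative uncertainty is close to the quadrature sum of the input uncertainties; the real method replaces the toy formula with full MCNPX runs per sample.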
Monte Carlo calculations for design of An accelerator based PGNAA facility
International Nuclear Information System (INIS)
Monte Carlo calculations were carried out for the design of a setup for Prompt Gamma Ray Neutron Activation Analysis (PGNAA) with 14 MeV neutrons to analyze cement raw material samples. The calculations were carried out using the MCNP4B2 code. Various geometry parameters of the PGNAA experimental setup, such as sample thickness, moderator geometry and detector shielding, were optimized by maximizing the prompt gamma-ray yield of the different elements of the sample material. Finally, calibration curves of the PGNAA setup were generated for various concentrations of calcium in the sample material. Results of this simulation are presented. (author)
Monte Carlo calculations for design of An accelerator based PGNAA facility
Energy Technology Data Exchange (ETDEWEB)
Nagadi, M.M.; Naqvi, A.A. [King Fahd University of Petroleum and Minerals, Center for Applied Physical Sciences, Dhahran (Saudi Arabia); Rehman, Khateeb-ur; Kidwai, S. [King Fahd University of Petroleum and Minerals, Department of Physics, Dhahran (Saudi Arabia)
2002-08-01
Monte Carlo calculations were carried out for the design of a setup for Prompt Gamma Ray Neutron Activation Analysis (PGNAA) with 14 MeV neutrons to analyze cement raw material samples. The calculations were carried out using the MCNP4B2 code. Various geometry parameters of the PGNAA experimental setup, such as sample thickness, moderator geometry and detector shielding, were optimized by maximizing the prompt gamma-ray yield of the different elements of the sample material. Finally, calibration curves of the PGNAA setup were generated for various concentrations of calcium in the sample material. Results of this simulation are presented. (author)
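The geometry optimization loop described in the abstract can be caricatured in a few lines: a toy yield model (invented constants, not MCNP physics) is scanned over sample thickness, and the thickness maximizing the detected prompt gamma yield is kept.

```python
import math

def prompt_gamma_yield(t_cm, sigma_n=0.12, mu_g=0.20):
    # Toy yield model: neutrons interact somewhere in a slab of thickness t
    # (probability 1 - exp(-sigma_n*t)), and the prompt gammas produced are
    # attenuated on the way out (exp(-mu_g*t)).  The constants are
    # illustrative, not fitted cement cross-sections.
    return (1.0 - math.exp(-sigma_n * t_cm)) * math.exp(-mu_g * t_cm)

# Scan sample thickness, as the MCNP study scans its geometry parameters,
# and keep the thickness that maximizes the detected gamma yield.
best_t = max((0.25 * i for i in range(1, 101)), key=prompt_gamma_yield)
print(round(best_t, 2))
```

The competition between producing gammas (thicker is better) and self-absorbing them (thinner is better) gives an interior optimum, which is exactly why such parameters are worth scanning.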
Microlens assembly error analysis for light field camera based on Monte Carlo method
Li, Sai; Yuan, Yuan; Zhang, Hao-Wei; Liu, Bin; Tan, He-Ping
2016-08-01
This paper describes a numerical analysis of microlens assembly errors in light field cameras using the Monte Carlo method. Assuming that there were no manufacturing errors, a home-built program was used to simulate images affected by the coupling-distance, movement and rotation errors that can appear during microlens installation. By examining these images, along with the sub-aperture and refocused images derived from them, we found that the images show different degrees of blur and deformation for the different microlens assembly errors, while the sub-aperture images present aliasing, obscuration and other distortions that result in unclear refocused images.
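A hedged sketch of the Monte Carlo part of such an error analysis: assembly errors (decenter, coupling distance, rotation) are drawn from assumed tolerances and pushed through a made-up first-order optical model to get an RMS spot-shift figure. The real study instead renders and inspects simulated images; every sensitivity constant below is invented.

```python
import math
import random

def spot_shift_um(dz_um, dx_um, tilt_mrad, focal_um=500.0):
    # Toy first-order model of how one microlens's image spot degrades:
    # lateral decenter shifts the spot directly, tilt is mapped through an
    # assumed focal length, and coupling-distance (defocus) error is
    # converted to blur with an assumed sensitivity.  Purely illustrative.
    defocus_blur = abs(dz_um) * 0.05
    tilt_shift = focal_um * math.tan(tilt_mrad / 1000.0)
    return math.hypot(dx_um + tilt_shift, defocus_blur)

rng = random.Random(42)
shifts = []
for _ in range(20000):
    # Draw assembly errors from assumed 1-sigma tolerances:
    # 2 um decenter, 5 um coupling-distance error, 0.5 mrad rotation.
    dx = rng.gauss(0.0, 2.0)
    dz = rng.gauss(0.0, 5.0)
    tilt = rng.gauss(0.0, 0.5)
    shifts.append(spot_shift_um(dz, dx, tilt))

rms = math.sqrt(sum(s * s for s in shifts) / len(shifts))
print(round(rms, 2))
```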
Random vibration analysis of switching apparatus based on Monte Carlo method
Institute of Scientific and Technical Information of China (English)
ZHAI Guo-fu; CHEN Ying-hua; REN Wan-bin
2007-01-01
The performance in a vibration environment of switching apparatus containing a mechanical contact is an important element when judging the apparatus's reliability. A piecewise-linear two-degrees-of-freedom mathematical model considering contact loss was built in this work, and the vibration performance of the model under random external Gaussian white noise excitation was investigated by using Monte Carlo simulation in Matlab/Simulink. The simulation showed that the spectral content and statistical characteristics of the contact force agreed well with reality. The random vibration behavior of the contact system was solved using time-domain (numerical) simulation in this paper. The conclusions reached here are of great importance for the reliability design of switching apparatus.
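The time-domain Monte Carlo approach can be sketched with a one-degree-of-freedom analogue (the paper uses a two-DOF Simulink model; every parameter here is illustrative): Euler-Maruyama integration under Gaussian white noise, recording how often the unilateral contact force drops to zero, i.e. contact loss.

```python
import math
import random

def contact_loss_fraction(T=50.0, dt=1e-3, seed=3):
    # One-DOF contact system m*x'' + c*x' + k*x = w(t) driven by Gaussian
    # white noise w, integrated with (semi-implicit) Euler-Maruyama.
    # The assumed unilateral contact force F = max(0, F0 - x) drops to
    # zero when the contact opens.  Parameters are illustrative only.
    m, c, k, sigma, F0 = 1.0, 2.0, 100.0, 20.0, 2.0
    rng = random.Random(seed)
    x = v = 0.0
    open_steps = 0
    n = int(T / dt)
    for _ in range(n):
        v += dt * (-(c * v + k * x) / m) + (sigma / m) * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        x += dt * v
        if F0 - x <= 0.0:          # piecewise-linear contact: force clips at 0
            open_steps += 1
    return open_steps / n

frac = contact_loss_fraction()
print(round(frac, 3))
```

For these parameters the stationary displacement standard deviation is about sigma^2/(2*c*k) = 1 in variance, so contact loss (x exceeding F0 = 2) is a rare-but-present event, which is the kind of statistic the paper extracts from its runs.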
Energy Technology Data Exchange (ETDEWEB)
Al-Subeihi, Ala' A.A., E-mail: subeihi@yahoo.com [Division of Toxicology, Wageningen University, Tuinlaan 5, 6703 HE Wageningen (Netherlands); BEN-HAYYAN-Aqaba International Laboratories, Aqaba Special Economic Zone Authority (ASEZA), P. O. Box 2565, Aqaba 77110 (Jordan); Alhusainy, Wasma; Kiwamoto, Reiko; Spenkelink, Bert [Division of Toxicology, Wageningen University, Tuinlaan 5, 6703 HE Wageningen (Netherlands); Bladeren, Peter J. van [Division of Toxicology, Wageningen University, Tuinlaan 5, 6703 HE Wageningen (Netherlands); Nestec S.A., Avenue Nestlé 55, 1800 Vevey (Switzerland); Rietjens, Ivonne M.C.M.; Punt, Ans [Division of Toxicology, Wageningen University, Tuinlaan 5, 6703 HE Wageningen (Netherlands)
2015-03-01
The present study aims at predicting the level of formation of the ultimate carcinogenic metabolite of methyleugenol, 1′-sulfooxymethyleugenol, in the human population by taking variability in key bioactivation and detoxification reactions into account using Monte Carlo simulations. Depending on the metabolic route, variation was simulated based on kinetic constants obtained from incubations with a range of individual human liver fractions or by combining kinetic constants obtained for specific isoenzymes with literature reported human variation in the activity of these enzymes. The results of the study indicate that formation of 1′-sulfooxymethyleugenol is predominantly affected by variation in i) P450 1A2-catalyzed bioactivation of methyleugenol to 1′-hydroxymethyleugenol, ii) P450 2B6-catalyzed epoxidation of methyleugenol, iii) the apparent kinetic constants for oxidation of 1′-hydroxymethyleugenol, and iv) the apparent kinetic constants for sulfation of 1′-hydroxymethyleugenol. Based on the Monte Carlo simulations a so-called chemical-specific adjustment factor (CSAF) for intraspecies variation could be derived by dividing different percentiles by the 50th percentile of the predicted population distribution for 1′-sulfooxymethyleugenol formation. The obtained CSAF value at the 90th percentile was 3.2, indicating that the default uncertainty factor of 3.16 for human variability in kinetics may adequately cover the variation within 90% of the population. Covering 99% of the population requires a larger uncertainty factor of 6.4. In conclusion, the results showed that adequate predictions on interindividual human variation can be made with Monte Carlo-based PBK modeling. For methyleugenol this variation was observed to be in line with the default variation generally assumed in risk assessment. - Highlights: • Interindividual human differences in methyleugenol bioactivation were simulated. • This was done using in vitro incubations, PBK modeling
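The CSAF derivation described above reduces to percentile ratios of a simulated population distribution. A sketch, with a lognormal stand-in for the PBK model output (the spread parameter is invented, not fitted to the paper's data):

```python
import random

def csaf(samples, pct):
    # Chemical-specific adjustment factor: the pct-th percentile of the
    # predicted population distribution divided by its median.
    s = sorted(samples)
    def percentile(p):
        # Simple nearest-rank percentile, adequate for a sketch.
        return s[min(len(s) - 1, int(p / 100.0 * len(s)))]
    return percentile(pct) / percentile(50)

rng = random.Random(0)
# Illustrative stand-in for the PBK outputs: a lognormal population
# distribution of metabolite formation (arbitrary units, invented sigma).
population = [rng.lognormvariate(0.0, 0.9) for _ in range(100000)]
print(round(csaf(population, 90), 2), round(csaf(population, 99), 2))
```

Comparing the 90th-percentile CSAF against the default kinetic uncertainty factor of 3.16 is then a one-line check, exactly as in the abstract's argument.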
International Nuclear Information System (INIS)
The numerical simulation of the dynamics of fast ions coming from neutral beam injection (NBI) heating is an important task in fusion devices, since these particles are used as sources to heat and fuel the plasma and their uncontrolled losses can damage the walls of the reactor. This paper shows a new application that simulates these dynamics on the grid: FastDEP. FastDEP plugs together two Monte Carlo codes used in fusion science, namely FAFNER2 and ISDEP, and adds new functionalities. Physically, FAFNER2 provides the fast ion initial state in the device while ISDEP calculates their evolution in time; as a result, the fast ion distribution function in the TJ-II stellarator has been estimated, but the code can be used on any other device. In this paper a comparison between the physics of the two NBI injectors in TJ-II is presented, together with the differences between fast ion confinement and the driven momentum in the two cases. The simulations have been obtained using Montera, a framework developed for achieving grid-efficient executions of Monte Carlo applications. (paper)
A Monte Carlo and continuum study of mechanical properties of nanoparticle based films
Energy Technology Data Exchange (ETDEWEB)
Ogunsola, Oluwatosin; Ehrman, Sheryl [University of Maryland, Department of Chemical and Biomolecular Engineering, Chemical and Nuclear Engineering Building (United States)], E-mail: sehrman@eng.umd.edu
2008-01-15
A combination Monte Carlo and equivalent-continuum simulation approach was used to investigate the structure-mechanical property relationships of titania nanoparticle deposits. Films of titania composed of nanoparticle aggregates were simulated using a Monte Carlo approach with diffusion-limited aggregation. Each aggregate in the simulation is fractal-like and random in structure. In the film structure, it is assumed that bond strength is a function of distance with two limiting values for the bond strengths: one representing the strong chemical bond between the particles at closest proximity in the aggregate and the other representing the weak van der Waals bond between particles from different aggregates. The Young's modulus of the film is estimated using an equivalent-continuum modeling approach, and the influences of particle diameter (5-100 nm) and aggregate size (3-400 particles per aggregate) on predicted Young's modulus are investigated. The Young's modulus is observed to increase with a decrease in primary particle size and is independent of the size of the aggregates deposited. Decreasing porosity resulted in an increase in Young's modulus as expected from results reported previously in the literature.
Energy Technology Data Exchange (ETDEWEB)
Burke, Timothy P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kiedrowski, Brian C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Martin, William R. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-11-19
Kernel Density Estimators (KDEs) are a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations. Kernel density estimators are an alternative to histogram tallies for obtaining global solutions in Monte Carlo tallies. With KDEs, a single event, either a collision or particle track, can contribute to the score at multiple tally points, with the uncertainty at those points being independent of the desired resolution of the solution. Thus, KDEs show potential for obtaining estimates of a global solution with reduced variance when compared to a histogram. Previously, KDEs have been applied to neutronics for one-group reactor physics problems and fixed-source shielding applications. However, little work has been done to obtain reaction rates using KDEs. This paper introduces a new form of the mean-free-path (MFP) KDE that is capable of handling general geometries. Furthermore, extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies to the solution. An ad-hoc solution to these inaccuracies is introduced that produces errors smaller than 4% at material interfaces.
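The contrast with histogram tallies is easiest to see in 1-D with a plain Gaussian KDE (not the paper's MFP form): every sampled event contributes to the estimate at a tally point, rather than landing in a single bin. A sketch, with an analytically known answer for comparison:

```python
import math
import random

def kde(samples, x, h):
    # Gaussian-kernel density estimate at tally point x with bandwidth h.
    # Every sample ("event") contributes to the score at x, unlike a
    # histogram tally where each event falls in exactly one bin.
    norm = 1.0 / (len(samples) * h * math.sqrt(2.0 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples)

rng = random.Random(5)
samples = [rng.gauss(0.0, 1.0) for _ in range(4000)]

est = kde(samples, 0.0, 0.3)                # KDE estimate at x = 0
exact = 1.0 / math.sqrt(2.0 * math.pi)      # standard normal density at 0
print(round(est, 3), round(exact, 3))
```

The small residual difference is the usual KDE smoothing bias, which shrinks with the bandwidth; the variance at the tally point, as the abstract notes, does not depend on any bin width.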
International Nuclear Information System (INIS)
Over the years, various types of tritium-in-air monitors have been designed and developed based on different principles. Ionization chamber, proportional counter and scintillation detector systems are a few among them. A plastic scintillator based, flow-cell type system was developed for online monitoring of tritium in air. The scintillator mass inside the cell volume that maximizes the response of the detector system must be determined in order to achieve maximum efficiency. The present study aims to optimize the mass of the plastic scintillator film for the flow-cell based tritium monitoring instrument so that maximum efficiency is achieved. The Monte Carlo based EGSnrc code system has been used for this purpose
International Nuclear Information System (INIS)
We report on the development of a Monte Carlo application, based on the GEANT4 toolkit, for the characterization and optimization of electron beams for clinical applications produced by a laser-driven plasma source. The GEANT4 application is conceived so as to represent in the most general way the physical and geometrical features of a typical laser-driven accelerator. It is designed to provide standard dosimetric figures such as percentage dose depth curves, two-dimensional dose distributions and 3D dose profiles at different positions both inside and outside the interaction chamber. The application was validated by comparing its predictions to experimental measurements carried out on a real laser-driven accelerator. The work is aimed at optimizing the source, by using this novel application, for radiobiological studies and, in perspective, for medical applications. - Highlights: • Development of a Monte Carlo application based on GEANT4 toolkit. • Experimental measurements carried out with a laser-driven acceleration system. • Validation of Geant4 application comparing experimental data with the simulated ones. • Dosimetric characterization of the acceleration system
Energy Technology Data Exchange (ETDEWEB)
Lamia, D., E-mail: debora.lamia@ibfm.cnr.it [Institute of Molecular Bioimaging and Physiology IBFM CNR – LATO, Cefalù (Italy); Russo, G., E-mail: giorgio.russo@ibfm.cnr.it [Institute of Molecular Bioimaging and Physiology IBFM CNR – LATO, Cefalù (Italy); Casarino, C.; Gagliano, L.; Candiano, G.C. [Institute of Molecular Bioimaging and Physiology IBFM CNR – LATO, Cefalù (Italy); Labate, L. [Intense Laser Irradiation Laboratory (ILIL) – National Institute of Optics INO CNR, Pisa (Italy); National Institute for Nuclear Physics INFN, Pisa Section and Frascati National Laboratories LNF (Italy); Baffigi, F.; Fulgentini, L.; Giulietti, A.; Koester, P.; Palla, D. [Intense Laser Irradiation Laboratory (ILIL) – National Institute of Optics INO CNR, Pisa (Italy); Gizzi, L.A. [Intense Laser Irradiation Laboratory (ILIL) – National Institute of Optics INO CNR, Pisa (Italy); National Institute for Nuclear Physics INFN, Pisa Section and Frascati National Laboratories LNF (Italy); Gilardi, M.C. [Institute of Molecular Bioimaging and Physiology IBFM CNR, Segrate (Italy); University of Milano-Bicocca, Milano (Italy)
2015-06-21
We report on the development of a Monte Carlo application, based on the GEANT4 toolkit, for the characterization and optimization of electron beams for clinical applications produced by a laser-driven plasma source. The GEANT4 application is conceived so as to represent in the most general way the physical and geometrical features of a typical laser-driven accelerator. It is designed to provide standard dosimetric figures such as percentage dose depth curves, two-dimensional dose distributions and 3D dose profiles at different positions both inside and outside the interaction chamber. The application was validated by comparing its predictions to experimental measurements carried out on a real laser-driven accelerator. The work is aimed at optimizing the source, by using this novel application, for radiobiological studies and, in perspective, for medical applications. - Highlights: • Development of a Monte Carlo application based on GEANT4 toolkit. • Experimental measurements carried out with a laser-driven acceleration system. • Validation of Geant4 application comparing experimental data with the simulated ones. • Dosimetric characterization of the acceleration system.
A hybrid Monte Carlo and response matrix Monte Carlo method in criticality calculation
International Nuclear Information System (INIS)
Full core calculations are very useful and important in reactor physics analysis, especially in computing the full core power distributions, optimizing the refueling strategies and analyzing the depletion of fuels. To reduce the computing time and accelerate the convergence, a method named the Response Matrix Monte Carlo (RMMC) method, based on analog Monte Carlo simulation, was used to calculate fixed-source neutron transport problems in repeated structures. To make more accurate calculations, we put forward the RMMC method based on non-analog Monte Carlo simulation and investigate how to use the RMMC method in criticality calculations. Then a new hybrid RMMC and MC (RMMC+MC) method is put forward to solve criticality problems with combined repeated and flexible geometries. This new RMMC+MC method, having the advantages of both the MC method and the RMMC method, can not only increase the efficiency of the calculations but also simulate more complex geometries than repeated structures alone. Several 1-D numerical problems are constructed to test the new RMMC and RMMC+MC methods. The results show that the RMMC method and the RMMC+MC method can efficiently reduce the computing time and variance in the calculations. Finally, future research directions are mentioned and discussed at the end of this paper to make the RMMC method and the RMMC+MC method more powerful. (authors)
Monte Carlo simulation of primary reactions on HPLUS based on pluto event generator
International Nuclear Information System (INIS)
Hadron Physics Lanzhou Spectrometer (HPLUS) is designed for the study of hadron production and decay from nucleon-nucleon interactions in the GeV region. The current configuration of HPLUS and the particle identification methods for its three polar angle regions are discussed. The Pluto event generator is applied to simulate the primary reactions on HPLUS, addressing four issues, as follows: the agreement in the pp elastic scattering angular distribution between Pluto samples and experimental data; the acceptance of charged K mesons in the strangeness production channels for the forward region of HPLUS; the dependence of the maximum photon energy and the minimum vertex angle of two photons on the polar angle; and the influence of different reconstruction methods on the mass spectrum of excited nucleon states with large resonance widths. It is shown that the Pluto event generator satisfies the requirements of Monte Carlo simulation for HPLUS. (authors)
MONTE: An automated Monte Carlo based approach to nuclear magnetic resonance assignment of proteins
Energy Technology Data Exchange (ETDEWEB)
Hitchens, T. Kevin; Lukin, Jonathan A.; Zhan Yiping; McCallum, Scott A.; Rule, Gordon S. [Carnegie Mellon University, Department of Biological Sciences (United States)], E-mail: rule@andrew.cmu.edu
2003-01-15
A general-purpose Monte Carlo assignment program has been developed to aid in the assignment of NMR resonances from proteins. By virtue of its flexible data requirements the program is capable of obtaining assignments of both heavily deuterated and fully protonated proteins. A wide variety of source data, such as inter-residue scalar connectivity, inter-residue dipolar (NOE) connectivity, and residue-specific information, can be utilized in the assignment process. The program can also use known assignments from one form of a protein to facilitate the assignment of another form of the protein. This attribute is useful for assigning protein-ligand complexes when the assignments of the unliganded protein are known. The program can also be used as an interactive research tool to assist in the choice of additional experimental data to facilitate completion of assignments. The assignment of a deuterated 45 kDa homodimeric Glutathione-S-transferase illustrates the principal features of the program.
Web-Based Parallel Monte Carlo Simulation Platform for Financial Computation
Institute of Scientific and Technical Information of China (English)
(No author listed)
2006-01-01
Using Java, Java-enabled Web and object-oriented programming technologies, a framework is designed to quickly organize a multicomputer system on an Intranet to parallelize Monte Carlo simulations. The high-performance computing environment is embedded in the Web server so it can be accessed easily. Adaptive parallelism and an eager scheduling algorithm are used to realize load balancing, parallel processing and system fault tolerance. Independent-sequence pseudo-random number generator schemes are used to preserve the validity of the parallel simulation. With three kinds of stock option pricing models as test instances, near-ideal speedup and accurate pricing results were obtained on the test bed. Now, as a Web service, a high-performance financial derivative security-pricing platform is set up for training and study. The framework can also be used to develop other SPMD (single program, multiple data) applications. Robustness is still a major problem for further research.
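A minimal sequential sketch of the platform's numerical core: Monte Carlo pricing of a European call under Black-Scholes dynamics (a standard textbook model, not necessarily one of the paper's three), with each simulated "worker" using its own independently seeded pseudo-random sequence, as in the independent-sequence scheme.

```python
import math
import random
import statistics

def price_european_call(s0, k, r, sigma, t, n_paths, seed):
    # Plain Monte Carlo price of a European call under Black-Scholes
    # dynamics.  Each worker gets its own independently seeded generator,
    # mirroring the independent-sequence parallel RNG scheme.
    rng = random.Random(seed)
    payoffs = []
    for _ in range(n_paths):
        z = rng.gauss(0.0, 1.0)
        st = s0 * math.exp((r - 0.5 * sigma ** 2) * t + sigma * math.sqrt(t) * z)
        payoffs.append(max(st - k, 0.0))
    return math.exp(-r * t) * statistics.fmean(payoffs)

# Emulate four workers with independent streams and average their results.
partials = [price_european_call(100, 100, 0.05, 0.2, 1.0, 50000, seed)
            for seed in (11, 22, 33, 44)]
price = statistics.fmean(partials)
print(round(price, 2))
```

Because the partial estimates are statistically independent, averaging them is exactly what a master node would do after collecting worker results; the analytic Black-Scholes price for these parameters is about 10.45.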
GPU-based Monte Carlo dust radiative transfer scheme applied to AGN
Heymann, Frank
2012-01-01
A three dimensional parallel Monte Carlo (MC) dust radiative transfer code is presented. To overcome the huge computing time requirements of MC treatments, the computational power of vectorized hardware is used, utilizing either multi-core computer power or graphics processing units. The approach is a self-consistent way to solve the radiative transfer equation in arbitrary dust configurations. The code calculates the equilibrium temperatures of two populations of large grains and stochastically heated polycyclic aromatic hydrocarbons (PAH). Anisotropic scattering is treated applying the Henyey-Greenstein phase function. The spectral energy distribution (SED) of the object is derived at low spatial resolution by a photon counting procedure and at high spatial resolution by a vectorized ray-tracer. The latter allows computation of high signal-to-noise images of the objects at any frequencies and arbitrary viewing angles. We test the robustness of our approach against other radiative transfer codes. The SED and dust...
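Henyey-Greenstein scattering is typically sampled by inverting the phase-function CDF. A sketch (the standard inversion formula, not code from the paper), checked against the identity that the mean scattering cosine equals the anisotropy parameter g:

```python
import random

def sample_hg_cos_theta(g, rng):
    # Inverse-CDF sampling of the Henyey-Greenstein phase function;
    # returns the cosine of the scattering angle for anisotropy g.
    if abs(g) < 1e-6:
        return 2.0 * rng.random() - 1.0          # isotropic limit
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * rng.random())
    return (1.0 + g * g - s * s) / (2.0 * g)

rng = random.Random(9)
g = 0.6
n = 200000
# For Henyey-Greenstein, the expected value of cos(theta) equals g.
mean_cos = sum(sample_hg_cos_theta(g, rng) for _ in range(n)) / n
print(round(mean_cos, 2))
```

In a full transport code this cosine, together with a uniformly sampled azimuth, rotates the photon's direction vector at each scattering event.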
PC-based process distribution to solve iterative Monte Carlo simulations in physical dosimetry
International Nuclear Information System (INIS)
A distribution model to simulate physical dosimetry measurements with Monte Carlo (MC) techniques has been developed. This approach is indicated for solving simulations where there are continuous changes of the measurement conditions (and hence of the input parameters), such as a TPR curve or the estimation of the resolution limit of an optimal densitometer in the case of small field profiles. As a comparison, a high-resolution scan for narrow beams with no iterative process is presented. The model has been installed on networked PCs without any resident software. The only requirements for these PCs are a small temporary Linux partition on the hard disk and a network connection to our server PC. (orig.)
DEPLETED URANIUM TECHNICAL WORK
The Depleted Uranium Technical Work is designed to convey available information and knowledge about depleted uranium to EPA Remedial Project Managers, On-Scene Coordinators, contractors, and other Agency managers involved with the remediation of sites contaminated with this mater...
Evaluation of Monte Carlo-based calibrations of HPGe detectors for in situ gamma-ray spectrometry.
Boson, Jonas; Plamboeck, Agneta H; Ramebäck, Henrik; Agren, Göran; Johansson, Lennart
2009-11-01
The aim of this work was to evaluate the use of Monte Carlo-based calibrations for in situ gamma-ray spectrometry. We have performed in situ measurements at five different sites in Sweden using HPGe detectors to determine ground deposition activity levels of (137)Cs from the 1986 Chernobyl accident. Monte Carlo-calculated efficiency calibration factors were compared with corresponding values calculated using a more traditional semi-empirical method. In addition, results for the activity ground deposition were also compared with activity densities found in soil samples. In order to facilitate meaningful comparisons between the different types of results, the combined standard uncertainty of in situ measurements was assessed for both calibration methods. Good agreement, both between the two calibration methods, and between in situ measurements and soil samples, was found at all five sites. Uncertainties in in situ measurements for the given measurement conditions, about 20 years after the fallout occurred, were found to be in the range 15-20% (with a coverage factor k=1, i.e. with a confidence interval of about 68%). PMID:19604609
Shi, Ming; Saint-Martin, Jérôme; Bournel, Arnaud; Maher, Hassan; Renvoise, Michel; Dollfus, Philippe
2010-11-01
High-mobility III-V heterostructures are emerging and very promising materials likely to fulfil high-speed and low-power specifications for ambient intelligent applications. The main objective of this work is to theoretically explore the potentialities of MOSFET based on III-V materials with low bandgap and high electron mobility. First, the charge control is studied in III-V MOS structures using a Schrödinger-Poisson solver. Electronic transport in III-V devices is then analyzed using a particle Monte Carlo device simulator. The external access resistances used in the calculations are carefully calibrated on experimental results. The performance of different structures of nanoscale MOS transistor based on III-V materials is evaluated and the quasi-ballistic character of electron transport is compared to that in Si transistors of same gate length. PMID:21137856
Monte Carlo based water/medium stopping-power ratios for various ICRP and ICRU tissues
International Nuclear Information System (INIS)
Water/medium stopping-power ratios, sw,m, have been calculated for several ICRP and ICRU tissues, namely adipose tissue, brain, cortical bone, liver, lung (deflated and inflated) and spongiosa. The considered clinical beams were 6 and 18 MV x-rays and the field size was 10 x 10 cm2. Fluence distributions were scored at a depth of 10 cm using the Monte Carlo code PENELOPE. The collision stopping powers for the studied tissues were evaluated employing the formalism of ICRU Report 37 (1984 Stopping Powers for Electrons and Positrons (Bethesda, MD: ICRU)). The Bragg-Gray values of sw,m calculated with these ingredients range from about 0.98 (adipose tissue) to nearly 1.14 (cortical bone), displaying a rather small variation with beam quality. Excellent agreement, to within 0.1%, is found with stopping-power ratios reported by Siebers et al (2000a Phys. Med. Biol. 45 983-95) for cortical bone, inflated lung and spongiosa. In the case of cortical bone, sw,m changes approximately 2% when either ICRP or ICRU compositions are adopted, whereas the stopping-power ratios of lung, brain and adipose tissue are less sensitive to the selected composition. The mass density of lung also influences the calculated values of sw,m, reducing them by around 1% (6 MV) and 2% (18 MV) when going from deflated to inflated lung
Institute of Scientific and Technical Information of China (English)
ZHANG Jun; GUO Fan
2015-01-01
Tooth modification techniques are widely used in the gear industry to improve the meshing performance of gearings. However, few of the existing studies on tooth modification consider the influence of inevitable random errors on the modification effects. In order to investigate how uncertainty in the tooth modification amounts affects the dynamic behavior of a helical planetary gear, an analytical dynamic model including tooth modification parameters is proposed to carry out a deterministic analysis of the system dynamics. The dynamic meshing forces as well as the dynamic transmission errors of the sun-planet 1 gear pair with and without tooth modifications are computed and compared to show the effectiveness of tooth modifications in enhancing gear dynamics. Using the response surface method, a fitted regression model for the dynamic transmission error (DTE) fluctuations is established to quantify the relationship between modification amounts and DTE fluctuations. By shifting the inevitable random errors arising from the manufacturing and installation process onto the tooth modification amount variations, a statistical tooth modification model is developed, and a methodology combining Monte Carlo simulation and the response surface method is presented for uncertainty analysis of tooth modifications. The uncertainty analysis reveals that the system's dynamic behavior does not obey the normal distribution rule even though the design variables are normally distributed. In addition, a deterministic modification amount will not necessarily achieve an optimal result for both static and dynamic transmission error fluctuation reduction simultaneously.
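The Monte Carlo-plus-response-surface step can be sketched as follows: normally distributed modification amounts are pushed through a fitted quadratic response surface (coefficients invented here, not the paper's regression model), and the output distribution comes out visibly non-normal (mean above median) even though the inputs are normal.

```python
import random
import statistics

def dte_fluctuation(m1, m2):
    # Illustrative fitted response surface (quadratic) relating two
    # tooth-modification amounts to the DTE fluctuation; these
    # coefficients are made up for the sketch.
    return (5.0 + 0.8 * m1 - 1.1 * m2
            + 0.6 * m1 * m1 + 0.4 * m1 * m2 + 0.9 * m2 * m2)

rng = random.Random(2)
out = []
for _ in range(50000):
    # Normally distributed modification amounts (manufacturing scatter).
    m1 = rng.gauss(0.0, 1.0)
    m2 = rng.gauss(0.0, 1.0)
    out.append(dte_fluctuation(m1, m2))

mean = statistics.fmean(out)
med = statistics.median(out)
# The quadratic terms skew the output to the right: mean > median, so
# the response is not normally distributed despite normal inputs.
print(round(mean, 2), round(med, 2))
```

This right skew is the mechanism behind the abstract's observation that normally distributed design variables do not produce normally distributed dynamic behavior.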
Markov chain Monte Carlo based analysis of post-translationally modified VDAC gating kinetics.
Tewari, Shivendra G; Zhou, Yifan; Otto, Bradley J; Dash, Ranjan K; Kwok, Wai-Meng; Beard, Daniel A
2014-01-01
The voltage-dependent anion channel (VDAC) is the main conduit for permeation of solutes (including nucleotides and metabolites) of up to 5 kDa across the mitochondrial outer membrane (MOM). Recent studies suggest that VDAC activity is regulated via post-translational modifications (PTMs). Yet the nature and effect of these modifications is not understood. Herein, single channel currents of wild-type, nitrosated, and phosphorylated VDAC are analyzed using a generalized continuous-time Markov chain Monte Carlo (MCMC) method. This developed method describes three distinct conducting states (open, half-open, and closed) of VDAC activity. Lipid bilayer experiments are also performed to record single VDAC activity under un-phosphorylated and phosphorylated conditions, and are analyzed using the developed stochastic search method. Experimental data show significant alteration in VDAC gating kinetics and conductance as a result of PTMs. The effect of PTMs on VDAC kinetics is captured in the parameters associated with the identified Markov model. Stationary distributions of the Markov model suggest that nitrosation of VDAC not only decreased its conductance but also significantly locked VDAC in a closed state. On the other hand, stationary distributions of the model associated with un-phosphorylated and phosphorylated VDAC suggest a reversal in channel conformation from relatively closed state to an open state. Model analyses of the nitrosated data suggest that faster reaction of nitric oxide with Cys-127 thiol group might be responsible for the biphasic effect of nitric oxide on basal VDAC conductance. PMID:25628567
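The backbone of such an analysis is a three-state (open, half-open, closed) Markov model; its stationary distribution summarizes where the channel spends its time, which is how the abstract phrases the PTM effects. A sketch with invented transition probabilities (the paper estimates these from single-channel currents via MCMC):

```python
def stationary(P, iters=200):
    # Power-iterate a row-stochastic transition matrix to obtain its
    # stationary distribution.
    pi = [1.0 / len(P)] * len(P)
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]
    return pi

# Illustrative per-step transition probabilities for a three-state
# gating model; the numbers are made up for the sketch.
P = [
    [0.90, 0.08, 0.02],   # from open
    [0.10, 0.80, 0.10],   # from half-open
    [0.02, 0.08, 0.90],   # from closed
]
pi = stationary(P)
print([round(p, 3) for p in pi])
```

A PTM such as nitrosation would be represented by altered transition probabilities whose stationary distribution shifts weight toward the closed state, mirroring the paper's finding.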
Dosimetric investigation of proton therapy on CT-based patient data using Monte Carlo simulation
Chongsan, T.; Liamsuwan, T.; Tangboonduangjit, P.
2016-03-01
The aim of radiotherapy is to deliver high radiation dose to the tumor with low radiation dose to healthy tissues. Protons have Bragg peaks that give high radiation dose to the tumor but low exit dose or dose tail. Therefore, proton therapy is promising for treating deep-seated tumors and tumors located close to organs at risk. Moreover, the physical characteristics of protons are suitable for treating cancer in pediatric patients. This work developed a computational platform for calculating proton dose distributions using the Monte Carlo (MC) technique and the patient's anatomical data. The studied case is a pediatric patient with a primary brain tumor. PHITS will be used for the MC simulation; therefore, patient-specific CT-DICOM files were converted to the PHITS input. A MATLAB optimization program was developed to create a beam delivery control file for this study. The optimization program requires the proton beam data. All these data were calculated in this work using analytical formulas, and the calculation accuracy was tested before the beam delivery control file is used for the MC simulation. This study will be useful for researchers aiming to investigate proton dose distributions in patients but who do not have access to proton therapy machines.
Monte Carlo based unit commitment procedures for the deregulated market environment
International Nuclear Information System (INIS)
The unit commitment problem, originally conceived in the framework of short-term operation of vertically integrated utilities, needs a thorough re-examination in the light of the ongoing transition towards the open electricity market environment. In this work the problem is re-formulated to adapt unit commitment to the viewpoint of a generation company (GENCO) which is no longer bound to satisfy its load, but is willing to maximize its profits. Moreover, with reference to the present-day situation in many countries, the presence of a GENCO (the former monopolist) which is in the position of exerting market power requires a careful analysis to be carried out considering the different perspectives of a price-taker and of a price-maker GENCO. Unit commitment is thus shown to lead to a pair of distinct, yet slightly different problems. The unavoidable uncertainties in load profile and price behaviour over the time period of interest are also taken into account by means of a Monte Carlo simulation. Both the forecasted loads and prices are handled as random variables with a normal multivariate distribution. The correlation between the random input variables corresponding to successive hours of the day was considered by carrying out a statistical analysis of actual load and price data. The whole procedure was tested making use of reasonable approximations of the actual data of the thermal generation units available to some actual GENCOs operating in Italy. (author)
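The Monte Carlo core of such a procedure is easy to sketch for the price-taker case: correlated (load, price) pairs are drawn from a bivariate normal (via its 2-D Cholesky factor) and the profit of a committed quantity is averaged over scenarios. All figures below are illustrative, not Italian market data.

```python
import math
import random
import statistics

def correlated_pair(rng, mu1, s1, mu2, s2, rho):
    # Draw a (load, price) pair from a bivariate normal with correlation
    # rho, using the 2-D Cholesky factor of the covariance matrix.
    z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
    x1 = mu1 + s1 * z1
    x2 = mu2 + s2 * (rho * z1 + math.sqrt(1.0 - rho * rho) * z2)
    return x1, x2

def expected_profit(q_committed, marginal_cost, n=40000, seed=8):
    # One-hour profit of a price-taker GENCO: sell the committed quantity
    # at the uncertain market price, capped by the realized demand.
    # Means, spreads and correlation are invented for the sketch.
    rng = random.Random(seed)
    profits = []
    for _ in range(n):
        load, price = correlated_pair(rng, 1000.0, 80.0, 50.0, 6.0, 0.7)
        q = min(q_committed, load)          # cannot sell beyond demand
        profits.append(q * (price - marginal_cost))
    return statistics.fmean(profits)

ep = expected_profit(900.0, 35.0)
print(round(ep, 0))
```

Repeating the inner loop per hour with an hour-to-hour correlation structure (estimated from historical data, as in the paper) turns this kernel into the full profit-based unit commitment evaluation.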
Energy Technology Data Exchange (ETDEWEB)
Abdel-Khalik, Hany S. [North Carolina State Univ., Raleigh, NC (United States); Zhang, Qiong [North Carolina State Univ., Raleigh, NC (United States)
2014-05-20
The development of hybrid Monte Carlo-Deterministic (MC-DT) approaches over the past few decades has primarily focused on shielding and detection applications, where the analysis requires a small number of responses, i.e., at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate the few-group homogenized cross-sections. These models are typically expensive and need to be executed on the order of 10^{3} - 10^{5} times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.
Calculation of Credit Valuation Adjustment Based on Least Square Monte Carlo Methods
Directory of Open Access Journals (Sweden)
Qian Liu
2015-01-01
Full Text Available Counterparty credit risk has become one of the highest-profile risks facing participants in the financial markets. Despite this, relatively little is known about how counterparty credit risk is actually priced mathematically. We examine this issue using interest rate swaps. This widely traded financial product allows us to identify clearly the risk profiles of both institutions and their counterparties. Concretely, the Hull-White model for the interest rate and a mean-reverting model for the default intensity have proven to correspond well with reality and to be well suited for financial institutions. Besides, we find that the least-squares Monte Carlo method is quite efficient in the calculation of the credit valuation adjustment (CVA, for short), as it avoids the redundant step of generating inner scenarios and thus accelerates the convergence of the CVA estimators. In the second part, we propose a new method to calculate bilateral CVA that avoids the double counting found in the existing literature, where several copula functions are adopted to describe the dependence of the two first-to-default times.
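The core trick of least-squares Monte Carlo — replacing nested "inner" simulations with a regression of discounted future values on basis functions of the current state — can be sketched on a toy mean-reverting short-rate model. The dynamics, parameters, and payoff below are illustrative stand-ins, not the paper's Hull-White/intensity calibration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate paths of a toy mean-reverting Gaussian short rate
# (Hull-White-like in spirit; all parameters are made up for illustration)
n_paths, n_steps, dt = 20000, 50, 0.1
a, sigma, r0, theta = 0.1, 0.01, 0.02, 0.03
r = np.full(n_paths, r0)
paths = np.empty((n_steps + 1, n_paths))
paths[0] = r
for t in range(n_steps):
    r = r + a * (theta - r) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    paths[t + 1] = r

# Least-squares Monte Carlo: regress the pathwise discounted terminal payoff
# on polynomial basis functions of the state at an intermediate date, instead
# of launching an inner simulation from each outer scenario
t_obs = 25
payoff = np.maximum(paths[-1] - 0.025, 0.0)        # toy payoff paid at maturity
disc = np.exp(-paths[t_obs:-1].sum(axis=0) * dt)   # pathwise discount factor
X = np.vander(paths[t_obs], N=4)                   # cubic polynomial basis
beta, *_ = np.linalg.lstsq(X, disc * payoff, rcond=None)
exposure = np.maximum(X @ beta, 0.0)               # expected positive exposure
print(exposure.mean())
```

A CVA estimate would then combine such exposure profiles at several dates with default probabilities and a loss-given-default assumption.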
Markov chain Monte Carlo based analysis of post-translationally modified VDAC1 gating kinetics
Directory of Open Access Journals (Sweden)
Shivendra eTewari
2015-01-01
Full Text Available The voltage-dependent anion channel (VDAC) is the main conduit for permeation of solutes (including nucleotides and metabolites of up to 5 kDa) across the mitochondrial outer membrane (MOM). Recent studies suggest that VDAC activity is regulated via post-translational modifications (PTMs), yet the nature and effect of these modifications are not understood. Herein, single-channel currents of wild-type, nitrosated, and phosphorylated VDAC are analyzed using a generalized continuous-time Markov chain Monte Carlo (MCMC) method. The developed method identifies three distinct conducting states (open, half-open, and closed) of VDAC1 activity. Lipid bilayer experiments are also performed to record single VDAC activity under un-phosphorylated and phosphorylated conditions, and are analyzed using the developed stochastic search method. Experimental data show significant alteration in VDAC gating kinetics and conductance as a result of PTMs. The effect of PTMs on VDAC kinetics is captured in the parameters associated with the identified Markov model. Stationary distributions of the Markov model suggest that nitrosation of VDAC not only decreased its conductance but also significantly locked VDAC in a closed state. On the other hand, stationary distributions of the model associated with un-phosphorylated and phosphorylated VDAC suggest a reversal in channel conformation from a relatively closed state to an open state. Model analyses of the nitrosated data suggest that faster reaction of nitric oxide with the Cys-127 thiol group might be responsible for the biphasic effect of nitric oxide on basal VDAC conductance.
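The stationary-distribution analysis mentioned above reduces, for a discrete-time Markov model, to finding the left eigenvector of the transition matrix for eigenvalue 1. A minimal sketch with made-up (not fitted VDAC1) transition probabilities between the three conducting states:

```python
import numpy as np

# Hypothetical per-step transition matrix for a three-state channel
# (open, half-open, closed); each row sums to 1. The values are
# illustrative placeholders, not fitted VDAC1 parameters.
P = np.array([
    [0.90, 0.08, 0.02],   # from open
    [0.10, 0.80, 0.10],   # from half-open
    [0.05, 0.15, 0.80],   # from closed
])

# The stationary distribution pi satisfies pi P = pi, i.e. it is the left
# eigenvector of P for eigenvalue 1, normalized to sum to 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()
print(pi)  # long-run fraction of time spent in each state
```

Comparing such vectors fitted to un-modified versus modified channel data is what reveals, e.g., a shift of probability mass toward the closed state after nitrosation.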
Hrivnacova, I; Berejnov, V V; Brun, R; Carminati, F; Fassò, A; Futo, E; Gheata, A; Caballero, I G; Morsch, Andreas
2003-01-01
The concept of Virtual Monte Carlo (VMC) has been developed by the ALICE Software Project to allow different Monte Carlo simulation programs to run without changing the user code, such as the geometry definition, the detector response simulation or input and output formats. Recently, the VMC classes have been integrated into the ROOT framework, and the other relevant packages have been separated from the AliRoot framework and can be used individually by any other HEP project. The general concept of the VMC and its set of base classes provided in ROOT will be presented. Existing implementations for Geant3, Geant4 and FLUKA and simple examples of usage will be described.
Monte Carlo simulation of charge mediated magnetoelectricity in multiferroic bilayers
Energy Technology Data Exchange (ETDEWEB)
Ortiz-Álvarez, H.H. [Universidad de Caldas, Manizales (Colombia); Universidad Nacional de Colombia Sede Manizales, Manizales, Caldas (Colombia); Bedoya-Hincapié, C.M. [Universidad Nacional de Colombia Sede Manizales, Manizales, Caldas (Colombia); Universidad Santo Tomás, Bogotá (Colombia); Restrepo-Parra, E., E-mail: erestrepopa@unal.edu.co [Universidad Nacional de Colombia Sede Manizales, Manizales, Caldas (Colombia)
2014-12-01
Simulations of a bilayer ferroelectric/ferromagnetic multiferroic system were carried out based on the Monte Carlo method and Metropolis dynamics. A generic model was implemented with a Janssen-like Hamiltonian, taking into account magnetoelectric interactions due to charge accumulation at the interface. Two different magnetic exchange constants were considered for the accumulation and depletion states, and several screening lengths were also included. The simulations exhibit considerable magnetoelectric effects not only at low temperature but also at temperatures near the transition point of the ferromagnetic layer. The results match experimental observations for this kind of structure and mechanism.
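Metropolis dynamics of the kind used here can be illustrated on a stripped-down 2D Ising ferromagnet; the Janssen-like magnetoelectric and screening terms are omitted, and the coupling J and temperature T are illustrative values only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal Metropolis Monte Carlo for a 2D Ising ferromagnet with periodic
# boundaries -- a toy stand-in for the bilayer Hamiltonian in the paper.
L, J, T = 16, 1.0, 2.0
spins = rng.choice([-1, 1], size=(L, L))

def sweep(spins):
    """One Monte Carlo sweep: L*L single-spin Metropolis updates."""
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        # Energy change from flipping spin (i, j)
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * J * spins[i, j] * nn
        # Metropolis acceptance rule: always accept downhill moves,
        # accept uphill moves with probability exp(-dE/T)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1
    return spins

for _ in range(50):
    spins = sweep(spins)
print(abs(spins.mean()))  # magnetization per site; grows below T_c (~2.27 J)
```

The full bilayer model adds electric dipole variables, the interface magnetoelectric coupling, and state-dependent exchange constants, but the accept/reject core is the same.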
Li, Lanting; Wu, Runqing; Yan, Guoquan; Gao, Mingxia; Deng, Chunhui; Zhang, Xiangmin
2016-01-01
A novel method to isolate global N-termini using sulfhydryl tagging and gold-nanoparticle-based depletion (the STagAu method) is presented. The N-terminal and lysine amino groups were first completely dimethylated at the protein level, after which the proteins were digested. The newly generated internal peptides were tagged with sulfhydryl groups by Traut's reagent through their newly exposed N-terminal amines, in yields of 96%. The resulting sulfhydryl peptides were depleted through binding onto nano-gold composite materials. The Au-S bond is stable and widely used in materials science, and the nano-gold composite materials showed nearly complete depletion of the sulfhydryl peptides. The set of acetylated and dimethylated N-terminal peptides was analyzed by liquid chromatography-tandem mass spectrometry. This method was demonstrated to be an efficient N-terminus enrichment method because of the use of an effective derivatization reaction in combination with robust and relatively easy-to-implement Au-S coupling. We identified 632 N-terminal peptides from 386 proteins in a mouse liver sample. The STagAu approach is therefore a facile and efficient method for mass-spectrometry-based analysis of proteome N-termini or protease-generated cleavage products.
Saha, Sudip K; Guchhait, Asim; Pal, Amlan J
2014-03-01
We report the formation and characterization of hybrid pn-junction solar cells based on a layer of copper-diffused silver indium disulfide (AgInS2@Cu) nanoparticles and another layer of copper phthalocyanine (CuPc) molecules. With copper diffusion into the nanocrystals, their optical absorption, and hence the activity of the hybrid pn-junction solar cells, was extended towards the near-IR region. To decrease the particle-to-particle separation for improved carrier transport through the inorganic layer, we replaced the long-chain ligands of the copper-diffused nanocrystals in each monolayer with short ones. Under illumination, the hybrid pn-junctions yielded a higher short-circuit current than the combined contribution of Schottky junctions based on the individual components. A wider depletion region at the interface between the two active layers in the pn-junction device, as compared to that of the Schottky junctions, has been invoked to explain the results; capacitance-voltage characteristics under dark conditions supported this hypothesis. We also determined the width of the depletion region in the two layers separately, so that a pn-junction could be formed with a tailored thickness of the two materials. Such a "fully depleted" device resulted in improved photovoltaic performance, primarily due to a lowering of the internal resistance of the hybrid pn-junction solar cells.
Hennessy, Ricky; Lim, Sam L; Markey, Mia K; Tunnell, James W
2013-03-01
We present a Monte Carlo lookup table (MCLUT) based inverse model for extracting optical properties from tissue-simulating phantoms. This model is valid for small source-detector separations and highly absorbing tissues. The MCLUT is based entirely on Monte Carlo simulation, which was implemented on a graphics processing unit. We used tissue-simulating phantoms to determine the accuracy of the MCLUT inverse model. Our results show strong agreement between extracted and expected optical properties, with error rates of 1.74% for extracted reduced scattering values, 0.74% for extracted absorption values, and 2.42% for extracted hemoglobin concentration values. PMID:23455965
Bznuni, S A; Zhamkochyan, V M; Polanski, A; Sosnin, A N; Khudaverdyan, A H
2001-01-01
Parameters are studied for a subcritical cascade reactor driven by a proton accelerator and based on a primary lead-bismuth target, a main reactor constructed analogously to the molten salt breeder reactor (MSBR) core, and a booster reactor analogous to the core of the BN-350 liquid-metal-cooled fast breeder reactor (LMFBR). It is shown by means of Monte Carlo modeling that the reactor under study provides safe operation modes (k_{eff} = 0.94-0.98), is capable of effectively transmuting radioactive nuclear waste, and reduces the requirements on the accelerator beam current by an order of magnitude. Calculations show that at k_{eff} = 0.98 and a proton beam current I = 2.1 mA, the maximal neutron flux is 10^{14} cm^{-2}·s^{-1} in the thermal zone and 5.12·10^{15} cm^{-2}·s^{-1} in the fast booster zone.
Albin, T.; Koschny, D.; Soja, R.; Srama, R.; Poppe, B.
2016-01-01
The Canary Islands Long-Baseline Observatory (CILBO) is a double-station meteor camera system (Koschny et al., 2013; Koschny et al., 2014) that consists of 5 cameras. The two cameras considered in this report, ICC7 and ICC9, are installed on Tenerife and La Palma. They point to the same atmospheric volume between the two islands, allowing stereoscopic observation of meteors. Since its installation in 2011 and the start of operation in 2012, CILBO has detected over 15000 simultaneously observed meteors. Koschny and Diaz (2002) developed the Meteor Orbit and Trajectory Software (MOTS) to compute the trajectory of such meteors. The software uses the astrometric data from the detection software MetRec (Molau, 1998) and determines the trajectory in geodetic coordinates. This work presents a Monte Carlo based extension of the MOTS code to compute the orbital elements of meteors simultaneously detected by CILBO.
Ye, Hong-zhou; Jiang, Hong
2014-01-01
Materials with spin-crossover (SCO) properties hold great potential for information storage and have therefore attracted considerable attention in recent decades. The hysteresis phenomena accompanying SCO are attributed to intermolecular cooperativity, whose underlying mechanism may have a vibronic origin. In this work, a new vibronic Ising-like model, in which the elastic coupling between SCO centers is included through harmonic stretching and bending (SAB) interactions, is proposed and solved by Monte Carlo simulations. The key parameters in the new model, $k_1$ and $k_2$, corresponding to the elastic constants of the stretching and bending modes, respectively, can be directly related to the macroscopic bulk and shear moduli of the material under study, which can be readily estimated either from experimental measurements or from first-principles calculations. The convergence issue in the MC simulations of the thermal hysteresis has been carefully checked, and it was found that the stable hysteresis loop can...
Monte Carlo based approach to the LS–NaI 4πβ–γ anticoincidence extrapolation and uncertainty.
Fitzgerald, R
2016-03-01
The 4πβ–γ anticoincidence method is used for the primary standardization of β−, β+, electron capture (EC), α, and mixed-mode radionuclides. Efficiency extrapolation using one or more γ ray coincidence gates is typically carried out by a low-order polynomial fit. The approach presented here is to use a Geant4-based Monte Carlo simulation of the detector system to analyze the efficiency extrapolation. New code was developed to account for detector resolution, direct γ ray interaction with the PMT, and implementation of experimental β-decay shape factors. The simulation was tuned to 57Co and 60Co data, then tested with 99mTc data, and used in measurements of 18F, 129I, and 124I. The analysis method described here offers a more realistic activity value and uncertainty than those indicated from a least-squares fit alone. PMID:27358944
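The conventional low-order polynomial efficiency extrapolation that the Monte Carlo approach refines can be sketched as follows; the count-rate data points below are synthetic illustrations, not NIST measurements:

```python
import numpy as np

# Efficiency extrapolation: fit the measured count rate as a low-order
# polynomial in the inefficiency (1 - eff) and extrapolate to eff = 1
# to estimate the source activity. Data below are synthetic.
eff = np.array([0.70, 0.75, 0.80, 0.85, 0.90])          # beta efficiencies
rate = np.array([930.0, 947.0, 963.0, 981.0, 990.0])    # counts/s, synthetic

coeffs = np.polyfit(1.0 - eff, rate, deg=1)  # linear fit in (1 - eff)
activity = np.polyval(coeffs, 0.0)           # extrapolated rate at 100% efficiency
print(activity)
```

The paper's point is that a detector-level Monte Carlo simulation can reveal curvature and systematic effects that such a simple fit misses, yielding a more realistic activity value and uncertainty.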
Idiri, Z.; Mazrou, H.; Beddek, S.; Amokrane, A.; Azbouche, A.
2007-07-01
The present paper describes the optimization of the sample dimensions of a 241Am-Be neutron-source-based prompt gamma neutron activation analysis (PGNAA) setup devoted to in situ analysis of environmental water rejects. The optimal dimensions were achieved following extensive Monte Carlo neutron flux calculations using the MCNP5 computer code. A validation was performed for the proposed preliminary setup with measurements of the thermal neutron flux by the activation technique, using indium foils both bare and covered with a cadmium sheet. Sensitivity calculations were subsequently performed to simulate real in situ analysis conditions by determining thermal neutron flux perturbations in the samples as chlorine and organic matter concentrations change. The desired optimal sample dimensions were finally achieved once the established constraints regarding neutron damage to the semiconductor gamma detector, pulse pile-up, dead time, and radiation hazards were fully met.
Tseung, H Wan Chan; Beltran, C
2014-01-01
Purpose: Very fast Monte Carlo (MC) simulations of proton transport have been implemented recently on GPUs. However, these usually use simplified models for non-elastic (NE) proton-nucleus interactions. Our primary goal is to build a GPU-based proton transport MC with detailed modeling of elastic and NE collisions. Methods: Using CUDA, we implemented GPU kernels for these tasks: (1) Simulation of spots from our scanning nozzle configurations, (2) Proton propagation through CT geometry, considering nuclear elastic scattering, multiple scattering, and energy loss straggling, (3) Modeling of the intranuclear cascade stage of NE interactions, (4) Nuclear evaporation simulation, and (5) Statistical error estimates on the dose. To validate our MC, we performed: (1) Secondary particle yield calculations in NE collisions, (2) Dose calculations in homogeneous phantoms, (3) Re-calculations of head and neck plans from a commercial treatment planning system (TPS), and compared with Geant4.9.6p2/TOPAS. Results: Yields, en...
TH-A-19A-06: Site-Specific Comparison of Analytical and Monte Carlo Based Dose Calculations
Energy Technology Data Exchange (ETDEWEB)
Schuemann, J; Grassberger, C; Paganetti, H [Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States); Dowdell, S [Illawarra Shoalhaven Local Health District, Wollongong (Australia)
2014-06-15
Purpose: To investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict dose distributions, and to verify currently used uncertainty margins in proton therapy. Methods: Dose distributions predicted by an analytical pencil-beam algorithm were compared with Monte Carlo simulations (MCS) using TOPAS. 79 complete patient treatment plans were investigated for 7 disease sites (liver, prostate, breast, medulloblastoma spine and whole brain, lung, and head and neck). A total of 508 individual passively scattered treatment fields were analyzed for field-specific properties. Comparisons based on target coverage indices (EUD, D95, D90 and D50) were performed. Range differences were estimated for the distal position of the 90% dose level (R90) and the 50% dose level (R50). Two-dimensional distal dose surfaces were calculated, and the root mean square differences (RMSD), average range difference (ARD), and average distal dose degradation (ADD) — the distance between the distal positions of the 80% and 20% dose levels (R80-R20) — were analyzed. Results: We found target coverage indices calculated by TOPAS to generally be around 1-2% lower than predicted by the analytical algorithm. Differences in R90 predicted by TOPAS and the planning system can be larger than currently applied range margins in proton therapy for small regions distal to the target volume. We estimate new site-specific range margins (R90) for analytical dose calculations, considering total range uncertainties and uncertainties from dose calculation alone, based on the RMSD. Our results demonstrate that a reduction of currently used uncertainty margins is feasible for liver, prostate and whole brain fields even without introducing MC dose calculations. Conclusion: Analytical dose calculation algorithms predict dose distributions within clinical limits for the more homogeneous patient sites (liver, prostate, whole brain). However, we recommend
GPU-based Monte Carlo Dust Radiative Transfer Scheme Applied to Active Galactic Nuclei
Heymann, Frank; Siebenmorgen, Ralf
2012-05-01
A three-dimensional parallel Monte Carlo (MC) dust radiative transfer code is presented. To overcome the huge computing-time requirements of MC treatments, the computational power of vectorized hardware is used, utilizing either multi-core computers or graphics processing units. The approach is a self-consistent way to solve the radiative transfer equation in arbitrary dust configurations. The code calculates the equilibrium temperatures of two populations of large grains and stochastically heated polycyclic aromatic hydrocarbons. Anisotropic scattering is treated by applying the Henyey-Greenstein phase function. The spectral energy distribution (SED) of the object is derived at low spatial resolution by a photon-counting procedure and at high spatial resolution by a vectorized ray tracer. The latter allows computation of high signal-to-noise images of the objects at arbitrary frequencies and viewing angles. We test the robustness of our approach against other radiative transfer codes. The SED and dust temperatures of one- and two-dimensional benchmarks are reproduced at high precision. The parallelization capability of various MC algorithms is analyzed and included in our treatment. We utilize the Lucy algorithm for the optically thin case where the Poisson noise is high, the iteration-free Bjorkman & Wood method to reduce the calculation time, and the Fleck & Canfield diffusion approximation for extremely optically thick cells. The code is applied to model the appearance of active galactic nuclei (AGNs) at optical and infrared wavelengths. The AGN torus is clumpy and includes fluffy composite grains of various sizes made up of silicates and carbon. The dependence of the SED on the number of clumps in the torus and the viewing angle is studied. The appearance of the 10 μm silicate feature in absorption or emission is discussed. The SED of the radio-loud quasar 3C 249.1 is fit by the AGN model together with a cirrus component to account for the far-infrared emission.
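Sampling scattering angles from the Henyey-Greenstein phase function, as used for anisotropic scattering here, has a closed-form inverse CDF that MC transport codes typically exploit. A minimal sketch (g = 0.6 is an illustrative asymmetry parameter, not a fitted dust value):

```python
import numpy as np

rng = np.random.default_rng(7)

# Inverse-CDF sampling of the Henyey-Greenstein phase function.
# g is the scattering asymmetry parameter: g = 0 is isotropic,
# g -> 1 is strongly forward-peaked.
g = 0.6
u = rng.random(100_000)                          # uniform deviates in [0, 1)
s = (1.0 - g * g) / (1.0 - g + 2.0 * g * u)
cos_theta = (1.0 + g * g - s * s) / (2.0 * g)    # sampled scattering cosines

# Sanity check: the mean cosine of the HG distribution equals g
print(cos_theta.mean())
```

Each sampled cosine fixes the polar scattering angle of a photon packet; the azimuthal angle is then drawn uniformly on [0, 2π).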
International Nuclear Information System (INIS)
The paper considers the radiological and toxic impact of depleted uranium on human health. The radiological impact of depleted uranium is about 60% lower than that of natural uranium, owing to the reduced content of the short-lived isotopes uranium-234 and uranium-235 after enrichment. The formation of radioactive aerosols and their impact on humans are discussed. The use of depleted uranium weapons also has a chemical effect upon intake, due to a possible carcinogenic influence on the kidney. Uranium-236 is detected in depleted uranium material. Beta radiation arises in the uranium-238 decay chain; this effect is practically the same for depleted and natural uranium. The toxicity of depleted uranium as a heavy metal makes a considerable contribution to the impact on population health. The paper analyzes the risks related to the use of depleted uranium weapons, against which there is international opposition: a resolution on the effects of the use of armaments and ammunition containing depleted uranium has been supported five times by the United Nations (the USA, United Kingdom, France and Israel did not support it), and a decision to ban depleted uranium weapons was supported by the European Parliament.
Skrzyński, Witold
2014-11-01
The aim of this work was to create a model of a wide-bore Siemens Somatom Sensation Open CT scanner for use with GMCTdospp, an EGSnrc-based software tool dedicated to Monte Carlo calculations of dose in CT examinations. The method was based on matching the spectrum and filtration to the half-value layer (HVL) and the dose profile, and thus was similar to the method of Turner et al. (Med. Phys. 36, pp. 2154-2164). Input data on unfiltered beam spectra were taken from two sources: the TASMIP model and IPEM Report 78. Two sources of HVL data were also used, namely measurements and documentation. The dose profile along the fan beam was measured with Gafchromic RTQA-1010 (QA+) film. A two-component model of filtration was assumed: a bow-tie filter made of aluminum with 0.5 mm thickness on the central axis, and a flat filter made of one of four materials: aluminum, graphite, lead, or titanium. Good agreement between calculations and measurements was obtained for the models based on the measured HVL values. Doses calculated with GMCTdospp differed from the doses measured with a pencil ion chamber placed in a PMMA phantom by less than 5%, and the root mean square difference for four tube potentials and three positions in the phantom did not exceed 2.5%. The differences for models based on HVL values from documentation exceeded 10%. Models based on TASMIP spectra and IPEM78 spectra performed equally well. PMID:25028213
International Nuclear Information System (INIS)
Liquid Salt Cooled Reactors (LSCRs) are high-temperature reactors, cooled by liquid salt, with TRISO-particle-based fuel in solid form (stationary fuel elements or circulating fuel pebbles); this paper focuses on the former. In either case, due to the double heterogeneity, core physics analyses require different considerations and more complex approaches than LWR core physics calculations. Additional challenges appear when using the multi-group approach. In this paper we examine the use of SCALE6.1.1. Double heterogeneity may be accounted for through the Dancoff factor; however, SCALE6.1.1 does not provide an automated method to calculate Dancoff factors for fuel planks with TRISO fuel particles. Therefore, depletion with continuous-energy Monte Carlo transport (CE depletion) in the SCALE6.2 beta was used to generate MC Dancoff factors for multi-group calculations. MCDancoff-corrected multi-group depletion agrees with the results of CE depletion within ±100 pcm, and within ±2σ. Producing MCDancoff factors for multi-group (MG) depletion calculations is necessary for LSCR analysis because CE depletion runtime and memory requirements are prohibitive for routine use. MG depletion with MCDancoff provides significantly shorter runtimes and lower memory requirements while providing results of acceptable accuracy. (author)
Monte Carlo-based diode design for correction-less small field dosimetry
International Nuclear Information System (INIS)
Due to their small collecting volume, diodes are commonly used in small field dosimetry. However, the relative sensitivity of a diode increases with decreasing small field size. Conversely, small air gaps have been shown to cause a significant decrease in the sensitivity of a detector as the field size is decreased. Therefore, this study uses Monte Carlo simulations to look at introducing air upstream to diodes such that they measure with a constant sensitivity across all field sizes in small field dosimetry. Varying thicknesses of air were introduced onto the upstream end of two commercial diodes (PTW 60016 photon diode and PTW 60017 electron diode), as well as a theoretical unenclosed silicon chip using field sizes as small as 5 mm × 5 mm. The metric D_{w,Q}/D_{Det,Q} used in this study represents the ratio of the dose to a point of water to the dose to the diode active volume, for a particular field size and location. The optimal thickness of air required to provide a constant sensitivity across all small field sizes was found by plotting D_{w,Q}/D_{Det,Q} as a function of introduced air gap size for various field sizes, and finding the intersection point of these plots. That is, the point at which D_{w,Q}/D_{Det,Q} was constant for all field sizes was found. The optimal thickness of air was calculated to be 3.3, 1.15 and 0.10 mm for the photon diode, electron diode and unenclosed silicon chip, respectively. The variation in these results was due to the different design of each detector. When calculated with the new diode design incorporating the upstream air gap, k_{Q_{clin},Q_{msr}}^{f_{clin},f_{msr}} was equal to unity to within statistical uncertainty (0.5%) for all three diodes. Cross-axis profile measurements were also improved with the new detector design. The upstream air gap could be implanted on the commercial diodes via a cap consisting of the air cavity surrounded by water equivalent material. The results for the unclosed silicon chip show that an ideal small field dosimetry diode could be created by using a silicon chip
Monte Carlo-based diode design for correction-less small field dosimetry
Charles, P. H.; Crowe, S. B.; Kairn, T.; Knight, R. T.; Hill, B.; Kenny, J.; Langton, C. M.; Trapp, J. V.
2013-07-01
Due to their small collecting volume, diodes are commonly used in small field dosimetry. However, the relative sensitivity of a diode increases with decreasing small field size. Conversely, small air gaps have been shown to cause a significant decrease in the sensitivity of a detector as the field size is decreased. Therefore, this study uses Monte Carlo simulations to look at introducing air upstream to diodes such that they measure with a constant sensitivity across all field sizes in small field dosimetry. Varying thicknesses of air were introduced onto the upstream end of two commercial diodes (PTW 60016 photon diode and PTW 60017 electron diode), as well as a theoretical unenclosed silicon chip using field sizes as small as 5 mm × 5 mm. The metric \\frac{{D_{w,Q} }}{{D_{Det,Q} }} used in this study represents the ratio of the dose to a point of water to the dose to the diode active volume, for a particular field size and location. The optimal thickness of air required to provide a constant sensitivity across all small field sizes was found by plotting \\frac{{D_{w,Q} }}{{D_{Det,Q} }} as a function of introduced air gap size for various field sizes, and finding the intersection point of these plots. That is, the point at which \\frac{{D_{w,Q} }}{{D_{Det,Q} }} was constant for all field sizes was found. The optimal thickness of air was calculated to be 3.3, 1.15 and 0.10 mm for the photon diode, electron diode and unenclosed silicon chip, respectively. The variation in these results was due to the different design of each detector. When calculated with the new diode design incorporating the upstream air gap, k_{Q_{clin} ,Q_{msr} }^{f_{clin} ,f_{msr} } was equal to unity to within statistical uncertainty (0.5%) for all three diodes. Cross-axis profile measurements were also improved with the new detector design. The upstream air gap could be implanted on the commercial diodes via a cap consisting of the air cavity surrounded by water equivalent material. The
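The intersection construction described above — fitting the dose ratio versus air-gap thickness for each field size and locating the gap where the curves cross — can be sketched with hypothetical linear fits. The slopes and intercepts below are made up; they merely reproduce a 3.3 mm crossing for illustration:

```python
# Sketch of the intersection method: assume the ratio D_w,Q / D_Det,Q is
# (locally) linear in the introduced air-gap thickness t for each field
# size, ratio(t) = m*t + b. The optimal gap is where the lines for two
# different field sizes cross, i.e. where sensitivity is field-size
# independent. Coefficients are hypothetical, not fitted simulation data.

def intersection(m1, b1, m2, b2):
    """Air-gap thickness and ratio value where two ratio-vs-gap lines cross."""
    t = (b2 - b1) / (m1 - m2)
    return t, m1 * t + b1

# Hypothetical fits for a small (5x5 mm) and a large reference field
t_opt, ratio_opt = intersection(0.020, 0.98, 0.005, 1.0295)
print(t_opt)  # optimal air-gap thickness in mm
```

With more than two field sizes, one would verify that all fitted lines pass (within uncertainty) through the same point before adopting the gap thickness.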
Monte Carlo based beam model using a photon MLC for modulated electron radiotherapy
International Nuclear Information System (INIS)
Purpose: Modulated electron radiotherapy (MERT) promises sparing of organs at risk for certain tumor sites. Any implementation of MERT treatment planning requires an accurate beam model. The aim of this work is the development of a beam model which reconstructs electron fields shaped using the Millennium photon multileaf collimator (MLC) (Varian Medical Systems, Inc., Palo Alto, CA) for a Varian linear accelerator (linac). Methods: This beam model is divided into an analytical part (two photon and two electron sources) and a Monte Carlo (MC) transport through the MLC. For dose calculation purposes the beam model has been coupled with a macro MC dose calculation algorithm. The commissioning process requires a set of measurements and precalculated MC input. The beam model has been commissioned at a source to surface distance of 70 cm for a Clinac 23EX (Varian Medical Systems, Inc., Palo Alto, CA) and a TrueBeam linac (Varian Medical Systems, Inc., Palo Alto, CA). For validation purposes, measured and calculated depth dose curves and dose profiles are compared for four different MLC shaped electron fields and all available energies. Furthermore, a measured two-dimensional dose distribution for patched segments consisting of three 18 MeV segments, three 12 MeV segments, and a 9 MeV segment is compared with corresponding dose calculations. Finally, measured and calculated two-dimensional dose distributions are compared for a circular segment encompassed with a C-shaped segment. Results: For 15 × 34, 5 × 5, and 2 × 2 cm2 fields differences between water phantom measurements and calculations using the beam model coupled with the macro MC dose calculation algorithm are generally within 2% of the maximal dose value or 2 mm distance to agreement (DTA) for all electron beam energies. For a more complex MLC pattern, differences between measurements and calculations are generally within 3% of the maximal dose value or 3 mm DTA for all electron beam energies. For the two
An OpenCL-based Monte Carlo dose calculation engine (oclMC) for coupled photon-electron transport
Tian, Zhen; Folkerts, Michael; Qin, Nan; Jiang, Steve B; Jia, Xun
2015-01-01
The Monte Carlo (MC) method is recognized as the most accurate dose calculation method for radiotherapy. However, its extremely long computation time impedes clinical application. Recently, much effort has been devoted to fast MC dose calculation on GPUs. Nonetheless, most GPU-based MC dose engines were developed in the NVidia CUDA environment. This limits code portability to other platforms, hindering the introduction of GPU-based MC simulations into clinical practice. The objective of this paper is to develop a fast cross-platform MC dose engine, oclMC, in the OpenCL environment for external beam photon and electron radiotherapy in the MeV energy range. Coupled photon-electron MC simulation was implemented with analogue simulation for photon transport and a Class II condensed history scheme for electron transport. To test the accuracy and efficiency of our dose engine oclMC, we compared dose calculation results of oclMC and gDPM, our previously developed GPU-based MC code, for a 15 MeV electron ...
Directory of Open Access Journals (Sweden)
Tuija Kangasmaa
2012-01-01
Full Text Available Simultaneous Tl-201/Tc-99m dual isotope myocardial perfusion SPECT is seriously hampered by down-scatter from Tc-99m into the Tl-201 energy window. This paper presents and optimises the ordered-subsets expectation-maximisation (OS-EM) based reconstruction algorithm, which corrects the down-scatter using an efficient Monte Carlo (MC) simulator. The algorithm starts by first reconstructing the Tc-99m image with attenuation, collimator response, and MC-based scatter correction. The reconstructed Tc-99m image is then used as an input for an efficient MC-based down-scatter simulation of Tc-99m photons into the Tl-201 window. This down-scatter estimate is finally used in the Tl-201 reconstruction to correct the crosstalk between the two isotopes. The mathematical 4D NCAT phantom and physical cardiac phantoms were used to optimise the number of OS-EM iterations where the scatter estimate is updated and the number of MC simulated photons. The results showed that two scatter update iterations and 10⁵ simulated photons are enough for the Tc-99m and Tl-201 reconstructions, whereas 10⁶ simulated photons are needed to generate good quality down-scatter estimates. With these parameters, the entire Tl-201/Tc-99m dual isotope reconstruction can be accomplished in less than 3 minutes.
Kangasmaa, Tuija; Kuikka, Jyrki; Sohlberg, Antti
2012-01-01
Simultaneous Tl-201/Tc-99m dual isotope myocardial perfusion SPECT is seriously hampered by down-scatter from Tc-99m into the Tl-201 energy window. This paper presents and optimises the ordered-subsets expectation-maximisation (OS-EM) based reconstruction algorithm, which corrects the down-scatter using an efficient Monte Carlo (MC) simulator. The algorithm starts by first reconstructing the Tc-99m image with attenuation, collimator response, and MC-based scatter correction. The reconstructed Tc-99m image is then used as an input for an efficient MC-based down-scatter simulation of Tc-99m photons into the Tl-201 window. This down-scatter estimate is finally used in the Tl-201 reconstruction to correct the crosstalk between the two isotopes. The mathematical 4D NCAT phantom and physical cardiac phantoms were used to optimise the number of OS-EM iterations where the scatter estimate is updated and the number of MC simulated photons. The results showed that two scatter update iterations and 10⁵ simulated photons are enough for the Tc-99m and Tl-201 reconstructions, whereas 10⁶ simulated photons are needed to generate good quality down-scatter estimates. With these parameters, the entire Tl-201/Tc-99m dual isotope reconstruction can be accomplished in less than 3 minutes.
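The key step here (feeding an MC-simulated down-scatter estimate into the reconstruction as an additive term in the forward model) can be sketched with a toy 1D EM reconstruction. The system matrix, activity distribution, and scatter level below are synthetic stand-ins, not a SPECT model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy system: random "projector" A, true activity x_true, and an additive
# down-scatter term. In the paper's scheme the scatter term would come from
# an MC simulation of Tc-99m photons landing in the Tl-201 window.
n_pix, n_bins = 16, 24
A = rng.uniform(0.1, 1.0, size=(n_bins, n_pix))
x_true = rng.uniform(0.5, 2.0, size=n_pix)
scatter = 0.2 * np.ones(n_bins)            # MC-estimated down-scatter
y = A @ x_true + scatter                   # noiseless projection data

def mlem_with_scatter(y, A, scatter, n_iter=1000):
    """MLEM update with an additive scatter estimate in the forward model."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                   # sensitivity image, A^T 1
    for _ in range(n_iter):
        fwd = A @ x + scatter              # forward projection plus scatter
        x *= (A.T @ (y / fwd)) / sens      # multiplicative EM update
    return x

x_hat = mlem_with_scatter(y, A, scatter)
print(np.max(np.abs(x_hat - x_true)))      # small on noiseless, consistent data
```

Because the scatter estimate sits inside the forward projection, the multiplicative update stays non-negative and the correction does not subtract counts from the data.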
Pan, J.; Durand, M. T.; Vanderjagt, B. J.
2015-12-01
The Markov chain Monte Carlo (MCMC) method is a retrieval algorithm based on Bayes' rule: it starts from an initial state of snow/soil parameters and updates it through a series of new states by comparing the posterior probability of simulated snow microwave signals before and after each random-walk step. It thereby approximates the probability of the snow/soil parameters conditioned on the measured microwave brightness temperature (TB) signals at different bands. Although this method can retrieve all snow parameters, including depth, density, snow grain size and temperature, at the same time, it still needs prior information on these parameters for the posterior probability calculation. How the priors influence the snow water equivalent (SWE) retrieval is a major concern. Therefore, this paper first carries out a sensitivity test to study how accurate the snow emission models, and how explicit the snow priors, need to be to keep the SWE error within a certain bound. Synthetic TB simulated from the measured snow properties plus a 2-K observation error is used for this purpose. The aim is to provide guidance on applying MCMC under different circumstances. The method is then applied to snowpits at different sites, including Sodankylä, Finland; Churchill, Canada; and Colorado, USA, using TB measured by ground-based radiometers at different bands. Based on the previous work, the error in these practical cases is studied, and the error sources are separated and quantified.
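A minimal sketch of such a random-walk MCMC retrieval, with a one-parameter stand-in for the snow emission model and the 2-K observation error mentioned above. The forward model, prior, and all numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical forward model: brightness temperature TB (K) vs snow depth (m).
def forward_tb(depth):
    return 260.0 - 40.0 * np.exp(-depth / 0.5)

true_depth = 0.8
sigma_obs = 2.0                      # the 2-K observation error from the text
tb_obs = forward_tb(true_depth)      # noiseless synthetic observation

prior_mean, prior_sd = 1.0, 0.5      # hypothetical Gaussian prior on depth

def log_post(depth):
    if depth <= 0:
        return -np.inf
    ll = -0.5 * ((tb_obs - forward_tb(depth)) / sigma_obs) ** 2  # likelihood
    lp = -0.5 * ((depth - prior_mean) / prior_sd) ** 2           # prior
    return ll + lp

# Random-walk Metropolis: propose a step, accept with prob min(1, ratio).
depth, samples = prior_mean, []
lp_cur = log_post(depth)
for _ in range(20000):
    prop = depth + 0.1 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp_cur:
        depth, lp_cur = prop, lp_prop
    samples.append(depth)

post = np.array(samples[5000:])      # discard burn-in
print(post.mean(), post.std())       # posterior mean pulled slightly to prior
```

The posterior spread here reflects both the observation error and the prior width, which is exactly the interplay the sensitivity test in the abstract probes.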
Nonlinear lower hybrid wave depletion
International Nuclear Information System (INIS)
Two numerical ray tracing codes with focusing are used to compute lower hybrid daughter wave amplification by quasi-mode parametric decay. The first code, LHPUMP, provides a numerical pump model on a grid. This model is used by a second code, LHFQM, which computes daughter wave amplification inside the pump extent and follows the rays until their energy is absorbed by the plasma. An analytic model is then used to estimate pump depletion based on the numerical results. Results for PLT indicate strong pump depletion at the plasma edge at high density operation for the 800 MHz wave frequency, but weak depletion for the 2.45 GHz experiment. This is proposed to be the mechanism responsible for the high density limit for current drive as well as for the difficulty in heating ions
Directory of Open Access Journals (Sweden)
Beverley A. RAYMOND†
2010-08-01
Full Text Available Critical load (CL) and exceedance maps of sulphur (S) and nitrogen (N) for upland soils were generated for the Georgia Basin, British Columbia, Canada, by synthesizing available data layers for atmospheric deposition, climate (precipitation, temperature), soil, site classification and elevation. Critical loads were determined using the steady-state mass-balance model and a criterion based on zero tolerance for further base-cation depletion. The resulting CL values were generally lowest on ridge tops and increased towards valleys. Critical load exceedance ranged from 13% of the Georgia Basin under wet deposition to 32% under modelled total (wet and dry) deposition. Moreover, exceedance increased by an additional 10% when considering upland areas only for the Georgia Basin. Significant portions of the Georgia Basin are predicted to experience exceedance-enhanced base-cation depletion rates above 200 eq ha⁻¹ y⁻¹ and turn-over times to a final new base saturation state within 200 years under continued atmospheric S and N deposition.
Reply to "Comment on 'A study on tetrahedron-based inhomogeneous Monte-Carlo optical simulation'".
Shen, Haiou; Wang, Ge
2011-04-19
We compare the accuracy of TIM-OS and MMCM in response to the recent analysis made by Fang [Biomed. Opt. Express 2, 1258 (2011)]. Our results show that the tetrahedron-based energy deposition algorithm used in TIM-OS is more accurate than the node-based energy deposition algorithm used in MMCM.
International Nuclear Information System (INIS)
In Japan, depleted uranium ammunition is regarded as nuclear weaponry and meets with fierce opposition. The fact that US Marines mistakenly fired bullets containing depleted uranium on an island off Okinawa during training exercises in December 1995 and January 1996 also contributes to this opposition. The overall situation in this area in Japan is outlined. (P.A.)
Simakov, S P; Moellendorff, U V; Schmuck, I; Konobeev, A Y; Korovin, Y A; Pereslavtsev, P
2002-01-01
A newly developed computational procedure is presented for the generation of d-Li source neutrons in Monte Carlo transport calculations based on the use of evaluated double-differential d + ⁶,⁷Li cross section data. A new code, McDeLicious, was developed as an extension to MCNP4C to enable neutronics design calculations for the d-Li based IFMIF neutron source making use of the evaluated deuteron data files. The McDeLicious code was checked against available experimental data and calculation results of McDeLi and MCNPX, both of which use built-in analytical models for the Li(d, xn) reaction. It is shown that McDeLicious, along with the newly evaluated d + ⁶,⁷Li data, is superior in predicting the characteristics of the d-Li neutron source. As this approach makes use of tabulated Li(d, xn) cross sections, the accuracy of the IFMIF d-Li neutron source term can be steadily improved with more advanced and validated data.
International Nuclear Information System (INIS)
During construction of the whole body counter (WBC) at the Children's Nutrition Research Center (CNRC), efficiency calibration was needed to translate acquired counts of 40K to actual grams of potassium for measurement of total body potassium (TBK) in a diverse subject population. The MCNP Monte Carlo n-particle simulation program was used to describe the WBC (54 detectors plus shielding), test individual detector counting response, and create a series of virtual anthropomorphic phantoms based on national reference anthropometric data. Each phantom included an outer layer of adipose tissue and an inner core of lean tissue. Phantoms were designed for both genders representing ages 3.5 to 18.5 years with body sizes from the 5th to the 95th percentile based on body weight. In addition, a spherical surface source surrounding the WBC was modeled in order to measure the effects of subject mass on room background interference. Individual detector measurements showed good agreement with the MCNP model. The background source model came close to agreement with empirical measurements, but showed a trend deviating from unity with increasing subject size. Results from the MCNP simulation of the CNRC WBC agreed well with empirical measurements using BOMAB phantoms. Individual detector efficiency corrections were used to improve the accuracy of the model. Nonlinear multiple regression efficiency calibration equations were derived for each gender. Room background correction is critical in improving the accuracy of the WBC calibration.
Directory of Open Access Journals (Sweden)
Qinming Liu
2012-01-01
Full Text Available Health management of a complex nonlinear system is becoming more important for condition-based maintenance and for minimizing the related risks and costs over its entire life. However, a complex nonlinear system often operates under dynamic operational and environmental conditions and is subject to high levels of uncertainty and unpredictability, so effective methods for online health management are still few. This paper combines the hidden semi-Markov model (HSMM) with sequential Monte Carlo (SMC) methods. HSMM is used to obtain the transition probabilities among health states and the health state durations of a complex nonlinear system, while the SMC method is adopted to decrease the computational and space complexity and to describe the probability relationships between multiple health states and the monitored observations of a complex nonlinear system. This paper proposes a novel method of multistep-ahead health recognition based on the joint probability distribution for health management of a complex nonlinear system. Moreover, a new online health prognostic method is developed. A real case study is used to demonstrate the implementation and potential applications of the proposed methods for online health management of complex nonlinear systems.
Shypailo, R J; Ellis, K J
2011-05-21
During construction of the whole body counter (WBC) at the Children's Nutrition Research Center (CNRC), efficiency calibration was needed to translate acquired counts of (40)K to actual grams of potassium for measurement of total body potassium (TBK) in a diverse subject population. The MCNP Monte Carlo n-particle simulation program was used to describe the WBC (54 detectors plus shielding), test individual detector counting response, and create a series of virtual anthropomorphic phantoms based on national reference anthropometric data. Each phantom included an outer layer of adipose tissue and an inner core of lean tissue. Phantoms were designed for both genders representing ages 3.5 to 18.5 years with body sizes from the 5th to the 95th percentile based on body weight. In addition, a spherical surface source surrounding the WBC was modeled in order to measure the effects of subject mass on room background interference. Individual detector measurements showed good agreement with the MCNP model. The background source model came close to agreement with empirical measurements, but showed a trend deviating from unity with increasing subject size. Results from the MCNP simulation of the CNRC WBC agreed well with empirical measurements using BOMAB phantoms. Individual detector efficiency corrections were used to improve the accuracy of the model. Nonlinear multiple regression efficiency calibration equations were derived for each gender. Room background correction is critical in improving the accuracy of the WBC calibration.
Management of depleted uranium
International Nuclear Information System (INIS)
Large stocks of depleted uranium have arisen as a result of enrichment operations, especially in the United States and the Russian Federation. Countries with depleted uranium stocks are interested in assessing strategies for the use and management of depleted uranium. The choice of strategy depends on several factors, including government and business policy, alternative uses available, the economic value of the material, regulatory aspects and disposal options, and international market developments in the nuclear fuel cycle. This report presents the results of a depleted uranium study conducted by an expert group organised jointly by the OECD Nuclear Energy Agency and the International Atomic Energy Agency. It contains information on current inventories of depleted uranium, potential future arisings, long term management alternatives, peaceful use options and country programmes. In addition, it explores ideas for international collaboration and identifies key issues for governments and policy makers to consider. (authors)
International Nuclear Information System (INIS)
Purpose: The TestDose platform was developed to generate scintigraphic imaging protocols and associated dosimetry by Monte Carlo modeling. TestDose is part of a broader project (www.dositest.com) whose aim is to identify the biases induced by different clinical dosimetry protocols. Methods: The TestDose software handles the whole pipeline from virtual patient generation to the resulting planar and SPECT images and dosimetry calculations. The originality of the approach lies in the implementation of functional segmentation for the anthropomorphic model representing a virtual patient. Two anthropomorphic models are currently available: 4D XCAT and ICRP 110. A pharmacokinetic model describes the biodistribution of a given radiopharmaceutical in each defined compartment at various time-points. The Monte Carlo simulation toolkit GATE offers the possibility to accurately simulate scintigraphic images and absorbed doses in volumes of interest. The TestDose platform relies on GATE to reproduce precisely any imaging protocol and to provide reference dosimetry. For image generation, TestDose stores the user's imaging requirements and automatically generates command files used as input for GATE. Each compartment is simulated only once and the resulting output is weighted using pharmacokinetic data. Resulting compartment projections are aggregated to obtain the final image. For dosimetry computation, emission data are stored in the platform database and the relevant GATE input files are generated for the virtual patient model and associated pharmacokinetics. Results: Two sample software runs are given to demonstrate the potential of TestDose. A clinical imaging protocol for Octreoscan™ therapeutic treatment was implemented using the 4D XCAT model. Whole-body “step and shoot” acquisitions at different times postinjection and one SPECT acquisition were generated within reasonable computation times. Based on the same Octreoscan™ kinetics, a dosimetry
Energy Technology Data Exchange (ETDEWEB)
Garcia, Marie-Paule, E-mail: marie-paule.garcia@univ-brest.fr; Villoing, Daphnée [UMR 1037 INSERM/UPS, CRCT, 133 Route de Narbonne, 31062 Toulouse (France); McKay, Erin [St George Hospital, Gray Street, Kogarah, New South Wales 2217 (Australia); Ferrer, Ludovic [ICO René Gauducheau, Boulevard Jacques Monod, St Herblain 44805 (France); Cremonesi, Marta; Botta, Francesca; Ferrari, Mahila [European Institute of Oncology, Via Ripamonti 435, Milano 20141 (Italy); Bardiès, Manuel [UMR 1037 INSERM/UPS, CRCT, 133 Route de Narbonne, Toulouse 31062 (France)
2015-12-15
Purpose: The TestDose platform was developed to generate scintigraphic imaging protocols and associated dosimetry by Monte Carlo modeling. TestDose is part of a broader project (www.dositest.com) whose aim is to identify the biases induced by different clinical dosimetry protocols. Methods: The TestDose software handles the whole pipeline from virtual patient generation to the resulting planar and SPECT images and dosimetry calculations. The originality of the approach lies in the implementation of functional segmentation for the anthropomorphic model representing a virtual patient. Two anthropomorphic models are currently available: 4D XCAT and ICRP 110. A pharmacokinetic model describes the biodistribution of a given radiopharmaceutical in each defined compartment at various time-points. The Monte Carlo simulation toolkit GATE offers the possibility to accurately simulate scintigraphic images and absorbed doses in volumes of interest. The TestDose platform relies on GATE to reproduce precisely any imaging protocol and to provide reference dosimetry. For image generation, TestDose stores the user's imaging requirements and automatically generates command files used as input for GATE. Each compartment is simulated only once and the resulting output is weighted using pharmacokinetic data. Resulting compartment projections are aggregated to obtain the final image. For dosimetry computation, emission data are stored in the platform database and the relevant GATE input files are generated for the virtual patient model and associated pharmacokinetics. Results: Two sample software runs are given to demonstrate the potential of TestDose. A clinical imaging protocol for Octreoscan™ therapeutic treatment was implemented using the 4D XCAT model. Whole-body “step and shoot” acquisitions at different times postinjection and one SPECT acquisition were generated within reasonable computation times. Based on the same Octreoscan™ kinetics, a dosimetry
Water Depletion Threatens Agriculture
Brauman, K. A.; Richter, B. D.; Postel, S.; Floerke, M.; Malsy, M.
2014-12-01
Irrigated agriculture is the human activity that has by far the largest impact on water, constituting 85% of global water consumption and 67% of global water withdrawals. Much of this water use occurs in places where water depletion, the ratio of water consumption to water availability, exceeds 75% for at least one month of the year. Although only 17% of global watershed area experiences depletion at this level or more, nearly 30% of total cropland and 60% of irrigated cropland are found in these depleted watersheds. Staple crops are particularly at risk, with 75% of global irrigated wheat production and 65% of irrigated maize production found in watersheds that are at least seasonally depleted. Of importance to textile production, 75% of cotton production occurs in the same watersheds. For crop production in depleted watersheds, we find that one half to two-thirds of production occurs in watersheds that have not just seasonal but annual water shortages, suggesting that re-distributing water supply over the course of the year cannot be an effective solution to shortage. We explore the degree to which irrigated production in depleted watersheds reflects limitations in supply, a byproduct of the need for irrigation in perennially or seasonally dry landscapes, and identify heavy irrigation consumption that leads to watershed depletion in more humid climates. For watersheds that are not depleted, we evaluate the potential impact of an increase in irrigated production. Finally, we evaluate the benefits of irrigated agriculture in depleted and non-depleted watersheds, quantifying the fraction of irrigated production going to food production, animal feed, and biofuels.
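The depletion metric defined above (the ratio of water consumption to water availability, with a 75% threshold in at least one month) is straightforward to compute from monthly water-budget data; a sketch with hypothetical values for one watershed:

```python
# Monthly water consumption and availability for one watershed; all values
# are hypothetical (units arbitrary but consistent, e.g. km^3 per month).
consumption  = [30, 28, 35, 50, 70, 90, 95, 88, 60, 40, 32, 30]
availability = [120, 110, 100, 90, 85, 100, 110, 105, 95, 100, 110, 120]

# Depletion ratio per month; "depleted" means exceeding 75% at least once.
ratios = [c / a for c, a in zip(consumption, availability)]
seasonally_depleted = any(r > 0.75 for r in ratios)

# Annual depletion: same ratio over yearly totals. A watershed can be
# seasonally depleted without being annually depleted, which is why the
# abstract distinguishes the two when discussing re-distributing supply.
annually_depleted = sum(consumption) / sum(availability) > 0.75
print(seasonally_depleted, annually_depleted)
```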
Shafiei, M.; Gharari, S.; Pande, S.; Bhulai, S.
2014-01-01
Posterior sampling methods are increasingly being used to describe parameter and model predictive uncertainty in hydrologic modelling. This paper proposes an alternative to random walk chains (such as DREAM-zs). We propose a sampler based on independence chains with an embedded feature of standardiz
Energy Technology Data Exchange (ETDEWEB)
Moore, Stephen C. [Department of Radiology, Brigham and Women's Hospital and Harvard Medical School, 75 Francis Street, Boston, MA 02115 (United States)]. E-mail: scmoore@bwh.harvard.edu; Ouyang, Jinsong [Department of Radiology, Brigham and Women's Hospital and Harvard Medical School, 75 Francis Street, Boston, MA 02115 (United States); Park, Mi-Ae [Department of Radiology, Brigham and Women's Hospital and Harvard Medical School, 75 Francis Street, Boston, MA 02115 (United States); El Fakhri, Georges [Department of Radiology, Brigham and Women's Hospital and Harvard Medical School, 75 Francis Street, Boston, MA 02115 (United States)
2006-12-20
We have incorporated Monte Carlo (MC)-based estimates of patient scatter, detector scatter, and crosstalk into an iterative reconstruction algorithm, and compared its performance to that of a general spectral (GS) approach. We extended the MC-based reconstruction algorithm of de Jong et al. by (1) using the 'Delta scattering' method to determine photon interaction points, (2) simulating scatter maps for many energy bins simultaneously, and (3) decoupling the simulation of the object and detector by using pre-stored point spread functions (PSF) that included all collimator and detector effects. A numerical phantom was derived from a segmented CT scan of a torso phantom. The relative values of In-111 activity concentration simulated in soft tissue, liver, spine, left lung, right lung, and five spherical tumors (1.3-2.0 cm diam.) were 1.0, 1.5, 1.5, 0.3, 0.5, and 10.0, respectively. GS scatter projections were incorporated additively in an OSEM reconstruction (6 subsets × 10 projections × 2 photopeak windows). After three iterations, GS scatter projections were replaced by MC-estimated scatter projections for two additional iterations. MC-based compensation was quantitatively compared to GS-based compensation after five iterations. The bias of organ activity estimates ranged from -13% to -6.5% (GS), and from -1.4% to +5.0% (MC); tumor bias ranged from -20.0% to +10.0% for GS (mean ± std. dev. = -4.3 ± 11.9%), and from -2.2% to +18.8% for MC (+4.1 ± 8.6%). Image noise in all organs was less with MC than with GS.
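The 'Delta scattering' method in step (1), commonly known as Woodcock or delta-tracking, samples free paths against a majorant cross section and accepts a tentative collision at x with probability sigma(x)/sigma_max, avoiding explicit ray-tracing through material boundaries. A 1D sketch with an invented cross-section profile, checked against the analytic attenuation:

```python
import math
import random

# Piecewise-constant attenuation coefficient (cm^-1), invented for the demo.
def sigma(x):
    return 0.5 if x < 1.0 else 0.9

SIGMA_MAX = 0.9     # majorant cross section over the whole slab
SLAB = 2.0          # slab thickness (cm)

def transmitted(rng):
    """Track one photon by delta-tracking; True if it crosses the slab."""
    x = 0.0
    while True:
        x += -math.log(1.0 - rng.random()) / SIGMA_MAX  # tentative flight
        if x >= SLAB:
            return True                     # escaped without a real collision
        if rng.random() < sigma(x) / SIGMA_MAX:
            return False                    # real collision (absorbed here)
        # otherwise: "delta" (virtual) collision, photon continues unchanged

rng = random.Random(2)
n = 200_000
t_mc = sum(transmitted(rng) for _ in range(n)) / n
t_exact = math.exp(-(0.5 * 1.0 + 0.9 * 1.0))    # exp(-integral of sigma)
print(t_mc, t_exact)
```

The Monte Carlo transmission agrees with exp(-∫σ dx) to within statistical error, which is the sanity check that makes delta-tracking safe to drop into a heterogeneous phantom.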
Niccolini, G.; Alcolea, J.
Solving the radiative transfer problem is common to many fields in astrophysics. With the increasing angular resolution of space- or ground-based telescopes (VLTI, HST), but also with the next decade's instruments (NGST, ALMA, ...), astrophysical objects reveal, and will increasingly reveal, complex spatial structures. Consequently, it is necessary to develop numerical tools able to solve the radiative transfer equation in three dimensions in order to model and interpret these observations. I present a 3D radiative transfer program, using a new method for the construction of an adaptive spatial grid, based on the Monte Carlo method. With the help of this tool, one can solve the continuum radiative transfer problem (e.g. in a dusty medium), compute the temperature structure of the considered medium and obtain the flux of the object (SED and images).
Meesters, Christian; Pairet, Bruno; Rabenhorst, Anja; Decker, Heinz; Jaenicke, Elmar
2010-06-01
We present a modular, collaborative, open-source architecture for rigid-body modelling based upon small angle scattering data, named sas_rigid. It is designed to provide a fast and extensible scripting interface using the easy-to-learn Python programming language. Features include rigid-body modelling that yields static structures and three-dimensional probability densities, using two different algorithms. PMID:20598639
Energy Technology Data Exchange (ETDEWEB)
Adams, M.B. [United States Dept. of Agriculture Forest Service, Parsons, WV (United States); Burger, J.A. [Virginia Tech University, Blacksburg, VA (United States)
2010-07-01
This study assessed the hypothesis that soil base-cation depletion is an effect of acidic deposition in forests located in the central Appalachians. The effects of experimentally induced base-cation depletion were evaluated in relation to long-term soil productivity and the sustainability of forest stands. Whole-tree harvesting was conducted along with the removal of dead wood litter in order to remove all aboveground nutrients. Ammonium sulfate fertilizer was added at annual rates of 40.6 kg S/ha and 35.4 kg N/ha in order to increase the leaching of calcium (Ca) and magnesium (Mg) from the soil. A randomized complete block design was used in 4 or 5 treatment applications in a mixed hardwood experimental forest located in West Virginia and in a cherry-maple forest located in a national forest in West Virginia. Soils were sampled over a 10-year period. The study showed that significant changes in soil Mg, N and some other nutrients occurred over time. However, biomass did not differ significantly among the different treatment options used.
Investigation of the CRT performance of a PET scanner based in liquid xenon: A Monte Carlo study
Gomez-Cadenas, J J; Ferrario, P; Monrabal, F; Rodríguez, J; Toledo, J F
2016-01-01
The measurement of the time of flight of the two 511 keV gammas recorded in coincidence in a PET scanner provides an effective way of reducing the random background and therefore increases the scanner sensitivity, provided that the coincidence resolving time (CRT) of the gammas is sufficiently good. Existing commercial systems based on LYSO crystals, such as the GEMINIS of Philips, reach CRT values of ~600 ps (FWHM). In this paper we present a Monte Carlo investigation of the CRT performance of a PET scanner exploiting the scintillating properties of liquid xenon. We find that an excellent CRT of 60-70 ps (depending on the PDE of the sensor) can be obtained if the scanner is instrumented with silicon photomultipliers (SiPMs) sensitive to the ultraviolet light emitted by xenon. Alternatively, a CRT of 120 ps can be obtained by instrumenting the scanner with (much cheaper) blue-sensitive SiPMs coated with a suitable wavelength shifter. These results show the excellent time of flight capabilities of a PET device b...
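The link between per-detector timing jitter and CRT figures like those quoted can be checked with a tiny simulation: for two independent Gaussian jitters of width sigma, the coincidence time difference has width sigma·√2, so CRT (FWHM) ≈ 2.355·√2·sigma. The 30 ps jitter below is illustrative, not a property of the scanners discussed.

```python
import math
import numpy as np

rng = np.random.default_rng(3)
sigma_single = 30e-12                   # hypothetical per-detector jitter (s)
n = 200_000
t1 = rng.normal(0.0, sigma_single, n)   # detection time error, detector 1
t2 = rng.normal(0.0, sigma_single, n)   # detection time error, detector 2
crt_fwhm = 2.3548 * (t1 - t2).std()     # FWHM of the coincidence difference
print(crt_fwhm * 1e12)                  # in ps; analytic value is ~99.9 ps
```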
Energy Technology Data Exchange (ETDEWEB)
Brunetti, Antonio; Golosio, Bruno [Universita degli Studi di Sassari, Dipartimento di Scienze Politiche, Scienze della Comunicazione e Ingegneria dell'Informazione, Sassari (Italy); Melis, Maria Grazia [Universita degli Studi di Sassari, Dipartimento di Storia, Scienze dell'Uomo e della Formazione, Sassari (Italy); Mura, Stefania [Universita degli Studi di Sassari, Dipartimento di Agraria e Nucleo di Ricerca sulla Desertificazione, Sassari (Italy)
2014-11-08
X-ray fluorescence (XRF) is a well known nondestructive technique. It is also applied to multilayer characterization, since it can estimate both the composition and the thickness of the layers. Several kinds of cultural heritage samples can be considered complex multilayers, such as paintings, decorated objects or some types of metallic samples. Furthermore, they often have rough surfaces, and this makes a precise determination of the structure and composition harder. The standard quantitative XRF approach does not take this aspect into account. In this paper, we propose a novel approach based on the combined use of X-ray measurements performed with a polychromatic beam and Monte Carlo simulations. All the information contained in an X-ray spectrum is used. This approach yields a very good estimate of the sample composition, in terms of both chemical elements and material thickness, and in this sense represents an improvement of XRF measurement capabilities. Some examples will be examined and discussed. (orig.)
Joshi, Kaushik; Chaudhuri, Santanu
2016-10-01
The ability to accelerate the morphological evolution of nanoscale precipitates is a fundamental challenge for atomistic simulations. Kinetic Monte Carlo (KMC) methodology is an effective approach for accelerating the evolution of nanoscale systems that are dominated by so-called rare events. The quality and accuracy of the energy landscape used in KMC calculations can be significantly improved using DFT-informed interatomic potentials. Using a newly developed computational framework that uses the molecular simulator LAMMPS as a library function inside the KMC solver SPPARKS, we investigated the formation and growth of Guinier–Preston (GP) zones in dilute Al–Cu alloys at different temperatures and copper concentrations. The KMC simulations with the angular dependent potential (ADP) predict formation of coherent disc-shaped monolayers of copper atoms (GPI zones) in the early stage. Such monolayers are then gradually transformed into the energetically favored GPII phase, which has two aluminum layers sandwiched between copper layers. We analyzed the growth kinetics of the KMC trajectory using Johnson–Mehl–Avrami (JMA) theory and obtained a phase transformation index close to 1.0. In the presence of grain boundaries, the KMC calculations predict the segregation of copper atoms near the grain boundaries instead of the formation of GP zones. The computational framework presented in this work is based on open source potentials and an MD simulator, and can predict morphological changes during the evolution of the alloys in the bulk and around grain boundaries.
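The JMA analysis amounts to fitting a straight line in Avrami coordinates: with X(t) = 1 − exp(−(kt)ⁿ), the quantity ln(−ln(1 − X)) is linear in ln t with slope n, the transformation index. A sketch on synthetic data with n = 1 (the paper's KMC trajectory data are not reproduced here):

```python
import numpy as np

# Synthetic transformed-fraction curve with Avrami index n = 1 and a
# made-up rate constant k, standing in for a KMC trajectory.
k, n_true = 0.02, 1.0
t = np.linspace(10.0, 200.0, 40)
X = 1.0 - np.exp(-(k * t) ** n_true)

# Avrami plot: ln(-ln(1 - X)) against ln(t); the fitted slope is the index.
y = np.log(-np.log(1.0 - X))
n_fit, _ = np.polyfit(np.log(t), y, 1)
print(n_fit)   # recovers the Avrami index on exact data
```

An index near 1.0, as reported above, is consistent with growth of pre-existing two-dimensional platelets rather than continuous three-dimensional nucleation and growth.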
Xiong, Chuan; Shi, Jiancheng
2014-01-01
To date, light scattering models of snow take little account of real snow microstructure. The assumption of ideal spherical or other single-shaped particles in previous snow light scattering models can introduce errors into the light scattering modeling of snow, and hence into remote sensing inversion algorithms. This paper builds a snow polarized reflectance model based on a bicontinuous medium, in which the real snow microstructure is considered. The specific surface area of the bicontinuous medium can be derived analytically and accurately. The polarized Monte Carlo ray tracing technique is applied to the computer-generated bicontinuous medium. With proper algorithms, the snow surface albedo, bidirectional reflectance distribution function (BRDF) and polarized BRDF can be simulated. Validation of the model-predicted spectral albedo and bidirectional reflectance factor (BRF) against experimental data shows good results. The relationship between snow surface albedo and snow specific surface area (SSA) was predicted; this relationship can be used in future improvements of SSA inversion algorithms. The model-predicted polarized reflectance is validated and proved accurate, so it can be further applied in polarized remote sensing.
International Nuclear Information System (INIS)
A univel geometry, neutral particle Monte Carlo transport code, written entirely in the Java programming language, is under development for medical radiotherapy applications. The code uses ENDF-VI based continuous-energy cross-section data in a flexible XML format. Full neutron-photon coupling, including detailed photon production and photonuclear reactions, is included. Charged particle equilibrium is assumed within the patient model, so that detailed transport of electrons produced by photon interactions may be neglected. External beam and internal distributed source descriptions for mixed neutron-photon sources are allowed. Flux and dose tallies are performed on a univel basis. A four-tap, shift-register-sequence random number generator is used. Initial verification and validation testing of the basic neutron transport routines is underway. The searchlight problem was chosen as a suitable first application because of the simplicity of the physical model. Results show excellent agreement with analytic solutions. Computation times for similar numbers of histories are comparable to those of other neutron MC codes written in C and FORTRAN.
Scalability tests of R-GMA based Grid job monitoring system for CMS Monte Carlo data production
Bonacorsi, D; Field, L; Fisher, S; Grandi, C; Hobson, P R; Kyberd, P; MacEvoy, B; Nebrensky, J J; Tallini, H; Traylen, S
2004-01-01
High Energy Physics experiments such as CMS (Compact Muon Solenoid) at the Large Hadron Collider have unprecedented, large-scale data processing computing requirements, with data accumulating at around 1 Gbyte/s. The Grid distributed computing paradigm has been chosen as the solution to provide the requisite computing power. The demanding nature of CMS software and computing requirements, such as the production of large quantities of Monte Carlo simulated data, makes them an ideal test case for the Grid and a major driver for the development of Grid technologies. One important challenge when using the Grid for large-scale data analysis is the ability to monitor the large numbers of jobs that are being executed simultaneously at multiple remote sites. R-GMA is a monitoring and information management service for distributed resources based on the Grid Monitoring Architecture of the Global Grid Forum. In this paper we report on the first measurements of R-GMA as part of a monitoring architecture to be used for b...
Wierling, Christoph; Kühn, Alexander; Hache, Hendrik; Daskalaki, Andriani; Maschke-Dutz, Elisabeth; Peycheva, Svetlana; Li, Jian; Herwig, Ralf; Lehrach, Hans
2012-08-15
Cancer is known to be a complex disease and its therapy is difficult. Much information is available on molecules and pathways involved in cancer onset and progression and this data provides a valuable resource for the development of predictive computer models that can help to identify new potential drug targets or to improve therapies. Modeling cancer treatment has to take into account many cellular pathways usually leading to the construction of large mathematical models. The development of such models is complicated by the fact that relevant parameters are either completely unknown, or can at best be measured under highly artificial conditions. Here we propose an approach for constructing predictive models of such complex biological networks in the absence of accurate knowledge on parameter values, and apply this strategy to predict the effects of perturbations induced by anti-cancer drug target inhibitions on an epidermal growth factor (EGF) signaling network. The strategy is based on a Monte Carlo approach, in which the kinetic parameters are repeatedly sampled from specific probability distributions and used for multiple parallel simulations. Simulation results from different forms of the model (e.g., a model that expresses a certain mutation or mutation pattern or the treatment by a certain drug or drug combination) can be compared with the unperturbed control model and used for the prediction of the perturbation effects. This framework opens the way to experiment with complex biological networks in the computer, likely to save costs in drug development and to improve patient therapy.
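The core of this strategy — repeatedly drawing unknown kinetic parameters from prior distributions, simulating each draw, and comparing a perturbed ensemble against an unperturbed control — can be illustrated with a minimal sketch. The one-step activation model, the lognormal priors and the 90% inhibition factor below are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)

def cascade(t, y, k_act, k_deact):
    # one hypothetical activation/deactivation step of a signalling node
    return [k_act * (1.0 - y[0]) - k_deact * y[0]]

def steady_output(k_act, k_deact):
    sol = solve_ivp(cascade, (0.0, 50.0), [0.0], args=(k_act, k_deact))
    return sol.y[0, -1]

# kinetic parameters are unknown: sample them from broad lognormal priors
n = 200
k_act = rng.lognormal(mean=0.0, sigma=1.0, size=n)
k_deact = rng.lognormal(mean=0.0, sigma=1.0, size=n)

control = np.array([steady_output(a, d) for a, d in zip(k_act, k_deact)])
# a hypothetical drug modelled as 90% inhibition of the activating reaction
treated = np.array([steady_output(0.1 * a, d) for a, d in zip(k_act, k_deact)])

# compare the ensembles rather than any single (unknowable) parameter set
print(np.median(control), np.median(treated))
```

The prediction is then a statement about the distribution of outcomes over the parameter ensemble, which is exactly what makes the approach usable when individual rate constants cannot be measured.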
Study on the Uncertainty of the Available Time Under Ship Fire Based on Monte Carlo Sampling Method
Institute of Scientific and Technical Information of China (English)
WANG Jin-hui; CHU Guan-quan; LI Kai-yuan
2013-01-01
Available safety egress time under ship fire (SFAT) is critical to ship fire safety assessment, design and emergency rescue. Although SFAT can be determined using fire models such as the two-zone fire model CFAST and the field model FDS, none of these models can address the uncertainties involved in the input parameters. To solve this problem, the current study presents a framework of uncertainty analysis for SFAT. Firstly, a deterministic model estimating SFAT is built. The uncertainties of the input parameters are regarded as random variables with given probability distribution functions. Subsequently, the deterministic SFAT model is coupled with a Monte Carlo sampling method to investigate the uncertainties of SFAT. Spearman's rank-order correlation coefficient (SRCC) is used to examine the sensitivity of each uncertain input parameter on SFAT. To illustrate the proposed approach in detail, a case study is performed. Based on the proposed approach, the probability density function and cumulative distribution function of SFAT are obtained. Furthermore, a sensitivity analysis with regard to SFAT is conducted. The results show a strong negative correlation between SFAT and the fire growth coefficient, whereas the effects of the other parameters are so weak that they can be neglected.
Pattern-oriented Agent-based Monte Carlo simulation of Cellular Redox Environment
DEFF Research Database (Denmark)
Tang, Jiaowei; Holcombe, Mike; Boonen, Harrie C.M.
/CYSS) and mitochondrial redox couples. Evidence suggests that both intracellular and extracellular redox can affect overall cell redox state. How redox is communicated between extracellular and intracellular environments is still a matter of debate. Some researchers conclude based on experimental data...... will be the agents [7]. Additionally, the spatial distribution of enzymes and reactants, and diffusion of reactants will be considered as a contributing factor. To initially simplify the modeling, the redox change of intracellular compartments will be ignored or only the export and import of redox will be modeled...... for Autonomous Agents and Multiagent Systems: Toronto, Canada. p. 1633-1636....
CRDIAC: Coupled Reactor Depletion Instrument with Automated Control
Energy Technology Data Exchange (ETDEWEB)
Steven K. Logan
2012-08-01
When modeling the behavior of a nuclear reactor over time, it is important to understand how the isotopes in the reactor will change, or transmute, over that time. This is especially important in the reactor fuel itself. Many nuclear physics modeling codes model how particles interact in the system, but do not model this over time. Thus, another code is used in conjunction with the nuclear physics code to accomplish this. In our code, the Monte Carlo N-Particle (MCNP) code and the Multi Reactor Transmutation Analysis Utility (MRTAU) were chosen as the codes to use. In this way, MCNP produces the reaction rates in the different isotopes present, and MRTAU uses cross sections generated from these reaction rates to determine how the mass of each isotope is lost or gained. The information passed between these two codes must be reformatted and edited for use. For this, a Python 2.7 script was developed to aid the user in getting the information into the correct forms. This newly developed methodology was called the Coupled Reactor Depletion Instrument with Automated Control (CRDIAC). As is the case for any newly developed methodology for modeling physical phenomena, CRDIAC needed to be verified against similar methodology and validated against experimental data, in our case from AFIP-3. AFIP-3 was a reduced-enrichment plate-type fuel tested in the ATR. We verified our methodology against the MCNP Coupled with ORIGEN2 (MCWO) method and validated our work against the Post Irradiation Examination (PIE) data. When compared to MCWO, the difference in concentration of U-235 throughout Cycle 144A was about 1%. When compared to the PIE data, the average bias for end-of-life U-235 concentration was about 2%. These results from CRDIAC therefore agree with the MCWO and PIE data, validating and verifying CRDIAC. CRDIAC provides an alternative to using ORIGEN-based methodology, which is useful because CRDIAC's depletion code, MRTAU, uses every available isotope in its
An automated Monte-Carlo based method for the calculation of cascade summing factors
Jackson, M. J.; Britton, R.; Davies, A. V.; McLarty, J. L.; Goodwin, M.
2016-10-01
A versatile method has been developed to calculate cascade summing factors for use in quantitative gamma-spectrometry analysis procedures. The proposed method is based solely on Evaluated Nuclear Structure Data File (ENSDF) nuclear data, an X-ray energy library, and accurate efficiency characterisations for single detector counting geometries. The algorithm, which accounts for γ-γ, γ-X, γ-511 and γ-e- coincidences, can be applied to any design of gamma spectrometer and can be expanded to incorporate any number of nuclides. Efficiency characterisations can be derived from measured or mathematically modelled functions, and can accommodate both point and volumetric source types. The calculated results are shown to be consistent with an industry standard gamma-spectrometry software package. Additional benefits including calculation of cascade summing factors for all gamma and X-ray emissions, not just the major emission lines, are also highlighted.
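For the simplest case of a two-gamma cascade emitted in prompt coincidence, the summing-out correction to one gamma's peak area depends only on the detector's total (not full-energy-peak) efficiency at the coincident gamma's energy. A hedged one-function sketch of that first-order factor follows; the 15% efficiency is an invented example, and the actual algorithm described above additionally handles γ-X, γ-511 and γ-e- coincidences with full decay-scheme branching:

```python
def summing_out_factor(eff_total_coinc):
    """First-order summing-out correction for a prompt two-gamma cascade:
    the observed peak area of one gamma is depressed by the probability
    (the detector's *total* efficiency) that the coincident gamma deposits
    any energy at all, so the measured area must be scaled up by this factor."""
    return 1.0 / (1.0 - eff_total_coinc)

# e.g. a close counting geometry with 15% total efficiency at the coincident
# gamma's energy (value invented for illustration)
print(round(summing_out_factor(0.15), 3))   # 1.176
```

This also makes clear why cascade summing matters most for close geometries and volumetric sources, where total efficiencies are largest.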
Directory of Open Access Journals (Sweden)
S. Maiti
2011-03-01
The Koyna region is well known for its triggered seismic activity since the hazardous earthquake of M=6.3 that occurred around the Koyna reservoir on 10 December 1967. Understanding the shallow distribution of the resistivity pattern in such a seismically critical area is vital for mapping faults, fractures and lineaments. However, deducing the true resistivity distribution from apparent resistivity data is imprecise owing to the intrinsic non-linearity in the data structures. Here we present a new technique based on Bayesian neural network (BNN) theory using a Hybrid Monte Carlo (HMC)/Markov Chain Monte Carlo (MCMC) simulation scheme. The new method is applied to invert one- and two-dimensional Direct Current (DC) vertical electrical sounding (VES) data acquired around the Koyna region in India. Prior to applying the method to actual resistivity data, it was tested on simulated synthetic signals. In this approach the objective/cost function is optimized following an HMC/MCMC sampling-based algorithm, and each trajectory is updated by approximating the Hamiltonian differential equations through a leapfrog discretization scheme. The stability of the new inversion technique was tested in the presence of correlated red noise, and the uncertainty of the result was estimated using the BNN code. The estimated true resistivity distribution was compared with the results of singular value decomposition (SVD)-based conventional resistivity inversion. Comparative results based on the HMC-based Bayesian neural network are in good agreement with the existing model results; in some cases it also provides more detailed and precise results, which appear justified by local geological and structural details. The new BNN approach based on HMC is faster and proved to be a promising inversion scheme for interpreting complex and non-linear resistivity problems. The HMC-based BNN results
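The leapfrog-discretized HMC update mentioned above can be sketched in a few lines. The example targets a standard normal rather than a resistivity posterior, and the step size and trajectory length are arbitrary tuning choices:

```python
import numpy as np

rng = np.random.default_rng(2)

# Target density: standard normal, i.e. potential U(q) = q^2 / 2
U = lambda q: 0.5 * q * q
grad_U = lambda q: q

def hmc_step(q, eps=0.2, n_leap=20):
    """One Hybrid Monte Carlo update using leapfrog integration."""
    p = rng.normal()                              # resample momentum
    q_new, p_new = q, p
    p_new -= 0.5 * eps * grad_U(q_new)            # initial half step (momentum)
    for i in range(n_leap):
        q_new += eps * p_new                      # full step (position)
        if i < n_leap - 1:
            p_new -= eps * grad_U(q_new)          # full step (momentum)
    p_new -= 0.5 * eps * grad_U(q_new)            # final half step (momentum)
    # Metropolis accept/reject on the total Hamiltonian H = U + p^2/2
    dH = U(q_new) + 0.5 * p_new ** 2 - U(q) - 0.5 * p ** 2
    return q_new if np.log(rng.uniform()) < -dH else q

q, samples = 0.0, []
for _ in range(3000):
    q = hmc_step(q)
    samples.append(q)
print(np.mean(samples), np.std(samples))          # should approach 0 and 1
```

The same update applies unchanged to a BNN weight vector, with `grad_U` replaced by the gradient of the negative log-posterior; the gradient-guided trajectories are what make HMC faster than random-walk MCMC on such problems.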
International Nuclear Information System (INIS)
The purpose of this study was to examine dose distribution of a skull base tumor and surrounding critical structures in response to high dose intensity-modulated radiosurgery (IMRS) with Monte Carlo (MC) simulation using a dual resolution sandwich phantom. The measurement-based Monte Carlo (MBMC) method (Lin et al., 2009) was adopted for the study. The major components of the MBMC technique involve (1) the BEAMnrc code for beam transport through the treatment head of a Varian 21EX linear accelerator, (2) the DOSXYZnrc code for patient dose simulation and (3) an EPID-measured efficiency map which describes non-uniform fluence distribution of the IMRS treatment beam. For the simulated case, five isocentric 6 MV photon beams were designed to deliver a total dose of 1200 cGy in two fractions to the skull base tumor. A sandwich phantom for the MBMC simulation was created based on the patient's CT scan of a skull base tumor [gross tumor volume (GTV)=8.4 cm3] near the right 8th cranial nerve. The phantom, consisted of a 1.2-cm thick skull base region, had a voxel resolution of 0.05×0.05×0.1 cm3 and was sandwiched in between 0.05×0.05×0.3 cm3 slices of a head phantom. A coarser 0.2×0.2×0.3 cm3 single resolution (SR) phantom was also created for comparison with the sandwich phantom. A particle history of 3×108 for each beam was used for simulations of both the SR and the sandwich phantoms to achieve a statistical uncertainty of <2%. Our study showed that the planning target volume (PTV) receiving at least 95% of the prescribed dose (VPTV95) was 96.9%, 96.7% and 99.9% for the TPS, SR, and sandwich phantom, respectively. The maximum and mean doses to large organs such as the PTV, brain stem, and parotid gland for the TPS, SR and sandwich MC simulations did not show any significant difference; however, significant dose differences were observed for very small structures like the right 8th cranial nerve, right cochlea, right malleus and right semicircular canal
International Nuclear Information System (INIS)
Depleted uranium is uranium in which the percentage of fissile uranium-235 is less than 0.2% or 0.3%. It usually results from reprocessing burned nuclear fuel, and it is also mixed with some other radioactive elements such as uranium-236, uranium-238 and plutonium-239. The useful features of depleted uranium are its high density, low price and ready availability. These specifications make depleted uranium one of the best materials when objects are needed that are small in size but quite heavy for their size. Uses of depleted uranium have increased relatively in domestic industry, as well as in the nuclear industry, in the last few years. It thus has growing uses in many military and peaceful areas, such as balancing giant aircraft, ships and missiles, and manufacturing some types of extremely hard concrete. (author)
DEPLETION POTENTIAL OF COLLOIDS: A DIRECT SIMULATION STUDY
Institute of Scientific and Technical Information of China (English)
李卫华; 薛松; 马红孺
2001-01-01
The depletion interaction between a big sphere and a hard wall, and between two big hard spheres, in a hard-sphere colloidal system was studied by the Monte Carlo method. Direct simulation of the free energy difference was performed by means of the Acceptance Ratio Method (ARM).
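In the Asakura–Oosawa picture that underlies such simulations, the depletion attraction near contact is proportional to the overlap of the excluded-volume ("depletion") layers of the wall and the big sphere. A hit-or-miss Monte Carlo estimate of that overlap volume is easy to sketch; the radii below are arbitrary, and this illustrates only the geometry, not the paper's ARM free-energy calculation:

```python
import numpy as np

rng = np.random.default_rng(3)

def overlap_volume(h, R=5.0, r=1.0, n=200_000):
    """Hit-or-miss MC estimate of the overlap between the depletion layer of a
    hard wall (centre heights z < r are excluded to small spheres) and that of
    a big sphere of radius R whose surface sits a distance h from the wall."""
    zc = R + h                                   # big-sphere centre height
    s = R + r                                    # lateral extent of bounding box
    pts = np.empty((n, 3))
    pts[:, :2] = rng.uniform(-s, s, size=(n, 2))
    pts[:, 2] = rng.uniform(0.0, r, size=n)      # overlap requires 0 <= z < r
    inside = pts[:, 0]**2 + pts[:, 1]**2 + (pts[:, 2] - zc)**2 < (R + r)**2
    return (2 * s) * (2 * s) * r * inside.mean()

# attraction ~ overlap volume; it vanishes once the gap exceeds 2r
print(overlap_volume(0.5) > 0.0, overlap_volume(2.5) == 0.0)
```

The short range of the interaction (it switches off for gaps wider than one small-sphere diameter) falls out of the geometry directly.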
Tsukamoto, Tetsuo; Yamamoto, Hiroyuki; Okada, Seiji; Matano, Tetsuro
2016-09-01
Although antiretroviral therapy has made human immunodeficiency virus (HIV) infection a controllable disease, it is still unclear how viral replication persists in untreated patients and causes CD4(+) T-cell depletion leading to acquired immunodeficiency syndrome (AIDS) within several years. Theorists have tried to explain this with the diversity threshold theory, in which accumulated mutations in the HIV genome make the virus so diverse that the immune system will no longer be able to recognize all the variants and will fail to control the viraemia. Although the theory could apply to a number of cases, macaque AIDS models using simian immunodeficiency virus (SIV) have shown that failed viral control at the set point is not always associated with T-cell escape mutations. Moreover, even monkeys without a protective major histocompatibility complex (MHC) allele can contain replication of a superinfecting SIV following immunization with a live-attenuated SIV vaccine, while those animals are not capable of fighting primary SIV infection. Here we propose a recursion-based virus-specific naive CD4(+) T-cell depletion hypothesis, arrived at by considering what may happen in individuals experiencing primary immunodeficiency virus infection. This could explain the mechanism for the impairment of the virus-specific immune response in the course of HIV infection. PMID:27515208
Atriana Palma, Bianey; Ureba Sánchez, Ana; Salguero, Francisco Javier; Arráns, Rafael; Míguez Sánchez, Carlos; Walls Zurita, Amadeo; Romero Hermida, María Isabel; Leal, Antonio
2012-03-01
The purpose of this study was to present a Monte Carlo (MC)-based optimization procedure to improve conventional treatment plans for accelerated partial breast irradiation (APBI) using modulated electron beams alone or combined with modulated photon beams, to be delivered by a single collimation device, i.e. a photon multi-leaf collimator (xMLC) already installed in a standard hospital. Five left-sided breast cases were retrospectively planned using modulated photon and/or electron beams with an in-house treatment planning system (TPS), called CARMEN, based on MC simulations. For comparison, the same cases were also planned by a PINNACLE TPS using conventional inverse intensity-modulated radiation therapy (IMRT). Normal tissue complication probabilities for pericarditis, pneumonitis and breast fibrosis were calculated. CARMEN plans showed the same acceptable planning target volume (PTV) coverage as conventional IMRT plans, with 90% of the PTV volume covered by the prescribed dose (Dp). The heart and ipsilateral lung volumes receiving 5% Dp and 15% Dp, respectively, were 3.2-3.6 times lower for CARMEN plans. The ipsilateral breast volume receiving 50% Dp and 100% Dp was on average 1.4-1.7 times lower for CARMEN plans. The skin and whole-body low-dose volumes were also reduced. Modulated photon and/or electron beams planned by the CARMEN TPS improve APBI treatments by increasing normal tissue sparing while maintaining the same PTV coverage achieved by other techniques. The use of the xMLC, already installed in the linac, to collimate both photon and electron beams favors the clinical implementation of APBI with the highest efficiency.
Monte Carlo simulation of moderator and reflector in coal analyzer based on a D-T neutron generator.
Shan, Qing; Chu, Shengnan; Jia, Wenbao
2015-11-01
Coal is one of the most popular fuels in the world. The use of coal not only produces carbon dioxide, but also contributes to environmental pollution by heavy metals. In a prompt gamma-ray neutron activation analysis (PGNAA)-based coal analyzer, the characteristic gamma rays of C and O are mainly induced by fast neutrons, whereas thermal neutrons can be used to induce the characteristic gamma rays of H, Si, and heavy metals. Therefore, appropriate proportions of thermal and fast neutrons are beneficial for improving the measurement accuracy of heavy metals while ensuring that the measurement accuracy of the main elements meets the requirements of the industry. Once the required yield of the deuterium-tritium (D-T) neutron generator is determined, appropriate thermal and fast neutron fluxes can be obtained by optimizing the neutron source term. In this article, the Monte Carlo N-Particle (MCNP) transport code and the Evaluated Nuclear Data File (ENDF) database are used to optimize the neutron source term in a PGNAA-based coal analyzer, including the material and shape of the moderator and neutron reflector. There are two optimization targets: (1) the ratio of thermal to fast neutrons is 1:1, and (2) the total neutron flux from the optimized neutron source in the sample increases by at least 100% compared with the initial one. The simulation results show that the total neutron flux in the sample increases by 102%, 102%, 85%, 72%, and 62% with Pb, Bi, Nb, W, and Be reflectors, respectively. Maximum optimization of the targets is achieved when the moderator is a 3-cm-thick lead layer coupled with a 3-cm-thick high-density polyethylene (HDPE) layer, and the neutron reflector is a 27-cm-thick hemispherical lead layer.
Li, Dong; Chen, Bin; Ran, Wei Yu; Wang, Guo Xiang; Wu, Wen Juan
2015-01-01
The voxel-based Monte Carlo method (VMC) is now a gold standard in the simulation of light propagation in turbid media. For complex tissue structures, however, the computational cost is high when small voxels are used to improve the smoothness of tissue interfaces and a large number of photons are used to obtain accurate results. To reduce the computational cost, criteria were proposed to determine the voxel size and photon number in 3-dimensional VMC simulations with acceptable accuracy and computation time. The selection of the voxel size can be expressed as a function of tissue geometry and optical properties. The photon number should be at least 5 times the total voxel number. These criteria are further applied in developing a photon ray splitting scheme with a local grid refinement technique to reduce the computational cost for a nonuniform tissue structure with significantly varying optical properties. In the proposed technique, a nonuniform refined grid system is used, where fine grids are used for tissue with high absorption and complex geometry, and coarse grids are used for the other parts. In this technique, the total photon number is selected based on the voxel size of the coarse grid. Furthermore, the photon-splitting scheme is developed to satisfy the statistical accuracy requirement for the fine-grid areas. Results show that the local grid refinement technique with the photon ray splitting scheme can accelerate the computation by 7.6 times (reducing the time consumption from 17.5 to 2.3 h) in the simulation of laser light energy deposition in skin tissue that contains port wine stain lesions.
Zhang, Junlong; Li, Yongping; Huang, Guohe; Chen, Xi; Bao, Anming
2016-07-01
Without a realistic assessment of parameter uncertainty, decision makers may encounter difficulties in accurately describing hydrologic processes and assessing relationships between model parameters and watershed characteristics. In this study, a Markov-Chain-Monte-Carlo-based multilevel-factorial-analysis (MCMC-MFA) method is developed, which can not only generate samples of parameters from a well-constructed Markov chain and assess parameter uncertainties with straightforward Bayesian inference, but also investigate the individual and interactive effects of multiple parameters on model output by measuring the specific variations of hydrological responses. A case study is conducted to address parameter uncertainties in the Kaidu watershed of northwest China. The effects of multiple parameters and their interactions are quantitatively investigated using the MCMC-MFA with a three-level factorial experiment (81 runs in total). A variance-based sensitivity analysis method is used to validate the results of the parameters' effects. The results disclose that (i) the soil conservation service runoff curve number for moisture condition II (CN2) and the fraction of snow volume corresponding to 50% snow cover (SNO50COV) are the most significant factors for hydrological responses, implying that infiltration-excess overland flow and snow water equivalent represent important water inputs to the hydrological system of the Kaidu watershed; (ii) the saturated hydraulic conductivity (SOL_K) and the soil evaporation compensation factor (ESCO) have obvious effects on hydrological responses, implying that the processes of percolation and evaporation impact the hydrological process in this watershed; (iii) the interactions of ESCO and SNO50COV, as well as CN2 and SNO50COV, have an obvious effect, implying that snow cover can impact the generation of runoff on the land surface and the extraction of soil evaporative demand in lower soil layers. These findings can help enhance the hydrological model
Tian, Zhen; Li, Yongbao; Shi, Feng; Jiang, Steve B; Jia, Xun
2015-01-01
We recently built an analytical source model for a GPU-based MC dose engine. In this paper, we present a sampling strategy to efficiently utilize this source model in GPU-based dose calculation. Our source model is based on the concept of a phase-space ring (PSR). This ring structure makes it effective at accounting for beam rotational symmetry, but not suitable for dose calculations with rectangular jaw settings. Hence, we first convert the PSR source model to its phase-space-let (PSL) representation. Then, in dose calculation, the different types of sub-sources are sampled separately. Source sampling and particle transport are iterated, so that the particles being sampled and transported simultaneously are of the same type and close in energy, which alleviates GPU thread divergence. We also present an automatic commissioning approach to adjust the model for a good representation of a clinical linear accelerator. Weighting factors were introduced to adjust the relative weights of the PSRs, determined by solving a quadratic minimization ...
Energy Technology Data Exchange (ETDEWEB)
Dieudonne, C.; Dumonteil, E.; Malvagi, F.; Diop, C. M. [Commissariat a l' Energie Atomique et aux Energies Alternatives CEA, Service d' Etude des Reacteurs et de Mathematiques Appliquees, DEN/DANS/DM2S/SERMA/LTSD, F91191 Gif-sur-Yvette cedex (France)
2013-07-01
In recent years, Monte Carlo burnup/depletion codes have appeared that couple a Monte Carlo code, which simulates the neutron transport, to a deterministic method that computes the medium depletion due to the neutron flux. Solving the Boltzmann and Bateman equations in such a way makes it possible to track fine three-dimensional effects and to avoid the multi-group hypotheses made by deterministic solvers. The counterpart is the prohibitive calculation time due to the time-expensive Monte Carlo solver called at each time step. Therefore, great improvements in terms of calculation time could be expected if one could get rid of the Monte Carlo transport sequences. For example, it may be interesting to run an initial Monte Carlo simulation only once, for the first time/burnup step, and then to use the concentration perturbation capability of the Monte Carlo code to replace the other time/burnup steps (the different burnup steps are treated as perturbations of the concentrations of the initial burnup step). This paper presents some advantages and limitations of this technique and preliminary results in terms of speed-up and figure of merit. Finally, we detail different possible calculation schemes based on this method. (authors)
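Over one burnup step, the Bateman equations reduce to the linear system dN/dt = A N, where A holds decay constants and flux-weighted reaction rates supplied by the transport solve; the step can then be advanced with a matrix exponential. A sketch for a hypothetical three-nuclide chain with illustrative rates:

```python
import numpy as np
from scipy.linalg import expm

# Effective rates (1/s) for a hypothetical chain N0 -> N1 -> N2, with the
# flux-weighted capture term lam01 imagined as coming from a transport solve
lam01 = 1e-4
lam12 = 5e-5

A = np.array([
    [-lam01,  0.0,    0.0],
    [ lam01, -lam12,  0.0],
    [ 0.0,    lam12,  0.0],
])

N = np.array([1.0, 0.0, 0.0])    # initial atom densities (arbitrary units)
dt = 86400.0                     # one-day burnup step

for _ in range(10):              # in a full coupling, A would be refreshed by
    N = expm(A * dt) @ N         # a new Monte Carlo transport solve each step
print(N, N.sum())                # the chain conserves the total atom count
```

The expensive part of the real scheme is not this linear solve but the transport calculation that refreshes A at every step, which is exactly what the perturbation approach described above seeks to avoid.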
Energy Technology Data Exchange (ETDEWEB)
Zwermann, W.; Aures, A.; Gallner, L.; Hannstein, V.; Krazykacz-Hausmann, B.; Velkov, K. [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) mbH, Garching (Germany); Martinez, J. S. [Dept. of Nuclear Engineering, Universidad Politecnica de Madrid, Madrid (Spain)
2014-06-15
Uncertainty and sensitivity analyses with respect to nuclear data are performed with depletion calculations for BWR and PWR fuel assemblies specified in the framework of the UAM-LWR Benchmark Phase II. For this, the GRS sampling based tool XSUSA is employed together with the TRITON depletion sequences from the SCALE 6.1 code system. Uncertainties for multiplication factors and nuclide inventories are determined, as well as the main contributors to these result uncertainties by calculating importance indicators. The corresponding neutron transport calculations are performed with the deterministic discrete-ordinates code NEWT. In addition, the Monte Carlo code KENO in multi-group mode is used to demonstrate a method with which the number of neutron histories per calculation run can be substantially reduced as compared to that in a calculation for the nominal case without uncertainties, while uncertainties and sensitivities are obtained with almost the same accuracy.
International Nuclear Information System (INIS)
Sheet metal stamping is one of the most commonly used manufacturing processes, and hence much research on it has been carried out for economic gain. Searching through the literature, however, one finds that many problems remain unsolved. For example, it is well known that for the same press, the same workpiece material, and the same set of dies, the product quality may vary owing to a number of factors, such as the inhomogeneity of the workpiece material, the loading error, the lubrication, etc. Presently, few approaches seem able to predict the quality variation, let alone identify what contributes to it. As a result, trial and error is still needed on the shop floor, causing additional cost and time delay. This paper introduces a new approach to predict product quality variation and identify the sensitive design/process parameters. The new approach is based on a combination of inverse Finite Element Modeling (FEM) and Monte Carlo simulation (more specifically, the Latin Hypercube Sampling (LHS) approach). With acceptable accuracy, the inverse FEM (also called one-step FEM) requires a much smaller computational load than the usual incremental FEM and hence can be used to predict quality variations under various conditions. LHS is a statistical method through which the sensitivity analysis can be carried out. The result of the sensitivity analysis has a clear physical meaning and can be used to optimize the die design and/or the process design. Two simulation examples are presented, including drawing a rectangular box and drawing a two-step rectangular box.
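Latin Hypercube Sampling stratifies each input dimension into n equal-probability bins and places exactly one sample in each bin per dimension, so far fewer FEM runs are needed than with plain random sampling. A minimal sketch using SciPy's `qmc` module; the two parameters and their ranges are illustrative, not taken from the stamping study:

```python
import numpy as np
from scipy.stats import qmc

n_samples, n_dims = 8, 2                       # e.g. sheet thickness, friction

sampler = qmc.LatinHypercube(d=n_dims, seed=4)
unit = sampler.random(n=n_samples)             # stratified points in [0, 1)^d

# scale to hypothetical parameter ranges (illustrative, not from the study)
lo = np.array([0.8, 0.05])                     # thickness (mm), friction coeff.
hi = np.array([1.2, 0.15])
params = qmc.scale(unit, lo, hi)               # rows = design points for FEM runs

# LHS property: each of the n equal-probability bins per dimension holds
# exactly one sample
print(np.sort((unit[:, 0] * n_samples).astype(int)))   # [0 1 2 3 4 5 6 7]
```

Each row of `params` would be fed to one inverse-FEM run, and the quality outputs regressed against the inputs to rank the sensitive parameters.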
International Nuclear Information System (INIS)
This paper presents a novel decision-support tool for assessing future generation portfolios in an increasingly uncertain electricity industry. The tool combines optimal generation mix concepts with Monte Carlo simulation and portfolio analysis techniques to determine expected overall industry costs, associated cost uncertainty, and expected CO2 emissions for different generation portfolio mixes. The tool can incorporate complex and correlated probability distributions for estimated future fossil-fuel costs, carbon prices, plant investment costs, and demand, including price elasticity impacts. The intent of this tool is to facilitate risk-weighted generation investment and associated policy decision-making given uncertainties facing the electricity industry. Applications of this tool are demonstrated through a case study of an electricity industry with coal, CCGT, and OCGT facing future uncertainties. Results highlight some significant generation investment challenges, including the impacts of uncertain and correlated carbon and fossil-fuel prices, the role of future demand changes in response to electricity prices, and the impact of construction cost uncertainties on capital intensive generation. The tool can incorporate virtually any type of input probability distribution, and support sophisticated risk assessments of different portfolios, including downside economic risks. It can also assess portfolios against multi-criterion objectives such as greenhouse emissions as well as overall industry costs. - Highlights: ► Present a decision support tool to assist generation investment and policy making under uncertainty. ► Generation portfolios are assessed based on their expected costs, risks, and CO2 emissions. ► There is tradeoff among expected cost, risks, and CO2 emissions of generation portfolios. ► Investment challenges include economic impact of uncertainties and the effect of price elasticity. ► CO2 emissions reduction depends on the mix of
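Correlated input uncertainties of the kind described here are commonly sampled by applying a Cholesky factor of the covariance matrix to independent normal draws. A toy sketch follows; the prices, correlation and linear cost function are invented for illustration, and a real fuel-price model would not be Gaussian:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 10_000

# hypothetical correlated uncertainties: gas price ($/GJ), carbon price ($/t)
mean = np.array([8.0, 25.0])
sd = np.array([2.0, 10.0])
corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])
cov = corr * np.outer(sd, sd)

# correlated normal draws via the Cholesky factor of the covariance
L = np.linalg.cholesky(cov)
draws = mean + rng.standard_normal((n, 2)) @ L.T

# toy linear cost of a gas-heavy portfolio ($/MWh): fuel + carbon pass-through
cost = 1.2 * draws[:, 0] + 0.4 * draws[:, 1] + 10.0

# expected industry cost plus a simple downside-risk measure
print(cost.mean(), np.percentile(cost, 95))
```

Repeating the calculation for different portfolio mixes yields the expected-cost/risk trade-off that the decision-support tool is built around; positive fuel-carbon correlation widens the cost distribution and hence the downside risk.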
A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC).
Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B; Jia, Xun
2015-10-01
Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, considerable effort has been made to realize fast MC dose calculation on graphics processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia's CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon-electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783-97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence can be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in the EGSnrc MC package and PENELOPE's random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogeneous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48-0.53% for the electron beam cases and 0.15-0.17% for the photon beam cases. In terms of efficiency, goMC was ~4-16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was validated by
Specification for the VERA Depletion Benchmark Suite
Energy Technology Data Exchange (ETDEWEB)
Kim, Kang Seog [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2015-12-17
CASL-X-2015-1014-000. Consortium for Advanced Simulation of LWRs. EXECUTIVE SUMMARY: The CASL neutronics simulator MPACT is under development for coupled neutronics and thermal-hydraulics simulation of pressurized water reactors. MPACT includes the ORIGEN-API and an internal depletion module to perform depletion calculations based upon neutron-material reactions and radioactive decay. Validating the depletion capability is challenging because measured data are insufficient; one alternative is a code-to-code comparison on benchmark problems. In this study, a depletion benchmark suite has been developed and a detailed guideline has been provided to obtain meaningful computational outcomes that can be used in the validation of the MPACT depletion capability.
Energy Technology Data Exchange (ETDEWEB)
Becker, N.M. [Los Alamos National Lab., NM (United States); Vanta, E.B. [Wright Laboratory Armament Directorate, Eglin Air Force Base, FL (United States)
1995-05-01
Hydrologic investigations on depleted uranium fate and transport associated with dynamic testing activities were instituted in the 1980s at Los Alamos National Laboratory and Eglin Air Force Base. At Los Alamos, extensive field watershed investigations of soil, sediment, and especially runoff water were conducted. Eglin conducted field investigations and runoff studies similar to those at Los Alamos at former and active test ranges. Laboratory experiments complemented the field investigations at both installations. Mass balance calculations were performed to quantify the mass of expended uranium which had transported away from firing sites. At Los Alamos, it is estimated that more than 90 percent of the uranium still remains in close proximity to firing sites, which has been corroborated by independent calculations. At Eglin, we estimate that 90 to 95 percent of the uranium remains at test ranges. These data demonstrate that uranium moves slowly via surface water, in both semi-arid (Los Alamos) and humid (Eglin) environments.
International Nuclear Information System (INIS)
In a tokamak-type DT fusion reactor, there are various types of slits and ducts in the blanket and the vacuum vessel. Helium production at rewelding locations in the blanket and vacuum vessel, and the nuclear properties of the superconducting TF coil, e.g. the nuclear heating rate in the coil winding pack, are enhanced by radiation streaming through these slits and ducts, and are critical concerns in the shielding design. The decay gamma-ray dose rate around ducts penetrating the blanket and the vacuum vessel is also enhanced by the streaming, which is a critical concern from the viewpoint of human access to the cryostat during maintenance. Evaluating these nuclear properties with good accuracy requires three-dimensional Monte Carlo calculation, which demands long computation times. Therefore, the development of an effective, simple design-evaluation method for radiation streaming is substantially important. This study aims to establish a systematic method for evaluating the nuclear properties of the blanket, the vacuum vessel and the Toroidal Field (TF) coil, taking into account radiation streaming through various types of slits and ducts, based on three-dimensional Monte Carlo calculation using the MCNP code, and for the decay gamma-ray dose rates around the ducts. The present thesis describes three topics in five chapters as follows: 1) In Chapter 2, the results calculated by the Monte Carlo code, MCNP, are compared with those by the Sn code, DOT3.5, for radiation streaming in a tokamak-type fusion reactor, in order to validate the results of the Sn calculation. From this comparison, the uncertainties of the Sn calculation results arising from the ray effect and from the geometric approximation are investigated, to determine whether two-dimensional Sn calculations can be applied instead of Monte Carlo calculations. Through the study, it can be concluded that the
Yeh, Chi-Yuan; Tung, Chuan-Jung; Chao, Tsi-Chain; Lin, Mu-Han; Lee, Chung-Chi
2014-11-01
The purpose of this study was to examine dose distribution of a skull base tumor and surrounding critical structures in response to high dose intensity-modulated radiosurgery (IMRS) with Monte Carlo (MC) simulation using a dual resolution sandwich phantom. The measurement-based Monte Carlo (MBMC) method (Lin et al., 2009) was adopted for the study. The major components of the MBMC technique involve (1) the BEAMnrc code for beam transport through the treatment head of a Varian 21EX linear accelerator, (2) the DOSXYZnrc code for patient dose simulation and (3) an EPID-measured efficiency map which describes the non-uniform fluence distribution of the IMRS treatment beam. For the simulated case, five isocentric 6 MV photon beams were designed to deliver a total dose of 1200 cGy in two fractions to the skull base tumor. A sandwich phantom for the MBMC simulation was created based on the patient's CT scan of a skull base tumor [gross tumor volume (GTV)=8.4 cm3] near the right 8th cranial nerve. The phantom, which consisted of a 1.2-cm-thick skull base region with a voxel resolution of 0.05×0.05×0.1 cm3, was sandwiched between 0.05×0.05×0.3 cm3 slices of a head phantom. A coarser 0.2×0.2×0.3 cm3 single resolution (SR) phantom was also created for comparison with the sandwich phantom. A particle history of 3×10⁸ for each beam was used for simulations of both the SR and the sandwich phantoms to achieve a statistical uncertainty of <2%. Our study showed that the planning target volume (PTV) receiving at least 95% of the prescribed dose (VPTV95) was 96.9%, 96.7% and 99.9% for the TPS, SR, and sandwich phantom, respectively. The maximum and mean doses to large organs such as the PTV, brain stem, and parotid gland for the TPS, SR and sandwich MC simulations did not show any significant difference; however, significant dose differences were observed for very small structures like the right 8th cranial nerve, right cochlea, right malleus and right semicircular canal. Dose
International Nuclear Information System (INIS)
Small animal PET allows qualitative assessment and quantitative measurement of biochemical processes in vivo, but the accuracy and reproducibility of imaging results can be affected by several parameters. The first aim of this study was to investigate the performance of different CT-based attenuation correction strategies and assess the resulting impact on PET images. The absorbed dose in different tissues caused by scanning procedures was also discussed to minimize biologic damage generated by radiation exposure due to PET/CT scanning. A small animal PET/CT system was modeled based on Monte Carlo simulation to generate imaging results and dose distributions. Three energy mapping methods, including the bilinear scaling method, the dual-energy method and the hybrid method which combines the kVp conversion and the dual-energy method, were investigated comparatively by assessing the accuracy of estimating the linear attenuation coefficient at 511 keV and the bias introduced into PET quantification results due to CT-based attenuation correction. Our results showed that the hybrid method outperformed the bilinear scaling method, while the dual-energy method achieved the highest accuracy among the three energy mapping methods. Overall, the accuracy of the PET quantification results follows a similar trend to that of the estimation of linear attenuation coefficients, whereas the differences between the three methods are more obvious in the estimation of linear attenuation coefficients than in the PET quantification results. With regard to radiation exposure from CT, the absorbed dose ranged between 7.29-45.58 mGy for the 50-kVp scan and between 6.61-39.28 mGy for the 80-kVp scan. For an 18F radioactivity concentration of 1.86×10⁵ Bq/ml, the PET absorbed dose was around 24 cGy for a tumor with a target-to-background ratio of 8. The radiation levels for CT scans are not lethal to the animal, but concurrent use of PET in a longitudinal study can increase the risk of biological effects. The
Depletion Interactions in a Cylindric Pipeline with a Small Shape Change
Institute of Scientific and Technical Information of China (English)
LI Chun-Shu; GAO Hai-Xia; XIAO Chang-Ming
2007-01-01
Stressed by external forces, a cylindric pipeline may deform into an elliptic one. To expose the effect of a small shape change of the pipeline on the depletion interactions, both the depletion potentials and the depletion forces in hard sphere systems confined by a cylindric or an elliptic pipeline are studied by Monte Carlo simulations. The numerical results show that the depletion interactions are strongly affected by even a small change in the shape of the pipeline. Furthermore, it is also found that the depletion interactions are strengthened when the short axis of the elliptic pipeline is decreased.
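In the Asakura-Oosawa picture, the depletion potential between two large hard spheres is proportional to the overlap volume of their exclusion zones, a quantity that is straightforward to estimate by hit-or-miss Monte Carlo integration. The sketch below uses this simpler unconfined geometry (not the cylindric/elliptic pipeline of the paper), with invented radii and depletant density:

```python
import numpy as np

rng = np.random.default_rng(1)

def overlap_volume(d, r_ex, n=200_000):
    """Hit-or-miss MC estimate of the overlap volume of two spheres of
    radius r_ex whose centres are a distance d apart."""
    pts = rng.uniform(-r_ex, r_ex, size=(n, 3))   # box around the first sphere
    box_vol = (2.0 * r_ex) ** 3
    in_first = (pts ** 2).sum(axis=1) < r_ex ** 2
    in_second = ((pts - [d, 0.0, 0.0]) ** 2).sum(axis=1) < r_ex ** 2
    return box_vol * np.mean(in_first & in_second)

# Asakura-Oosawa estimate: W(d) = -rho_depletant * kT * V_overlap(d)
R, r_s, rho = 1.0, 0.1, 5.0     # big-sphere radius, depletant radius, number density
r_ex = R + r_s                  # radius of the exclusion sphere
for d in (2.0, 2.1, 2.2):       # centre-to-centre separations
    W = -rho * overlap_volume(d, r_ex)   # depletion potential in units of kT
    print(f"d = {d:.1f}: W = {W:.3f} kT")
```

The attraction is strongest at contact (d = 2R) and vanishes once the exclusion spheres no longer overlap (d = 2R + 2r_s); confinement, as in the pipeline study, modifies this picture by restricting where depletants can be inserted.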
Institute of Scientific and Technical Information of China (English)
Jiang Wei; Xiang Haige
2004-01-01
This paper addresses the issues of channel estimation in a Multiple-Input/Multiple-Output (MIMO) system. Markov Chain Monte Carlo (MCMC) method is employed to jointly estimate the Channel State Information (CSI) and the transmitted signals. The deduced algorithms can work well under circumstances of low Signal-to-Noise Ratio (SNR). Simulation results are presented to demonstrate their effectiveness.
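As a minimal illustration of the MCMC idea (far simpler than the joint MIMO channel/signal estimation of the paper), the sketch below uses a random-walk Metropolis sampler to estimate a single scalar channel coefficient from known BPSK pilot symbols; all parameters are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy flat-fading model y = h*x + n; estimate h from known BPSK pilots
h_true, sigma = 0.8, 0.5
x = rng.choice([-1.0, 1.0], size=50)              # pilot symbols
y = h_true * x + sigma * rng.normal(size=50)      # received samples

def log_post(h):
    # Gaussian likelihood with a flat prior on h
    return -np.sum((y - h * x) ** 2) / (2.0 * sigma ** 2)

samples, h = [], 0.0
for _ in range(5000):                             # random-walk Metropolis
    prop = h + 0.1 * rng.normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(h):
        h = prop
    samples.append(h)

h_hat = float(np.mean(samples[1000:]))            # posterior mean after burn-in
print(h_hat)
```

In the full MIMO problem the chain additionally samples the unknown transmitted symbols, alternating between channel and data draws, which is what makes the approach viable at low SNR.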
Grevillot, L.; Bertrand, D.; Dessy, F.; Freud, N.; Sarrut, D.
2012-07-01
Active scanning delivery systems take full advantage of ion beams to best conform to the tumor and to spare surrounding healthy tissues; however, it is also a challenging technique for quality assurance. In this perspective, we upgraded the GATE/GEANT4 Monte Carlo platform in order to recalculate the treatment planning system (TPS) dose distributions for active scanning systems. A method that allows evaluating the TPS dose distributions with the GATE Monte Carlo platform has been developed and applied to the XiO TPS (Elekta), for the IBA proton pencil beam scanning (PBS) system. First, we evaluated the specificities of each dose engine. A dose-conversion scheme that allows one to convert dose to medium into dose to water was implemented within GATE. Specific test cases in homogeneous and heterogeneous configurations allowed for the estimation of the differences between the beam models implemented in XiO and GATE. Finally, dose distributions of a prostate treatment plan were compared. In homogeneous media, a satisfactory agreement was generally obtained between XiO and GATE. The maximum stopping power difference of 3% occurred in a human tissue of 0.9 g cm-3 density and led to a significant range shift. Comparisons in heterogeneous configurations pointed out the limits of the TPS dose calculation accuracy and the superiority of Monte Carlo simulations. The necessity of computing dose to water in our Monte Carlo code for comparisons with TPSs is also presented. Finally, the new capabilities of the platform are applied to a prostate treatment plan and dose differences between both dose engines are analyzed in detail. This work presents a generic method to compare TPS dose distributions with the GATE Monte Carlo platform. It is noteworthy that GATE is also a convenient tool for imaging applications, therefore opening new research possibilities for the PBS modality.
Ren, Lixia; He, Li; Lu, Hongwei; Chen, Yizhong
2016-08-01
A new Monte Carlo-based interval transformation analysis (MCITA) is used in this study for multi-criteria decision analysis (MCDA) of naphthalene-contaminated groundwater management strategies. The analysis can be conducted when input data such as total cost, contaminant concentration and health risk are represented as intervals. Compared to traditional MCDA methods, MCITA-MCDA has the advantages of (1) dealing with the inexactness of input data represented as intervals, (2) mitigating computational time due to the introduction of the Monte Carlo sampling method, and (3) identifying the most desirable management strategies under data uncertainty. A real-world case study is employed to demonstrate the performance of this method. A set of inexact management alternatives is considered in each duration on the basis of four criteria. Results indicated that the most desirable management strategy lay in action 15 for the 5-year, action 8 for the 10-year, action 12 for the 15-year, and action 2 for the 20-year management.
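The interval-transformation step can be illustrated by drawing each interval-valued criterion uniformly within its bounds and ranking alternatives by a weighted score. The alternatives, intervals and weights below are invented for illustration and do not come from the case study:

```python
import numpy as np

rng = np.random.default_rng(3)

# Interval-valued criteria for three hypothetical alternatives:
# cost (M$), contaminant concentration (mg/L), health-risk index
alternatives = {
    "action A": {"cost": (2.0, 3.0), "conc": (0.10, 0.20), "risk": (0.20, 0.40)},
    "action B": {"cost": (1.5, 3.5), "conc": (0.05, 0.30), "risk": (0.10, 0.60)},
    "action C": {"cost": (2.5, 2.8), "conc": (0.15, 0.18), "risk": (0.30, 0.35)},
}
weights = {"cost": 0.5, "conc": 0.3, "risk": 0.2}   # assumed criterion weights

def score(criteria, n=20_000):
    """Monte Carlo weighted score: draw each criterion uniformly within its
    interval; a lower score is more desirable."""
    total = np.zeros(n)
    for crit, (lo, hi) in criteria.items():
        total += weights[crit] * rng.uniform(lo, hi, size=n)
    return total

for name, criteria in alternatives.items():
    s = score(criteria)
    print(f"{name}: mean = {s.mean():.3f}, std = {s.std():.3f}")
```

Wider input intervals translate directly into a wider score distribution, so the Monte Carlo draws expose both the expected ranking and its robustness to data inexactness.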
Obot, I. B.; Kaya, Savaş; Kaya, Cemal; Tüzün, Burak
2016-06-01
DFT and Monte Carlo simulation were performed on three Schiff bases namely, 4-(4-bromophenyl)-N′-(4-methoxybenzylidene)thiazole-2-carbohydrazide (BMTC), 4-(4-bromophenyl)-N′-(2,4-dimethoxybenzylidene)thiazole-2-carbohydrazide (BDTC), 4-(4-bromophenyl)-N′-(4-hydroxybenzylidene)thiazole-2-carbohydrazide (BHTC), recently studied as corrosion inhibitors for steel in acid medium. Electronic parameters relevant to their inhibition activity such as EHOMO, ELUMO, energy gap (ΔE), hardness (η), softness (σ), absolute electronegativity (χ), proton affinity (PA) and nucleophilicity (ω), etc., were computed and discussed. Monte Carlo simulations were applied to search for the most stable configuration and adsorption energies for the interaction of the inhibitors with the Fe (110) surface. The theoretical data obtained are in most cases in agreement with experimental results.
TH-A-18C-04: Ultrafast Cone-Beam CT Scatter Correction with GPU-Based Monte Carlo Simulation
Energy Technology Data Exchange (ETDEWEB)
Xu, Y [UT Southwestern Medical Center, Dallas, TX (United States); Southern Medical University, Guangzhou (China); Bai, T [UT Southwestern Medical Center, Dallas, TX (United States); Xi' an Jiaotong University, Xi' an (China); Yan, H; Ouyang, L; Wang, J; Pompos, A; Jiang, S; Jia, X [UT Southwestern Medical Center, Dallas, TX (United States); Zhou, L [Southern Medical University, Guangzhou (China)
2014-06-15
Purpose: Scatter artifacts severely degrade image quality of cone-beam CT (CBCT). We present an ultrafast scatter correction framework using GPU-based Monte Carlo (MC) simulation and a prior patient CT image, aiming at automatically finishing the whole process, including both scatter correction and reconstruction, within 30 seconds. Methods: The method consists of six steps: 1) FDK reconstruction using raw projection data; 2) rigid registration of the planning CT to the FDK results; 3) MC scatter calculation at sparse view angles using the planning CT; 4) interpolation of the calculated scatter signals to other angles; 5) removal of scatter from the raw projections; 6) FDK reconstruction using the scatter-corrected projections. In addition to using the GPU to accelerate MC photon simulations, we also use a small number of photons and a down-sampled CT image in the simulation to further reduce computation time. A novel denoising algorithm is used to eliminate MC scatter noise caused by low photon numbers. The method is validated on head-and-neck cases with simulated and clinical data. Results: We have studied the impacts of photon histories and volume down-sampling factors on the accuracy of scatter estimation. A Fourier analysis was conducted to show that scatter images calculated at 31 angles are sufficient to restore those at all angles with <0.1% error. For the simulated case with a resolution of 512×512×100, we simulated 10M photons per angle. The total computation time was 23.77 seconds on an Nvidia GTX Titan GPU. The scatter-induced shading/cupping artifacts are substantially reduced, and the average HU error of a region-of-interest is reduced from 75.9 to 19.0 HU. Similar results were found for a real patient case. Conclusion: A practical ultrafast MC-based CBCT scatter correction scheme is developed. The whole process of scatter correction and reconstruction is accomplished within 30 seconds. This study is supported in part by NIH (1R01CA154747-01), The Core Technology Research
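Steps 4 and 5 of the framework, interpolating MC-computed scatter from sparse view angles to all projection angles and subtracting it from the raw projections, can be sketched with synthetic data. The smooth scatter model, detector size and noise level below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(4)

n_angles, n_sparse = 360, 31    # scatter computed at 31 sparse angles, per the abstract
det = 64                        # hypothetical 1-D detector row

# Synthetic "MC-computed" scatter at sparse angles (smooth and low-frequency,
# which is why sparse-angle sampling suffices)
sparse_angles = np.linspace(0, 2 * np.pi, n_sparse, endpoint=False)
scatter_sparse = 10 + 2 * np.cos(sparse_angles)[:, None] * np.ones((1, det))

# Step 4: interpolate scatter to all projection angles (periodic in angle)
all_angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
scatter_all = np.empty((n_angles, det))
for j in range(det):
    scatter_all[:, j] = np.interp(all_angles, sparse_angles, scatter_sparse[:, j],
                                  period=2 * np.pi)

# Step 5: subtract the interpolated scatter from the raw projections
raw = 100 + scatter_all + rng.normal(0, 0.1, size=(n_angles, det))
corrected = raw - scatter_all
print(corrected.mean())
```

Because scatter varies slowly with gantry angle, linear interpolation from 31 angles reconstructs the full angular scatter distribution with negligible error, consistent with the <0.1% figure reported in the abstract.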
Stratospheric ozone depletion.
Rowland, F Sherwood
2006-05-29
Solar ultraviolet radiation creates an ozone layer in the atmosphere which in turn completely absorbs the most energetic fraction of this radiation. This process both warms the air, creating the stratosphere between 15 and 50 km altitude, and protects the biological activities at the Earth's surface from this damaging radiation. In the last half-century, the chemical mechanisms operating within the ozone layer have been shown to include very efficient catalytic chain reactions involving the chemical species HO, HO2, NO, NO2, Cl and ClO. The NOX and ClOX chains involve the emission at Earth's surface of stable molecules in very low concentration (N2O, CCl2F2, CCl3F, etc.) which wander in the atmosphere for as long as a century before absorbing ultraviolet radiation and decomposing to create NO and Cl in the middle of the stratospheric ozone layer. The growing emissions of synthetic chlorofluorocarbon molecules cause a significant diminution in the ozone content of the stratosphere, with the result that more solar ultraviolet-B radiation (290-320 nm wavelength) reaches the surface. This ozone loss occurs in the temperate zone latitudes in all seasons, and especially drastically since the early 1980s in the south polar springtime: the 'Antarctic ozone hole'. The chemical reactions causing this ozone depletion are primarily based on atomic Cl and ClO, the product of its reaction with ozone. The further manufacture of chlorofluorocarbons has been banned by the 1992 revisions of the 1987 Montreal Protocol of the United Nations. Atmospheric measurements have confirmed that the Protocol has been very successful in reducing further emissions of these molecules. Recovery of the stratosphere to the ozone conditions of the 1950s will occur slowly over the rest of the twenty-first century because of the long lifetime of the precursor molecules. PMID:16627294
Shear-affected depletion interaction
July, C.; Kleshchanok, D.; Lang, P.R.
2012-01-01
We investigate the influence of flow fields on the strength of the depletion interaction caused by disc-shaped depletants. At low mass concentration of discs, it is possible to continuously decrease the depth of the depletion potential by increasing the applied shear rate until the depletion force i
te Brake, B.; Hanssen, R.F.; van der Ploeg, M.J.; de Rooij, G.H.
2013-01-01
Satellite-based radar interferometry is a technique capable of measuring small surface elevation changes at large scales and with a high resolution. In vadose zone hydrology, it has been recognized for a long time that surface elevation changes due to swell and shrinkage of clayey soils can serve as
Monte Carlo techniques in radiation therapy
Verhaegen, Frank
2013-01-01
Modern cancer treatment relies on Monte Carlo simulations to help radiotherapists and clinical physicists better understand and compute radiation dose from imaging devices as well as exploit four-dimensional imaging data. With Monte Carlo-based treatment planning tools now available from commercial vendors, a complete transition to Monte Carlo-based dose calculation methods in radiotherapy could likely take place in the next decade. Monte Carlo Techniques in Radiation Therapy explores the use of Monte Carlo methods for modeling various features of internal and external radiation sources, including light ion beams. The book, the first of its kind, addresses applications of the Monte Carlo particle transport simulation technique in radiation therapy, mainly focusing on external beam radiotherapy and brachytherapy. It presents the mathematical and technical aspects of the methods in particle transport simulations. The book also discusses the modeling of medical linacs and other irradiation devices; issues specific...
International Nuclear Information System (INIS)
Through Monte Carlo (MC) simulation of the 6 and 10 MV flattening-filter-free (FFF) beams from the Varian TrueBeam accelerator, this study aims to find the best incident electron distribution for further studying the small-field characteristics of these beams. By incorporating the training materials of Varian on the geometry and material parameters of the TrueBeam Linac head, the 6 and 10 MV FFF beams were modelled using the BEAMnrc and DOSXYZnrc codes, where the percentage depth dose (PDD) and off-axis ratio (OAR) curves of fields ranging from 4 × 4 to 40 × 40 cm2 were simulated for both energies by adjusting the incident beam energy, radial intensity distribution and angular spread, respectively. The beam quality and relative output factor (ROF) were calculated. The simulations and measurements were compared using the Gamma analysis method provided by the Verisoft program (PTW, Freiburg, Germany), based on which the optimal MC model input parameters were selected and further used to investigate the beam characteristics of small fields. The Full Width at Half Maximum (FWHM), mono-energetic energy and angular spread of the resultant incident Gaussian radial intensity electron distribution were 0.75 mm, 6.1 MeV and 0.9° for the nominal 6 MV FFF beam, and 0.7 mm, 10.8 MeV and 0.3° for the nominal 10 MV FFF beam, respectively. The simulations were mostly comparable to the measurements. Gamma criteria of 1 mm/1 % (local dose) can be met by all PDDs of fields larger than 1 × 1 cm2, and by all OARs of fields no larger than 20 × 20 cm2; otherwise criteria of 1 mm/2 % can be fulfilled. Our MC simulated ROFs agreed well with the measured ROFs of various field sizes (the discrepancies were less than 1 %), except for the 1 × 1 cm2 field. The MC simulation agrees well with the measurement and the proposed model parameters can be clinically used for further dosimetric studies of the 6 and 10 MV FFF beams
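The Gamma analysis used for such simulation-versus-measurement comparisons can be illustrated with a simplified 1-D global implementation. The profiles and criteria below are synthetic; clinical tools such as Verisoft operate on 2-D/3-D dose grids with local-dose normalization options:

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, positions, dta=1.0, dd=0.01):
    """Simplified 1-D global gamma analysis: dta is the distance-to-agreement
    in mm, dd the dose-difference criterion as a fraction of the reference
    maximum. Returns the fraction of reference points with gamma <= 1."""
    d_max = dose_ref.max()
    gammas = []
    for x_r, d_r in zip(positions, dose_ref):
        dist2 = ((positions - x_r) / dta) ** 2           # squared distance term
        diff2 = ((dose_eval - d_r) / (dd * d_max)) ** 2  # squared dose term
        gammas.append(np.sqrt(np.min(dist2 + diff2)))    # min over evaluated points
    return float(np.mean(np.array(gammas) <= 1.0))

x = np.arange(0.0, 100.0, 0.5)              # detector positions in mm
ref = np.exp(-((x - 50.0) / 20.0) ** 2)     # synthetic "measured" profile
ev = np.exp(-((x - 50.2) / 20.0) ** 2)      # "simulated" profile, 0.2 mm shift
print(gamma_pass_rate(ref, ev, x))          # well within the 1 mm / 1% criteria
```

The gamma index combines spatial and dosimetric tolerance into a single dimensionless quantity, which is why it is the standard metric for comparing simulated and measured dose curves.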
Energy Technology Data Exchange (ETDEWEB)
Reboredo, Fernando A.; Kim, Jeongnim [Materials Science and Technology Division, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States)
2014-02-21
A statistical method is derived for the calculation of thermodynamic properties of many-body systems at low temperatures. This method is based on the self-healing diffusion Monte Carlo method for complex functions [F. A. Reboredo, J. Chem. Phys. 136, 204101 (2012)] and some ideas of the correlation function Monte Carlo approach [D. M. Ceperley and B. Bernu, J. Chem. Phys. 89, 6316 (1988)]. In order to allow the evolution in imaginary time to describe the density matrix, we remove the fixed-node restriction using complex antisymmetric guiding wave functions. In the process we obtain a parallel algorithm that optimizes a small subspace of the many-body Hilbert space to provide maximum overlap with the subspace spanned by the lowest-energy eigenstates of a many-body Hamiltonian. We show in a model system that the partition function is progressively maximized within this subspace. We show that the subspace spanned by the small basis systematically converges towards the subspace spanned by the lowest energy eigenstates. Possible applications of this method for calculating the thermodynamic properties of many-body systems near the ground state are discussed. The resulting basis can also be used to accelerate the calculation of the ground or excited states with quantum Monte Carlo.
Asadi, Somayeh; Masoudi, S Farhad; Rahmani, Faezeh
2014-01-01
Materials of high atomic number, such as gold, provide a high probability of photon interaction via the photoelectric effect during radiation therapy. In cancer therapy, the object of brachytherapy, as a kind of radiotherapy, is to deliver an adequate radiation dose to the tumor while sparing surrounding healthy tissue. Several studies demonstrated that the preferential accumulation of gold nanoparticles within the tumor can enhance the dose absorbed by the tumor without increasing the radiation dose delivered externally. Accordingly, the time required for tumor irradiation decreases, as the adequate radiation dose estimated for the tumor is provided following this method. The dose delivered to healthy tissue is reduced when the irradiation time is decreased. Here, GNP effects on choroidal melanoma dosimetry are discussed in a Monte Carlo study. Monte Carlo ophthalmic brachytherapy dosimetry is usually studied by simulation of a water phantom. Considering the composition and density of eye material instead of water in thes...
Energy Technology Data Exchange (ETDEWEB)
Wuerl, Matthias
2016-08-01
Matthias Wuerl presents two essential steps to implement offline PET monitoring of proton dose delivery at a clinical facility, namely the setting up of an accurate Monte Carlo model of the clinical beamline and the experimental validation of positron emitter production cross-sections. In the first part, the field size dependence of the dose output is described for scanned proton beams. Both the Monte Carlo and an analytical computational beam model were able to accurately predict target dose, while the latter tends to overestimate dose in normal tissue. In the second part, the author presents PET measurements of different phantom materials, which were activated by the proton beam. The results indicate that for an irradiation with a high number of protons for the sake of good statistics, dead time losses of the PET scanner may become important and lead to an underestimation of positron-emitter production yields.
PCXMC. A PC-based Monte Carlo program for calculating patient doses in medical x-ray examinations
International Nuclear Information System (INIS)
The report describes PCXMC, a Monte Carlo program for calculating patients' organ doses and the effective dose in medical x-ray examinations. The organs considered are: the active bone marrow, adrenals, brain, breasts, colon (upper and lower large intestine), gall bladder, heart, kidneys, liver, lungs, muscle, oesophagus, ovaries, pancreas, skeleton, skin, small intestine, spleen, stomach, testes, thymus, thyroid, urinary bladder, and uterus. (42 refs.)
Energy Technology Data Exchange (ETDEWEB)
Zhuang Guilin, E-mail: glzhuang@zjut.edu.cn [Institute of Industrial Catalysis, College of Chemical Engineering and Materials Science, Zhejiang University of Technology, Hangzhou 310032 (China); Chen Wulin [Institute of Industrial Catalysis, College of Chemical Engineering and Materials Science, Zhejiang University of Technology, Hangzhou 310032 (China); Zheng Jun [Center of Modern Experimental Technology, Anhui University, Hefei 230039 (China); Yu Huiyou [Institute of Industrial Catalysis, College of Chemical Engineering and Materials Science, Zhejiang University of Technology, Hangzhou 310032 (China); Wang Jianguo, E-mail: jgw@zjut.edu.cn [Institute of Industrial Catalysis, College of Chemical Engineering and Materials Science, Zhejiang University of Technology, Hangzhou 310032 (China)
2012-08-15
A series of lanthanide coordination polymers have been obtained through the hydrothermal reaction of N-(sulfoethyl) iminodiacetic acid (H{sub 3}SIDA) and Ln(NO{sub 3}){sub 3} (Ln=La, 1; Pr, 2; Nd, 3; Gd, 4). Crystal structure analysis shows that the lanthanide ions affect the coordination number, bond length and dimension of compounds 1-4, which reveals that their structural diversity can be attributed to the effect of lanthanide contraction. Furthermore, the combination of magnetic measurements with quantum Monte Carlo (QMC) studies shows that the coupling parameters between two adjacent Gd{sup 3+} ions for anti-anti and syn-anti carboxylate bridges are -1.0 × 10{sup -3} and -5.0 × 10{sup -3} cm{sup -1}, respectively, which reveals weak antiferromagnetic interaction in 4. - Graphical abstract: Four lanthanide coordination polymers with N-(sulfoethyl) iminodiacetic acid were obtained under hydrothermal conditions and reveal weak antiferromagnetic coupling between two Gd{sup 3+} ions by quantum Monte Carlo studies. Highlights: • Four lanthanide coordination polymers of the H{sub 3}SIDA ligand were obtained. • Lanthanide ions play an important role in their structural diversity. • Magnetic measurements show that compound 4 features antiferromagnetic properties. • Quantum Monte Carlo studies reveal the coupling parameters of two Gd{sup 3+} ions.
Gontcharova, Viktoria; Youn, Eunseog; Wolcott, Randall D; Hollister, Emily B; Gentry, Terry J; Dowd, Scot E
2010-01-01
The existing chimera detection programs are not specifically designed for "next generation" sequence data. Technologies like Roche 454 FLX and Titanium have been adapted over the past years, especially with the introduction of bacterial tag-encoded FLX/Titanium amplicon pyrosequencing methodologies, to produce over one million 250-600 bp 16S rRNA gene reads that need to be depleted of chimeras prior to downstream analysis. Meeting the needs of basic scientists who are venturing into high-throughput microbial diversity studies, such as those based upon pyrosequencing, and specifically providing a solution for Windows users, the B2C2 software is designed to accept files containing large multi-FASTA formatted sequences and screen them for possible chimeras in a high-throughput fashion. The graphical user interface (GUI) is also able to batch process multiple files. When compared to popular chimera screening software, B2C2 performed as well or better while dramatically decreasing the time required to generate and screen results. Even average computer users are able to interact with the Windows .Net GUI-based application and define the stringency to which the analysis should be done. B2C2 may be downloaded from http://www.researchandtesting.com/B2C2. PMID:21339894
Casas, Ricard; Cardiel-Sas, Laia; Castander, Francisco J.; Jiménez, Jorge; de Vicente, Juan
2014-08-01
The focal plane of the PAU camera is composed of eighteen 2K x 4K CCDs. These devices, plus four spares, were provided by the Japanese company Hamamatsu Photonics K.K. with type no. S10892-04(X). These detectors are 200 μm thick, fully depleted and back illuminated, with an n-type silicon base. They have been built with a specific coating to be sensitive in the range from 300 to 1,100 nm. Their square pixel size is 15 μm. The read-out system consists of a Monsoon controller (NOAO) and the panVIEW software package. The default CCD read-out speed is 133 kpixel/s. This is the value used in the calibration process. Before installing these devices in the camera focal plane, they were characterized using the facilities of the ICE (CSIC-IEEC) and IFAE on the UAB Campus in Bellaterra (Barcelona, Catalonia, Spain). The basic tests performed for all CCDs were to obtain the photon transfer curve (PTC), the charge transfer efficiency (CTE) using X-rays and the EPER method, linearity, read-out noise, dark current, persistence, cosmetics and quantum efficiency. The X-ray images were also used for the analysis of the charge diffusion for different substrate voltages (VSUB). Regarding the cosmetics, and in addition to white and dark pixels, some patterns were also found. The first one, which appears in all devices, is the presence of half circles in the external edges. The origin of this pattern can be related to the assembly process. A second one appears in the dark images, and shows bright arcs connecting corners along the vertical axis of the CCD. This feature appears in all CCDs in exactly the same position, so our guess is that the pattern is due to electrical fields. Finally, and in just two devices, there is a spot with wavelength dependence whose origin could be the result of a defective coating process.
Fast Monte Carlo based joint iterative reconstruction for simultaneous 99mTc/ 123I SPECT imaging.
Ouyang, Jinsong; El Fakhri, Georges; Moore, Stephen C
2007-08-01
Simultaneous 99mTc/123I SPECT allows the assessment of two physiological functions under identical conditions. The separation of these radionuclides is difficult, however, because their energies are close. Most energy-window-based scatter correction methods do not fully model either physical factors or patient-specific activity and attenuation distributions. We have developed a fast Monte Carlo (MC) simulation-based multiple-radionuclide and multiple-energy joint ordered-subset expectation-maximization (JOSEM) iterative reconstruction algorithm, MC-JOSEM. MC-JOSEM simultaneously corrects for scatter and cross talk as well as detector response within the reconstruction algorithm. We evaluated MC-JOSEM for simultaneous brain perfusion (99mTc-HMPAO) and neurotransmission (123I-altropane) SPECT. MC simulations of 99mTc and 123I studies were generated separately and then combined to mimic simultaneous 99mTc/123I SPECT. All the details of photon transport through the brain, the collimator, and the detector, including Compton and coherent scatter, septal penetration, and backscatter from components behind the crystal, were modeled. We reconstructed images from simultaneous dual-radionuclide projections in three ways. First, we reconstructed the photopeak-energy-window projections (with an asymmetric energy window for 123I) using the standard ordered-subsets expectation-maximization algorithm (NSC-OSEM). Second, we used standard OSEM to reconstruct 99mTc photopeak-energy-window projections, while including an estimate of scatter from a Compton-scatter energy window (SC-OSEM). Third, we jointly reconstructed both 99mTc and 123I images using projection data associated with two photopeak energy windows and an intermediate-energy window using MC-JOSEM. For 15 iterations of reconstruction, the bias and standard deviation of 99mTc activity estimates in several brain structures were calculated for NSC-OSEM, SC-OSEM, and MC-JOSEM, using images reconstructed from primary
Seipt, D; Marklund, M; Bulanov, S S
2016-01-01
The interaction of charged particles and photons with intense electromagnetic fields gives rise to multi-photon Compton and Breit-Wheeler processes. These are usually described in the framework of the external field approximation, where the electromagnetic field is assumed to have infinite energy. However, the multi-photon nature of these processes implies the absorption of a significant number of photons, which scales as the external field amplitude cubed. As a result, the interaction of a highly charged electron bunch with an intense laser pulse can lead to significant depletion of the laser pulse energy, thus rendering the external field approximation invalid. We provide relevant estimates for this depletion and find it to become important in the interaction between fields of amplitude $a_0 \\sim 10^3$ and electron bunches with charges of the order of nC.
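The depletion argument above can be illustrated with a back-of-the-envelope sketch (Python). It assumes only the scaling stated in the abstract, that the number of photons absorbed per electron grows as a0 cubed; the order-one prefactor, the photon energy, and the pulse energy are hypothetical placeholder values, not numbers from the paper.

```python
import math

E_CHARGE = 1.602e-19      # elementary charge, C


def depletion_fraction(bunch_charge_c, a0, photon_energy_ev,
                       pulse_energy_j, prefactor=1.0):
    """Toy estimate of the fraction of laser pulse energy absorbed,
    assuming photons absorbed per electron ~ prefactor * a0**3."""
    n_electrons = bunch_charge_c / E_CHARGE
    photons_per_electron = prefactor * a0 ** 3
    absorbed_j = (n_electrons * photons_per_electron
                  * photon_energy_ev * E_CHARGE)
    return absorbed_j / pulse_energy_j


# 1 nC bunch, a0 = 1e3, 1.55 eV (800 nm) photons, 10 J pulse -- all illustrative
frac = depletion_fraction(1e-9, 1e3, 1.55, 10.0)
```

With these placeholder numbers the bunch would absorb on the order of 15% of the pulse energy, i.e. depletion is no longer negligible, which is the qualitative point of the abstract.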
Capital expenditure and depletion
Energy Technology Data Exchange (ETDEWEB)
Rech, O.; Saniere, A
2003-07-01
In the future, the increase in oil demand will be covered for the most part by non conventional oils, but conventional sources will continue to represent a preponderant share of the world oil supply. Their depletion represents a complex challenge involving technological, economic and political factors. At the same time, there is reason for concern about the decrease in exploration budgets at the major oil companies. (author)
Directory of Open Access Journals (Sweden)
Kohei Arai
2013-04-01
Full Text Available A comparative study of linear and nonlinear mixed-pixel models, in which a pixel of a remote sensing satellite image is composed of plural ground cover materials mixed together, is conducted for remote sensing satellite image analysis. The mixed-pixel models are based on Cierniewski's ground surface reflectance model. The comparative study is conducted by means of Monte Carlo Ray Tracing (MCRT) simulations. Through the simulation study, the difference between the linear and nonlinear mixed-pixel models is clarified. It is also found that the simulation model is validated.
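The linear/nonlinear distinction can be sketched numerically (Python). The linear model is the standard area-weighted sum of endmember reflectances; the bilinear cross term standing in for multiple scattering between cover types is purely illustrative and is not the Cierniewski reflectance model used in the study.

```python
import math


def linear_mix(fractions, reflectances):
    """Linear mixed pixel: area-weighted sum of endmember reflectances."""
    return sum(f * r for f, r in zip(fractions, reflectances))


def nonlinear_mix(fractions, reflectances, k=0.2):
    """Adds an illustrative bilinear interaction term for each endmember
    pair (k is a hypothetical interaction strength, not a fitted value)."""
    value = linear_mix(fractions, reflectances)
    n = len(fractions)
    for i in range(n):
        for j in range(i + 1, n):
            value += (k * fractions[i] * fractions[j]
                      * math.sqrt(reflectances[i] * reflectances[j]))
    return value


# half soil (r = 0.2), half vegetation (r = 0.4) -- illustrative endmembers
r_linear = linear_mix([0.5, 0.5], [0.2, 0.4])
r_nonlinear = nonlinear_mix([0.5, 0.5], [0.2, 0.4])
```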
Edimo, Paul; Kwato Njock, M.G.; Vynckier, Stefaan
2013-01-01
The purpose of the present study is to perform a clinical validation of a new commercial Monte Carlo (MC) based treatment planning system (TPS) for electron beams, i.e. the XiO 4.60 electron MC (XiO eMC). Firstly, MC models for electron beams (4, 8, 12 and 18MeV) have been simulated using BEAMnrc user code and validated by measurements in a homogeneous water phantom. Secondly, these BEAMnrc models have been set as the reference tool to evaluate the ability of XiO eMC to reproduce dose perturb...
International Nuclear Information System (INIS)
We have established a dynamic scenario quantification method based on the coupling of a Continuous Markov Monte Carlo (CMMC) method and a plant thermal-hydraulics analysis code for level 2 PSA (probabilistic safety assessment). This paper presents a coupling model that obtains the dynamic scenario quantification at a reasonable computational cost. The PLOHS (protected-loss-of-heat-sink) accident of a liquid sodium fast reactor is selected as the level 2 PSA scenario in the model. Furthermore, we also discuss methods of categorizing the quantification results, because the coupling method differs widely from the existing event tree method. (author)
Fully Depleted Charge-Coupled Devices
International Nuclear Information System (INIS)
We have developed fully depleted, back-illuminated CCDs that build upon earlier research and development efforts directed towards technology development of silicon-strip detectors used in high-energy-physics experiments. The CCDs are fabricated on the same type of high-resistivity, float-zone-refined silicon that is used for strip detectors. The use of high-resistivity substrates allows for thick depletion regions, on the order of 200-300 um, with corresponding high detection efficiency for near-infrared and soft x-ray photons. We compare the fully depleted CCD to the p-i-n diode upon which it is based, and describe the use of fully depleted CCDs in astronomical and x-ray imaging applications
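The dependence of depletion depth on substrate doping and bias sketched below uses the standard one-sided abrupt-junction formula d = sqrt(2 ε V / (q N_d)); the donor density is a representative value for high-resistivity float-zone silicon, assumed for illustration rather than taken from this record.

```python
import math

EPS0 = 8.854e-12          # vacuum permittivity, F/m
EPS_SI = 11.7             # relative permittivity of silicon
Q_E = 1.602e-19           # elementary charge, C


def depletion_depth_um(bias_v, n_d_per_m3):
    """One-sided abrupt-junction depletion depth, in micrometres."""
    d = math.sqrt(2.0 * EPS_SI * EPS0 * bias_v / (Q_E * n_d_per_m3))
    return d * 1e6


# ~1e12 cm^-3 (1e18 m^-3) donors, 40 V substrate bias -- assumed values
depth = depletion_depth_um(40.0, 1e18)
```

With these assumptions the depth comes out in the low hundreds of micrometres, consistent with the 200-300 um depletion regions quoted above.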
Energy Technology Data Exchange (ETDEWEB)
Hammes, Jochen; Schmidt, Matthias; Schicha, Harald; Eschner, Wolfgang [Universitaetsklinikum Koeln (Germany). Klinik und Poliklinik fuer Nuklearmedizin; Pietrzyk, Uwe [Forschungszentrum Juelich GmbH (Germany). Inst. fuer Neurowissenschaften und Medizin (INM-4); Wuppertal Univ. (Germany). Fachbereich C - Physik
2011-07-01
The recommended target dose in radioiodine therapy of solitary hyperfunctioning thyroid nodules is 300-400 Gy and therefore higher than in other radiotherapies. This is due to the fact that an unknown, yet significant portion of the activity is stored in extranodular areas but is neglected in the calculatory dosimetry. We investigate the feasibility of determining the ratio of nodular and extranodular activity concentrations (uptakes) from post-therapeutically acquired planar scintigrams with Monte Carlo simulations in GATE. The geometry of a gamma camera with a high energy collimator was emulated in GATE (Version 5). A geometrical thyroid-neck phantom (GP) and the ICRP reference voxel phantoms 'Adult Female' (AF, 16 ml thyroid) and 'Adult Male' (AM, 19 ml thyroid) were used as source regions. Nodules of 1 ml and 3 ml volume were placed in the phantoms. For each phantom and each nodule 200 scintigraphic acquisitions were simulated. Uptake ratios of nodule and rest of thyroid ranging from 1 to 20 could be created by summation. Quantitative image analysis was performed by investigating the number of simulated counts in regions of interest (ROIs). ROIs were created by perpendicular projection of the phantom onto the camera plane to avoid a user dependant bias. The ratio of count densities in ROIs over the nodule and over the contralateral lobe, which should be least affected by nodular activity, was taken to be the best available measure for the uptake ratios. However, the predefined uptake ratios are underestimated by these count density ratios: For an uptake ratio of 20 the count ratios range from 4.5 (AF, 1 ml nodule) to 15.3 (AM, 3 ml nodule). Furthermore, the contralateral ROI is more strongly affected by nodular activity than expected: For an uptake ratio of 20 between nodule and rest of thyroid up to 29% of total counts in the ROI over the contralateral lobe are caused by decays in the nodule (AF 3 ml). In the case of the 1 ml nodules this
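The underestimation of the uptake ratio reported above can be reproduced qualitatively with a two-compartment spill model (Python sketch); the spill fractions are hypothetical placeholders, not the GATE simulation values.

```python
def observed_uptake_ratio(true_ratio, spill_nod=0.05, spill_rest=0.05):
    """Count-density ratio seen in the two ROIs when a fraction of each
    compartment's counts spills into the other ROI (toy model).
    true_ratio is the predefined nodule/rest uptake ratio."""
    u_nod, u_rest = true_ratio, 1.0
    d_nodule_roi = (1.0 - spill_nod) * u_nod + spill_rest * u_rest
    d_contra_roi = spill_nod * u_nod + (1.0 - spill_rest) * u_rest
    return d_nodule_roi / d_contra_roi


# a true uptake ratio of 20 is observed as a substantially smaller count ratio
observed = observed_uptake_ratio(20.0)
```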
Borrmann, Robin
2010-01-01
This article examines whether the use of Depleted Uranium (DU) munitions can be considered illegal under current public international law. The analysis covers the law of arms control and focuses in particular on international humanitarian law. The article argues that DU ammunition cannot be addressed adequately under existing treaty based weapon bans, such as the Chemical Weapons Convention, due to the fact that DU does not meet the criteria required to trigger the applicability of those treaties. Furthermore, it is argued that continuing uncertainties regarding the effects of DU munitions impede a reliable review of the legality of their use under various principles of international law, including the prohibition on employing indiscriminate weapons; the prohibition on weapons that are intended, or may be expected, to cause widespread, long-term and severe damage to the natural environment; and the prohibition on causing unnecessary suffering or superfluous injury. All of these principles require complete knowledge of the effects of the weapon in question. Nevertheless, the author argues that the same uncertainty places restrictions on the use of DU under the precautionary principle. The paper concludes with an examination of whether or not there is a need for--and if so whether there is a possibility of achieving--a Convention that comprehensively outlaws the use, transfer and stockpiling of DU weapons, as proposed by some non-governmental organisations (NGOs).
Proton Upset Monte Carlo Simulation
O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.
2009-01-01
The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment, based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (~200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled, and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.
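The core of such a simulation can be caricatured in a few lines (Python): sample an energy deposit per nuclear interaction and count the events that exceed the critical threshold of the device. The exponential deposit spectrum and the numerical values are illustrative assumptions, not PROPSET's physics.

```python
import random


def upset_probability(n_events, mean_deposit_mev, threshold_mev, seed=0):
    """Fraction of simulated nuclear-fragment events whose deposited energy
    exceeds the upset threshold (toy spectrum: exponential)."""
    rng = random.Random(seed)
    upsets = sum(
        1 for _ in range(n_events)
        if rng.expovariate(1.0 / mean_deposit_mev) > threshold_mev
    )
    return upsets / n_events


# hypothetical 1 MeV mean deposit, 3 MeV upset threshold
p_upset = upset_probability(100_000, 1.0, 3.0)
```

Multiplying such a per-interaction probability by the orbit-dependent interaction rate would give an upset rate, which is the quantity PROPSET reports.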
Ozone-depleting Substances (ODS)
U.S. Environmental Protection Agency — This site includes all of the ozone-depleting substances (ODS) recognized by the Montreal Protocol. The data include ozone depletion potentials (ODP), global...
International Nuclear Information System (INIS)
The course on ''Monte Carlo Techniques'' will try to give a general overview of how to build up a method, based on a given theory, that allows you to compare the outcome of an experiment with that theory. Concepts related to the construction of the method, such as random variables, distributions of random variables, generation of random variables, and random-based numerical methods, will be introduced in this course. Examples of some of the current theories in High Energy Physics describing the e+e- annihilation processes (QED, Electro-Weak, QCD) will also be briefly introduced. A second step in the employment of this method is related to the detector. The interactions that a particle can undergo along its way through the detector, as well as the response of the different materials that compose the detector, will be covered in this course. An example of a detector of the LEP era, in which these techniques are being applied, will close the course. (orig.)
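The "generation of random variables" step named in the course outline is classically done by inverse-transform sampling: if U is uniform on (0,1) and F is an invertible CDF, then F^-1(U) has distribution F. A minimal sketch (Python), using the exponential distribution purely as an example:

```python
import math
import random


def sample_exponential(lam, rng):
    """Inverse transform: if U ~ Uniform(0,1), then -ln(1-U)/lam ~ Exp(lam)."""
    return -math.log(1.0 - rng.random()) / lam


# Monte Carlo check: the sample mean should approach 1/lam
rng = random.Random(42)
samples = [sample_exponential(2.0, rng) for _ in range(200_000)]
sample_mean = sum(samples) / len(samples)   # expected value is 0.5
```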
Directory of Open Access Journals (Sweden)
J. D. Rösevall
2007-01-01
Full Text Available The objective of this study is to demonstrate how polar ozone depletion can be mapped and quantified by assimilating ozone data from satellites into the wind driven transport model DIAMOND, (Dynamical Isentropic Assimilation Model for OdiN Data. By assimilating a large set of satellite data into a transport model, ozone fields can be built up that are less noisy than the individual satellite ozone profiles. The transported fields can subsequently be compared to later sets of incoming satellite data so that the rates and geographical distribution of ozone depletion can be determined. By tracing the amounts of solar irradiation received by different air parcels in a transport model it is furthermore possible to study the photolytic reactions that destroy ozone. In this study, destruction of ozone that took place in the Antarctic winter of 2003 and in the Arctic winter of 2002/2003 have been examined by assimilating ozone data from the ENVISAT/MIPAS and Odin/SMR satellite-instruments. Large scale depletion of ozone was observed in the Antarctic polar vortex of 2003 when sunlight returned after the polar night. By mid October ENVISAT/MIPAS data indicate vortex ozone depletion in the ranges 80–100% and 70–90% on the 425 and 475 K potential temperature levels respectively while the Odin/SMR data indicates depletion in the ranges 70–90% and 50–70%. The discrepancy between the two instruments has been attributed to systematic errors in the Odin/SMR data. Assimilated fields of ENVISAT/MIPAS data indicate ozone depletion in the range 10–20% on the 475 K potential temperature level, (~19 km altitude, in the central regions of the 2002/2003 Arctic polar vortex. Assimilated fields of Odin/SMR data on the other hand indicate ozone depletion in the range 20–30%.
Fast sequential Monte Carlo methods for counting and optimization
Rubinstein, Reuven Y; Vaisman, Radislav
2013-01-01
A comprehensive account of the theory and application of Monte Carlo methods. Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the
International Nuclear Information System (INIS)
The amount of stratospheric ozone and the reduction of the ozone layer vary with season and latitude. At present, total and vertical ozone is monitored over all of Austria. The mean monthly ozone levels between 1994 and 2000 are presented. Data on stratospheric ozone and UV-B radiation are published daily on the home page http: www.lebesministerium.at. The use of ozone depleting substances such as chlorofluorocarbons (CFCs) and hydrochlorofluorocarbons (HCFCs) is reviewed, as are the national measures taken to reduce their use. Figs. 2, Tables 2. (nevyjel)
Development of burnup calculation function in reactor Monte Carlo code RMC
International Nuclear Information System (INIS)
This paper presents the burnup calculation capability of RMC, a new Monte Carlo (MC) neutron transport code developed by the Reactor Engineering Analysis Laboratory (REAL) at Tsinghua University, China. Unlike most existing MC depletion codes, which explicitly couple the depletion module, RMC incorporates ORIGEN 2.1 in an implicit way. Different burn-step strategies, including the middle-of-step approximation and the predictor-corrector method, are adopted by RMC to assure accuracy under large burnup step sizes. RMC employs a spectrum-based method of tallying one-group cross sections, which can considerably save computational time with negligible accuracy loss. The validation results of benchmarks and examples show that the burnup function of RMC performs well in both accuracy and efficiency. (authors)
Development and validation of burnup function in reactor Monte Carlo RMC
International Nuclear Information System (INIS)
This paper presents the burnup calculation capability of RMC, a new Monte Carlo (MC) neutron transport code developed by the Reactor Engineering Analysis Laboratory (REAL) at Tsinghua University, China. Unlike most existing MC depletion codes, which explicitly couple the depletion module, RMC incorporates ORIGEN 2.1 in an implicit way. Different burn-step strategies, including the middle-of-step approximation and the predictor-corrector method, are adopted by RMC to assure accuracy under large step sizes. RMC employs a spectrum-based method of tallying one-group cross sections, which can considerably save computational time with negligible accuracy loss. The validation results of benchmarks and examples show that the burnup function of RMC performs well in both accuracy and efficiency. (author)
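The predictor-corrector burn-step strategy adopted by RMC can be sketched for a single nuclide obeying dN/dt = -r(N) N (Python); here `reaction_rate` stands in for the transport solve that would supply flux and one-group cross sections, and its functional form is hypothetical.

```python
import math


def predictor_corrector_step(n0, reaction_rate, dt):
    """One burnup step for dN/dt = -r(N) * N using predictor-corrector:
    deplete with the beginning-of-step rate, re-evaluate the rate at the
    predicted end-of-step composition, then redeplete with the average."""
    r0 = reaction_rate(n0)                  # beginning-of-step rate
    n_pred = n0 * math.exp(-r0 * dt)        # predictor
    r1 = reaction_rate(n_pred)              # rate at predicted composition
    return n0 * math.exp(-0.5 * (r0 + r1) * dt)   # corrector


# with a composition-independent rate the step reduces to exact exponential decay
n_end = predictor_corrector_step(1.0, lambda n: 0.3, 2.0)
```

Averaging the beginning- and end-of-step rates is what allows accuracy to be retained under the large step sizes mentioned in the abstract.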
Monte Carlo simulation on kinetic behavior of one-pot hyperbranched polymerization based on AA*+CB2
Institute of Scientific and Technical Information of China (English)
Anonymous
2010-01-01
Monte Carlo simulation was applied to investigate the kinetic behavior of the AA*+CB2 system. The algorithm consisted of two procedures to simulate the in-situ synthesis of the AB2-like intermediate and the subsequent polymerization, respectively. In order to improve the accuracy of the prediction, the mobility distinction between molecules of different scales in the polymerization was taken into account by relating the reaction rate constants to the collision possibility of each pair of species. The feed ratio of the initial monomers and the activity difference between the two functional groups within AA* were studied systematically to capture the essential features of the reaction. Simulation results revealed that the achievable maximum conversion primarily depends on the extent of the reactivity difference between A and A*-groups, and it is suggested that the A*-group should be at least 10 times more active than the A-group to achieve a high number-average degree of polymerization.
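The effect of the A/A* reactivity difference can be illustrated with a toy kinetic Monte Carlo loop (Python): at each event one group is consumed, chosen with probability proportional to count times reactivity. The group counts, event number, and ratio are arbitrary, and this is far simpler than the full molecular-scale algorithm the abstract describes.

```python
import random


def group_conversions(n_groups=1000, reactivity_ratio=10.0,
                      events=800, seed=1):
    """Competing consumption of A and A* groups, with A* being
    `reactivity_ratio` times more reactive than A.
    Returns (conversion_A, conversion_Astar)."""
    rng = random.Random(seed)
    n_a, n_astar = n_groups, n_groups
    for _ in range(events):
        w_astar = n_astar * reactivity_ratio   # weight of an A* reaction
        w_total = n_a * 1.0 + w_astar
        if rng.random() < w_astar / w_total:
            n_astar -= 1                       # an A* group reacts
        else:
            n_a -= 1                           # an A group reacts
    return 1.0 - n_a / n_groups, 1.0 - n_astar / n_groups


conv_a, conv_astar = group_conversions()
```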
Institute of Scientific and Technical Information of China (English)
Zhang Zhi-Dong; Chang Chun-Rui; Ma Dong-Lai
2009-01-01
Hybrid nematic films have been studied by Monte Carlo simulations using a lattice spin model, in which the pair potential is spatially anisotropic and depends on the elastic constants of liquid crystals. We confirm in the thin hybrid nematic film the existence of a biaxially nonbent structure and the structural transition from the biaxial to the bent-director structure, which is similar to the result obtained using the Lebwohl-Lasher model. However, the step-like director profile, characteristic of the biaxial structure, is spatially asymmetric in the film because the pair potential leads to K1≠K3. We estimate the upper cell thickness at which the biaxial structure can be found to be 69 spin layers.
A GPU-based Large-scale Monte Carlo Simulation Method for Systems with Long-range Interactions
Liang, Yihao; Li, Yaohang
2016-01-01
In this work we present an efficient implementation of Canonical Monte Carlo simulation for Coulomb many body systems on graphics processing units (GPU). Our method takes advantage of the GPU Single Instruction, Multiple Data (SIMD) architectures. It adopts the sequential updating scheme of Metropolis algorithm, and makes no approximation in the computation of energy. It reaches a remarkable 440-fold speedup, compared with the serial implementation on CPU. We use this method to simulate primitive model electrolytes. We measure very precisely all ion-ion pair correlation functions at high concentrations, and extract renormalized Debye length, renormalized valences of constituent ions, and renormalized dielectric constants. These results demonstrate unequivocally physics beyond the classical Poisson-Boltzmann theory.
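The sequential Metropolis update with exact energy evaluation that the GPU method accelerates looks like this on a CPU (Python sketch); the one-dimensional box of unit point charges is a deliberately tiny stand-in for the primitive-model electrolyte, and all parameters are illustrative.

```python
import math
import random


def pair_energy(xs, i):
    """Exact Coulomb-like energy of particle i with all others (no cutoff,
    unit charges, 1/r interaction)."""
    return sum(1.0 / abs(xs[i] - xs[j]) for j in range(len(xs)) if j != i)


def metropolis_sweep(xs, beta, step, box, rng):
    """One sequential sweep: propose a displacement for each particle in
    turn and accept with the Metropolis probability min(1, exp(-beta*dE))."""
    for i in range(len(xs)):
        old_x, old_e = xs[i], pair_energy(xs, i)
        new_x = old_x + rng.uniform(-step, step)
        if not 0.0 <= new_x <= box:
            continue                          # reject moves leaving the box
        xs[i] = new_x
        d_e = pair_energy(xs, i) - old_e
        if d_e > 0.0 and rng.random() >= math.exp(-beta * d_e):
            xs[i] = old_x                     # reject: restore old position
    return xs


positions = metropolis_sweep([0.2, 0.5, 0.8], 2.0, 0.05, 1.0,
                             random.Random(3))
```

On a GPU, the paper's point is that the O(N) energy sum for each sequential move is what the SIMD hardware parallelizes, with no approximation introduced.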
Fujii, K; Nomura, K; Muramatsu, Y; Takahashi, K; Obara, S; Akahane, K; Satake, M
2015-07-01
The aim of this study was to validate the computed tomography dose index (CTDI) and organ doses evaluated by Monte Carlo simulations through comparisons with doses evaluated by in-phantom dosimetry. Organ doses were measured with radio-photoluminescence glass dosemeter (RGD) set at various organ positions within adult and 1-y-old anthropomorphic phantoms. For the dose simulations, the X-ray spectrum and bow-tie filter shape of a CT scanner were estimated and 3D voxelised data of the CTDI and anthropomorphic phantoms from the acquired CT images were derived. Organ dose simulations and measurements were performed with chest and abdomen-pelvis CT examination scan parameters. Relative differences between the simulated and measured doses were within 5 % for the volume CTDI and 13 % for organ doses for organs within the scan range in adult and paediatric CT examinations. The simulation results were considered to be in good agreement with the measured doses. PMID:25848103
Influencing factors of dose equivalence for X and γ rays with different energy based on Monte Carlo
International Nuclear Information System (INIS)
Background: The accuracy of dosimetric measurements of X and γ rays needs to be improved. Purpose: The aim is to study the correction terms for the equivalence between low-energy X-rays and natural radioactive sources. Methods: An X-ray machine was used in place of standard sources for dose instrument calibration. The factors influencing the equivalence between low-energy X-rays and high-energy X or γ rays were simulated using the Monte Carlo (MCNP) software. Results: The influences of distance, space scattering and detector response on dose equivalence were obtained, and the simulation results were analyzed. Conclusion: The method can be used for dose equivalence correction between low-energy X-rays and high-energy X or γ rays, which is significant for the widespread use of X rays. (authors)
Meshkian, Mohsen
2016-02-01
Neutron radiography is rapidly extending as one of the methods for non-destructive screening of materials. There are various parameters to be studied for optimising imaging screens and image quality for different fast-neutron radiography systems. Herein, a Geant4 Monte Carlo simulation is employed to evaluate the response of a fast-neutron radiography system using a 252Cf neutron source. The neutron radiography system is comprised of a moderator as the neutron-to-proton converter with suspended silver-activated zinc sulphide (ZnS(Ag)) as the phosphor material. The neutron-induced protons deposit energy in the phosphor which consequently emits scintillation light. Further, radiographs are obtained by simulating the overall radiography system including source and sample. Two different standard samples are used to evaluate the quality of the radiographs.
Zagrebin, M. A.; Sokolovskiy, V. V.; Buchelnikov, V. D.
2016-09-01
Structural, magnetic and electronic properties of stoichiometric Co2YZ Heusler alloys (Y = Cr, Fe, Mn and Z = Al, Si, Ge) have been studied by means of ab initio calculations and Monte Carlo simulations. The investigations were performed as a function of the level of approximation in DFT (FP and ASA modes, as well as GGA and GGA + U schemes) and of external pressure. It is shown that in the case of the GGA scheme the half-metallic behavior is clearly observed for compounds containing Cr and Mn transition metals, while Co2FeZ alloys demonstrate pseudo half-metallic behavior. It is demonstrated that applied pressure and accounting for the Coulomb repulsion (U) stabilize the half-metallic nature of Co2YZ alloys. The strongest ferromagnetic interactions, the inter-sublattice (Co-Y) ones, together with the intra-sublattice (Co-Co and Y-Y) interactions, explain the high values of the Curie temperature obtained by Monte Carlo simulations using the Heisenberg model. It is observed that a decrease in the number of valence electrons of the Y atoms (i.e. substitution of Fe by Mn and Cr) weakens the exchange interactions and reduces the Curie temperature. Moreover, in the case of the FP mode the Curie temperatures were found to be in good agreement with available experimental and theoretical data, where the latter were obtained by applying the empirical relation between the Curie temperature and the total magnetic moment.
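A Monte Carlo estimate of an ordering transition in the spirit described above can be sketched with the simpler 2D Ising model standing in for the classical Heisenberg Hamiltonian the study uses (Python); the lattice size, temperatures, and sweep count are arbitrary illustrative choices.

```python
import math
import random


def ising_magnetization(l, temp, sweeps, seed=0):
    """Metropolis simulation of a 2D Ising ferromagnet (J = 1, periodic
    boundaries), started fully ordered; returns |magnetization| per spin."""
    rng = random.Random(seed)
    spins = [[1] * l for _ in range(l)]
    for _ in range(sweeps):
        for i in range(l):
            for j in range(l):
                nb = (spins[(i + 1) % l][j] + spins[(i - 1) % l][j]
                      + spins[i][(j + 1) % l] + spins[i][(j - 1) % l])
                d_e = 2.0 * spins[i][j] * nb      # energy cost of flipping
                if d_e <= 0.0 or rng.random() < math.exp(-d_e / temp):
                    spins[i][j] *= -1
    m = sum(sum(row) for row in spins) / (l * l)
    return abs(m)


m_low = ising_magnetization(8, 1.0, 200)    # well below the transition
m_high = ising_magnetization(8, 5.0, 200)   # well above the transition
```

Scanning the temperature and locating where the magnetization collapses is, schematically, how a Curie temperature is extracted from such simulations.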
Dunn, William L
2012-01-01
Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle problem"…
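The Buffon's needle problem mentioned above is the classic introductory Monte Carlo exercise: a needle of length L dropped on a floor ruled with lines a distance D apart crosses a line with probability 2L/(πD), which yields an estimator for π. A minimal sketch (not code from the book):

```python
import random, math

def buffon_pi(n_throws, needle_len=1.0, line_gap=1.0):
    """Estimate pi by Buffon's needle (needle_len <= line_gap).
    Crossing probability is 2L/(pi*D), so pi ~ 2*L*n / (D*hits)."""
    hits = 0
    for _ in range(n_throws):
        # distance from needle centre to nearest line, and needle angle
        x = random.uniform(0.0, line_gap / 2.0)
        theta = random.uniform(0.0, math.pi / 2.0)
        if x <= (needle_len / 2.0) * math.sin(theta):
            hits += 1
    return 2.0 * needle_len * n_throws / (line_gap * hits)

random.seed(42)
print(buffon_pi(1_000_000))  # converges slowly toward 3.14159...
```

As with all Monte Carlo estimators, the statistical error shrinks only as 1/sqrt(n), which is part of what the book's later variance-reduction material addresses.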
Monte Carlo Code System Development for Liquid Metal Reactor
Energy Technology Data Exchange (ETDEWEB)
Kim, Chang Hyo; Shim, Hyung Jin; Han, Beom Seok; Park, Ho Jin; Park, Dong Gyu [Seoul National University, Seoul (Korea, Republic of)
2007-03-15
We have implemented the composition cell class and the use cell in MCCARD for hierarchical input processing. For the KALIMER-600 core, which consists of 336 assemblies, geometric data for 91,056 pin cells would otherwise be required. Using hierarchical input processing, it was observed that the system geometries are correctly handled with geometric data for a total of 611 cells: 2 cells for fuel rods, 2 cells for guide holes, 271 translation cells for rods, and 336 translation cells for assemblies. We have developed Monte Carlo decay-chain models, based on the decay-chain model of the REBUS code, for liquid metal reactor analysis. Using the developed decay-chain models, depletion analysis calculations were performed for the homogeneous and heterogeneous models of KALIMER-600. The k-effective from the depletion analysis agrees well with that of the REBUS code, and the developed decay-chain models show more efficient performance in time and memory compared with the existing decay-chain model. A chi-square criterion has been developed to diagnose temperature convergence in the MC T/H feedback calculations. From the application results for the KALIMER pin and fuel assembly problems, it is observed that the new criterion works well. We have applied a high-efficiency variance reduction technique, splitting with Russian roulette, to estimate the PPPF of the KALIMER core at BOC. The PPPF of the KALIMER core at BOC is 1.235 (±0.008). The developed technique is four times faster than the existing calculation.
Localization Algorithm of Ranging Sensor Networks Based on Monte Carlo
Institute of Scientific and Technical Information of China (English)
2013-01-01
To address application defects in existing Monte Carlo localization, a Monte Carlo localization algorithm based on ZigBee sensor network ranging is proposed. The implementation steps of the improved algorithm are described: the method acquires multiple pieces of external information during localization and applies the history of location samples to position estimation. Building on the characteristics of ZigBee indoor ranging, an improved Gaussian filter is added and compared against mean-value filtering. Test results show that the proposed algorithm achieves a clear improvement over the original algorithm.
2015-01-01
We present a sophisticated likelihood reconstruction algorithm for shower-image analysis of imaging Cherenkov telescopes. The reconstruction algorithm is based on the comparison of the camera pixel amplitudes with the predictions from a Monte Carlo based model. Shower parameters are determined by a maximisation of a likelihood function. Maximisation of the likelihood as a function of shower fit parameters is performed using a numerical non-linear optimisation technique. A related reconstruction technique has already been developed by the CAT and the H.E.S.S. experiments, and provides a more precise direction and energy reconstruction of the photon induced shower compared to the second moment of the camera image analysis. Examples are shown of the performance of the analysis on simulated gamma-ray data from the VERITAS array.
11th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing
Nuyens, Dirk
2016-01-01
This book presents the refereed proceedings of the Eleventh International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing that was held at the University of Leuven (Belgium) in April 2014. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems, arising, in particular, in finance, statistics and computer graphics.
International Nuclear Information System (INIS)
Depleted uranium (DU) is the waste product of uranium enrichment in the manufacture of fuel rods for nuclear reactors in nuclear power plants and nuclear-powered ships. DU may also result from the reprocessing of spent nuclear reactor fuel. DU potentially has both chemical and radiological toxicity, the two important target organs being the kidneys and the lungs. DU is made into a metal and, owing to its availability, low price, high specific weight, density and melting point, as well as its pyrophoricity, it has a wide range of civilian and military applications. Following the use of DU in recent years, reports have appeared in the press on health hazards alleged to be due to DU. In this paper, the properties, applications, and potential environmental and health effects of DU are briefly reviewed.
Depleted uranium: Metabolic disruptor?
International Nuclear Information System (INIS)
The presence of uranium in the environment can lead to long-term contamination of the food chain and of water intended for human consumption and thus raises many questions about the scientific and societal consequences of this exposure on population health. Although the biological effects of chronic low-level exposure are poorly understood, results of various recent studies show that contamination by depleted uranium (DU) induces subtle but significant biological effects at the molecular level in organs including the brain, liver, kidneys and testicles. For the first time, it has been demonstrated that DU induces effects on several metabolic pathways, including those metabolizing vitamin D, cholesterol, steroid hormones, acetylcholine and xenobiotics. This evidence strongly suggests that DU might well interfere with many metabolic pathways. It might thus contribute, together with other man-made substances in the environment, to increased health risks in some regions. (authors)
A worldwide view of groundwater depletion
van Beek, L. P.; Wada, Y.; van Kempen, C.; Reckman, J. W.; Vasak, S.; Bierkens, M. F.
2010-12-01
During the last decades, global water demand has increased two-fold due to increasing population, expanding irrigated area and economic development. Globally such demand can be met by surface water availability (i.e., water in rivers, lakes and reservoirs) but regional variations are large and the absence of sufficient rainfall and run-off increasingly encourages the use of groundwater resources, particularly in the (semi-)arid regions of the world. Excessive abstraction for irrigation frequently leads to overexploitation, i.e. if groundwater abstraction exceeds the natural groundwater recharge over extensive areas and prolonged times, persistent groundwater depletion may occur. Observations and various regional studies have revealed that groundwater depletion is a substantial issue in regions such as Northwest India, Northeast Pakistan, Central USA, Northeast China and Iran. Here we provide a global overview of groundwater depletion from the year 1960 to 2000 at a spatial resolution of 0.5 degree by assessing groundwater recharge with the global hydrological model PCR-GLOBWB and subtracting estimates of groundwater abstraction obtained from IGRAC-GGIS database. PCR-GLOBWB was forced by the CRU climate dataset downscaled to daily time steps using ERA40 re-analysis data. PCR-GLOBWB simulates daily global groundwater recharge (0.5 degree) while considering sub-grid variability of each grid cell (e.g., short and tall vegetation, different soil types, fraction of saturated soil). Country statistics of groundwater abstraction were downscaled to 0.5 degree by using water demand (i.e., agriculture, industry and domestic) as a proxy. To limit problems related to increased capture of discharge and increased recharge due to groundwater pumping, we restricted our analysis to sub-humid to arid areas. The uncertainty in the resulting estimates was assessed by a Monte Carlo analysis of 100 realizations of groundwater recharge and 100 realizations of groundwater abstraction
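The closing uncertainty analysis pairs 100 realizations of recharge with 100 realizations of abstraction to bracket the depletion estimate. A toy sketch of that style of Monte Carlo error propagation for a single grid cell, with all distributions and parameter values assumed for illustration (they are not the PCR-GLOBWB values):

```python
import random, statistics

def depletion_ensemble(recharge_mu, recharge_sd, abstraction_mu, abstraction_sd,
                       n_real=100, seed=0):
    """Draw paired realizations of groundwater recharge and abstraction
    (mm/yr, truncated at zero) and compute depletion as the abstraction
    in excess of natural recharge.  Returns ensemble mean and std."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_real):
        recharge = max(rng.gauss(recharge_mu, recharge_sd), 0.0)
        abstraction = max(rng.gauss(abstraction_mu, abstraction_sd), 0.0)
        samples.append(max(abstraction - recharge, 0.0))
    return statistics.mean(samples), statistics.stdev(samples)

mean_dep, sd_dep = depletion_ensemble(50.0, 15.0, 80.0, 20.0)  # assumed mm/yr
print(f"depletion ~ {mean_dep:.1f} +/- {sd_dep:.1f} mm/yr")
```

The ensemble spread directly expresses how uncertainty in the two input fields propagates into the depletion estimate.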
Künzler, Thomas; Fotina, Irina; Stock, Markus; Georg, Dietmar
2009-12-01
The dosimetric performance of a Monte Carlo algorithm as implemented in a commercial treatment planning system (iPlan, BrainLAB) was investigated. After commissioning and basic beam data tests in homogenous phantoms, a variety of single regular beams and clinical field arrangements were tested in heterogeneous conditions (conformal therapy, arc therapy and intensity-modulated radiotherapy including simultaneous integrated boosts). More specifically, a cork phantom containing a concave-shaped target was designed to challenge the Monte Carlo algorithm in more complex treatment cases. All test irradiations were performed on an Elekta linac providing 6, 10 and 18 MV photon beams. Absolute and relative dose measurements were performed with ion chambers and near tissue equivalent radiochromic films which were placed within a transverse plane of the cork phantom. For simple fields, a 1D gamma (γ) procedure with a 2% dose difference and a 2 mm distance to agreement (DTA) was applied to depth dose curves, as well as to inplane and crossplane profiles. The average gamma value was 0.21 for all energies of simple test cases. For depth dose curves in asymmetric beams similar gamma results as for symmetric beams were obtained. Simple regular fields showed excellent absolute dosimetric agreement to measurement values with a dose difference of 0.1% ± 0.9% (1 standard deviation) at the dose prescription point. A more detailed analysis at tissue interfaces revealed dose discrepancies of 2.9% for an 18 MV energy 10 × 10 cm2 field at the first density interface from tissue to lung equivalent material. Small fields (2 × 2 cm2) have their largest discrepancy in the re-build-up at the second interface (from lung to tissue equivalent material), with a local dose difference of about 9% and a DTA of 1.1 mm for 18 MV. Conformal field arrangements, arc therapy, as well as IMRT beams and simultaneous integrated boosts were in good agreement with absolute dose measurements in the
ROESSEL, ROBERT A., JR.
The first section of this book covers the historical and cultural background of the San Carlos Apache Indians, as well as an historical sketch of the development of their formal educational system. The second section is devoted to the problems of teachers of the Indian children in Globe and San Carlos, Arizona. It is divided into three parts--(1)…
DEFF Research Database (Denmark)
Holm, Bent
2005-01-01
The San Carlo opera house is situated within a cultural-historical context of representation, with particular attention to the concept of napolalità.
Directory of Open Access Journals (Sweden)
J. Tonttila
2013-08-01
Full Text Available A new method for parameterizing the subgrid variations of vertical velocity and cloud droplet number concentration (CDNC) is presented for general circulation models (GCMs). These parameterizations build on top of existing parameterizations that create stochastic subgrid cloud columns inside the GCM grid cells, which can be employed by the Monte Carlo independent column approximation approach for radiative transfer. The new model version adds a description of vertical velocity in individual subgrid columns, which can be used to compute cloud activation and the subgrid distribution of the number of cloud droplets explicitly. Autoconversion is also treated explicitly in the subcolumn space. This provides a consistent way of simulating the cloud radiative effects with two-moment cloud microphysical properties defined at the subgrid scale. The primary impact of the new parameterizations is to decrease the CDNC over polluted continents, while over the oceans the impact is smaller. Moreover, the lower CDNC induces a stronger autoconversion of cloud water to rain. The strong reduction in CDNC and cloud water content over the continental areas promotes weaker shortwave cloud radiative effects (SW CREs) even after retuning the model. However, compared to the reference simulation, a slightly stronger SW CRE is seen e.g. over mid-latitude oceans, where CDNC remains similar to the reference simulation, and the in-cloud liquid water content is slightly increased after retuning the model.
A Monte Carlo (MC) based individual calibration method for in vivo x-ray fluorescence analysis (XRF)
Hansson, Marie; Isaksson, Mats
2007-04-01
X-ray fluorescence analysis (XRF) is a non-invasive method that can be used for in vivo determination of thyroid iodine content. System calibrations with phantoms resembling the neck may give misleading results in the cases when the measurement situation largely differs from the calibration situation. In such cases, Monte Carlo (MC) simulations offer a possibility of improving the calibration by better accounting for individual features of the measured subjects. This study investigates the prospects of implementing MC simulations in a calibration procedure applicable to in vivo XRF measurements. Simulations were performed with Penelope 2005 to examine a procedure where a parameter, independent of the iodine concentration, was used to get an estimate of the expected detector signal if the thyroid had been measured outside the neck. An attempt to increase the simulation speed and reduce the variance by exclusion of electrons and by implementation of interaction forcing was conducted. Special attention was given to the geometry features: analysed volume, source-sample-detector distances, thyroid lobe size and position in the neck. Implementation of interaction forcing and exclusion of electrons had no obvious adverse effect on the quotients while the simulation time involved in an individual calibration was low enough to be clinically feasible.
Antolínez, Alfonso; Rapisarda, David
2016-07-01
Fission chambers have become one of the main devices for the measurement of neutron fluxes in nuclear facilities, including fission reactors, future fusion reactors, spallation sources, etc. The main goal of a fission chamber is to estimate the neutron flux inside the facility, as well as instantaneous changes in the irradiation conditions. A Monte Carlo Fission Chamber Designer (MCFCD) has been developed in order to assist engineers in the complete design cycle of fission chambers. So far MCFCD focuses on the most important neutron reactions taking place in a thermal nuclear reactor. A theoretical model describing the most important outcomes in fission chamber design has been developed, including the expected electrical signals (current intensity and drop in potential) and the current-polarization voltage characteristics (sensitivity and saturation plateau); the saturation plateau is the zone of the saturation curve where the output current is proportional to the fission rate, and fission chambers operate in this region. Data provided by MCFCD are in good agreement with available measurements.
Directory of Open Access Journals (Sweden)
J. Tonttila
2013-02-01
Full Text Available A new method for parameterizing the subgrid variations of vertical velocity and cloud droplet number concentration (CDNC) is presented for GCMs. These parameterizations build on top of existing parameterizations that create stochastic subgrid cloud columns inside the GCM grid cells, which can be employed by the Monte Carlo independent column approximation approach for radiative transfer. The new model version adds a description of vertical velocity in individual subgrid columns, which can be used to compute cloud activation and the subgrid distribution of the number of cloud droplets explicitly. This provides a consistent way of simulating the cloud radiative effects with two-moment cloud microphysical properties defined at the subgrid scale. The primary impact of the new parameterizations is to decrease the CDNC over polluted continents, while over the oceans the impact is smaller. This promotes changes in the global distribution of the cloud radiative effects and might thus have implications for model estimation of the indirect radiative effect of aerosols.
Maiti, Saumen; Tiwari, R. K.
2009-11-01
Identification of rock boundaries and structural features from well log response is a fundamental problem in geological field studies. However, in a complex geologic situation, such as in the presence of crystalline rocks where metamorphisms lead to facies changes, it is not easy to discern accurate information from well log data using conventional artificial neural network (ANN) methods. Moreover, inferences drawn by such methods are also found to be ambiguous because of the strong overlapping of well log signals, which are generally tainted with deceptive noise. Here, we have developed an alternative ANN approach based on Bayesian statistics using the concept of a Hybrid Monte Carlo (HMC)/Markov Chain Monte Carlo (MCMC) inversion scheme for modeling the German Continental Deep Drilling Program (KTB) well log data. The MCMC algorithm draws an independent and identically distributed (i.i.d.) sample by Markov chain simulation from the posterior probability distribution, using the principle of statistical mechanics in Hamiltonian dynamics. In this algorithm, each trajectory is updated by approximating the Hamiltonian differential equations through a leapfrog discretization scheme. We examined the stability and efficiency of the HMC-based approach on "noisy" data assorted with different levels of colored noise. We also perform uncertainty analysis by estimating the standard deviation (STD) error map of the a posteriori covariance matrix at the network output for three types of lithofacies over the entire length of the litho section of KTB. Our analyses demonstrate that the HMC-based approach provides a robust means for classification of complex lithofacies successions from the noisy KTB borehole signals, and hence may provide a useful guide for understanding the crustal inhomogeneity and structural discontinuity in many other tectonically critical and complex regions.
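The HMC scheme summarized above, leapfrog integration of Hamiltonian dynamics followed by a Metropolis accept/reject correction, can be sketched in a few lines. This is a generic 1-D illustration on a standard normal target, not the KTB network implementation:

```python
import math, random

def hmc_sample(logp_grad, logp, x0, step=0.1, n_leapfrog=20, n_samples=500, seed=1):
    """Minimal 1-D Hamiltonian Monte Carlo: each trajectory integrates
    dx/dt = p, dp/dt = d(log p)/dx with the leapfrog discretization,
    then a Metropolis step corrects for discretization error."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)           # resample momentum
        x_new, p_new = x, p
        p_new += 0.5 * step * logp_grad(x_new)   # half step in momentum
        for _ in range(n_leapfrog - 1):
            x_new += step * p_new
            p_new += step * logp_grad(x_new)
        x_new += step * p_new
        p_new += 0.5 * step * logp_grad(x_new)   # final half step
        # accept with prob exp(H_old - H_new), H = -log p(x) + p^2/2
        dH = (logp(x_new) - 0.5 * p_new**2) - (logp(x) - 0.5 * p**2)
        if math.log(rng.random()) < dH:
            x = x_new
        samples.append(x)
    return samples

# standard normal target: log p(x) = -x^2/2 (up to a constant)
draws = hmc_sample(lambda x: -x, lambda x: -0.5 * x * x, x0=0.0)
print(sum(draws) / len(draws))  # sample mean, should be near 0
```

The momentum resampling and the gradient-guided trajectories are what let HMC move through the posterior far more efficiently than a random-walk Metropolis sampler.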
Energy Technology Data Exchange (ETDEWEB)
Chen, X; Xing, L; Luxton, G; Bush, K [Stanford University, Palo Alto, CA (United States); Azcona, J [Clinica Universidad de Navarra, Pamplona (Spain)
2014-06-01
Purpose: Patient-specific QA for VMAT is incapable of providing full 3D dosimetric information and is labor intensive in the case of severe heterogeneities or small-aperture beams. The cloud-based Monte Carlo dose reconstruction method described here can perform the evaluation in the entire 3D space and rapidly reveal the source of discrepancies between measured and planned dose. Methods: This QA technique consists of two integral parts: measurement using a phantom containing an array of dosimeters, and a cloud-based voxel Monte Carlo algorithm (cVMC). After a VMAT plan was approved by a physician, a dose verification plan was created and delivered to the phantom using our Varian Trilogy or TrueBeam system. Actual delivery parameters (i.e., dose fraction, gantry angle, and MLC positions at control points) were extracted from Dynalog or trajectory files. Based on the delivery parameters, the 3D dose distribution in the phantom containing the detector array was recomputed using the Eclipse dose calculation algorithms (AAA and AXB) and cVMC. Comparison and gamma analysis were then conducted to evaluate the agreement between measured, recomputed, and planned dose distributions. To test the robustness of this method, we examined several representative VMAT treatments. Results: (1) The accuracy of cVMC dose calculation was validated via comparative studies. For cases that passed patient-specific QA using commercial dosimetry systems such as Delta-4, MapCHECK, and the PTW Seven29 array, agreement between cVMC-recomputed, Eclipse-planned and measured doses was obtained with >90% of the points satisfying the 3%-and-3-mm gamma index criteria. (2) The cVMC method incorporating Dynalog files was effective in revealing the root causes of the dosimetric discrepancies between Eclipse-planned and measured doses and provides a basis for solutions. Conclusion: The proposed method offers a highly robust and streamlined patient-specific QA tool and provides a feasible solution for the rapidly increasing use of VMAT
Depleted zinc: Properties, application, production.
Borisevich, V D; Pavlov, A V; Okhotina, I A
2009-01-01
The addition of ZnO, depleted in the Zn-64 isotope, to the water of boiling water nuclear reactors lessens the accumulation of Co-60 on the reactor interior surfaces, reduces radioactive wastes and increases the reactor service-life because of the inhibitory action of zinc on inter-granular stress corrosion cracking. To the same effect depleted zinc in the form of acetate dihydrate is used in pressurized water reactors. Gas centrifuge isotope separation method is applied for production of depleted zinc on the industrial scale. More than 20 years of depleted zinc application history demonstrates its benefits for reduction of NPP personnel radiation exposure and combating construction materials corrosion.
Energy Technology Data Exchange (ETDEWEB)
Baba, Justin S [ORNL; Koju, Vijay [ORNL; John, Dwayne O [ORNL
2016-01-01
The modulation of the state of polarization of photons due to scatter generates an associated geometric phase that is being investigated as a means for decreasing the degree of uncertainty in back-projecting the paths traversed by photons detected in backscattered geometry. In our previous work, we established that the polarimetrically detected Berry phase correlates with the mean photon penetration depth of the backscattered photons collected for image formation. In this work, we report on the impact of state-of-linear-polarization (SOLP) filtering on both the magnitude and the population distributions of image-forming detected photons as a function of the absorption coefficient of the scattering sample. The results, based on a polarized Monte Carlo code with Berry phase tracking implemented, indicate that sample absorption plays a significant role in the mean depth attained by the image-forming backscattered detected photons.
Energy Technology Data Exchange (ETDEWEB)
Idiri, Z. [Centre de Recherche Nucleaire d' Alger (CRNA), 02 Boulevard Frantz-Fanon, B.P. 399, 16000 Alger (Algeria)]. E-mail: zmidiri@yahoo.fr; Mazrou, H. [Centre de Recherche Nucleaire d' Alger (CRNA), 02 Boulevard Frantz-Fanon, B.P. 399, 16000 Alger (Algeria); Beddek, S. [Centre de Recherche Nucleaire d' Alger (CRNA), 02 Boulevard Frantz-Fanon, B.P. 399, 16000 Alger (Algeria); Amokrane, A. [Faculte de Physique, Universite des Sciences et de la Technologie Houari-Boumediene (USTHB), Alger (Algeria); Azbouche, A. [Centre de Recherche Nucleaire d' Alger (CRNA), 02 Boulevard Frantz-Fanon, B.P. 399, 16000 Alger (Algeria)
2007-07-21
The present paper describes the optimization of sample dimensions for a 241Am-Be neutron source-based prompt gamma neutron activation analysis (PGNAA) setup devoted to in situ analysis of environmental water rejects. The optimal dimensions were achieved following extensive Monte Carlo neutron flux calculations using the MCNP5 computer code. A validation process was performed for the proposed preliminary setup with measurements of the thermal neutron flux by the activation technique using indium foils, bare and covered with a cadmium sheet. Sensitivity calculations were subsequently performed to simulate real conditions of in situ analysis by determining thermal neutron flux perturbations in samples according to changes in chlorine and organic matter concentrations. The desired optimal sample dimensions were finally achieved once the established constraints regarding neutron damage to the semiconductor gamma detector, pulse pile-up, dead time and radiation hazards were fully met.
DEFF Research Database (Denmark)
Strunk, Astrid; Knudsen, Mads Faurschou; Larsen, Nicolaj Krog;
… the landscape history in previously glaciated terrains may be difficult, however, due to unknown erosion rates and the presence of inherited nuclides. The potential use of cosmogenic nuclides in landscapes with a complex history of exposure and erosion is therefore often quite limited. In this study, we investigate the landscape history in eastern and western Greenland by applying a novel Markov Chain Monte Carlo (MCMC) inversion approach to the existing 10Be-26Al data from these regions. The new MCMC approach allows us to constrain the most likely landscape history based on comparisons between simulated and measured cosmogenic nuclide concentrations. It is a fundamental assumption of the model approach that the exposure history at the site/location can be divided into two distinct regimes: i) interglacial periods characterized by zero shielding due to overlying ice and a uniform interglacial erosion rate…
International Nuclear Information System (INIS)
The aim of this work was to perform a rough preliminary evaluation of the fuel burn-up of the TRIGA Mark II research reactor of the Applied Nuclear Energy Laboratory (LENA) of the University of Pavia. To achieve this goal, the neutron flux density in each fuel element was computed by means of the Monte Carlo code MCNP (version 4C). The results of the simulations were used to calculate the effective cross sections (fission and capture) inside the fuel and, finally, to evaluate the burn-up and the uranium consumption in each fuel element. The evaluation showed fair agreement with a fuel burn-up computation based on the total energy released during reactor operation. (authors)
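The chain from flux to burn-up in the abstract above reduces, at zeroth order, to one-group point depletion: N(t) = N0 exp(-sigma_a phi t) for the 235U number density under a constant flux. A sketch with illustrative (assumed) flux and cross-section values, not the MCNP-derived ones:

```python
import math

def u235_burnup(n0, flux, sigma_a_barns, days):
    """One-group point-depletion estimate: given an initial 235U number
    density n0 (atoms/cm^3), a constant flux (n/cm^2/s) and a one-group
    absorption cross section (barns), return the remaining density and
    the percentage burned after the given irradiation time."""
    sigma = sigma_a_barns * 1e-24          # barns -> cm^2
    t = days * 86400.0                     # days -> seconds
    n_t = n0 * math.exp(-sigma * flux * t)
    return n_t, 100.0 * (1.0 - n_t / n0)

# assumed illustrative values: thermal flux ~1e13, sigma_a(235U) ~680 b
n_t, pct = u235_burnup(n0=1.0e21, flux=1.0e13, sigma_a_barns=680.0, days=365)
print(f"burn-up after one year: {pct:.1f} %")
```

A full evaluation like the one described would of course use element-wise MCNP flux spectra and track capture products as well, but the exponential depletion law is the core of the estimate.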
A Novel Depletion-Mode MOS Gated Emitter Shorted Thyristor
Institute of Scientific and Technical Information of China (English)
张鹤鸣; 戴显英; 张义门; 马晓华; 林大松
2000-01-01
A novel MOS-gated thyristor, the depletion-mode MOS gated emitter shorted thyristor (DMST), and two of its structures are proposed. In the DMST, the channel of the depletion-mode MOS inherently shorts the thyristor emitter-base junction. The operation of the device is controlled by the interruption and recovery of the depletion-mode MOS P-channel. Excellent properties have been demonstrated by 2-D numerical simulations and by tests on the fabricated chips.
International Nuclear Information System (INIS)
The two alloy systems, namely Ni-Mo-based alloys and Al-Ti alloys, share some common features in that the ordered structures and the ordering processes in both systems can be described in terms of three types of superlattice tiles: squares and fat or lean rhombi. In Ni-Mo-based alloys these represent one-molecule clusters of three fcc superlattice structures: Ni4Mo (D1a), Ni3Mo (D022) and Ni2Mo (Pt2Mo-type), while in Al-Ti they represent two-dimensional Ti4Al, Ti3Al and Ti2Al derivatives on Ti-rich (002) planes of the off-stoichiometric TiAl (L10) phase. The evolution of short range order (SRO), the {1 1/2 0} special-point SRO in the case of Ni-Mo and the incommensurate SRO in the case of the Al-rich TiAl intermetallic alloys, and the evolution of LRO phases from these have been followed using both conventional and high resolution TEM. Corroborative evidence from Monte Carlo simulations will also be presented in order to explain the observed experimental results. The occurrence of antiphase boundaries (APBs) and their energies, as we will see, plays an important role in these transformations. Predominantly two types of APBs occur in the Al5Ti3 phase in Al-rich TiAl; Monte Carlo simulations and the experimental observations reveal both of these, and they play a synergistic role in the formation of Al5Ti3 antiphase domains.
International Nuclear Information System (INIS)
The advent of fast scintillators with high light yield and/or stopping power, along with advances in photomultiplier tubes and electronics, has rekindled interest in time-of-flight (TOF) PET. Because the potential performance improvements offered by TOF PET are substantial, efforts to improve PET timing should prove very fruitful. In this study, we performed Monte Carlo simulations to explore what gains in PET performance could be achieved if the coincidence resolving time (CRT) in the LYSO-based PET component of the Discovery RX PET/CT scanner were improved. For this purpose, the GATE Monte Carlo package was utilized, providing the ability to model and characterize various physical phenomena in PET imaging. Count rate performance and signal to noise ratio (SNR) values at different activity concentrations were simulated for coincidence timing windows of 4, 5.85, 6, 6.5, 8, 10 and 12 ns and for CRTs of 100-900 ps FWHM in 50 ps increments, using the NEMA scatter phantom. Strong evidence supporting the robustness of the simulations was found in the good agreement between measured and simulated data for axial sensitivity, axial and transaxial detection position, gamma non-collinearity angle distribution and positron annihilation distance. In the non-TOF context, the results show that the random event rate can be reduced by using narrower coincidence timing windows, demonstrating considerable enhancement of the peak noise equivalent count rate (NECR) performance. The peak NECR increased by ∼50% when utilizing a coincidence window width of 4 ns. At the same time, utilization of TOF information resulted in improved NECR and SNR with a dramatic reduction of random coincidences as a function of CRT. For example, with a CRT of 500 ps FWHM, a factor of 2.3 reduction in random rates, a factor of 1.5 increase in NECR and a factor of 2.1 improvement in SNR are achievable.
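The non-TOF trend reported above follows directly from the standard noise-equivalent count rate definition, NECR = T^2 / (T + S + R), with the random rate scaling linearly with the coincidence window (R = 2*tau*r1*r2 for a detector pair). A sketch with assumed rates, not Discovery RX data:

```python
def necr(trues, scatters, singles_rate, coinc_window_ns):
    """Noise-equivalent count rate NECR = T^2 / (T + S + R), with the
    random rate estimated as R = 2*tau*r1*r2 for a single detector pair
    (tau = coincidence window width).  All rates in counts/s."""
    tau = coinc_window_ns * 1e-9
    randoms = 2.0 * tau * singles_rate * singles_rate
    return trues**2 / (trues + scatters + randoms)

# illustrative assumed rates: trues, scatters, per-detector singles
trues, scatters, singles = 2.0e5, 1.0e5, 5.0e6
for win in (12, 8, 6, 4):
    print(f"{win} ns window: NECR = {necr(trues, scatters, singles, win):.3e} cps")
```

Narrowing the window from 12 ns to 4 ns cuts the random rate by a factor of three, which is why the peak NECR rises even before any TOF information is used.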
Neural Adaptive Sequential Monte Carlo
Gu, Shixiang; Ghahramani, Zoubin; Turner, Richard E
2015-01-01
Sequential Monte Carlo (SMC), or particle filtering, is a popular class of methods for sampling from an intractable target distribution using a sequence of simpler intermediate distributions. Like other importance sampling-based methods, performance is critically dependent on the proposal distribution: a bad proposal can lead to arbitrarily inaccurate estimates of the target distribution. This paper presents a new method for automatically adapting the proposal using an approximation of the Ku...
The Toxicity of Depleted Uranium
Directory of Open Access Journals (Sweden)
Wayne Briner
2010-01-01
Full Text Available Depleted uranium (DU is an emerging environmental pollutant that is introduced into the environment primarily by military activity. While depleted uranium is less radioactive than natural uranium, it still retains all the chemical toxicity associated with the original element. In large doses the kidney is the target organ for the acute chemical toxicity of this metal, producing potentially lethal tubular necrosis. In contrast, chronic low dose exposure to depleted uranium may not produce a clear and defined set of symptoms. Chronic low-dose, or subacute, exposure to depleted uranium alters the appearance of milestones in developing organisms. Adult animals that were exposed to depleted uranium during development display persistent alterations in behavior, even after cessation of depleted uranium exposure. Adult animals exposed to depleted uranium demonstrate altered behaviors and a variety of alterations to brain chemistry. Despite its reduced level of radioactivity evidence continues to accumulate that depleted uranium, if ingested, may pose a radiologic hazard. The current state of knowledge concerning DU is discussed.
Depletable resources and the economy.
Heijman, W.J.M.
1991-01-01
The subject of this thesis is the depletion of scarce resources. The main question to be answered is how to avoid future resource crises. After dealing with the complex relation between nature and economics, three important concepts in relation with resource depletion are discussed: steady state, ti
International Nuclear Information System (INIS)
In deuterium-deuterium (D-D) and deuterium-tritium (D-T) fusion plasmas neutrons are produced causing activation of JET machine components. For safe operation and maintenance it is important to be able to predict the induced activation and the resulting shut down dose rates. This requires a suitable system of codes which is capable of simulating both the neutron induced material activation during operation and the decay gamma radiation transport after shut-down in the proper 3-D geometry. Two methodologies to calculate the dose rate in fusion devices have been developed recently and applied to fusion machines, both using the MCNP Monte Carlo code. FZK has developed a more classical approach, the rigorous 2-step (R2S) system in which MCNP is coupled to the FISPACT inventory code with an automated routing. ENEA, in collaboration with the ITER Team, has developed an alternative approach, the direct 1 step method (D1S). Neutron and decay gamma transport are handled in one single MCNP run, using an ad hoc cross section library. The intention was to tightly couple the neutron induced production of a radio-isotope and the emission of its decay gammas for an accurate spatial distribution and a reliable calculated statistical error. The two methods have been used by the two Associations to calculate the dose rate in five positions of JET machine, two inside the vacuum chamber and three outside, at cooling times between 1 second and 1 year after shutdown. The same MCNP model and irradiation conditions have been assumed. The exercise has been proposed and financed in the frame of the Fusion Technological Program of the JET machine. The scope is to supply the designers with the most reliable tool and data to calculate the dose rate on fusion machines. Results showed that there is a good agreement: the differences range between 5-35%. The next step to be considered in 2003 will be an exercise in which the comparison will be done with dose-rate data from JET taken during and
Energy Technology Data Exchange (ETDEWEB)
Bieda, Bogusław
2014-05-01
The purpose of the paper is to present the results of application of stochastic approach based on Monte Carlo (MC) simulation for life cycle inventory (LCI) data of Mittal Steel Poland (MSP) complex in Kraków, Poland. In order to assess the uncertainty, the software CrystalBall® (CB), which is associated with Microsoft® Excel spreadsheet model, is used. The framework of the study was originally carried out for 2005. The total production of steel, coke, pig iron, sinter, slabs from continuous steel casting (CSC), sheets from hot rolling mill (HRM) and blast furnace gas, collected in 2005 from MSP was analyzed and used for MC simulation of the LCI model. In order to describe random nature of all main products used in this study, normal distribution has been applied. The results of the simulation (10,000 trials) performed with the use of CB consist of frequency charts and statistical reports. The results of this study can be used as the first step in performing a full LCA analysis in the steel industry. Further, it is concluded that the stochastic approach is a powerful method for quantifying parameter uncertainty in LCA/LCI studies and it can be applied to any steel industry. The results obtained from this study can help practitioners and decision-makers in the steel production management. - Highlights: • The benefits of Monte Carlo simulation are examined. • The normal probability distribution is studied. • LCI data on Mittal Steel Poland (MSP) complex in Kraków, Poland dates back to 2005. • This is the first assessment of the LCI uncertainties in the Polish steel industry.
Monte Carlo Radiative Transfer
Whitney, Barbara A
2011-01-01
I outline methods for calculating the solution of Monte Carlo Radiative Transfer (MCRT) in scattering, absorption and emission processes of dust and gas, including polarization. I provide a bibliography of relevant papers on methods with astrophysical applications.
International Nuclear Information System (INIS)
Models of fermions interacting with classical degrees of freedom are applied to a large variety of systems in condensed matter physics. For this class of models, Weiße [Phys. Rev. Lett. 102, 150604 (2009)] has recently proposed a very efficient numerical method, called O(N) Green-Function-Based Monte Carlo (GFMC) method, where a kernel polynomial expansion technique is used to avoid the full numerical diagonalization of the fermion Hamiltonian matrix of size N, which usually costs O(N3) computational complexity. Motivated by this background, in this paper we apply the GFMC method to the double exchange model in three spatial dimensions. We mainly focus on the implementation of GFMC method using both MPI on a CPU-based cluster and Nvidia's Compute Unified Device Architecture (CUDA) programming techniques on a GPU-based (Graphics Processing Unit based) cluster. The time complexity of the algorithm and the parallel implementation details on the clusters are discussed. We also show the performance scaling for increasing Hamiltonian matrix size and increasing number of nodes, respectively. The performance evaluation indicates that for a 323 Hamiltonian a single GPU shows higher performance equivalent to more than 30 CPU cores parallelized using MPI
Monte Carlo transition probabilities
Lucy, L. B.
2001-01-01
Transition probabilities governing the interaction of energy packets and matter are derived that allow Monte Carlo NLTE transfer codes to be constructed without simplifying the treatment of line formation. These probabilities are such that the Monte Carlo calculation asymptotically recovers the local emissivity of a gas in statistical equilibrium. Numerical experiments with one-point statistical equilibrium problems for Fe II and Hydrogen confirm this asymptotic behaviour. In addition, the re...
MCNPX Monte Carlo burnup simulations of the isotope correlation experiments in the NPP Obrigheim
Energy Technology Data Exchange (ETDEWEB)
Cao Yan, E-mail: ycao@anl.go [Nuclear Engineering Division, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, IL 60439 (United States); Gohar, Yousry [Nuclear Engineering Division, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, IL 60439 (United States); Broeders, Cornelis H.M. [Forschungszentrum Karlsruhe, Institute for Neutron Physics and Reactor Technology, P.O. Box 3640, 76021 Karlsruhe (Germany)
2010-10-15
This paper describes the simulation work of the Isotope Correlation Experiment (ICE) using the MCNPX Monte Carlo computer code package. The Monte Carlo simulation results are compared with the ICE-Experimental measurements for burnup up to 30 GWD/t. The comparison shows the good capabilities of the MCNPX computer code package for predicting the depletion of the uranium fuel and the buildup of the plutonium isotopes in a PWR thermal reactor. The Monte Carlo simulation results show also good agreements with the experimental data for calculating several long-lived and stable fission products. However, for the americium and curium actinides, it is difficult to judge the predication capabilities for these actinides due to the large uncertainties in the ICE-Experimental data. In the MCNPX numerical simulations, a pin cell model is utilized to simulate the fuel lattice of the nuclear power reactor. Temperature dependent libraries based on JEFF3.1 nuclear data files are utilized for the calculations. In addition, temperature dependent libraries based ENDF/B-VII nuclear data files are utilized and the obtained results are very close to the JEFF3.1 results, except for {approx}10% differences in the prediction of the minor actinide isotopes buildup.
Enhancements in Continuous-Energy Monte Carlo Capabilities for SCALE 6.2
Energy Technology Data Exchange (ETDEWEB)
Rearden, Bradley T [ORNL; Petrie Jr, Lester M [ORNL; Peplow, Douglas E. [ORNL; Bekar, Kursat B [ORNL; Wiarda, Dorothea [ORNL; Celik, Cihangir [ORNL; Perfetti, Christopher M [ORNL; Dunn, Michael E [ORNL
2014-01-01
SCALE is a widely used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, industry, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a plug-and-play framework that includes three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE s graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 provides several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, sensitivity and uncertainty analysis, and improved fidelity in nuclear data libraries. A brief overview of SCALE capabilities is provided with emphasis on new features for SCALE 6.2.
Enhancements in continuous-energy Monte Carlo capabilities for SCALE 6.2
International Nuclear Information System (INIS)
SCALE is a widely used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, industry, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a “plug-and-play” framework that includes three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE’s graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 provides several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity/uncertainty analysis, as well as improved fidelity in nuclear data libraries. A brief overview of SCALE capabilities is provided with emphasis on new features for SCALE 6.2. (author)
Adaptive Multilevel Monte Carlo Simulation
Hoel, H
2011-08-23
This work generalizes a multilevel forward Euler Monte Carlo method introduced in Michael B. Giles. (Michael Giles. Oper. Res. 56(3):607–617, 2008.) for the approximation of expected values depending on the solution to an Itô stochastic differential equation. The work (Michael Giles. Oper. Res. 56(3):607– 617, 2008.) proposed and analyzed a forward Euler multilevelMonte Carlo method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a standard, single level, Forward Euler Monte Carlo method. This work introduces an adaptive hierarchy of non uniform time discretizations, generated by an adaptive algorithmintroduced in (AnnaDzougoutov et al. Raùl Tempone. Adaptive Monte Carlo algorithms for stopped diffusion. In Multiscale methods in science and engineering, volume 44 of Lect. Notes Comput. Sci. Eng., pages 59–88. Springer, Berlin, 2005; Kyoung-Sook Moon et al. Stoch. Anal. Appl. 23(3):511–558, 2005; Kyoung-Sook Moon et al. An adaptive algorithm for ordinary, stochastic and partial differential equations. In Recent advances in adaptive computation, volume 383 of Contemp. Math., pages 325–343. Amer. Math. Soc., Providence, RI, 2005.). This form of the adaptive algorithm generates stochastic, path dependent, time steps and is based on a posteriori error expansions first developed in (Anders Szepessy et al. Comm. Pure Appl. Math. 54(10):1169– 1214, 2001). Our numerical results for a stopped diffusion problem, exhibit savings in the computational cost to achieve an accuracy of ϑ(TOL),from(TOL−3), from using a single level version of the adaptive algorithm to ϑ(((TOL−1)log(TOL))2).
Directory of Open Access Journals (Sweden)
P. Li
2013-01-01
Full Text Available The growth of global population and economy continually increases the waste volumes and consequently creates challenges to handle and dispose solid wastes. It becomes more challenging in mixed rural-urban areas (i.e., areas of mixed land use for rural and urban purposes where both agricultural waste (e.g., manure and municipal solid waste are generated. The efficiency and confidence of decisions in current management practices significantly rely on the accurate information and subjective judgments, which are usually compromised by uncertainties. This study proposed a resource-oriented solid waste management system for mixed rural-urban areas. The system is featured by a novel Monte Carlo simulation-based fuzzy programming approach. The developed system was tested by a real-world case with consideration of various resource-oriented treatment technologies and the associated uncertainties. The modeling results indicated that the community-based bio-coal and household-based CH4 facilities were necessary and would become predominant in the waste management system. The 95% confidence intervals of waste loadings to the CH4 and bio-coal facilities were 387, 450 and 178, 215 tonne/day (mixed flow, respectively. In general, the developed system has high capability in supporting solid waste management for mixed rural-urban areas in a cost-efficient and sustainable manner under uncertainty.
SU-E-T-29: A Web Application for GPU-Based Monte Carlo IMRT/VMAT QA with Delivered Dose Verification
Energy Technology Data Exchange (ETDEWEB)
Folkerts, M [The University of Texas Southwestern Medical Ctr, Dallas, TX (United States); University of California, San Diego, La Jolla, CA (United States); Graves, Y [University of California, San Diego, La Jolla, CA (United States); Tian, Z; Gu, X; Jia, X; Jiang, S [The University of Texas Southwestern Medical Ctr, Dallas, TX (United States)
2014-06-01
Purpose: To enable an existing web application for GPU-based Monte Carlo (MC) 3D dosimetry quality assurance (QA) to compute “delivered dose” from linac logfile data. Methods: We added significant features to an IMRT/VMAT QA web application which is based on existing technologies (HTML5, Python, and Django). This tool interfaces with python, c-code libraries, and command line-based GPU applications to perform a MC-based IMRT/VMAT QA. The web app automates many complicated aspects of interfacing clinical DICOM and logfile data with cutting-edge GPU software to run a MC dose calculation. The resultant web app is powerful, easy to use, and is able to re-compute both plan dose (from DICOM data) and delivered dose (from logfile data). Both dynalog and trajectorylog file formats are supported. Users upload zipped DICOM RP, CT, and RD data and set the expected statistic uncertainty for the MC dose calculation. A 3D gamma index map, 3D dose distribution, gamma histogram, dosimetric statistics, and DVH curves are displayed to the user. Additional the user may upload the delivery logfile data from the linac to compute a 'delivered dose' calculation and corresponding gamma tests. A comprehensive PDF QA report summarizing the results can also be downloaded. Results: We successfully improved a web app for a GPU-based QA tool that consists of logfile parcing, fluence map generation, CT image processing, GPU based MC dose calculation, gamma index calculation, and DVH calculation. The result is an IMRT and VMAT QA tool that conducts an independent dose calculation for a given treatment plan and delivery log file. The system takes both DICOM data and logfile data to compute plan dose and delivered dose respectively. Conclusion: We sucessfully improved a GPU-based MC QA tool to allow for logfile dose calculation. The high efficiency and accessibility will greatly facilitate IMRT and VMAT QA.
SU-E-T-29: A Web Application for GPU-Based Monte Carlo IMRT/VMAT QA with Delivered Dose Verification
International Nuclear Information System (INIS)
Purpose: To enable an existing web application for GPU-based Monte Carlo (MC) 3D dosimetry quality assurance (QA) to compute “delivered dose” from linac logfile data. Methods: We added significant features to an IMRT/VMAT QA web application which is based on existing technologies (HTML5, Python, and Django). This tool interfaces with python, c-code libraries, and command line-based GPU applications to perform a MC-based IMRT/VMAT QA. The web app automates many complicated aspects of interfacing clinical DICOM and logfile data with cutting-edge GPU software to run a MC dose calculation. The resultant web app is powerful, easy to use, and is able to re-compute both plan dose (from DICOM data) and delivered dose (from logfile data). Both dynalog and trajectorylog file formats are supported. Users upload zipped DICOM RP, CT, and RD data and set the expected statistic uncertainty for the MC dose calculation. A 3D gamma index map, 3D dose distribution, gamma histogram, dosimetric statistics, and DVH curves are displayed to the user. Additional the user may upload the delivery logfile data from the linac to compute a 'delivered dose' calculation and corresponding gamma tests. A comprehensive PDF QA report summarizing the results can also be downloaded. Results: We successfully improved a web app for a GPU-based QA tool that consists of logfile parcing, fluence map generation, CT image processing, GPU based MC dose calculation, gamma index calculation, and DVH calculation. The result is an IMRT and VMAT QA tool that conducts an independent dose calculation for a given treatment plan and delivery log file. The system takes both DICOM data and logfile data to compute plan dose and delivered dose respectively. Conclusion: We sucessfully improved a GPU-based MC QA tool to allow for logfile dose calculation. The high efficiency and accessibility will greatly facilitate IMRT and VMAT QA
Flampouri, Stella; Jiang, Steve B.; Sharp, Greg C.; Wolfgang, John; Patel, Abhijit A.; Choi, Noah C.
2006-06-01
The purpose of this study is to accurately estimate the difference between the planned and the delivered dose due to respiratory motion and free breathing helical CT artefacts for lung IMRT treatments, and to estimate the impact of this difference on clinical outcome. Six patients with representative tumour motion, size and position were selected for this retrospective study. For each patient, we had acquired both a free breathing helical CT and a ten-phase 4D-CT scan. A commercial treatment planning system was used to create four IMRT plans for each patient. The first two plans were based on the GTV as contoured on the free breathing helical CT set, with a GTV to PTV expansion of 1.5 cm and 2.0 cm, respectively. The third plan was based on the ITV, a composite volume formed by the union of the CTV volumes contoured on free breathing helical CT, end-of-inhale (EOI) and end-of-exhale (EOE) 4D-CT. The fourth plan was based on GTV contoured on the EOE 4D-CT. The prescribed dose was 60 Gy for all four plans. Fluence maps and beam setup parameters of the IMRT plans were used by the Monte Carlo dose calculation engine MCSIM for absolute dose calculation on both the free breathing CT and 4D-CT data. CT deformable registration between the breathing phases was performed to estimate the motion trajectory for both the tumour and healthy tissue. Then, a composite dose distribution over the whole breathing cycle was calculated as a final estimate of the delivered dose. EUD values were computed on the basis of the composite dose for all four plans. For the patient with the largest motion effect, the difference in the EUD of CTV between the planed and the delivered doses was 33, 11, 1 and 0 Gy for the first, second, third and fourth plan, respectively. The number of breathing phases required for accurate dose prediction was also investigated. With the advent of 4D-CT, deformable registration and Monte Carlo simulations, it is feasible to perform an accurate calculation of the
Energy Technology Data Exchange (ETDEWEB)
Flampouri, Stella; Jiang, Steve B; Sharp, Greg C; Wolfgang, John; Patel, Abhijit A; Choi, Noah C [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114 (United States)
2006-06-07
The purpose of this study is to accurately estimate the difference between the planned and the delivered dose due to respiratory motion and free breathing helical CT artefacts for lung IMRT treatments, and to estimate the impact of this difference on clinical outcome. Six patients with representative tumour motion, size and position were selected for this retrospective study. For each patient, we had acquired both a free breathing helical CT and a ten-phase 4D-CT scan. A commercial treatment planning system was used to create four IMRT plans for each patient. The first two plans were based on the GTV as contoured on the free breathing helical CT set, with a GTV to PTV expansion of 1.5 cm and 2.0 cm, respectively. The third plan was based on the ITV, a composite volume formed by the union of the CTV volumes contoured on free breathing helical CT, end-of-inhale (EOI) and end-of-exhale (EOE) 4D-CT. The fourth plan was based on GTV contoured on the EOE 4D-CT. The prescribed dose was 60 Gy for all four plans. Fluence maps and beam setup parameters of the IMRT plans were used by the Monte Carlo dose calculation engine MCSIM for absolute dose calculation on both the free breathing CT and 4D-CT data. CT deformable registration between the breathing phases was performed to estimate the motion trajectory for both the tumour and healthy tissue. Then, a composite dose distribution over the whole breathing cycle was calculated as a final estimate of the delivered dose. EUD values were computed on the basis of the composite dose for all four plans. For the patient with the largest motion effect, the difference in the EUD of CTV between the planed and the delivered doses was 33, 11, 1 and 0 Gy for the first, second, third and fourth plan, respectively. The number of breathing phases required for accurate dose prediction was also investigated. With the advent of 4D-CT, deformable registration and Monte Carlo simulations, it is feasible to perform an accurate calculation of the
International Nuclear Information System (INIS)
The purpose of this study is to accurately estimate the difference between the planned and the delivered dose due to respiratory motion and free breathing helical CT artefacts for lung IMRT treatments, and to estimate the impact of this difference on clinical outcome. Six patients with representative tumour motion, size and position were selected for this retrospective study. For each patient, we had acquired both a free breathing helical CT and a ten-phase 4D-CT scan. A commercial treatment planning system was used to create four IMRT plans for each patient. The first two plans were based on the GTV as contoured on the free breathing helical CT set, with a GTV to PTV expansion of 1.5 cm and 2.0 cm, respectively. The third plan was based on the ITV, a composite volume formed by the union of the CTV volumes contoured on free breathing helical CT, end-of-inhale (EOI) and end-of-exhale (EOE) 4D-CT. The fourth plan was based on GTV contoured on the EOE 4D-CT. The prescribed dose was 60 Gy for all four plans. Fluence maps and beam setup parameters of the IMRT plans were used by the Monte Carlo dose calculation engine MCSIM for absolute dose calculation on both the free breathing CT and 4D-CT data. CT deformable registration between the breathing phases was performed to estimate the motion trajectory for both the tumour and healthy tissue. Then, a composite dose distribution over the whole breathing cycle was calculated as a final estimate of the delivered dose. EUD values were computed on the basis of the composite dose for all four plans. For the patient with the largest motion effect, the difference in the EUD of CTV between the planed and the delivered doses was 33, 11, 1 and 0 Gy for the first, second, third and fourth plan, respectively. The number of breathing phases required for accurate dose prediction was also investigated. With the advent of 4D-CT, deformable registration and Monte Carlo simulations, it is feasible to perform an accurate calculation of the
Monte Carlo methods for particle transport
Haghighat, Alireza
2015-01-01
The Monte Carlo method has become the de facto standard in radiation transport. Although powerful, if not understood and used appropriately, the method can give misleading results. Monte Carlo Methods for Particle Transport teaches appropriate use of the Monte Carlo method, explaining the method's fundamental concepts as well as its limitations. Concise yet comprehensive, this well-organized text: * Introduces the particle importance equation and its use for variance reduction * Describes general and particle-transport-specific variance reduction techniques * Presents particle transport eigenvalue issues and methodologies to address these issues * Explores advanced formulations based on the author's research activities * Discusses parallel processing concepts and factors affecting parallel performance Featuring illustrative examples, mathematical derivations, computer algorithms, and homework problems, Monte Carlo Methods for Particle Transport provides nuclear engineers and scientists with a practical guide ...
Smart detectors for Monte Carlo radiative transfer
Baes, Maarten
2008-01-01
Many optimization techniques have been invented to reduce the noise that is inherent in Monte Carlo radiative transfer simulations. As the typical detectors used in Monte Carlo simulations do not take into account all the information contained in the impacting photon packages, there is still room to optimize this detection process and the corresponding estimate of the surface brightness distributions. We want to investigate how all the information contained in the distribution of impacting photon packages can be optimally used to decrease the noise in the surface brightness distributions and hence to increase the efficiency of Monte Carlo radiative transfer simulations. We demonstrate that the estimate of the surface brightness distribution in a Monte Carlo radiative transfer simulation is similar to the estimate of the density distribution in an SPH simulation. Based on this similarity, a recipe is constructed for smart detectors that take full advantage of the exact location of the impact of the photon pack...
Tseung, H Wan Chan; Kreofsky, C R; Ma, D; Beltran, C
2016-01-01
Purpose: To demonstrate the feasibility of fast Monte Carlo (MC) based inverse biological planning for the treatment of head and neck tumors in spot-scanning proton therapy. Methods: Recently, a fast and accurate Graphics Processor Unit (GPU)-based MC simulation of proton transport was developed and used as the dose calculation engine in a GPU-accelerated IMPT optimizer. Besides dose, the dose-averaged linear energy transfer (LETd) can be simultaneously scored, which makes biological dose (BD) optimization possible. To convert from LETd to BD, a linear relation was assumed. Using this novel optimizer, inverse biological planning was applied to 4 patients: 2 small and 1 large thyroid tumor targets, and 1 glioma case. To create these plans, constraints were placed to maintain the physical dose (PD) within 1.25 times the prescription while maximizing target BD. For comparison, conventional IMRT and IMPT plans were created for each case in Eclipse (Varian, Inc). The same critical structure PD constraints were use...
Babilas, Rafał; Mariola, Kądziołka-Gaweł; Burian, Andrzej; Temleitner, László
2016-05-01
Selected soft magnetic amorphous alloys Fe80B20, Fe70Nb10B20 and Fe62Nb8B30 were produced by the melt-spinning and characterized by X-ray diffraction (XRD), transmission Mössbauer spectroscopy (MS), Reverse Monte Carlo modeling (RMC) and relative magnetic permeability measurements. The Mössbauer spectroscopy allowed to study the local environments of the Fe-centered atoms in the amorphous structure of binary and ternary glassy alloys. The MS provided also information about the changes in the amorphous structure due to the modification of chemical composition by various boron and niobium content. The RMC simulation based on the structure factors determined by synchrotron XRD measurements was also used in modeling of the atomic arrangements and short-range order in Fe-based model alloys. Addition of boron and niobium in the ternary model alloys affected the disorder in as-cast state and also influenced on the number of nearest neighbor Fe-Fe atoms, consequently. The distributions of Fe- and B-centered coordination numbers showed that N=10, 9 and 8 are dominated around Fe atoms and N=9, 8 and 7 had the largest population around B atoms in the examined amorphous alloys. Moreover, the relationship between the content of the alloying elements, the local atomic ordering and the magnetic permeability (magnetic after-effects) was mentioned.
Monte Carlo-based Bragg-Gray tissue-to-air mass-collision-stopping power ratios for ISO beta sources
International Nuclear Information System (INIS)
Quantity of interest in external beta radiation protection is the absorbed dose rate to tissue at a depth of 7 mg/cm2 Dt (7 mg/cm2) in a 4-element ICRU (International Commission for Radiation Units and Measurements) unit density tissue phantom. ISO (International Organization for Standardization) 6980-2 provides guidelines to establish this quantity for beta emitters using an extrapolation chamber as a primary standard. ISO 6980-1 proposes two series of beta reference radiation fields, namely, series 1 and series 2. Series 1 covers 90Sr/90Y, 85Kr, 204Tl and 147Pm sources used with beam flattening filter and Series 2 covers 14C and 106Ru/106Rh sources used with beam flattening filter. Dt (7 mg/cm2) is realized based on measured current and set of corrections including Bragg-Gray tissue-to-air mass-stopping power ratio, (S/ρ)t,a. ISO provides (S/ρ)t,a values which are based on approximate methods. The present study is aimed at calculating (S/ρ)t,a for 90Sr/90Y, 85Kr, 106Ru/106Rh and 147Pm sources using the Monte Carlo (MC) methods and compare the same against the ISO values. By definition, (S/ρ)t,a should be independent of cavity length of the chamber which was verified in the work
Analysis of Investment Risk Based on Monte Carlo Method
Institute of Scientific and Technical Information of China (English)
王霞; 张本涛; 马庆
2011-01-01
Taking the net present value (NPV) as the economic evaluation index, this paper measures the risk of investment projects. The probability distributions of the influencing factors are determined, and a stochastic risk-evaluation model based on the triangular distribution is established. The model is simulated with the Monte Carlo method, implemented in MATLAB, to obtain the frequency-distribution histogram and cumulative-frequency curve of the project's NPV. Statistical analysis of the simulation results yields the mean predicted NPV and the risk rate, providing a theoretical basis for evaluating the risk of investment projects.
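The stochastic NPV procedure described in this abstract can be sketched outside MATLAB as well. The following Python sketch follows the same steps (triangular distributions for the uncertain factors, Monte Carlo sampling, then the mean NPV and risk rate); the cash-flow figures, distribution bounds and discount rate are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

def npv(rate, cashflows):
    # Discount a series of yearly cash flows (year 0 = initial outlay).
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

n_trials = 10_000
# Each uncertain factor is modeled by a triangular(min, mode, max) distribution.
invest = rng.triangular(-1200, -1000, -900, n_trials)   # initial outlay
annual = rng.triangular(180, 250, 320, (n_trials, 6))   # 6 years of net revenue
rate = 0.08                                             # assumed discount rate

npvs = np.array([npv(rate, [invest[i], *annual[i]]) for i in range(n_trials)])

mean_npv = npvs.mean()          # mean predicted NPV
risk_rate = (npvs < 0).mean()   # probability the project has negative NPV
```

A histogram of `npvs` and its cumulative frequency curve correspond to the two plots the paper derives from its MATLAB implementation.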
2009-01-01
Carlo Rubbia turned 75 on March 31, and CERN held a symposium to mark his birthday and pay tribute to his impressive contribution to both CERN and science. Carlo Rubbia, 4th from right, together with the speakers at the symposium.On 7 April CERN hosted a celebration marking Carlo Rubbia’s 75th birthday and 25 years since he was awarded the Nobel Prize for Physics. "Today we will celebrate 100 years of Carlo Rubbia" joked CERN’s Director-General, Rolf Heuer in his opening speech, "75 years of his age and 25 years of the Nobel Prize." Rubbia received the Nobel Prize along with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. During the symposium, which was held in the Main Auditorium, several eminent speakers gave lectures on areas of science to which Carlo Rubbia made decisive contributions. Among those who spoke were Michel Spiro, Director of the French National Insti...
International Nuclear Information System (INIS)
A simulation toolkit, GATE (Geant4 Application for Tomographic Emission), was used to develop an accurate Monte Carlo (MC) simulation of a fully integrated 3T PET/MR hybrid imaging system (Siemens Biograph mMR). The PET/MR components of the Biograph mMR were simulated in order to allow a detailed study of the effect of variations in the system design on the PET performance, which is not easy to access and measure on a real PET/MR system. The 3T static magnetic field of the MR system was taken into account in all Monte Carlo simulations. The validation of the MC model was carried out against actual measurements performed on the PET/MR system by following the NEMA (National Electrical Manufacturers Association) NU 2-2007 standard. The comparison of simulated and experimental performance measurements included spatial resolution, sensitivity, scatter fraction, and count-rate capability. The validated system model was then used for two different applications. The first application focused on investigating the effect of an extension of the PET field-of-view on the PET performance of the PET/MR system. The second application dealt with simulating a modified system timing resolution and coincidence time window of the PET detector electronics in order to simulate time-of-flight (TOF) PET detection. A dedicated phantom was modeled to investigate the impact of TOF on overall PET image quality. Simulation results showed that the overall divergence between simulated and measured data was less than 10%. Varying the detector geometry showed that the system sensitivity and noise-equivalent count rate of the PET/MR system increased progressively with an increasing number of axial detector block rings, as expected. TOF-based PET reconstructions of the modeled phantom showed an improvement in signal-to-noise ratio and image contrast compared to the conventional non-TOF PET reconstructions. In conclusion, the validated MC simulation model of an integrated PET/MR system with an overall
International Nuclear Information System (INIS)
Measurement-based Monte Carlo (MBMC) simulation using a high-definition (HD) phantom was used to evaluate the dose distribution in nasopharyngeal cancer (NPC) patients treated with intensity-modulated radiation therapy (IMRT). Around the nasopharyngeal cavity there exist many small-volume organs-at-risk (OARs), such as the optic nerves, auditory nerves, cochlea, and semicircular canals, which necessitate the use of a high-definition phantom for accurate and correct dose evaluation. The aim of this research was to study the advantages of using an HD phantom for MBMC simulation in NPC patients treated with IMRT. The MBMC simulation in this study was based on the IMRT treatment plans of three NPC patients generated by the anisotropic analytical algorithm (AAA) of the Eclipse treatment planning system (Varian Medical Systems, Palo Alto, CA, USA) using a calculation grid of 2 mm2. The NPC tumor was treated to a cumulative dose of 7000 cGy in 35 fractions using the shrinking-field sequential IMRT (SIMRT) method. The BEAMnrc MC code was used to simulate a Varian EX21 linear accelerator treatment head. The HD phantom contained 0.5 × 0.5 × 1 mm3 voxels for the nasopharyngeal area and 0.5 × 0.5 × 3 mm3 voxels for the rest of the head area. An efficiency map was obtained for the amorphous silicon aS1000 electronic portal imaging device (EPID) to adjust the weighting of each particle in the phase-space file for each IMRT beam. Our analysis revealed that small-volume organs such as the eighth cranial nerve, semicircular canal, cochlea and external auditory canal showed an absolute dose difference of ≥200 cGy, while the dose difference for larger organs such as the parotid glands and tumor was negligible for the MBMC simulation using the HD phantom. The HD phantom was found to be suitable for Monte Carlo dose-volume analysis of small-volume organs. - Highlights: • HD dose evaluation for IMRT of NPC patients has been verified by the MC method. • MC results show higher
Running Out Of and Into Oil. Analyzing Global Oil Depletion and Transition Through 2050
Energy Technology Data Exchange (ETDEWEB)
Greene, David L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Hopson, Janet L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Li, Jia [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2003-10-01
This report presents a risk analysis of world conventional oil resource production, depletion, expansion, and a possible transition to unconventional oil resources such as oil sands, heavy oil and shale oil over the period 2000 to 2050. The risk analysis uses Monte Carlo simulation methods to produce a probability distribution of outcomes rather than a single value.
Depletable resources and the economy.
Heijman, W. J. M.
1991-01-01
The subject of this thesis is the depletion of scarce resources. The main question to be answered is how to avoid future resource crises. After dealing with the complex relation between nature and economics, three important concepts in relation with resource depletion are discussed: steady state, time preference and efficiency. For the steady state, three variants are distinguished: the stationary state, the physical steady state and the state of steady growth. It is concluded that the so-call...
Mainhagu, J.; Brusseau, M. L.
2016-09-01
The mass of contaminant present at a site, particularly in the source zones, is one of the key parameters for assessing the risk posed by contaminated sites, and for setting and evaluating remediation goals and objectives. This quantity is rarely known and is challenging to estimate accurately. This work investigated the efficacy of fitting mass-depletion functions to temporal contaminant mass discharge (CMD) data as a means of estimating initial mass. Two common mass-depletion functions, exponential and power functions, were applied to historic soil vapor extraction (SVE) CMD data collected from 11 contaminated sites for which the SVE operations are considered to be at or close to essentially complete mass removal. The functions were applied to the entire available data set for each site, as well as to the early-time data (the initial 1/3 of the data available). Additionally, a complete differential-time analysis was conducted. The latter two analyses were conducted to investigate the impact of limited data on method performance, given that the primary mode of application would be to use the method during the early stages of a remediation effort. The estimated initial masses were compared to the total masses removed for the SVE operations. The mass estimates obtained from application to the full data sets were reasonably similar to the measured masses removed for both functions (13 and 15% mean error). The use of the early-time data resulted in a minimally higher variation for the exponential function (17%) but a much higher error (51%) for the power function. These results suggest that the method can produce reasonable estimates of initial mass useful for planning and assessing remediation efforts.
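The fitting procedure described above (fit a mass-depletion function to CMD data, compare the full-record fit against an early-time fit using the first third of the data) can be sketched as follows. The data here are synthetic, generated from an assumed exponential depletion with noise, not the SVE records from the eleven sites:

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_cmd(t, m0, k):
    # Exponential mass-depletion: discharge rate k * m0 * exp(-k t),
    # whose integral over all time equals the initial mass m0.
    return m0 * k * np.exp(-k * t)

# Synthetic contaminant-mass-discharge record (illustrative numbers).
true_m0, true_k = 5000.0, 0.15          # kg, 1/month
t = np.linspace(0, 60, 40)              # five years of monitoring data
rng = np.random.default_rng(1)
cmd = exp_cmd(t, true_m0, true_k) * rng.normal(1.0, 0.03, t.size)

# Full-record fit: initial mass is the fitted m0.
(m0_hat, k_hat), _ = curve_fit(exp_cmd, t, cmd, p0=(1000.0, 0.1))

# Early-time fit: use only the first third of the record, as the paper
# does to mimic applying the method early in a remediation effort.
n = t.size // 3
(m0_early, _), _ = curve_fit(exp_cmd, t[:n], cmd[:n], p0=(1000.0, 0.1))

err_full = abs(m0_hat - true_m0) / true_m0
err_early = abs(m0_early - true_m0) / true_m0
```

The power-function variant is fitted the same way with a different model function; as the abstract notes, it is more sensitive to using early-time data only.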
Oliveira, Carlos; Yoriyaz, Hélio; Oliveira, M. Carmo; Ferreira, L. M.
2004-01-01
In preview works the Portuguese Gamma Irradiation Facility, UTR, has been simulated using the MCNP code and the product to be irradiated has been drawn using the boolean operators with the MCNP surfaces. However, sometimes the product to be irradiated could have an irregular shape. The paper describes an alternative way for drawing the corresponding volume based on CT image data in a format of a 3D matrix of voxels. This data are read by a specific code called SCMS which transforms it into a MCNP input file. The dimensions of each MCNP voxel depend on the number of elements in the CT-based matrix. Additionally, the new approach allows one to know dose distributions anywhere without extra definitions of surfaces or volumes. Experimental dose measurements were carried out using Amber Perspex dosimeters. This work presents the results of MCNP simulations using both modeling modes - the standard mode and the voxel mode.
How Depleted is the MORB mantle?
Hofmann, A. W.; Hart, S. R.
2015-12-01
Knowledge of the degree of mantle depletion of highly incompatible elements is critically important for assessing Earth's internal heat production and Urey number. Current views of the degree of MORB source depletion are dominated by Salters and Stracke (2004), and Workman and Hart (2005). The first is based on an assessment of average MORB compositions, whereas the second considers trace element data of oceanic peridotites. Both require an independent determination of one absolute concentration, Lu (Salters & Stracke), or Nd (Workman & Hart). Both use parent-daughter ratios Lu/Hf, Sm/Nd, and Rb/Sr calculated from MORB isotopes combined with continental-crust extraction models, as well as "canonical" trace element ratios, to boot-strap the full range of trace element abundances. We show that the single most important factor in determining the ultimate degree of incompatible element depletion in the MORB source lies in the assumptions about the timing of continent extraction, exemplified by continuous extraction versus simple two-stage models. Continued crust extraction generates additional, recent mantle depletion, without affecting the isotopic composition of the residual mantle significantly. Previous emphasis on chemical compositions of MORB and/or peridotites has tended to obscure this. We will explore the effect of different continent extraction models on the degree of U, Th, and K depletion in the MORB source. Given the uncertainties of the two most popular models, the uncertainties of U and Th in DMM are at least ±50%, and this impacts the constraints on the terrestrial Urey ratio. Salters, F.J.M. and Stracke, A., 2004, Geochem. Geophys. Geosyst. 5, Q05004. Workman, R.K. and Hart, S.R., 2005, EPSL 231, 53-72.
Energy Technology Data Exchange (ETDEWEB)
Kurosu, Keita [Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, Suita, Osaka 565-0871 (Japan); Department of Radiation Oncology, Osaka University Graduate School of Medicine, Suita, Osaka 565-0871 (Japan); Takashina, Masaaki; Koizumi, Masahiko [Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, Suita, Osaka 565-0871 (Japan); Das, Indra J. [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN 46202 (United States); Moskvin, Vadim P., E-mail: vadim.p.moskvin@gmail.com [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN 46202 (United States)
2014-10-01
Although the three general-purpose Monte Carlo (MC) simulation tools Geant4, FLUKA and PHITS have been used extensively, differences in calculation results have been reported. The major causes are the implementation of the physical models, the preset value of the ionization potential, and the definition of the maximum step size. In order to achieve artifact-free MC simulation, an optimized parameter list for each simulation system is required. Several authors have already proposed such optimized lists, but those studies were performed with a simple system, such as a water phantom alone. Since particle beams undergo transport, interaction and electromagnetic processes during beam delivery, establishing an optimized parameter list for the whole beam-delivery system is therefore of major importance. The purpose of this study was to determine the optimized parameter lists for GATE and PHITS using a proton treatment nozzle computational model. The simulation was performed with a broad scanning proton beam. The influences of the customizing parameters on the percentage depth dose (PDD) profile and the proton range were investigated by comparison with the results of FLUKA, and the optimal parameters were then determined. The PDD profile and the proton range obtained from our optimized parameter list showed different characteristics from the results obtained with the simple system. This led to the conclusion that the physical model, particle transport mechanics and different geometry-based descriptions need accurate customization in planning computational experiments for artifact-free MC simulation.
Energy Technology Data Exchange (ETDEWEB)
Kim, Song Hyun; Woo, Myeong Hyun; Shin, Chang Ho [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Lee, Hyun Chul; Noh, Jea Man [Hanyang University, Seoul (Korea, Republic of)
2015-05-15
The Monte Carlo (MC) method is a stochastic approach that estimates particle transport behavior by simulating each particle individually. When fissionable materials are included in an MC neutron simulation, they can act as potential neutron sources. To avoid infinite repetition and to sample the fission neutron source positions, the MC power iteration method has generally been used. However, it is well known that the power iteration method suffers from problems of fission source convergence and real variance. In addition, for MC simulation with the power iteration method, the simulation information in each cycle must be gathered for use in the next cycle. To overcome these problems, many methods have been studied, including diagnostics and acceleration of fission source convergence and estimation of the real variance. The key idea of the proposed method is that the fuel region is divided into coarse and fine meshes; the eigenvalue calculation is then pursued using a coarse-mesh-based fission matrix with fixed-source MC simulation. As a result, the eigenvalue calculation can be performed without the source convergence diagnostics and real variance problems that occur with the MC power iteration method. The proposed method also solves the computational memory problem of fission matrix generation.
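The fission-matrix idea at the core of this abstract can be illustrated in a few lines: once a coarse-mesh fission matrix has been tallied from fixed-source histories, k-effective is the dominant eigenvalue of that matrix and the matching eigenvector is the fundamental-mode fission source. The matrix below is made up for illustration, not a real tally:

```python
import numpy as np

# Illustrative 4-region coarse-mesh fission matrix: F[i, j] is the expected
# number of fission neutrons produced in region i per fission neutron born
# in region j (invented values, not tallied from an actual MC run).
F = np.array([
    [0.60, 0.20, 0.05, 0.01],
    [0.20, 0.55, 0.20, 0.05],
    [0.05, 0.20, 0.55, 0.20],
    [0.01, 0.05, 0.20, 0.60],
])

# k-effective is the dominant eigenvalue of F; no cycle-by-cycle power
# iteration over particle histories is needed once F is known.
eigvals, eigvecs = np.linalg.eig(F)
i = np.argmax(eigvals.real)
k_eff = eigvals.real[i]
source = np.abs(eigvecs[:, i].real)
source /= source.sum()          # normalized fundamental-mode fission source
```

By Perron-Frobenius, a matrix with positive entries has a positive dominant eigenvector, so `source` is a valid (normalized) source distribution over the coarse meshes.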
Montanari, Davide; Silvestri, Chiara; Graves, Yan J; Yan, Hao; Cervino, Laura; Rice, Roger; Jiang, Steve B; Jia, Xun
2013-01-01
Cone beam CT (CBCT) has been widely used for patient setup in image guided radiation therapy (IGRT). Radiation dose from CBCT scans has become a clinical concern. The purposes of this study are 1) to commission a GPU-based Monte Carlo (MC) dose calculation package gCTD for Varian On-Board Imaging (OBI) system and test the calculation accuracy, and 2) to quantitatively evaluate CBCT dose from the OBI system in typical IGRT scan protocols. We first conducted dose measurements in a water phantom. X-ray source model parameters used in gCTD are obtained through a commissioning process. gCTD accuracy is demonstrated by comparing calculations with measurements in water and in CTDI phantoms. 25 brain cancer patients are used to study dose in a standard-dose head protocol, and 25 prostate cancer patients are used to study dose in pelvis protocol and pelvis spotlight protocol. Mean dose to each organ is calculated. Mean dose to 2% voxels that have the highest dose is also computed to quantify the maximum dose. It is fo...
Bieda, Bogusław
2014-05-15
The purpose of this paper is to present the results of applying a stochastic approach based on Monte Carlo (MC) simulation to the life cycle inventory (LCI) data of the Mittal Steel Poland (MSP) complex in Kraków, Poland. In order to assess the uncertainty, the software Crystal Ball® (CB), which works with a Microsoft® Excel spreadsheet model, is used. The framework of the study was originally developed for 2005. The total production of steel, coke, pig iron, sinter, slabs from continuous steel casting (CSC), sheets from the hot rolling mill (HRM) and blast furnace gas, collected in 2005 from MSP, was analyzed and used for MC simulation of the LCI model. In order to describe the random nature of all main products used in this study, a normal distribution was applied. The results of the simulation (10,000 trials) performed with CB consist of frequency charts and statistical reports. The results of this study can be used as the first step in performing a full LCA analysis in the steel industry. Further, it is concluded that the stochastic approach is a powerful method for quantifying parameter uncertainty in LCA/LCI studies and that it can be applied to any steel industry. The results obtained from this study can help practitioners and decision-makers in steel production management. PMID:24290145
Jung, Joo-Young; Lu, Bo; Yoon, Do-Kun; Hong, Key Jo; Jang, HongSeok; Liu, Chihray; Suh, Tae Suk
2016-04-01
We confirmed the feasibility of using our proposed system to extract two different kinds of functional images from a positron emission tomography (PET) module by using an insertable collimator during boron neutron capture therapy (BNCT). Coincidence events from a tumor region that included boron particles were identified by a PET scanner before BNCT; subsequently, the prompt gamma-ray events from the same tumor region were collected after exposure to an external neutron beam through an insertable collimator on the PET detector. Five tumor regions that contained boron particles and were located in the water phantom and in the BNCT system with the PET module were simulated with a Monte Carlo simulation code, and the acquired images were quantitatively analyzed. Based on the receiver operating characteristic (ROC) curves in the five boron regions A, B, C, D, and E, the values for the PET and single-photon images were 10.2%, 11.7%, 8.2% (center region), 12.6%, and 10.5%, respectively. We were able to simultaneously acquire PET and single prompt-photon images for tumor-region monitoring by using an insertable collimator without any additional isotopes. PMID:26970679
Kadoura, Ahmad Salim
2013-06-01
In this work, a method to estimate the solubility of solid elemental sulfur in pure gases and gas mixtures using Monte Carlo (MC) molecular simulation is proposed. The method is based on the isobaric-isothermal (NPT) ensemble and the Widom insertion technique for the gas phase, and on a continuum model for the solid phase. This approach avoids the high rejection rates that are usually encountered when simulating with the Gibbs ensemble. The method is tested on a system made of pure hydrogen sulfide gas (H2S) and solid elemental sulfur; however, the technique may be used for other solid-vapor systems provided the fugacity of the solid phase is known (e.g., from experimental work). Given the solid fugacity at the desired pressure and temperature, the mole fraction of the solid dissolved in the gas that would be in chemical equilibrium with the solid phase can be obtained. In other words, a set of MC molecular simulation experiments is conducted on a single box at the given pressure and temperature for different mole fractions of the solute. The fugacity of the gas mixture is determined using the Widom insertion method and is compared with that predetermined for the solid phase until the mole fraction that achieves the required fugacity is found. In this work, several MC examples have been conducted and compared with experimental data, and the Lennard-Jones parameters of the sulfur molecule model (ɛ, σ) have been optimized to achieve a better match with the experimental work.
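The Widom test-particle step used above is compact enough to sketch. In this sketch the gas configurations are sampled uniformly, which is only defensible at very low density (a full NVT Markov chain would be needed otherwise), and the particle count, box size and Lennard-Jones parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def lj(r2, eps=1.0, sig=1.0):
    # Lennard-Jones pair energy as a function of squared distance.
    s6 = (sig * sig / r2) ** 3
    return 4.0 * eps * (s6 * s6 - s6)

def widom_mu_ex(positions, box, beta, n_insert=2000):
    # Widom insertion: mu_ex = -kT * ln < exp(-beta * dU) >, where dU is
    # the interaction energy of a ghost particle inserted at random.
    boltz = np.empty(n_insert)
    for k in range(n_insert):
        trial = rng.random(3) * box
        d = positions - trial
        d -= box * np.round(d / box)            # minimum-image convention
        r2 = np.clip((d * d).sum(axis=1), 1e-12, None)
        boltz[k] = np.exp(-beta * lj(r2).sum())
    return -np.log(boltz.mean()) / beta

# Very dilute gas, so uniform configurations approximate the NVT ensemble.
N, box, beta = 20, 30.0, 1.0                    # reduced LJ units
positions = rng.random((N, 3)) * box
mu_ex = widom_mu_ex(positions, box, beta)       # excess chemical potential
```

The fugacity follows from the chemical potential (f = rho * kT * exp(beta * mu_ex) in these reduced units), which is the quantity the paper matches against the predetermined solid-phase fugacity.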
Chao, T. C.; Xu, X. G.
2001-04-01
VIP-Man is a whole-body anatomical model newly developed at Rensselaer from the high-resolution colour images of the National Library of Medicine's Visible Human Project. This paper summarizes the use of VIP-Man and the Monte Carlo method to calculate specific absorbed fractions from internal electron emitters. A specially designed EGS4 user code, named EGS4-VLSI, was developed to use the extremely large number of image data contained in the VIP-Man. Monoenergetic and isotropic electron emitters with energies from 100 keV to 4 MeV are considered to be uniformly distributed in 26 organs. This paper presents, for the first time, results of internal electron exposures based on a realistic whole-body tomographic model. Because VIP-Man has many organs and tissues that were previously not well defined (or not available) in other models, the efforts at Rensselaer and elsewhere bring an unprecedented opportunity to significantly improve the internal dosimetry.
Pineda Rojas, Andrea L.; Venegas, Laura E.; Mazzeo, Nicolás A.
2016-09-01
A simple urban air quality model [MODelo de Dispersión Atmosférica Urbana - Generic Reaction Set (DAUMOD-GRS)] was recently developed. One-hour peak O3 concentrations in the Metropolitan Area of Buenos Aires (MABA) during the summer estimated with the DAUMOD-GRS model have shown values lower than 20 ppb (the regional background concentration) in the urban area and levels greater than 40 ppb in its surroundings. Due to the lack of measurements outside the MABA, these relatively high modelled ozone concentrations constitute the only estimate for the area. In this work, a methodology based on Monte Carlo analysis is implemented to evaluate the uncertainty in these modelled concentrations associated with possible errors in the model input data. Results show that the larger 1-h peak O3 levels in the MABA during the summer present larger uncertainties (up to 47 ppb). In addition, multiple linear regression analysis is applied at selected receptors in order to identify the variables explaining most of the obtained variance. Although their relative contributions vary spatially, the uncertainty of the regional background O3 concentration dominates at all the analysed receptors (34.4-97.6%), indicating that its estimation could be improved to enhance the ability of the model to simulate peak O3 concentrations in the MABA.
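The two-step procedure described above (Monte Carlo propagation of input errors through the model, then multiple linear regression to apportion the output variance among the inputs) can be sketched with a toy stand-in for the dispersion model. The model form and the input uncertainties below are assumptions for illustration only, not DAUMOD-GRS:

```python
import numpy as np

rng = np.random.default_rng(3)

def model(background, emission, wind):
    # Toy stand-in for the dispersion model: peak O3 as a nonlinear
    # combination of regional background, emissions and wind speed.
    return background + 30.0 * emission / (1.0 + wind)

n = 5000
# Assumed input uncertainties (illustrative, not from the paper).
background = rng.normal(20.0, 5.0, n)   # regional background O3, ppb
emission = rng.normal(1.0, 0.2, n)      # emission factor, relative units
wind = rng.normal(3.0, 0.5, n)          # wind speed, m/s

o3 = model(background, emission, wind)  # Monte Carlo sample of peak O3

# Attribute output variance to inputs via multiple linear regression on
# standardized variables; squared coefficients give variance shares.
X = np.column_stack([background, emission, wind])
Xc = (X - X.mean(axis=0)) / X.std(axis=0)
yc = (o3 - o3.mean()) / o3.std()
coef, *_ = np.linalg.lstsq(Xc, yc, rcond=None)
contrib = coef**2 / (coef**2).sum()     # normalized variance shares
```

With these assumed uncertainties the background term dominates the variance shares, mirroring the qualitative finding reported for the MABA receptors.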
Sahu, Nityananda; Gadre, Shridhar R; Rakshit, Avijit; Bandyopadhyay, Pradipta; Miliordos, Evangelos; Xantheas, Sotiris S
2014-10-28
We report new global minimum candidate structures for the (H2O)25 cluster that are lower in energy than the ones reported previously and correspond to hydrogen bonded networks with 42 hydrogen bonds and an interior, fully coordinated water molecule. These were obtained as a result of a hierarchical approach based on initial Monte Carlo Temperature Basin Paving sampling of the cluster's Potential Energy Surface with the Effective Fragment Potential, subsequent geometry optimization using the Molecular Tailoring Approach with the fragments treated at the second order Møller-Plesset (MP2) perturbation (MTA-MP2) and final refinement of the entire cluster at the MP2 level of theory. The MTA-MP2 optimized cluster geometries, constructed from the fragments, were found to be within <0.5 kcal/mol from the minimum geometries obtained from the MP2 optimization of the entire (H2O)25 cluster. In addition, the grafting of the MTA-MP2 energies yields electronic energies that are within <0.3 kcal/mol from the MP2 energies of the entire cluster while preserving their energy rank order. Finally, the MTA-MP2 approach was found to reproduce the MP2 harmonic vibrational frequencies, constructed from the fragments, quite accurately when compared to the MP2 ones of the entire cluster in both the HOH bending and the OH stretching regions of the spectra.
Lee, Seung-Wan; Choi, Yu-Na; Cho, Hyo-Min; Lee, Young-Jin; Ryu, Hyun-Ju; Kim, Hee-Joung
2012-08-01
The energy-resolved photon counting detector provides spectral information that can be used to generate images. Novel imaging methods, including K-edge imaging, projection-based energy weighting imaging and image-based energy weighting imaging, are based on the energy-resolved photon counting detector and can be realized by using various energy windows or energy bins. The location and width of the energy windows or energy bins are important because these techniques generate an image using the spectral information they define. In this study, the reconstructed images acquired with K-edge imaging, projection-based energy weighting imaging and image-based energy weighting imaging were simulated using Monte Carlo simulation. The effect of the energy windows or energy bins was investigated with respect to the contrast, coefficient of variation (COV) and contrast-to-noise ratio (CNR), and the three kinds of images were compared with respect to the CNR. We modeled an x-ray computed tomography system based on a CdTe energy-resolved photon counting detector and a polymethylmethacrylate phantom containing iodine, gadolinium and blood. To acquire K-edge images, the lower energy thresholds were fixed at the K-edge absorption energies of iodine and gadolinium and the energy window widths were increased from 1 to 25 bins. The energy weighting factors optimized for iodine, gadolinium and blood were calculated from 5, 10, 15, 19 and 33 energy bins, and the calculated energy weighting factors were assigned to the images acquired at each energy bin. In K-edge images, the contrast and COV decreased when the energy window width was increased. The CNR increased as a function of the energy window width and decreased above a specific energy window width. When the number of energy bins was increased from 5 to 15, the contrast increased in the projection-based energy weighting images. There is little difference in the contrast when the number of energy bin is
International Nuclear Information System (INIS)
In nuclear medicine, and particularly in internal radiotherapy, one of the major challenges is to determine the dose for each patient. Current internal dosimetric estimations are based on the Medical Internal Radiation Dose (MIRD) formalism and use standard mathematical models. These standard models are often far from a given patient's morphology and do not allow patient-specific dosimetry. Therefore, the Laboratory of Internal Dose Assessment of IRSN has developed an innovative software package named OEDIPE, a French acronym for 'tool for personalised internal dose assessment', in collaboration with the French Institute of Health and Medical Research of Nantes (U892). This software is a user-friendly graphical interface that takes specific patient parameters into account. Indeed, it allows the creation of voxel phantoms based on the patient's anatomical image and directly prepares the MCNPX input file for dose calculations. Radionuclides can be distributed at the organ and voxel scales, using cumulated activities based on tomographic images. Absorbed dose calculation can also be performed at these scales, notably allowing the visualisation of isodose curves superimposed on anatomical images. The software can also take into account the temporal distribution of radiopharmaceuticals within the body. OEDIPE has already been validated by comparison both with other codes and with experimental data. The study presented here is a first approach in internal radiotherapy: it deals with a personalised dose calculation carried out for the treatment of hepatocellular carcinoma. Thus, as a result of its flexibility in accommodating complex geometry, the method developed not only represents a diagnostic tool, but also opens up exciting new possibilities, such as the optimisation of protocols in nuclear medicine and in particular in targeted radiotherapy. (author)
Educational software on the ozone layer Depletion
Psomiadis, Ploutarchos; Chalkidis, Anthimos; Saridaki, Anna; Tampakis, Constantine (Konstantinos); Skordoulis, Constantine
2007-01-01
This paper describes the design and the formative evaluation of educational software concerning the ‘Depletion of the Ozone Layer’ designed for the students of the Faculty of Primary Education (pre-service teachers) of the National and Kapodistrian University of Athens. The selection of the topic was based on: i) environmental criteria (importance of the phenomenon, complexity of the phenomenon), ii) societal criteria (local interest, human activities effects), iii) pedagogical cr...
International Nuclear Information System (INIS)
Technical developments in radiotherapy (RT) have created a need for systematic quality assurance (QA) to ensure that clinical institutions deliver prescribed radiation doses consistent with the requirements of clinical protocols. For QA, an ideal dose verification system should be independent of the treatment-planning system (TPS). This paper describes the development and reproducibility evaluation of a Monte Carlo (MC)-based standard LINAC model as a preliminary requirement for independent verification of dose distributions. The BEAMnrc MC code is used for characterization of the 6-, 10- and 15-MV photon beams for a wide range of field sizes. The modeling of the LINAC head components is based on the specifications provided by the manufacturer. MC dose distributions are tuned to match Varian Golden Beam Data (GBD). For reproducibility evaluation, calculated beam data is compared with beam data measured at individual institutions. For all energies and field sizes, the MC and GBD agreed to within 1.0% for percentage depth doses (PDDs), 1.5% for beam profiles and 1.2% for total scatter factors (Scps). Reproducibility evaluation showed that the maximum average local differences were 1.3% and 2.5% for PDDs and beam profiles, respectively. MC and institutions' mean Scps agreed to within 2.0%. An MC-based standard LINAC model developed to independently verify dose distributions for QA of multi-institutional clinical trials and routine clinical practice has proven to be highly accurate and reproducible and can thus help ensure that prescribed doses delivered are consistent with the requirements of clinical protocols. (author)
Boswell, Melissa; Detwiler, Jason A; Finnerty, Padraic; Henning, Reyco; Gehman, Victor M; Johnson, Rob A; Jordan, David V; Kazkaz, Kareem; Knapp, Markus; Kröninger, Kevin; Lenz, Daniel; Leviner, Lance; Liu, Jing; Liu, Xiang; MacMullin, Sean; Marino, Michael G; Mokhtarani, Akbar; Pandola, Luciano; Schubert, Alexis G; Schubert, Jens; Tomei, Claudia; Volynets, Oleksandr
2010-01-01
We describe a physics simulation software framework, MAGE, that is based on the GEANT4 simulation toolkit. MAGE is used to simulate the response of ultra-low radioactive background radiation detectors to ionizing radiation, specifically the MAJORANA and GERDA neutrinoless double-beta decay experiments. MAJORANA and GERDA use high-purity germanium detectors to search for the neutrinoless double-beta decay of 76Ge, and MAGE is jointly developed between these two collaborations. The MAGE framework contains the geometry models of common objects, prototypes, test stands, and the actual experiments. It also implements customized event generators, GEANT4 physics lists, and output formats. All of these features are available as class libraries that are typically compiled into a single executable. The user selects the particular experimental setup implementation at run-time via macros. The combination of all these common classes into one framework reduces duplication of efforts, eases comparison between simulated data...
Bromberger, B; Brandis, M; Dangendorf, V; Goldberg, M B; Kaufmann, F; Mor, I; Nolte, R; Schmiedel, M; Tittelmeier, K; Vartsky, D; Wershofen, H
2012-01-01
An air cargo inspection system combining two nuclear reaction based techniques, namely Fast-Neutron Resonance Radiography and Dual-Discrete-Energy Gamma Radiography is currently being developed. This system is expected to allow detection of standard and improvised explosives as well as special nuclear materials. An important aspect for the applicability of nuclear techniques in an airport inspection facility is the inventory and lifetimes of radioactive isotopes produced by the neutron and gamma radiation inside the cargo, as well as the dose delivered by these isotopes to people in contact with the cargo during and following the interrogation procedure. Using MCNPX and CINDER90 we have calculated the activation levels for several typical inspection scenarios. One example is the activation of various metal samples embedded in a cotton-filled container. To validate the simulation results, a benchmark experiment was performed, in which metal samples were activated by fast-neutrons in a water-filled glass jar. T...
Leonardo Rossi
Carlo Caso (1940 - 2007) Our friend and colleague Carlo Caso passed away on July 7th, after several months of courageous fight against cancer. Carlo spent most of his scientific career at CERN, taking an active part in the experimental programme of the laboratory. His long and fruitful involvement in particle physics started in the sixties, in the Genoa group led by G. Tomasini. He then carried out several experiments using the CERN liquid hydrogen bubble chambers (first the 2000HBC and later BEBC) to study various facets of the production and decay of meson and baryon resonances. He later formed his own group and joined the NA27 Collaboration to exploit the EHS Spectrometer with a rapid cycling bubble chamber as vertex detector. Amongst their many achievements, they were the first to measure, with excellent precision, the lifetime of the charmed D mesons. At the start of the LEP era, Carlo and his group moved to the DELPHI experiment, participating in the construction and running of the HPC electromagnetic c...
Energy Technology Data Exchange (ETDEWEB)
Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-16
This is a powerpoint which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of Monte Carlo. Welcome to Los Alamos, the birthplace of “Monte Carlo” for computational physics. Stanislaw Ulam, John von Neumann, and Nicholas Metropolis are credited as the founders of modern Monte Carlo methods. The name “Monte Carlo” was chosen in reference to the Monte Carlo Casino in Monaco (purportedly a place where Ulam’s uncle went to gamble). The central idea (for us) – to use computer-generated “random” numbers to determine expected values or estimate equation solutions – has since spread to many fields. "The first thoughts and attempts I made to practice [the Monte Carlo Method] were suggested by a question which occurred to me in 1946 as I was convalescing from an illness and playing solitaires. The question was what are the chances that a Canfield solitaire laid out with 52 cards will come out successfully? After spending a lot of time trying to estimate them by pure combinatorial calculations, I wondered whether a more practical method than “abstract thinking” might not be to lay it out say one hundred times and simply observe and count the number of successful plays... Later [in 1946], I described the idea to John von Neumann, and we began to plan actual calculations." - Stanislaw Ulam.
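Ulam's insight — estimate a probability by playing the game many times and counting successes — is the essence of Monte Carlo. As a toy stand-in for the solitaire question (illustrative only, not Ulam's actual calculation), the sketch below estimates by repeated random trials the probability that a shuffled 52-card deck leaves no card in its original position; the law of large numbers drives the sample frequency toward the true value, which for large decks approaches 1/e.

```python
import math
import random

def no_fixed_point(deck_size, rng):
    """One 'game': shuffle a deck and check that no card lands in its original slot."""
    deck = list(range(deck_size))
    rng.shuffle(deck)
    return all(card != pos for pos, card in enumerate(deck))

def estimate(num_trials=20_000, deck_size=52, seed=1):
    """Sample frequency of the event over many independent trials."""
    rng = random.Random(seed)
    wins = sum(no_fixed_point(deck_size, rng) for _ in range(num_trials))
    return wins / num_trials

p = estimate()
exact = math.exp(-1)  # large-deck limit of the true probability, ~0.3679
```

With 20,000 trials the sample frequency typically lands within a few tenths of a percent of 1/e, exactly the "lay it out one hundred times and count" procedure scaled up.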
Discrete diffusion Monte Carlo for frequency-dependent radiative transfer
Energy Technology Data Exchange (ETDEWEB)
Densmore, Jeffrey D [Los Alamos National Laboratory]; Thompson, Kelly G [Los Alamos National Laboratory]; Urbatsch, Todd J [Los Alamos National Laboratory]
2010-11-17
Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations. In this paper, we develop an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold. Above this threshold we employ standard Monte Carlo. With a frequency-dependent test problem, we confirm the increased efficiency of our new DDMC technique.
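The efficiency idea behind DDMC — replacing many small Monte Carlo scattering steps in diffusive regions with fewer, larger diffusion-like steps — can be illustrated with a deliberately simplified toy (an analogy only, not the actual DDMC equations): for a symmetric 1-D random walk between absorbing boundaries, the boundary-exit probability is unchanged when fine steps are aggregated into coarse ones, while the work per particle drops sharply.

```python
import random

def exit_right_prob(x0, step, n_walkers, seed):
    """Fraction of symmetric random walkers starting at x0 in (0, 1) that are
    absorbed at the right boundary before the left one."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_walkers):
        x = x0
        while 0.0 < x < 1.0:
            x += step if rng.random() < 0.5 else -step
        if x >= 1.0:
            hits += 1
    return hits / n_walkers

# Fine steps (analog-MC-like) and coarse steps (diffusion-like) give the same
# answer, ~x0 by symmetry, but the coarse walk needs far fewer steps per walker.
p_fine = exit_right_prob(0.3, step=0.02, n_walkers=5_000, seed=2)
p_coarse = exit_right_prob(0.3, step=0.1, n_walkers=20_000, seed=3)
```

Both estimates converge to 0.3; the coarse walk takes roughly 25 times fewer steps per particle here, mirroring DDMC's gain in optically thick, diffusive regions.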
Alternative Monte Carlo Approach for General Global Illumination
Institute of Scientific and Technical Information of China (English)
徐庆; 李朋; 徐源; 孙济洲
2004-01-01
An alternative Monte Carlo strategy for the computation of the global illumination problem was presented. The proposed approach provides a new and optimal way of solving Monte Carlo global illumination, based on the zero-variance importance sampling procedure. A new importance-driven Monte Carlo global illumination algorithm was developed and implemented in the framework of the new computing scheme. Results obtained by rendering test scenes show that the new framework and the derived algorithm are effective and promising.
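The zero-variance idea the abstract builds on: if samples are drawn from a density exactly proportional to the integrand, every weight f(x)/p(x) is constant, and the estimator's variance vanishes. A minimal sketch with a toy integrand (not the paper's rendering code):

```python
import random

def uniform_mc(f, n, rng):
    """Plain Monte Carlo estimate of the integral of f over [0, 1]."""
    return sum(f(rng.random()) for _ in range(n)) / n

def importance_mc(f, sample_p, pdf_p, n, rng):
    """Importance-sampled estimate: draw x ~ p, average the weights f(x)/p(x)."""
    total = 0.0
    for _ in range(n):
        x = sample_p(rng)
        total += f(x) / pdf_p(x)
    return total / n

f = lambda x: 3.0 * x * x                            # integral of 3x^2 over [0,1] is 1
pdf_p = lambda x: 3.0 * x * x                        # p chosen proportional to f ...
sample_p = lambda rng: rng.random() ** (1.0 / 3.0)   # ... drawn via its inverse CDF

rng = random.Random(0)
est_plain = uniform_mc(f, 1000, rng)                 # fluctuates around 1
est_zero_var = importance_mc(f, sample_p, pdf_p, 1000, rng)  # every weight is exactly 1.0
```

In rendering, the integrand (the light transport kernel) is not known in closed form, so the practical goal is a sampling density that approximates it — the closer the approximation, the lower the variance.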
Combinatorial nuclear level density by a Monte Carlo method
Cerf, N.
1993-01-01
We present a new combinatorial method for the calculation of the nuclear level density. It is based on a Monte Carlo technique, in order to avoid a direct counting procedure which is generally impracticable for high-A nuclei. The Monte Carlo simulation, making use of the Metropolis sampling scheme, allows a computationally fast estimate of the level density for many fermion systems in large shell model spaces. We emphasize the advantages of this Monte Carlo approach, particularly concerning t...
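A minimal sketch of the Metropolis sampling scheme the abstract refers to, applied to a toy ladder of equally spaced levels with Boltzmann weights rather than a shell-model space (the setup and function names are illustrative assumptions):

```python
import math
import random

def metropolis_mean_energy(beta=1.0, n_levels=30, n_steps=100_000, seed=3):
    """Metropolis walk over discrete levels k = 0..n_levels-1 with energies E_k = k
    and weights exp(-beta * E_k); returns the sampled thermal mean energy."""
    rng = random.Random(seed)
    k = 0
    total = 0
    for _ in range(n_steps):
        proposal = k + rng.choice((-1, 1))  # symmetric nearest-neighbour proposal
        if 0 <= proposal < n_levels and rng.random() < math.exp(-beta * (proposal - k)):
            k = proposal                    # accept with probability min(1, e^{-beta dE})
        total += k
    return total / n_steps

mean_e = metropolis_mean_energy()
exact = 1.0 / (math.e - 1.0)  # geometric-distribution mean; truncation is negligible
```

The same accept/reject rule works when "k" is replaced by a many-fermion configuration in a large shell-model space, which is where direct state counting becomes impracticable and the Monte Carlo estimate pays off.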
Institute of Scientific and Technical Information of China (English)
徐小波; 张鹤鸣; 胡辉勇
2011-01-01
The SiGe heterojunction bipolar transistor (HBT) on thin-film SOI is successfully integrated with SOI CMOS by means of a "folded collector". This paper deals with the collector depletion charge and capacitance of this structure. Based on our previous work and on the actual operating conditions of the device, the depletion charge and capacitance models are extended and optimized. The results show that the depletion-charge model is smoother, and that the capacitance model, which accounts for the different current-flow areas, consists of the vertical and horizontal depletion capacitances in series. In the fully depleted mode, the SOI device therefore exhibits a smaller collector depletion capacitance than a conventional bulk HBT, and hence a larger forward Early voltage. At the bias point where operation switches from the vertical to the horizontal mode, the trends of the depletion charge and capacitance change. The charge and capacitance vary with increasing reverse collector-base bias. This collector depletion charge and capacitance model provides a valuable reference for the design and simulation of core bipolar-device parameters, such as Early voltage and transit frequency, in the latest 0.13 μm millimeter-wave SOI BiCMOS technology.
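The series-combination rule underlying the capacitance model is elementary and worth making explicit (a generic sketch; the parameter names are illustrative, not the paper's notation):

```python
def series_capacitance(c_vertical, c_horizontal):
    """Total depletion capacitance of the vertical and horizontal junction
    capacitances in series: 1/C = 1/Cv + 1/Ch (any consistent units)."""
    return 1.0 / (1.0 / c_vertical + 1.0 / c_horizontal)

# The series value is always smaller than either component, consistent with the
# paper's observation that the fully depleted SOI device shows a smaller
# collector depletion capacitance than a bulk HBT.
c_total = series_capacitance(2.0, 2.0)  # -> 1.0
```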
Overview of the MCU Monte Carlo software package
International Nuclear Information System (INIS)
Highlights: • MCU is the Monte Carlo code for particle transport in 3D systems with depletion. • Criticality and fixed source problems are solved using pure point-wise approximation. • MCU is parallelized with MPI in three different modes. • MCU has coolant, fuel and xenon feedback for VVER calculations. • MCU is verified for reactors with thermal, intermediate and fast neutron spectrum. - Abstract: MCU (Monte Carlo Universal) is a project on development and practical use of a universal computer code for simulation of particle transport (neutrons, photons, electrons, positrons) in three-dimensional systems by means of the Monte Carlo method. This paper provides the information on the current state of the project. The developed libraries of constants are briefly described, and the potentialities of the MCU-5 package modules and the executable codes compiled from them are characterized. Examples of important problems of reactor physics solved with the code are presented
Monte Carlo dose computation for IMRT optimization*
Laub, W.; Alber, M.; Birkner, M.; Nüsslin, F.
2000-07-01
A method which combines the accuracy of Monte Carlo dose calculation with a finite size pencil-beam based intensity modulation optimization is presented. The pencil-beam algorithm is employed to compute the fluence element updates for a converging sequence of Monte Carlo dose distributions. The combination is shown to improve results over the pencil-beam based optimization in a lung tumour case and a head and neck case. Inhomogeneity effects like a broader penumbra and dose build-up regions can be compensated for by intensity modulation.
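The structure of such a combination — optimize with the fast pencil-beam model, correct the target by the Monte Carlo/pencil-beam discrepancy, and iterate — can be sketched as a scalar fixed-point toy (the linear "dose engines" below are stand-ins, not the paper's algorithms):

```python
def hybrid_optimize(target, accurate_dose, b_approx=1.0, n_iter=25):
    """Fixed-point iteration: re-optimize a fluence weight w under a cheap linear
    model (dose ~ b_approx * w) against a target corrected by the accurate
    model's discrepancy.  At convergence, accurate_dose(w) equals the target."""
    w = 0.0
    for _ in range(n_iter):
        corrected = target - (accurate_dose(w) - b_approx * w)
        w = corrected / b_approx  # exact 'optimum' of the linear cheap model
    return w

accurate = lambda w: 1.2 * w  # stand-in for the expensive Monte Carlo dose engine
w = hybrid_optimize(10.0, accurate)
# the accurate dose now matches the prescription: accurate(w) ~ 10
```

The iteration converges whenever the two models are close enough that the correction is a contraction, which is the practical premise of using pencil-beam updates between successive Monte Carlo dose calculations.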
International Nuclear Information System (INIS)
Digital zenith camera systems (DZCS) are dedicated astronomical-geodetic measurement systems for the observation of the direction of the plumb line. A DZCS key component is a pair of tilt meters for the determination of the instrumental tilt with respect to the plumb line. Highest accuracy (i.e., 0.1 arc-seconds or better) is achieved in practice through observation with precision tilt meters in opposite faces (180° instrumental rotation), and application of rigorous tilt reduction models. A novel concept proposes the development of a hexapod (Stewart platform)-based DZCS. However, hexapod-based total rotations are limited to about 30°–60° in azimuth (equivalent to ±15° to ±30° yaw rotation), which raises the question of the impact of the rotation angle between the two faces on the accuracy of the tilt measurement. The goal of the present study is the investigation of the expected accuracy of tilt measurements to be carried out on future hexapod-based DZCS, with special focus placed on the role of the limited rotation angle. A Monte-Carlo simulation study is carried out in order to derive accuracy estimates for the tilt determination as a function of several input parameters, and the results are validated against analytical error propagation. As the main result of the study, limitation of the instrumental rotation to 60° (30°) deteriorates the tilt accuracy by a factor of about 2 (4) compared to a 180° rotation between the faces. Nonetheless, a tilt accuracy at the 0.1 arc-second level is expected when the rotation is at least 45°, and 0.05 arc-second (about 0.25 microradian) accurate tilt meters are deployed. As such, a hexapod-based DZCS can be expected to allow sufficiently accurate determination of the instrumental tilt. This provides supporting evidence for the feasibility of such a novel instrumentation. The outcomes of our study are not only relevant to the field of DZCS, but also to all other types of instruments where the instrumental tilt
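The reported degradation factors (about 2 at 60°, about 4 at 30°) follow from the geometry of differencing two readings separated by a rotation θ: the tilt signal enters through I − R(θ), whose singular values are 2 sin(θ/2), so the error grows as 1/sin(θ/2). A Monte-Carlo sketch of this error propagation, using a deliberately simplified two-face measurement model rather than the study's instrument model:

```python
import math
import random

def tilt_error_std(theta_deg, sigma=1.0, n_trials=20_000, seed=4):
    """MC estimate of the error in recovering a 2-D tilt vector from tilt-meter
    readings in two faces separated by a rotation theta.  The unknown constant
    sensor bias cancels in the difference d = r1 - r2 = (I - R(theta)) t + noise;
    each reading component carries independent Gaussian noise of std sigma.
    The true tilt is set to zero, so the solved-for tilt is pure error."""
    th = math.radians(theta_deg)
    m11, m12 = 1.0 - math.cos(th), math.sin(th)   # M = I - R(theta)
    m21, m22 = -math.sin(th), 1.0 - math.cos(th)
    det = m11 * m22 - m12 * m21                   # = 2 - 2 cos(theta)
    rng = random.Random(seed)
    errs = []
    for _ in range(n_trials):
        d1 = rng.gauss(0.0, sigma) - rng.gauss(0.0, sigma)
        d2 = rng.gauss(0.0, sigma) - rng.gauss(0.0, sigma)
        errs.append((m22 * d1 - m12 * d2) / det)  # x-component of M^{-1} d
    mean = sum(errs) / n_trials
    return math.sqrt(sum((e - mean) ** 2 for e in errs) / n_trials)

factor = tilt_error_std(60.0) / tilt_error_std(180.0)  # theory: sin(90°)/sin(30°) = 2
```

The same ratio evaluated at 30° gives sin(90°)/sin(15°) ≈ 3.9, matching the "factor of about 4" quoted above; the simulated factors can also be checked against this analytical error propagation.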
Altman, Michael B; Jin, Jian-Yue; Kim, Sangroh; Wen, Ning; Liu, Dezhi; Siddiqui, M Salim; Ajlouni, Munther I; Movsas, Benjamin; Chetty, Indrin J
2012-01-01
Current commercially available planning systems with Monte Carlo (MC)-based final dose calculation in IMRT planning employ pencil-beam (PB) algorithms in the optimization process. Consequently, dose coverage for SBRT lung plans can feature cold spots at the interface between lung and tumor tissue. For lung wall (LW)-seated tumors, there can also be hot spots within nearby normal organs (for example, the ribs). This study evaluated two practical approaches to limiting cold spots within the target and reducing high doses to surrounding normal organs in MC-based IMRT planning of LW-seated tumors. The first is "iterative reoptimization": the MC calculation (with PB-based optimization) is performed initially, the resultant cold spot is contoured and used as a simultaneous boost volume, and the MC-based dose is then recomputed. The second technique uses noncoplanar beam angles with a limited path through lung tissue. Both techniques were evaluated against a conventional coplanar beam approach with a single MC calculation. In all techniques the prescription dose was normalized to cover 95% of the PTV. Fifteen SBRT lung cases with LW-seated tumors were planned. The results from iterative reoptimization showed that the conformity index (CI) and/or PTV dose uniformity (UPTV) improved in 12/15 plans, by an average of 13% and 24%, respectively. Plans that did not improve had PTVs near the skin or trachea, and/or very small lung involvement. The maximum dose to a 1 cc volume (D1cc) of surrounding OARs decreased in 14/15 plans (average 10%). Using noncoplanar beams showed an average improvement of 7% in 10/15 cases and 11% in 5/15 cases for CI and UPTV, respectively. The D1cc was reduced by an average of 6% in 10/15 cases for surrounding OARs. The choice of treatment planning technique did not change lung V5 in a statistically significant way. The results showed that the proposed practical approaches enhance dose conformity in MC-based IMRT planning of lung tumors treated with SBRT, improving target
Health and environmental impact of depleted uranium
International Nuclear Information System (INIS)
Depleted uranium (DU) is 'nuclear waste' produced by the enrichment process; it consists mostly of 238U and is depleted in the fissionable isotope 235U compared with natural uranium (NU). Depleted uranium has about 60% of the radioactivity of natural uranium; the two are identical in terms of chemical toxicity. Uranium's high density gives depleted uranium shells increased range and penetrative power. This density, combined with uranium's pyrophoric nature, results in a high-energy kinetic weapon that can punch and burn through armour plating. Striking a hard target, depleted uranium munitions create extremely high temperatures. The uranium immediately burns and vaporizes into an aerosol, which is easily diffused in the environment. People can inhale the micro-particles of uranium oxide in an aerosol and absorb them, mainly through the lungs. Depleted uranium is toxic both radiologically and chemically, and a possible synergistic effect of the two kinds of toxicity has also been pointed out. Animal and cellular studies have reported carcinogenic, neurotoxic, immunotoxic and other effects of depleted uranium, including damage to the reproductive system and foetus. In addition, health effects of micro- and nano-particles similar in size to the depleted uranium aerosols produced by uranium weapons have been reported. Aerosolized DU dust can easily spread over the battlefield and drift into civilian areas, sometimes even crossing international borders. Therefore, not only military personnel but also civilians can be exposed, and the contamination continues after the cessation of hostilities. Taking these aspects into account, DU weapons are illegal under international humanitarian law and are considered one of the inhumane weapons of 'indiscriminate destruction'. The international society is now discussing the prohibition of DU weapons based on the 'precautionary principle'. The 1991 Gulf War is reportedly the first
Choi, D; Perrin, M; Hoffmann, S; Chang, A E; Ratanatharathorn, V; Uberti, J; McDonagh, K T; Mulé, J J
1998-11-01
We are investigating the use of tumor-pulsed dendritic cell (DC)-based vaccines in the treatment of patients with advanced cancer. In the current study, we evaluated the feasibility of obtaining both CD34+ hematopoietic stem/ progenitor cells (HSCs) and functional DCs from the same leukapheresis collection in adequate numbers for both peripheral blood stem cell transplantation (PBSCT) and immunization purposes, respectively. Leukapheresis collections of mobilized peripheral blood mononuclear cells (PBMCs) were obtained from normal donors receiving granulocyte colony-stimulating factor (G-CSF) (for allogeneic PBSCT) and from intermediate grade non-Hodgkin's lymphoma or multiple myeloma patients receiving cyclophosphamide plus G-CSF (for autologous PBSCT). High enrichment of CD34+ HSCs was obtained using an immunomagnetic bead cell separation device. After separation, the negative fraction of mobilized PBMCs from normal donors and cancer patients contained undetectable levels of CD34+ HSCs by flow cytometry. This fraction of cells was then subjected to plastic adherence, and the adherent cells were cultured for 7 days in GM-CSF (100 ng/ml) and interleukin 4 (50 ng/ml) followed by an additional 7 days in GM-CSF, interleukin 4, and tumor necrosis factor alpha (10 ng/ml) to generate DCs. Harvested DCs represented yields of 4.1+/-1.4 and 5.8+/-5.4% of the initial cells plated from the CD34+ cell-depleted mobilized PBMCs of normal donors and cancer patients, respectively, and displayed a high level expression of CD80, CD86, HLA-DR, and CD11c but not CD14. This phenotypic profile was similar to that of DCs derived from non-CD34+ cell-depleted mobilized PBMCs. DCs generated from CD34+ cell-depleted mobilized PBMCs elicited potent antitetanus as well as primary allogeneic T-cell proliferative responses in vitro, which were equivalent to DCs derived from non-CD34+ cell-depleted mobilized PBMCs. Collectively, these results demonstrate the feasibility of obtaining both DCs and
International Nuclear Information System (INIS)
Purpose: An in-house Monte Carlo based treatment planning system (MC TPS) has been developed for modulated electron radiation therapy (MERT). Our preliminary MERT planning experience called for a more user-friendly graphical user interface. The current work aimed to design graphical windows and tools to facilitate the contouring and planning process. Methods: Our in-house GUI MC TPS is built on a set of EGS4 user codes, namely MCPLAN and MCBEAM, in addition to an in-house optimization code named MCOPTIM. The patient virtual phantom is constructed from tomographic images in DICOM format exported from clinical treatment planning systems (TPS). Treatment target volumes and critical structures are usually contoured on the clinical TPS and then sent as a structure set file. In our GUI program we developed a visualization tool that allows the planner to view the DICOM images and delineate the various structures. We implemented an option in our code for automatic contouring of the patient body and lungs. We also created an interface window displaying a three-dimensional representation of the target together with a graphical representation of the treatment beams. Results: The new GUI features helped streamline the planning process. The implemented contouring option eliminated the need for performing this step on the clinical TPS. The auto-detection option for contouring the outer patient body and lungs was tested on patient CTs and shown to be as accurate as that of the clinical TPS. The three-dimensional representation of the target and the beams allows better selection of the gantry, collimator and couch angles. Conclusion: An in-house GUI program has been developed for more efficient MERT planning. The aiding tools implemented in the program save time and give better control of the planning process.
Edimo, P; Kwato Njock, M G; Vynckier, S
2013-11-01
The purpose of the present study is to perform a clinical validation of a new commercial Monte Carlo (MC) based treatment planning system (TPS) for electron beams, i.e. the XiO 4.60 electron MC (XiO eMC). Firstly, MC models for electron beams (4, 8, 12 and 18 MeV) have been simulated using the BEAMnrc user code and validated by measurements in a homogeneous water phantom. Secondly, these BEAMnrc models have been set as the reference tool to evaluate the ability of XiO eMC to reproduce dose perturbations in the heterogeneous phantom. In the homogeneous phantom calculations, differences between MC computations (BEAMnrc, XiO eMC) and measurements are less than 2% in the homogeneous dose regions and less than a 1 mm shift in the high dose gradient regions. As for the heterogeneous phantom, the accuracy of XiO eMC has been benchmarked against the predicted BEAMnrc models. In the lung tissue, the overall agreement between the two schemes lies under 2.5% for most of the tested dose distributions at 8, 12 and 18 MeV, and is better than at 4 MeV. In the non-lung tissue, good agreement has been found between BEAMnrc simulation and XiO eMC computation for 8, 12 and 18 MeV. Results are worse in the case of 4 MeV calculations (discrepancies ≈ 4%). XiO eMC can predict dose perturbations induced by high-density heterogeneities for 8, 12 and 18 MeV. However, significant deviations found in the case of 4 MeV demonstrate that caution is necessary in using XiO eMC at lower electron energies. PMID:23010450
Setiani, Tia Dwi; Suprijadi, Haryanto, Freddy
2016-03-01
Monte Carlo (MC) is a powerful technique for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the MC-based codes widely used for radiographic image simulation is MC-GPU, a code developed by Andreu Badal. This study investigated the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of radiographic images and a comparison of the image quality resulting from simulation on the GPU and CPU are evaluated in this paper. The simulations were run serially on a CPU and on two GPUs with 384 cores and 2304 cores. In the GPU simulations, each core tracks one photon, so a large number of photons are calculated simultaneously. Results show that simulations on the GPU were significantly faster than on the CPU. Simulations on the 2304-core GPU were about 64-114 times faster than on the CPU, while simulations on the 384-core GPU were about 20-31 times faster than on a single CPU core. Another result shows that optimum image quality was obtained with at least 10^8 histories and photon energies from 60 keV to 90 keV. Analyzed statistically, the quality of the GPU and CPU images is essentially the same.
Ku, B.; Nam, M.
2012-12-01
Neutron logging has been widely used to estimate neutron porosity for evaluating formation properties in the oil industry. More recently, neutron logging has been highlighted for monitoring the behavior of CO2 injected into reservoirs for geological CO2 sequestration. For a better understanding of neutron log interpretation, the Monte Carlo N-Particle (MCNP) code is used to illustrate the response of a neutron tool. In order to obtain calibration curves for the neutron tool, neutron responses are simulated in water-filled limestone, sandstone and dolomite formations of various porosities. Since the salinities (NaCl concentrations) of the borehole fluid and formation water are important factors for estimating formation porosity, we first compute and analyze neutron responses for brine-filled formations with different porosities. Further, we consider changes in the brine saturation of a reservoir due to hydrocarbon production or geological CO2 sequestration to simulate the corresponding neutron logging data. As gas saturation decreases, the measured neutron porosity confirms gas effects on neutron logging, which is attributed to the fact that gas has a slightly smaller hydrogen content than brine. Meanwhile, an increase in CO2 saturation due to CO2 injection reduces the measured neutron porosity, offering a way to estimate the CO2 saturation, since the injected CO2 substitutes for the brine. A further analysis of this reduction gives a strategy for estimating CO2 saturation based on time-lapse neutron logging. This strategy can help monitor not only geological CO2 sequestration but also CO2 flooding for enhanced oil recovery. Acknowledgements: This work was supported by the Energy Efficiency & Resources of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant funded by the Korea government Ministry of Knowledge Economy (No. 2012T100201588). Myung Jin Nam was partially supported by the National Research Foundation of Korea (NRF) grant funded by the Korea
Montanari, Davide; Scolari, Enrica; Silvestri, Chiara; Jiang Graves, Yan; Yan, Hao; Cervino, Laura; Rice, Roger; Jiang, Steve B.; Jia, Xun
2014-03-01
Cone beam CT (CBCT) has been widely used for patient setup in image-guided radiation therapy (IGRT). Radiation dose from CBCT scans has become a clinical concern. The purposes of this study are (1) to commission a graphics processing unit (GPU)-based Monte Carlo (MC) dose calculation package gCTD for Varian On-Board Imaging (OBI) system and test the calculation accuracy, and (2) to quantitatively evaluate CBCT dose from the OBI system in typical IGRT scan protocols. We first conducted dose measurements in a water phantom. X-ray source model parameters used in gCTD are obtained through a commissioning process. gCTD accuracy is demonstrated by comparing calculations with measurements in water and in CTDI phantoms. Twenty-five brain cancer patients are used to study dose in a standard-dose head protocol, and 25 prostate cancer patients are used to study dose in pelvis protocol and pelvis spotlight protocol. Mean dose to each organ is calculated. Mean dose to 2% voxels that have the highest dose is also computed to quantify the maximum dose. It is found that the mean dose value to an organ varies largely among patients. Moreover, dose distribution is highly non-homogeneous inside an organ. The maximum dose is found to be 1-3 times higher than the mean dose depending on the organ, and is up to eight times higher for the entire body due to the very high dose region in bony structures. High computational efficiency has also been observed in our studies, such that MC dose calculation time is less than 5 min for a typical case.
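The two summary statistics used above — organ mean dose and the mean over the hottest 2% of voxels as a maximum-dose surrogate — are straightforward to compute from a voxel dose list. A small illustrative sketch with toy numbers (not patient data):

```python
def dose_stats(doses, top_fraction=0.02):
    """Mean dose and mean of the hottest `top_fraction` of voxels (a 'D2%'-style
    maximum-dose surrogate)."""
    ranked = sorted(doses, reverse=True)
    n_top = max(1, int(len(ranked) * top_fraction))
    mean = sum(ranked) / len(ranked)
    top_mean = sum(ranked[:n_top]) / n_top
    return mean, top_mean

# Skewed toy distribution: most voxels low-dose, a few 'bony' voxels very high,
# mimicking the non-homogeneous organ doses described above.
doses = [1.0] * 95 + [2.0, 3.0, 5.0, 8.0, 10.0]
mean, top2 = dose_stats(doses)  # mean = 1.23, top-2% mean = 9.0
```

The large ratio between the top-2% mean and the organ mean in this toy (about 7x) illustrates why reporting mean dose alone can badly understate the maximum dose in bony structures.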
Monte Carlo and nonlinearities
Dauchet, Jérémi; Blanco, Stéphane; Caliot, Cyril; Charon, Julien; Coustet, Christophe; Hafi, Mouna El; Eymet, Vincent; Farges, Olivier; Forest, Vincent; Fournier, Richard; Galtier, Mathieu; Gautrais, Jacques; Khuong, Anaïs; Pelissier, Lionel; Piaud, Benjamin; Roger, Maxime; Terrée, Guillaume; Weitz, Sebastian
2016-01-01
The Monte Carlo method is widely used to numerically predict systems behaviour. However, its powerful incremental design assumes a strong premise which has severely limited application so far: the estimation process must combine linearly over dimensions. Here we show that this premise can be alleviated by projecting nonlinearities on a polynomial basis and increasing the configuration-space dimension. Considering phytoplankton growth in light-limited environments, radiative transfer in planetary atmospheres, electromagnetic scattering by particles and concentrated-solar-power-plant productions, we prove the real world usability of this advance on four test-cases that were so far regarded as impracticable by Monte Carlo approaches. We also illustrate an outstanding feature of our method when applied to sharp problems with interacting particles: handling rare events is now straightforward. Overall, our extension preserves the features that made the method popular: addressing nonlinearities does not compromise o...
Energy Technology Data Exchange (ETDEWEB)
Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-16
This is a powerpoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
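Of the sampling techniques listed, inverse-transform sampling is the simplest to demonstrate: apply the inverse CDF to a uniform variate. A minimal sketch for the exponential distribution:

```python
import math
import random

def sample_exponential(lam, rng):
    """Inverse-transform sampling: if U ~ Uniform(0,1), then
    X = -ln(1 - U) / lam has CDF 1 - exp(-lam * x), i.e. Exponential(lam)."""
    u = rng.random()
    return -math.log(1.0 - u) / lam

rng = random.Random(5)
samples = [sample_exponential(2.0, rng) for _ in range(50_000)]
mean = sum(samples) / len(samples)  # should approach 1/lam = 0.5
```

The same recipe drives free-flight distance sampling in particle transport, where path lengths between collisions are exponentially distributed in the total cross section.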
2009-01-01
On 7 April CERN will be holding a symposium to mark the 75th birthday of Carlo Rubbia, who shared the 1984 Nobel Prize for Physics with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. Following a presentation by Rolf Heuer, lectures will be given by eminent speakers on areas of science to which Carlo Rubbia has made decisive contributions. Michel Spiro, Director of the French National Institute of Nuclear and Particle Physics (IN2P3) of the CNRS, Lyn Evans, sLHC Project Leader, and Alan Astbury of the TRIUMF Laboratory will talk about the physics of the weak interaction and the discovery of the W and Z bosons. Former CERN Director-General Herwig Schopper will lecture on CERN’s accelerators from LEP to the LHC. Giovanni Bignami, former President of the Italian Space Agency and Professor at the IUSS School for Advanced Studies in Pavia will speak about his work with Carlo Rubbia. Finally, Hans Joachim Sch...
2009-01-01
On 7 April CERN will be holding a symposium to mark the 75th birthday of Carlo Rubbia, who shared the 1984 Nobel Prize for Physics with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. Following a presentation by Rolf Heuer, lectures will be given by eminent speakers on areas of science to which Carlo Rubbia has made decisive contributions. Michel Spiro, Director of the French National Institute of Nuclear and Particle Physics (IN2P3) of the CNRS, Lyn Evans, sLHC Project Leader, and Alan Astbury of the TRIUMF Laboratory will talk about the physics of the weak interaction and the discovery of the W and Z bosons. Former CERN Director-General Herwig Schopper will lecture on CERN’s accelerators from LEP to the LHC. Giovanni Bignami, former President of the Italian Space Agency, will speak about his work with Carlo Rubbia. Finally, Hans Joachim Schellnhuber of the Potsdam Institute for Climate Research and Sven Kul...
Directory of Open Access Journals (Sweden)
Charlie Samuya Veric
2001-12-01
Full Text Available The importance of Carlos Bulosan in Filipino and Filipino-American radical history and literature is indisputable. His eminence spans the Pacific, and he is known, diversely, as a radical poet, fictionist, novelist, and labor organizer. Author of the canonical America Is in the Heart, Bulosan is celebrated for chronicling the conditions in America in his time, such as racism and unemployment. In the history of criticism on Bulosan's life and work, however, there is an undeclared general consensus that views Bulosan and his work as coherent permanent texts of radicalism and anti-imperialism. Central to the existence of such a tradition of critical reception are the generations of critics who, in more ways than one, control the discourse on and of Carlos Bulosan. This essay inquires into the sphere of the critical reception that orders, for our time and for the time ahead, the reading and interpretation of Bulosan. What eye and seeing, the essay asks, determine the perception of Bulosan as the angel of radicalism? What is obscured in constructing Bulosan as an immutable figure of the political? What light does the reader conceive when the personal is brought into the open and situated against the political? The essay explores the answers to these questions in Bulosan's loving letters to various friends, strangers, and white American women. These interrogations, the essay believes, will ultimately secure the continuing importance of Carlos Bulosan to radical literature and history.
Fragoso, Margarida; Wen, Ning; Kumar, Sanath; Liu, Dezhi; Ryu, Samuel; Movsas, Benjamin; Munther, Ajlouni; Chetty, Indrin J.
2010-08-01
Modern cancer treatment techniques, such as intensity-modulated radiation therapy (IMRT) and stereotactic body radiation therapy (SBRT), have greatly increased the demand for more accurate treatment planning (structure definition, dose calculation, etc) and dose delivery. The ability to use fast and accurate Monte Carlo (MC)-based dose calculations within a commercial treatment planning system (TPS) in the clinical setting is now becoming more of a reality. This study describes the dosimetric verification and initial clinical evaluation of a new commercial MC-based photon beam dose calculation algorithm, within the iPlan v.4.1 TPS (BrainLAB AG, Feldkirchen, Germany). Experimental verification of the MC photon beam model was performed with film and ionization chambers in water phantoms and in heterogeneous solid-water slabs containing bone and lung-equivalent materials for a 6 MV photon beam from a Novalis (BrainLAB) linear accelerator (linac) with a micro-multileaf collimator (m3 MLC). The agreement between calculated and measured dose distributions in the water phantom verification tests was, on average, within 2%/1 mm (high dose/high gradient) and was within ±4%/2 mm in the heterogeneous slab geometries. Example treatment plans in the lung show significant differences between the MC and one-dimensional pencil beam (PB) algorithms within iPlan, especially for small lesions in the lung, where electronic disequilibrium effects are emphasized. Other user-specific features in the iPlan system, such as options to select dose to water or dose to medium, and the mean variance level, have been investigated. Timing results for typical lung treatment plans show the total computation time (including that for processing and I/O) to be less than 10 min for 1-2% mean variance (running on a single PC with 8 Intel Xeon X5355 CPUs, 2.66 GHz). Overall, the iPlan MC algorithm is demonstrated to be an accurate and efficient dose algorithm, incorporating robust tools for MC-based
International Nuclear Information System (INIS)
Proton therapy facilities are shielded to limit the amount of secondary radiation to which patients, occupational workers and members of the general public are exposed. The most commonly applied shielding design methods for proton therapy facilities comprise semi-empirical and analytical methods to estimate the neutron dose equivalent. This study compares the results of these methods with a detailed simulation of a proton therapy facility using the Monte Carlo technique. A comparison of neutron dose equivalent values predicted by the various methods reveals the superior accuracy of the Monte Carlo predictions in locations where the calculations converge. However, the reliability of the overall shielding design increases if simulation results for which solutions have not converged, e.g. owing to too few particle histories, can be excluded, with deterministic models used at these locations instead. Criteria to accept or reject Monte Carlo calculations in such complex structures are not well understood. An optimum rejection criterion would allow all converging solutions of the Monte Carlo simulation to be taken into account, and reject all solutions with uncertainties larger than the design safety margins. In this study, an optimum rejection criterion of 10% was found. The mean ratio was 26; 62% of all receptor locations showed a ratio between 0.9 and 10, and 92% were between 1 and 100. (authors)
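The hybrid accept/reject strategy described above can be sketched as follows. The function name, the dose values, and the default 10% threshold are illustrative assumptions, not code from the study:

```python
def filter_tallies(mc_results, analytic_results, max_rel_unc=0.10):
    """Hybrid shielding estimate: keep a Monte Carlo dose tally only if its
    relative uncertainty is below the rejection criterion; otherwise fall
    back to the deterministic (semi-empirical) value at that location."""
    combined = []
    for (dose, rel_unc), analytic in zip(mc_results, analytic_results):
        if rel_unc <= max_rel_unc:
            combined.append(("MC", dose))
        else:
            combined.append(("deterministic", analytic))
    return combined

# Three hypothetical receptor locations: (MC dose, relative uncertainty),
# plus the corresponding analytical estimates.
mc = [(1.2e-3, 0.04), (5.0e-4, 0.35), (2.1e-3, 0.09)]
analytic = [1.5e-3, 6.0e-4, 2.5e-3]
print(filter_tallies(mc, analytic))
```

The middle location fails the 10% criterion, so the deterministic value is used there.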
Associative Interactions in Crowded Solutions of Biopolymers Counteract Depletion Effects.
Groen, Joost; Foschepoth, David; te Brinke, Esra; Boersma, Arnold J; Imamura, Hiromi; Rivas, Germán; Heus, Hans A; Huck, Wilhelm T S
2015-10-14
The cytosol of Escherichia coli is an extremely crowded environment, containing high concentrations of biopolymers which occupy 20-30% of the available volume. Such conditions are expected to yield depletion forces, which strongly promote macromolecular complexation. However, crowded macromolecule solutions, like the cytosol, are very prone to nonspecific associative interactions that can potentially counteract depletion. It remains unclear how the cytosol balances these opposing interactions. We used a FRET-based probe to systematically study depletion in vitro in different crowded environments, including a cytosolic mimic, E. coli lysate. We also studied bundle formation of FtsZ protofilaments under identical crowded conditions as a probe for depletion interactions at much larger overlap volumes of the probe molecule. The FRET probe showed a more compact conformation in synthetic crowding agents, suggesting strong depletion interactions. However, depletion was completely negated in cell lysate and other protein crowding agents, where the FRET probe even occupied slightly more volume. In contrast, bundle formation of FtsZ protofilaments proceeded as readily in E. coli lysate and other protein solutions as in synthetic crowding agents. Our experimental results and model suggest that, in crowded biopolymer solutions, associative interactions counterbalance depletion forces for small macromolecules. Furthermore, the net effects of macromolecular crowding will be dependent on both the size of the macromolecule and its associative interactions with the crowded background.
Geodesic Monte Carlo on Embedded Manifolds.
Byrne, Simon; Girolami, Mark
2013-12-01
Markov chain Monte Carlo methods explicitly defined on the manifold of probability distributions have recently been established. These methods are constructed from diffusions across the manifold and the solution of the equations describing geodesic flows in the Hamilton-Jacobi representation. This paper takes the differential geometric basis of Markov chain Monte Carlo further by considering methods to simulate from probability distributions that themselves are defined on a manifold, with common examples being classes of distributions describing directional statistics. Proposal mechanisms are developed based on the geodesic flows over the manifolds of support for the distributions, and illustrative examples are provided for the hypersphere and Stiefel manifold of orthonormal matrices. PMID:25309024
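The geodesic proposal mechanism for the hypersphere can be illustrated with a short sketch. This shows only the great-circle update along a random tangent direction, not the full Hamiltonian scheme of the paper; the function name and step size are hypothetical:

```python
import numpy as np

def geodesic_step(x, rng, step=0.5):
    """One geodesic proposal on the unit hypersphere: draw a random tangent
    vector at x, then follow the great-circle geodesic for time `step`.
    The update keeps the state exactly on the sphere (up to round-off)."""
    v = rng.standard_normal(x.shape)
    v -= v.dot(x) * x                  # project into the tangent space at x
    speed = np.linalg.norm(v)
    return x * np.cos(speed * step) + (v / speed) * np.sin(speed * step)

rng = np.random.default_rng(0)
x = np.array([1.0, 0.0, 0.0])
for _ in range(100):
    x = geodesic_step(x, rng)
print(abs(np.linalg.norm(x) - 1.0) < 1e-9)   # still on the manifold
```

Unlike a random-walk proposal followed by re-normalization, the geodesic move respects the manifold by construction, which is the point the paper develops.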
Monte Carlo simulation of neutron scattering instruments
International Nuclear Information System (INIS)
A library of Monte Carlo subroutines has been developed for the purpose of design of neutron scattering instruments. Using small-angle scattering as an example, the philosophy and structure of the library are described and the programs are used to compare instruments at continuous wave (CW) and long-pulse spallation source (LPSS) neutron facilities. The Monte Carlo results give a count-rate gain of a factor between 2 and 4 using time-of-flight analysis. This is comparable to scaling arguments based on the ratio of wavelength bandwidth to resolution width
CosmoPMC: Cosmology Population Monte Carlo
Kilbinger, Martin; Cappe, Olivier; Cardoso, Jean-Francois; Fort, Gersende; Prunet, Simon; Robert, Christian P; Wraith, Darren
2011-01-01
We present the public release of the Bayesian sampling algorithm for cosmology, CosmoPMC (Cosmology Population Monte Carlo). CosmoPMC explores the parameter space of various cosmological probes, and also provides a robust estimate of the Bayesian evidence. CosmoPMC is based on an adaptive importance sampling method called Population Monte Carlo (PMC). Various cosmology likelihood modules are implemented, and new modules can be added easily. The importance-sampling algorithm is written in C, and fully parallelised using the Message Passing Interface (MPI). Due to very little overhead, the sampling speed scales approximately linearly with the number of CPUs. The CosmoPMC package contains post-processing and plotting programs, and in addition a Monte-Carlo Markov chain (MCMC) algorithm. The sampling engine is implemented in the library pmclib, and can be used independently. The software is available for download at http://www.cosmopmc.info.
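A minimal sketch of the Population Monte Carlo idea behind CosmoPMC, assuming a single Gaussian proposal and a toy one-dimensional target instead of cosmological likelihoods (CosmoPMC itself uses adaptive mixture proposals in C):

```python
import numpy as np

def pmc(log_target, n=2000, iters=5, seed=1):
    """Minimal Population Monte Carlo loop: at each iteration, draw a
    population from the current Gaussian proposal, compute normalized
    importance weights against the target, and re-fit the proposal's
    mean and variance from the weighted sample."""
    rng = np.random.default_rng(seed)
    mu, sigma = 0.0, 5.0                     # deliberately poor initial proposal
    for _ in range(iters):
        x = rng.normal(mu, sigma, n)
        log_q = -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma)
        log_w = log_target(x) - log_q        # unnormalized log importance weights
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        mu = np.sum(w * x)                   # weighted moment updates
        sigma = np.sqrt(np.sum(w * (x - mu) ** 2))
    return mu, sigma

# Toy target: N(3, 1) up to a constant; the proposal should adapt toward it.
mu, sigma = pmc(lambda x: -0.5 * (x - 3.0) ** 2)
print(mu, sigma)
```

After a few iterations the fitted proposal approaches the target's mean and standard deviation, which is what makes the final importance-weighted population usable for evidence estimation.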
SPQR: a Monte Carlo reactor kinetics code
International Nuclear Information System (INIS)
The SPQR Monte Carlo code has been developed to analyze fast reactor core accident problems where conventional methods are considered inadequate. The code is based on the adiabatic approximation of the quasi-static method. This initial version contains no automatic material motion or feedback. An existing Monte Carlo code is used to calculate the shape functions and the integral quantities needed in the kinetics module. Several sample problems have been devised and analyzed. Due to the large statistical uncertainty associated with the calculation of reactivity in accident simulations, the results, especially at later times, differ greatly from deterministic methods. It was also found that in large uncoupled systems, the Monte Carlo method has difficulty in handling asymmetric perturbations
Optical Fiber Turbidity Sensor Based on Monte Carlo Simulations
Institute of Scientific and Technical Information of China (English)
吴刚; 刘月明; 许宏志; 陈飞华; 黄杰
2014-01-01
Based on the backscattering turbidity measurement method, a turbidity sensor with a Y-shaped optical fiber bundle probe, used in conjunction with a plane mirror as a reflection target, is designed using optical fiber sensing technology. The linear relation between the extinction coefficient and turbidity was studied experimentally according to the Beer-Lambert law, and a photon scattering model of the liquid under test, based on the Monte Carlo method, was built to simulate the power collected by the sensor under different detection conditions and to optimize the distance between the fiber bundle and the mirror. The calibrated curve of received intensity versus extinction coefficient is used for measurement. Turbidity is estimated in terms of the total interaction coefficient, a parameter that carries a strong signature of the turbidity of a solution. The method is simple and efficient, and can detect water with a total interaction coefficient as low as 0.059 cm-1; effective use of the mirror improves the sensitivity of the sensor more than 10-fold. The proposed sensor can be used for portable measurements, and on-line monitoring can be realized by combining space-division and time-division multiplexing.
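The Beer-Lambert inversion underlying the calibration can be sketched as follows. The factor of 2 for the mirror-folded path, the function name, and the numerical reading are illustrative assumptions, not values from the record:

```python
import math

def extinction_coeff(I0, I, distance_cm):
    """Invert the Beer-Lambert law I = I0 * exp(-mu * 2L) for the total
    interaction (extinction) coefficient mu. The factor of 2 models the
    mirror behind the probe, which folds the optical path back on itself."""
    return math.log(I0 / I) / (2.0 * distance_cm)

# Hypothetical reading: fiber-to-mirror distance 1.5 cm, 16% attenuation.
mu = extinction_coeff(1.00, 0.84, 1.5)
print(round(mu, 3))  # on the order of the 0.059 cm-1 detection limit quoted
```

Doubling the path via the mirror is also why the sensitivity gain appears in the exponent rather than as a simple intensity factor.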
Energy Technology Data Exchange (ETDEWEB)
Lee, C [Division of Cancer Epidemiology and Genetics, National Cancer Institute, Bethesda, MD (United States); Badal, A [U.S. Food & Drug Administration (CDRH/OSEL), Silver Spring, MD (United States)
2014-06-15
Purpose: Computational voxel phantoms provide realistic anatomy, but the voxel structure may introduce dosimetric error relative to real anatomy composed of smooth surfaces. We analyzed the dosimetric error caused by the voxel structure in hybrid computational phantoms by comparing voxel-based doses at different resolutions with triangle-mesh-based doses. Methods: We incorporated the existing adult male UF/NCI hybrid phantom in mesh format into a Monte Carlo transport code, penMesh, which supports triangle meshes. We calculated energy deposition to selected organs of interest for parallel photon beams with three mono energies (0.1, 1, and 10 MeV) in antero-posterior geometry. We also calculated organ energy deposition using three voxel phantoms with different voxel resolutions (1, 5, and 10 mm) using MCNPX2.7. Results: Comparison of organ energy deposition between the two methods showed that agreement overall improved for higher voxel resolution, but for many organs the differences were small. The difference in energy deposition for 1 MeV, for example, decreased from 11.5% to 1.7% in muscle but only from 0.6% to 0.3% in liver as voxel resolution increased from 10 mm to 1 mm. The differences were smaller at higher energies. The number of photon histories processed per second in voxels was 6.4×10⁴, 3.3×10⁴, and 1.3×10⁴ for 10, 5, and 1 mm resolutions at 10 MeV, respectively, while meshes ran at 4.0×10⁴ histories/sec. Conclusion: The combination of the hybrid mesh phantom and penMesh proved to be accurate and of similar speed compared to the voxel phantom and MCNPX. The lowest voxel resolution caused a maximum dosimetric error of 12.6% at 0.1 MeV and 6.8% at 10 MeV, but the error was insignificant in some organs. We will apply the tool to calculate dose to very thin tissue layers (e.g., the radiosensitive layer in the gastrointestinal tract) which cannot be modeled by voxel phantoms.
Energy Technology Data Exchange (ETDEWEB)
Sikora, M [Section for Biomedical Physics, University Hospital for Radiation Oncology, Hoppe-Seyler-Str. 3, 72076 Tuebingen (Germany); Dohm, O [Section for Biomedical Physics, University Hospital for Radiation Oncology, Hoppe-Seyler-Str. 3, 72076 Tuebingen (Germany); Alber, M [Section for Biomedical Physics, University Hospital for Radiation Oncology, Hoppe-Seyler-Str. 3, 72076 Tuebingen (Germany)
2007-08-07
A dedicated, efficient Monte Carlo (MC) accelerator head model for intensity-modulated stereotactic radiosurgery treatment planning is needed to afford a highly accurate simulation of tiny IMRT fields. A virtual source model (VSM) of a mini multi-leaf collimator (MLC) (the Elekta Beam Modulator (EBM)) is presented, allowing efficient generation of particles even for small fields. The VSM of the EBM is based on a previously published virtual photon energy fluence model (VEF) (Fippel et al 2003 Med. Phys. 30 301) commissioned with large-field measurements in air and in water. The original commissioning procedure of the VEF, based on large-field measurements only, leads to inaccuracies for small fields. In order to improve the VSM, it was necessary to change the VEF model by developing (1) a method to determine the primary photon source diameter, relevant for output factor calculations, (2) a model of the influence of the flattening filter on the secondary photon spectrum and (3) a more realistic primary photon spectrum. The VSM model is used to generate the source phase-space data above the mini-MLC. The particles are then transmitted through the mini-MLC by a passive filter function, which significantly speeds up the generation of the phase-space data after the mini-MLC, used for calculation of the dose distribution in the patient. The improved VSM model was commissioned for 6 and 15 MV beams. The results of the MC simulation are in very good agreement with measurements. A local difference of less than 2% between the MC simulation and the diamond detector measurement of the output factors in water was achieved. The X, Y and Z profiles measured in water with an ion chamber (V = 0.125 cm³) and a diamond detector were used to validate the models. An overall agreement of 2%/2 mm for high-dose regions and 3%/2 mm in low-dose regions between measurement and MC simulation for field sizes from 0.8 x 0.8 cm² to 16 x 21 cm² was achieved. An IMRT plan film
Monte Carlo methods beyond detailed balance
Schram, Raoul D.; Barkema, Gerard T.
2015-01-01
Monte Carlo algorithms are nearly always based on the concept of detailed balance and ergodicity. In this paper we focus on algorithms that do not satisfy detailed balance. We introduce a general method for designing non-detailed balance algorithms, starting from a conventional algorithm satisfying
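For contrast with the non-detailed-balance algorithms the paper develops, the conventional baseline it starts from is a sampler that does satisfy detailed balance. A minimal textbook Metropolis sketch (not the authors' code; all names are illustrative):

```python
import math
import random

def metropolis(log_p, x0, steps, step_size=1.0, seed=42):
    """Textbook Metropolis sampler. The symmetric proposal plus the
    min(1, p(x')/p(x)) acceptance rule enforces detailed balance:
    p(x) T(x -> x') = p(x') T(x' -> x)."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(steps):
        xp = x + rng.uniform(-step_size, step_size)   # symmetric proposal
        if math.log(rng.random()) < log_p(xp) - log_p(x):
            x = xp                                    # accept the move
        samples.append(x)
    return samples

# Sample a standard normal; the empirical mean should settle near 0.
s = metropolis(lambda x: -0.5 * x * x, x0=0.0, steps=50000)
print(abs(sum(s) / len(s)) < 0.1)
```

Non-detailed-balance schemes relax the pairwise condition above while preserving the target as the stationary distribution, typically to reduce the diffusive back-and-forth this sampler exhibits.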
Ozone Depletion from Nearby Supernovae
Gehrels, N; Jackman, C H; Cannizzo, J K; Mattson, B J; Chen, W; Gehrels, Neil; Laird, Claude M.; Jackman, Charles H.; Cannizzo, John K.; Mattson, Barbara J.; Chen, Wan
2003-01-01
Estimates made in the 1970's indicated that a supernova occurring within tens of parsecs of Earth could have significant effects on the ozone layer. Since that time, improved tools for detailed modeling of atmospheric chemistry have been developed to calculate ozone depletion, and advances have been made in theoretical modeling of supernovae and of the resultant gamma-ray spectra. In addition, one now has better knowledge of the occurrence rate of supernovae in the galaxy, and of the spatial distribution of progenitors to core-collapse supernovae. We report here the results of two-dimensional atmospheric model calculations that take as input the spectral energy distribution of a supernova, adopting various distances from Earth and various latitude impact angles. In separate simulations we calculate the ozone depletion due to both gamma-rays and cosmic rays. We find that for the combined ozone depletion roughly to double the ``biologically active'' UV flux received at the surface of the Earth, the supernova mu...
Ozone Depletion from Nearby Supernovae
Gehrels, Neil; Laird, Claude M.; Jackman, Charles H.; Cannizzo, John K.; Mattson, Barbara J.; Chen, Wan; Bhartia, P. K. (Technical Monitor)
2002-01-01
Estimates made in the 1970's indicated that a supernova occurring within tens of parsecs of Earth could have significant effects on the ozone layer. Since that time improved tools for detailed modeling of atmospheric chemistry have been developed to calculate ozone depletion, and advances have been made also in theoretical modeling of supernovae and of the resultant gamma ray spectra. In addition, one now has better knowledge of the occurrence rate of supernovae in the galaxy, and of the spatial distribution of progenitors to core-collapse supernovae. We report here the results of two-dimensional atmospheric model calculations that take as input the spectral energy distribution of a supernova, adopting various distances from Earth and various latitude impact angles. In separate simulations we calculate the ozone depletion due to both gamma rays and cosmic rays. We find that for the combined ozone depletion from these effects roughly to double the 'biologically active' UV flux received at the surface of the Earth, the supernova must occur at approximately or less than 8 parsecs.
HD depletion in starless cores
Sipilä, O; Harju, J
2013-01-01
Aims: We aim to investigate the abundances of light deuterium-bearing species such as HD, H2D+ and D2H+ in a gas-grain chemical model including an extensive description of deuterium and spin-state chemistry, in physical conditions appropriate to the very centers of starless cores. Methods: We combine a gas-grain chemical model with radiative transfer calculations to simulate the density and temperature structure of starless cores. The chemical model includes deuterated forms of species with up to 4 atoms and the spin states of the light species H2, H2+ and H3+ and their deuterated forms. Results: We find that HD eventually depletes from the gas phase because deuterium is efficiently incorporated into grain-surface HDO, resulting in inefficient HD production on grains. HD depletion has consequences not only for the abundances of e.g. H2D+ and D2H+, whose production depends on the abundance of HD, but also for the spin-state abundance ratios of the various light species, when compared with the complete depletion model ...
Energy Technology Data Exchange (ETDEWEB)
Teymurazyan, A. [Imaging Research, Sunnybrook Health Sciences Centre, Department of Medical Biophysics, University of Toronto, Toronto M4N 3M5 (Canada); Rowlands, J. A. [Imaging Research, Sunnybrook Health Sciences Centre, Department of Medical Biophysics, University of Toronto, Toronto M4N 3M5 (Canada); Thunder Bay Regional Research Institute (TBRRI), Thunder Bay P7A 7T1 (Canada); Department of Radiation Oncology, University of Toronto, Toronto M5S 3E2 (Canada); Pang, G., E-mail: geordi.pang@sunnybrook.ca [Imaging Research, Sunnybrook Health Sciences Centre, Department of Medical Biophysics, University of Toronto, Toronto M4N 3M5 (Canada); Department of Radiation Oncology, University of Toronto, Toronto M5S 3E2 (Canada); Odette Cancer Centre, Toronto M4N 3M5 (Canada); Department of Physics, Ryerson University, Toronto M5B 2K3 (Canada)
2014-04-15
Purpose: Electronic Portal Imaging Devices (EPIDs) have been widely used in radiation therapy and are still needed on linear accelerators (Linacs) equipped with kilovoltage cone beam CT (kV-CBCT) or MRI systems. Our aim is to develop a new high quantum efficiency (QE) Čerenkov Portal Imaging Device (CPID) that is quantum-noise limited at dose levels corresponding to a single Linac pulse. Methods: Recently a new concept of CPID for MV x-ray imaging in radiation therapy was introduced. It relies on the Čerenkov effect for x-ray detection. The proposed design consisted of a matrix of optical fibers aligned with the incident x-rays and coupled to an active matrix flat panel imager (AMFPI) for image readout. A weakness of such a design is that too few Čerenkov light photons reach the AMFPI for each incident x-ray, and an AMFPI with avalanche gain is required in order to overcome the readout noise for portal imaging applications. In this work the authors propose to replace the optical fibers in the CPID with light guides without a cladding layer that are suspended in air. The air between the light guides takes on the role of the cladding layer found in a regular optical fiber. Since air has a significantly lower refractive index (∼1 versus 1.38 in a typical cladding layer), a much superior light collection efficiency is achieved. Results: A Monte Carlo simulation of the new design has been conducted to investigate its feasibility. Detector quantities such as quantum efficiency (QE), spatial resolution (MTF), and frequency-dependent detective quantum efficiency (DQE) have been evaluated. The detector signal and the quantum noise have been compared to the readout noise. Conclusions: Our studies show that the modified new CPID has a QE and DQE more than an order of magnitude greater than those of current clinical systems and yet a spatial resolution similar to that of current low-QE flat-panel-based EPIDs. Furthermore it was demonstrated that the new CPID does not require an
International Nuclear Information System (INIS)
Purpose: Using graphical processing unit (GPU) hardware technology, an extremely fast Monte Carlo (MC) code, ARCHERRT, is developed for radiation dose calculations in radiation therapy. This paper describes the detailed software development and testing for three clinical TomoTherapy® cases: the prostate, lung, and head and neck. Methods: To obtain clinically relevant dose distributions, phase space files (PSFs) created from optimized radiation therapy treatment plan fluence maps were used as the input to ARCHERRT. Patient-specific phantoms were constructed from patient CT images. Batch simulations were employed to facilitate the time-consuming task of loading large PSFs, and to improve the estimation of statistical uncertainty. Furthermore, two different Woodcock tracking algorithms were implemented and their relative performance was compared. The dose curves of an Elekta accelerator PSF incident on a homogeneous water phantom were benchmarked against DOSXYZnrc. For each of the treatment cases, dose volume histograms and isodose maps were produced from ARCHERRT and the general-purpose code GEANT4. The gamma index analysis was performed to evaluate the similarity of voxel doses obtained from these two codes. The hardware accelerators used in this study are one NVIDIA K20 GPU, one NVIDIA K40 GPU, and six NVIDIA M2090 GPUs. In addition, to make a fairer comparison of CPU and GPU performance, a multithreaded CPU code was developed using OpenMP and tested on an Intel E5-2620 CPU. Results: For the water phantom, the depth dose curve and dose profiles from ARCHERRT agree well with DOSXYZnrc. For clinical cases, results from ARCHERRT are compared with those from GEANT4 and good agreement is observed. The gamma index test was performed for voxels whose dose is greater than 10% of the maximum dose. For the 2%/2 mm criterion, the passing rates for the prostate, lung, and head and neck cases are 99.7%, 98.5%, and 97.2%, respectively. Due to specific architecture of GPU
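The Woodcock (delta) tracking mentioned above can be sketched in a few lines. This is a generic one-dimensional illustration of the technique, not the ARCHERRT implementation; the function and parameter names are hypothetical:

```python
import math
import random

def woodcock_distance(sigma_of_x, sigma_max, rng):
    """Woodcock (delta) tracking in 1D: sample flight distances with the
    majorant cross-section sigma_max, then accept a real collision at the
    tentative site with probability sigma(x)/sigma_max; rejected sites are
    'virtual' collisions and the particle keeps flying. Avoiding explicit
    ray tracing through voxel boundaries is what makes this GPU-friendly."""
    x = 0.0
    while True:
        x += -math.log(rng.random()) / sigma_max   # flight in majorant medium
        if rng.random() < sigma_of_x(x) / sigma_max:
            return x                               # real collision accepted

# Sanity check on a homogeneous medium: with sigma(x) = 0.5 everywhere and
# sigma_max = 1.0, the sampled mean free path should come out near 1/0.5 = 2.
rng = random.Random(0)
d = [woodcock_distance(lambda x: 0.5, 1.0, rng) for _ in range(20000)]
print(abs(sum(d) / len(d) - 2.0) < 0.1)
```

The thinning argument guarantees the accepted sites follow the true exponential flight-length distribution, whatever the spatial variation of sigma(x), as long as sigma_max bounds it everywhere.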
Energy Technology Data Exchange (ETDEWEB)
Barrera, C A; Moran, M J
2007-08-21
The Neutron Imaging System (NIS) is one of seven ignition target diagnostics under development for the National Ignition Facility. The NIS is required to record hot-spot (13-15 MeV) and downscattered (6-10 MeV) images with a resolution of 10 microns and a signal-to-noise ratio (SNR) of 10 at the 20% contour. The NIS is a valuable diagnostic since the downscattered neutrons reveal the spatial distribution of the cold fuel during an ignition attempt, providing important information in the case of a failed implosion. The present study explores the parameter space of several line-of-sight (LOS) configurations that could serve as the basis for the final design. Six commercially available organic scintillators were experimentally characterized for their light emission decay profile and neutron sensitivity. The samples showed a long lived decay component that makes direct recording of a downscattered image impossible. The two best candidates for the NIS detector material are: EJ232 (BC422) plastic fibers or capillaries filled with EJ399B. A Monte Carlo-based end-to-end model of the NIS was developed to study the imaging capabilities of several LOS configurations and verify that the recovered sources meet the design requirements. The model includes accurate neutron source distributions, aperture geometries (square pinhole, triangular wedge, mini-penumbral, annular and penumbral), their point spread functions, and a pixelated scintillator detector. The modeling results show that a useful downscattered image can be obtained by recording the primary peak and the downscattered images, and then subtracting a decayed version of the former from the latter. The difference images need to be deconvolved in order to obtain accurate source distributions. The images are processed using a frequency-space modified-regularization algorithm and low-pass filtering. The resolution and SNR of these sources are quantified by using two surrogate sources. The simulations show that all LOS
International Nuclear Information System (INIS)
The Monte Carlo (MC) method has been shown through many research studies to calculate accurate dose distributions for clinical radiotherapy, particularly in heterogeneous patient tissues where the effects of electron transport cannot be accurately handled with conventional, deterministic dose algorithms. Despite its proven accuracy and the potential for improved dose distributions to influence treatment outcomes, the long calculation times previously associated with MC simulation rendered this method impractical for routine clinical treatment planning. However, the development of faster codes optimized for radiotherapy calculations and improvements in computer processor technology have substantially reduced calculation times to, in some instances, within minutes on a single processor. These advances have motivated several major treatment planning system vendors to embark upon the path of MC techniques. Several commercial vendors have already released or are currently in the process of releasing MC algorithms for photon and/or electron beam treatment planning. Consequently, the accessibility and use of MC treatment planning algorithms may well become widespread in the radiotherapy community. With MC simulation, dose is computed stochastically using first principles; this method is therefore quite different from conventional dose algorithms. Issues such as statistical uncertainties, the use of variance reduction techniques, the ability to account for geometric details in the accelerator treatment head simulation, and other features, are all unique components of a MC treatment planning algorithm. Successful implementation by the clinical physicist of such a system will require an understanding of the basic principles of MC techniques. The purpose of this report, while providing education and review on the use of MC simulation in radiotherapy planning, is to set out, for both users and developers, the salient issues associated with clinical implementation and
U.S. Geological Survey, Department of the Interior — This data release includes a polygon shapefile of grid cells attributed with values representing the simulated base-flow, evapotranspiration, and...
Optimization of Monte Carlo simulations
Bryskhe, Henrik
2009-01-01
This thesis considers several different techniques for optimizing Monte Carlo simulations. The Monte Carlo system used is Penelope, but most of the techniques are applicable to other systems. The two major techniques are the use of the graphics card to do geometry calculations, and ray tracing. Using the graphics card provides a very efficient way to do fast ray-triangle intersections. Ray tracing provides an approximation of Monte Carlo simulation but is much faster to perform. A program was ...
Directory of Open Access Journals (Sweden)
Pedro Pablo Ferrer Gallego
2012-07-01
Full Text Available Epistolary correspondence between Carlos Vicioso and Carlos Pau during a stay in Bicorp (Valencia). A set of letters sent by Carlos Vicioso to Carlos Pau during his stay in Bicorp (Valencia) between 1914 and 1915 is presented and discussed. The letters are held in the Archivo Histórico del Instituto Botánico de Barcelona. This correspondence marks the beginning of the scientific relationship between Vicioso and Pau, based at first on the consultations Vicioso sent to Pau for the determination of the species he collected in Bicorp and mailed as herbarium sheets. Nowadays these voucher sheets are preserved in various national and international herbaria, the result of the sending and exchange of botanical material between Vicioso and other botanists of the time, mainly Pau, Sennen and Font Quer.
Nasir, M.; Pratama, D.; Anam, C.; Haryanto, F.
2016-03-01
The aim of this research was to calculate Size Specific Dose Estimates (SSDE) generated by the Varian OBI CBCT v1.4 X-ray tube working at 100 kV using EGSnrc Monte Carlo simulations. The EGSnrc Monte Carlo code used in this simulation was divided into two parts. Phase-space file data produced by the first part of the simulation served as input to the second part. This research was performed with varying phantom diameters of 5 to 35 cm and varying phantom lengths of 10 to 25 cm. Dose distribution data were used to calculate SSDE values using the trapezoidal rule (the trapz function) in a Matlab program. The SSDE obtained from this calculation was compared to that in the AAPM report and to experimental data. The normalized SSDE values were between 1.00 and 3.19 for the varying phantom diameters and between 0.96 and 1.07 for the varying phantom lengths. The statistical error in this simulation was 4.98% for varying phantom diameters and 5.20% for varying phantom lengths. This study demonstrated the accuracy of the Monte Carlo technique in simulating the dose calculation. In the future, the influence of the cylindrical phantom material on SSDE will be studied.
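The trapezoidal-rule step used to reduce the simulated dose distributions can be sketched as follows. A hypothetical bell-shaped dose profile stands in for the simulated data, and the rule is written out explicitly rather than calling Matlab's trapz:

```python
import numpy as np

# Hypothetical dose profile D(z) along the phantom axis, sampled at
# positions z (cm). The average dose over the scan length is the
# trapezoidal integral of D(z) divided by the length.
z = np.linspace(0.0, 15.0, 31)
dose = np.exp(-((z - 7.5) / 5.0) ** 2)   # bell-shaped central profile (a.u.)
trap = np.sum((dose[1:] + dose[:-1]) / 2.0 * np.diff(z))   # trapezoidal rule
avg_dose = float(trap) / (z[-1] - z[0])
print(round(avg_dose, 3))
```

The same reduction, applied per phantom diameter and length and normalized to a reference geometry, yields the size-dependent factors the study tabulates.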
Energy Technology Data Exchange (ETDEWEB)
Ureba, A.; Palma, B. A.; Leal, A.
2011-07-01
To develop a more time-efficient optimization method, based on linear programming, designed to implement a multi-objective penalty function and also permitting a solution for simultaneous integrated boost situations, considering two target volumes simultaneously.
García de la Torre, Nuria; Durán, Alejandra; Del Valle, Laura; Fuentes, Manuel; Barca, Idoya; Martín, Patricia; Montañez, Carmen; Perez-Ferre, Natalia; Abad, Rosario; Sanz, Fuencisla; Galindo, Mercedes; Rubio, Miguel A; Calle-Pascual, Alfonso L
2013-08-01
The aims are to define the regression rate in newly diagnosed type 2 diabetes after lifestyle intervention and pharmacological therapy based on a SMBG (self-monitoring of blood glucose) strategy in routine practice as compared to standard HbA1c-based treatment and to assess whether a supervised exercise program has additional effects. St Carlos study is a 3-year, prospective, randomized, clinic-based, interventional study with three parallel groups. Hundred and ninety-five patients were randomized to the SMBG intervention group [I group; n = 130; Ia: SMBG (n = 65) and Ib: SMBG + supervised exercise (n = 65)] and to the HbA1c control group (C group) (n = 65). The primary outcome was to estimate the regression rate of type 2 diabetes (HbA1c 4 kg was 3.6 (1.8-7); p < 0.001. This study shows that the use of SMBG in an educational program effectively increases the regression rate in newly diagnosed type 2 diabetic patients after 3 years of follow-up. These data suggest that SMBG-based programs should be extended to primary care settings where diabetic patients are usually attended.
Energy Technology Data Exchange (ETDEWEB)
Arreola V, G. [IPN, Escuela Superior de Fisica y Matematicas, Posgrado en Ciencias Fisicomatematicas, area en Ingenieria Nuclear, Unidad Profesional Adolfo Lopez Mateos, Edificio 9, Col. San Pedro Zacatenco, 07730 Mexico D. F. (Mexico); Vazquez R, R.; Guzman A, J. R., E-mail: energia.arreola.uam@gmail.com [Universidad Autonoma Metropolitana, Unidad Iztapalapa, Area de Ingenieria en Recursos Energeticos, Av. San Rafael Atlixco 186, Col. Vicentina, 09340 Mexico D. F. (Mexico)
2012-10-15
In this work a comparative analysis of the results for neutron dispersion in a non-multiplicative semi-infinite medium is presented. One boundary of this medium is located at the origin of coordinates, where there is also a neutron source in beam form, i.e., {mu}{omicron}=1. Neutron dispersion is studied with the statistical Monte Carlo method and through one-dimensional, one-energy-group transport theory. Transport theory gives a semi-analytic solution for this problem, while the statistical solution for the flux was obtained with the MCNPX code. Dispersion in light water and heavy water was studied. A first notable result is that both methods locate the maximum of the neutron distribution at less than two transport mean free paths for heavy water, and at less than ten transport mean free paths for light water; the differences between the two methods are larger for light water. A second notable result is that the two distributions behave similarly at small numbers of mean free paths, while at large numbers of mean free paths the transport-theory solution tends to an asymptotic value and the statistical solution tends to zero. The existence of a low-energy neutron current directed back toward the source is demonstrated, opposite in sense to the high-energy neutron current coming from the source itself. (Author)
International Nuclear Information System (INIS)
This report contains an engineering analysis of long-term storage of uranium metal in boxes as an option for long-term management of depleted uranium hexafluoride (UF6). Three storage facilities are considered: buildings, vaults, and mined cavities. Three cases are considered: either all, half, or a quarter of the depleted uranium metal that would be produced from the conversion of depleted UF6 is stored at the facility. The analysis of these alternatives is based on a box design used in the Final Programmatic Environmental Impact Statement for Alternative Strategies for the Long-Term Management and Use of Depleted Uranium Hexafluoride, report DOE/EIS-0269, published in 1999 by the US Department of Energy. This box design does not appear to effectively use space within the box. Hence, an alternative box design that allows for a reduced storage area is addressed in the appendices for long-term storage in buildings
Lim, Wei Kang; Denton, Alan R.
2016-01-01
Depletion forces and macromolecular crowding govern the structure and function of biopolymers in biological cells and the properties of polymer nanocomposite materials. To isolate and analyze the influence of polymer shape fluctuations and penetrability on depletion-induced interactions and crowding by nanoparticles, we model polymers as effective penetrable ellipsoids, whose shapes fluctuate according to the probability distributions of the eigenvalues of the gyration tensor of an ideal random walk. Within this model, we apply Monte Carlo simulation methods to compute the depletion-induced potential of mean force between hard nanospheres and crowding-induced shape distributions of polymers in the protein limit, in which polymer coils can be easily penetrated by smaller nanospheres. By comparing depletion potentials from simulations of ellipsoidal and spherical polymer models with predictions of polymer field theory and free-volume theory, we show that polymer depletion-induced interactions and crowding depend sensitively on polymer shapes and penetrability, with important implications for bulk thermodynamic phase behavior.
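The shape distributions the model draws on can be sampled directly; a minimal sketch, assuming ideal Gaussian-step random walks of illustrative size:

```python
import numpy as np

def gyration_eigenvalues(n_steps=100, n_walks=2000, seed=0):
    """Sample eigenvalues of the gyration tensor of ideal 3D random
    walks -- the distributions that set the fluctuating-ellipsoid
    shapes in the model above. Step length and sizes are illustrative."""
    rng = np.random.default_rng(seed)
    steps = rng.normal(size=(n_walks, n_steps, 3))
    walks = np.cumsum(steps, axis=1)
    walks -= walks.mean(axis=1, keepdims=True)       # center-of-mass frame
    # gyration tensor G = <r r^T> for each walk, via batched outer product
    g = np.einsum('wni,wnj->wij', walks, walks) / n_steps
    return np.sort(np.linalg.eigvalsh(g), axis=1)    # ascending eigenvalues

eig = gyration_eigenvalues()
# Ideal chains are strongly aspherical: the largest eigenvalue dominates
print((eig[:, 2] / eig.sum(axis=1)).mean())
```

The well-known anisotropy of the ideal-chain gyration tensor (largest eigenvalue carrying roughly three quarters of the trace) is what makes the ellipsoidal model differ from a spherical one.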
Lambright, W. Henry
2005-01-01
While the National Aeronautics and Space Administration (NASA) is widely perceived as a space agency, since its inception NASA has had a mission dedicated to the home planet. Initially, this mission involved using space to better observe and predict weather and to enable worldwide communication. Meteorological and communication satellites showed the value of space for earthly endeavors in the 1960s. In 1972, NASA launched Landsat, and the era of earth-resource monitoring began. At the same time, in the late 1960s and early 1970s, the environmental movement swept throughout the United States and most industrialized countries. The first Earth Day event took place in 1970, and the government generally began to pay much more attention to issues of environmental quality. Mitigating pollution became an overriding objective for many agencies. NASA's existing mission to observe planet Earth was augmented in these years and directed more toward environmental quality. In the 1980s, NASA sought to plan and establish a new environmental effort that eventuated in the 1990s with the Earth Observing System (EOS). The Agency was able to make its initial mark via atmospheric monitoring, specifically ozone depletion. An important policy stimulus in many respects, ozone depletion spawned the Montreal Protocol of 1987 (the most significant international environmental treaty then in existence). It also was an issue critical to NASA's history that served as a bridge linking NASA's weather and land-resource satellites to NASA s concern for the global changes affecting the home planet. Significantly, as a global environmental problem, ozone depletion underscored the importance of NASA's ability to observe Earth from space. Moreover, the NASA management team's ability to apply large-scale research efforts and mobilize the talents of other agencies and the private sector illuminated its role as a lead agency capable of crossing organizational boundaries as well as the science-policy divide.
Appleyard, S.; Cook, T.
2009-05-01
The combined effects of low rainfall, groundwater withdrawal in excess of 300 GL/year and reduced recharge in areas covered by pine plantations has caused the water table in a sandy unconfined aquifer on the Gnangara Mound in Western Australia to drop by up to 5 m and aquifer storage to decline by about 500 GL over the last 20 years. Groundwater has become acidic in areas of high drawdown, with pH values typically being less than 5.0 at the water table, and elevated concentrations of SO4 2-, Al, Fe, Zn, Cu, Ni and Pb. Trends of increasing acidity and base cation concentrations in deep water supply wells in the Mirrabooka wellfield indicate that about 0.7 keq/ha/year of base cations are being leached from soil within cones of depression of pumping wells. These results indicate that the assessment of the sustainable yields of aquifers under conditions of low rainfall needs to consider geochemical interactions between groundwater, aquifer sediments, soils and vegetation, and not be just based on aquifer hydraulics and water-balance changes.
Reconstruction of Monte Carlo replicas from Hessian parton distributions
Hou, Tie-Jiun; Huston, Joey; Nadolsky, Pavel; Schmidt, Carl; Stump, Daniel; Wang, Bo-Ting; Xie, Ke-Ping; Dulat, Sayipjamal; Pumplin, Jon; Yuan, C -P
2016-01-01
We explore connections between two common methods for quantifying the uncertainty in parton distribution functions (PDFs), based on the Hessian error matrix and Monte-Carlo sampling. CT14 parton distributions in the Hessian representation are converted into Monte-Carlo replicas by a numerical method that reproduces important properties of CT14 Hessian PDFs: the asymmetry of CT14 uncertainties and positivity of individual parton distributions. The ensembles of CT14 Monte-Carlo replicas constructed this way at NNLO and NLO are suitable for various collider applications, such as cross section reweighting. Master formulas for computation of asymmetric standard deviations in the Monte-Carlo representation are derived. A numerical program is made available for conversion of Hessian PDFs into Monte-Carlo replicas according to normal, log-normal, and Watt-Thorne sampling procedures.
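A generic version of the normal-sampling conversion can be sketched as follows; the eigenvector PDF values are toy numbers, and the simple symmetric formula below is a stand-in for the paper's master formulas, which additionally preserve the CT14 asymmetries and positivity:

```python
import numpy as np

def hessian_to_mc_replicas(f0, f_plus, f_minus, n_rep=1000, rng=None):
    """Generate Monte Carlo replicas from Hessian eigenvector PDF sets
    by normal sampling: f_k = f0 + sum_i r_ki (f_i+ - f_i-)/2 with
    r_ki ~ N(0,1). This is the generic symmetric recipe, not the
    asymmetry-preserving master formula of the paper."""
    f_plus, f_minus = np.asarray(f_plus), np.asarray(f_minus)
    rng = rng or np.random.default_rng(0)
    n_eig = f_plus.shape[0]
    r = rng.standard_normal((n_rep, n_eig))       # Gaussian displacements
    return f0 + r @ ((f_plus - f_minus) / 2.0)    # replica ensemble

# Toy example: two eigenvector directions, one scalar PDF value
reps = hessian_to_mc_replicas(f0=1.0, f_plus=[1.1, 1.05], f_minus=[0.9, 0.97])
print(reps.mean())  # close to the central value 1.0
```

The ensemble mean reproduces the central PDF and the spread reproduces the Hessian uncertainty, which is the property the reweighting applications rely on.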
Krakowska, B; Custers, D; Deconinck, E; Daszykowski, M
2016-02-01
The aim of this work was to develop a general framework for the validation of discriminant models based on the Monte Carlo approach that is used in the context of authenticity studies based on chromatographic impurity profiles. The performance of the validation approach was applied to evaluate the usefulness of the diagnostic logic rule obtained from the partial least squares discriminant model (PLS-DA) that was built to discriminate authentic Viagra® samples from counterfeits (a two-class problem). The major advantage of the proposed validation framework stems from the possibility of obtaining distributions for different figures of merit that describe the PLS-DA model such as, e.g., sensitivity, specificity, correct classification rate and area under the curve in a function of model complexity. Therefore, one can quickly evaluate their uncertainty estimates. Moreover, the Monte Carlo model validation allows balanced sets of training samples to be designed, which is required at the stage of the construction of PLS-DA and is recommended in order to obtain fair estimates that are based on an independent set of samples. In this study, as an illustrative example, 46 authentic Viagra® samples and 97 counterfeit samples were analyzed and described by their impurity profiles that were determined using high performance liquid chromatography with photodiode array detection and further discriminated using the PLS-DA approach. In addition, we demonstrated how to extend the Monte Carlo validation framework with four different variable selection schemes: the elimination of uninformative variables, the importance of a variable in projections, selectivity ratio and significance multivariate correlation. The best PLS-DA model was based on a subset of variables that were selected using the variable importance in the projection approach. For an independent test set, average estimates with the corresponding standard deviation (based on 1000 Monte Carlo runs) of the correct
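The resampling core of such a Monte Carlo validation can be sketched as below; a nearest-class-mean classifier stands in for PLS-DA, and the data are synthetic, purely to keep the sketch self-contained:

```python
import numpy as np

def monte_carlo_validation(X, y, n_runs=200, train_frac=0.7, seed=0):
    """Monte Carlo model validation: repeatedly draw random train/test
    splits and collect a distribution of the correct classification
    rate, so its uncertainty can be estimated. A nearest-class-mean
    classifier stands in for PLS-DA here."""
    rng = np.random.default_rng(seed)
    rates = []
    for _ in range(n_runs):
        idx = rng.permutation(len(y))
        n_train = int(train_frac * len(y))
        tr, te = idx[:n_train], idx[n_train:]
        means = {c: X[tr][y[tr] == c].mean(axis=0) for c in np.unique(y[tr])}
        pred = [min(means, key=lambda c: np.linalg.norm(x - means[c]))
                for x in X[te]]
        rates.append(np.mean(np.array(pred) == y[te]))
    return np.array(rates)  # distribution of the figure of merit

# Synthetic two-class "impurity profiles" (e.g. authentic vs counterfeit)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(2, 1, (50, 5))])
y = np.repeat([0, 1], 50)
rates = monte_carlo_validation(X, y)
print(rates.mean(), rates.std())  # mean rate and its uncertainty estimate
```

The same loop, run per model complexity, yields the distributions of sensitivity, specificity, and related figures of merit described in the abstract.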
Energy Technology Data Exchange (ETDEWEB)
Marcus, Ryan C. [Los Alamos National Laboratory
2012-07-25
MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.
Salmon, Stefanie J.; Adriaanse, Marieke A.; De Vet, Emely; Fennis, Bob M.; De Ridder, Denise T. D.
2014-01-01
Self-control relies on a limited resource that can get depleted, a phenomenon that has been labeled ego-depletion. We argue that individuals may differ in their sensitivity to depleting tasks, and that consequently some people deplete their self-control resource at a faster rate than others. In three ...
Monte Carlo methods for electromagnetics
Sadiku, Matthew NO
2009-01-01
Until now, novices had to painstakingly dig through the literature to discover how to use Monte Carlo techniques for solving electromagnetic problems. Written by one of the foremost researchers in the field, Monte Carlo Methods for Electromagnetics provides a solid understanding of these methods and their applications in electromagnetic computation. Including much of his own work, the author brings together essential information from several different publications. Using a simple, clear writing style, the author begins with a historical background and review of electromagnetic theory. After addressing probability and statistics, he introduces the finite difference method as well as the fixed and floating random walk Monte Carlo methods. The text then applies the Exodus method to Laplace's and Poisson's equations and presents Monte Carlo techniques for handling Neumann problems. It also deals with whole field computation using the Markov chain, applies Monte Carlo methods to time-varying diffusion problems, and ...
Fortrie, R.; Todorova, T. K.; Ganduglia-Pirovano, M. V.; Sauer, J.
2009-01-01
Periodic DFT calculations on VOx/κ-Al2O3 (001) surfaces are used for parametrizing Monte Carlo simulations performed on a mesoscopic scale surface sample. This procedure gives access to new structural and chemical information. In this work, we focus on the reducibility of the surface and on the determination of the fraction of vanadium sites that are oxidized under catalytic conditions. A vanadium coverage range is characterized for which the catalyst is thermodynamically stable and especially reactive in oxidation reactions. The partial reduction of the surface under catalytic conditions is also studied.
Glaser, Adam K.; Kanick, Stephen C.; Zhang, Rongxiao; Arce, Pedro; Pogue, Brian W.
2013-01-01
We describe a tissue optics plug-in that interfaces with the GEANT4/GAMOS Monte Carlo (MC) architecture, providing a means of simulating radiation-induced light transport in biological media for the first time. Specifically, we focus on the simulation of light transport due to the Čerenkov effect (light emission from charged particle’s traveling faster than the local speed of light in a given medium), a phenomenon which requires accurate modeling of both the high energy particle and subsequen...
The 1988 Antarctic ozone depletion - Comparison with previous year depletions
Schoeberl, Mark R.; Stolarski, Richard S.; Krueger, Arlin J.
1989-01-01
The 1988 spring Antarctic ozone depletion was observed by TOMS to be substantially smaller than in recent years. The minimum polar total ozone values declined only 15 percent during September 1988, compared to nearly 50 percent during September 1987. At southern midlatitudes, exceptionally high total ozone values were recorded beginning in July 1988. The total integrated southern hemispheric ozone increased rapidly during the Austral spring, approaching 1980 levels during October. The high midlatitude total ozone values were associated with a substantial increase in eddy activity as indicated by the standard deviation in total ozone in the zonal band 30-60 deg S. Mechanisms through which the increased midlatitude eddy activity could disrupt the formation of the Antarctic ozone hole are briefly discussed.
Energy Technology Data Exchange (ETDEWEB)
Arribas de Paz, L. M.; Garcia Barquero, C.; Navarro Montesinos, J.; Cuerva Tejero, A.; Cruz Cruz, I.; Roque Lopez, V.; Marti Perez, I. [Ciemat. Madrid (Spain)
2000-07-01
The objective of this work is to model the wind field in the surroundings of the Spanish Antarctic Base (BAE in the following). The need for such work comes from the necessity of an energy source able to supply the energy demand at the BAE during the Antarctic winter. When the BAE is in operation (in the Antarctic summer) the energy supply comes from a diesel engine. In the Antarctic winter the base is closed, but the demand for energy is growing every year because of the increase in the number of technical and scientific instruments that remain at the BAE taking measurements. For this purpose the top of a closed hill called Pico Radio, not perturbed by nearby obstacles, has been chosen as the best site for the measurements. The measurement station is made up of a sonic anemometer and a small wind generator that supplies the energy needed to heat the sensor heads of the anemometer. This way, it will also serve as a test of the suitability of a wind generator at the newly chosen site, under those special climatic conditions. (Author) 3 refs.
Global Depletion of Groundwater Resources: Past and Future Analyses
Bierkens, M. F.; de Graaf, I. E. M.; Van Beek, L. P.; Wada, Y.
2014-12-01
Globally, about 17% of the crops are irrigated, yet irrigation accounts for 40% of the global food production. As more than 40% of irrigation water comes from groundwater, groundwater abstraction rates are large and exceed natural recharge rates in many regions of the world, thus leading to groundwater depletion. In this paper we provide an overview of recent research on global groundwater depletion. We start with presenting various estimates of global groundwater depletion, both from flux based as well as volume based methods. We also present estimates of the contribution of non-renewable groundwater to irrigation water consumption and how this contribution developed during the last 50 years. Next, using a flux based method, we provide projections of groundwater depletion for the coming century under various socio-economic and climate scenarios. As groundwater depletion contributes to sea-level rise, we also provide estimates of this contribution from the past as well as for future scenarios. Finally, we show recent results of groundwater level changes and change in river flow as a result of global groundwater abstractions as obtained from a global groundwater flow model.
Pieber, Simone; Ragossnig, Arne; Pomberger, Roland; Curtis, Alexander
2012-04-01
Mechanical processing using predominantly particle size and density as separation criteria is currently applied in the production of solid-recovered fuel or refuse-derived fuel. It does not sufficiently allow for the optimization of the quality of heterogeneous solid waste for subsequent energy recovery. Material-specific processing, in contrast, allows the separation criterion to be linked to specific chemical constituents. Therefore, the technical applicability of material-specific sorting of heterogeneous waste, in order to optimize its routing options, was evaluated. Two sorting steps were tested on a pilot and a large scale. Near infrared multiplexed sensor-based sorting devices were used (1) to reduce the chlorine (Cl) respectively pollutant content, in order to broaden the utilization options of SRF in industrial co-incineration, and (2) to increase the biogenic carbon (C(bio)) content, which is highly relevant in the light of the EU emission trading scheme on CO₂. It was found that the technology is generally applicable for the heterogeneous waste fractions looked at, if the sensor systems are appropriately adjusted for the sorting task. The first sorting step allowed for the removal of up to 40% of the Cl freight by separating only 3 to 5% of the material mass. Very low Cl concentrations were achieved in the output stream to be used as solid-recovered fuel stream and additionally, the cadmium (Cd) and lead (Pb) concentration was decreased. A two- to four-fold enriched C(bio) content was achieved by the second sorting step. Due to lower yields in the large-scale test further challenges need to be addressed. PMID:22363024
Molecular beam depletion: a new approach
Dorado, Manuel
2014-01-01
During the last years some interesting experimental results have been reported for experiments in N2O, NO, the NO dimer, H2, toluene and the BaFCH3 cluster. The main result consists of the observation of molecular beam depletion when the molecules of a pulsed beam interact with a static electric or magnetic field and an oscillating field (RF). In these cases, and as the main difference, only two fields, those which make up the resonant unit, are used, instead of the four fields of the original technique developed by I. I. Rabi and others; that is, without using the inhomogeneous magnetic fields. The depletion explanation of I. I. Rabi and others is based on the interaction between the molecular electric or magnetic dipole moment and the inhomogeneous fields. Obviously, the change in the molecular trajectories observed in these new experiments has to be explained without considering the force provided by the field gradient, because it happens without inhomogeneous fields. In this paper a theoreti...
Active volume studies with depleted and enriched BEGe detectors
Energy Technology Data Exchange (ETDEWEB)
Sturm, Katharina von [Eberhard Karls Universitaet Tuebingen (Germany); Universita degli Studi di Padova, Padua (Italy); Collaboration: GERDA-Collaboration
2013-07-01
The Gerda experiment is currently taking data for the search of the 0νββ decay in {sup 76}Ge. In 2013, 30 newly manufactured Broad Energy Germanium (BEGe) diodes will be deployed which will double the active mass within Gerda. These detectors were fabricated from high-purity germanium enriched in {sup 76}Ge and tested in the HADES underground laboratory, owned by SCK.CEN, in Mol, Belgium. As the BEGes are source and detector at the same time, one crucial parameter is their active volume which directly enters into the evaluation of the half-life. This talk illustrates the dead layer and active volume determination of prototype detectors from depleted germanium as well as the newly produced detectors from enriched material, using gamma spectroscopy methods and comparing experimental results to Monte-Carlo simulations. Recent measurements and their results are presented, and systematic effects are discussed.
Energy Technology Data Exchange (ETDEWEB)
Moradmand Jalali, Hamed; Bashiri, Hadis, E-mail: hbashiri@kashanu.ac.ir; Rasa, Hossein
2015-05-01
In the present study, the mechanism of free radical production by light-reflective agents in sunscreens (TiO{sub 2}, ZnO and ZrO{sub 2}) was obtained by applying kinetic Monte Carlo simulation. The values of the rate constants for each step of the suggested mechanism have been obtained by simulation. The effect of the initial concentration of mineral oxides and uric acid on the rate of uric acid photo-oxidation by irradiation of some sun care agents has been studied. The kinetic Monte Carlo simulation results agree qualitatively with the existing experimental data for the production of free radicals by sun care agents. - Highlights: • The mechanism and kinetics of uric acid photo-oxidation by irradiation of sun care agents has been obtained by simulation. • The mechanism has been used for free radical production of TiO{sub 2} (rutile and anatase), ZnO and ZrO{sub 2}. • The ratios of photo-activity of ZnO to anatase, rutile and ZrO{sub 2} have been obtained. • By doubling the initial concentrations of mineral oxide, the rate of reaction was doubled. • The optimum ratio of initial concentration of mineral oxides to uric acid has been obtained.
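The kinetic Monte Carlo machinery behind such a mechanism study is the Gillespie algorithm; a minimal sketch, with illustrative species counts and rate constants rather than the fitted values from the study:

```python
import numpy as np

def gillespie_photooxidation(n_uric=1000, k_gen=50.0, k_ox=0.01,
                             t_end=5.0, seed=0):
    """Minimal kinetic Monte Carlo (Gillespie) sketch of a two-step
    photo-oxidation scheme: photo-generation of radicals at constant
    rate k_gen, then radical + uric acid -> products with rate constant
    k_ox. All counts and rate constants are illustrative placeholders."""
    rng = np.random.default_rng(seed)
    t, n_rad = 0.0, 0
    while t < t_end:
        a1 = k_gen                      # radical-generation propensity
        a2 = k_ox * n_rad * n_uric      # oxidation propensity
        a0 = a1 + a2
        t += rng.exponential(1.0 / a0)  # exponential waiting time
        if rng.random() < a1 / a0:
            n_rad += 1                  # a radical is produced
        else:
            n_rad -= 1                  # radical consumed oxidizing uric acid
            n_uric -= 1
        if n_uric == 0:
            break
    return n_uric

remaining = gillespie_photooxidation()
print(remaining)  # uric acid left after the simulated irradiation time
```

Doubling `k_gen` (i.e., the mineral-oxide concentration in this toy scheme) roughly doubles the consumption rate, mirroring the concentration dependence noted in the highlights.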
Depleted uranium disposal options evaluation
International Nuclear Information System (INIS)
The Department of Energy (DOE), Office of Environmental Restoration and Waste Management, has chartered a study to evaluate alternative management strategies for depleted uranium (DU) currently stored throughout the DOE complex. Historically, DU has been maintained as a strategic resource because of uses for DU metal and potential uses for further enrichment or for uranium oxide as breeder reactor blanket fuel. This study has focused on evaluating the disposal options for DU if it were considered a waste. This report is in no way declaring these DU reserves a ''waste,'' but is intended to provide baseline data for comparison with other management options for use of DU. Topics considered in this report include: retrievable disposal; permanent disposal; health hazards; radiation toxicity and chemical toxicity.
Depleted Argon from Underground Sources
International Nuclear Information System (INIS)
Argon is a strong scintillator and an ideal target for Dark Matter detection; however 39Ar contamination in atmospheric argon from cosmic ray interactions limits the size of liquid argon dark matter detectors due to pile-up. Argon from deep underground is depleted in 39Ar due to the cosmic ray shielding of the earth. In Cortez, Colorado, a CO2 well has been discovered to contain approximately 600 ppm of argon as a contamination in the CO2. We first concentrate the argon locally to 3% in an Ar, N2, and He mixture, from the CO2 through chromatographic gas separation, and then the N2 and He will be removed by continuous distillation to purify the argon. We have collected 26 kg of argon from the CO2 facility and a cryogenic distillation column is under construction at Fermilab to further purify the argon.
Parallelizing Monte Carlo with PMC
Energy Technology Data Exchange (ETDEWEB)
Rathkopf, J.A.; Jones, T.R.; Nessett, D.M.; Stanberry, L.C.
1994-11-01
PMC (Parallel Monte Carlo) is a system of generic interface routines that allows easy porting of Monte Carlo packages of large-scale physics simulation codes to Massively Parallel Processor (MPP) computers. By loading various versions of PMC, simulation code developers can configure their codes to run in several modes: serial, Monte Carlo runs on the same processor as the rest of the code; parallel, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on other MPP processor(s); distributed, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on a different machine. This multi-mode approach allows maintenance of a single simulation code source regardless of the target machine. PMC handles passing of messages between nodes on the MPP, passing of messages between a different machine and the MPP, distributing work between nodes, and providing independent, reproducible sequences of random numbers. Several production codes have been parallelized under the PMC system. Excellent parallel efficiency in both the distributed and parallel modes results if sufficient workload is available per processor. Experiences with a Monte Carlo photonics demonstration code and a Monte Carlo neutronics package are described.
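The independent, reproducible per-node random sequences that PMC provides can be illustrated with modern seed-spawning; this Python sketch is only an analogy to PMC's C/Fortran interface, not its API:

```python
import numpy as np

def mc_task(seed_seq, n=100_000):
    """One Monte Carlo work unit with its own independent, reproducible
    random stream, mirroring PMC's per-node random sequences. The task
    itself (a pi estimate by rejection) is just a stand-in workload."""
    rng = np.random.default_rng(seed_seq)
    pts = rng.random((n, 2))
    return 4.0 * np.mean(pts[:, 0] ** 2 + pts[:, 1] ** 2 <= 1.0)

# Spawn statistically independent child streams from one master seed;
# the serial map below could equally be a multiprocessing Pool.map,
# giving the serial/parallel/distributed flexibility PMC describes.
streams = np.random.SeedSequence(2024).spawn(4)
estimates = [mc_task(s) for s in streams]
print(np.mean(estimates))  # close to pi, bit-reproducible per stream
```

Because each stream is derived deterministically from the master seed, results are reproducible regardless of how the tasks are distributed across processors.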
Depleted uranium. Nuclear related problems
International Nuclear Information System (INIS)
Depleted uranium (DU) found military application in the Gulf War, in Bosnia and in Yugoslavia (Kosovo). In the military sense it was very efficient. But the fact that some parts of that ammunition are manufactured from depleted uranium, a low-level radioactive waste, raises other aspects of this application: radiological, ecological, legal, ethical and psychological. The subject of this paper is just the physical aspect. There are several problems concerning this aspect: the production of DU, the total amount of DU in the world, the 235U/238U ratio, the radioactivity of DU, measurements, and the presence of other radionuclides such as plutonium. DU is a by-product of nuclear technology and represents low-level nuclear waste; therefore it should be stored. The total amount of DU in the world is about one million tons, with an annual increase of 30 000 t. The content of 235U in DU can vary in the range 0.16-0.3%. The total radioactivity of DU is due to 7 radionuclides and amounts to 39.42 Bq/mg, including alpha, beta and gamma activity. Because of the characteristics of this radioactivity it is difficult to survey the terrain except at the site of action. During the impact of DU rods four types of DU particles can be produced: whole penetrators, penetrator fragments, large aerosols (>10 μm) and small aerosols (<10 μm). Most of these particles fall locally, although some can be found several tens of kilometers away. All these problems are discussed in this paper. (author)
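The quoted 39.42 Bq/mg sums the contributions of several nuclides; the specific activity of a single nuclide follows from A = λN with λ = ln2/T½, as this worked check for 238U alone shows:

```python
import math

def specific_activity_bq_per_mg(half_life_years, molar_mass_g_per_mol):
    """Specific activity A = lambda * N for 1 mg of a pure nuclide,
    with lambda = ln2 / T_half. A worked check on the order of the
    numbers quoted for DU (whose 39.42 Bq/mg total sums 7 nuclides)."""
    avogadro = 6.02214076e23
    seconds_per_year = 3.1557e7          # Julian year
    lam = math.log(2) / (half_life_years * seconds_per_year)
    atoms_per_mg = 1e-3 / molar_mass_g_per_mol * avogadro
    return lam * atoms_per_mg

# U-238 alone: T1/2 = 4.468e9 yr, M = 238 g/mol
a_u238 = specific_activity_bq_per_mg(4.468e9, 238.0)
print(round(a_u238, 1))  # about 12.4 Bq/mg
```

The remaining activity in the 39.42 Bq/mg figure comes from 235U, 234U and the short-lived daughters that grow in with them.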
Institute of Scientific and Technical Information of China (English)
陈强; 赵航; 姚春德; 邹铁方
2012-01-01
To make accident reconstruction results more consistent with the actual situation of traffic accidents, the computing process was optimized. Based on the Monte Carlo simulation optimization method and random weighting, an improved accident reconstruction optimization algorithm was put forward. In the algorithm, the two-dimensional collision model and the vehicle trajectory model were used as the calculation models; the position of the collision point, the pre-collision speed and the normal coefficient of restitution were selected as optimization parameters; and the deviation of the actual post-collision vehicle trajectory was selected as the optimization objective. The algorithm proposed in this paper and the Monte Carlo optimization algorithm of PC-Crash were both applied to an example accident. The results show that the proposed algorithm is superior to the PC-Crash Monte Carlo optimization algorithm in accuracy and stability. With the improved Monte Carlo based accident reconstruction optimization algorithm, not only the optimal reconstruction result but also the probability that the reconstruction result falls in an arbitrary interval can be obtained.
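The Monte Carlo optimization idea can be sketched as a random search over the parameter bounds; a toy quadratic deviation stands in for the collision and trajectory models of the paper:

```python
import numpy as np

def mc_optimize(objective, bounds, n_samples=20_000, seed=0):
    """Monte Carlo optimization sketch: sample candidate parameter sets
    uniformly within bounds and keep the one minimizing the trajectory
    deviation. The objective here is a hypothetical stand-in for the
    collision + trajectory model evaluation."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    samples = lo + rng.random((n_samples, len(bounds))) * (hi - lo)
    scores = np.array([objective(s) for s in samples])
    return samples[np.argmin(scores)], float(scores.min())

# Toy objective: deviation minimized at speed 13 m/s, restitution 0.3
obj = lambda p: (p[0] - 13.0) ** 2 + 10.0 * (p[1] - 0.3) ** 2
best, dev = mc_optimize(obj, bounds=[(5.0, 25.0), (0.0, 1.0)])
print(best.round(1))  # close to the true optimum
```

The retained sample set also gives the empirical distribution of near-optimal parameters, which is what allows interval probabilities to be reported alongside the single best reconstruction.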
Energy Technology Data Exchange (ETDEWEB)
Rinkel, J.; Dinten, J.M.; Tabary, J
2004-07-01
The use of focused anti-scatter grids on digital radiographic systems with two-dimensional detectors produces acquisitions with a decreased scatter-to-primary ratio and thus improved contrast and resolution. Simulation software is of great interest for optimizing the grid configuration for a specific application. Classical simulators are based on a complete, detailed geometric description of the grid. They are accurate but very time consuming, since they use Monte Carlo code to simulate scatter within the high-frequency grids. We propose a new practical method which couples an analytical simulation of the grid interaction with a radiographic-system simulation program. First, a two-dimensional matrix of probabilities depending on the grid is created offline, in which the first dimension represents the angle of impact with respect to the normal to the grid lines and the second the energy of the photon. This probability matrix is then used by the Monte Carlo simulation software to provide the final scattered-flux image. To evaluate the gain in CPU time, we define the increasing factor as the increase in CPU time of the simulation with, as opposed to without, the grid. Increasing factors were calculated with the new model and with classical methods representing the grid by its CAD model as part of the object. With the new method, increasing factors are smaller by one to two orders of magnitude than with the classical approach. These results were obtained with a difference in calculated scatter of less than five percent between the new and the classical method. (authors)
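The offline probability matrix amounts to a lookup table indexed by impact angle and photon energy; a sketch with placeholder transmission values (a real table would come from a detailed simulation of the grid):

```python
import numpy as np

# Hypothetical offline-built lookup table: grid transmission probability
# versus impact angle (relative to the normal to the grid lines) and
# photon energy. These values are placeholders that merely fall off
# with angle and rise gently with energy.
angles_deg = np.linspace(0.0, 20.0, 21)
energies_kev = np.linspace(20.0, 120.0, 11)
table = (np.exp(-angles_deg[:, None] / 10.0)
         * (0.5 + 0.004 * energies_kev[None, :]))

def transmission(angle_deg, energy_kev):
    """Nearest-bin lookup used inside the Monte Carlo loop: one table
    read replaces a full geometric trace through the grid."""
    i = int(round(angle_deg))                    # 1-degree angle bins
    j = int(round((energy_kev - 20.0) / 10.0))   # 10-keV energy bins
    return table[i, j]

# A scattered photon hitting at 5 deg with 60 keV survives the grid
# with this probability; one uniform draw then decides absorption.
p = transmission(5.0, 60.0)
print(round(p, 3))
```

This is why the coupled method scales so well: the expensive grid physics is paid for once, offline, instead of per photon.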
Energy Technology Data Exchange (ETDEWEB)
Kurosu, K [Department of Radiation Oncology, Osaka University Graduate School of Medicine, Osaka (Japan); Department of Medical Physics & Engineering, Osaka University Graduate School of Medicine, Osaka (Japan); Takashina, M; Koizumi, M [Department of Medical Physics & Engineering, Osaka University Graduate School of Medicine, Osaka (Japan); Das, I; Moskvin, V [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN (United States)
2014-06-01
Purpose: Monte Carlo codes are becoming important tools for proton beam dosimetry. However, the relationships between the customizing parameters and the percentage depth dose (PDD) computed by the GATE and PHITS codes have not been reported. These relationships are studied here for PDD and proton range, with comparison to the FLUKA code and to experimental data. Methods: The beam delivery system of the Indiana University Health Proton Therapy Center was modeled for the uniform scanning beam in FLUKA and transferred identically into GATE and PHITS. This computational model was built from the blueprint and validated with the commissioning data. The three parameters evaluated are the maximum step size, the cut-off energy, and the physics and transport model. The dependence of the PDDs on the customizing parameters was compared with the published results of previous studies. Results: The optimal parameters for the simulation of the whole beam delivery system were defined by referring to the calculation results obtained with each parameter. Although the PDDs from FLUKA and the experimental data show good agreement, those of GATE and PHITS obtained with our optimal parameters show a minor discrepancy. The measured proton range R90 was 269.37 mm, compared to calculated ranges of 269.63 mm, 268.96 mm, and 270.85 mm with FLUKA, GATE and PHITS, respectively. Conclusion: We evaluated the dependence of the PDDs obtained with the GATE and PHITS general-purpose Monte Carlo codes on the customizing parameters by using the whole computational model of the treatment nozzle. The optimal parameters for the simulation were then defined by referring to the calculation results. The physics model, particle transport mechanics and the different geometry-based descriptions need accurate customization in the three simulation codes to agree with experimental data for artifact-free Monte Carlo simulation. This study was supported by Grants-in-Aid for Cancer Research (H22-3rd Term Cancer Control-General-043) from the Ministry of Health
The Lund Monte Carlo for jet fragmentation
International Nuclear Information System (INIS)
We present a Monte Carlo program based on the Lund model for jet fragmentation. Quark, gluon, diquark and hadron jets are considered. Special emphasis is put on the fragmentation of colour-singlet jet systems, for which energy, momentum and flavour are conserved explicitly. The model for decays of unstable particles, in particular the weak decay of heavy hadrons, is described. The central part of the paper is a detailed description of how to use the FORTRAN 77 program. (Author)
Monte Carlo Simulations of Star Clusters
Giersz, M.
2000-01-01
A revision of Stodółkiewicz's Monte Carlo code is used to simulate the evolution of large star clusters. A survey of the evolution of multi-mass N-body systems influenced by the tidal field of a parent galaxy and by stellar evolution is discussed. For the first time, a "star-by-star" simulation of the evolution of a 1,000,000-body star cluster is presented.
Lookahead Strategies for Sequential Monte Carlo
Lin, Ming; Chen, Rong; Liu, Jun
2013-01-01
Based on the principles of importance sampling and resampling, sequential Monte Carlo (SMC) encompasses a large set of powerful techniques dealing with complex stochastic dynamic systems. Many of these systems possess strong memory, with which future information can help sharpen the inference about the current state. By providing theoretical justification of several existing algorithms and introducing several new ones, we study systematically how to construct efficient SMC algorithms to take ...
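A minimal bootstrap SMC (particle) filter, sketched below for a toy linear-Gaussian state-space model, shows the propagate/weight/resample cycle that lookahead strategies build on. All model parameters here are illustrative assumptions, not from the paper.

```python
import math
import random

def smc_filter(observations, n_particles=2000, seed=0):
    """Bootstrap sequential Monte Carlo for the toy state-space model
    x_t = 0.9 x_{t-1} + N(0, 0.5), y_t = x_t + N(0, 0.5).
    Returns the filtered posterior mean of x_t at each step."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    means = []
    for y in observations:
        # propagate through the state dynamics
        particles = [0.9 * x + rng.gauss(0.0, 0.5) for x in particles]
        # weight by the Gaussian observation likelihood
        weights = [math.exp(-0.5 * ((y - x) / 0.5) ** 2) for x in particles]
        total = sum(weights)
        means.append(sum(w * x for w, x in zip(weights, particles)) / total)
        # multinomial resampling keeps the particle set balanced
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return means

obs = [1.0, 1.1, 0.9, 1.0, 1.05]
means = smc_filter(obs)
```

A lookahead variant would weight particles using future observations as well; the plain filter above uses only the current one.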
Monte Carlo methods for preference learning
DEFF Research Database (Denmark)
Viappiani, P.
2012-01-01
Utility elicitation is an important component of many applications, such as decision support systems and recommender systems. Such systems query the users about their preferences and give recommendations based on the system's belief about the utility function. Critical to these applications is the acquisition of a prior distribution over the utility parameters and the possibility of real-time Bayesian inference. In this paper we consider Monte Carlo methods for these problems.
Energy Technology Data Exchange (ETDEWEB)
Nomoto, S. [Japan Oil Development Co. Ltd., Tokyo (Japan); Fujita, K. [The University of Tokyo, Tokyo (Japan)
1997-05-01
Data for large giant oil fields with recoverable reserves of one billion barrels or more were compiled to construct a new oil-field depletion model and estimate production in each field. Analysis of events observed in large giant oil fields made clear the need to correct the conventional oil depletion model. The newly proposed model changes the definitions of the depletion period, the depletion rate, build-up production (the period during which the production rate increases), and plateau production (the period during which production remains constant). Two hundred and twenty-five large giant oil fields were classified into those in a depletion period, an initial development phase, and a plateau period. Trial calculations with the new model yielded the following findings: under an assumed demand growth rate of 1.5%, the oil-field groups in the initial development phase will reach plateau production in 2002, while fields in the depletion period will continue to decline, so growth in overall production will slow after that year. Because the oil-field groups in the plateau period will shift into decline in 2014, total production will then decrease. The year 2014 is about ten years later than the estimate recently given by Campbell. Undiscovered resources are outside the scope of these discussions. 11 refs., 9 figs., 2 tabs.
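The build-up/plateau/decline structure described above can be sketched as a toy production profile; the specific rates, the 10%/yr decline, and the decline threshold are illustrative assumptions, not the paper's fitted values.

```python
def production_profile(reserves, buildup_years, plateau_rate, decline_frac):
    """Annual production: linear build-up to a plateau, then ~10%/yr
    exponential decline once remaining reserves fall below decline_frac
    of the initial total."""
    profile, produced, rate = [], 0.0, 0.0
    while produced < 0.999 * reserves and len(profile) < 500:
        remaining = reserves - produced
        if remaining > decline_frac * reserves:
            # build-up phase, capped at the plateau rate
            rate = min(plateau_rate, rate + plateau_rate / buildup_years)
        else:
            rate *= 0.9                   # decline phase
        rate = min(rate, remaining)       # never produce more than is left
        profile.append(rate)
        produced += rate
    return profile

profile = production_profile(reserves=1000.0, buildup_years=5,
                             plateau_rate=50.0, decline_frac=0.3)
```

Summing profiles of fields in different phases, as the paper does for its 225 fields, then gives the aggregate production outlook.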
Fortrie, Rémy; Todorova, Tanya K.; Ganduglia-Pirovano, M. Verónica; Sauer, Joachim
2008-12-01
Periodic density functional theory (DFT) calculations concerning VOx/κ-Al2O3(001) surfaces are used to parametrize Monte Carlo simulations performed on a mesoscopic-scale surface sample (41.95×48.80 nm2). New structural and chemical information is then obtained that is not accessible from DFT calculations: segregation, short- and long-range ordering, and the effect of temperature on the reducibility. The reducibility of the surface is investigated to locate chemical-potential regions where the catalyst is especially reactive for oxidation reactions. A comparison with the V2O5(001) surface at catalytic conditions (800 K, 1 bar) is performed. The reducibility exhibits an unexpectedly strong temperature dependence.
Qin, Mingpu; Zhang, Shiwei
2016-01-01
The vast majority of quantum Monte Carlo (QMC) calculations in interacting fermion systems require a constraint to control the sign problem. The constraint involves an input trial wave function which restricts the random walks. We introduce a systematically improvable constraint which relies on the fundamental role of the density or one-body density matrix. An independent-particle calculation is coupled to an auxiliary-field QMC calculation. The independent-particle solution is used as the constraint in QMC, which then produces the input density or density matrix for the next iteration. The constraint is optimized by the self-consistency between the many-body and independent-particle calculations. The approach is demonstrated in the two-dimensional Hubbard model by accurately determining the spin densities when collective modes separated by tiny energy scales are present in the magnetic and charge correlations. Our approach also provides an ab initio way to predict effective "U" parameters for independent-par...
Gereben, Orsolya; Petkov, Valeri
2013-11-01
A new method to fit experimental diffraction data with non-periodic structure models for spherical particles was implemented in the reverse Monte Carlo simulation code. The method was tested on x-ray diffraction data for ruthenium (Ru) nanoparticles approximately 5.6 nm in diameter. It was found that the atomic ordering in the ruthenium nanoparticles is quite distorted, barely resembling the hexagonal structure of bulk Ru. The average coordination number decreased from the bulk value of 12 to 11.25. A similar lack of structural order has been observed with other nanoparticles (e.g. Petkov et al 2008 J. Phys. Chem. C 112 8907-11), indicating that atomic disorder is a widespread feature of nanoparticles less than 10 nm in diameter.
Yang, Ye; Soyemi, Olusola O.; Landry, Michelle R.; Soller, Babs R.
2005-01-01
The influence of fat thickness on the diffuse reflectance spectra of muscle in the near infrared (NIR) region is studied by Monte Carlo simulations of a two-layer structure and with phantom experiments. A polynomial relationship was established between the fat thickness and the detected diffuse reflectance. The influence of a range of optical coefficients (absorption and reduced scattering) for fat and muscle over the known range of human physiological values was also investigated. Subject-to-subject variation in the fat optical coefficients and thickness can be ignored if the fat thickness is less than 5 mm. A method was proposed to correct the fat thickness influence. ©2005 Optical Society of America.
Tack, Pieter; Cotte, Marine; Bauters, Stephen; Brun, Emmanuel; Banerjee, Dipanjan; Bras, Wim; Ferrero, Claudio; Delattre, Daniel; Mocella, Vito; Vincze, Laszlo
2016-02-01
The writing in carbonized Herculaneum scrolls, covered and preserved by the pyroclastic events of the Vesuvius in 79 AD, was recently revealed using X-ray phase-contrast tomography, without the need of unrolling the sensitive scrolls. Unfortunately, some of the text is difficult to read due to the interference of the papyrus fibers crossing the written text vertically and horizontally. Recently, lead was found as an elemental constituent in the writing, rendering the text more clearly readable when monitoring the lead X-ray fluorescence signal. Here, several hypotheses are postulated for the origin and state of lead in the papyrus writing. Multi-scale X-ray fluorescence micro-imaging, Monte Carlo quantification and X-ray absorption microspectroscopy experiments are used to provide additional information on the ink composition, in an attempt to determine the origin of the lead in the Herculaneum scrolls and validate the postulated hypotheses.
Directory of Open Access Journals (Sweden)
Hai-Feng Zhang
2016-08-01
In this paper, the properties of photonic band gaps (PBGs) in two types of two-dimensional plasma-dielectric photonic crystals (2D PPCs) under a transverse-magnetic (TM) wave are theoretically investigated by a modified plane wave expansion (PWE) method into which a Monte Carlo method is introduced. The proposed PWE method can calculate the band structures of 2D PPCs with arbitrarily shaped fillers and any lattice. The efficiency and convergence of the method are discussed with a numerical example. The configuration of the 2D PPCs is a square lattice with a fractal Sierpinski gasket structure whose constituents are homogeneous and isotropic. Type-1 PPCs are filled with dielectric cylinders in a plasma background, while the complementary structure, called type-2 PPCs, has plasma cylinders as the fillers in a dielectric background. The calculated results reveal that sufficient accuracy and good convergence can be obtained if the number of random sampling points of the Monte Carlo method is large enough. The band structures of the two types of PPCs with different fractal orders of the Sierpinski gasket structure are also computed for comparison. It is demonstrated that PBGs in the higher frequency region are more easily produced in type-1 PPCs than in type-2 PPCs. The Sierpinski gasket structure introduced in the 2D PPCs leads to a larger cutoff frequency and enhances and induces more PBGs in the high-frequency region. The effects of the configurational parameters of the two types of PPCs on the PBGs are also investigated in detail. The results show that the PBGs of the PPCs can be easily manipulated by tuning those parameters. The present type-1 PPCs are more suitable for designing tunable compact devices.
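The role of Monte Carlo sampling in a modified PWE method is to evaluate structure integrals over arbitrarily shaped fillers. The sketch below illustrates the sampling idea on the simplest such quantity, the filling fraction of a Sierpinski gasket in a unit cell; the right-triangle geometry and normalization are illustrative assumptions.

```python
import random

def in_gasket(x, y, order):
    """True if (x, y) lies in the Sierpinski gasket of the given order built
    on the right triangle with vertices (0,0), (1,0), (0,1)."""
    if x < 0 or y < 0 or x + y > 1:
        return False
    for _ in range(order):
        if x >= 0.5:                      # lower-right sub-triangle
            x, y = 2 * x - 1, 2 * y
        elif y >= 0.5:                    # upper sub-triangle
            x, y = 2 * x, 2 * y - 1
        elif x + y <= 0.5:                # lower-left sub-triangle
            x, y = 2 * x, 2 * y
        else:                             # central removed triangle
            return False
    return True

def fill_fraction(order, n=200000, seed=2):
    """Monte Carlo estimate of the filler fraction of the unit cell."""
    rng = random.Random(seed)
    hits = sum(in_gasket(rng.random(), rng.random(), order) for _ in range(n))
    return hits / n

f2 = fill_fraction(2)   # exact value is 0.5 * (3/4)**2 = 0.28125
```

The same random points, weighted by plane-wave phase factors, would estimate the Fourier coefficients the PWE band-structure calculation needs.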
Ojala, Jarkko; Kapanen, Mika; Hyödynmaa, Simo
2016-06-01
New version 13.6.23 of the electron Monte Carlo (eMC) algorithm in the Varian Eclipse™ treatment planning system has a model for the 4 MeV electron beam and some general improvements for dose calculation. This study provides the first overall accuracy assessment of this algorithm against full Monte Carlo (MC) simulations for electron beams from 4 MeV to 16 MeV, with most emphasis on the lower energy range. Beams in a homogeneous water phantom and clinical treatment plans were investigated, including measurements in the water phantom. Two different material sets were used with full MC: (1) the one applied in the eMC algorithm and (2) the one included in Eclipse™ for other algorithms. The results of clinical treatment plans were also compared to those of the older eMC version 11.0.31. In the water phantom, the dose differences against full MC were mostly less than 3%, with distance-to-agreement (DTA) values within 2 mm. Larger discrepancies were obtained in build-up regions, at depths near the maximum electron ranges, and with small apertures. For the clinical treatment plans, the overall dose differences were mostly within 3% or 2 mm with the first material set. Larger differences were observed for a large 4 MeV beam entering a curved patient surface with extended SSD and also in regions of large dose gradients. Still, the DTA values were within 3 mm. The discrepancies between eMC and full MC were generally larger for the second material set. Version 11.0.31 always performed worse than version 13.6.23. PMID:27189311
Clodronate treatment significantly depletes macrophages in chickens
Kameka, Amber M.; Haddadi, Siamak; Jamaldeen, Fathima Jesreen; Moinul, Prima; He, Xiao T.; Nawazdeen, Fathima Hafsa P.; Bonfield, Stephan; Sharif, Shayan; van Rooijen, Nico; Abdul-Careem, Mohamed Faizal
2014-01-01
Macrophages function as phagocytes and antigen-presenting cells in the body. As has been demonstrated in mammals, administration of clodronate [dichloromethylene bisphosphonate (Cl2MBP)] encapsulated liposomes results in depletion of macrophages. Although this compound has been used in chickens, its effectiveness in depleting macrophages has yet to be fully determined. Here, we show that a single administration of clodronate liposomes to chickens results in a significant depletion of macropha...
Repulsive depletion interactions in colloid polymer mixtures
Rudhardt, Daniel; Bechinger, Clemens; Leiderer, Paul
1999-01-01
Depletion forces in colloidal systems are known to be entirely attractive, as long as the background of macromolecules is small enough that an ideal gas approach is valid. At higher densities, however, structural correlation effects of the macromolecules which lead to additional repulsive parts in the depletion interaction, have to be taken into account. We have measured the depletion interaction between a single polystyrene sphere and a wall in the presence of non-ionic polymer coils. Althou...
DEPLETION POTENTIAL OF COLLOIDS:A DIRECT SIMULATION STUDY
Institute of Scientific and Technical Information of China (English)
LI Wei-hua
2001-01-01
[1] Asakura S, Oosawa F. Surface tension of high-polymer solutions [J]. J Chem Phys, 1954, 22: 1255.
[2] Ye X, Narayanan T, Tong P, et al. Depletion interactions in colloid-polymer mixtures [J]. Phys Rev E, 1996, 54: 6500-6510.
[3] Kaplan P D, Faucheux L P, Libchaber A J. Direct observation of the entropic potential in a binary suspension [J]. Phys Rev Lett, 1994, 73: 2793-2796.
[4] Ohshima Y N, Sakagami H, Okumoto K, et al. Direct measurement of infinitesimal depletion force in a colloid-polymer mixture by laser radiation pressure [J]. Phys Rev Lett, 1997, 78: 3963-3966.
[5] Dinsmore A D, Yodh A G, Pine D J. Entropic control of particle motion using passive surface microstructures [J]. Nature (London), 1996, 383: 239-242.
[6] Dinsmore A D, Wong D T, Nelson P, et al. Hard spheres in vesicles: curvature-induced forces and particle-induced curvature [J]. Phys Rev Lett, 1998, 80: 409-412.
[7] Götzelmann B, Evans R, Dietrich S. Depletion forces in fluids [J]. Phys Rev E, 1998, 57: 6785-6800.
[8] Mao Y, Cates M E, Lekkerkerker H N W. Depletion force in colloidal systems [J]. Physica A, 1995, 222: 10-24.
[9] Biben T, Bladon P, Frenkel D. Depletion effects in binary hard-sphere fluids [J]. J Phys: Condens Matter, 1996, 8: 10799-10821.
[10] Dickman R, Attard P, Simonian V. Entropic forces in binary hard-sphere mixtures: theory and simulation [J]. J Chem Phys, 1997, 107: 205-213.
[11] Bennett C H. Efficient estimation of free energy differences from Monte Carlo data [J]. J Comput Phys, 1976, 22: 245-268; see also Allen M P, Tildesley D J. Computer Simulation of Liquids (Chap. 7) [M]. Oxford: Clarendon Press, 1994.
TARC: Carlo Rubbia's Energy Amplifier
Laurent Guiraud
1997-01-01
Transmutation by Adiabatic Resonance Crossing (TARC) is Carlo Rubbia's energy amplifier. This CERN experiment demonstrated that long-lived fission fragments, such as 99-Tc, can be efficiently destroyed.
50 CFR 216.15 - Depleted species.
2010-10-01
... Assistant Administrator as depleted under the provisions of the MMPA. (a) Hawaiian monk seal (Monachus schauinslandi). (b) Bowhead whale (Balaena mysticetus). (c) North Pacific fur seal (Callorhinus...
Ego depletion increases risk-taking.
Fischer, Peter; Kastenmüller, Andreas; Asal, Kathrin
2012-01-01
We investigated how the availability of self-control resources affects risk-taking inclinations and behaviors. We proposed that risk-taking often occurs from suboptimal decision processes and heuristic information processing (e.g., when a smoker suppresses or neglects information about the health risks of smoking). Research revealed that depleted self-regulation resources are associated with reduced intellectual performance and reduced abilities to regulate spontaneous and automatic responses (e.g., control aggressive responses in the face of frustration). The present studies transferred these ideas to the area of risk-taking. We propose that risk-taking is increased when individuals find themselves in a state of reduced cognitive self-control resources (ego-depletion). Four studies supported these ideas. In Study 1, ego-depleted participants reported higher levels of sensation seeking than non-depleted participants. In Study 2, ego-depleted participants showed higher levels of risk-tolerance in critical road traffic situations than non-depleted participants. In Study 3, we ruled out two alternative explanations for these results: neither cognitive load nor feelings of anger mediated the effect of ego-depletion on risk-taking. Finally, Study 4 clarified the underlying psychological process: ego-depleted participants feel more cognitively exhausted than non-depleted participants and thus are more willing to take risks. Discussion focuses on the theoretical and practical implications of these findings. PMID:22931000
Characterization of a Depleted Monolithic Active Pixel Sensor (DMAPS) prototype
Obermann, T.; Havranek, M.; Hemperek, T.; Hügging, F.; Kishishita, T.; Krüger, H.; Marinas, C.; Wermes, N.
2015-03-01
New monolithic pixel detectors integrating CMOS electronics and sensor on the same silicon substrate are currently explored for particle tracking in future HEP experiments, most notably at the LHC. The innovative concept of Depleted Monolithic Active Pixel Sensors (DMAPS) is based on high-resistivity silicon bulk material enabling full substrate depletion and the application of an electrical drift field for fast charge collection, while retaining full CMOS capability for the electronics. The technology (150 nm) used offers quadruple wells and allows to implement the pixel electronics with independently isolated N- and PMOS transistors. Results of initial studies on the charge collection and sensor performance are presented.
International Nuclear Information System (INIS)
Improving the prediction of radiation parameters and the reliability of fuel-behaviour predictions under different irradiation modes is particularly relevant for new fuel compositions, including recycled nuclear fuel. For fast reactors there is a strong dependence of nuclide accumulation on the nuclear data libraries. The effect of fission yield libraries on irradiated fuel is studied in MONTEBURNS-MCNP5-ORIGEN2 calculations of sodium fast reactors. Fission yield libraries are generated for sodium fast reactors with MOX fuel, using ENDF/B-VII.0, JEFF-3.1, the original FY-Koldobsky library, and GEFY 3.3 as sources. The transport libraries are generated from ENDF/B-VII.0 and JEFF-3.1. Analysis of irradiated MOX fuel using different fission yield libraries demonstrates a considerable spread in the concentrations of fission products: the discrepancies are ∼25% for inert gases, up to a factor of 5 for stable and long-lived nuclides, and up to 10 orders of magnitude for short-lived nuclides. (authors)
Measuring the reliability of MCMC inference with bidirectional Monte Carlo
Grosse, Roger B.; Ancha, Siddharth; Roy, Daniel M.
2016-01-01
Markov chain Monte Carlo (MCMC) is one of the main workhorses of probabilistic inference, but it is notoriously hard to measure the quality of approximate posterior samples. This challenge is particularly salient in black box inference methods, which can hide details and obscure inference failures. In this work, we extend the recently introduced bidirectional Monte Carlo technique to evaluate MCMC-based posterior inference algorithms. By running annealed importance sampling (AIS) chains both ...
Confidence and efficiency scaling in Variational Quantum Monte Carlo calculations
Delyon, François; Holzmann, Markus
2016-01-01
Based on the central limit theorem, we discuss the problem of evaluation of the statistical error of Monte Carlo calculations using a time discretized diffusion process. We present a robust and practical method to determine the effective variance of general observables and show how to verify the equilibrium hypothesis by the Kolmogorov-Smirnov test. We then derive scaling laws of the efficiency illustrated by Variational Monte Carlo calculations on the two dimensional electron gas.
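A standard way to estimate the effective variance of correlated Monte Carlo samples is blocking: average the chain over blocks longer than the correlation time and treat the block means as independent. The AR(1) test chain below is an illustrative stand-in for a correlated random walk, not the paper's diffusion process.

```python
import math
import random
import statistics

def blocking_error(samples, block_size):
    """Effective standard error of the mean for correlated samples:
    average over non-overlapping blocks, then treat block means as i.i.d."""
    blocks = [statistics.fmean(samples[i:i + block_size])
              for i in range(0, len(samples) - block_size + 1, block_size)]
    return statistics.stdev(blocks) / math.sqrt(len(blocks))

# An AR(1) chain stands in for the serial correlation of a random-walk sampler
rng = random.Random(3)
x, chain = 0.0, []
for _ in range(100000):
    x = 0.9 * x + rng.gauss(0.0, 1.0)
    chain.append(x)

naive = statistics.stdev(chain) / math.sqrt(len(chain))   # ignores correlation
blocked = blocking_error(chain, block_size=200)           # accounts for it
```

The naive estimate understates the true error by roughly the square root of the integrated autocorrelation time; the blocked estimate recovers it once the block size exceeds the correlation time.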
Public Infrastructure for Monte Carlo Simulation: publicMC@BATAN
Waskita, A A; Akbar, Z; Handoko, L T; 10.1063/1.3462759
2010-01-01
The first cluster-based public computing system for Monte Carlo simulation in Indonesia is introduced. The system has been developed to enable the public to perform Monte Carlo simulations on a parallel computer through an integrated and user-friendly dynamic web interface. The beta version, called publicMC@BATAN, has been released and implemented for internal users at the National Nuclear Energy Agency (BATAN). In this paper the concept and architecture of publicMC@BATAN are presented.
Monte Carlo method for solving a parabolic problem
Directory of Open Access Journals (Sweden)
Tian Yi
2016-01-01
In this paper, we present a numerical method based on random sampling for a parabolic problem. The method combines the Crank-Nicolson method with the Monte Carlo method. In the numerical algorithm, we first discretize the governing equations by the Crank-Nicolson method, obtaining a large sparse system of linear algebraic equations, and then use the Monte Carlo method to solve that linear system. To illustrate the usefulness of this technique, we apply it to some test problems.
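A hedged sketch of the two-stage idea: one Crank-Nicolson step for the 1-D heat equation produces a sparse linear system, and a single solution component is then estimated by an Ulam-von Neumann absorption random walk, one classical Monte Carlo linear solver (the paper's exact sampling scheme may differ).

```python
import math
import random

# One Crank-Nicolson step for u_t = u_xx on [0, 1] with u = 0 at both ends:
# A u_new = b, where A = I + (r/2) K, K = tridiag(-1, 2, -1), r = dt/dx^2.
n, dx, dt = 49, 1.0 / 50, 0.0004
r = dt / dx ** 2
u_old = [math.sin(math.pi * (i + 1) * dx) for i in range(n)]

# b = (I - (r/2) K) u_old
b = []
for i in range(n):
    left = u_old[i - 1] if i > 0 else 0.0
    right = u_old[i + 1] if i < n - 1 else 0.0
    b.append(u_old[i] - 0.5 * r * (2 * u_old[i] - left - right))

# Jacobi splitting turns A x = b into x = H x + c with zero-diagonal H:
h = (0.5 * r) / (1.0 + r)          # magnitude of the off-diagonal H entries
c = [bi / (1.0 + r) for bi in b]

def mc_solve_component(i, n_walks=4000, seed=4):
    """Ulam-von Neumann absorption estimator for component i of x = H x + c:
    transition probabilities equal the (non-negative) H entries, and a walk
    ends on termination or on stepping outside the Dirichlet boundary."""
    rng = random.Random(seed + i)
    total = 0.0
    for _ in range(n_walks):
        j, score = i, c[i]
        while True:
            u01 = rng.random()
            if u01 < h:
                j -= 1
            elif u01 < 2.0 * h:
                j += 1
            else:
                break                  # terminated: weight absorbed
            if j < 0 or j >= n:
                break                  # left the domain: boundary value is 0
            score += c[j]
        total += score
    return total / n_walks

u_mid = mc_solve_component(n // 2)     # analytic value is about 0.9961
```

Estimating only selected components, as here, is where the Monte Carlo solver pays off; a full-field solve is usually cheaper deterministically.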
Monte Carlo Simulation of Optical Properties of Wake Bubbles
Institute of Scientific and Technical Information of China (English)
CAO Jing; WANG Jiang-An; JIANG Xing-Zhou; SHI Sheng-Wei
2007-01-01
Based on Mie scattering theory and the theory of multiple light scattering, the light-scattering properties of air bubbles in a wake are analysed by Monte Carlo simulation. The results show that backscattering is markedly enhanced by the presence of bubbles, especially as the bubble density increases, and that the Monte Carlo method is a feasible way to study the properties of light scattering by air bubbles.
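The qualitative conclusion, backscatter growing with bubble density, can be reproduced with a deliberately minimal multiple-scattering sketch: an isotropic phase function and a 1-D slab stand in for Mie scattering in a real wake, and all parameters are illustrative.

```python
import math
import random

def backscatter_fraction(mu_s, slab_depth=1.0, n_photons=20000, seed=6):
    """Photons enter a 1-D slab at z = 0 travelling toward +z, scatter at
    exponentially distributed intervals into isotropic directions, and are
    counted when they exit backwards through z = 0."""
    rng = random.Random(seed)
    back = 0
    for _ in range(n_photons):
        z, cos_t = 0.0, 1.0
        while True:
            z += cos_t * (-math.log(1.0 - rng.random()) / mu_s)
            if z <= 0.0:
                back += 1              # escaped backwards
                break
            if z >= slab_depth:
                break                  # transmitted
            cos_t = rng.uniform(-1.0, 1.0)   # isotropic scattering direction
    return back / n_photons

low = backscatter_fraction(mu_s=1.0)    # sparse bubbles
high = backscatter_fraction(mu_s=5.0)   # dense bubbles
```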
Depleted argon from underground sources
Energy Technology Data Exchange (ETDEWEB)
Back, H.O.; /Princeton U.; Alton, A.; /Augustana U. Coll.; Calaprice, F.; Galbiati, C.; Goretti, A.; /Princeton U.; Kendziora, C.; /Fermilab; Loer, B.; /Princeton U.; Montanari, D.; /Fermilab; Mosteiro, P.; /Princeton U.; Pordes, S.; /Fermilab
2011-09-01
Argon is a powerful scintillator and an excellent medium for detection of ionization. Its high discrimination power against minimum-ionizing tracks, in favor of selection of nuclear recoils, makes it an attractive medium for direct detection of WIMP dark matter. However, cosmogenic {sup 39}Ar contamination in atmospheric argon limits the size of liquid argon dark matter detectors due to pile-up. The cosmic-ray shielding by the earth means that argon from deep underground is depleted in {sup 39}Ar. In Cortez, Colorado, a CO{sub 2} well has been discovered to contain approximately 500 ppm of argon as a contamination in the CO{sub 2}. In order to produce argon for dark matter detectors we first concentrate the argon locally to 3-5% in an Ar, N{sub 2}, and He mixture, extracted from the CO{sub 2} through chromatographic gas separation. The N{sub 2} and He will be removed by continuous cryogenic distillation in the Cryogenic Distillation Column recently built at Fermilab. In this talk we will discuss the entire extraction and purification process, with emphasis on the recent commissioning and initial performance of the cryogenic distillation column.
Monte Carlo dose mapping on deforming anatomy
Zhong, Hualiang; Siebers, Jeffrey V.
2009-10-01
This paper proposes a Monte Carlo-based energy and mass congruent mapping (EMCM) method to calculate the dose on deforming anatomy. Unlike dose interpolation methods, EMCM separately maps each voxel's deposited energy and mass from a source image to a reference image with a displacement vector field (DVF) generated by deformable image registration (DIR). EMCM was compared with other dose mapping methods: energy-based dose interpolation (EBDI) and trilinear dose interpolation (TDI). These methods were implemented in EGSnrc/DOSXYZnrc, validated using a numerical deformable phantom, and compared for clinical CT images. On the numerical phantom with an analytically invertible deformation map, EMCM mapped the dose exactly as its analytic solution, while EBDI and TDI had average dose errors of 2.5% and 6.0%. For a lung patient's IMRT treatment plan, EBDI and TDI differed from EMCM by 1.96% and 7.3%, respectively, over the patient's entire dose region. As a 4D Monte Carlo dose calculation technique, EMCM is accurate and its speed is comparable to 3D Monte Carlo simulation. This method may serve as a valuable tool for accurate dose accumulation as well as for 4D dosimetry QA.
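The core of EMCM, mapping energy and mass separately and dividing only on the reference grid, can be shown in a toy 1-D version (whole-voxel DVF targets and no sub-voxel splitting; a purely illustrative reduction of the method):

```python
def emcm_dose(energy_src, mass_src, dvf):
    """Energy/mass congruent mapping, toy 1-D version: push each source
    voxel's deposited energy and mass to its reference voxel via the
    displacement vector field, then divide: dose = energy / mass."""
    n = len(energy_src)
    e_ref, m_ref = [0.0] * n, [0.0] * n
    for i, target in enumerate(dvf):
        e_ref[target] += energy_src[i]
        m_ref[target] += mass_src[i]
    return [e / m if m > 0.0 else 0.0 for e, m in zip(e_ref, m_ref)]

# Two source voxels compress into one reference voxel: energy and mass both
# accumulate, so the mapped dose is their mass-weighted mean.
energy = [2.0, 1.0, 3.0]    # energy deposited per source voxel (arbitrary units)
mass   = [1.0, 0.5, 1.0]    # voxel masses
dvf    = [0, 0, 2]          # source voxels 0 and 1 both map to reference voxel 0
dose = emcm_dose(energy, mass, dvf)   # -> [2.0, 0.0, 3.0]
```

Because energy and mass are transported before the division, total energy is conserved even when several source voxels compress into one reference voxel, which plain dose interpolation does not guarantee.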
Alonso, Patricia; Iriondo, José María
2014-01-01
The Germplasm Bank of Universidad Rey Juan Carlos was created in 2008 and currently holds 235 accessions and 96 species. This bank focuses on the conservation of wild-plant communities and aims to conserve ex situ a representative sample of the plant biodiversity present in a habitat, emphasizing priority ecosystems identified by the Habitats Directive. It is also used to store plant material for research and teaching purposes. The collection consists of three subcollections, two representative of typical habitats in the center of the Iberian Peninsula: high-mountain pastures (psicroxerophylous pastures) and semi-arid habitats (gypsophylic steppes), and a third representative of the genus Lupinus. The high-mountain subcollection currently holds 153 accessions (63 species), the semi-arid subcollection has 76 accessions (29 species), and the Lupinus subcollection has 6 accessions (4 species). All accessions are stored in a freezer at -18 °C in Kilner jars with silica gel. The Germplasm Bank of Universidad Rey Juan Carlos follows a quality control protocol which describes the workflow performed with seeds from seed collection to storage. All collectors are members of research groups with great experience in species identification. Herbarium specimens associated with seed accessions are preserved, and 63% of the records have been georeferenced with GPS and radio points. The dataset provides unique information concerning the location of populations of plant species that form part of the psicroxerophylous pastures and gypsophylic steppes of Central Spain as well as populations of genus Lupinus in the Iberian Peninsula. It also provides relevant information concerning mean seed weight and seed germination values under specific incubation conditions. This dataset has already been used by researchers of the Area of Biodiversity and Conservation of URJC as a source of information for the design and implementation of experimental designs in these plant communities. Since
Institute of Scientific and Technical Information of China (English)
李满仓; 王侃; 姚栋
2012-01-01
In the two-step reactor physics calculation scheme, assembly-homogenized group constants significantly affect the accuracy of core calculations. Compared with deterministic methods, continuous-energy Monte Carlo homogenization describes lattices of arbitrary geometry exactly, avoids tedious resonance self-shielding calculations, and retains more continuous-energy information, so it not only produces more accurate group constants but is also more versatile: the same code and data bank can be used for a wide range of applications. As the first step toward continuous-energy Monte Carlo assembly homogenization, this paper applies the track-length method to tally general group cross sections and group constants, proposes and uses a scattering-event method to obtain the group-to-group scattering cross sections and higher-order Legendre coefficients that cannot be tallied directly, and computes diffusion coefficients from P1 cross sections. To restore the critical state of the assembly within the core in the two-step scheme, BN theory is applied to perform a leakage correction on the homogenized group constants. The Monte Carlo homogenized group constants are numerically verified on four types of assemblies and a simplified PWR core. The verification results show that continuous-energy Monte Carlo assembly-homogenized group constants have good geometric adaptability and significantly improve the accuracy of core calculations.
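The track-length tally named above can be sketched in a few lines: every free flight of length L while the particle is in group g scores L to that group's flux, and group constants then follow as flux-weighted ratios. The two-group "always downscatter" physics below is a toy assumption, not the paper's transport model.

```python
import math
import random

def track_length_tally(sigma_t_by_group, n_histories=50000, seed=5):
    """Track-length flux estimates in an infinite homogeneous medium:
    each free flight of length L while in group g scores L to that group.
    Toy physics: every collision downscatters to the next group."""
    rng = random.Random(seed)
    flux = [0.0] * len(sigma_t_by_group)
    for _ in range(n_histories):
        for g, sigma_t in enumerate(sigma_t_by_group):   # born fast
            length = -math.log(1.0 - rng.random()) / sigma_t  # free flight
            flux[g] += length                            # track-length tally
    return [f / n_histories for f in flux]

# per-history group flux should approach the mean free path 1/sigma_t
phi = track_length_tally([0.5, 1.0])
```

A group cross section would then be estimated as the ratio of a reaction-rate tally to the flux tally accumulated in the same group.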
Oil depletion and terms of trade
Irimia-Vladu, Marina; Thompson, Henry
2007-01-01
A model of the international oil market with optimal depletion and offer curves suggests importers face a backward-bending offer curve. An oil tariff would then raise oil imports and lower the price of oil inclusive of the tariff. Simulations of price and extraction paths for the coming century provide insight into the future of oil depletion and the terms of trade.