MCOR - Monte Carlo depletion code for reference LWR calculations
Energy Technology Data Exchange (ETDEWEB)
Puente Espel, Federico, E-mail: fup104@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Tippayakul, Chanatip, E-mail: cut110@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Ivanov, Kostadin, E-mail: kni1@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Misu, Stefan, E-mail: Stefan.Misu@areva.com [AREVA, AREVA NP GmbH, Erlangen (Germany)]
2011-04-15
Research highlights: > Introduction of a reference Monte Carlo based depletion code with extended capabilities. > Verification and validation results for MCOR. > Utilization of MCOR for benchmarking deterministic lattice physics (spectral) codes. - Abstract: The MCOR (MCnp-kORigen) code system is a Monte Carlo based depletion system for reference fuel assembly and core calculations. The MCOR code is designed as an interfacing code that provides depletion capability to the LANL Monte Carlo code MCNP5 by coupling it with the AREVA NP depletion code KORIGEN. The physical quality of both codes is unchanged. The MCOR code system has been maintained and continuously enhanced since it was initially developed and validated. The verification of the coupling was made by evaluating the MCOR code against similarly sophisticated code systems such as MONTEBURNS, OCTOPUS and TRIPOLI-PEPIN. After its validation, the MCOR code has been further improved with important features. The MCOR code presents several valuable capabilities such as: (a) a predictor-corrector depletion algorithm, (b) utilization of KORIGEN as the depletion module, (c) individual depletion calculation of each burnup zone (no burnup zone grouping is required, which is particularly important for the modeling of gadolinium rings), and (d) on-line burnup cross-section generation by the Monte Carlo calculation for 88 isotopes, with the KORIGEN libraries for typical PWR and BWR spectra used for the remaining isotopes. Beyond these capabilities, the newest MCOR enhancements focus on the ability to execute the MCNP5 calculation in sequential or parallel mode, a user-friendly automatic re-start capability, a modification of the burnup step size evaluation, and a post-processor and test-matrix, to name the most important. The article describes the capabilities of the MCOR code system, from its design and development to its latest improvements and further enhancements.
Energy Technology Data Exchange (ETDEWEB)
Liang, Jingang; Wang, Kan; Qiu, Yishu [Dept. of Engineering Physics, LiuQing Building, Tsinghua University, Beijing (China)]; Chai, Xiao Ming; Qiang, Sheng Long [Science and Technology on Reactor System Design Technology Laboratory, Nuclear Power Institute of China, Chengdu (China)]
2016-06-15
Because of prohibitive data storage requirements in large-scale simulations, the memory problem is an obstacle for Monte Carlo (MC) codes in accomplishing pin-wise three-dimensional (3D) full-core calculations, particularly for whole-core depletion analyses. Various kinds of data are evaluated and the total memory requirements are quantified based on the Reactor Monte Carlo (RMC) code, showing that tally data, material data, and isotope densities in depletion are the three major parts of memory storage. The domain decomposition method is investigated as a means of saving memory, by dividing the spatial geometry into domains that are simulated separately by parallel processors. For the validity of particle tracking during transport simulations, particles need to be communicated between domains. For efficiency, an asynchronous particle communication algorithm is designed and implemented. Furthermore, we couple the domain decomposition method with the MC burnup process, using a consistent domain partition in both the transport and depletion modules. A numerical test of 3D full-core burnup calculations is carried out, indicating that the RMC code, with the domain decomposition method, is capable of pin-wise full-core burnup calculations with millions of depletion regions.
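The domain-partitioned tracking described above can be sketched in miniature. The 1-D slab, cross sections, and serial "outbox" below are illustrative stand-ins for RMC's actual geometry and asynchronous MPI communication; particles reaching a domain boundary are buffered for the neighbouring domain rather than tracked past it:

```python
import random

# Serial emulation of domain-decomposed particle tracking: a 1-D slab split
# into two domains at x = 5, each with its own absorption tally. Cross
# section and absorption probability are illustrative, not from RMC.

random.seed(1)
BOUNDARY, SLAB_END, SIGMA_T = 5.0, 10.0, 0.5

def track(x, domain_hi):
    """Track one particle up to domain_hi; return ('absorbed'|'crossed'|'leaked', x)."""
    while True:
        x += random.expovariate(SIGMA_T)   # sample distance to next collision
        if x >= domain_hi:
            return ("leaked" if domain_hi >= SLAB_END else "crossed", domain_hi)
        if random.random() < 0.5:          # 50% absorption per collision
            return ("absorbed", x)

absorbed = [0, 0]          # per-domain tallies
outbox = []                # communication buffer from domain 0 to domain 1
for _ in range(1000):      # source particles born at x = 0 in domain 0
    fate, x = track(0.0, BOUNDARY)
    if fate == "absorbed":
        absorbed[0] += 1
    elif fate == "crossed":
        outbox.append(x)   # would be an asynchronous send in a real code
for x in outbox:           # domain 1 drains its inbox
    fate, x = track(x, SLAB_END)
    if fate == "absorbed":
        absorbed[1] += 1
print("per-domain absorptions:", absorbed)
```

Restarting the flight at the boundary is valid here because the exponential free-path distribution is memoryless; the point of the sketch is that each domain only ever holds its own geometry and tallies.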
Grand Canonical Ensemble Monte Carlo Simulation of Depletion Interactions in Colloidal Suspensions
Institute of Scientific and Technical Information of China (English)
GUO Ji-Yuan; XIAO Chang-Ming
2008-01-01
Depletion interactions in colloidal suspensions confined between two parallel plates are investigated by using the acceptance ratio method with grand canonical ensemble Monte Carlo simulation. The numerical results show that both the depletion potential and the depletion force are affected by the confinement from the two parallel plates. Furthermore, it is found that in the grand canonical ensemble Monte Carlo simulation, the depletion interactions are strongly affected by the generalized chemical potential.
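The grand canonical insertion and deletion moves underlying such a simulation can be sketched as follows. This is a bare-bones hard-disk toy with invented parameters, not the authors' confined colloid model; it shows only the acceptance rules that make the particle number fluctuate at fixed activity:

```python
import random

# Grand canonical Monte Carlo of hard disks in a square box. Insertion is
# accepted with prob min(1, z*V/(N+1)) if no hard-core overlap; deletion
# with prob min(1, N/(z*V)). Box size, radius, and activity are illustrative.

random.seed(2)
L_BOX, R, Z_ACT = 10.0, 0.5, 0.5   # box side, disk radius, activity z
V = L_BOX * L_BOX

def overlaps(p, disks):
    return any((p[0] - q[0])**2 + (p[1] - q[1])**2 < (2 * R)**2 for q in disks)

disks = []
for _ in range(20000):
    if random.random() < 0.5:                        # attempt insertion
        p = (random.uniform(0, L_BOX), random.uniform(0, L_BOX))
        if not overlaps(p, disks) and \
           random.random() < min(1.0, Z_ACT * V / (len(disks) + 1)):
            disks.append(p)
    elif disks:                                      # attempt deletion
        if random.random() < min(1.0, len(disks) / (Z_ACT * V)):
            disks.pop(random.randrange(len(disks)))
print("equilibrium particle count ~", len(disks))
```

An ideal gas at this activity would average z*V = 50 particles; the hard-core exclusion pushes the equilibrium count below that, which is exactly the kind of chemical-potential dependence the abstract refers to.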
SCALE Continuous-Energy Monte Carlo Depletion with Parallel KENO in TRITON
Energy Technology Data Exchange (ETDEWEB)
Goluoglu, Sedat [ORNL]; Bekar, Kursat B [ORNL]; Wiarda, Dorothea [ORNL]
2012-01-01
The TRITON sequence of the SCALE code system is a powerful and robust tool for performing multigroup (MG) reactor physics analysis using either the 2-D deterministic solver NEWT or the 3-D Monte Carlo transport code KENO. However, as with all MG codes, the accuracy of the results depends on the accuracy of the MG cross sections that are generated and/or used. While SCALE resonance self-shielding modules provide rigorous resonance self-shielding, they are based on 1-D models and therefore 2-D or 3-D effects such as heterogeneity of the lattice structures may render final MG cross sections inaccurate. Another potential drawback to MG Monte Carlo depletion is the need to perform resonance self-shielding calculations at each depletion step for each fuel segment that is being depleted. The CPU time and memory required for self-shielding calculations can often eclipse the resources needed for the Monte Carlo transport. This summary presents the results of the new continuous-energy (CE) calculation mode in TRITON. With the new capability, accurate reactor physics analyses can be performed for all types of systems using the SCALE Monte Carlo code KENO as the CE transport solver. In addition, transport calculations can be performed in parallel mode on multiple processors.
Fensin, Michael Lorne
and supported radiation transport code, for further development of a Monte Carlo-linked depletion methodology which is essential to the future development of advanced reactor technologies that exceed the limitations of current deterministic based methods.
Institute of Scientific and Technical Information of China (English)
XIAO Chang-Ming; GUO Ji-Yuan; HU Ping
2006-01-01
According to the acceptance ratio method, the influence on the depletion interactions between a large sphere and a plate of another closely placed large sphere is studied by Monte Carlo simulation. The numerical results show that both the depletion potential and the depletion force are affected by the presence of the closely placed large sphere; the closer the large sphere is placed to them, the larger the influence. Furthermore, the influence of the other large sphere on the depletion interactions is more sensitive to the angle than to the distance.
Adding trend data to Depletion-Based Stock Reduction Analysis
National Oceanic and Atmospheric Administration, Department of Commerce — A Bayesian model of Depletion-Based Stock Reduction Analysis (DB-SRA), informed by a time series of abundance indexes, was developed, using the Sampling Importance...
Qin, Jianguo; Liu, Rong; Zhu, Tonghua; Zhang, Xinwei; Ye, Bangjiao
2015-01-01
To overcome the problems of inefficient computing time and unreliable results in MCNP5 calculations, a two-step method is adopted to calculate the energy deposition of prompt gamma-rays in detectors for depleted uranium spherical shells under D-T neutron irradiation. In the first step, the gamma-ray spectrum for energies below 7 MeV is calculated by the MCNP5 code; in the second step, the electron recoil spectrum in a BC501A liquid scintillator detector is simulated with the EGSnrc Monte Carlo code, using the gamma-ray spectrum from the first step as input. The comparison of calculated results with experimental ones shows that the simulations agree well with experiment in the energy region 0.4-3 MeV for the prompt gamma-ray spectrum and below 4 MeVee for the electron recoil spectrum. The reliability of the two-step method in this work is thus validated.
Accelerated GPU based SPECT Monte Carlo simulations.
Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris
2016-06-07
Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and was used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: (99m)Tc, (111)In and (131)I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between the GATE and GGEMS platforms derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. In addition, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency.
CERN Summer Student Report 2016 Monte Carlo Data Base Improvement
Caciulescu, Alexandru Razvan
2016-01-01
During my Summer Student project I worked on improving the Monte Carlo Data Base and MonALISA services for the ALICE Collaboration. The project included learning the infrastructure for tracking and monitoring of the Monte Carlo productions as well as developing a new RESTful API for seamless integration with the JIRA issue tracking framework.
Dashora, Nirvikar
2012-07-01
Equatorial plasma bubbles (EPBs) and associated plasma irregularities are known to cause severe scintillation of satellite signals and to produce range errors, which eventually result either in loss of lock of the signal or in random fluctuations in TEC, respectively, affecting precise positioning and navigation solutions. The EPBs manifest as sudden reductions in line-of-sight TEC, more often called TEC depletions, and are spread over thousands of km in the meridional direction and a few hundred km in the zonal direction. They change shape and size while drifting from one longitude to another in the nighttime ionosphere. For a satellite-based navigation system like GAGAN in India, which depends upon (i) multiple satellites (i.e. GPS), (ii) multiple ground reference stations and (iii) near real-time data processing, such EPBs are of grave concern. A TEC model generally provides near real-time grid-based ionospheric vertical errors (GIVEs) over hypothetical 5x5 degree latitude-longitude grid points. But on nights when a TEC depletion occurs in a given longitude sector, it is almost impossible for any system to give a forecast of GIVEs. If loss-of-lock events occur due to scintillation, there is no way to improve the situation. But when large and random depletions in TEC occur with scintillations and without loss-of-lock, low-latitude TEC is affected in two ways: (a) multiple satellites show depleted TEC which may be very different from model-TEC values, so the GIVE would be incorrect over various grid points, and (b) the user may be affected by depletions which are not sampled by reference stations, so the interpolated GIVE within one square would be grossly erroneous. The most general (and by far the most difficult) solution is having advance knowledge of the spatio-temporal occurrence and precise magnitude of such depletions. While forecasting TEC depletions in the spatio-temporal domain is a scientific challenge (as we show below), operational systems
Monte-Carlo simulation-based statistical modeling
Chen, John
2017-01-01
This book brings together expert researchers engaged in Monte-Carlo simulation-based statistical modeling, offering them a forum to present and discuss recent issues in methodological development as well as public health applications. It is divided into three parts, with the first providing an overview of Monte-Carlo techniques, the second focusing on missing data Monte-Carlo methods, and the third addressing Bayesian and general statistical modeling using Monte-Carlo simulations. The data and computer programs used here will also be made publicly available, allowing readers to replicate the model development and data analysis presented in each chapter, and to readily apply them in their own research. Featuring highly topical content, the book has the potential to impact model development and data analyses across a wide spectrum of fields, and to spark further research in this direction.
DEFF Research Database (Denmark)
Stenbæk, D S; Einarsdottir, H S; Goregliad-Fjaellingsdal, T
2016-01-01
Acute Tryptophan Depletion (ATD) is a dietary method used to modulate central 5-HT to study the effects of temporarily reduced 5-HT synthesis. The aim of this study is to evaluate a novel method of ATD using a gelatin-based collagen peptide (CP) mixture. We administered CP-Trp or CP+Trp mixtures...
Monte Carlo based radial shield design of typical PWR reactor
Energy Technology Data Exchange (ETDEWEB)
Gul, Anas; Khan, Rustam; Qureshi, M. Ayub; Azeem, Muhammad Waqar; Raza, S.A. [Pakistan Institute of Engineering and Applied Sciences, Islamabad (Pakistan). Dept. of Nuclear Engineering; Stummer, Thomas [Technische Univ. Wien (Austria). Atominst.
2016-11-15
Neutron and gamma flux and dose equivalent rate distributions are analysed in the radial shields of a typical PWR type reactor based on the Monte Carlo radiation transport computer code MCNP5. The ENDF/B-VI continuous energy cross-section library has been employed for the criticality and shielding analysis. The computed results are in good agreement with the reference results (the maximum difference is less than 56 %). This implies that MCNP5 is a good tool for accurate prediction of neutron and gamma flux and dose rates in the radial shield around the core of PWR type reactors.
Quantitative Monte Carlo-based holmium-166 SPECT reconstruction
Energy Technology Data Exchange (ETDEWEB)
Elschot, Mattijs; Smits, Maarten L. J.; Nijsen, Johannes F. W.; Lam, Marnix G. E. H.; Zonnenberg, Bernard A.; Bosch, Maurice A. A. J. van den; Jong, Hugo W. A. M. de [Department of Radiology and Nuclear Medicine, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX Utrecht (Netherlands); Viergever, Max A. [Image Sciences Institute, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX Utrecht (Netherlands)
2013-11-15
Purpose: Quantitative imaging of the radionuclide distribution is of increasing interest for microsphere radioembolization (RE) of liver malignancies, to aid treatment planning and dosimetry. For this purpose, holmium-166 ({sup 166}Ho) microspheres have been developed, which can be visualized with a gamma camera. The objective of this work is to develop and evaluate a new reconstruction method for quantitative {sup 166}Ho SPECT, including Monte Carlo-based modeling of photon contributions from the full energy spectrum.Methods: A fast Monte Carlo (MC) simulator was developed for simulation of {sup 166}Ho projection images and incorporated in a statistical reconstruction algorithm (SPECT-fMC). Photon scatter and attenuation for all photons sampled from the full {sup 166}Ho energy spectrum were modeled during reconstruction by Monte Carlo simulations. The energy- and distance-dependent collimator-detector response was modeled using precalculated convolution kernels. Phantom experiments were performed to quantitatively evaluate image contrast, image noise, count errors, and activity recovery coefficients (ARCs) of SPECT-fMC in comparison with those of an energy window-based method for correction of down-scattered high-energy photons (SPECT-DSW) and a previously presented hybrid method that combines MC simulation of photopeak scatter with energy window-based estimation of down-scattered high-energy contributions (SPECT-ppMC+DSW). Additionally, the impact of SPECT-fMC on whole-body recovered activities (A{sup est}) and estimated radiation absorbed doses was evaluated using clinical SPECT data of six {sup 166}Ho RE patients.Results: At the same noise level, SPECT-fMC images showed substantially higher contrast than SPECT-DSW and SPECT-ppMC+DSW in spheres ≥17 mm in diameter. The count error was reduced from 29% (SPECT-DSW) and 25% (SPECT-ppMC+DSW) to 12% (SPECT-fMC). ARCs in five spherical volumes of 1.96–106.21 ml were improved from 32%–63% (SPECT-DSW) and 50%–80
Too exhausted to remember: ego depletion undermines subsequent event-based prospective memory.
Li, Jian-Bin; Nie, Yan-Gang; Zeng, Min-Xia; Huntoon, Meghan; Smith, Jessi L
2013-01-01
Past research has consistently found that people are likely to do worse on high-level cognitive tasks after exerting self-control on previous actions. However, little is known about the extent to which ego depletion affects subsequent prospective memory. Drawing upon the self-control strength model and the relationship between self-control resources and executive control, this study proposes that an initial exertion of self-control may undermine subsequent event-based prospective memory (EBPM). Ego depletion was manipulated through watching a video requiring visual attention (Experiment 1) or completing an incongruent Stroop task (Experiment 2). Participants were then tested on EBPM embedded in an ongoing task. As predicted, the results showed that after ruling out possible intervening variables (e.g. mood, focal and nonfocal cues, and characteristics of the ongoing task and ego depletion task), participants in the high-depletion condition performed significantly worse on EBPM than those in the low-depletion condition. The results suggested that the effect of ego depletion on EBPM was mainly due to an impaired prospective component rather than to a retrospective component.
GPU-Monte Carlo based fast IMRT plan optimization
Directory of Open Access Journals (Sweden)
Yongbao Li
2014-03-01
Purpose: Intensity-modulated radiation treatment (IMRT) plan optimization needs pre-calculated beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computation speed. However, inaccurate beamlet dose distributions, particularly in cases with high levels of inhomogeneity, may mislead the optimization and degrade the resulting plan quality. It is desirable to use Monte Carlo (MC) methods for beamlet dose calculations, yet the long computational time of repeated dose calculations for a large number of beamlets has prevented this application. Our objective is to integrate a GPU-based MC dose engine in lung IMRT optimization using a novel two-step workflow. Methods: A GPU-based MC code, gDPM, is used. Each particle is tagged with the index of the beamlet from which the source particle originates. Deposited doses are stored separately for each beamlet based on the index. Due to the limited GPU memory size, a pyramid space is allocated for each beamlet, and dose outside that space is neglected. A two-step optimization workflow is proposed for fast MC-based optimization. In the first step, a rough dose calculation is conducted with only a small number of particles per beamlet. Plan optimization follows to get an approximate fluence map. In the second step, more accurate beamlet doses are calculated, where the number of particles sampled for a beamlet is proportional to the intensity determined previously. A second round of optimization is conducted, yielding the final result. Results: For a lung case with 5317 beamlets, 10^5 particles per beamlet in the first round and 10^8 particles per beam in the second round are enough to get a good plan quality. The total simulation time is 96.4 sec. Conclusion: A fast GPU-based MC dose calculation method along with a novel two-step optimization workflow has been developed. The high efficiency allows the use of MC for IMRT optimizations.
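The two-step workflow can be mimicked on a synthetic toy problem. The dose matrix, noise model, quadratic objective, and particle-allocation rule below are all assumptions made for illustration; they are not the gDPM implementation, only the shape of the coarse-optimize-then-refine loop:

```python
import numpy as np

# Toy two-step MC-based fluence optimization: round 1 optimizes on noisy
# "low-particle" beamlet doses; round 2 re-estimates doses with particle
# counts proportional to the round-1 intensities, then re-optimizes.

rng = np.random.default_rng(0)
n_vox, n_blt = 50, 10
A_true = rng.uniform(0.0, 1.0, (n_vox, n_blt))    # true beamlet dose matrix
d_target = A_true @ rng.uniform(0.5, 1.5, n_blt)  # achievable target dose

def mc_estimate(n_particles):
    """MC beamlet doses: statistical noise shrinks as 1/sqrt(n_particles)."""
    return A_true + rng.normal(0.0, 1.0, A_true.shape) / np.sqrt(n_particles)

def optimize(A, x0, iters=500, lr=1e-3):
    """Projected gradient descent on ||A x - d_target||^2 with x >= 0."""
    x = x0.copy()
    for _ in range(iters):
        x -= lr * (A.T @ (A @ x - d_target))
        x = np.maximum(x, 0.0)                    # fluence must be nonnegative
    return x

x1 = optimize(mc_estimate(1e2), np.ones(n_blt))   # step 1: coarse MC doses
particles = 1e2 + 1e6 * x1 / x1.sum()             # resample in proportion to x1
A2 = np.stack([mc_estimate(p)[:, j] for j, p in enumerate(particles)], axis=1)
x2 = optimize(A2, x1)                             # step 2: refine from x1
err = np.linalg.norm(A_true @ x2 - d_target) / np.linalg.norm(d_target)
print(f"relative dose error after refinement: {err:.3f}")
```

The payoff of the second step is that expensive high-statistics sampling is concentrated on the beamlets that actually carry intensity, which is the core idea of the workflow described above.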
The New MCNP6 Depletion Capability
Energy Technology Data Exchange (ETDEWEB)
Fensin, Michael Lorne [Los Alamos National Laboratory; James, Michael R. [Los Alamos National Laboratory; Hendricks, John S. [Los Alamos National Laboratory; Goorley, John T. [Los Alamos National Laboratory
2012-06-19
The first MCNP based inline Monte Carlo depletion capability was officially released from the Radiation Safety Information and Computational Center as MCNPX 2.6.0. Both the MCNP5 and MCNPX codes have historically provided a successful combinatorial geometry based, continuous energy, Monte Carlo radiation transport solution for advanced reactor modeling and simulation. However, due to separate development pathways, useful simulation capabilities were dispersed between both codes and not unified in a single technology. MCNP6, the next evolution in the MCNP suite of codes, now combines the capability of both simulation tools, as well as providing new advanced technology, in a single radiation transport code. We describe here the new capabilities of the MCNP6 depletion code dating from the official RSICC release MCNPX 2.6.0, reported previously, to the now current state of MCNP6. NEA/OECD benchmark results are also reported. The MCNP6 depletion capability enhancements beyond MCNPX 2.6.0 reported here include: (1) new performance enhancing parallel architecture that implements both shared and distributed memory constructs; (2) enhanced memory management that maximizes calculation fidelity; and (3) improved burnup physics for better nuclide prediction. MCNP6 depletion enables complete, relatively easy-to-use depletion calculations in a single Monte Carlo code. The enhancements described here help provide a powerful capability as well as dictate a path forward for future development to improve the usefulness of the technology.
Ab initio based Monte Carlo studies of Cu-depleted CIS phases for solar cells
Energy Technology Data Exchange (ETDEWEB)
Ludwig, Christian; Gruhn, Thomas; Felser, Claudia [Institut fuer Anorganische and Analytische Chemie, Johannes Gutenberg-Universitaet Mainz (Germany); Windeln, Johannes [IBM Mainz (Germany)
2011-07-01
Thin film solar cells with a CuInSe{sub 2} (CIS) absorber layer have an increasing share of the solar cell market because of their low production costs and high efficiency. One interesting aspect of CIS is its inherent resilience to defects and composition fluctuations. Besides the stable CuInSe{sub 2} phase, there are various Cu-poor phases along the Cu{sub 2}Se-In{sub 2}Se{sub 3} tie line, including the CuIn{sub 3}Se{sub 5} and the CuIn{sub 5}Se{sub 8} phase. We have used ab initio calculations of Cu-poor CIS configurations to make a cluster expansion of the configurational energy. In the configurations, Cu atoms, In atoms, and vacancies are distributed over the Cu and In sites of a CIS cell with fixed Se atoms. With the resulting energy expression, CuIn{sub 3}Se{sub 5} and CuIn{sub 5}Se{sub 8} systems have been studied in the canonical ensemble. By analyzing the free energy landscape, the transition temperature between a low-temperature ordered and a high-temperature disordered CuIn{sub 5}Se{sub 8} phase has been determined. Furthermore, grand canonical ensemble simulations have been carried out, which provide the equilibrium Cu and In concentrations as a function of the chemical potentials {mu}{sub Cu} and {mu}{sub In}. Plateau regions for the CuInSe{sub 2} and the CuIn{sub 5}Se{sub 8} phases have been found and analyzed for different temperatures.
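The combination of a cluster-expansion energy with canonical Monte Carlo sampling can be sketched in one dimension. The single nearest-neighbour interaction J and the swap-move dynamics below are illustrative stand-ins; a real cluster expansion would contain many effective cluster interactions fitted to ab initio data:

```python
import random
import math

# Minimal "cluster expansion + canonical Monte Carlo" sketch: a 1-D ring of
# 60 sites holding two species (0 = Cu, 1 = In) at fixed composition. The CE
# energy is J per unlike nearest-neighbour pair; sampling uses Metropolis
# swap moves, which conserve composition (canonical ensemble).

random.seed(3)
N, J, BETA = 60, 1.0, 2.0          # sites, unlike-pair penalty, inverse T
sites = [0] * 30 + [1] * 30        # 30 Cu, 30 In
random.shuffle(sites)

def pair_energy(s):
    """CE energy: J for every unlike nearest-neighbour pair on the ring."""
    return J * sum(s[i] != s[(i + 1) % N] for i in range(N))

E = pair_energy(sites)
for _ in range(20000):
    i, j = random.randrange(N), random.randrange(N)
    if sites[i] == sites[j]:
        continue
    sites[i], sites[j] = sites[j], sites[i]      # trial swap
    E_new = pair_energy(sites)
    if random.random() >= math.exp(-BETA * (E_new - E)):
        sites[i], sites[j] = sites[j], sites[i]  # reject: swap back
    else:
        E = E_new
print("unlike pairs after annealing at low T:", pair_energy(sites) / J)
```

At this low temperature the chain orders into large single-species blocks (few unlike pairs); raising the temperature would restore the disordered state, which is the order-disorder transition probed in the study above.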
A Monte Carlo-based model of gold nanoparticle radiosensitization
Lechtman, Eli Solomon
The goal of radiotherapy is to operate within the therapeutic window - delivering doses of ionizing radiation to achieve locoregional tumour control, while minimizing normal tissue toxicity. A greater therapeutic ratio can be achieved by utilizing radiosensitizing agents designed to enhance the effects of radiation at the tumour. Gold nanoparticles (AuNPs) represent a novel radiosensitizer with unique and attractive properties. AuNPs enhance local photon interactions, thereby converting photons into localized damaging electrons. Experimental reports of AuNP radiosensitization reveal this enhancement effect to be highly sensitive to irradiation source energy, cell line, and AuNP size, concentration and intracellular localization. This thesis explored the physics and some of the underlying mechanisms behind AuNP radiosensitization. A Monte Carlo simulation approach was developed to investigate the enhanced photoelectric absorption within AuNPs, and to characterize the escaping energy and range of the photoelectric products. Simulations revealed a 10^3-fold increase in the rate of photoelectric absorption using low-energy brachytherapy sources compared to megavolt sources. For low-energy sources, AuNPs released electrons with ranges of only a few microns in the surrounding tissue. For higher energy sources, longer ranged photoelectric products travelled orders of magnitude farther. A novel radiobiological model called the AuNP radiosensitization predictive (ARP) model was developed based on the unique nanoscale energy deposition pattern around AuNPs. The ARP model incorporated detailed Monte Carlo simulations with experimentally determined parameters to predict AuNP radiosensitization. This model compared well to in vitro experiments involving two cancer cell lines (PC-3 and SK-BR-3), two AuNP sizes (5 and 30 nm) and two source energies (100 and 300 kVp). The ARP model was then used to explore the effects of AuNP intracellular localization using 1.9 and 100 nm Au
MCHITS: Monte Carlo based Method for Hyperlink Induced Topic Search on Networks
Directory of Open Access Journals (Sweden)
Zhaoyan Jin
2013-10-01
Hyperlink Induced Topic Search (HITS) is the most authoritative and most widely used personalized ranking algorithm on networks. The HITS algorithm ranks nodes on networks by power iteration, which has high computational complexity. This paper models the HITS algorithm with the Monte Carlo method and proposes Monte Carlo based algorithms for the HITS computation. Theoretical analysis and experiments show that Monte Carlo based approximate computing of the HITS ranking greatly reduces the required computing resources while maintaining high accuracy, and significantly outperforms related approaches.
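One way a Monte Carlo approximation of HITS can work is by counting visits of short random walks that alternate between in-links and out-links, since the authority vector is the principal eigenvector of A{sup T}A. The tiny graph and walk scheme below are an illustrative sketch under that interpretation, not necessarily the paper's exact algorithm:

```python
import random
from collections import Counter

# Monte Carlo estimate of HITS authorities: each walk step goes
# authority -> (one of its hubs via an in-link) -> (one of that hub's
# authorities via an out-link), mimicking one application of A^T A.
# Authority visits are tallied instead of iterating the full matrix.

random.seed(4)
edges = [(0, 2), (1, 2), (3, 2), (0, 4), (1, 4), (2, 5)]  # (hub, authority)
out_links = {u: [v for a, v in edges if a == u] for u, _ in edges}
in_links = {v: [u for u, a in edges if a == v] for _, v in edges}

visits = Counter()
for _ in range(5000):
    node = random.choice(list(in_links))      # start at some authority
    for _ in range(4):                        # a few hub/authority bounces
        node = random.choice(in_links[node])  # authority -> one of its hubs
        if node not in out_links:             # dangling hub: end this walk
            break
        node = random.choice(out_links[node]) # hub -> one of its authorities
        visits[node] += 1                     # tally the authority visit

ranking = [n for n, _ in visits.most_common()]
print("estimated top authority:", ranking[0])
```

On this graph node 2 has the most in-links from good hubs, so the visit counts single it out as the top authority, matching what exact power iteration would rank first.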
Jiang, Xu; Deng, Yong; Luo, Zhaoyang; Wang, Kan; Lian, Lichao; Yang, Xiaoquan; Meglinski, Igor; Luo, Qingming
2014-12-29
The path-history-based fluorescence Monte Carlo method used for fluorescence tomography imaging reconstruction has attracted increasing attention. In this paper, we first validate the standard fluorescence Monte Carlo (sfMC) method by experimenting with a cylindrical phantom. Then, we describe a path-history-based decoupled fluorescence Monte Carlo (dfMC) method, analyze different perturbation fluorescence Monte Carlo (pfMC) methods, and compare the calculation accuracy and computational efficiency of the dfMC and pfMC methods using the sfMC method as a reference. The results show that the dfMC method is more accurate and efficient than the pfMC method in heterogeneous medium.
Energy Technology Data Exchange (ETDEWEB)
Ondis, L.A., II; Tyburski, L.J.; Moskowitz, B.S.
2000-03-01
The RCP01 Monte Carlo program is used to analyze many geometries of interest in nuclear design and analysis of light water moderated reactors such as the core in its pressure vessel with complex piping arrangement, fuel storage arrays, shipping and container arrangements, and neutron detector configurations. Written in FORTRAN and in use on a variety of computers, it is capable of estimating steady state neutron or photon reaction rates and neutron multiplication factors. The energy range covered in neutron calculations is that relevant to the fission process and subsequent slowing-down and thermalization, i.e., 20 MeV to 0 eV. The same energy range is covered for photon calculations.
Application of Photon Transport Monte Carlo Module with GPU-based Parallel System
Energy Technology Data Exchange (ETDEWEB)
Park, Chang Je [Sejong University, Seoul (Korea, Republic of); Shon, Heejeong [Golden Eng. Co. LTD, Seoul (Korea, Republic of); Lee, Donghak [CoCo Link Inc., Seoul (Korea, Republic of)
2015-05-15
In general, Monte Carlo simulations take a great deal of computing time to produce reliable results, especially in deep-penetration problems with a thick shielding medium. To mitigate this weakness of the Monte Carlo method, many variance reduction algorithms have been proposed, including geometry splitting and Russian roulette, weight windows, the exponential transform, and forced collision. At the same time, advanced computing hardware such as GPU (Graphics Processing Units)-based parallel machines is used to obtain better performance from the Monte Carlo simulation. A GPU is much easier to access and manage than a CPU cluster system, and it has also become less expensive owing to advances in computer technology. Therefore, many engineering fields have adopted GPU-based massively parallel computation techniques, as in this GPU-based photon transport Monte Carlo method. It provides almost a 30-fold speedup without any optimization, and an almost 200-fold speedup is expected with a fully supported GPU system. It is expected that a GPU system with an advanced parallelization algorithm will contribute successfully to the development of Monte Carlo modules that require quick and accurate simulations.
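Among the variance reduction techniques listed above, Russian roulette is easy to illustrate in isolation. The sketch below applies it to a toy forward-only slab with implicit capture; the slab model, survival fraction and weight cutoff are all invented for the example and are not taken from the paper:

```python
import math
import random

def transmitted_weight(mu, L, c=0.5, n=200_000, w_cut=0.25, seed=1):
    """Toy forward-only slab: at each collision the photon keeps a survival
    fraction c of its weight (implicit capture), so the expected transmitted
    weight is exp(-(1 - c) * mu * L).  Russian roulette terminates low-weight
    histories without biasing the estimate."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        w, x = 1.0, 0.0
        while True:
            x += -math.log(1.0 - rng.random()) / mu   # sample a free path
            if x >= L:
                total += w                            # escaped: score the weight
                break
            w *= c                                    # implicit capture at a collision
            if w < w_cut:                             # play Russian roulette
                if rng.random() < 0.5:
                    w *= 2.0                          # survivor: weight doubled (unbiased)
                else:
                    break                             # history killed
    return total / n

est = transmitted_weight(mu=2.0, L=1.0)
```

Killing half the low-weight histories while doubling the survivors' weights leaves the expectation unchanged, which is why the estimate still converges to the analytic value exp(-(1 - c)·μL) for this toy model.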
Accuracy Analysis for 6-DOF PKM with Sobol Sequence Based Quasi Monte Carlo Method
Institute of Scientific and Technical Information of China (English)
Jianguang Li; Jian Ding; Lijie Guo; Yingxue Yao; Zhaohong Yi; Huaijing Jing; Honggen Fang
2015-01-01
To improve the precision of pose error analysis for a 6-DOF parallel kinematic mechanism (PKM) during assembly quality control, a Sobol sequence based Quasi Monte Carlo (QMC) method is introduced and implemented for pose accuracy analysis of the PKM in this paper. Owing to the regularity and uniformity of its samples in high dimensions, the Sobol sequence based QMC method outperforms the traditional Monte Carlo method, with up to 98.59% and 98.25% improvements in the computational precision of pose error statistics. A PKM tolerance design system integrating this method is then developed, and with it the pose error distributions of the PKM within a prescribed workspace are obtained and analyzed.
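The idea of replacing pseudo-random samples with a low-discrepancy sequence can be sketched as follows. For brevity this uses a Halton sequence as a stand-in for Sobol (both are standard low-discrepancy constructions), and the integrand is an invented example rather than a PKM error model:

```python
def halton(i, base):
    """i-th element (1-indexed) of the van der Corput sequence in the given base."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def qmc_integrate(f, n):
    """Quasi Monte Carlo estimate of a 2-D integral over the unit square,
    using a Halton low-discrepancy point set (a stand-in for Sobol here)."""
    return sum(f(halton(i, 2), halton(i, 3)) for i in range(1, n + 1)) / n

# toy integrand: the exact integral of x*y over [0,1]^2 is 0.25
est = qmc_integrate(lambda x, y: x * y, 4096)
```

Because the points fill the unit square far more evenly than pseudo-random draws, the error decays roughly like O(log²n / n) instead of the O(n^-1/2) of plain Monte Carlo, which is the source of the precision gains the paper reports.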
Energy Technology Data Exchange (ETDEWEB)
Lopez-Tarjuelo, J.; Garcia-Molla, R.; Suan-Senabre, X. J.; Quiros-Higueras, J. Q.; Santos-Serra, A.; Marco-Blancas, N.; Calzada-Feliu, S.
2013-07-01
We report the acceptance for clinical use of the Monaco computerized treatment planning system, which is based on a virtual model of the energy yield of the linear electron accelerator head and performs the dose calculation with an X-ray voxel Monte Carlo (XVMC) algorithm. (Author)
Sampling uncertainty evaluation for data acquisition board based on Monte Carlo method
Ge, Leyi; Wang, Zhongyu
2008-10-01
Evaluating the sampling uncertainty of a data acquisition board is a difficult problem in the field of signal sampling. This paper first analyzes the sources of data acquisition board sampling uncertainty, then introduces a simulation theory for data acquisition board sampling uncertainty evaluation based on the Monte Carlo method, and puts forward a relational model among the sampling uncertainty results, the number of samples and the number of simulation runs. For different sample numbers and different signal ranges, the authors establish a random sampling uncertainty evaluation program for a PCI-6024E data acquisition board to execute the simulation. The results of the proposed Monte Carlo simulation method are in good agreement with the GUM results, validating the Monte Carlo method.
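A minimal sketch of the kind of Monte Carlo versus GUM comparison described above, for a toy reading with Gaussian noise plus a uniform quantization error. All numeric values are illustrative, not the PCI-6024E's specifications:

```python
import math
import random
import statistics

def mc_uncertainty(u_noise=0.5e-3, q=1e-3, n=100_000, seed=7):
    """Monte Carlo propagation for a toy sampled reading: Gaussian noise of
    standard uncertainty u_noise plus a uniform quantization error of one
    code width q (illustrative values)."""
    rng = random.Random(seed)
    samples = [rng.gauss(0.0, u_noise) + rng.uniform(-q / 2, q / 2)
               for _ in range(n)]
    return statistics.pstdev(samples)

u_mc = mc_uncertainty()
# GUM root-sum-square combination: uniform of width q contributes q^2/12
u_gum = math.sqrt((0.5e-3) ** 2 + (1e-3) ** 2 / 12)
```

For this linear model the two approaches agree closely; the Monte Carlo route becomes the more trustworthy one when the measurement model is nonlinear or the output distribution is far from Gaussian.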
A new Monte Carlo simulation model for laser transmission in smokescreen based on MATLAB
Lee, Heming; Wang, Qianqian; Shan, Bin; Li, Xiaoyang; Gong, Yong; Zhao, Jing; Peng, Zhong
2016-11-01
A new Monte Carlo simulation model of laser transmission in a smokescreen is proposed in this paper. In the traditional Monte Carlo simulation model, the particle radius is set to a single value and the initial direction cosine of the photons is also fixed, which can only yield an approximate result. The new model is implemented in MATLAB and can simulate laser transmittance in smokescreens with different particle sizes, so its output is closer to real scenarios. To account for the divergence of the laser while travelling in the air, we changed the initial direction cosine of the photons relative to the traditional Monte Carlo model. The simulation results for mixed-radius particle smoke agree with the transmittance measured under the same experimental conditions to within a 5.42% error rate.
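A stripped-down version of such a photon-transport model can be sketched as follows. The optical parameters and the isotropic scattering assumption are invented for illustration and are far simpler than the paper's mixed-radius treatment:

```python
import math
import random

def transmittance(mu_t, albedo, L, n=100_000, seed=3):
    """Toy Monte Carlo of laser transmission through a smoke layer of
    thickness L: sample free paths from the extinction coefficient mu_t;
    scattered photons (probability = albedo) get a new random direction
    cosine, others are absorbed (all parameters illustrative)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x, cz = 0.0, 1.0                        # depth and direction cosine
        while True:
            x += cz * (-math.log(1.0 - rng.random()) / mu_t)
            if x >= L:
                hits += 1                       # photon leaves through the far side
                break
            if x < 0 or rng.random() >= albedo:
                break                           # backscattered out, or absorbed
            cz = rng.uniform(-1.0, 1.0)         # isotropic re-emission
    return hits / n
```

A convenient sanity check: in the absorption-only limit (albedo = 0) the estimate reduces to the Beer-Lambert transmittance exp(-μ_t·L), since only photons whose first free path exceeds L get through.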
Monte-Carlo based prediction of radiochromic film response for hadrontherapy dosimetry
Energy Technology Data Exchange (ETDEWEB)
Frisson, T. [Universite de Lyon, F-69622 Lyon (France); CREATIS-LRMN, INSA, Batiment Blaise Pascal, 7 avenue Jean Capelle, 69621 Villeurbanne Cedex (France); Centre Leon Berrard - 28 rue Laennec, F-69373 Lyon Cedex 08 (France)], E-mail: frisson@creatis.insa-lyon.fr; Zahra, N. [Universite de Lyon, F-69622 Lyon (France); IPNL - CNRS/IN2P3 UMR 5822, Universite Lyon 1, Batiment Paul Dirac, 4 rue Enrico Fermi, F-69622 Villeurbanne Cedex (France); Centre Leon Berrard - 28 rue Laennec, F-69373 Lyon Cedex 08 (France); Lautesse, P. [Universite de Lyon, F-69622 Lyon (France); IPNL - CNRS/IN2P3 UMR 5822, Universite Lyon 1, Batiment Paul Dirac, 4 rue Enrico Fermi, F-69622 Villeurbanne Cedex (France); Sarrut, D. [Universite de Lyon, F-69622 Lyon (France); CREATIS-LRMN, INSA, Batiment Blaise Pascal, 7 avenue Jean Capelle, 69621 Villeurbanne Cedex (France); Centre Leon Berrard - 28 rue Laennec, F-69373 Lyon Cedex 08 (France)
2009-07-21
A model has been developed to calculate MD-55-V2 radiochromic film response to ion irradiation. This model is based on photon film response and film saturation by high local energy deposition computed by Monte-Carlo simulation. We have studied the response of the film to photon irradiation and we proposed a calculation method for hadron beams.
Confronting uncertainty in model-based geostatistics using Markov Chain Monte Carlo simulation
Minasny, B.; Vrugt, J.A.; McBratney, A.B.
2011-01-01
This paper demonstrates for the first time the use of Markov Chain Monte Carlo (MCMC) simulation for parameter inference in model-based soil geostatistics. We implemented the recently developed DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm to jointly summarize the posterior distribution.
A lattice-based Monte Carlo evaluation of Canada Deuterium Uranium-6 safety parameters
Energy Technology Data Exchange (ETDEWEB)
Kim, Yong Hee; Hartanto, Donny; Kim, Woo Song [Dept. of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon (Korea, Republic of)
2016-06-15
Important safety parameters such as the fuel temperature coefficient (FTC) and the power coefficient of reactivity (PCR) of the CANada Deuterium Uranium (CANDU-6) reactor have been evaluated using the Monte Carlo method. For accurate analysis of the parameters, the Doppler broadening rejection correction scheme was implemented in the MCNPX code to account for the thermal motion of the heavy uranium-238 nucleus in the neutron-U scattering reactions. In this work, a standard fuel lattice has been modeled and the fuel is depleted using MCNPX. The FTC value is evaluated for several burnup points including the mid-burnup representing a near-equilibrium core. The Doppler effect has been evaluated using several cross-section libraries such as ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1.1, and JENDL-4.0. The PCR value is also evaluated at mid-burnup conditions to characterize the safety features of an equilibrium CANDU-6 reactor. To improve the reliability of the Monte Carlo calculations, we considered a huge number of neutron histories in this work and the standard deviation of the k-infinity values is only 0.5-1 pcm.
Fung, Wing K; Yu, Kexin; Yang, Yingrui; Zhou, Ji-Yuan
2016-08-08
Monte Carlo evaluation of resampling-based tests is often conducted in statistical analysis. However, this procedure is generally computationally intensive. The pooling resampling-based method has been developed to reduce the computational burden, but the validity of the method has not been studied before. In this article, we first investigate the asymptotic properties of the pooling resampling-based method and then propose a novel Monte Carlo evaluation procedure, namely the n-times pooling resampling-based method. Theorems as well as simulations show that the proposed method can give smaller or comparable root mean squared errors and bias with much less computing time, and thus it can be strongly recommended, especially for evaluating highly computationally intensive hypothesis testing procedures in genetic epidemiology.
A measurement-based generalized source model for Monte Carlo dose simulations of CT scans
Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun
2017-03-01
The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on the measurement data without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at x-ray target level with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution respectively with a Levenberg–Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model was tested on a GE LightSpeed and a Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameters) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients’ CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in the diagnostic and therapeutic radiology.
A Markov Chain Monte Carlo Based Method for System Identification
Energy Technology Data Exchange (ETDEWEB)
Glaser, R E; Lee, C L; Nitao, J J; Hanley, W G
2002-10-22
This paper describes a novel methodology for the identification of mechanical systems and structures from vibration response measurements. It combines prior information, observational data and predictive finite element models to produce configurations and system parameter values that are most consistent with the available data and model. Bayesian inference and a Metropolis simulation algorithm form the basis for this approach. The resulting process enables the estimation of distributions of both individual parameters and system-wide states. Attractive features of this approach include its ability to: (1) provide quantitative measures of the uncertainty of a generated estimate; (2) function effectively when exposed to degraded conditions including: noisy data, incomplete data sets and model misspecification; (3) allow alternative estimates to be produced and compared, and (4) incrementally update initial estimates and analysis as more data becomes available. A series of test cases based on a simple fixed-free cantilever beam is presented. These results demonstrate that the algorithm is able to identify the system, based on the stiffness matrix, given applied force and resultant nodal displacements. Moreover, it effectively identifies locations on the beam where damage (represented by a change in elastic modulus) was specified.
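The Metropolis-based inference described above can be illustrated with a one-parameter toy version: a single stiffness k with the scalar relation d = F/k plus Gaussian noise standing in for the paper's finite element model. All numbers here are invented for the example:

```python
import math
import random

def metropolis_stiffness(forces, disps, sigma, n_iter=20_000, seed=5):
    """Random-walk Metropolis over a single stiffness parameter k for the
    toy model d = F/k + N(0, sigma); a one-parameter sketch of the
    Bayesian identification idea (flat prior on k > 0)."""
    rng = random.Random(seed)

    def log_lik(k):
        if k <= 0:
            return -math.inf
        return -sum((d - f / k) ** 2 for f, d in zip(forces, disps)) / (2 * sigma ** 2)

    k, ll = 10.0, log_lik(10.0)
    chain = []
    for _ in range(n_iter):
        prop = k + rng.gauss(0.0, 0.5)          # symmetric Gaussian proposal
        ll_prop = log_lik(prop)
        if math.log(rng.random()) < ll_prop - ll:
            k, ll = prop, ll_prop               # accept the move
        chain.append(k)
    return chain

# synthetic data generated from a "true" stiffness of 20.0
rng = random.Random(0)
forces = [float(f) for f in range(1, 11)]
disps = [f / 20.0 + rng.gauss(0.0, 0.01) for f in forces]
chain = metropolis_stiffness(forces, disps, sigma=0.01)
k_hat = sum(chain[5000:]) / len(chain[5000:])   # posterior mean after burn-in
```

The retained chain gives exactly what the abstract emphasizes: not just a point estimate of k but its full posterior spread, a quantitative measure of the uncertainty of the identification.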
Food composition and acid-base balance: alimentary alkali depletion and acid load in herbivores.
Kiwull-Schöne, Heidrun; Kiwull, Peter; Manz, Friedrich; Kalhoff, Hermann
2008-02-01
Alkali-enriched diets are recommended for humans to diminish the net acid load of their usual diet. In contrast, herbivores have to deal with a high dietary alkali impact on acid-base balance. Here we explore the role of nutritional alkali in experimentally induced chronic metabolic acidosis. Data were collected from healthy male adult rabbits kept in metabolism cages to obtain 24-h urine and arterial blood samples. Randomized groups consumed rabbit diets ad libitum, providing sufficient energy but variable alkali load. One subgroup (n = 10) received high-alkali food and approximately 15 mEq/kg ammonium chloride (NH4Cl) with its drinking water for 5 d. Another group (n = 14) was fed low-alkali food for 5 d and given approximately 4 mEq/kg NH4Cl daily for the last 2 d. The wide range of alimentary acid-base load was significantly reflected by renal base excretion, but normal acid-base conditions were maintained in the arterial blood. In rabbits fed a high-alkali diet, the excreted alkaline urine (pH(u) > 8.0) typically contained a large amount of precipitated carbonate, whereas in rabbits fed a low-alkali diet, both pH(u) and precipitate decreased considerably. During high-alkali feeding, application of NH4Cl likewise decreased pH(u), but arterial pH was still maintained with no indication of metabolic acidosis. During low-alkali feeding, a comparably small amount of added NH4Cl further lowered pH(u) and was accompanied by a significant systemic metabolic acidosis. We conclude that exhausted renal base-saving function by dietary alkali depletion is a prerequisite for growing susceptibility to NH4Cl-induced chronic metabolic acidosis in the herbivore rabbit.
Effects of CT based Voxel Phantoms on Dose Distribution Calculated with Monte Carlo Method
Institute of Scientific and Technical Information of China (English)
Chen Chaobin; Huang Qunying; Wu Yican
2005-01-01
Several CT-based voxel phantoms were produced to investigate the sensitivity of Monte Carlo simulations of X-ray and electron beams to the proportions of elements and the mass densities of the materials used to represent the patient's anatomical structure. The human body can be well outlined by air, lung, adipose, muscle, soft bone and hard bone for calculating the dose distribution with the Monte Carlo method. Based on our investigation, the effects of the calibration curves established with various CT scanners are not clinically significant. The deviation of the cumulative dose-volume histogram values derived from the CT-based voxel phantoms is less than 1% for the given target.
In silico prediction of the β-cyclodextrin complexation based on Monte Carlo method.
Veselinović, Aleksandar M; Veselinović, Jovana B; Toropov, Andrey A; Toropova, Alla P; Nikolić, Goran M
2015-11-10
In this study QSPR models were developed to predict the complexation of structurally diverse compounds with β-cyclodextrin based on SMILES notation optimal descriptors using Monte Carlo method. The predictive potential of the applied approach was tested with three random splits into the sub-training, calibration, test and validation sets and with different statistical methods. Obtained results demonstrate that Monte Carlo method based modeling is a very promising computational method in the QSPR studies for predicting the complexation of structurally diverse compounds with β-cyclodextrin. The SMILES attributes (structural features both local and global), defined as molecular fragments, which are promoters of the increase/decrease of molecular binding constants were identified. These structural features were correlated to the complexation process and their identification helped to improve the understanding for the complexation mechanisms of the host molecules.
Simulation model based on Monte Carlo method for traffic assignment in local area road network
Institute of Scientific and Technical Information of China (English)
Yuchuan DU; Yuanjing GENG; Lijun SUN
2009-01-01
For a local area road network, the available traffic data are the flow volumes at key intersections, not a complete OD matrix. Considering the characteristics and data availability of a local area road network, a new model for traffic assignment based on Monte Carlo simulation of intersection turning movements is provided in this paper. Because of its good stability over time, the turning ratio is adopted as the key parameter of this model. The formulation of the local area road network assignment problem is proposed on the assumption of random turning behavior. The traffic assignment model based on the Monte Carlo method has been used in traffic analysis for an actual urban road network. Comparison of surveyed traffic flow data with the flows determined by the model verifies the applicability and validity of the proposed methodology.
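The core sampling step, drawing each vehicle's turning movement from observed turning ratios, can be sketched for a single intersection. The movement names and ratios are invented for the example; the full method chains such draws across the network:

```python
import random

def assign(inflow, ratios, seed=11):
    """Monte Carlo simulation of one intersection's turning movements:
    each entering vehicle independently picks a movement according to the
    observed turning ratios (a single-node sketch of the network method)."""
    rng = random.Random(seed)
    moves = {m: 0 for m in ratios}
    movements, weights = zip(*ratios.items())
    for _ in range(inflow):
        moves[rng.choices(movements, weights=weights)[0]] += 1
    return moves

# invented turning ratios for one approach of a toy intersection
flows = assign(50_000, {"left": 0.2, "through": 0.65, "right": 0.15})
```

By the law of large numbers the simulated movement shares converge to the input turning ratios, which is why stable ratios make the assignment reproducible over time.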
Development of a space radiation Monte Carlo computer simulation based on the FLUKA and ROOT codes
Pinsky, L; Ferrari, A; Sala, P; Carminati, F; Brun, R
2001-01-01
This NASA funded project is proceeding to develop a Monte Carlo-based computer simulation of the radiation environment in space. With actual funding only initially in place at the end of May 2000, the study is still in the early stage of development. The general tasks have been identified and personnel have been selected. The code to be assembled will be based upon two major existing software packages. The radiation transport simulation will be accomplished by updating the FLUKA Monte Carlo program, and the user interface will employ the ROOT software being developed at CERN. The end-product will be a Monte Carlo-based code which will complement the existing analytic codes such as BRYNTRN/HZETRN presently used by NASA to evaluate the effects of radiation shielding in space. The planned code will possess the ability to evaluate the radiation environment for spacecraft and habitats in Earth orbit, in interplanetary space, on the lunar surface, or on a planetary surface such as Mars. Furthermore, it will be usef...
Hu, Xingzhi; Chen, Xiaoqian; Parks, Geoffrey T.; Yao, Wen
2016-10-01
Ever-increasing demands of uncertainty-based design, analysis, and optimization in aerospace vehicles motivate the development of Monte Carlo methods with wide adaptability and high accuracy. This paper presents a comprehensive review of typical improved Monte Carlo methods and summarizes their characteristics to aid the uncertainty-based multidisciplinary design optimization (UMDO). Among them, Bayesian inference aims to tackle the problems with the availability of prior information like measurement data. Importance sampling (IS) settles the inconvenient sampling and difficult propagation through the incorporation of an intermediate importance distribution or sequential distributions. Optimized Latin hypercube sampling (OLHS) is a stratified sampling approach to achieving better space-filling and non-collapsing characteristics. Meta-modeling approximation based on Monte Carlo saves the computational cost by using cheap meta-models for the output response. All the reviewed methods are illustrated by corresponding aerospace applications, which are compared to show their techniques and usefulness in UMDO, thus providing a beneficial reference for future theoretical and applied research.
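Importance sampling, one of the methods reviewed above, can be illustrated with the classic small-failure-probability example of a standard normal tail. The shift-to-the-failure-region proposal is the textbook choice, not a method from any one reviewed paper:

```python
import math
import random

def failure_prob_is(beta=4.0, n=50_000, seed=2):
    """Importance sampling estimate of P(X > beta) for standard normal X,
    shifting the sampling density to N(beta, 1) so that 'failures' become
    common; each hit is reweighted by the likelihood ratio."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = beta + rng.gauss(0.0, 1.0)          # draw from the shifted proposal
        if x > beta:
            # likelihood ratio N(0,1)/N(beta,1) evaluated at x
            total += math.exp(beta ** 2 / 2 - beta * x)
    return total / n

p_is = failure_prob_is()
```

Crude Monte Carlo would need on the order of 10⁸ samples to see a handful of exceedances of β = 4, while the shifted proposal reaches about 1% relative error with 5 × 10⁴ samples, the kind of efficiency gain that motivates IS in reliability-driven UMDO.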
Pair correlations in iron-based superconductors: Quantum Monte Carlo study
Energy Technology Data Exchange (ETDEWEB)
Kashurnikov, V.A.; Krasavin, A.V., E-mail: avkrasavin@gmail.com
2014-08-01
A new generalized quantum continuous-time world-line Monte Carlo algorithm was developed to calculate pair correlation functions for two-dimensional FeAs clusters modeling iron-based superconductors within a two-orbital model. The data obtained for clusters with sizes up to 10×10 FeAs cells favor the possibility of an effective attraction between charge carriers corresponding to the A{sub 1g} symmetry at some interaction parameters. The analysis of pair correlations depending on the cluster size, temperature, interaction, and the type of symmetry of the order parameter is carried out. - Highlights: • A new generalized quantum continuous-time world-line Monte Carlo algorithm is developed. • Pair correlation functions for two-dimensional FeAs clusters are calculated. • Parameters of the two-orbital model corresponding to attraction of carriers are defined.
Electron density of states of Fe-based superconductors: Quantum trajectory Monte Carlo method
Kashurnikov, V. A.; Krasavin, A. V.; Zhumagulov, Ya. V.
2016-03-01
The spectral and total electron densities of states in two-dimensional FeAs clusters, which simulate iron-based superconductors, have been calculated using the generalized quantum Monte Carlo algorithm within the full two-orbital model. Spectra have been reconstructed by solving the integral equation relating the Matsubara Green's function and spectral density by the method combining the gradient descent and Monte Carlo algorithms. The calculations have been performed for clusters with dimensions up to 10 × 10 FeAs cells. The profiles of the Fermi surface for the entire Brillouin zone have been presented in the quasiparticle approximation. Data for the total density of states near the Fermi level have been obtained. The effect of the interaction parameter, size of the cluster, and temperature on the spectrum of excitations has been studied.
Directory of Open Access Journals (Sweden)
He Deyu
2016-09-01
Assessing the risks of steering system faults in underwater vehicles is a human-machine-environment (HME) systematic safety field that studies faults in the steering system itself, the driver's human reliability (HR) and various environmental conditions. This paper proposes a fault risk assessment method for an underwater vehicle steering system based on virtual prototyping and Monte Carlo simulation. A virtual steering system prototype was established and validated to rectify a lack of historic fault data. Fault injection and simulation were conducted to acquire fault simulation data. A Monte Carlo simulation was adopted that integrated the randomness and uncertainty of the human operator, machine and environment to obtain a probabilistic risk indicator. To verify the proposed method, a case of stuck rudder fault (SRF) risk assessment was studied. This method may provide a novel solution for fault risk assessment of a vehicle or other general HME systems.
ERSN-OpenMC, a Java-based GUI for OpenMC Monte Carlo code
Directory of Open Access Journals (Sweden)
Jaafar EL Bakkali
2016-07-01
OpenMC is a new Monte Carlo particle transport simulation code focused on solving two types of neutronics problems: k-eigenvalue criticality fission source problems and external fixed fission source problems. OpenMC does not have a graphical user interface; we provide one with our Java-based application, ERSN-OpenMC. The main feature of this application is to give users an easy-to-use and flexible graphical interface to build better and faster simulations with less effort and great reliability. Additionally, this graphical tool was developed with several features, such as the ability to automate the building process of the OpenMC code and related libraries, and users are given the freedom to customize their installation of this Monte Carlo code. A full description of the ERSN-OpenMC application is presented in this paper.
GPU-accelerated Monte Carlo simulation of particle coagulation based on the inverse method
Wei, J.; Kruis, F. E.
2013-09-01
Simulating particle coagulation using Monte Carlo methods is in general a challenging computational task due to its numerical complexity and computing cost. Currently, the lowest computing costs are obtained by applying a graphics processing unit (GPU), originally developed for speeding up graphics processing in the consumer market. In this article we present an implementation on the GPU of a Monte Carlo method based on the inverse scheme for simulating particle coagulation. The abundant data parallelism embedded within the Monte Carlo method is explained, as it allows an efficient parallelization of the MC code on the GPU. Furthermore, the computational accuracy of the MC on the GPU was validated against a benchmark, a CPU-based discrete-sectional method. To evaluate the performance gains of using the GPU, the computing time on the GPU was compared against that of its sequential counterpart on the CPU. The measured speedups show that the GPU can accelerate the execution of the MC code by a factor of 10-100, depending on the chosen number of simulation particles. The algorithm shows a linear dependence of computing time on the number of simulation particles, which is a remarkable result in view of the n² dependence of coagulation.
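An event-driven Monte Carlo of constant-kernel coagulation, the kind of simulation being accelerated, can be sketched in serial form. The kernel, scaling volume and all parameters are invented for the example; the GPU parallelization itself is beyond a short sketch:

```python
import random

def coagulate(n0=10_000, k=1.0, t_end=1.0, seed=9):
    """Event-driven constant-kernel coagulation MC: pick a random pair,
    merge it, and advance time by an exponential inter-event gap.
    Mean-field check for the constant kernel: N(t) = n0 / (1 + 0.5*k*t)."""
    rng = random.Random(seed)
    sizes = [1.0] * n0                          # everyone starts as a monomer
    t = 0.0
    while len(sizes) > 1:
        n = len(sizes)
        rate = 0.5 * k * n * (n - 1) / n0       # total coagulation rate (volume ~ n0)
        t += rng.expovariate(rate)              # exponential waiting time
        if t > t_end:
            break
        i, j = rng.sample(range(n), 2)          # choose a random distinct pair
        sizes[i] += sizes[j]                    # merge: mass is conserved
        sizes.pop(j)
    return sizes
```

For the constant kernel the particle count closely follows N(t) = n0/(1 + 0.5·k·t), so at k = 1, t = 1 roughly a third of the particles have been consumed; total mass is conserved exactly. The per-event pair selection is what the GPU implementation parallelizes across many simulation particles.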
Institute of Scientific and Technical Information of China (English)
SHAO Yong-Bo; ZHAO Ling-Juan; YU Hong-Yan; QIU Ji-Fang; QIU Ying-Ping; PAN Jiao-Qing; WANG Bao-Jun; ZHU Hong-Liang; WANG Wei
2011-01-01
A novel dual-depletion-region electroabsorption modulator (DDR-EAM) based on InP at 1550 nm is fabricated. The measured capacitance and extinction ratio of the DDR-EAM reveal that the dual-depletion-region structure can reduce the device capacitance significantly without any degradation of the extinction ratio. Moreover, the bandwidth of the DDR-EAM predicted by using an equivalent circuit model is larger than twice the bandwidth of the conventional lumped-electrode EAM (L-EAM). The electroabsorption modulator (EAM) is highly desirable as an external electro-optical modulator due to its high speed, low cost and capability of integration with other optical components such as DFB lasers, DBR lasers or semiconductor optical amplifiers.[1-4] So far, EAMs are typically fabricated by using lumped electrodes[1-4] and travelling-wave electrodes.[5-15]
Monte Carlo Methods in Materials Science Based on FLUKA and ROOT
Pinsky, Lawrence; Wilson, Thomas; Empl, Anton; Andersen, Victor
2003-01-01
A comprehensive understanding of mitigation measures for space radiation protection necessarily involves the relevant fields of nuclear physics and particle transport modeling. One method of modeling the interaction of radiation traversing matter is Monte Carlo analysis, a subject that has been evolving since the very advent of nuclear reactors and particle accelerators in experimental physics. Countermeasures for radiation protection from neutrons near nuclear reactors, for example, were an early application, and Monte Carlo methods were quickly adapted to this general field of investigation. The project discussed here is concerned with taking the latest tools and technology in Monte Carlo analysis and adapting them to space applications such as radiation shielding design for spacecraft, as well as investigating how next-generation Monte Carlo codes can complement the existing analytical methods currently used by NASA. We have chosen to employ the Monte Carlo program known as FLUKA (a legacy acronym based on the German for FLUctuating KAscade) to simulate all of the particle transport, and the CERN-developed graphical-interface object-oriented analysis software called ROOT. One aspect of space radiation analysis for which Monte Carlo codes are particularly suited is the study of secondary radiation produced as albedos in the vicinity of the structural geometry involved. This broad goal of simulating space radiation transport through the relevant materials employing the FLUKA code necessarily requires the addition of the capability to simulate all heavy-ion interactions from 10 MeV/A up to the highest conceivable energies. For all energies above 3 GeV/A the Dual Parton Model (DPM) is currently used, although the possible improvement of the DPMJET event generator for energies of 3-30 GeV/A is being considered. One of the major tasks still facing us is the provision for heavy-ion interactions below 3 GeV/A. The ROOT interface is being developed in conjunction with the
Comparing analytical and Monte Carlo optical diffusion models in phosphor-based X-ray detectors
Kalyvas, N.; Liaparinos, P.
2014-03-01
Luminescent materials are employed as radiation-to-light converters in detectors of medical imaging systems, often referred to as phosphor screens. Several processes affect the light transfer properties of phosphors, among the most important of which is the attenuation of light. Light attenuation (absorption and scattering) can be described either through "diffusion" theory in theoretical models or "quantum" theory in Monte Carlo methods. Although analytical methods based on photon diffusion equations have been preferentially employed to investigate optical diffusion in the past, Monte Carlo simulation models can overcome several of the analytical modelling assumptions. The present study aimed to compare both methodologies and investigate the dependence of the analytical model's optical parameters on particle size. It was found that the optical photon attenuation coefficients calculated by analytical modeling decrease with respect to particle size (in the region 1-12 μm). In addition, for particle sizes smaller than 6 μm there is no simultaneous agreement between the theoretical modulation transfer function and light escape values with respect to the Monte Carlo data.
Energy Technology Data Exchange (ETDEWEB)
Murata, Isao [Osaka Univ., Suita (Japan); Mori, Takamasa; Nakagawa, Masayuki; Itakura, Hirofumi
1996-03-01
A method to calculate the neutronics parameters of a core composed of randomly distributed spherical fuels has been developed, based on a statistical geometry model with a continuous-energy Monte Carlo method. The method was implemented in the general-purpose Monte Carlo code MCNP, and a new code, MCNP-CFP, has been developed. This paper describes the model, the method and its usage, and the validation results. In the Monte Carlo calculation, the location of a spherical fuel is sampled probabilistically along the particle flight path from the spatial probability distribution of spherical fuels, called the nearest neighbor distribution (NND). This sampling method was validated through the following two comparisons: (1) calculations of the inventory of coated fuel particles (CFPs) in a fuel compact by both the track length estimator and a direct evaluation method, and (2) criticality calculations for ordered packed geometries. The method was also confirmed by applying it to an analysis of the critical assembly experiment at VHTRC. The method established in the present study is unique in providing a probabilistic model of a geometry containing a great number of randomly distributed spherical fuels. With speed-up by vector or parallel computation in the future, it is expected to be widely used in calculations of nuclear reactor cores, especially HTGR cores. (author).
Thermal transport in nanocrystalline Si and SiGe by ab initio based Monte Carlo simulation.
Yang, Lina; Minnich, Austin J
2017-03-14
Nanocrystalline thermoelectric materials based on Si have long been of interest because Si is earth-abundant, inexpensive, and non-toxic. However, a poor understanding of phonon grain boundary scattering and its effect on thermal conductivity has impeded efforts to improve the thermoelectric figure of merit. Here, we report an ab-initio based computational study of thermal transport in nanocrystalline Si-based materials using a variance-reduced Monte Carlo method with the full phonon dispersion and intrinsic lifetimes from first-principles as input. By fitting the transmission profile of grain boundaries, we obtain excellent agreement with experimental thermal conductivity of nanocrystalline Si [Wang et al. Nano Letters 11, 2206 (2011)]. Based on these calculations, we examine phonon transport in nanocrystalline SiGe alloys with ab-initio electron-phonon scattering rates. Our calculations show that low energy phonons still transport substantial amounts of heat in these materials, despite scattering by electron-phonon interactions, due to the high transmission of phonons at grain boundaries, and thus improvements in ZT are still possible by disrupting these modes. This work demonstrates the important insights into phonon transport that can be obtained using ab-initio based Monte Carlo simulations in complex nanostructured materials.
Lin, N.-H.; Saxena, V. K.
1992-01-01
The physical characteristics of the Antarctic stratospheric aerosol are investigated via a comprehensive analysis of the SAGE II data during the most severe ozone depletion episode of October 1987. The aerosol size distribution is found to be bimodal in several instances using the randomized minimization search technique, which suggests that the distribution of a single mode may be used to fit the data in the retrieved size range only at the expense of resolution for the larger particles. On average, in the region below 18 km, a wavelike perturbation with the upstream tilting for the parameters of mass loading, total number, and surface area concentration is found to be located just above the region of the most severe ozone depletion.
Adjoint-based uncertainty quantification and sensitivity analysis for reactor depletion calculations
Stripling, Hayes Franklin
Depletion calculations for nuclear reactors model the dynamic coupling between the material composition and the neutron flux and help predict reactor performance and safety characteristics. In order to be trusted as reliable predictive tools and inputs to licensing and operational decisions, the simulations must include an accurate and holistic quantification of errors and uncertainties in their outputs. Uncertainty quantification is a formidable challenge in large, realistic reactor models because of the large number of unknowns and myriad sources of uncertainty and error. We present a framework for performing efficient uncertainty quantification in depletion problems using an adjoint approach, with emphasis on high-fidelity calculations using advanced massively parallel computing architectures. This approach calls for a solution to two systems of equations: (a) the forward, engineering system that models the reactor, and (b) the adjoint system, which is mathematically related to but different from the forward system. We use the solutions of these systems to produce sensitivity and error estimates at a cost that does not grow rapidly with the number of uncertain inputs. We present the framework in a general fashion and apply it to both the source-driven and k-eigenvalue forms of the depletion equations. We describe the implementation and verification of solvers for the forward and adjoint equations in the PDT code, and we test the algorithms on realistic reactor analysis problems. We demonstrate a new approach for reducing the memory and I/O demands on the host machine, which can be overwhelming for typical adjoint algorithms. Our conclusion is that adjoint depletion calculations using full transport solutions are not only computationally tractable but are also the most attractive option for performing uncertainty quantification on high-fidelity reactor analysis problems.
GFS algorithm based on batch Monte Carlo trials for solving global optimization problems
Popkov, Yuri S.; Darkhovskiy, Boris S.; Popkov, Alexey Y.
2016-10-01
A new method for the global optimization of Hölder goal functions on compact sets defined by inequalities is proposed. All functions are defined only algorithmically. The method is based on performing simple Monte Carlo trials and constructing the sequence of records and the sequence of their decrements. A procedure for estimating the Hölder constants is proposed, and a probability estimate for a neighborhood of the exact global minimum, based on the estimated Hölder constants, is presented. Results on several analytical and algorithmic test problems illustrate the method's performance.
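As a concrete illustration of the record-sequence idea, the following Python sketch runs batches of simple Monte Carlo trials on a compact set and tracks the running record (best value seen) and its decrements. The test function, sampling domain, and batch sizes are hypothetical stand-ins for illustration, not the authors' GFS implementation or their Hölder-constant estimator.

```python
import random

def mc_record_minimize(f, sampler, batches=200, batch_size=50, seed=0):
    """Batch Monte Carlo minimization: draw random trial points in batches,
    track the running record (best value so far) and the sequence of its
    decrements (how much each new record improves on the previous one)."""
    rng = random.Random(seed)
    record = float("inf")
    decrements = []
    for _ in range(batches):
        best_in_batch = min(f(sampler(rng)) for _ in range(batch_size))
        if best_in_batch < record:
            if record != float("inf"):
                decrements.append(record - best_in_batch)
            record = best_in_batch
    return record, decrements

# Hypothetical Hölder-continuous test function on the compact set [-2, 2]^2,
# with its global minimum of 0 at (0, 1).
f = lambda x: abs(x[0]) ** 0.5 + (x[1] - 1.0) ** 2
sampler = lambda rng: (rng.uniform(-2.0, 2.0), rng.uniform(-2.0, 2.0))
record, decs = mc_record_minimize(f, sampler)
```

The sequence of decrements is the raw material a Hölder-constant estimator of the kind described in the abstract would work from.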
A fast Monte Carlo code for proton transport in radiation therapy based on MCNPX
Keyvan Jabbari; Jan Seuntjens
2014-01-01
An important requirement for proton therapy is a software for dose calculation. Monte Carlo is the most accurate method for dose calculation, but it is very slow. In this work, a method is developed to improve the speed of dose calculation. The method is based on pre-generated tracks for particle transport. The MCNPX code has been used for generation of tracks. A set of data including the track of the particle was produced in each particular material (water, air, lung tissue, bone, and soft t...
Pair correlation functions of FeAs-based superconductors: Quantum Monte Carlo study
Kashurnikov, V. A.; Krasavin, A. V.
2015-01-01
A new generalized quantum continuous-time worldline Monte Carlo algorithm was developed to calculate pair correlation functions for two-dimensional FeAs clusters modeling iron-based superconductors within the framework of the two-orbital model. The analysis of pair correlations depending on the cluster size, temperature, interaction, and the type of symmetry of the order parameter is carried out. The data obtained for clusters with sizes up to 10×10 FeAs cells favor the possibility of an effective charge-carrier attraction corresponding to the A1g symmetry at some interaction parameters.
Energy Technology Data Exchange (ETDEWEB)
Zhu Feng [State Key Laboratory for Physical Chemistry of Solid Surfaces and Department of Chemistry, College of Chemistry and Chemical Engineering, Xiamen University, Xiamen, Fujian 361005 (China); Yan Jiawei, E-mail: jwyan@xmu.edu.cn [State Key Laboratory for Physical Chemistry of Solid Surfaces and Department of Chemistry, College of Chemistry and Chemical Engineering, Xiamen University, Xiamen, Fujian 361005 (China); Lu Miao [Pen-Tung Sah Micro-Nano Technology Research Center, Xiamen University, Xiamen, Fujian 361005 (China); Zhou Yongliang; Yang Yang; Mao Bingwei [State Key Laboratory for Physical Chemistry of Solid Surfaces and Department of Chemistry, College of Chemistry and Chemical Engineering, Xiamen University, Xiamen, Fujian 361005 (China)
2011-10-01
Highlights: > A novel strategy based on a combination of interferent depletion and redox cycling is proposed for plane-recessed microdisk array electrodes. > The strategy removes the restriction of selectively detecting a species that exhibits a reversible reaction in a mixture with one that exhibits an irreversible reaction. > The electrodes enhance the current signal by redox cycling. > The electrodes work regardless of the reversibility of the interfering species. - Abstract: The fabrication, characterization and application of plane-recessed microdisk array electrodes for selective detection are demonstrated. The electrodes, fabricated by lithographic microfabrication technology, are composed of a planar film electrode and a 32 × 32 recessed microdisk array electrode. Different from the commonly used redox cycling operating mode for array configurations such as interdigitated array electrodes, a novel strategy based on a combination of interferent depletion and redox cycling is proposed for electrodes with an appropriate configuration. The planar film electrode (the plane electrode) is used to deplete the interferent in the diffusion layer. The recessed microdisk array electrode (the microdisk array), located within the diffusion layer of the plane electrode, detects the target analyte in the interferent-depleted diffusion layer. In addition, the microdisk array overcomes the disadvantage of the low current signal of a single microelectrode. Moreover, the current signal of a target analyte that undergoes reversible electron transfer can be enhanced by the redox cycling between the plane electrode and the microdisk array. Based on the above working principle, the plane-recessed microdisk array electrodes remove the restriction of selectively detecting a species that exhibits a reversible reaction in a mixture with one that exhibits an irreversible reaction, which is a limitation of the single redox cycling operating mode. The advantages of the
PeneloPET, a Monte Carlo PET simulation tool based on PENELOPE: features and validation
Energy Technology Data Exchange (ETDEWEB)
Espana, S; Herraiz, J L; Vicente, E; Udias, J M [Grupo de Fisica Nuclear, Departmento de Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid, Madrid (Spain); Vaquero, J J; Desco, M [Unidad de Medicina y CirugIa Experimental, Hospital General Universitario Gregorio Maranon, Madrid (Spain)], E-mail: jose@nuc2.fis.ucm.es
2009-03-21
Monte Carlo simulations play an important role in positron emission tomography (PET) imaging, as an essential tool for the research and development of new scanners and for advanced image reconstruction. PeneloPET, a PET-dedicated Monte Carlo tool, is presented and validated in this work. PeneloPET is based on PENELOPE, a Monte Carlo code for the simulation of the transport in matter of electrons, positrons and photons, with energies from a few hundred eV to 1 GeV. PENELOPE is robust, fast and very accurate, but it may be unfriendly to people not acquainted with the FORTRAN programming language. PeneloPET is an easy-to-use application which allows comprehensive simulations of PET systems within PENELOPE. Complex and realistic simulations can be set by modifying a few simple input text files. Different levels of output data are available for analysis, from sinogram and lines-of-response (LORs) histogramming to fully detailed list mode. These data can be further exploited with the preferred programming language, including ROOT. PeneloPET simulates PET systems based on crystal array blocks coupled to photodetectors and allows the user to define radioactive sources, detectors, shielding and other parts of the scanner. The acquisition chain is simulated in high level detail; for instance, the electronic processing can include pile-up rejection mechanisms and time stamping of events, if desired. This paper describes PeneloPET and shows the results of extensive validations and comparisons of simulations against real measurements from commercial acquisition systems. PeneloPET is being extensively employed to improve the image quality of commercial PET systems and for the development of new ones.
Yu, Hui; Pantouvaki, Marianna; Van Campenhout, Joris; Korn, Dietmar; Komorowska, Katarzyna; Dumon, Pieter; Li, Yanlu; Verheyen, Peter; Absil, Philippe; Alloatti, Luca; Hillerkuss, David; Leuthold, Juerg; Baets, Roel; Bogaerts, Wim
2012-06-04
Carrier-depletion based silicon modulators with lateral and interdigitated PN junctions are compared systematically on the same fabrication platform. The interdigitated diode is shown to outperform the lateral diode in achieving a low VπLπ of 0.62 V∙cm with comparable propagation loss, at the expense of a higher depletion capacitance. The low VπLπ of the interdigitated PN junction is exploited to demonstrate 10 Gbit/s modulation with a 7.5 dB extinction ratio from a 500 µm long device whose static insertion loss is 2.8 dB. In addition, up to 40 Gbit/s modulation is demonstrated for a 3 mm long device comprising a lateral diode and a co-designed traveling-wave electrode.
GPU-based Monte Carlo radiotherapy dose calculation using phase-space sources
Townson, Reid; Tian, Zhen; Graves, Yan Jiang; Zavgorodni, Sergei; Jiang, Steve B
2013-01-01
A novel phase-space source implementation has been designed for GPU-based Monte Carlo dose calculation engines. Due to the parallelized nature of GPU hardware, it is essential to simultaneously transport particles of the same type and similar energies but separated spatially to yield a high efficiency. We present three methods for phase-space implementation that have been integrated into the most recent version of the GPU-based Monte Carlo radiotherapy dose calculation package gDPM v3.0. The first method is to sequentially read particles from a patient-dependent phase-space and sort them on-the-fly based on particle type and energy. The second method supplements this with a simple secondary collimator model and fluence map implementation so that patient-independent phase-space sources can be used. Finally, as the third method (called the phase-space-let, or PSL, method) we introduce a novel strategy to pre-process patient-independent phase-spaces and bin particles by type, energy and position. Position bins l...
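The on-the-fly sorting described above can be sketched in a few lines. The dictionary-based binning below groups phase-space particles by type and energy bin so that similar particles can be batched together for GPU transport; the record layout and bin width are hypothetical illustrations, not gDPM's actual phase-space-let format.

```python
from collections import defaultdict

def bin_phase_space(particles, energy_bin_width=0.5):
    """Group phase-space particles by (type, energy bin) so that particles
    of the same type and similar energy can be transported together.
    Each particle is a dict with 'type', 'energy' (MeV), and 'position'
    keys (an assumed record layout, for illustration only)."""
    bins = defaultdict(list)
    for p in particles:
        key = (p["type"], int(p["energy"] // energy_bin_width))
        bins[key].append(p)
    return bins

# Hypothetical phase-space sample.
particles = [
    {"type": "photon", "energy": 1.1, "position": (0.0, 0.0, 0.0)},
    {"type": "photon", "energy": 1.3, "position": (1.0, 0.0, 0.0)},
    {"type": "electron", "energy": 0.4, "position": (0.0, 1.0, 0.0)},
]
bins = bin_phase_space(particles)  # two photons share one bin
```

Pre-computing such bins once per patient-independent phase-space is what turns the sequential-read approach into the PSL approach described above.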
A Monte-Carlo based model of the AX-PET demonstrator and its experimental validation.
Solevi, P; Oliver, J F; Gillam, J E; Bolle, E; Casella, C; Chesi, E; De Leo, R; Dissertori, G; Fanti, V; Heller, M; Lai, M; Lustermann, W; Nappi, E; Pauss, F; Rudge, A; Ruotsalainen, U; Schinzel, D; Schneider, T; Séguinot, J; Stapnes, S; Weilhammer, P; Tuna, U; Joram, C; Rafecas, M
2013-08-21
AX-PET is a novel PET detector based on axially oriented crystals and orthogonal wavelength shifter (WLS) strips, both individually read out by silicon photo-multipliers. Its design decouples sensitivity and spatial resolution, by reducing the parallax error due to the layered arrangement of the crystals. Additionally the granularity of AX-PET enhances the capability to track photons within the detector yielding a large fraction of inter-crystal scatter events. These events, if properly processed, can be included in the reconstruction stage further increasing the sensitivity. Its unique features require dedicated Monte-Carlo simulations, enabling the development of the device, interpreting data and allowing the development of reconstruction codes. At the same time the non-conventional design of AX-PET poses several challenges to the simulation and modeling tasks, mostly related to the light transport and distribution within the crystals and WLS strips, as well as the electronics readout. In this work we present a hybrid simulation tool based on an analytical model and a Monte-Carlo based description of the AX-PET demonstrator. It was extensively validated against experimental data, providing excellent agreement.
X-ray imaging plate performance investigation based on a Monte Carlo simulation tool
Energy Technology Data Exchange (ETDEWEB)
Yao, M., E-mail: philippe.duvauchelle@insa-lyon.fr [Laboratoire Vibration Acoustique (LVA), INSA de Lyon, 25 Avenue Jean Capelle, 69621 Villeurbanne Cedex (France); Duvauchelle, Ph.; Kaftandjian, V. [Laboratoire Vibration Acoustique (LVA), INSA de Lyon, 25 Avenue Jean Capelle, 69621 Villeurbanne Cedex (France); Peterzol-Parmentier, A. [AREVA NDE-Solutions, 4 Rue Thomas Dumorey, 71100 Chalon-sur-Saône (France); Schumm, A. [EDF R&D SINETICS, 1 Avenue du Général de Gaulle, 92141 Clamart Cedex (France)
2015-01-01
Computed radiography (CR) based on imaging plate (IP) technology represents a potential replacement technique for traditional film-based industrial radiography. For investigating the IP performance especially at high energies, a Monte Carlo simulation tool based on PENELOPE has been developed. This tool tracks separately direct and secondary radiations, and monitors the behavior of different particles. The simulation output provides 3D distribution of deposited energy in IP and evaluation of radiation spectrum propagation allowing us to visualize the behavior of different particles and the influence of different elements. A detailed analysis, on the spectral and spatial responses of IP at different energies up to MeV, has been performed. - Highlights: • A Monte Carlo tool for imaging plate (IP) performance investigation is presented. • The tool outputs 3D maps of energy deposition in IP due to different signals. • The tool also provides the transmitted spectra along the radiation propagation. • An industrial imaging case is simulated with the presented tool. • A detailed analysis, on the spectral and spatial responses of IP, is presented.
Energy Technology Data Exchange (ETDEWEB)
Ureba, A.; Pereira-Barbeiro, A. R.; Jimenez-Ortega, E.; Baeza, J. A.; Salguero, F. J.; Leal, A.
2013-07-01
The use of Monte Carlo (MC) methods has been shown to improve the accuracy of dose calculation compared to the analytical algorithms installed in commercial treatment planning systems, especially in the non-standard situations typical of complex techniques such as IMRT and VMAT. Our treatment planning system, CARMEN, is based on full simulation of the beam transport both in the accelerator head and in the patient, and is designed for efficient operation in terms of the accuracy of the estimate and the required computation times. (Author)
Comment on "A study on tetrahedron-based inhomogeneous Monte-Carlo optical simulation".
Fang, Qianqian
2011-04-19
The Monte Carlo (MC) method is a popular approach to modeling photon propagation inside general turbid media, such as human tissue. Progress has been made in the past year with the independent proposals of two mesh-based Monte Carlo methods employing ray-tracing techniques. Both methods have shown improvements in accuracy and efficiency in modeling complex domains. A recent paper by Shen and Wang [Biomed. Opt. Express 2, 44 (2011)] reported preliminary results towards the cross-validation of the two mesh-based MC algorithms and software implementations, showing a 3-6 fold speed difference between the two software packages. In this comment, we share our views on unbiased software comparisons and discuss additional issues such as the use of pre-computed data, interpolation strategies, the impact of compiler settings, the use of Russian roulette, memory cost and potential pitfalls in measuring algorithm performance. Despite key differences between the two algorithms in the handling of non-tetrahedral meshes, we found that they share a similar structure and performance for tetrahedral meshes. A significant fraction of the speed difference observed in the mentioned article was the result of inconsistent use of compilers and libraries.
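One of the issues raised in the comment, Russian roulette, is a standard variance-reduction step in photon Monte Carlo codes. The sketch below shows the textbook version, with illustrative threshold and survival-weight values that are not taken from either package discussed: low-weight photons are probabilistically killed, and survivors have their weight boosted so that the expected weight is unchanged.

```python
import random

def russian_roulette(weight, threshold=0.1, survival_weight=0.5, rng=random):
    """Standard Russian roulette: when a photon's weight drops below the
    threshold, kill it with probability 1 - weight/survival_weight,
    otherwise boost it to survival_weight.  Returns the new weight
    (0.0 means the photon history is terminated)."""
    if weight >= threshold:
        return weight
    if rng.random() < weight / survival_weight:
        return survival_weight
    return 0.0

# Unbiasedness check: the mean post-roulette weight equals the input weight.
rng = random.Random(0)
mean_weight = sum(russian_roulette(0.05, rng=rng) for _ in range(20000)) / 20000
```

Because the expected weight is preserved, the game reduces the number of tracked low-weight photons without biasing the tallies, which is why its use (or non-use) matters in the performance comparisons discussed above.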
Monte Carlo simulation as a tool to predict blasting fragmentation based on the Kuz Ram model
Morin, Mario A.; Ficarazzo, Francesco
2006-04-01
Rock fragmentation is considered the most important aspect of production blasting because of its direct effects on the costs of drilling and blasting and on the economics of the subsequent operations of loading, hauling and crushing. Over the past three decades, significant progress has been made in the development of new technologies for blasting applications, including increasingly sophisticated computer models for blast design and blast performance prediction. Rock fragmentation depends on many variables such as rock mass properties, site geology, in situ fracturing and blasting parameters, and as such has no complete theoretical solution for its prediction. However, empirical models for the estimation of the size distribution of rock fragments have been developed. In this study, a Monte Carlo-based blast fragmentation simulator, built on the Kuz-Ram fragmentation model, has been developed to predict the entire fragmentation size distribution, taking into account intact rock and joint properties, the type and properties of explosives, and the drilling pattern. Results produced by this simulator compared quite favorably with real fragmentation data obtained from a quarry blast. It is anticipated that the use of Monte Carlo simulation will increase our understanding of the effects of rock mass and explosive properties on rock fragmentation by blasting, as well as increase our confidence in these empirical models. This understanding will translate into improvements in blasting operations, their corresponding costs and the overall economics of open pit mines and rock quarries.
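The core of such a simulator is sampling fragment sizes from the Rosin-Rammler distribution that underlies the Kuz-Ram model. A minimal Python sketch using inverse-transform sampling is shown below; the mean fragment size and uniformity index are illustrative values, not parameters taken from the study.

```python
import math
import random

def sample_fragment_sizes(x50, n, count=10000, seed=1):
    """Inverse-transform Monte Carlo sampling from the Rosin-Rammler
    distribution used in the Kuz-Ram model: P(x) = 1 - exp(-(x/xc)^n).
    x50 is the 50%-passing fragment size (cm); n is the uniformity index
    normally derived from the drilling pattern."""
    xc = x50 / (math.log(2.0) ** (1.0 / n))  # characteristic size
    rng = random.Random(seed)
    return [xc * (-math.log(1.0 - rng.random())) ** (1.0 / n)
            for _ in range(count)]

# Hypothetical blast: 50%-passing size of 30 cm, uniformity index 1.5.
sizes = sample_fragment_sizes(x50=30.0, n=1.5)
passing_50 = sum(s <= 30.0 for s in sizes) / len(sizes)  # ~0.5 by construction
```

In a full Kuz-Ram simulator, x50 and n would themselves be sampled from distributions of the rock mass and explosive properties, which is what turns the empirical model into a Monte Carlo one.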
GPU-based high performance Monte Carlo simulation in neutron transport
Energy Technology Data Exchange (ETDEWEB)
Heimlich, Adino; Mol, Antonio C.A.; Pereira, Claudio M.N.A. [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil). Lab. de Inteligencia Artificial Aplicada], e-mail: cmnap@ien.gov.br
2009-07-01
Graphics Processing Units (GPUs) are high-performance co-processors originally intended to improve the use and quality of computer graphics applications. Once researchers and practitioners realized the potential of GPUs for general-purpose computation, their application was extended to fields beyond the scope of computer graphics. The main objective of this work is to evaluate the impact of using GPUs in neutron transport simulation by the Monte Carlo method. To accomplish that, GPU- and CPU-based (single- and multi-core) approaches were developed and applied to a simple but time-consuming problem. Comparisons demonstrated that the GPU-based approach is about 15 times faster than a parallel 8-core CPU-based approach also developed in this work. (author)
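The "simple but time-consuming problem" is not specified in the abstract; a typical candidate is analog neutron transport through a slab. The single-threaded Python sketch below, with hypothetical cross sections rather than the benchmark actually used, shows the kind of history loop that parallelizes naturally on a GPU, since each neutron history is independent of the others.

```python
import math
import random

def slab_transmission(sigma_t, absorb_prob, thickness, histories=40000, seed=0):
    """Analog Monte Carlo for neutrons normally incident on a 1D slab:
    sample exponential free flights, absorb or scatter isotropically at
    each collision, and tally the fraction transmitted through the far
    face.  sigma_t is the total macroscopic cross section (1/cm)."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(histories):
        x, mu = 0.0, 1.0  # start at the near face, moving forward
        while True:
            x += mu * (-math.log(rng.random()) / sigma_t)  # free flight
            if x >= thickness:
                transmitted += 1
                break
            if x < 0.0:
                break  # leaked out of the near face
            if rng.random() < absorb_prob:
                break  # absorbed at the collision site
            mu = rng.uniform(-1.0, 1.0)  # isotropic scatter (direction cosine)
    return transmitted / histories

# Pure absorber sanity check: transmission should approach exp(-sigma_t * d).
frac = slab_transmission(sigma_t=1.0, absorb_prob=1.0, thickness=2.0)
```

On a GPU, each history in the outer loop would map to one thread, which is why speed-ups like the 15x reported above are attainable for this class of problem.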
IMPROVED ALGORITHM FOR ROAD REGION SEGMENTATION BASED ON SEQUENTIAL MONTE-CARLO ESTIMATION
Directory of Open Access Journals (Sweden)
Zdenek Prochazka
2014-12-01
In recent years, many researchers and car makers have put intensive effort into the development of autonomous driving systems. Since visual information is the main modality used by human drivers, a camera mounted on a moving platform is a very important kind of sensor, and various computer vision algorithms for handling the vehicle's surroundings are under intensive research. Our final goal is to develop a vision-based lane detection system able to handle various types of road shapes, working on both structured and unstructured roads, ideally in the presence of shadows. This paper presents a modified road region segmentation algorithm based on sequential Monte-Carlo estimation. A detailed description of the algorithm is given, and evaluation results show that the proposed algorithm outperforms both the segmentation algorithm developed as part of our previous work and a conventional algorithm based on colour histograms.
Huang, Guanghui; Wan, Jianping; Chen, Hui
2013-02-01
Nonlinear stochastic differential equation models with unobservable state variables are now widely used in the analysis of PK/PD data. Unobservable state variables are usually estimated with the extended Kalman filter (EKF), and the unknown pharmacokinetic parameters are usually estimated by the maximum likelihood estimator (MLE). However, the EKF is inadequate for nonlinear PK/PD models, and the MLE is known to be biased downwards. In this paper, a density-based Monte Carlo filter (DMF) is proposed to estimate the unobservable state variables, and a simulation-based M-estimator is proposed to estimate the unknown parameters, where a genetic algorithm is designed to search for the optimal values of the pharmacokinetic parameters. The performances of the EKF and the DMF are compared through simulations for discrete-time and continuous-time systems, respectively, and the results based on the DMF are found to be more accurate than those given by the EKF with respect to mean absolute error.
SU-E-T-761: TOMOMC, A Monte Carlo-Based Planning VerificationTool for Helical Tomotherapy
Energy Technology Data Exchange (ETDEWEB)
Chibani, O; Ma, C [Fox Chase Cancer Center, Philadelphia, PA (United States)
2015-06-15
Purpose: To present a new Monte Carlo code (TOMOMC) to calculate 3D dose distributions for patients undergoing helical tomotherapy treatments. TOMOMC performs CT-based dose calculations using the actual dynamic variables of the machine (couch motion, gantry rotation, and MLC sequences). Methods: TOMOMC is based on the GEPTS (Gamma Electron and Positron Transport System) general-purpose Monte Carlo system (Chibani and Li, Med. Phys. 29, 2002, 835). First, beam models for the Hi-Art Tomotherapy machine were developed for the different beam widths (1, 2.5 and 5 cm). The beam model accounts for the exact geometry and composition of the different components of the linac head (target, primary collimator, jaws and MLCs). The beam models were benchmarked by comparing calculated PDDs and lateral/transversal dose profiles with ionization chamber measurements in water. See figures 1-3. The MLC model was tuned in such a way that the tongue-and-groove effect and inter-leaf and intra-leaf transmission are modeled correctly. See figure 4. Results: By simulating the exact patient anatomy and the actual treatment delivery conditions (couch motion, gantry rotation and MLC sinogram), TOMOMC is able to calculate the 3D patient dose distribution, which is in principle more accurate than that from the treatment planning system (TPS) since it relies on the Monte Carlo method (the gold standard). Dose volume parameters based on the Monte Carlo dose distribution can also be compared to those produced by the TPS. The attached figures show isodose lines for a H&N patient calculated by TOMOMC (transverse and sagittal views). Analysis of the differences between TOMOMC and the TPS is ongoing work for different anatomic sites. Conclusion: A new Monte Carlo code (TOMOMC) was developed for Tomotherapy patient-specific QA. The next step in this project is to implement GPU computing to speed up the Monte Carlo simulation and make Monte Carlo-based treatment verification a practical solution.
DYNAMIC PARAMETERS ESTIMATION OF INTERFEROMETRIC SIGNALS BASED ON SEQUENTIAL MONTE CARLO METHOD
Directory of Open Access Journals (Sweden)
M. A. Volynsky
2014-05-01
The paper deals with the sequential Monte Carlo method applied to the problem of estimating the parameters of interferometric signals. The method is based on a statistical approximation of the posterior probability density of the parameters. A detailed description of the algorithm is given, and the possibility of using the minimum residual between prediction and observation as a criterion for selecting the population elements generated at each step of the algorithm is shown. An analysis of the influence of the input parameters on the performance of the algorithm has been conducted. It was found that the standard deviation of the amplitude estimation error for typical signals is about 10% of the maximum amplitude value, and the phase estimation error was shown to have a normal distribution. In particular, the influence of the number of selected parameter vectors on the evaluation results was analyzed. On the basis of simulation results for the considered class of signals, it is recommended to select 30% of the generated vectors; increasing the number of generated vectors beyond 150 does not give a significant improvement in the quality of the obtained estimates. The sequential Monte Carlo method is recommended for dynamic processing of interferometric signals in cases where high immunity to non-linear changes of signal parameters and to random noise is required.
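A minimal bootstrap version of such a sequential Monte Carlo estimator can be sketched as follows. The signal model (a slowly drifting amplitude observed through a noisy cosine) and all numerical settings are illustrative assumptions, not the authors' algorithm: each step predicts the parameter by a random walk, weights the particles by the observation likelihood, forms the estimate as the weighted mean, and resamples.

```python
import math
import random

def particle_filter_amplitude(obs, phases, n_particles=300, proc_std=0.05,
                              obs_std=0.1, seed=0):
    """Bootstrap sequential Monte Carlo: estimate a slowly drifting
    amplitude a_k from observations z_k = a_k * cos(phase_k) + noise."""
    rng = random.Random(seed)
    particles = [rng.uniform(0.5, 1.5) for _ in range(n_particles)]
    estimates = []
    for z, ph in zip(obs, phases):
        # Predict: random-walk drift of the amplitude.
        particles = [a + rng.gauss(0.0, proc_std) for a in particles]
        # Update: weight each particle by the observation likelihood.
        w = [math.exp(-0.5 * ((z - a * math.cos(ph)) / obs_std) ** 2)
             for a in particles]
        total = sum(w)
        if total == 0.0:  # degenerate weights: fall back to uniform
            w = [1.0 / n_particles] * n_particles
        else:
            w = [x / total for x in w]
        estimates.append(sum(a * wi for a, wi in zip(particles, w)))
        # Resample proportionally to the weights.
        particles = rng.choices(particles, weights=w, k=n_particles)
    return estimates

# Synthetic test signal: constant amplitude 1.0, noisy cosine observations.
sig_rng = random.Random(1)
phases = [0.2 * k for k in range(100)]
obs = [math.cos(ph) + sig_rng.gauss(0.0, 0.1) for ph in phases]
est = particle_filter_amplitude(obs, phases)
```

The resampling step is where a selection fraction such as the 30% recommended above would enter; the bootstrap variant here simply resamples the full population each step.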
A fast Monte Carlo code for proton transport in radiation therapy based on MCNPX.
Jabbari, Keyvan; Seuntjens, Jan
2014-07-01
An important requirement for proton therapy is software for dose calculation. Monte Carlo is the most accurate method for dose calculation, but it is very slow. In this work, a method is developed to improve the speed of dose calculation. The method is based on pre-generated tracks for particle transport; the MCNPX code has been used for the generation of tracks. A set of data including the particle track was produced for each particular material (water, air, lung tissue, bone, and soft tissue). The code can transport protons over a wide range of energies (up to 200 MeV). The validity of the fast Monte Carlo (MC) code is evaluated against MCNPX as a reference code. While an analytical pencil-beam transport algorithm shows large errors (up to 10%) near small high-density heterogeneities, our dose calculations and isodose distributions deviated from the MCNPX results by less than 2%. In terms of speed, the code runs 200 times faster than MCNPX. The fast MC code developed in this work calculates the dose for 10^6 particles in less than 2 minutes on an Intel Core 2 Duo 2.66 GHz desktop computer.
Fission yield calculation using toy model based on Monte Carlo simulation
Energy Technology Data Exchange (ETDEWEB)
Jubaidah, E-mail: jubaidah@student.itb.ac.id [Nuclear Physics and Biophysics Division, Department of Physics, Bandung Institute of Technology. Jl. Ganesa No. 10 Bandung – West Java, Indonesia 40132 (Indonesia); Physics Department, Faculty of Mathematics and Natural Science – State University of Medan. Jl. Willem Iskandar Pasar V Medan Estate – North Sumatera, Indonesia 20221 (Indonesia); Kurniadi, Rizal, E-mail: rijalk@fi.itb.ac.id [Nuclear Physics and Biophysics Division, Department of Physics, Bandung Institute of Technology. Jl. Ganesa No. 10 Bandung – West Java, Indonesia 40132 (Indonesia)
2015-09-30
A toy model is a new approximation for predicting the fission yield distribution. The toy model treats the nucleus as an elastic toy consisting of marbles, where the number of marbles represents the number of nucleons, A. This toy nucleus is able to imitate the properties of a real nucleus. In this research, the toy nucleons are influenced only by a central force. A heavy toy nucleus induced by a toy nucleon splits into two fragments; these two fission fragments constitute the fission yield. Energy entanglement is neglected. The fission process in the toy model is illustrated by two Gaussian curves intersecting each other. Five Gaussian parameters are used in this research: the scission point of the two curves (R{sub c}), the means of the left and right curves (μ{sub L} and μ{sub R}), and the deviations of the left and right curves (σ{sub L} and σ{sub R}). The fission yield distribution is analyzed based on Monte Carlo simulation. The results show that variation in σ or μ can significantly shift the average frequency of asymmetric fission yields, and also varies the range of the fission yield probability distribution. In addition, variation in the iteration coefficient only changes the frequency of fission yields. Monte Carlo simulation for fission yield calculation using the toy model successfully indicates the same tendency as experimental results, where the average light fission yield is in the range of 90
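The double-humped yield curve from two intersecting Gaussians can be mimicked with a few lines of Monte Carlo sampling. The sketch below is illustrative only: the means, deviations, and mass split point are invented values, not the paper's fitted parameters.

```python
import random

def sample_fission_yields(n, mu_l=95.0, mu_r=140.0,
                          sigma_l=5.0, sigma_r=5.0, seed=1):
    """Draw n toy fission-fragment masses from two intersecting Gaussians.

    Each fission event picks the left (light) or right (heavy) curve with
    equal probability, mimicking a double-humped yield distribution.
    """
    rng = random.Random(seed)
    masses = []
    for _ in range(n):
        if rng.random() < 0.5:
            masses.append(rng.gauss(mu_l, sigma_l))  # light-fragment curve
        else:
            masses.append(rng.gauss(mu_r, sigma_r))  # heavy-fragment curve
    return masses

masses = sample_fission_yields(10000)
# Split at the midpoint between the two means (hypothetical scission point).
light = [m for m in masses if m < 117.5]
print(f"mean light-fragment mass: {sum(light)/len(light):.1f}")
```

A histogram of `masses` reproduces the two-peaked (asymmetric) yield shape; widening σ or moving μ shifts how often asymmetric splits occur, as the abstract describes.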
A method based on Monte Carlo simulation for the determination of the G(E) function.
Chen, Wei; Feng, Tiancheng; Liu, Jun; Su, Chuanying; Tian, Yanjie
2015-02-01
The G(E) function method is a spectrometric method for exposure dose estimation; this paper describes a Monte Carlo based method to determine the G(E) function of a 4″ × 4″ × 16″ NaI(Tl) detector. Simulated spectra of various monoenergetic gamma rays in the region of 40-3200 keV, and the corresponding deposited energy in an air ball in the energy region of the full-energy peak, were obtained using the Monte Carlo N-Particle Transport Code. The absorbed dose rate in air was obtained from the deposited energy and divided by the counts of the corresponding full-energy peak to get the G(E) function value at energy E in the spectra. The curve-fitting software 1stOpt was used to determine the coefficients of the G(E) function. Experimental results show that dose rates calculated using the G(E) function determined by the authors' method agree well with values obtained by an ionisation chamber, with a maximum deviation of 6.31%.
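Once the G(E) function is fitted, applying it is just a weighted sum over the measured spectrum, D = Σ_E N(E)·G(E). A minimal sketch follows; the polynomial coefficients and the spectrum values are hypothetical placeholders, not the fitted G(E) of this detector.

```python
def dose_rate_from_spectrum(spectrum, g):
    """Dose rate via the G(E) spectrum-dose conversion: D = sum_E N(E)*G(E).

    `spectrum` is a list of (energy_keV, counts) pairs; `g` maps energy to
    the per-count dose contribution.
    """
    return sum(counts * g(energy) for energy, counts in spectrum)

def g_poly(e_kev):
    """Hypothetical polynomial G(E) fit: a0 + a1*E + a2*E^2 (made-up coefficients)."""
    coeffs = [1.0e-9, 2.0e-12, -1.0e-16]
    return sum(c * e_kev ** k for k, c in enumerate(coeffs))

# Toy pulse-height spectrum: three full-energy peaks (keV, counts).
spectrum = [(662.0, 1.5e4), (1173.0, 8.0e3), (1332.0, 7.5e3)]
dose = dose_rate_from_spectrum(spectrum, g_poly)
print(f"dose rate ~ {dose:.3e} (arbitrary units)")
```

In practice the coefficients come from fitting G(E) values computed, as in the paper, from simulated monoenergetic responses.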
GMC: a GPU implementation of a Monte Carlo dose calculation based on Geant4.
Jahnke, Lennart; Fleckenstein, Jens; Wenz, Frederik; Hesser, Jürgen
2012-03-07
We present a GPU implementation called GMC (GPU Monte Carlo) of the low-energy electromagnetic part of Geant4 using the CUDA programming interface. The classes for electron and photon interactions, as well as a new parallel particle transport engine, were implemented. A particle is processed not history by history but rather interaction by interaction: every history is divided into steps that are then calculated in parallel by different kernels. The geometry package is currently limited to voxelized geometries. A modified parallel Mersenne twister was used to generate random numbers, and a random number repetition method on the GPU was introduced. All phantom results showed very good agreement between GPU and CPU simulation, with gamma indices of >97.5% for a 2%/2 mm gamma criterion. The mean acceleration on one GTX 580 for all cases compared to Geant4 on one CPU core was 4860. The mean number of histories per millisecond on the GPU for all cases was 658, leading to a total simulation time for one intensity-modulated radiation therapy dose distribution of 349 s. In conclusion, Geant4-based Monte Carlo dose calculations were significantly accelerated on the GPU.
A Strategy for Finding the Optimal Scale of Plant Core Collection Based on Monte Carlo Simulation
Directory of Open Access Journals (Sweden)
Jiancheng Wang
2014-01-01
Full Text Available A core collection is an ideal resource for genome-wide association studies (GWAS). A subcore collection is a subset of a core collection. A strategy is proposed for finding the optimal sampling percentage for a plant subcore collection based on Monte Carlo simulation. A cotton germplasm group of 168 accessions with 20 quantitative traits was used to construct subcore collections. A mixed linear model approach was used to eliminate the environment effect and the GE (genotype × environment) effect. The least distance stepwise sampling (LDSS) method, combining 6 commonly used genetic distances with the unweighted pair-group average (UPGMA) cluster method, was adopted to construct subcore collections. A homogeneous population assessing method was adopted to assess the validity of 7 evaluating parameters of the subcore collection. Monte Carlo simulation was conducted on the sampling percentage, the number of traits, and the evaluating parameters. A new method for "distilling free-form natural laws from experimental data" was adopted to find the best formula to determine the optimal sampling percentages. The results showed that the coincidence rate of range (CR) was the most valid evaluating parameter and was suitable to serve as a threshold for finding the optimal sampling percentage. Principal component analysis showed that subcore collections constructed with the optimal sampling percentages calculated by the present strategy were well representative.
CAD-based Monte Carlo Program for Integrated Simulation of Nuclear System SuperMC
Wu, Yican; Song, Jing; Zheng, Huaqing; Sun, Guangyao; Hao, Lijuan; Long, Pengcheng; Hu, Liqin
2014-06-01
The Monte Carlo (MC) method has distinct advantages for simulating complicated nuclear systems and is envisioned as a routine method for nuclear design and analysis in the future. High-fidelity simulation with the MC method, coupled with multi-physics simulation, has a significant impact on the safety, economy and sustainability of nuclear systems. However, great challenges to current MC methods and codes prevent their application in real engineering projects. SuperMC is a CAD-based Monte Carlo program for integrated simulation of nuclear systems developed by the FDS Team, China, making use of a hybrid MC-deterministic method and advanced computer technologies. The design aims, architecture and main methodology of SuperMC are presented in this paper. SuperMC2.1, the latest version for neutron, photon and coupled neutron-photon transport calculation, has been developed and validated using a series of benchmark cases such as the fusion reactor ITER model and the fast reactor BN-600 model. SuperMC is still evolving toward a general and routine tool for nuclear systems.
Simulation of Cone Beam CT System Based on Monte Carlo Method
Wang, Yu; Cao, Ruifen; Hu, Liqin; Li, Bingbing
2014-01-01
Adaptive Radiation Therapy (ART) was developed from Image-Guided Radiation Therapy (IGRT) and is the trend in photon radiation therapy. To make better use of Cone Beam CT (CBCT) images for ART, a CBCT system model was established based on a Monte Carlo program and validated against measurement. The BEAMnrc program was adopted to model the kV x-ray tube. Both ISOURCE-13 and ISOURCE-24 were chosen to simulate the paths of the beam particles. The measured Percentage Depth Dose (PDD) and lateral dose profiles under 1 cm of water were compared with the dose calculated by the DOSXYZnrc program. The calculated PDD agreed within 1% down to a depth of 10 cm, and more than 85% of the points of the calculated lateral dose profiles were within 2%. The validated CBCT system model helps to improve CBCT image quality for dose verification in ART and to assess the concomitant dose risk of CBCT imaging.
Comparative Monte Carlo analysis of InP- and GaN-based Gunn diodes
García, S.; Pérez, S.; Íñiguez-de-la-Torre, I.; Mateos, J.; González, T.
2014-01-01
In this work, we report on Monte Carlo simulations studying the capability of diodes based on InP and GaN, with around 1 μm active region length, to generate Gunn oscillations. We compare the power spectral density of current sequences in diodes with and without a notch, for different lengths and two doping profiles. It is found that InP structures provide 400 GHz current oscillations at the fundamental harmonic in structures without a notch, and around 140 GHz in notched diodes. On the other hand, GaN diodes can operate up to 300 GHz at the fundamental harmonic, and when the notch is effective, a larger number of harmonics, reaching the terahertz range, is generated with higher spectral purity than in InP diodes. GaN-based diodes therefore offer a high-power alternative for sub-millimeter wave Gunn oscillators.
Quasi-Monte Carlo Simulation-Based SFEM for Slope Reliability Analysis
Institute of Scientific and Technical Information of China (English)
Yu Yuzhen; Xie Liquan; Zhang Bingyin
2005-01-01
Considering the stochastic spatial variation of geotechnical parameters over a slope, a Stochastic Finite Element Method (SFEM) is established based on the combination of the Shear Strength Reduction (SSR) concept and quasi-Monte Carlo simulation. The shear strength reduction FEM is superior in many ways to the slice method based on limit equilibrium theory, so it is more powerful for assessing the reliability of global slope stability when combined with probability theory. To illustrate its performance, the proposed method is applied to an example of a simple slope. The simulation results show that the proposed method is effective for the reliability analysis of global slope stability without presupposing a potential slip surface.
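The quasi-Monte Carlo ingredient can be sketched independently of the SSR-FEM: replace pseudo-random draws with a low-discrepancy (Halton) sequence and count limit-state violations. The limit state below is a made-up linear safety margin in two random strength parameters, purely illustrative and not the paper's finite-element model.

```python
from statistics import NormalDist

def van_der_corput(i, base=2):
    """i-th element of the base-b van der Corput low-discrepancy sequence."""
    q, denom = 0.0, 1.0
    while i > 0:
        denom *= base
        i, r = divmod(i, base)
        q += r / denom
    return q

def qmc_failure_prob(n=4096):
    """Quasi-Monte Carlo estimate of a toy failure probability.

    Uses a 2-D Halton sequence (bases 2 and 3) mapped through the normal
    inverse CDF; 'failure' is a hypothetical margin c - 15*tan(phi) < 0.
    """
    nd = NormalDist()
    failures = 0
    for i in range(1, n + 1):
        u1 = min(max(van_der_corput(i, 2), 1e-12), 1 - 1e-12)
        u2 = min(max(van_der_corput(i, 3), 1e-12), 1 - 1e-12)
        c = 10.0 + 2.0 * nd.inv_cdf(u1)   # cohesion ~ N(10, 2^2)
        t = 0.5 + 0.1 * nd.inv_cdf(u2)    # tan(phi) ~ N(0.5, 0.1^2)
        if c - 15.0 * t < 0:              # safety margin below zero
            failures += 1
    return failures / n

p_fail = qmc_failure_prob()
print(f"failure probability ~ {p_fail:.3f}")
```

For this linear margin the exact answer is Φ(-1) ≈ 0.159, which the Halton estimate approaches much faster than plain pseudo-random sampling at the same n.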
Sign learning kink-based (SiLK) quantum Monte Carlo for molecular systems
Ma, Xiaoyao; Loffler, Frank; Kowalski, Karol; Bhaskaran-Nair, Kiran; Jarrell, Mark; Moreno, Juana
2015-01-01
The Sign Learning Kink (SiLK) based Quantum Monte Carlo (QMC) method is used to calculate the ab initio ground state energies for multiple geometries of the H$_{2}$O, N$_2$, and F$_2$ molecules. The method is based on Feynman's path integral formulation of quantum mechanics and has two stages. The first stage is called the learning stage and reduces the well-known QMC minus sign problem by optimizing the linear combinations of Slater determinants which are used in the second stage, a conventional QMC simulation. The method is tested using different vector spaces and compared to the results of other quantum chemical methods and to exact diagonalization. Our findings demonstrate that the SiLK method is accurate and reduces or eliminates the minus sign problem.
Sign Learning Kink-based (SiLK) Quantum Monte Carlo for molecular systems
Energy Technology Data Exchange (ETDEWEB)
Ma, Xiaoyao [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana 70803 (United States); Hall, Randall W. [Department of Natural Sciences and Mathematics, Dominican University of California, San Rafael, California 94901 (United States); Department of Chemistry, Louisiana State University, Baton Rouge, Louisiana 70803 (United States); Löffler, Frank [Center for Computation and Technology, Louisiana State University, Baton Rouge, Louisiana 70803 (United States); Kowalski, Karol [William R. Wiley Environmental Molecular Sciences Laboratory, Battelle, Pacific Northwest National Laboratory, Richland, Washington 99352 (United States); Bhaskaran-Nair, Kiran; Jarrell, Mark; Moreno, Juana [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana 70803 (United States); Center for Computation and Technology, Louisiana State University, Baton Rouge, Louisiana 70803 (United States)
2016-01-07
The Sign Learning Kink (SiLK) based Quantum Monte Carlo (QMC) method is used to calculate the ab initio ground state energies for multiple geometries of the H{sub 2}O, N{sub 2}, and F{sub 2} molecules. The method is based on Feynman’s path integral formulation of quantum mechanics and has two stages. The first stage is called the learning stage and reduces the well-known QMC minus sign problem by optimizing the linear combinations of Slater determinants which are used in the second stage, a conventional QMC simulation. The method is tested using different vector spaces and compared to the results of other quantum chemical methods and to exact diagonalization. Our findings demonstrate that the SiLK method is accurate and reduces or eliminates the minus sign problem.
Sign Learning Kink-based (SiLK) Quantum Monte Carlo for molecular systems
Energy Technology Data Exchange (ETDEWEB)
Ma, Xiaoyao [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana 70803, USA; Hall, Randall W. [Department of Natural Sciences and Mathematics, Dominican University of California, San Rafael, California 94901, USA; Department of Chemistry, Louisiana State University, Baton Rouge, Louisiana 70803, USA; Löffler, Frank [Center for Computation and Technology, Louisiana State University, Baton Rouge, Louisiana 70803, USA; Kowalski, Karol [William R. Wiley Environmental Molecular Sciences Laboratory, Battelle, Pacific Northwest National Laboratory, Richland, Washington 99352, USA; Bhaskaran-Nair, Kiran [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana 70803, USA; Center for Computation and Technology, Louisiana State University, Baton Rouge, Louisiana 70803, USA; Jarrell, Mark [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana 70803, USA; Center for Computation and Technology, Louisiana State University, Baton Rouge, Louisiana 70803, USA; Moreno, Juana [Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana 70803, USA; Center for Computation and Technology, Louisiana State University, Baton Rouge, Louisiana 70803, USA
2016-01-07
The Sign Learning Kink (SiLK) based Quantum Monte Carlo (QMC) method is used to calculate the ab initio ground state energies for multiple geometries of the H2O, N2, and F2 molecules. The method is based on Feynman’s path integral formulation of quantum mechanics and has two stages. The first stage is called the learning stage and reduces the well-known QMC minus sign problem by optimizing the linear combinations of Slater determinants which are used in the second stage, a conventional QMC simulation. The method is tested using different vector spaces and compared to the results of other quantum chemical methods and to exact diagonalization. Our findings demonstrate that the SiLK method is accurate and reduces or eliminates the minus sign problem.
GPU-based Monte Carlo radiotherapy dose calculation using phase-space sources
Townson, Reid W.; Jia, Xun; Tian, Zhen; Jiang Graves, Yan; Zavgorodni, Sergei; Jiang, Steve B.
2013-06-01
A novel phase-space source implementation has been designed for graphics processing unit (GPU)-based Monte Carlo dose calculation engines. Short of full simulation of the linac head, using a phase-space source is the most accurate method to model a clinical radiation beam in dose calculations. However, in GPU-based Monte Carlo dose calculations where the computation efficiency is very high, the time required to read and process a large phase-space file becomes comparable to the particle transport time. Moreover, due to the parallelized nature of GPU hardware, it is essential to simultaneously transport particles of the same type and similar energies but separated spatially to yield a high efficiency. We present three methods for phase-space implementation that have been integrated into the most recent version of the GPU-based Monte Carlo radiotherapy dose calculation package gDPM v3.0. The first method is to sequentially read particles from a patient-dependent phase-space and sort them on-the-fly based on particle type and energy. The second method supplements this with a simple secondary collimator model and fluence map implementation so that patient-independent phase-space sources can be used. Finally, as the third method (called the phase-space-let, or PSL, method) we introduce a novel source implementation utilizing pre-processed patient-independent phase-spaces that are sorted by particle type, energy and position. Position bins located outside a rectangular region of interest enclosing the treatment field are ignored, substantially decreasing simulation time with little effect on the final dose distribution. The three methods were validated in absolute dose against BEAMnrc/DOSXYZnrc and compared using gamma-index tests (2%/2 mm above the 10% isodose). It was found that the PSL method has the optimal balance between accuracy and efficiency and thus is used as the default method in gDPM v3.0. Using the PSL method, open fields of 4 × 4, 10 × 10 and 30 × 30 cm
GPU-based Monte Carlo radiotherapy dose calculation using phase-space sources.
Townson, Reid W; Jia, Xun; Tian, Zhen; Graves, Yan Jiang; Zavgorodni, Sergei; Jiang, Steve B
2013-06-21
A novel phase-space source implementation has been designed for graphics processing unit (GPU)-based Monte Carlo dose calculation engines. Short of full simulation of the linac head, using a phase-space source is the most accurate method to model a clinical radiation beam in dose calculations. However, in GPU-based Monte Carlo dose calculations where the computation efficiency is very high, the time required to read and process a large phase-space file becomes comparable to the particle transport time. Moreover, due to the parallelized nature of GPU hardware, it is essential to simultaneously transport particles of the same type and similar energies but separated spatially to yield a high efficiency. We present three methods for phase-space implementation that have been integrated into the most recent version of the GPU-based Monte Carlo radiotherapy dose calculation package gDPM v3.0. The first method is to sequentially read particles from a patient-dependent phase-space and sort them on-the-fly based on particle type and energy. The second method supplements this with a simple secondary collimator model and fluence map implementation so that patient-independent phase-space sources can be used. Finally, as the third method (called the phase-space-let, or PSL, method) we introduce a novel source implementation utilizing pre-processed patient-independent phase-spaces that are sorted by particle type, energy and position. Position bins located outside a rectangular region of interest enclosing the treatment field are ignored, substantially decreasing simulation time with little effect on the final dose distribution. The three methods were validated in absolute dose against BEAMnrc/DOSXYZnrc and compared using gamma-index tests (2%/2 mm above the 10% isodose). It was found that the PSL method has the optimal balance between accuracy and efficiency and thus is used as the default method in gDPM v3.0. Using the PSL method, open fields of 4 × 4, 10 × 10 and 30 × 30 cm
Modelling of scintillator based flat-panel detectors with Monte-Carlo simulations
Reims, N.; Sukowski, F.; Uhlmann, N.
2011-01-01
Scintillator-based flat panel detectors are state of the art in industrial X-ray imaging applications. Choosing the proper system and setup parameters for the vast range of different applications can be a time-consuming task, especially when developing new detector systems. Since the system behaviour cannot always be foreseen easily, Monte Carlo (MC) simulations are key to gaining further knowledge of system components and their behaviour under different imaging conditions. In this work we used two Monte Carlo based models to examine an indirect-converting flat panel detector, specifically the Hamamatsu C9312SK. We focused on the signal generation in the scintillation layer and its influence on the spatial resolution of the whole system. The models differ significantly in their level of complexity. The first model gives a global description of the detector based on different parameters characterizing the spatial resolution. With relatively small effort, a simulation model can be developed that matches the real detector in terms of signal transfer. The second model allows a more detailed insight into the system. It is based on the well-established cascade theory, i.e. describing the detector as a cascade of elementary gain and scattering stages, which represent the built-in components and their signal transfer behaviour. In comparison to the first model, the influence of single components, especially the important light-spread behaviour in the scintillator, can be analysed in a more differentiated way. Although the implementation of the second model is more time-consuming, both models have in common that only a relatively small number of manufacturer parameters are needed. The results of both models were in good agreement with the measured parameters of the real system.
Nanoscale field effect optical modulators based on depletion of epsilon-near-zero films
Lu, Zhaolin; Shi, Kaifeng; Yin, Peichuan
2016-12-01
The field effect in metal-oxide-semiconductor (MOS) capacitors plays a key role in field-effect transistors (FETs), which are the fundamental building blocks of modern digital integrated circuits. Recent works show that the field effect can also be used to make optical/plasmonic modulators. In this paper, we report the numerical investigation of field effect electro-absorption modulators each made of an ultrathin epsilon-near-zero (ENZ) film, as the active material, sandwiched in a silicon or plasmonic waveguide. Without a bias, the ENZ films maximize the attenuation of the waveguides and the modulators work at the OFF state; on the other hand, depletion of the carriers in the ENZ films greatly reduces the attenuation and the modulators work at the ON state. The double capacitor gating scheme with two 10-nm HfO2 films as the insulator is used to enhance the modulation by the field effect. The depletion requires about 10 V across the HfO2 layers. According to our simulation, extinction ratio up to 3.44 dB can be achieved in a 500-nm long Si waveguide with insertion loss only 0.71 dB (85.0% pass); extinction ratio up to 7.86 dB can be achieved in a 200-nm long plasmonic waveguide with insertion loss 1.11 dB (77.5% pass). The proposed modulators may find important applications in future on-chip or chip-to-chip optical interconnection.
Pattern-oriented Agent-based Monte Carlo simulation of Cellular Redox Environment
DEFF Research Database (Denmark)
Tang, Jiaowei; Holcombe, Mike; Boonen, Harrie C.M.
Research suggests that the cellular redox environment can affect the phenotype and function of cells through a complex reaction network [1]. In cells, redox status is mainly regulated by several redox couples, such as glutathione/glutathione disulfide (GSH/GSSG) and cysteine/cystine (CYS......, that there is a connection between extracellular and intracellular redox [2], whereas others oppose this view [3]. In general, however, these experiments lack insight into the dynamics, the complex network of reactions, and the transport of redox species through the cell membrane. Therefore, current experimental results reveal......] could be very important factors. In our project, agent-based Monte Carlo modelling [6] is used to study the dynamic relationship between extracellular and intracellular redox and the complex networks of redox reactions. In the model, pivotal redox-related reactions will be included, and the reactants...
A Monte Carlo-based treatment-planning tool for ion beam therapy
Böhlen, T T; Dosanjh, M; Ferrari, A; Haberer, T; Parodi, K; Patera, V; Mairan, A
2013-01-01
Ion beam therapy, as an emerging radiation therapy modality, requires continuous efforts to develop and improve tools for patient treatment planning (TP) and research applications. Dose and fluence computation algorithms using the Monte Carlo (MC) technique have served for decades as reference tools for accurate dose computations in radiotherapy. In this work, a novel MC-based treatment-planning (MCTP) tool for ion beam therapy using the pencil beam scanning technique is presented. It allows single-field and simultaneous multiple-field optimization for realistic patient treatment conditions and for dosimetric quality assurance at state-of-the-art ion beam therapy facilities. It employs iterative procedures that allow for the optimization of absorbed dose and relative biological effectiveness (RBE)-weighted dose using radiobiological input tables generated by external RBE models. Using a re-implementation of the local effect model (LEM), the MCTP tool is able to perform TP studies u...
Ma, X. B.; Qiu, R. M.; Chen, Y. X.
2017-02-01
Uncertainties regarding fission fractions are essential in understanding antineutrino flux predictions in reactor antineutrino experiments. A new Monte Carlo-based method to evaluate the covariance coefficients between isotopes is proposed. The covariance coefficients are found to vary with reactor burnup and may change from positive to negative because of balance effects in fissioning. For example, between 235U and 239Pu, the covariance coefficient changes from 0.15 to -0.13. Using the equation relating fission fraction and atomic density, consistent uncertainties in the fission fraction and covariance matrix were obtained. The antineutrino flux uncertainty is 0.55%, which does not vary with reactor burnup. The new value is about 8.3% smaller.
Monte Carlo based statistical power analysis for mediation models: methods and software.
Zhang, Zhiyong
2014-12-01
The existing literature on statistical power analysis for mediation models often assumes data normality and is based on a less powerful Sobel test instead of the more powerful bootstrap test. This study proposes to estimate statistical power to detect mediation effects on the basis of the bootstrap method through Monte Carlo simulation. Nonnormal data with excessive skewness and kurtosis are allowed in the proposed method. A free R package called bmem is developed to conduct the power analysis discussed in this study. Four examples, including a simple mediation model, a multiple-mediator model with a latent mediator, a multiple-group mediation model, and a longitudinal mediation model, are provided to illustrate the proposed method.
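The Monte Carlo power procedure described above can be sketched in a few dozen lines for the simplest case. The sketch below assumes a simple two-path mediation model with no direct effect (x → m → y), invented effect sizes, and a percentile-bootstrap test of the indirect effect a·b; it is an illustration of the idea, not the bmem package's implementation.

```python
import random

def slope(x, y):
    """OLS slope of y on x (with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def bootstrap_power(a=0.4, b=0.4, n=50, nrep=100, nboot=100,
                    alpha=0.05, seed=2):
    """Monte Carlo estimate of power to detect the indirect effect a*b.

    Each replication simulates a dataset, then a percentile-bootstrap CI
    for the product of the two path estimates; power is the fraction of
    replications whose CI excludes zero.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(nrep):
        x = [rng.gauss(0, 1) for _ in range(n)]
        m = [a * xi + rng.gauss(0, 1) for xi in x]
        y = [b * mi + rng.gauss(0, 1) for mi in m]  # no direct effect of x
        data = list(zip(x, m, y))
        ab = []
        for _ in range(nboot):
            boot = [rng.choice(data) for _ in range(n)]
            bx, bm, by = zip(*boot)
            ab.append(slope(bx, bm) * slope(bm, by))
        ab.sort()
        lo = ab[int(alpha / 2 * nboot)]
        hi = ab[int((1 - alpha / 2) * nboot) - 1]
        if lo > 0 or hi < 0:   # CI excludes zero -> effect detected
            hits += 1
    return hits / nrep

power = bootstrap_power()
print(f"estimated power: {power:.2f}")
```

Non-normal data enters naturally by swapping `rng.gauss` for a skewed generator, which is precisely the flexibility the abstract highlights over normality-based Sobel formulas.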
Monte Carlo-based Noise Compensation in Coil Intensity Corrected Endorectal MRI
Lui, Dorothy; Haider, Masoom; Wong, Alexander
2015-01-01
Background: Prostate cancer is one of the most common forms of cancer found in males, making early diagnosis important. Magnetic resonance imaging (MRI) has been useful in visualizing and localizing tumor candidates, and with the use of endorectal coils (ERC) the signal-to-noise ratio (SNR) can be improved. The coils introduce intensity inhomogeneities, and the surface coil intensity correction built into MRI scanners is used to reduce them. However, the correction, typically performed at the MRI scanner level, leads to noise amplification and noise level variations. Methods: In this study, we introduce a new Monte Carlo-based noise compensation approach for coil intensity corrected endorectal MRI which allows for effective noise compensation and preservation of details within the prostate. The approach accounts for the ERC SNR profile via a spatially-adaptive noise model for correcting non-stationary noise variations. Such a method is useful particularly for improving the image quality of coil i...
Development of a Monte-Carlo based method for calculating the effect of stationary fluctuations
DEFF Research Database (Denmark)
Pettersen, E. E.; Demazire, C.; Jareteg, K.;
2015-01-01
This paper deals with the development of a novel method for performing Monte Carlo calculations of the effect, on the neutron flux, of stationary fluctuations in macroscopic cross-sections. The basic principle relies on the formulation of two equivalent problems in the frequency domain: one...... equivalent problems nevertheless requires the possibility to modify the macroscopic cross-sections, and we use the work of Kuijper, van der Marck and Hogenbirk to define group-wise macroscopic cross-sections in MCNP [1]. The method is illustrated in this paper at a frequency of 1 Hz, for which only the real...... part of the neutron balance plays a significant role and for driving fluctuations leading to neutron sources having the same sign in the two equivalent sub-critical problems. A semi-analytical diffusion-based solution is used to verify the implementation of the method on a test case representative...
Auxiliary-field based trial wave functions in quantum Monte Carlo simulations
Chang, Chia-Chen; Rubenstein, Brenda; Morales, Miguel
We propose a simple scheme for generating correlated multi-determinant trial wave functions for quantum Monte Carlo algorithms. The method is based on the Hubbard-Stratonovich transformation, which decouples a two-body Jastrow-type correlator into one-body projectors coupled to auxiliary fields. We apply the technique to generate stochastic representations of the Gutzwiller wave function, and present benchmark results for the ground-state energy of the Hubbard model in one dimension. Extensions of the proposed scheme to chemical systems will also be discussed. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344, 15-ERD-013.
Considerable variation in NNT - A study based on Monte Carlo simulations
DEFF Research Database (Denmark)
Wisloff, T.; Aalen, O. O.; Sønbø Kristiansen, Ivar
2011-01-01
Objective: The aim of this analysis was to explore the variation in measures of effect, such as the number needed to treat (NNT) and the relative risk (RR). Study Design and Setting: We performed Monte Carlo simulations of therapies using binomial distributions based on different true absolute...... risk reductions (ARR), numbers of patients (n), and baseline risks of adverse events (p(0)) as parameters, and presented the results in histograms of NNT and RR. We also estimated the probability of observing no or a negative treatment effect, given that the true effect is positive. Results: When RR...... is used to express treatment effectiveness, it has a regular distribution around the expected value for various values of the true ARR, n, and p(0). The equivalent distribution of NNT is by definition non-connected at zero and is also irregular. The probability that the observed treatment effectiveness is zero...
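The simulation design is easy to reproduce in miniature: draw event counts for two arms from binomial distributions, form the observed ARR, and invert it to get NNT = 1/ARR. The parameter values below (baseline risk, true ARR, arm size) are arbitrary illustrations, not those of the study.

```python
import random

def simulate_nnt(p0=0.10, arr=0.02, n=500, nrep=2000, seed=3):
    """Monte Carlo draws of the observed NNT when the true ARR is `arr`.

    Returns the positive observed NNTs and the fraction of trials where
    the observed effect was zero or negative (NNT undefined/'harm').
    """
    rng = random.Random(seed)
    p1 = p0 - arr                      # event risk in the treated arm
    neg_or_zero = 0
    nnts = []
    for _ in range(nrep):
        e0 = sum(rng.random() < p0 for _ in range(n))  # control events
        e1 = sum(rng.random() < p1 for _ in range(n))  # treated events
        obs_arr = (e0 - e1) / n
        if obs_arr <= 0:
            neg_or_zero += 1           # observed no effect or harm
        else:
            nnts.append(1.0 / obs_arr)
    return nnts, neg_or_zero / nrep

nnts, p_no_effect = simulate_nnt()
print(f"P(observed effect <= 0) = {p_no_effect:.2f}")
```

A histogram of `nnts` shows the irregular, heavy-tailed NNT distribution the abstract describes, and `p_no_effect` is the probability of observing no or a negative effect despite a truly beneficial treatment.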
GPU-based fast Monte Carlo simulation for radiotherapy dose calculation
Jia, Xun; Graves, Yan Jiang; Folkerts, Michael; Jiang, Steve B
2011-01-01
Monte Carlo (MC) simulation is commonly considered to be the most accurate dose calculation method in radiotherapy. However, its efficiency still requires improvement for many routine clinical applications. In this paper, we present our recent progress towards the development of a GPU-based MC dose calculation package, gDPM v2.0. It utilizes the parallel computation ability of a GPU to achieve high efficiency, while maintaining the same particle transport physics as the original DPM code and hence the same level of simulation accuracy. In GPU computing, divergence of execution paths between threads can considerably reduce the efficiency. Since photons and electrons undergo different physics and hence follow different execution paths, we use a simulation scheme in which photon transport and electron transport are separated, to partially relieve the thread divergence issue. A high-performance random number generator and hardware linear interpolation are also utilized. We have also developed various components to hand...
Nanoscale Field Effect Optical Modulators Based on Depletion of Epsilon-Near-Zero Films
Lu, Zhaolin; Shi, Kaifeng
2015-01-01
The field effect in metal-oxide-semiconductor (MOS) capacitors plays a key role in field-effect transistors (FETs), which are the fundamental building blocks of modern digital integrated circuits. Recent works show that the field effect can also be used to make optical/plasmonic modulators. In this paper, we report field effect electro-absorption modulators (FEOMs) each made of an ultrathin epsilon-near-zero (ENZ) film, as the active material, sandwiched in a silicon or plasmonic waveguide. Without a bias, the ENZ film maximizes the attenuation of the waveguides and the modulators work at the OFF state; contrariwise, depletion of the carriers in the ENZ film greatly reduces the attenuation and the modulators work at the ON state. The double capacitor gating scheme is used to enhance the modulation by the field effect. According to our simulation, extinction ratio up to 3.44 dB can be achieved in a 500-nm long Si waveguide with insertion loss only 0.71 dB (85.0%); extinction ratio up to 7.86 dB can be achieved...
Full modelling of the MOSAIC animal PET system based on the GATE Monte Carlo simulation code
Merheb, C.; Petegnief, Y.; Talbot, J. N.
2007-02-01
within 9%. For a 410-665 keV energy window, the measured sensitivity for a centred point source was 1.53% and mouse and rat scatter fractions were respectively 12.0% and 18.3%. The scattered photons produced outside the rat and mouse phantoms contributed to 24% and 36% of total simulated scattered coincidences. Simulated and measured single and prompt count rates agreed well for activities up to the electronic saturation at 110 MBq for the mouse and rat phantoms. Volumetric spatial resolution was 17.6 µL at the centre of the FOV with differences less than 6% between experimental and simulated spatial resolution values. The comprehensive evaluation of the Monte Carlo modelling of the Mosaic™ system demonstrates that the GATE package is adequately versatile and appropriate to accurately describe the response of an Anger logic based animal PET system.
A zero-variance based scheme for Monte Carlo criticality simulations
Christoforou, S.
2010-01-01
The ability of the Monte Carlo method to solve particle transport problems by simulating the particle behaviour makes it a very useful technique in nuclear reactor physics. However, the statistical nature of Monte Carlo implies that there will always be a variance associated with the estimate obtain
DEFF Research Database (Denmark)
Klösgen, Beate; Bruun, Sara; Hansen, Søren;
with an AFM (2). The intuitive explanation for the depletion is based on a "hydrophobic mismatch" between the obviously hydrophilic bulk phase of water and the hydrophobic polymer. It would thus be an intrinsic property of all interfaces between non-matching materials. The detailed physical interaction path...... The presence of a depletion layer of water along extended hydrophobic interfaces, and a possibly related formation of nanobubbles, is an ongoing discussion. The phenomenon was initially reported when we, years ago, chose thick films (~300-400 Å) of polystyrene as cushions between a crystalline......
Institute of Scientific and Technical Information of China (English)
Xu Xiao-Bo; Zhang He-Ming; Hu Hui-Yong; Ma Jian-Li; Xu Li-Jun
2011-01-01
The base-collector depletion capacitance for vertical SiGe npn heterojunction bipolar transistors (HBTs) on silicon on insulator (SOI) is split into vertical and lateral parts. This paper proposes a novel analytical depletion capacitance model of this structure for the first time. A large discrepancy is predicted when the present model is compared with the conventional depletion model: the capacitance decreases with increasing reverse collector-base bias and shows a kink as the reverse collector-base bias reaches the effective vertical punch-through voltage, which varies with the collector doping concentration. This behaviour is consistent with measurement results. The model can be employed for a fast evaluation of the depletion capacitance of an SOI SiGe HBT and has useful applications in the design and simulation of high performance SiGe circuits and devices.
Ultrafast cone-beam CT scatter correction with GPU-based Monte Carlo simulation
Directory of Open Access Journals (Sweden)
Yuan Xu
2014-03-01
Full Text Available Purpose: Scatter artifacts severely degrade image quality of cone-beam CT (CBCT). We present an ultrafast scatter correction framework using GPU-based Monte Carlo (MC) simulation and a prior patient CT image, aiming to automatically finish the whole process, including both scatter correction and reconstruction, within 30 seconds. Methods: The method consists of six steps: (1) FDK reconstruction using raw projection data; (2) rigid registration of the planning CT to the FDK results; (3) MC scatter calculation at sparse view angles using the planning CT; (4) interpolation of the calculated scatter signals to other angles; (5) removal of scatter from the raw projections; (6) FDK reconstruction using the scatter-corrected projections. In addition to using a GPU to accelerate MC photon simulations, we also use a small number of photons and a down-sampled CT image in the simulation to further reduce computation time. A novel denoising algorithm is used to eliminate MC noise from the simulated scatter images caused by low photon numbers. The method is validated on one simulated head-and-neck case with 364 projection angles. Results: We have examined variation of the scatter signal among projection angles using Fourier analysis. It is found that scatter images at 31 angles are sufficient to restore those at all angles with < 0.1% error. For the simulated patient case with a resolution of 512 × 512 × 100, we simulated 5 × 10^6 photons per angle. The total computation time is 20.52 seconds on an Nvidia GTX Titan GPU, and the time at each step is 2.53, 0.64, 14.78, 0.13, 0.19, and 2.25 seconds, respectively. The scatter-induced shading/cupping artifacts are substantially reduced, and the average HU error of a region-of-interest is reduced from 75.9 to 19.0 HU. Conclusion: A practical ultrafast MC-based CBCT scatter correction scheme is developed. It accomplishes the whole procedure of scatter correction and reconstruction within 30 seconds.
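Step (4) of the pipeline above, interpolating scatter computed at 31 sparse angles to all 364 projection angles, can be sketched with NumPy. The angular scatter profile and projection values below are hypothetical stand-ins; the wrap-around sample appended at 360° handles the periodicity of the gantry rotation.

```python
import numpy as np

# Hypothetical scatter magnitudes computed by MC at 31 sparse view angles (step 3)
sparse_angles = np.linspace(0.0, 360.0, 31, endpoint=False)
sparse_scatter = 100.0 + 20.0 * np.sin(np.deg2rad(sparse_angles))  # toy angular variation

# Step 4: interpolate to all 364 projection angles, with wrap-around at 360 deg
all_angles = np.linspace(0.0, 360.0, 364, endpoint=False)
dense_scatter = np.interp(all_angles,
                          np.append(sparse_angles, 360.0),
                          np.append(sparse_scatter, sparse_scatter[0]))

# Step 5: subtract the interpolated scatter from (toy, constant) raw projections
corrected = np.full(364, 500.0) - dense_scatter
```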
Monte Carlo method-based QSAR modeling of penicillins binding to human serum proteins.
Veselinović, Jovana B; Toropov, Andrey A; Toropova, Alla P; Nikolić, Goran M; Veselinović, Aleksandar M
2015-01-01
The binding of penicillins to human serum proteins was modeled with optimal descriptors based on the Simplified Molecular Input-Line Entry System (SMILES). The concentrations of protein-bound drug for 87 penicillins expressed as percentage of the total plasma concentration were used as experimental data. The Monte Carlo method was used as a computational tool to build up the quantitative structure-activity relationship (QSAR) model for penicillins binding to plasma proteins. One random data split into training, test and validation set was examined. The calculated QSAR model had the following statistical parameters: r(2) = 0.8760, q(2) = 0.8665, s = 8.94 for the training set and r(2) = 0.9812, q(2) = 0.9753, s = 7.31 for the test set. For the validation set, the statistical parameters were r(2) = 0.727 and s = 12.52, but after removing the three worst outliers, the statistical parameters improved to r(2) = 0.921 and s = 7.18. SMILES-based molecular fragments (structural indicators) responsible for the increase and decrease of penicillins binding to plasma proteins were identified. The possibility of using these results for the computer-aided design of new penicillins with desired binding properties is presented.
The development of GPU-based parallel PRNG for Monte Carlo applications in CUDA Fortran
Kargaran, Hamed; Minuchehr, Abdolhamid; Zolfaghari, Ahmad
2016-04-01
The implementation of Monte Carlo simulation in CUDA Fortran requires fast random number generation with good statistical properties on the GPU. In this study, a GPU-based parallel pseudo-random number generator (GPPRNG) has been proposed for use in high performance computing systems. According to the type of GPU memory usage, the GPU scheme is divided into two work modes, GLOBAL_MODE and SHARED_MODE. To generate parallel random numbers based on the independent sequence method, a combination of the middle-square method and a chaotic map, along with the Xorshift PRNG, has been employed. Implementation of the developed GPPRNG on a single GPU showed a speedup of 150x and 470x (with respect to the speed of the PRNG on a single CPU core) for GLOBAL_MODE and SHARED_MODE, respectively. To evaluate the accuracy of the developed GPPRNG, its performance was compared to that of some other commercially available PPRNGs such as MATLAB, FORTRAN and the Miller-Park algorithm through specific standard tests. The results of this comparison showed that the GPPRNG developed in this study can be used as a fast and accurate tool for computational science applications.
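Two of the ingredients named above, the middle-square method (as a seed scrambler) and the Xorshift generator, can be sketched in Python (the original is CUDA Fortran; the chaotic-map component is omitted here, and the seed and digit counts are arbitrary choices for illustration):

```python
def xorshift32(state):
    """One step of the 32-bit Xorshift generator (Marsaglia 2003)."""
    state ^= (state << 13) & 0xFFFFFFFF
    state ^= state >> 17
    state ^= (state << 5) & 0xFFFFFFFF
    return state & 0xFFFFFFFF

def middle_square(seed, digits=8):
    """Middle-square step: square the seed and keep the middle digits."""
    s = str(seed * seed).zfill(2 * digits)
    mid = len(s) // 2
    return int(s[mid - digits // 2: mid + digits // 2])

# combine: scramble a seed with middle-square, then iterate Xorshift
state = middle_square(12345678) or 1      # Xorshift must not start at zero
samples = []
for _ in range(5):
    state = xorshift32(state)
    samples.append(state / 2**32)         # uniform floats in [0, 1)
```

In a parallel setting, each thread would get its own scrambled seed so that the per-thread Xorshift sequences are independent, which is the "independent sequence" idea the abstract refers to.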
The development of GPU-based parallel PRNG for Monte Carlo applications in CUDA Fortran
Directory of Open Access Journals (Sweden)
Hamed Kargaran
2016-04-01
Full Text Available The implementation of Monte Carlo simulation in CUDA Fortran requires fast random number generation with good statistical properties on the GPU. In this study, a GPU-based parallel pseudo-random number generator (GPPRNG) has been proposed for use in high performance computing systems. According to the type of GPU memory usage, the GPU scheme is divided into two work modes, GLOBAL_MODE and SHARED_MODE. To generate parallel random numbers based on the independent sequence method, a combination of the middle-square method and a chaotic map, along with the Xorshift PRNG, has been employed. Implementation of the developed GPPRNG on a single GPU showed a speedup of 150x and 470x (with respect to the speed of the PRNG on a single CPU core) for GLOBAL_MODE and SHARED_MODE, respectively. To evaluate the accuracy of the developed GPPRNG, its performance was compared to that of some other commercially available PPRNGs such as MATLAB, FORTRAN and the Miller-Park algorithm through specific standard tests. The results of this comparison showed that the GPPRNG developed in this study can be used as a fast and accurate tool for computational science applications.
A Monte Carlo-based treatment planning tool for proton therapy
Mairani, A.; Böhlen, T. T.; Schiavi, A.; Tessonnier, T.; Molinelli, S.; Brons, S.; Battistoni, G.; Parodi, K.; Patera, V.
2013-04-01
In the field of radiotherapy, Monte Carlo (MC) particle transport calculations are recognized for their superior accuracy in predicting dose and fluence distributions in patient geometries compared to analytical algorithms which are generally used for treatment planning due to their shorter execution times. In this work, a newly developed MC-based treatment planning (MCTP) tool for proton therapy is proposed to support treatment planning studies and research applications. It allows for single-field and simultaneous multiple-field optimization in realistic treatment scenarios and is based on the MC code FLUKA. Relative biological effectiveness (RBE)-weighted dose is optimized either with the common approach using a constant RBE of 1.1 or using a variable RBE according to radiobiological input tables. A validated reimplementation of the local effect model was used in this work to generate radiobiological input tables. Examples of treatment plans in water phantoms and in patient-CT geometries together with an experimental dosimetric validation of the plans are presented for clinical treatment parameters as used at the Italian National Center for Oncological Hadron Therapy. To conclude, a versatile MCTP tool for proton therapy was developed and validated for realistic patient treatment scenarios against dosimetric measurements and commercial analytical TP calculations. It is aimed to be used in future for research and to support treatment planning at state-of-the-art ion beam therapy facilities.
Zhai, Xue; Fei, Cheng-Wei; Choy, Yat-Sze; Wang, Jian-Jun
2017-01-01
To improve the accuracy and efficiency of computation models for complex structures, a stochastic model updating (SMU) strategy was proposed by combining an improved response surface model (IRSM) with an advanced Monte Carlo (MC) method, based on experimental static tests, prior information and uncertainties. Firstly, the IRSM and its mathematical model were developed with emphasis on the moving least-squares method, and the advanced MC simulation method was studied based on Latin hypercube sampling as well. The SMU procedure was then presented with an experimental static test for a complex structure. SMUs of a simply-supported beam and an aeroengine stator system (casings) were implemented to validate the proposed IRSM and advanced MC simulation method. The results show that (1) the SMU strategy holds high computational precision and efficiency for the SMU of complex structural systems; (2) the IRSM is demonstrated to be an effective model because its SMU time is far less than that of the traditional response surface method, which promises to improve the computational speed and accuracy of SMU; and (3) the advanced MC method observably decreases the number of samples from finite element simulations and the elapsed time of SMU. The efforts of this paper provide a promising SMU strategy for complex structures and enrich the theory of model updating.
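Latin hypercube sampling, the variance-reduction ingredient of the advanced MC method above, is straightforward to sketch: each dimension's range is split into as many strata as there are samples, and each stratum is hit exactly once (the sample count and dimensionality below are arbitrary):

```python
import random

def latin_hypercube(n_samples, n_dims, rng):
    """Latin hypercube sample on [0,1)^d: per dimension, shuffle the strata
    0..n_samples-1 and place one uniformly-jittered point in each stratum."""
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        for i, s in enumerate(strata):
            samples[i][d] = (s + rng.random()) / n_samples
    return samples

pts = latin_hypercube(10, 3, random.Random(0))
```

Compared with plain random sampling, this guarantees the marginals are evenly covered, which is why far fewer finite element evaluations are needed for the same accuracy.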
Saha, Krishnendu; Straus, Kenneth J; Chen, Yu; Glick, Stephen J
2014-08-28
To maximize sensitivity, it is desirable that ring Positron Emission Tomography (PET) systems dedicated for imaging the breast have a small bore. Unfortunately, due to parallax error this causes substantial degradation in spatial resolution for objects near the periphery of the breast. In this work, a framework for computing and incorporating an accurate system matrix into iterative reconstruction is presented in an effort to reduce spatial resolution degradation towards the periphery of the breast. The GATE Monte Carlo Simulation software was utilized to accurately model the system matrix for a breast PET system. A strategy for increasing the count statistics in the system matrix computation and for reducing the system element storage space was used by calculating only a subset of matrix elements and then estimating the rest of the elements by using the geometric symmetry of the cylindrical scanner. To implement this strategy, polar voxel basis functions were used to represent the object, resulting in a block-circulant system matrix. Simulation studies using a breast PET scanner model with ring geometry demonstrated improved contrast at 45% reduced noise level and 1.5 to 3 times resolution performance improvement when compared to MLEM reconstruction using a simple line-integral model. The GATE based system matrix reconstruction technique promises to improve resolution and noise performance and reduce image distortion at FOV periphery compared to line-integral based system matrix reconstruction.
Energy Technology Data Exchange (ETDEWEB)
Baba, Justin S [ORNL; John, Dwayne O [ORNL; Koju, Vijay [ORNL
2015-01-01
The propagation of light in turbid media is an active area of research with relevance to numerous investigational fields, e.g., biomedical diagnostics and therapeutics. The statistical random-walk nature of photon propagation through turbid media is ideal for computation-based modeling and simulation. Ready access to supercomputing resources provides a means for attaining brute force solutions to stochastic light-matter interactions entailing scattering, by facilitating timely propagation of sufficient (>10 million) photons while tracking characteristic parameters based on the incorporated physics of the problem. One such model that works well for isotropic scatter but fails for anisotropic scatter, which is the case for many biomedical sample scattering problems, is the diffusion approximation. In this report, we address this by utilizing Berry phase (BP) evolution as a means for capturing anisotropic scattering characteristics of samples in the preceding depth where the diffusion approximation fails. We extend the polarization sensitive Monte Carlo method of Ramella-Roman et al. [1] to include the computationally intensive tracking of photon trajectory in addition to polarization state at every scattering event. To speed up the computations, which entail the appropriate rotations of reference frames, the code was parallelized using OpenMP. The results presented reveal that BP is strongly correlated with the photon penetration depth, thus potentiating the possibility of polarimetric depth-resolved characterization of highly scattering samples, e.g., biological tissues.
Seismic wavefield imaging based on the replica exchange Monte Carlo method
Kano, Masayuki; Nagao, Hiromichi; Ishikawa, Daichi; Ito, Shin-ichi; Sakai, Shin'ichi; Nakagawa, Shigeki; Hori, Muneo; Hirata, Naoshi
2017-01-01
Earthquakes sometimes cause serious disasters not only directly by ground motion itself but also secondarily by infrastructure damage, particularly in densely populated urban areas that have capital functions. To reduce the number and severity of secondary disasters, it is important to evaluate seismic hazards rapidly by analysing the seismic responses of individual structures to input ground motions. We propose a method that integrates physics-based and data-driven approaches in order to obtain a seismic wavefield for use as input to a seismic response analysis. The new contribution of this study is the use of the replica exchange Monte Carlo (REMC) method, which is one of the Markov chain Monte Carlo (MCMC) methods, for estimation of a seismic wavefield, together with a 1-D local subsurface structure and source information. Numerical tests were conducted to verify the proposed method, using synthetic observation data obtained from analytical solutions for two horizontally layered subsurface structure models. The geometries of the observation sites were determined from the dense seismic observation array called the Metropolitan Seismic Observation network, which has been in operation in the Tokyo metropolitan area in Japan since 2007. The results of the numerical tests show that the proposed method is able to search the parameters related to the source and the local subsurface structure in a broader parameter space than the Metropolis method, which is an ordinary MCMC method. The proposed method successfully reproduces a seismic wavefield consistent with a true wavefield. In contrast, ordinary kriging, which is a classical data-driven interpolation method for spatial data, is hardly able to reproduce a true wavefield, even in the low frequency bands. This suggests that it is essential to employ both physics-based and data-driven approaches in seismic wavefield imaging, utilizing seismograms from a dense seismic array. The REMC method, which provides not only
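The replica exchange Monte Carlo method at the heart of the study above can be sketched on a toy problem: several Metropolis chains run at different temperatures, and neighbouring chains periodically attempt to swap states, letting the cold (target) chain escape local modes. The bimodal target, temperature ladder, step size, and iteration count below are all hypothetical illustration choices, not the seismic model.

```python
import math, random

def remc(log_post, x0, temps, n_iter, rng, step=0.5):
    """Minimal replica exchange MC: one Metropolis chain per temperature,
    with one random neighbour swap attempted each iteration."""
    xs = [x0] * len(temps)
    for _ in range(n_iter):
        for k, t in enumerate(temps):                # within-chain Metropolis update
            prop = xs[k] + rng.gauss(0.0, step)
            if math.log(rng.random() + 1e-300) < (log_post(prop) - log_post(xs[k])) / t:
                xs[k] = prop
        k = rng.randrange(len(temps) - 1)            # swap attempt between replicas k, k+1
        a = (1/temps[k] - 1/temps[k+1]) * (log_post(xs[k+1]) - log_post(xs[k]))
        if math.log(rng.random() + 1e-300) < a:
            xs[k], xs[k+1] = xs[k+1], xs[k]
    return xs[0]                                     # state of the target (T = 1) chain

# toy bimodal posterior: mixture of Gaussians at -3 and +3
logp = lambda x: math.log(math.exp(-0.5*(x+3)**2) + math.exp(-0.5*(x-3)**2) + 1e-300)
x = remc(logp, 0.0, [1.0, 2.0, 4.0], 500, random.Random(1))
```

The hot chains explore the broadened posterior freely; the swap acceptance ratio is the standard tempering expression, so detailed balance is preserved for the ensemble. This broader exploration is exactly the advantage over the plain Metropolis method that the abstract reports.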
Seismic wavefield imaging based on the replica exchange Monte Carlo method
Kano, Masayuki; Nagao, Hiromichi; Ishikawa, Daichi; Ito, Shin-ichi; Sakai, Shin'ichi; Nakagawa, Shigeki; Hori, Muneo; Hirata, Naoshi
2016-11-01
Earthquakes sometimes cause serious disasters not only directly by ground motion itself but also secondarily by infrastructure damage, particularly in densely populated urban areas that have capital functions. To reduce the number and severity of secondary disasters, it is important to evaluate seismic hazards rapidly by analyzing the seismic responses of individual structures to input ground motions. We propose a method that integrates physics-based and data-driven approaches in order to obtain a seismic wavefield for use as input to a seismic response analysis. The new contribution of this study is the use of the replica exchange Monte Carlo (REMC) method, which is one of the Markov chain Monte Carlo (MCMC) methods, for estimation of a seismic wavefield, together with a one-dimensional (1-D) local subsurface structure and source information. Numerical tests were conducted to verify the proposed method, using synthetic observation data obtained from analytical solutions for two horizontally-layered subsurface structure models. The geometries of the observation sites were determined from the dense seismic observation array called the Metropolitan Seismic Observation network (MeSO-net), which has been in operation in the Tokyo metropolitan area in Japan since 2007. The results of the numerical tests show that the proposed method is able to search the parameters related to the source and the local subsurface structure in a broader parameter space than the Metropolis method, which is an ordinary MCMC method. The proposed method successfully reproduces a seismic wavefield consistent with a true wavefield. In contrast, ordinary kriging, which is a classical data-driven interpolation method for spatial data, is hardly able to reproduce a true wavefield, even in the low frequency bands. This suggests that it is essential to employ both physics-based and data-driven approaches in seismic wavefield imaging, utilizing seismograms from a dense seismic array. The REMC method
Nievaart, V. A.; Daquino, G. G.; Moss, R. L.
2007-06-01
Boron Neutron Capture Therapy (BNCT) is a bimodal form of radiotherapy for the treatment of tumour lesions. Since the cancer cells in the treatment volume are targeted with 10B, a higher dose is given to these cancer cells due to the 10B(n,α)7Li reaction, in comparison with the surrounding healthy cells. In Petten (The Netherlands), at the High Flux Reactor, a specially tailored neutron beam has been designed and installed. Over 30 patients have been treated with BNCT in 2 clinical protocols: a phase I study for the treatment of glioblastoma multiforme and a phase II study on the treatment of malignant melanoma. Furthermore, activities concerning the extracorporeal treatment of metastases in the liver (from colorectal cancer) are in progress. The irradiation beam at the HFR contains both neutrons and gammas, which, together with the complex geometries of both patient and beam set-up, demands very detailed treatment planning calculations. A well designed Treatment Planning System (TPS) should obey the following general scheme: (1) a pre-processing phase (CT and/or MRI scans to create the geometric solid model, cross-section files for neutrons and/or gammas); (2) calculations (3D radiation transport, estimation of neutron and gamma fluences, macroscopic and microscopic dose); (3) a post-processing phase (display of the results, iso-doses and iso-fluences). Treatment planning in BNCT is performed making use of Monte Carlo codes incorporated in a framework which also includes the pre- and post-processing phases. In particular, the glioblastoma multiforme protocol used BNCT_rtpe, while the melanoma metastases protocol uses NCTPlan. In addition, an ad hoc Positron Emission Tomography (PET) based treatment planning system (BDTPS) has been implemented in order to integrate the real macroscopic boron distribution obtained from PET scanning. BDTPS is patented and uses MCNP as the calculation engine. The precision obtained by the Monte Carlo based TPSs exploited at Petten
Eutrophication of mangroves linked to depletion of foliar and soil base cations.
Fauzi, Anas; Skidmore, Andrew K; Heitkönig, Ignas M A; van Gils, Hein; Schlerf, Martin
2014-12-01
There is growing concern that increasing eutrophication causes degradation of coastal ecosystems. Studies in terrestrial ecosystems have shown that increasing the concentration of nitrogen in soils contributes to the acidification process, which leads to leaching of base cations. To test the effects of eutrophication on the availability of base cations in mangroves, we compared paired leaf and soil nutrient levels sampled in Nypa fruticans and Rhizophora spp. on a severely disturbed, i.e. nutrient loaded, site (Mahakam delta) with samples from an undisturbed, near-pristine site (Berau delta) in East Kalimantan, Indonesia. The findings indicate that under pristine conditions, the availability of base cations in mangrove soils is determined largely by salinity. Anthropogenic disturbances on the Mahakam site have resulted in eutrophication, which is related to lower levels of foliar and soil base cations. Path analysis suggests that increasing soil nitrogen reduces soil pH, which in turn reduces the levels of foliar and soil base cations in mangroves.
Development and validation of MCNPX-based Monte Carlo treatment plan verification system
Iraj Jabbari; Shahram Monadi
2015-01-01
A Monte Carlo treatment plan verification (MCTPV) system was developed for clinical treatment plan verification (TPV), especially for the conformal and intensity-modulated radiotherapy (IMRT) plans. In the MCTPV, the MCNPX code was used for particle transport through the accelerator head and the patient body. MCTPV has an interface with TiGRT planning system and reads the information which is needed for Monte Carlo calculation transferred in digital image communications in medicine-radiation ...
A GPU-based Monte Carlo dose calculation code for photon transport in a voxel phantom
Energy Technology Data Exchange (ETDEWEB)
Bellezzo, M.; Do Nascimento, E.; Yoriyaz, H., E-mail: mbellezzo@gmail.br [Instituto de Pesquisas Energeticas e Nucleares / CNEN, Av. Lineu Prestes 2242, Cidade Universitaria, 05508-000 Sao Paulo (Brazil)
2014-08-15
As the most accurate method to estimate absorbed dose in radiotherapy, the Monte Carlo method has been widely used in radiotherapy treatment planning. Nevertheless, its efficiency can be improved for clinical routine applications. In this paper, we present the CUBMC code, a GPU-based MC photon transport algorithm for dose calculation under the Compute Unified Device Architecture platform. The simulation of physical events is based on the algorithm used in Penelope, and the cross section table used is the one generated by the Material routine, also present in the Penelope code. Photons are transported in voxel-based geometries with different compositions. To demonstrate the capabilities of the algorithm developed in the present work, four 128 x 128 x 128 voxel phantoms have been considered. One of them is composed of a homogeneous water-based medium, the second is composed of bone, the third is composed of lung and the fourth is composed of a heterogeneous bone and vacuum geometry. Simulations were done considering a 6 MeV monoenergetic photon point source. Two distinct approaches were used for transport simulation. The first forces the photon to stop at every voxel frontier; the second is the Woodcock method, in which a stop at a voxel frontier is considered only when the material changes along the photon's travel line. Dose calculations using these methods are compared for validation with the Penelope and MCNP5 codes. Speed-up factors are compared using an Nvidia GTX 560-Ti GPU card against a 2.27 GHz Intel Xeon CPU processor. (Author)
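The Woodcock (delta) tracking approach mentioned above can be sketched in one dimension: free paths are sampled with a majorant cross section covering the whole geometry, and at each tentative collision the real interaction is accepted with probability mu(x)/mu_max, so voxel boundaries never interrupt the flight. The two-material slab and its coefficients below are toy values, not the CUBMC phantoms.

```python
import random, math

def woodcock_distance(mu_of, mu_max, x0, rng):
    """Sample the depth of the first real collision along one axis using
    Woodcock tracking: flights with the majorant mu_max, then accept a
    real collision with probability mu(x)/mu_max (otherwise it is virtual)."""
    x = x0
    while True:
        x += -math.log(1.0 - rng.random()) / mu_max   # exponential flight, majorant
        if rng.random() < mu_of(x) / mu_max:          # real vs. virtual collision
            return x

# toy two-material slab: mu = 0.2 /cm for x < 5 cm, 0.8 /cm beyond
mu = lambda x: 0.2 if x < 5.0 else 0.8
rng = random.Random(7)
depths = [woodcock_distance(mu, 0.8, 0.0, rng) for _ in range(1000)]
mean_depth = sum(depths) / len(depths)
```

Note that `mu_max` must bound `mu` everywhere; the analytic mean first-collision depth for these coefficients is about 3.6 cm, which the sample mean should approach.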
Kudrolli, Haris A.
2001-04-01
A three dimensional (3D) reconstruction procedure for Positron Emission Tomography (PET) based on inverse Monte Carlo analysis is presented. PET is a medical imaging modality which employs a positron emitting radio-tracer to give functional images of an organ's metabolic activity. This makes PET an invaluable tool in the detection of cancer and for in-vivo biochemical measurements. There are a number of analytical and iterative algorithms for image reconstruction of PET data. Analytical algorithms are computationally fast, but the assumptions intrinsic in the line integral model limit their accuracy. Iterative algorithms can apply accurate models for reconstruction and give improvements in image quality, but at an increased computational cost. These algorithms require the explicit calculation of the system response matrix, which may not be easy to calculate. This matrix gives the probability that a photon emitted from a certain source element will be detected in a particular detector line of response. The "Three Dimensional Stochastic Sampling" (SS3D) procedure implements iterative algorithms in a manner that does not require the explicit calculation of the system response matrix. It uses Monte Carlo techniques to simulate the process of photon emission from a source distribution and interaction with the detector. This technique has the advantage of being able to model complex detector systems and also take into account the physics of gamma ray interaction within the source and detector systems, which leads to an accurate image estimate. A series of simulation studies was conducted to validate the method using the Maximum Likelihood - Expectation Maximization (ML-EM) algorithm. The accuracy of the reconstructed images was improved by using an algorithm that required a priori knowledge of the source distribution. Means to reduce the computational time for reconstruction were explored by using parallel processors and algorithms that had faster convergence rates
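The ML-EM iteration used for validation above has a compact multiplicative update, x ← x · Aᵀ(y / Ax) / Aᵀ1, where A is the system response matrix and y the measured counts. A minimal NumPy sketch on a toy 2-voxel, 3-detector system (all matrix and activity values hypothetical):

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """ML-EM image update: x <- x * (A^T (y / (A x))) / (A^T 1)."""
    x = np.ones(A.shape[1])                   # flat initial image
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                          # forward projection
        proj[proj == 0] = 1e-12               # guard against division by zero
        x *= (A.T @ (y / proj)) / sens        # multiplicative update
    return x

# toy system: two voxels seen directly plus one shared line of response
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
x_true = np.array([2.0, 5.0])
y = A @ x_true                                # noise-free data
x_hat = mlem(A, y)
```

In the SS3D setting, the forward and back projections (`A @ x` and `A.T @ ...`) would be replaced by Monte Carlo simulations of emission and detection, which is precisely how the explicit system matrix is avoided.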
Qin, Nan; Botas, Pablo; Giantsoudi, Drosoula; Schuemann, Jan; Tian, Zhen; Jiang, Steve B.; Paganetti, Harald; Jia, Xun
2016-10-01
Monte Carlo (MC) simulation is commonly considered as the most accurate dose calculation method for proton therapy. Aiming at achieving fast MC dose calculations for clinical applications, we have previously developed a graphics-processing unit (GPU)-based MC tool, gPMC. In this paper, we report our recent updates on gPMC in terms of its accuracy, portability, and functionality, as well as comprehensive tests on this tool. The new version, gPMC v2.0, was developed under the OpenCL environment to enable portability across different computational platforms. Physics models of nuclear interactions were refined to improve calculation accuracy. Scoring functions of gPMC were expanded to enable tallying particle fluence, dose deposited by different particle types, and dose-averaged linear energy transfer (LETd). A multiple counter approach was employed to improve efficiency by reducing the frequency of memory writing conflict at scoring. For dose calculation, accuracy improvements over gPMC v1.0 were observed in both water phantom cases and a patient case. For a prostate cancer case planned using high-energy proton beams, dose discrepancies in beam entrance and target region seen in gPMC v1.0 with respect to results from the gold standard tool for proton Monte Carlo simulations (TOPAS) were substantially reduced, and the gamma test passing rate (1%/1 mm) was improved from 82.7% to 93.1%. The average relative difference in LETd between gPMC and TOPAS was 1.7%. The average relative differences in the dose deposited by primary, secondary, and other heavier particles were within 2.3%, 0.4%, and 0.2%. Depending on source proton energy and phantom complexity, it took 8-17 s on an AMD Radeon R9 290x GPU to simulate 10^7 source protons, achieving less than 1% average statistical uncertainty. As the beam size was reduced from 10 × 10 cm2 to 1 × 1 cm2, the time on scoring was only increased by 4.8% with eight counters, in contrast to a 40% increase using only
Mesh-based Monte Carlo code for fluorescence modeling in complex tissues with irregular boundaries
Wilson, Robert H.; Chen, Leng-Chun; Lloyd, William; Kuo, Shiuhyang; Marcelo, Cynthia; Feinberg, Stephen E.; Mycek, Mary-Ann
2011-07-01
There is a growing need for the development of computational models that can account for complex tissue morphology in simulations of photon propagation. We describe the development and validation of a user-friendly, MATLAB-based Monte Carlo code that uses analytically-defined surface meshes to model heterogeneous tissue geometry. The code can use information from non-linear optical microscopy images to discriminate the fluorescence photons (from endogenous or exogenous fluorophores) detected from different layers of complex turbid media. We present a specific application of modeling a layered human tissue-engineered construct (Ex Vivo Produced Oral Mucosa Equivalent, EVPOME) designed for use in repair of oral tissue following surgery. Second-harmonic generation microscopic imaging of an EVPOME construct (oral keratinocytes atop a scaffold coated with human type IV collagen) was employed to determine an approximate analytical expression for the complex shape of the interface between the two layers. This expression can then be inserted into the code to correct the simulated fluorescence for the effect of the irregular tissue geometry.
A global reaction route mapping-based kinetic Monte Carlo algorithm
Mitchell, Izaac; Irle, Stephan; Page, Alister J.
2016-07-01
We propose a new on-the-fly kinetic Monte Carlo (KMC) method that is based on exhaustive potential energy surface searching carried out with the global reaction route mapping (GRRM) algorithm. Starting from any given equilibrium state, this GRRM-KMC algorithm performs a one-step GRRM search to identify all surrounding transition states. Intrinsic reaction coordinate pathways are then calculated to identify potential subsequent equilibrium states. Harmonic transition state theory is used to calculate rate constants for all potential pathways, before a standard KMC accept/reject selection is performed. The selected pathway is then used to propagate the system forward in time, which is calculated on the basis of 1st order kinetics. The GRRM-KMC algorithm is validated here in two challenging contexts: intramolecular proton transfer in malonaldehyde and surface carbon diffusion on an iron nanoparticle. We demonstrate that in both cases the GRRM-KMC method is capable of reproducing the 1st order kinetics observed during independent quantum chemical molecular dynamics simulations using the density-functional tight-binding potential.
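The accept/reject selection and first-order time propagation described above follow standard KMC practice; a minimal sketch, assuming harmonic-TST rates of the form k = ν·exp(−Ea/kBT). The barrier values and attempt frequency ν are invented placeholders, not taken from the paper:

```python
import math, random

def kmc_step(barriers_eV, T=300.0, nu=1e13, rng=random.Random(0)):
    """One KMC accept/reject step: harmonic-TST rates are computed from
    the barriers of the pathways found around the current minimum, a
    pathway is chosen with probability proportional to its rate, and the
    clock is advanced by an exponential waiting time (1st order kinetics)."""
    kB = 8.617333262e-5                      # Boltzmann constant, eV/K
    rates = [nu * math.exp(-Ea / (kB * T)) for Ea in barriers_eV]
    k_tot = sum(rates)
    # select pathway: cumulative-rate search against a uniform deviate
    r = rng.random() * k_tot
    acc = 0.0
    chosen = len(rates) - 1                  # fallback for rounding
    for i, k in enumerate(rates):
        acc += k
        if r <= acc:
            chosen = i
            break
    dt = -math.log(rng.random()) / k_tot     # exponential waiting time
    return chosen, dt
```

At 300 K a 0.3 eV barrier is selected overwhelmingly more often than a 0.8 eV one, reflecting the exponential sensitivity of the rates to the barrier height.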
Absorbed Dose Calculations Using Mesh-based Human Phantoms And Monte Carlo Methods
Kramer, Richard
2011-08-01
Health risks attributable to the exposure to ionizing radiation are considered to be a function of the absorbed or equivalent dose to radiosensitive organs and tissues. However, as human tissue cannot express itself in terms of equivalent dose, exposure models have to be used to determine the distribution of equivalent dose throughout the human body. An exposure model, be it physical or computational, consists of a representation of the human body, called phantom, plus a method for transporting ionizing radiation through the phantom and measuring or calculating the equivalent dose to organ and tissues of interest. The FASH2 (Female Adult meSH) and the MASH2 (Male Adult meSH) computational phantoms have been developed at the University of Pernambuco in Recife/Brazil based on polygon mesh surfaces using open source software tools and anatomical atlases. Representing standing adults, FASH2 and MASH2 have organ and tissue masses, body height and body mass adjusted to the anatomical data published by the International Commission on Radiological Protection for the reference male and female adult. For the purposes of absorbed dose calculations the phantoms have been coupled to the EGSnrc Monte Carlo code, which can transport photons, electrons and positrons through arbitrary media. This paper reviews the development of the FASH2 and the MASH2 phantoms and presents dosimetric applications for X-ray diagnosis and for prostate brachytherapy.
IVF cycle cost estimation using Activity Based Costing and Monte Carlo simulation.
Cassettari, Lucia; Mosca, Marco; Mosca, Roberto; Rolando, Fabio; Costa, Mauro; Pisaturo, Valerio
2016-03-01
The authors present a new methodological approach, in a stochastic regime, to determine the actual costs of a healthcare process. The paper specifically shows the application of the methodology to determining the cost of an assisted reproductive technology (ART) treatment in Italy. The motivation for this research is that a deterministic regime is inadequate for an accurate estimate of the cost of this particular treatment: the durations of the different activities involved are not fixed and are described by frequency distributions. Hence the need to determine, in addition to the mean value of the cost, the interval within which it varies with a known confidence level. Consequently, the cost obtained for each type of cycle investigated (in vitro fertilization and embryo transfer with or without intracytoplasmic sperm injection) shows tolerance intervals around the mean value sufficiently narrow to make the data statistically robust and therefore usable as a reference for benchmarking against other countries. Methodologically, the approach is rigorous: it uses both Activity Based Costing, to determine the cost of the individual activities of the process, and Monte Carlo simulation, with control of the experimental error, to construct the tolerance intervals on the final result.
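The combination of Activity Based Costing with Monte Carlo sampling of activity durations can be sketched as follows; the activity list, cost rates, and triangular duration distributions are invented placeholders, not the paper's data:

```python
import random, statistics

def simulate_cycle_cost(activities, n=20000, seed=42):
    """Activity Based Costing under uncertainty: each activity has an
    hourly cost rate and a duration drawn from a triangular frequency
    distribution (min, mode, max, in hours). Monte Carlo sampling yields
    the mean cycle cost and an empirical 95% interval around it."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n):
        cost = sum(rate * rng.triangular(lo, hi, mode)
                   for rate, (lo, mode, hi) in activities)
        totals.append(cost)
    totals.sort()
    mean = statistics.fmean(totals)
    interval = (totals[int(0.025 * n)], totals[int(0.975 * n)])
    return mean, interval

# hypothetical activities: (cost rate EUR/h, (min, mode, max) duration h)
acts = [(120.0, (0.5, 1.0, 2.0)),   # e.g. oocyte retrieval
        (200.0, (2.0, 3.0, 5.0)),   # e.g. laboratory fertilization work
        (90.0,  (0.5, 0.75, 1.5))]  # e.g. embryo transfer
```

The interval narrows as 1/√n only for its Monte Carlo estimation error; its width itself reflects the genuine spread of the activity durations, which is the quantity the paper argues a deterministic costing cannot capture.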
Monte Carlo based verification of a beam model used in a treatment planning system
Wieslander, E.; Knöös, T.
2008-02-01
Modern treatment planning systems (TPSs) usually separate the dose modelling into a beam modelling phase, describing the beam exiting the accelerator, followed by a subsequent dose calculation in the patient. The aim of this work is to use the Monte Carlo code system EGSnrc to study the modelling of head scatter as well as the transmission through the multi-leaf collimator (MLC) and diaphragms in the beam model used in a commercial TPS (MasterPlan, Nucletron B.V.). An Elekta Precise linear accelerator equipped with an MLC has been modelled in BEAMnrc, based on available information from the vendor regarding the material and geometry of the treatment head. The collimation in the MLC direction consists of leaves complemented by a backup diaphragm. The characteristics of the electron beam, i.e., energy and spot size, impinging on the target have been tuned to match measured data. Phase spaces from simulations of the treatment head are used to extract the scatter from, e.g., the flattening filter and the collimating structures. Similar data for the source models used in the TPS are extracted from the treatment planning system, thus a comprehensive analysis is possible. Simulations in a water phantom, with DOSXYZnrc, are also used to study the modelling of the MLC and the diaphragms by the TPS. The results from this study will be helpful to understand the limitations of the model in the TPS and provide knowledge for further improvements of the TPS source modelling.
Iravanian, Shahriar; Kanu, Uche B; Christini, David J
2012-07-01
Cardiac repolarization alternans is an electrophysiologic condition identified by a beat-to-beat fluctuation in action potential waveform. It has been mechanistically linked to instances of T-wave alternans, a clinically defined ECG alternation in T-wave morphology, and associated with the onset of cardiac reentry and sudden cardiac death. Many alternans detection algorithms have been proposed in the past, but the majority have been designed specifically for use with T-wave alternans. Action potential duration (APD) signals obtained from experiments (especially those derived from optical mapping) possess unique characteristics, which require the development and use of a more appropriate alternans detection method. In this paper, we present a new class of algorithms, based on the Monte Carlo method, for the detection and quantitative measurement of alternans. Specifically, we derive a set of algorithms (one an analytical and more efficient version of the other) and compare their performance with the standard spectral method and the generalized likelihood ratio test algorithm using synthetic APD sequences and optical mapping data obtained from an alternans control experiment. We demonstrate the benefits of the new algorithm in the presence of Gaussian and Laplacian noise and frame-shift errors. The proposed algorithms are well suited for experimental applications, and furthermore, have low complexity and are implementable using fixed-point arithmetic, enabling potential use with implantable cardiac devices.
Geant4-based Monte Carlo simulations on GPU for medical applications.
Bert, Julien; Perez-Ponce, Hector; El Bitar, Ziad; Jan, Sébastien; Boursier, Yannick; Vintache, Damien; Bonissent, Alain; Morel, Christian; Brasse, David; Visvikis, Dimitris
2013-08-21
Monte Carlo simulation (MCS) plays a key role in medical applications, especially for emission tomography and radiotherapy. However, MCS is also associated with long calculation times that prevent its use in routine clinical practice. Recently, graphics processing units (GPU) have become, in many domains, a low-cost alternative for acquiring high computational power. The objective of this work was to develop an efficient framework for the implementation of MCS on GPU architectures. Geant4 was chosen as the MCS engine given the large variety of physics processes available for targeting different medical imaging and radiotherapy applications. In addition, Geant4 is the MCS engine behind GATE, which is currently the most popular simulation platform for medical applications. We propose the definition of a global strategy and associated structures for such a GPU-based simulation implementation. Different photon and electron physics effects are resolved on the fly directly on the GPU without any approximations with respect to Geant4. Validations have shown equivalence in the underlying photon and electron physics processes between the Geant4 and the GPU codes, with a speedup factor of 80-90. More clinically realistic simulations in emission and transmission imaging led to acceleration factors of 400 and 800, respectively, compared to corresponding GATE simulations.
Auxiliary-field-based trial wave functions in quantum Monte Carlo calculations
Chang, Chia-Chen; Rubenstein, Brenda M.; Morales, Miguel A.
2016-12-01
Quantum Monte Carlo (QMC) algorithms have long relied on Jastrow factors to incorporate dynamic correlation into trial wave functions. While Jastrow-type wave functions have been widely employed in real-space algorithms, they have seen limited use in second-quantized QMC methods, particularly in projection methods that involve a stochastic evolution of the wave function in imaginary time. Here we propose a scheme for generating Jastrow-type correlated trial wave functions for auxiliary-field QMC methods. The method is based on decoupling the two-body Jastrow into one-body projectors coupled to auxiliary fields, which then operate on a single determinant to produce a multideterminant trial wave function. We demonstrate that intelligent sampling of the most significant determinants in this expansion can produce compact trial wave functions that reduce errors in the calculated energies. Our technique may be readily generalized to accommodate a wide range of two-body Jastrow factors and applied to a variety of model and chemical systems.
Chi, Yujie; Tian, Zhen; Jia, Xun
2016-08-01
Monte Carlo (MC) particle transport simulation on a graphics-processing unit (GPU) platform has been extensively studied recently due to the efficiency advantage achieved via massive parallelization. Almost all of the existing GPU-based MC packages were developed for voxelized geometry. This limits the application scope of these packages. The purpose of this paper is to develop a module to model parametric geometry and integrate it into GPU-based MC simulations. In our module, each continuous region was defined by its bounding surfaces, which were parameterized by quadratic functions. Particle navigation functions in this geometry were developed. The module was incorporated into two previously developed GPU-based MC packages and was tested in two example problems: (1) low energy photon transport simulation in a brachytherapy case with a shielded cylinder applicator and (2) MeV coupled photon/electron transport simulation in a phantom containing several inserts of different shapes. In both cases, the calculated dose distributions agreed well with those calculated in the corresponding voxelized geometry. The averaged dose differences were 1.03% and 0.29%, respectively. We also used the developed package to perform simulations of a Varian VS 2000 brachytherapy source and generated a phase-space file. The computation time under the parameterized geometry depended on the memory location storing the geometry data. When the data was stored in the GPU's shared memory, the highest computational speed was achieved. Incorporation of parameterized geometry yielded a computation time that was ~3 times that in the corresponding voxelized geometry. We also developed a strategy to use an auxiliary index array to reduce the frequency of geometry calculations and hence improve efficiency. With this strategy, the computational time ranged in 1.75-2.03 times of the voxelized geometry for coupled photon/electron transport depending on the voxel dimension of the auxiliary index array, and in 0
Chiavassa, S; Aubineau-Lanièce, I; Bitar, A; Lisbona, A; Barbet, J; Franck, D; Jourdain, J R; Bardiès, M
2006-02-07
Dosimetric studies are necessary for all patients treated with targeted radiotherapy. In order to attain the precision required, we have developed Oedipe, a dosimetric tool based on the MCNPX Monte Carlo code. The anatomy of each patient is considered in the form of a voxel-based geometry created using computed tomography (CT) images or magnetic resonance imaging (MRI). Oedipe enables dosimetry studies to be carried out at the voxel scale. Validation of the results obtained by comparison with existing methods is complex because there are multiple sources of variation: calculation methods (different Monte Carlo codes, point kernel), patient representations (model or specific) and geometry definitions (mathematical or voxel-based). In this paper, we validate Oedipe by taking each of these parameters into account independently. Monte Carlo methodology requires long calculation times, particularly in the case of voxel-based geometries, and this is one of the limits of personalized dosimetric methods. However, our results show that the use of voxel-based geometry as opposed to a mathematically defined geometry decreases the calculation time two-fold, due to an optimization of the MCNPX2.5e code. It is therefore possible to envisage the use of Oedipe for personalized dosimetry in the clinical context of targeted radiotherapy.
Energy Technology Data Exchange (ETDEWEB)
Weathers, J.B. [Shock, Noise, and Vibration Group, Northrop Grumman Shipbuilding, P.O. Box 149, Pascagoula, MS 39568 (United States)], E-mail: James.Weathers@ngc.com; Luck, R. [Department of Mechanical Engineering, Mississippi State University, 210 Carpenter Engineering Building, P.O. Box ME, Mississippi State, MS 39762-5925 (United States)], E-mail: Luck@me.msstate.edu; Weathers, J.W. [Structural Analysis Group, Northrop Grumman Shipbuilding, P.O. Box 149, Pascagoula, MS 39568 (United States)], E-mail: Jeffrey.Weathers@ngc.com
2009-11-15
The complexity of mathematical models used by practicing engineers is increasing due to the growing availability of sophisticated mathematical modeling tools and ever-improving computational power. For this reason, the need to define a well-structured process for validating these models against experimental results has become a pressing issue in the engineering community. This validation process is partially characterized by the uncertainties associated with the modeling effort as well as the experimental results. The net impact of the uncertainties on the validation effort is assessed through the 'noise level of the validation procedure', which can be defined as an estimate of the 95% confidence uncertainty bounds for the comparison error between actual experimental results and model-based predictions of the same quantities of interest. Although general descriptions associated with the construction of the noise level using multivariate statistics exists in the literature, a detailed procedure outlining how to account for the systematic and random uncertainties is not available. In this paper, the methodology used to derive the covariance matrix associated with the multivariate normal pdf based on random and systematic uncertainties is examined, and a procedure used to estimate this covariance matrix using Monte Carlo analysis is presented. The covariance matrices are then used to construct approximate 95% confidence constant probability contours associated with comparison error results for a practical example. In addition, the example is used to show the drawbacks of using a first-order sensitivity analysis when nonlinear local sensitivity coefficients exist. Finally, the example is used to show the connection between the noise level of the validation exercise calculated using multivariate and univariate statistics.
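The Monte Carlo construction of the covariance matrix described above — systematic uncertainties shared across all quantities of interest within a trial, random uncertainties drawn independently — can be sketched as follows. The standard deviations are illustrative inputs, not values from the paper:

```python
import numpy as np

def mc_covariance(sys_std, rand_std, n=50000, seed=0):
    """Estimate the covariance matrix of the comparison-error vector by
    Monte Carlo: one systematic error is drawn per trial and shared by
    every quantity of interest (creating off-diagonal correlation),
    while random errors are drawn independently per quantity."""
    rng = np.random.default_rng(seed)
    m = len(rand_std)
    # systematic draw broadcast to every quantity in the trial
    sys_err = rng.normal(0.0, sys_std, size=(n, 1))
    rnd_err = rng.normal(0.0, rand_std, size=(n, m))
    errors = sys_err + rnd_err
    return np.cov(errors, rowvar=False)
```

With a systematic standard deviation of 0.5 and random standard deviations of 1.0 for two quantities, the diagonal converges to 0.5² + 1.0² = 1.25 and the off-diagonal to the shared systematic variance 0.25; the resulting matrix is what would be used to construct the 95% constant-probability contours of the comparison error.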
Monte Carlo-based revised values of dose rate constants at discrete photon energies
Directory of Open Access Journals (Sweden)
T Palani Selvam
2014-01-01
Absorbed dose rate to water at 0.2 cm and 1 cm due to a point isotropic photon source, as a function of photon energy, is calculated using the EDKnrc user-code of the EGSnrc Monte Carlo system. This code system utilizes the widely used XCOM photon cross-section dataset for the calculation of absorbed dose to water. From these dose rates, dose rate constants are calculated. The air-kerma strength S_k needed for deriving the dose rate constant is based on the mass-energy absorption coefficient compilations of Hubbell and Seltzer published in 1995. A comparison of the absorbed dose rates in water at the above distances to the published values reflects the differences in photon cross-section datasets in the low-energy region (the difference is up to 2% in dose rate values at 1 cm in the energy range 30-50 keV, and up to 4% at 0.2 cm at 30 keV). A maximum difference of about 8% is observed in the dose rate value at 0.2 cm at 1.75 MeV when compared with the published value. S_k calculations based on the compilation of Hubbell and Seltzer show a difference of up to 2.5% in the low-energy region (20-50 keV) when compared with the published values. The deviations observed in the values of dose rate and S_k affect the values of the dose rate constants by up to 3%.
Directory of Open Access Journals (Sweden)
Vahid Moslemi
2011-03-01
Introduction: In brachytherapy, radioactive sources are placed close to the tumor; therefore, small changes in their positions can cause large changes in the dose distribution. This emphasizes the need for computerized treatment planning. The usual method for treatment planning of cervix brachytherapy uses conventional radiographs in the Manchester system. Nowadays, because of their advantages in locating the source positions and the surrounding tissues, CT and MRI images are replacing conventional radiographs. In this study, we used CT images in Monte Carlo based dose calculation for brachytherapy treatment planning, using interface software to create the geometry file required by the MCNP code. The aim of using the interface software is to facilitate and speed up the geometry set-up for simulations based on the patient's anatomy. This paper examines the feasibility of this method in cervix brachytherapy and assesses its accuracy and speed. Material and Methods: For dosimetric measurements regarding the treatment plan, a pelvic phantom was made from polyethylene in which the treatment applicators could be placed. For simulations using CT images, the phantom was scanned at 120 kVp. Using interface software written in MATLAB, the CT images were converted into an MCNP input file and the simulation was then performed. Results: Using the interface software, the preparation time for simulating the applicator and surrounding structures was approximately 3 minutes; the corresponding time needed with conventional MCNP geometry entry was approximately 1 hour. The discrepancy between the simulated and measured doses to point A was 1.7% of the prescribed dose. The corresponding dose differences between the two methods in the rectum and bladder were 3.0% and 3.7% of the prescribed dose, respectively. Comparing the results of simulation using the interface software with those of simulation using the standard MCNP geometry entry showed a less than 1
Directory of Open Access Journals (Sweden)
Joko Siswantoro
2014-11-01
Volume is one of the important issues in the production and processing of food products. Traditionally, volume measurement is performed using the water displacement method based on Archimedes' principle, which is inaccurate and considered destructive. Computer vision offers an accurate and nondestructive method for measuring the volume of food products. This paper proposes an algorithm for volume measurement of irregularly shaped food products using computer vision based on the Monte Carlo method. Five images of the object were acquired from five different views and then processed to obtain the silhouettes of the object. From these silhouettes, the Monte Carlo method was used to approximate the volume of the object. The simulation results show that the algorithm produces high accuracy and precision in volume measurement.
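The silhouette-based Monte Carlo volume approximation can be sketched as follows: points are sampled uniformly in a bounding box and kept only if their projections fall inside the object's silhouette in every view. Analytic silhouettes of a unit sphere stand in for the segmented camera images of the paper; note that for a sphere viewed along the three axes the method returns the visual hull (the tricylinder intersection, volume 8(2−√2) ≈ 4.69), slightly above the true sphere volume 4π/3 ≈ 4.19, illustrating why more views improve accuracy:

```python
import random

def volume_from_silhouettes(inside_silhouette, bounds, views,
                            n=100000, seed=1):
    """Monte Carlo volume from silhouettes: sample points uniformly in
    the bounding box; a point contributes to the volume only if its
    projection lies inside the object silhouette in every view."""
    rng = random.Random(seed)
    (x0, x1), (y0, y1), (z0, z1) = bounds
    box_volume = (x1 - x0) * (y1 - y0) * (z1 - z0)
    hits = 0
    for _ in range(n):
        p = (rng.uniform(x0, x1), rng.uniform(y0, y1), rng.uniform(z0, z1))
        if all(inside_silhouette(p, v) for v in views):
            hits += 1
    return box_volume * hits / n

# toy object: unit sphere; its silhouette along any axis is a unit disk
def sphere_silhouette(p, axis):
    u, v = [c for i, c in enumerate(p) if i != axis]
    return u * u + v * v <= 1.0
```

In the paper's setting the `inside_silhouette` test would query the binary silhouette image of each of the five views rather than an analytic expression.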
Monte-Carlo simulation of an ultra small-angle neutron scattering instrument based on Soller slits
Energy Technology Data Exchange (ETDEWEB)
Rieker, T. [Univ. of New Mexico, Albuquerque, NM (United States); Hubbard, P. [Sandia National Labs., Albuquerque, NM (United States)
1997-09-01
Monte Carlo simulations are used to investigate an ultra small-angle neutron scattering instrument for use at a pulsed source, based on a Soller slit collimator and analyzer. The simulations show that for a q_min of approximately 1×10^-4 Å^-1 (15 Å neutrons), a few tenths of a percent of the incident flux is transmitted through both collimators at q = 0.
Vrshek-Schallhorn, Suzanne; Wahlstrom, Dustin; White, Tonya; Luciana, Monica
2013-04-01
Despite interest in dopamine's role in emotion-based decision-making, few reports of the effects of dopamine manipulations are available in this area in humans. This study investigates dopamine's role in emotion-based decision-making through a common measure of this construct, the Iowa Gambling Task (IGT), using Acute Tyrosine Phenylalanine Depletion (ATPD). In a between-subjects design, 40 healthy adults were randomized to receive either an ATPD beverage or a balanced amino acid beverage (a control) prior to completing the IGT, as well as pre- and post-manipulation blood draws for the neurohormone prolactin. Together with conventional IGT performance metrics, choice selections and response latencies were examined separately for good and bad choices before and after several key punishment events. Changes in response latencies were also used to predict total task performance. Prolactin levels increased significantly in the ATPD group but not in the control group. However, no significant group differences in performance metrics were detected, nor were there sex differences in outcome measures. Nonetheless, the balanced group's bad deck latencies sped up across the task, while the ATPD group's latencies remained adaptively hesitant. Additionally, modulation of latencies to the bad decks predicted total score for the ATPD group only. One interpretation is that ATPD subtly attenuated reward salience and altered the approach by which individuals achieved successful performance, without resulting in frank group differences in task performance.
Monte Carlo based protocol for cell survival and tumour control probability in BNCT
Ye, Sung-Joon
1999-02-01
A mathematical model to calculate the theoretical cell survival probability (nominally, the cell survival fraction) is developed to evaluate preclinical treatment conditions for boron neutron capture therapy (BNCT). A treatment condition is characterized by the neutron beam spectra, single or bilateral exposure, and the choice of boron carrier drug (boronophenylalanine (BPA) or boron sulfhydryl hydride (BSH)). The cell survival probability defined from Poisson statistics is expressed with the cell-killing yield, the (n,α) reaction density, and the tolerable neutron fluence. The radiation transport calculation from the neutron source to tumours is carried out using Monte Carlo methods: (i) reactor-based BNCT facility modelling to yield the neutron beam library at an irradiation port; (ii) dosimetry to limit the neutron fluence below a tolerance dose (10.5 Gy-Eq); (iii) calculation of the (n,α) reaction density in tumours. A shallow surface tumour could be effectively treated by single exposure producing an average cell survival probability of - for probable ranges of the cell-killing yield for the two drugs, while a deep tumour will require bilateral exposure to achieve comparable cell kills at depth. With very pure epithermal beams eliminating thermal, low epithermal and fast neutrons, the cell survival can be decreased by factors of 2-10 compared with the unmodified neutron spectrum. A dominant effect of cell-killing yield on tumour cell survival demonstrates the importance of choice of boron carrier drug. However, these calculations do not indicate an unambiguous preference for one drug, due to the large overlap of tumour cell survival in the probable ranges of the cell-killing yield for the two drugs. The cell survival value averaged over a bulky tumour volume is used to predict the overall BNCT therapeutic efficacy, using a simple model of tumour control probability (TCP).
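The Poisson definition of survival used above, together with a simple TCP model, reduces to two one-line formulas. This sketch uses invented parameter values and deliberately omits all of the neutron transport physics that produces the reaction density:

```python
import math

def cell_survival(killing_yield, reaction_density):
    """Poisson-statistics survival: if lethal events per cell follow a
    Poisson law with mean m = (cell-killing yield) x ((n,alpha) reaction
    density per cell), survival is the zero-event probability exp(-m)."""
    return math.exp(-killing_yield * reaction_density)

def tcp(n_clonogens, survival):
    """Simple tumour control probability: the chance that zero clonogenic
    cells survive out of n_clonogens, again via Poisson statistics."""
    return math.exp(-n_clonogens * survival)
```

The exponential dependence on the cell-killing yield is what the paper identifies as the dominant effect: a modest change in yield shifts survival by orders of magnitude, while TCP then depends on that survival multiplied by the (large) clonogen count.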
GPU-based fast Monte Carlo dose calculation for proton therapy.
Jia, Xun; Schümann, Jan; Paganetti, Harald; Jiang, Steve B
2012-12-07
Accurate radiation dose calculation is essential for successful proton radiotherapy. Monte Carlo (MC) simulation is considered to be the most accurate method. However, the long computation time limits it from routine clinical applications. Recently, graphics processing units (GPUs) have been widely used to accelerate computationally intensive tasks in radiotherapy. We have developed a fast MC dose calculation package, gPMC, for proton dose calculation on a GPU. In gPMC, proton transport is modeled by the class II condensed history simulation scheme with a continuous slowing down approximation. Ionization, elastic and inelastic proton nucleus interactions are considered. Energy straggling and multiple scattering are modeled. Secondary electrons are not transported and their energies are locally deposited. After an inelastic nuclear interaction event, a variety of products are generated using an empirical model. Among them, charged nuclear fragments are terminated with energy locally deposited. Secondary protons are stored in a stack and transported after finishing transport of the primary protons, while secondary neutral particles are neglected. gPMC is implemented on the GPU under the CUDA platform. We have validated gPMC using the TOPAS/Geant4 MC code as the gold standard. For various cases including homogeneous and inhomogeneous phantoms as well as a patient case, good agreements between gPMC and TOPAS/Geant4 are observed. The gamma passing rate for the 2%/2 mm criterion is over 98.7% in the region with dose greater than 10% maximum dose in all cases, excluding low-density air regions. With gPMC it takes only 6-22 s to simulate 10 million source protons to achieve ∼1% relative statistical uncertainty, depending on the phantoms and energy. This is an extremely high efficiency compared to the computational time of tens of CPU hours for TOPAS/Geant4. Our fast GPU-based code can thus facilitate the routine use of MC dose calculation in proton therapy.
GPU-based fast Monte Carlo simulation for radiotherapy dose calculation.
Jia, Xun; Gu, Xuejun; Graves, Yan Jiang; Folkerts, Michael; Jiang, Steve B
2011-11-21
Monte Carlo (MC) simulation is commonly considered to be the most accurate dose calculation method in radiotherapy. However, its efficiency still requires improvement for many routine clinical applications. In this paper, we present our recent progress toward the development of a graphics processing unit (GPU)-based MC dose calculation package, gDPM v2.0. It utilizes the parallel computation ability of a GPU to achieve high efficiency, while maintaining the same particle transport physics as in the original dose planning method (DPM) code and hence the same level of simulation accuracy. In GPU computing, divergence of execution paths between threads can considerably reduce the efficiency. Since photons and electrons undergo different physics and hence attain different execution paths, we use a simulation scheme where photon transport and electron transport are separated to partially relieve the thread divergence issue. A high-performance random number generator and a hardware linear interpolation are also utilized. We have also developed various components to handle the fluence map and linac geometry, so that gDPM can be used to compute dose distributions for realistic IMRT or VMAT treatment plans. Our gDPM package is tested for its accuracy and efficiency in both phantoms and realistic patient cases. In all cases, the average relative uncertainties are less than 1%. A statistical t-test is performed and the dose difference between the CPU and the GPU results is not found to be statistically significant in over 96% of the high dose region and over 97% of the entire region. Speed-up factors of 69.1 ∼ 87.2 have been observed using an NVIDIA Tesla C2050 GPU card against a 2.27 GHz Intel Xeon CPU processor. For realistic IMRT and VMAT plans, MC dose calculation can be completed with less than 1% standard deviation in 36.1 ∼ 39.6 s using gDPM.
Monte Carlo based NMR simulations of open fractures in porous media
Lukács, Tamás; Balázs, László
2014-05-01
According to the basic principles of nuclear magnetic resonance (NMR), a measurement's free induction decay curve has an exponential character whose parameter is the transverse relaxation time, T2, given by the Bloch equations in the rotating frame. Our simulations address the particular case in which the bulk volume is negligible relative to the whole system and vertical movement is essentially zero, so the diffusion term of the T2 relation can be dropped. Such small-aperture situations are common in sedimentary layers, and the smallness of the observed volume allows us to work with just the bulk relaxation and the surface relaxation. The simulation uses the Monte Carlo method: a random-walk generator reproduces the Brownian motion of the particles from uniformly distributed, pseudorandom generated numbers. An attached differential equation provides the bulk relaxation, while the initial and iterated conditions guarantee the simulation's replicability and yield consistent estimates. We generate an initial geometry of a plane segment of known height with a given number of particles; the spatial distribution is set equal in each simulation, and the surface-to-volume ratio remains constant. It follows that, for a given thickness of the open fracture, the surface relaxivity is determinable from the fitted curve's parameter. The calculated T2 distribution curves also indicate the inconstancy in the observed fracture situations. Varying the height of the lamina at a constant diffusion coefficient likewise produces a characteristic anomaly, and for comparison we have run the simulation with the same initial volume, number of particles and conditions in spherical bulks, whose profiles are clear and easy to understand. The surface relaxation enables us to estimate the interaction between the boundary materials with these two geometrically well-defined bulks, therefore the distribution takes as a
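The random-walk relaxation scheme described above can be sketched in one dimension: spins walk between two relaxing walls (an open fracture of aperture `height_um`), lose magnetization continuously at the bulk rate 1/T2, and are absorbed on wall contact with a fixed probability standing in for the surface relaxivity. All parameter values and the absorption model are illustrative simplifications, not the paper's:

```python
import math, random

def t2_decay(height_um=10.0, wall_kill_p=0.05, t2_bulk_ms=2000.0,
             step_um=0.1, n_particles=2000, n_steps=2000, seed=7):
    """1D random-walk NMR simulation in a slab: each time step (1 ms,
    a toy scale) a particle hops +/- step_um; hitting a wall relaxes the
    spin with probability wall_kill_p (surface relaxation), and surviving
    spins decay exponentially at the bulk rate (bulk relaxation). The
    total magnetization per step gives the decay curve to be fitted."""
    rng = random.Random(seed)
    dt_ms = 1.0
    pos = [rng.uniform(0.0, height_um) for _ in range(n_particles)]
    weight = [1.0] * n_particles
    magnetization = []
    for _ in range(n_steps):
        total = 0.0
        for i in range(n_particles):
            if weight[i] == 0.0:
                continue
            pos[i] += step_um if rng.random() < 0.5 else -step_um
            if pos[i] <= 0.0 or pos[i] >= height_um:
                pos[i] = min(max(pos[i], 0.0), height_um)
                if rng.random() < wall_kill_p:   # surface relaxation
                    weight[i] = 0.0
                    continue
            weight[i] *= math.exp(-dt_ms / t2_bulk_ms)  # bulk relaxation
            total += weight[i]
        magnetization.append(total / n_particles)
    return magnetization
```

Fitting an exponential to the returned curve gives an effective T2 that shrinks as the aperture (and hence the surface-to-volume ratio) shrinks, which is the dependence the paper exploits to back out the surface relaxivity.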
Jenkins, C. M.; Godang, R.; Cavaglia, M.; Cremaldi, L.; Summers, D.
2008-10-01
The 14 TeV center-of-mass proton-proton collisions at the LHC open the possibility for new physics, including the possible formation of microscopic black holes. A Fortran-based Monte Carlo event generator program called CATFISH (Collider grAviTational FIeld Simulator for black Holes) has been developed at the University of Mississippi to study signatures of microscopic black hole production (http://www.phy.olemiss.edu/GR/catfish). This black hole event generator includes many of the currently accepted theoretical results for microscopic black hole formation. High energy physics data analysis is shifting from Fortran to C++ as the CERN data analysis packages HBOOK and PAW are no longer supported; the C++ based ROOT framework is replacing these packages. Work done at the University of South Alabama has resulted in a successful inclusion of CATFISH into ROOT. The methods used to interface the Fortran-based CATFISH into the C++ based ROOT will be presented. Benchmark histograms will be presented demonstrating the conversion. Preliminary results will be presented for selecting black hole candidate events in 14 TeV center-of-mass proton-proton collisions.
Monte Carlo-based multiphysics coupling analysis of x-ray pulsar telescope
Li, Liansheng; Deng, Loulou; Mei, Zhiwu; Zuo, Fuchang; Zhou, Hao
2015-10-01
The X-ray pulsar telescope (XPT) is a complex optical payload involving optical, mechanical, electrical and thermal disciplines. Multiphysics coupling analysis (MCA) plays an important role in improving its in-orbit performance. However, conventional MCA methods encounter two serious problems when dealing with the XPT. One is that neither the energy nor the reflectivity information of the X-rays can be taken into consideration, which misrepresents the essential behaviour of the XPT. The other is that coupling data cannot be transferred automatically among different disciplines, leading to computational inefficiency and high design cost. Therefore, a new MCA method for the XPT is proposed based on the Monte Carlo method and total-reflection theory. The main idea, procedures and operational steps of the proposed method are addressed in detail. Firstly, both the energy and the reflectivity information of the X-rays are taken into consideration simultaneously, and the thermal-structural coupling equation and the multiphysics coupling analysis model are formulated based on the finite element method; thermal-structural coupling analyses under different working conditions are then implemented. Secondly, the mirror deformations are obtained using a construction geometry function, and a polynomial function is adopted to fit the deformed mirror and to evaluate the fitting error. Thirdly, the focusing performance of the XPT is evaluated by the RMS. Finally, a Wolter-I XPT is taken as an example to verify the proposed MCA method. The simulation results show that the thermal-structural coupling deformation is larger than the others, and the law governing the effect of deformation on the focusing performance has been obtained. The focusing performances under thermal-structural, thermal and structural deformations degraded by 30.01%, 14.35% and 7.85%, respectively, and the RMS values of the dispersion spots are 2.9143 mm, 2.2038 mm and 2.1311 mm. As a result, the validity of the proposed method is verified through
The information-based complexity of approximation problem by adaptive Monte Carlo methods
Institute of Scientific and Technical Information of China (English)
2008-01-01
In this paper, we study the information-based complexity of the approximation problem on the multivariate Sobolev space with bounded mixed derivative MW^r_{p,α}(T^d), 1 < p < ∞, in the norm of L_q(T^d), 1 < q < ∞, by adaptive Monte Carlo methods. Applying the discretization technique and some properties of pseudo-s-scale, we determine the exact asymptotic orders of this problem.
On the Assessment of Monte Carlo Error in Simulation-Based Statistical Analyses.
Koehler, Elizabeth; Brown, Elizabeth; Haneuse, Sebastien J-P A
2009-05-01
Statistical experiments, more commonly referred to as Monte Carlo or simulation studies, are used to study the behavior of statistical methods and measures under controlled situations. Whereas recent computing and methodological advances have permitted increased efficiency in the simulation process, known as variance reduction, such experiments remain limited by their finite nature and hence are subject to uncertainty; when a simulation is run more than once, different results are obtained. However, virtually no emphasis has been placed on reporting the uncertainty, referred to here as Monte Carlo error, associated with simulation results in the published literature, or on justifying the number of replications used. These deserve broader consideration. Here we present a series of simple and practical methods for estimating Monte Carlo error as well as determining the number of replications required to achieve a desired level of accuracy. The issues and methods are demonstrated with two simple examples, one evaluating operating characteristics of the maximum likelihood estimator for the parameters in logistic regression and the other in the context of using the bootstrap to obtain 95% confidence intervals. The results suggest that in many settings, Monte Carlo error may be more substantial than traditionally thought.
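The machinery described above is easy to sketch. The block below runs a toy simulation study (coverage of the usual normal-theory 95% confidence interval), reports the Monte Carlo standard error of the estimated coverage, and computes the number of replications needed for a target accuracy; the study design and all numbers are illustrative, not taken from the article:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy simulation study: coverage of the usual 95% CI for a normal mean
R, n = 2000, 30                            # replications, sample size each
covered = np.empty(R, dtype=bool)
for r in range(R):
    x = rng.normal(0.0, 1.0, n)
    half = 1.96 * x.std(ddof=1) / np.sqrt(n)
    covered[r] = abs(x.mean()) <= half      # does the CI cover the true mean 0?

p_hat = covered.mean()                      # estimated coverage
mcse = np.sqrt(p_hat * (1 - p_hat) / R)     # Monte Carlo standard error

# Replications needed for a target MCSE (worst case p = 0.5)
target = 0.005
R_needed = int(np.ceil(0.25 / target ** 2))
print(f"coverage {p_hat:.3f} +/- {mcse:.4f}; need R = {R_needed} for MCSE {target}")
```

The point the article makes falls out directly: with only 2000 replications the MCSE is on the order of half a percentage point, which is often larger than the effects a simulation study is trying to resolve.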
van der Graaf, E. R.; Limburg, J.; Koomans, R. L.; Tijs, M.
2011-01-01
The calibration of scintillation detectors for gamma radiation in a well characterized setup can be transferred to other geometries using Monte Carlo simulations to account for the differences between the calibration and the other geometry. In this study a calibration facility was used that is const
Stimulated scintillation emission depletion X-ray imaging.
Alekhin, M S; Patton, G; Dujardin, C; Douissard, P-A; Lebugle, M; Novotny, L; Stampanoni, M
2017-01-23
X-ray microtomography is a widely applied tool for noninvasive structure investigations. The related detectors are usually based on a scintillator screen for the fast in situ conversion of an X-ray image into an optical image. Spatial resolution of the latter is fundamentally diffraction limited. In this work, we introduce stimulated scintillation emission depletion (SSED) X-ray imaging where, similar to stimulated emission depletion (STED) microscopy, a depletion beam is applied to the scintillator screen to overcome the diffraction limit. The requirements for the X-ray source, the X-ray flux, the scintillator screen, and the STED beam were evaluated. Fundamental spatial resolution limits due to the spread of absorbed X-ray energy were estimated with Monte Carlo simulations. The SSED proof-of-concept experiments demonstrated 1) depletion of X-ray excited scintillation, 2) partial confinement of scintillating regions to sub-diffraction sized volumes, and 3) improvement of the imaging contrast by applying SSED.
Wang, Song; Gardner, Joseph K; Gordon, John J; Li, Weidong; Clews, Luke; Greer, Peter B; Siebers, Jeffrey V
2009-08-01
The aim of this study is to present an efficient method to generate imager-specific Monte Carlo (MC)-based dose kernels for amorphous silicon-based electronic portal imaging device (EPID) dose prediction and to determine the effective backscattering thicknesses for such imagers. EPID field size-dependent responses were measured for five matched Varian accelerators from three institutions with 6 MV beams at a source to detector distance (SDD) of 105 cm. For two imagers, measurements were made with and without the imager mounted on the robotic supporting arm. Monoenergetic energy deposition kernels with 0-2.5 cm of water backscattering thickness were simultaneously computed by MC to a high precision. For each imager, the backscattering thickness required to match the measured field size responses was determined. The monoenergetic kernel method was validated by comparing measured and predicted field size responses at 150 cm SDD, 10 x 10 cm2 multileaf collimator (MLC) sliding window fields created with 5, 10, 20, and 50 mm gaps, and a head-and-neck (H&N) intensity modulated radiation therapy (IMRT) patient field. Field size responses for the five different imagers deviated by up to 1.3%. When imagers were removed from the robotic arms, response deviations were reduced to 0.2%. All imager field size responses were captured by using between 1.0 and 1.6 cm backscatter. The field size responses predicted by the imager-specific kernels matched measurements for all involved imagers with a maximal deviation of 0.34%. The maximal deviation between the predicted and measured field size responses at 150 cm SDD is 0.39%. The maximal deviation between the predicted and measured MLC sliding window fields is 0.39%. For the patient field, gamma analysis yielded that 99.0% of the pixels have gamma < 1 by the 2%, 2 mm criteria with a 3% dose threshold. Tunable imager-specific kernels can be generated rapidly and accurately in a single MC simulation. The resultant kernels are imager position
Uncertainties in Monte Carlo-based absorbed dose calculations for an experimental benchmark.
Renner, F; Wulff, J; Kapsch, R-P; Zink, K
2015-10-01
There is a need to verify the accuracy of general purpose Monte Carlo codes like EGSnrc, which are commonly employed for investigations of dosimetric problems in radiation therapy. A number of experimental benchmarks have been published to compare calculated values of absorbed dose to experimentally determined values. However, there is a lack of absolute benchmarks, i.e. benchmarks without involved normalization which may cause some quantities to be cancelled. Therefore, at the Physikalisch-Technische Bundesanstalt a benchmark experiment was performed, which aimed at the absolute verification of radiation transport calculations for dosimetry in radiation therapy. A thimble-type ionization chamber in a solid phantom was irradiated by high-energy bremsstrahlung and the mean absorbed dose in the sensitive volume was measured per incident electron of the target. The characteristics of the accelerator and experimental setup were precisely determined and the results of a corresponding Monte Carlo simulation with EGSnrc are presented within this study. For a meaningful comparison, an analysis of the uncertainty of the Monte Carlo simulation is necessary. In this study uncertainties with regard to the simulation geometry, the radiation source, transport options of the Monte Carlo code and specific interaction cross sections are investigated, applying the general methodology of the Guide to the expression of uncertainty in measurement. Besides studying the general influence of changes in transport options of the EGSnrc code, uncertainties are analyzed by estimating the sensitivity coefficients of various input quantities in a first step. Secondly, standard uncertainties are assigned to each quantity which are known from the experiment, e.g. uncertainties for geometric dimensions. Data for more fundamental quantities such as photon cross sections and the I-value of electron stopping powers are taken from literature. The significant uncertainty contributions are identified as
Benhamou, Ygal; Paintaud, Gilles; Azoulay, Elie; Poullin, Pascale; Galicier, Lionel; Desvignes, Céline; Baudel, Jean-Luc; Peltier, Julie; Mira, Jean-Paul; Pène, Frédéric; Presne, Claire; Saheb, Samir; Deligny, Christophe; Rousseau, Alexandra; Féger, Frédéric; Veyradier, Agnès; Coppo, Paul
2016-12-01
The standard four-rituximab-infusion treatment in acquired thrombotic thrombocytopenic purpura (TTP) remains empirical. Peripheral B cell depletion is correlated with the decrease in serum concentrations of anti-ADAMTS13 and associated with clinical response. To assess the efficacy of a rituximab regimen based on B cell depletion, 24 TTP patients were enrolled in this prospective multicentre single-arm phase II study and then compared to patients from a previous study. Patients with a suboptimal response to a plasma exchange-based regimen received two infusions of rituximab 375 mg m(-2) within 4 days, and a third dose at day +15 of the first infusion if peripheral B cells were still detectable. The primary endpoint was the time required for platelet count recovery from the first plasma exchange. Three patients died after the first rituximab administration. In the remaining patients, the B cell-driven treatment hastened remission and ADAMTS13 activity recovery as a result of rapid anti-ADAMTS13 depletion, in a similar manner to the standard four-infusion schedule. The 1-year relapse-free survival was also comparable between both groups. A rituximab regimen based on B cell depletion is feasible and provides results comparable to the four-infusion schedule. This regimen could represent a new standard in TTP. This trial was registered at www.clinicaltrials.gov (NCT00907751). Am. J. Hematol. 91:1246-1251, 2016. © 2016 Wiley Periodicals, Inc.
Kumada, H; Saito, K; Nakamura, T; Sakae, T; Sakurai, H; Matsumura, A; Ono, K
2011-12-01
Treatment planning for boron neutron capture therapy generally utilizes Monte Carlo methods for calculation of the dose distribution. The new treatment planning system JCDS-FX employs the multi-purpose Monte Carlo code PHITS to calculate the dose distribution. JCDS-FX allows the user to build a precise voxel model consisting of pixel-based voxel cells at a scale of 0.4 × 0.4 × 2.0 mm³ per voxel in order to perform high-accuracy dose estimation, e.g. for the purpose of calculating the dose distribution in a human body. However, the miniaturization of the voxel size increases calculation time considerably. The aim of this study is to investigate sophisticated modeling methods which can perform Monte Carlo calculations for human geometry efficiently. Thus, we devised a new voxel modeling method, the "Multistep Lattice-Voxel method," which can configure a voxel model that combines different voxel sizes by applying the lattice function repeatedly. To verify the performance of calculations with this modeling method, several calculations for human geometry were carried out. The results demonstrated that the Multistep Lattice-Voxel method enabled the precise voxel model to reduce calculation time substantially while keeping the high accuracy of the dose estimation.
Charek, Daniel B; Meyer, Gregory J; Mihura, Joni L
2016-10-01
We investigated the impact of ego depletion on selected Rorschach cognitive processing variables and self-reported affect states. Research indicates acts of effortful self-regulation transiently deplete a finite pool of cognitive resources, impairing performance on subsequent tasks requiring self-regulation. We predicted that relative to controls, ego-depleted participants' Rorschach protocols would have more spontaneous reactivity to color, less cognitive sophistication, and more frequent logical lapses in visualization, whereas self-reports would reflect greater fatigue and less attentiveness. The hypotheses were partially supported; despite a surprising absence of self-reported differences, ego-depleted participants had Rorschach protocols with lower scores on two variables indicative of sophisticated combinatory thinking, as well as higher levels of color receptivity; they also had lower scores on a composite variable computed across all hypothesized markers of complexity. In addition, self-reported achievement striving moderated the effect of the experimental manipulation on color receptivity, and in the Depletion condition it was associated with greater attentiveness to the tasks, more color reactivity, and less global synthetic processing. Results are discussed with an emphasis on the response process, methodological limitations and strengths, implications for calculating refined Rorschach scores, and the value of using multiple methods in research and experimental paradigms to validate assessment measures.
Monte Carlo study of single-barrier structure based on exclusion model full counting statistics
Institute of Scientific and Technical Information of China (English)
Chen Hua; Du Lei; Qu Cheng-Li; He Liang; Chen Wen-Hao; Sun Peng
2011-01-01
Different from the usual full counting statistics theoretical work, which focuses on computing higher-order cumulants from the cumulant generating function in electrical structures, a Monte Carlo simulation of a single-barrier structure is performed to obtain time series for two types of widely applicable exclusion models: the counter-flows model and the tunnel model. With higher-order spectrum analysis in Matlab, the validity of the Monte Carlo methods is shown through the first four cumulants extracted from the time series, which agree with those from the cumulant generating function. A comparison between the counter-flows model and the tunnel model in a single-barrier structure shows that the essential difference between them lies in the strict enforcement of the Pauli principle in the former and its statistical treatment in the latter.
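The cumulant-extraction step can be sketched with a toy process. The block below uses a plain Bernoulli "tunnel attempt" per time step (a simplification, not the paper's exclusion models) and recovers the first four cumulants of the windowed transfer counts from central moments:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy tunnel-like time series: one transfer attempt per step, success prob p
p, steps, windows = 0.3, 200, 5000
transfers = rng.random((windows, steps)) < p
counts = transfers.sum(axis=1)            # charge transferred per time window

# First four cumulants from the central moments of the count distribution
m = counts.mean()
mu2, mu3, mu4 = (((counts - m) ** k).mean() for k in (2, 3, 4))
k1, k2, k3 = m, mu2, mu3
k4 = mu4 - 3 * mu2 ** 2                   # kappa_4 = mu_4 - 3 mu_2^2

# Binomial theory for comparison: kappa_1 = n p, kappa_2 = n p (1 - p)
print(f"k1 {k1:.2f} (theory {steps * p}), k2 {k2:.2f} (theory {steps * p * (1 - p)})")
```

For a genuine exclusion model the attempts would be correlated through occupancy constraints, which is exactly what shifts the higher cumulants away from these binomial values.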
Numerical Study of Light Transport in Apple Models Based on Monte Carlo Simulations
Directory of Open Access Journals (Sweden)
Mohamed Lamine Askoura
2015-12-01
Full Text Available This paper reports on the quantification of light transport in apple models using Monte Carlo simulations. To this end, the apple was modeled as a two-layer spherical model including skin and flesh bulk tissues. The optical properties of both tissue types used to generate the Monte Carlo data were collected from the literature and selected to cover a range of values related to three apple varieties. Two different imaging-tissue setups were simulated in order to show the role of the skin in steady-state backscattering images, spatially-resolved reflectance profiles, and the assessment of flesh optical properties using an inverse nonlinear least squares fitting algorithm. Simulation results suggest that the apple skin cannot be ignored when a Visible/Near-Infrared (Vis/NIR) steady-state imaging setup is used for investigating quality attributes of apples. They also help to improve optical inspection techniques for horticultural products.
Doronin, Alexander; Rushmeier, Holly E.; Meglinski, Igor; Bykov, Alexander V.
2016-03-01
We present a new Monte Carlo based approach for modelling the Bidirectional Scattering-Surface Reflectance Distribution Function (BSSRDF) for accurate rendering of human skin appearance. Variations in both skin tissue structure and the major chromophores are taken into account for the different ethnic and age groups. The computational solution utilizes HTML5, accelerated by graphics processing units (GPUs), and is therefore convenient for practical use on most modern computer-based devices and operating systems. The results of simulating human skin reflectance spectra and the corresponding skin colours, together with examples of 3D face rendering, are presented and compared with the results of phantom studies.
Green-Function-Based Monte Carlo Method for Classical Fields Coupled to Fermions
Weiße, Alexander
2009-01-01
Microscopic models of classical degrees of freedom coupled to non-interacting fermions occur in many different contexts. Prominent examples from solid state physics are descriptions of colossal magnetoresistance manganites and diluted magnetic semiconductors, or auxiliary field methods for correlated electron systems. Monte Carlo simulations are vital for an understanding of such systems, but notorious for requiring the solution of the fermion problem with each change in the classical field c...
Using a Monte-Carlo-based approach to evaluate the uncertainty on fringe projection technique
Molimard, Jérôme
2013-01-01
A complete uncertainty analysis of a given fringe projection set-up has been performed using a Monte Carlo approach. In particular, the calibration procedure is taken into account. Two applications are given: at the macroscopic scale, phase noise is predominant, whilst at the microscopic scale both phase noise and calibration errors are important. Finally, the uncertainty found at the macroscopic scale is close to some experimental tests (~100 µm).
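The Monte Carlo propagation itself is compact. The sketch below assumes a purely linear phase-to-height relation h = k·φ with hypothetical noise levels, which is far simpler than a real fringe projection calibration but shows how phase noise and calibration uncertainty combine:

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed linear phase-to-height model h = k * phi (all values illustrative)
M = 100_000                               # Monte Carlo draws
phi_true, k_true = 12.0, 25.0             # phase [rad], sensitivity [um/rad]
sigma_phi = 0.02                          # phase noise [rad]
sigma_k = 0.05                            # calibration uncertainty on k

phi = phi_true + rng.normal(0.0, sigma_phi, M)
k = k_true + rng.normal(0.0, sigma_k, M)
h = k * phi                               # Monte Carlo sample of heights [um]

u_h = h.std(ddof=1)                       # combined standard uncertainty
print(f"h = {h.mean():.1f} um, u(h) = {u_h:.3f} um")
```

The sample standard deviation reproduces the first-order GUM combination sqrt((k·σφ)² + (φ·σk)²); making σk dominant or negligible mimics the microscopic versus macroscopic regimes discussed above.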
The information-based complexity of approximation problem by adaptive Monte Carlo methods
Institute of Scientific and Technical Information of China (English)
FANG GenSun; DUAN LiQin
2008-01-01
In this paper, we study the information-based complexity of the approximation problem on the multivariate Sobolev space with bounded mixed derivative MW^r_{p,α}(T^d), 1 < p < ∞, in the norm of L_q(T^d), 1 < q < ∞, by adaptive Monte Carlo methods. Applying the discretization technique and some properties of pseudo-s-scale, we determine the exact asymptotic orders of this problem.
Monte Carlo tests of the Rasch model based on scalability coefficients
DEFF Research Database (Denmark)
Christensen, Karl Bang; Kreiner, Svend
2010-01-01
that summarizes the number of Guttman errors in the data matrix. These coefficients are shown to yield efficient tests of the Rasch model using p-values computed using Markov chain Monte Carlo methods. The power of the tests of unequal item discrimination, and their ability to distinguish between local dependence and unequal item discrimination, are discussed. The methods are illustrated and motivated using a simulation study and a real data example.
Development and validation of MCNPX-based Monte Carlo treatment plan verification system.
Jabbari, Iraj; Monadi, Shahram
2015-01-01
A Monte Carlo treatment plan verification (MCTPV) system was developed for clinical treatment plan verification (TPV), especially for conformal and intensity-modulated radiotherapy (IMRT) plans. In the MCTPV, the MCNPX code was used for particle transport through the accelerator head and the patient body. MCTPV has an interface with the TiGRT planning system and reads the information needed for the Monte Carlo calculation, transferred in digital imaging and communications in medicine-radiation therapy (DICOM-RT) format. In MCTPV several methods were applied in order to reduce the simulation time. The relative dose distribution of a clinical prostate conformal plan calculated by the MCTPV was compared with that of the TiGRT planning system. The results showed that the beam configuration and patient information were well implemented in this system. For quantitative evaluation of MCTPV, a two-dimensional (2D) diode array (MapCHECK2) and gamma index analysis were used. The gamma passing rate (3%/3 mm) of an IMRT plan was found to be 98.5% for all beams. Also, comparison of the measured and Monte Carlo calculated doses at several points inside an inhomogeneous phantom for 6- and 18-MV photon beams showed good agreement (within 1.5%). The accuracy and timing results of MCTPV showed that it could be used very efficiently for additional assessment of complicated plans such as IMRT plans.
Development and validation of MCNPX-based Monte Carlo treatment plan verification system
Directory of Open Access Journals (Sweden)
Iraj Jabbari
2015-01-01
Full Text Available A Monte Carlo treatment plan verification (MCTPV) system was developed for clinical treatment plan verification (TPV), especially for conformal and intensity-modulated radiotherapy (IMRT) plans. In the MCTPV, the MCNPX code was used for particle transport through the accelerator head and the patient body. MCTPV has an interface with the TiGRT planning system and reads the information needed for the Monte Carlo calculation, transferred in digital imaging and communications in medicine-radiation therapy (DICOM-RT) format. In MCTPV several methods were applied in order to reduce the simulation time. The relative dose distribution of a clinical prostate conformal plan calculated by the MCTPV was compared with that of the TiGRT planning system. The results showed that the beam configuration and patient information were well implemented in this system. For quantitative evaluation of MCTPV, a two-dimensional (2D) diode array (MapCHECK2) and gamma index analysis were used. The gamma passing rate (3%/3 mm) of an IMRT plan was found to be 98.5% for all beams. Also, comparison of the measured and Monte Carlo calculated doses at several points inside an inhomogeneous phantom for 6- and 18-MV photon beams showed good agreement (within 1.5%). The accuracy and timing results of MCTPV showed that it could be used very efficiently for additional assessment of complicated plans such as IMRT plans.
Monte Carlo-based treatment planning system calculation engine for microbeam radiation therapy
Energy Technology Data Exchange (ETDEWEB)
Martinez-Rovira, I.; Sempau, J.; Prezado, Y. [Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, Barcelona E-08028 (Spain) and ID17 Biomedical Beamline, European Synchrotron Radiation Facility (ESRF), 6 rue Jules Horowitz B.P. 220, F-38043 Grenoble Cedex (France); Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, Barcelona E-08028 (Spain); Laboratoire Imagerie et modelisation en neurobiologie et cancerologie, UMR8165, Centre National de la Recherche Scientifique (CNRS), Universites Paris 7 et Paris 11, Bat 440., 15 rue Georges Clemenceau, F-91406 Orsay Cedex (France)
2012-05-15
Purpose: Microbeam radiation therapy (MRT) is a synchrotron radiotherapy technique that explores the limits of the dose-volume effect. Preclinical studies have shown that MRT irradiations (arrays of 25-75-{mu}m-wide microbeams spaced by 200-400 {mu}m) are able to eradicate highly aggressive animal tumor models while healthy tissue is preserved. These promising results have provided the basis for the forthcoming clinical trials at the ID17 Biomedical Beamline of the European Synchrotron Radiation Facility (ESRF). The first step includes irradiation of pets (cats and dogs) as a milestone before treatment of human patients. Within this context, accurate dose calculations are required. The distinct features of both beam generation and irradiation geometry in MRT with respect to conventional techniques require the development of a specific MRT treatment planning system (TPS). In particular, a Monte Carlo (MC)-based calculation engine for the MRT TPS has been developed in this work. Experimental verification in heterogeneous phantoms and optimization of the computation time have also been performed. Methods: The penelope/penEasy MC code was used to compute dose distributions from a realistic beam source model. Experimental verification was carried out by means of radiochromic films placed within heterogeneous slab-phantoms. Once validation was completed, dose computations in a virtual model of a patient, reconstructed from computed tomography (CT) images, were performed. To this end, decoupling of the CT image voxel grid (a few cubic millimeter volume) to the dose bin grid, which has micrometer dimensions in the transversal direction of the microbeams, was performed. Optimization of the simulation parameters, the use of variance-reduction (VR) techniques, and other methods, such as the parallelization of the simulations, were applied in order to speed up the dose computation. Results: Good agreement between MC simulations and experimental results was achieved, even at
Wesseling, Sebastiaan; Joles, Jaap A.; van Goor, Harry; Bluyssen, Hans A.; Kemmeren, Patrick; Holstege, Frank C.; Koomans, Hein A.; Braam, Branko
2007-01-01
Nitric oxide (NO) depletion in rats induces severe endothelial dysfunction within 4 days. Subsequently, hypertension and renal injury develop, which are ameliorated by alpha-tocopherol (VitE) cotreatment. The hypothesis of the present study was that NO synthase (NOS) inhibition induces a renal corti
Zhang, Hua-Yu; Guo, Guang-Can; Sun, Fang-Wen
2016-01-01
The nitrogen-vacancy (NV) center in diamond has been widely applied in quantum information and sensing over the last decade. Based on the polarization-dependent excitation of its fluorescence emission, we propose a super-resolution microscopy of the NV center. A series of wide-field images of NV centers is taken with different polarizations of the linearly polarized excitation laser. The fluorescence intensity of an NV center varies with the relative angle between the excitation laser polarization and the orientation of the NV center dipole. The images pumped by different excitation laser polarizations are analyzed with a Monte Carlo method. The symmetry axis and position of the NV center are thereby obtained with sub-diffraction resolution.
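The polarization-dependent fitting step can be sketched with a simple in-plane cos² dipole model (a deliberate simplification of the real three-dimensional NV excitation dipoles); all counts, angles and the grid-search fit below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed in-plane dipole model: I(pol) ~ A * cos^2(pol - dipole orientation)
theta_true = np.deg2rad(37.0)                     # hypothetical orientation
pols = np.deg2rad(np.arange(0, 180, 10))          # polarization settings
counts = 1000.0 * np.cos(pols - theta_true) ** 2  # mean photon counts
data = rng.poisson(counts)                        # shot-noise "images"

# Least-squares grid search over the dipole orientation (amplitude free)
grid = np.deg2rad(np.linspace(0.0, 180.0, 1801))
model = np.cos(pols[None, :] - grid[:, None]) ** 2
amp = (data * model).sum(axis=1) / (model ** 2).sum(axis=1)
resid = ((data - amp[:, None] * model) ** 2).sum(axis=1)
theta_fit = np.rad2deg(grid[resid.argmin()])
print(f"fitted dipole orientation ~ {theta_fit:.1f} deg")
```

Repeating this fit per pixel over the polarization image stack is what yields orientation (and, combined with the point-spread fit, position) beyond the diffraction limit.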
Directory of Open Access Journals (Sweden)
Kohei Arai
2013-01-01
Full Text Available A Monte Carlo Ray Tracing (MCRT) based sensitivity analysis of the geophysical parameters (the atmosphere and the ocean) on Top-of-the-Atmosphere (TOA) radiance in the visible to near-infrared wavelength regions is conducted. The results confirm that the influence of the atmosphere is greater than that of the ocean. Scattering and absorption due to aerosol particles and molecules in the atmosphere are the major contributions, followed by water vapor and ozone, while scattering due to suspended solids is the dominant contribution among the ocean parameters.
Random vibration analysis of switching apparatus based on Monte Carlo method
Institute of Scientific and Technical Information of China (English)
ZHAI Guo-fu; CHEN Ying-hua; REN Wan-bin
2007-01-01
The performance of switching apparatus containing mechanical contacts in a vibration environment is an important element in judging the apparatus's reliability. A piecewise-linear two-degrees-of-freedom mathematical model accounting for contact loss was built in this work, and the vibration performance of the model under random external Gaussian white noise excitation was investigated using Monte Carlo simulation in Matlab/Simulink. The simulation showed that the spectral content and statistical characteristics of the contact force agreed closely with reality. The random vibration behaviour of the contact system was solved using time-domain (numerical) simulation in this paper. The conclusions reached here are of great importance for the reliability design of switching apparatus.
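A reduced version of the idea can be sketched with a single-degree-of-freedom contact oscillator; the paper's 2-DOF model, parameter values and Simulink implementation are replaced here by illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

# Reduced 1-DOF sketch of a contact with loss (the paper uses a 2-DOF model)
m, c, k = 0.01, 2.0, 1.0e5            # mass [kg], damping, suspension stiffness
k_c = 5.0e5                            # extra stiffness while the contact is closed
dt, steps = 1.0e-5, 100_000
f_ext = rng.normal(0.0, 50.0, steps)   # Gaussian white-noise excitation [N]

x, v = 0.0, 0.0
f_contact = np.empty(steps)
for i in range(steps):
    closed = x < 0.0                   # contact closed below the seat plane
    fc = -k_c * x if closed else 0.0   # piecewise-linear restoring contact force
    a = (f_ext[i] + fc - c * v - k * x) / m
    v += a * dt                        # semi-implicit Euler keeps the sketch stable
    x += v * dt
    f_contact[i] = fc

loss_fraction = (f_contact == 0.0).mean()   # fraction of time with contact loss
print(f"contact-loss fraction ~ {loss_fraction:.2f}, peak force ~ {f_contact.max():.0f} N")
```

The statistics of `f_contact` (spectrum, peaks, loss fraction) are the quantities the reliability assessment cares about; the piecewise-linear switch between open and closed states is what makes the response non-Gaussian even under Gaussian excitation.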
Action orientation overcomes the ego depletion effect.
Dang, Junhua; Xiao, Shanshan; Shi, Yucai; Mao, Lihua
2015-04-01
It has been consistently demonstrated that initial exertion of self-control has a negative influence on people's performance on subsequent self-control tasks. This phenomenon is referred to as the ego depletion effect. Based on action control theory, the current research investigated whether the ego depletion effect could be moderated by individuals' action versus state orientation. Our results showed that only state-oriented individuals exhibited ego depletion. For individuals with action orientation, however, performance was not influenced by initial exertion of self-control. The beneficial effect of action orientation against ego depletion in our experiment stems from its facilitation of adaptation to the depleting task.
Energy Technology Data Exchange (ETDEWEB)
Al-Subeihi, Ala' A.A., E-mail: subeihi@yahoo.com [Division of Toxicology, Wageningen University, Tuinlaan 5, 6703 HE Wageningen (Netherlands); BEN-HAYYAN-Aqaba International Laboratories, Aqaba Special Economic Zone Authority (ASEZA), P. O. Box 2565, Aqaba 77110 (Jordan); Alhusainy, Wasma; Kiwamoto, Reiko; Spenkelink, Bert [Division of Toxicology, Wageningen University, Tuinlaan 5, 6703 HE Wageningen (Netherlands); Bladeren, Peter J. van [Division of Toxicology, Wageningen University, Tuinlaan 5, 6703 HE Wageningen (Netherlands); Nestec S.A., Avenue Nestlé 55, 1800 Vevey (Switzerland); Rietjens, Ivonne M.C.M.; Punt, Ans [Division of Toxicology, Wageningen University, Tuinlaan 5, 6703 HE Wageningen (Netherlands)
2015-03-01
The present study aims at predicting the level of formation of the ultimate carcinogenic metabolite of methyleugenol, 1′-sulfooxymethyleugenol, in the human population by taking variability in key bioactivation and detoxification reactions into account using Monte Carlo simulations. Depending on the metabolic route, variation was simulated based on kinetic constants obtained from incubations with a range of individual human liver fractions or by combining kinetic constants obtained for specific isoenzymes with literature reported human variation in the activity of these enzymes. The results of the study indicate that formation of 1′-sulfooxymethyleugenol is predominantly affected by variation in i) P450 1A2-catalyzed bioactivation of methyleugenol to 1′-hydroxymethyleugenol, ii) P450 2B6-catalyzed epoxidation of methyleugenol, iii) the apparent kinetic constants for oxidation of 1′-hydroxymethyleugenol, and iv) the apparent kinetic constants for sulfation of 1′-hydroxymethyleugenol. Based on the Monte Carlo simulations a so-called chemical-specific adjustment factor (CSAF) for intraspecies variation could be derived by dividing different percentiles by the 50th percentile of the predicted population distribution for 1′-sulfooxymethyleugenol formation. The obtained CSAF value at the 90th percentile was 3.2, indicating that the default uncertainty factor of 3.16 for human variability in kinetics may adequately cover the variation within 90% of the population. Covering 99% of the population requires a larger uncertainty factor of 6.4. In conclusion, the results showed that adequate predictions on interindividual human variation can be made with Monte Carlo-based PBK modeling. For methyleugenol this variation was observed to be in line with the default variation generally assumed in risk assessment. - Highlights: • Interindividual human differences in methyleugenol bioactivation were simulated. • This was done using in vitro incubations, PBK modeling
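The CSAF derivation above reduces to percentile ratios of the simulated population distribution. A minimal sketch, assuming a hypothetical lognormal distribution of metabolite formation purely for illustration (the paper's actual distribution comes from the PBK Monte Carlo simulations):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical lognormal population distribution of metabolite formation;
# the sigma = 0.6 spread is an assumption, not a value from the study
formation = rng.lognormal(mean=0.0, sigma=0.6, size=100_000)

p50, p90, p99 = np.percentile(formation, [50, 90, 99])
csaf_90 = p90 / p50          # chemical-specific adjustment factor, 90th pct.
csaf_99 = p99 / p50          # ... and for 99% population coverage
print(f"CSAF(90%) ~ {csaf_90:.2f}, CSAF(99%) ~ {csaf_99:.2f}")
```

Comparing such ratios against the default kinetic uncertainty factor of 3.16 is exactly the check the study performs: a CSAF below the default means the default adequately covers that fraction of the population.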
Energy Technology Data Exchange (ETDEWEB)
Burke, Timothy P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kiedrowski, Brian C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Martin, William R. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-11-19
Kernel density estimators (KDEs) provide a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations. KDEs are an alternative to histogram tallies for obtaining global solutions in Monte Carlo tallies. With KDEs, a single event, either a collision or a particle track, can contribute to the score at multiple tally points, with the uncertainty at those points being independent of the desired resolution of the solution. Thus, KDEs show potential for obtaining estimates of a global solution with reduced variance compared to a histogram. Previously, KDEs have been applied to neutronics for one-group reactor physics problems and fixed-source shielding applications. However, little work has been done on obtaining reaction rates using KDEs. This paper introduces a new form of the mean-free-path (MFP) KDE that is capable of handling general geometries. Furthermore, extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies into the solution. An ad hoc remedy for these inaccuracies is introduced that produces errors smaller than 4% at material interfaces.
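The contrast between histogram tallies and KDE tallies can be illustrated with a minimal sketch, assuming a Gaussian kernel and a fixed bandwidth (both choices are illustrative; transport implementations such as the one above use mean-free-path-based kernels):

```python
import math
import random

def gaussian_kernel(u):
    """Standard normal kernel."""
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def kde_tally(events, tally_points, bandwidth=0.1):
    """Each simulated event contributes to *every* tally point through the
    kernel, unlike a histogram bin, which records only the events that fall
    inside it."""
    scores = [0.0] * len(tally_points)
    for x in events:
        for i, t in enumerate(tally_points):
            scores[i] += gaussian_kernel((x - t) / bandwidth) / bandwidth
    n = len(events)
    return [s / n for s in scores]

rng = random.Random(0)
events = [rng.gauss(0.5, 0.15) for _ in range(2000)]  # stand-in for collision sites
points = [i / 10 for i in range(11)]                  # tally points on [0, 1]
density = kde_tally(events, points)
```

Because every event scores at every point, the per-point variance depends on the kernel bandwidth rather than on how finely the tally mesh is resolved, which is the variance advantage the abstract describes.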
Yuan, L G; Tang, Y Z; Zhang, Y X; Sun, J; Luo, X Y; Zhu, L X; Zhang, Z; Wang, R; Liu, Y H
2015-08-01
To estimate the valnemulin pharmacokinetic profile in a swine population and to assess a dosage regimen for increasing the likelihood of optimization, this study was performed in 22 sows by p.o. administration and in 80 growing-finishing pigs by i.v. administration at a single dose of 10 mg/kg to develop a population pharmacokinetic model and Monte Carlo simulation. The relationships among plasma concentration, dose, and time of valnemulin in pigs were described by C_i.v. = X_0 (8.4191 × 10^-4 e^(-0.2371t) + 1.2788 × 10^-5 e^(-0.0069t)) after i.v. administration and C_p.o. = X_0 (-8.4964 × 10^-4 e^(-0.5840t) + 8.4195 e^(-0.2371t) + 7.6869 × 10^-6 e^(-0.0069t)) after p.o. administration. Monte Carlo simulation showed that T(>MIC) exceeded 24 h when a single daily oral dose of 13.5 mg/kg BW was administered and the MIC was 0.031 mg/L. It was concluded that the current dosage regimen of 10-12 mg/kg BW leads to valnemulin underexposure if the MIC is above 0.031 mg/L, which could increase the risk of treatment failure and/or drug resistance.
Effective Depletion Potential of Colloidal Spheres
Institute of Scientific and Technical Information of China (English)
LI Wei-Hua; MA Hong-Ru
2004-01-01
A new semianalytical method, combining density functional theory with the Rosenfeld density functional and the Ornstein-Zernike equation, is proposed for calculating the effective depletion potentials between a pair of big spheres immersed in a small hard-sphere fluid. The calculated results are almost identical to those of the integral equation method with the Percus-Yevick approximation, and also agree well with Monte Carlo simulation results.
Directory of Open Access Journals (Sweden)
Anna Russo
Short peptides can be designed in silico and synthesized through automated techniques, making them advantageous and versatile protein binders. A number of docking-based algorithms allow for a computational screening of peptides as binders. Here we developed ex-novo peptides targeting the maltose site of the Maltose Binding Protein, the prototypical system for the study of protein-ligand recognition. We used a Monte Carlo based protocol to computationally evolve a set of octapeptides starting from a polyalanine sequence. We screened the candidate peptides in silico and characterized their binding abilities by surface plasmon resonance, fluorescence, and electrospray ionization mass spectrometry assays. These experiments showed the designed binders to recognize their target with micromolar affinity. We finally discuss the obtained results in light of further improvements in the ex-novo optimization of peptide-based binders.
GPU-based Monte Carlo dust radiative transfer scheme applied to AGN
Heymann, Frank
2012-01-01
A three-dimensional parallel Monte Carlo (MC) dust radiative transfer code is presented. To overcome the huge computing time requirements of MC treatments, the computational power of vectorized hardware is used, utilizing either multi-core computers or graphics processing units. The approach is a self-consistent way to solve the radiative transfer equation in arbitrary dust configurations. The code calculates the equilibrium temperatures of two populations of large grains and stochastically heated polycyclic aromatic hydrocarbons (PAH). Anisotropic scattering is treated by applying the Henyey-Greenstein phase function. The spectral energy distribution (SED) of the object is derived at low spatial resolution by a photon-counting procedure and at high spatial resolution by a vectorized ray-tracer. The latter allows computation of high signal-to-noise images of the objects at arbitrary frequencies and viewing angles. We test the robustness of our approach against other radiative transfer codes. The SED and dust...
Monte Carlo based dosimetry for neutron capture therapy of brain tumors
Zaidi, Lilia; Belgaid, Mohamed; Khelifi, Rachid
2016-11-01
Boron Neutron Capture Therapy (BNCT) is a biologically targeted radiation therapy for cancer which combines neutron irradiation with a tumor-targeting agent labeled with boron-10, which has a high thermal neutron capture cross section. The tumor area is subjected to neutron irradiation. After a thermal neutron capture, the excited 11B nucleus splits into an alpha particle and a recoiling lithium nucleus. The emitted high linear energy transfer (LET) particles deposit their energy within a range of about 10 μm, which is of the same order as the cell diameter [1]; at the same time, other reactions due to neutron activation of body components are produced. In-phantom measurement of the physical dose distribution is very important for BNCT planning validation. Determination of the total absorbed dose requires complex calculations, which were carried out using the Monte Carlo MCNP code [2].
Monte Carlo based geometrical model for efficiency calculation of an n-type HPGe detector.
Cabal, Fatima Padilla; Lopez-Pino, Neivy; Bernal-Castillo, Jose Luis; Martinez-Palenzuela, Yisel; Aguilar-Mena, Jimmy; D'Alessandro, Katia; Arbelo, Yuniesky; Corrales, Yasser; Diaz, Oscar
2010-12-01
A procedure to optimize the geometrical model of an n-type detector is described. Sixteen lines from seven point sources ((241)Am, (133)Ba, (22)Na, (60)Co, (57)Co, (137)Cs and (152)Eu) placed at three source-to-detector distances (10, 20 and 30 cm) were used to calibrate a low-background gamma spectrometer between 26 and 1408 keV. Direct Monte Carlo techniques using the MCNPX 2.6 and GEANT4 9.2 codes, and a semi-empirical procedure, were performed to obtain theoretical efficiency curves. Since discrepancies were found between experimental and calculated data using the manufacturer's detector parameters, a detailed study of the crystal dimensions and the geometrical configuration was carried out. After the parameters were optimized, the mean relative deviation from experimental data decreased from 18% to 4%.
Monte Carlo based geometrical model for efficiency calculation of an n-type HPGe detector
Energy Technology Data Exchange (ETDEWEB)
Padilla Cabal, Fatima, E-mail: fpadilla@instec.c [Instituto Superior de Tecnologias y Ciencias Aplicadas, ' Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba); Lopez-Pino, Neivy; Luis Bernal-Castillo, Jose; Martinez-Palenzuela, Yisel; Aguilar-Mena, Jimmy; D' Alessandro, Katia; Arbelo, Yuniesky; Corrales, Yasser; Diaz, Oscar [Instituto Superior de Tecnologias y Ciencias Aplicadas, ' Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba)
2010-12-15
A procedure to optimize the geometrical model of an n-type detector is described. Sixteen lines from seven point sources ({sup 241}Am, {sup 133}Ba, {sup 22}Na, {sup 60}Co, {sup 57}Co, {sup 137}Cs and {sup 152}Eu) placed at three source-to-detector distances (10, 20 and 30 cm) were used to calibrate a low-background gamma spectrometer between 26 and 1408 keV. Direct Monte Carlo techniques using the MCNPX 2.6 and GEANT4 9.2 codes, and a semi-empirical procedure, were performed to obtain theoretical efficiency curves. Since discrepancies were found between experimental and calculated data using the manufacturer's detector parameters, a detailed study of the crystal dimensions and the geometrical configuration was carried out. After the parameters were optimized, the mean relative deviation from experimental data decreased from 18% to 4%.
Experimental validation of a rapid Monte Carlo based micro-CT simulator
Colijn, A. P.; Zbijewski, W.; Sasov, A.; Beekman, F. J.
2004-09-01
We describe a newly developed, accelerated Monte Carlo simulator of a small animal micro-CT scanner. Transmission measurements using aluminium slabs are employed to estimate the spectrum of the x-ray source. The simulator incorporating this spectrum is validated with micro-CT scans of physical water phantoms of various diameters, some containing stainless steel and Teflon rods. Good agreement is found between simulated and real data: the normalized error of simulated projections, compared to the real ones, is typically smaller than 0.05. The reconstructions obtained from simulated and real data are also found to be similar. Thereafter, the effects of scatter are studied using a voxelized software phantom representing a rat body. It is shown that the scatter fraction can reach tens of percent in specific areas of the body, and therefore scatter can significantly affect quantitative accuracy in small-animal CT imaging.
Web-Based Parallel Monte Carlo Simulation Platform for Financial Computation
Institute of Scientific and Technical Information of China (English)
Anonymous
2006-01-01
Using Java, Java-enabled Web, and object-oriented programming technologies, a framework is designed to quickly organize a multicomputer system on an intranet for parallel Monte Carlo simulation. The high-performance computing environment is embedded in a Web server so it can be accessed easily. Adaptive parallelism and an eager-scheduling algorithm are used to realize load balancing, parallel processing, and system fault tolerance. Independent-sequence pseudo-random number generator schemes keep the parallel simulation valid. Three kinds of stock option pricing models serve as test instances; near-ideal speedup and accurate pricing results were obtained on the test bed. As a Web service, a high-performance financial derivative security-pricing platform has thus been set up for training and study. The framework can also be used to develop other SPMD (single program, multiple data) applications. Robustness remains a major problem for further research.
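The platform's core idea, independent pseudo-random streams feeding embarrassingly parallel pricing workers, can be sketched as follows (Python with a thread pool stands in for the Java/Web framework; the model is a plain European call under geometric Brownian motion, an assumption, since the abstract does not name its pricing models):

```python
import math
import random
from concurrent.futures import ThreadPoolExecutor

def price_chunk(seed, n, s0, k, r, sigma, t):
    """Price a European call on one worker with its own seeded RNG stream,
    mirroring the independent-sequence scheme described in the abstract."""
    rng = random.Random(seed)
    payoff_sum = 0.0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        s_t = s0 * math.exp((r - 0.5 * sigma * sigma) * t + sigma * math.sqrt(t) * z)
        payoff_sum += max(s_t - k, 0.0)
    return math.exp(-r * t) * payoff_sum / n

def parallel_price(workers=4, n=50_000, s0=100.0, k=100.0, r=0.05, sigma=0.2, t=1.0):
    """Distribute the Monte Carlo paths over workers and average the results."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(price_chunk, seed, n, s0, k, r, sigma, t)
                   for seed in range(workers)]
        return sum(f.result() for f in futures) / workers

price = parallel_price()
```

Each worker owns its own seeded generator, so results are reproducible regardless of scheduling; a production system would use processes or distributed nodes (threads illustrate the structure only) and a parallel-safe generator family rather than arbitrary integer seeds.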
Equilibrium Degree Determination Model Based on the Monte Carlo Method
Institute of Scientific and Technical Information of China (English)
朱颖; 程纪品
2012-01-01
The Monte Carlo method, also known as the statistical simulation method, is a very important class of numerical methods guided by the theory of probability and statistics, in which random numbers (or, more commonly, pseudo-random numbers) are used to solve many computational problems. This paper attempts to establish an equilibrium degree model for police service platforms and to solve it with the Monte Carlo method; the experimental results satisfy general application requirements.
McMillan, Kyle; McNitt-Gray, Michael; Ruan, Dan
2013-01-01
underestimated measurements by 1.35%–5.31% (mean difference = −3.42%, SD = 1.09%). Conclusions: This work demonstrates the feasibility of using a measurement-based kV CBCT source model to facilitate dose calculations with Monte Carlo methods for both the radiographic and CBCT mode of operation. While this initial work validates simulations against measurements for simple geometries, future work will involve utilizing the source model to investigate kV CBCT dosimetry with more complex anthropomorphic phantoms and patient specific models. PMID:24320440
Saha, Sudip K; Guchhait, Asim; Pal, Amlan J
2014-03-07
We report the formation and characterization of hybrid pn-junction solar cells based on a layer of copper-diffused silver indium disulfide (AgInS2@Cu) nanoparticles and another layer of copper phthalocyanine (CuPc) molecules. With copper diffusion in the nanocrystals, their optical absorption, and hence the activity of the hybrid pn-junction solar cells, was extended towards the near-IR region. To decrease the particle-to-particle separation for improved carrier transport through the inorganic layer, we replaced the long-chain ligands of the copper-diffused nanocrystals in each monolayer with short ones. Under illumination, the hybrid pn-junctions yielded a higher short-circuit current than the combined contribution of the Schottky junctions based on the individual components. A wider depletion region at the interface between the two active layers in the pn-junction device, as compared to that of the Schottky junctions, has been invoked to analyze the results. Capacitance-voltage characteristics under dark conditions supported this hypothesis. We also determined the width of the depletion region in the two layers separately so that a pn-junction could be formed with a tailored thickness of the two materials. Such a "fully-depleted" device resulted in improved photovoltaic performance, primarily due to lessening of the internal resistance of the hybrid pn-junction solar cells.
Li, Lanting; Wu, Runqing; Yan, Guoquan; Gao, Mingxia; Deng, Chunhui; Zhang, Xiangmin
2016-01-01
A novel method to isolate global N-termini using sulfhydryl tagging and gold-nanoparticle-based depletion (STagAu method) is presented. The N-terminal and lysine amino groups were first completely dimethylated at the protein level, after which the proteins were digested. The newly generated internal peptides were tagged with sulfhydryl groups by Traut's reagent through the digested N-terminal amines in yields of 96%. The resulting sulfhydryl peptides were depleted through binding onto nano-gold composite materials. The Au-S bond is stable and widely used in materials science. The nano-gold composite materials showed nearly complete depletion of sulfhydryl peptides. A set of acetylated and dimethylated N-terminal peptides was analyzed by liquid chromatography-tandem mass spectrometry. This method was demonstrated to be an efficient N-terminus enrichment method because it combines an effective derivatization reaction with robust and relatively easy-to-implement Au-S coupling. We identified 632 N-terminal peptides from 386 proteins in a mouse liver sample. The STagAu approach is therefore a facile and efficient method for mass-spectrometry-based analysis of proteome N-termini or protease-generated cleavage products.
Energy Technology Data Exchange (ETDEWEB)
Song, Wei [Nuclear and Radiation Safety Center, P. O. Box 8088, Beijing (China); Wu, Yuanyu [ITER Organization, Route de Vinon-sur-Verdon, 13115 Saint-Paul-lès-Durance (France); Hu, Wenjun [China Institute of Atomic Energy, P. O. Box 275(34), Beijing (China); Zuo, Jiaxu, E-mail: zuojiaxu@chinansc.cn [Nuclear and Radiation Safety Center, P. O. Box 8088, Beijing (China)
2015-11-15
Highlights: • A Monte Carlo approach coupled with a fire dynamics code is adopted to perform sodium fire vulnerability assessment. • The method can be used to calculate the failure probability of sodium fire scenarios. • A calculation example and results are given to illustrate the feasibility of the methodology. • Some critical parameters and experience are shared. - Abstract: Sodium fire is a typical and distinctive hazard in sodium-cooled fast reactors and is significant for nuclear safety. In this paper, a method of sodium fire vulnerability assessment based on the Monte Carlo principle is introduced, which can be used to calculate the probability of every failure mode in sodium fire scenarios. A sodium fire scenario vulnerability assessment of the primary cold trap room of the China Experimental Fast Reactor was then performed to illustrate the feasibility of the methodology. The calculation shows that the conditional failure probability of the key cable is 23.6% in the sodium fire scenario caused by continuous sodium leakage due to isolation device failure, while the wall temperature, room pressure, and aerosol discharge mass all remain below their safety limits.
Geng, Changran; Tang, Xiaobin; Gong, Chunhui; Guan, Fada; Johns, Jesse; Shu, Diyun; Chen, Da
2015-12-01
The active shielding technique has great potential for radiation protection in space exploration because it has the advantage of a significant mass saving compared with the passive shielding technique. This paper demonstrates a Monte Carlo-based approach to evaluating the shielding effectiveness of the active shielding technique using confined magnetic fields (CMFs). The International Commission on Radiological Protection reference anthropomorphic phantom, as well as the toroidal CMF, was modeled using the Monte Carlo toolkit Geant4. The penetrating primary particle fluence, organ-specific dose equivalent, and male effective dose were calculated for particles in galactic cosmic radiation (GCR) and solar particle events (SPEs). Results show that the SPE protons can be easily shielded against, even almost completely deflected, by the toroidal magnetic field. GCR particles can also be more effectively shielded against by increasing the magnetic field strength. Our results also show that the introduction of a structural Al wall in the CMF did not provide additional shielding for GCR; in fact it can weaken the total shielding effect of the CMF. This study demonstrated the feasibility of accurately determining the radiation field inside the environment and evaluating the organ dose equivalents for astronauts under active shielding using the CMF.
Monte Carlo simulation of charge mediated magnetoelectricity in multiferroic bilayers
Energy Technology Data Exchange (ETDEWEB)
Ortiz-Álvarez, H.H. [Universidad de Caldas, Manizales (Colombia); Universidad Nacional de Colombia Sede Manizales, Manizales, Caldas (Colombia); Bedoya-Hincapié, C.M. [Universidad Nacional de Colombia Sede Manizales, Manizales, Caldas (Colombia); Universidad Santo Tomás, Bogotá (Colombia); Restrepo-Parra, E., E-mail: erestrepopa@unal.edu.co [Universidad Nacional de Colombia Sede Manizales, Manizales, Caldas (Colombia)
2014-12-01
Simulations of a bilayer ferroelectric/ferromagnetic multiferroic system were carried out, based on the Monte Carlo method and Metropolis dynamics. A generic model was implemented with a Janssen-like Hamiltonian, taking into account magnetoelectric interactions due to charge accumulation at the interface. Two different magnetic exchange constants were considered for the accumulation and depletion states. Several screening lengths were also included. The simulations exhibit considerable magnetoelectric effects not only at low temperature but also at temperatures near the transition point of the ferromagnetic layer. The results match experimental observations for this kind of structure and mechanism.
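The Metropolis dynamics underlying such simulations accepts a proposed spin flip with probability min(1, exp(-ΔE/k_BT)); a minimal one-dimensional Ising sketch (not the paper's Janssen-like bilayer Hamiltonian, just the acceptance rule it relies on):

```python
import math
import random

def metropolis_step(spins, j_exchange, temperature, rng):
    """One Metropolis update on a 1-D Ising ring (k_B = 1): propose flipping a
    random spin and accept with probability min(1, exp(-dE/T))."""
    n = len(spins)
    i = rng.randrange(n)
    left, right = spins[(i - 1) % n], spins[(i + 1) % n]
    d_e = 2.0 * j_exchange * spins[i] * (left + right)  # energy change of the flip
    if d_e <= 0.0 or rng.random() < math.exp(-d_e / temperature):
        spins[i] = -spins[i]
    return spins

rng = random.Random(3)
spins = [rng.choice((-1, 1)) for _ in range(100)]
for _ in range(20_000):
    metropolis_step(spins, j_exchange=1.0, temperature=0.5, rng=rng)
magnetization = abs(sum(spins)) / len(spins)
```

The same accept/reject rule carries over to coupled ferroelectric/ferromagnetic degrees of freedom; only the energy-difference term ΔE changes with the Hamiltonian.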
Hrivnacova, I; Berejnov, V V; Brun, R; Carminati, F; Fassò, A; Futo, E; Gheata, A; Caballero, I G; Morsch, Andreas
2003-01-01
The concept of Virtual Monte Carlo (VMC) has been developed by the ALICE Software Project to allow different Monte Carlo simulation programs to run without changing the user code, such as the geometry definition, the detector response simulation or input and output formats. Recently, the VMC classes have been integrated into the ROOT framework, and the other relevant packages have been separated from the AliRoot framework and can be used individually by any other HEP project. The general concept of the VMC and its set of base classes provided in ROOT will be presented. Existing implementations for Geant3, Geant4 and FLUKA and simple examples of usage will be described.
Kohno, R; Hotta, K; Nishioka, S; Matsubara, K; Tansho, R; Suzuki, T
2011-11-21
We implemented the simplified Monte Carlo (SMC) method on a graphics processing unit (GPU) architecture under the compute unified device architecture (CUDA) platform developed by NVIDIA. The GPU-based SMC was clinically applied to four patients with head and neck, lung, or prostate cancer. The results were compared to those obtained by a traditional CPU-based SMC with respect to computation time and discrepancy. In the CPU- and GPU-based SMC calculations, the estimated mean statistical errors of the calculated doses in the planning target volume region were within 0.5% rms. The dose distributions calculated by the GPU- and CPU-based SMCs were similar within statistical errors. The GPU-based SMC showed 12.30-16.00 times faster performance than the CPU-based SMC. The computation time per beam arrangement using the GPU-based SMC for the clinical cases ranged from 9 to 67 s. The results demonstrate the successful application of the GPU-based SMC to clinical proton treatment planning.
Energy Technology Data Exchange (ETDEWEB)
Pavlou, Andrew T., E-mail: pavloa2@rpi.edu; Ji, Wei, E-mail: jiw2@rpi.edu
2016-06-15
Highlights: • Thermal scattering data are fit using linear least squares regression. • Mesh points are optimally selected from phonon frequency distributions. • New meshes give more accurate fits of thermal data than our previous work. • Coefficient data storage is significantly reduced compared to current methods. - Abstract: In a series of papers, we have introduced a new sampling method for Monte Carlo codes for the low-energy secondary scattering parameters that greatly reduces data storage requirements. The method is based on the temperature dependence of the energy transfer (beta) and squared momentum transfer (alpha) between a neutron and a target nuclide. Cumulative distribution functions (CDFs) in beta and alpha are constructed for a range of temperatures on a mesh of incident energies in the thermal range and temperature fits are created for beta and alpha at discrete CDF probability lines. The secondary energy and angle distributions generated from the fit coefficients showed good agreement with the standard Monte Carlo sampling. However, some discrepancies still existed because the CDF probability mesh values were selected uniformly and arbitrarily. In this paper, a physics-based approach for optimally selecting the CDF probability meshes for the on-the-fly sampling method is introduced, using bound carbon in graphite as the example nuclide. This approach is based on the structure of the phonon frequency distribution of thermal excitations. From the study, it was determined that low (<0.1) and high (>0.9) beta CDF probabilities are important to the structure of the beta probability density functions (PDFs) while very low (<1 × 10{sup −4}) alpha CDF probabilities are important to the structure of the alpha PDFs. The final meshes contain 200 probability values for both beta and alpha. This results in 14.5 MB of total data storage for the on-the-fly coefficients which are used for any temperature realization. This is a significant reduction in
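The on-the-fly sampling step described above, inverting a tabulated CDF on a non-uniform probability mesh, can be sketched as follows; the mesh and quantile values are illustrative placeholders, not fitted graphite data:

```python
import bisect
import random

def sample_from_cdf(prob_mesh, values, rng):
    """Draw a sample by inverting a tabulated CDF: locate a uniform random
    number in the (possibly non-uniform) probability mesh and interpolate
    linearly between the stored quantile values."""
    xi = rng.random()
    i = bisect.bisect_left(prob_mesh, xi)
    if i == 0:
        return values[0]
    if i >= len(prob_mesh):
        return values[-1]
    frac = (xi - prob_mesh[i - 1]) / (prob_mesh[i] - prob_mesh[i - 1])
    return values[i - 1] + frac * (values[i] - values[i - 1])

# Mesh refined at the tails, as the paper's physics-based selection of low and
# high CDF probabilities suggests (numbers are illustrative only).
mesh = [0.0001, 0.001, 0.01, 0.1, 0.3, 0.5, 0.7, 0.9, 0.99, 0.999, 0.9999]
quantiles = [0.01, 0.03, 0.08, 0.25, 0.6, 1.0, 1.6, 2.5, 4.0, 5.5, 7.0]
rng = random.Random(7)
betas = [sample_from_cdf(mesh, quantiles, rng) for _ in range(1000)]
```

In the paper's scheme, the `quantiles` row would be reconstructed at run time from temperature-fit coefficients for each CDF probability line, so only the fit coefficients, not full tables per temperature, need to be stored.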
Performance evaluation of Biograph PET/CT system based on Monte Carlo simulation
Wang, Bing; Gao, Fei; Liu, Hua-Feng
2010-10-01
The combined lutetium oxyorthosilicate (LSO) Biograph PET/CT was developed by Siemens and has been introduced into medical practice. There are no septa between the scintillator rings, so the acquisition mode is fully 3D. The PET components incorporate three rings of 48 detector blocks, each comprising a 13×13 matrix of 4×4×20mm3 elements. The patient aperture is 70cm, the transversal field of view (FOV) is 58.5cm, and the axial FOV is 16.2cm. The CT component is a 16-slice spiral CT scanner. The physical performance of this PET/CT scanner has been evaluated using the Monte Carlo simulation method according to the latest NEMA NU 2-2007 standard, and the results have been compared with real experimental results. For the PET part, in the center of the FOV the average transversal resolution is 3.67mm, the average axial resolution is 3.94mm, and the 3D-reconstructed scatter fraction is 31.7%. The sensitivities of the PET scanner are 4.21kcps/MBq and 4.26kcps/MBq at 0cm and 10cm off the center of the transversal FOV. The peak NEC is 95.6kcps at a concentration of 39.2kBq/ml. The spatial resolution of the CT part is up to 1.12mm at 10mm off the center. The differences between simulated and real results are within acceptable limits.
Monte Carlo based water/medium stopping-power ratios for various ICRP and ICRU tissues.
Fernández-Varea, José M; Carrasco, Pablo; Panettieri, Vanessa; Brualla, Lorenzo
2007-11-07
Water/medium stopping-power ratios, s(w,m), have been calculated for several ICRP and ICRU tissues, namely adipose tissue, brain, cortical bone, liver, lung (deflated and inflated) and spongiosa. The considered clinical beams were 6 and 18 MV x-rays and the field size was 10 x 10 cm(2). Fluence distributions were scored at a depth of 10 cm using the Monte Carlo code PENELOPE. The collision stopping powers for the studied tissues were evaluated employing the formalism of ICRU Report 37 (1984 Stopping Powers for Electrons and Positrons (Bethesda, MD: ICRU)). The Bragg-Gray values of s(w,m) calculated with these ingredients range from about 0.98 (adipose tissue) to nearly 1.14 (cortical bone), displaying a rather small variation with beam quality. Excellent agreement, to within 0.1%, is found with stopping-power ratios reported by Siebers et al (2000a Phys. Med. Biol. 45 983-95) for cortical bone, inflated lung and spongiosa. In the case of cortical bone, s(w,m) changes approximately 2% when either ICRP or ICRU compositions are adopted, whereas the stopping-power ratios of lung, brain and adipose tissue are less sensitive to the selected composition. The mass density of lung also influences the calculated values of s(w,m), reducing them by around 1% (6 MV) and 2% (18 MV) when going from deflated to inflated lung.
Institute of Scientific and Technical Information of China (English)
ZHANG Jun; GUO Fan
2015-01-01
Tooth modification is widely used in the gear industry to improve the meshing performance of gear systems. However, few of the present studies on tooth modification consider the influence of inevitable random errors on modification effects. In order to investigate the effect of tooth modification amount variations on the dynamic behavior of a helical planetary gear train, an analytical dynamic model including tooth modification parameters is proposed to carry out a deterministic analysis of its dynamics. The dynamic meshing forces as well as the dynamic transmission errors of the sun-planet 1 gear pair with and without tooth modifications are computed and compared to show the effectiveness of tooth modifications in enhancing gear dynamics. Using the response surface method, a fitted regression model for the dynamic transmission error (DTE) fluctuations is established to quantify the relationship between modification amounts and DTE fluctuations. By shifting the inevitable random errors arising from the manufacturing and installation process to tooth modification amount variations, a statistical tooth modification model is developed, and a methodology combining Monte Carlo simulation and the response surface method is presented for uncertainty analysis of tooth modifications. The uncertainty analysis reveals that the system's dynamic behaviors are not normally distributed even though the design variables are. In addition, a deterministic modification amount will not necessarily achieve an optimal result for both static and dynamic transmission error fluctuation reduction simultaneously.
Kumar, A; Chauhan, S
2017-03-08
Obesity is one of the most pressing health burdens in developed countries. One strategy to prevent obesity is the inhibition of the pancreatic lipase enzyme. The aim of this study was to build QSAR models for natural lipase inhibitors using the Monte Carlo method. The molecular structures were represented by the simplified molecular input line entry system (SMILES) notation and molecular graphs. Training, calibration, and test sets from three splits were examined and validated. The statistical quality of all the described models was very good. The best QSAR model showed the following statistical parameters: r(2) = 0.864 and Q(2) = 0.836 for the test set, and r(2) = 0.824 and Q(2) = 0.819 for the validation set. Structural attributes increasing and decreasing the activity (expressed as pIC50) were also defined. Using the defined structural attributes, the design of new potential lipase inhibitors is also presented. Additionally, a molecular docking study was performed to determine the binding modes of the designed molecules.
Živković, Jelena V; Trutić, Nataša V; Veselinović, Jovana B; Nikolić, Goran M; Veselinović, Aleksandar M
2015-09-01
The Monte Carlo method was used for QSAR modeling of maleimide derivatives as glycogen synthase kinase-3β inhibitors. The first QSAR model was developed for a series of 74 3-anilino-4-arylmaleimide derivatives; the second for a series of 177 maleimide derivatives. The QSAR models were calculated with the molecular structure represented by the simplified molecular input-line entry system. Two splits were examined: one into training and test sets for the first QSAR model, and one into training, test, and validation sets for the second. The statistical quality of the developed models is very good. The model for 3-anilino-4-arylmaleimide derivatives had the following statistical parameters: r(2)=0.8617 for the training set; r(2)=0.8659 and r(m)(2)=0.7361 for the test set. The model for maleimide derivatives had the following statistical parameters: r(2)=0.9435 for the training set; r(2)=0.9262 and r(m)(2)=0.8199 for the test set; and r(2)=0.8418, r(av)(m)(2)=0.7469 and ∆r(m)(2)=0.1476 for the validation set. Structural indicators considered as molecular fragments responsible for the increase and decrease in inhibition activity have been defined. The computer-aided design of new potential glycogen synthase kinase-3β inhibitors is presented using the defined structural alerts.
Lattice based Kinetic Monte Carlo Simulations of a complex chemical reaction network
Danielson, Thomas; Savara, Aditya; Hin, Celine
Lattice Kinetic Monte Carlo (KMC) simulations offer a powerful alternative to using ordinary differential equations for the simulation of complex chemical reaction networks. Lattice KMC provides the ability to account for local spatial configurations of species in the reaction network, resulting in a more detailed description of the reaction pathway. In KMC simulations with a large number of reactions, the range of transition probabilities can span many orders of magnitude, creating subsets of processes that occur more frequently or more rarely. Consequently, processes that have a high probability of occurring may be selected repeatedly without actually progressing the system (i.e. the forward and reverse process for the same reaction). In order to avoid the repeated occurrence of fast frivolous processes, it is necessary to throttle the transition probabilities in such a way that avoids altering the overall selectivity. Likewise, as the reaction progresses, new frequently occurring species and reactions may be introduced, making a dynamic throttling algorithm a necessity. We present a dynamic steady-state detection scheme with the goal of accurately throttling rate constants in order to optimize the KMC run time without compromising the selectivity of the reaction network. The algorithm has been applied to a large catalytic chemical reaction network, specifically that of methanol oxidative dehydrogenation, as well as additional pathways on CeO2(111) resulting in formaldehyde, CO, methanol, CO2, H2 and H2O as gas products.
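The event-selection step of lattice KMC, together with a naive rate-throttling helper of the kind the abstract motivates, can be sketched as follows (the rate values and the fixed throttle factor are illustrative; the paper's scheme chooses the damping dynamically from steady-state detection):

```python
import bisect
import math
import random
from itertools import accumulate

def kmc_step(rates, rng):
    """One BKL/Gillespie step: choose process i with probability
    rate_i / total_rate and advance the clock by an exponential increment."""
    cumulative = list(accumulate(rates))
    total = cumulative[-1]
    i = bisect.bisect_left(cumulative, rng.random() * total)
    dt = -math.log(1.0 - rng.random()) / total
    return i, dt

def throttle(rates, fast_pairs, factor):
    """Damp a fast forward/reverse pair by the same factor so the pair's
    ratio, and hence the network selectivity, is preserved."""
    throttled = list(rates)
    for i, j in fast_pairs:
        throttled[i] /= factor
        throttled[j] /= factor
    return throttled

rng = random.Random(11)
rates = [1e6, 9.9e5, 1.0, 0.5]   # fast adsorption/desorption pair + slow reactions
rates = throttle(rates, fast_pairs=[(0, 1)], factor=1e4)
counts = [0] * len(rates)
for _ in range(10_000):
    i, _ = kmc_step(rates, rng)
    counts[i] += 1
```

Without throttling, nearly every step would be spent toggling the fast pair without progressing the system; damping both members by the same factor keeps their relative probability, and thus the selectivity, unchanged while letting the rare productive events occur within a feasible number of steps.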
Energy Technology Data Exchange (ETDEWEB)
Abdel-Khalik, Hany S. [North Carolina State Univ., Raleigh, NC (United States); Zhang, Qiong [North Carolina State Univ., Raleigh, NC (United States)
2014-05-20
The development of hybrid Monte Carlo-deterministic (MC-DT) approaches over the past few decades has primarily focused on shielding and detection applications, where the analysis requires a small number of responses, i.e. at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate the few-group homogenized cross-sections. These models are typically expensive and need to be executed on the order of 10^{3} - 10^{5} times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained in this work, we believe the applicability of the MC method for reactor analysis calculations could be realized in the near future.
Markov chain Monte Carlo based analysis of post-translationally modified VDAC gating kinetics.
Tewari, Shivendra G; Zhou, Yifan; Otto, Bradley J; Dash, Ranjan K; Kwok, Wai-Meng; Beard, Daniel A
2014-01-01
The voltage-dependent anion channel (VDAC) is the main conduit for permeation of solutes (including nucleotides and metabolites) of up to 5 kDa across the mitochondrial outer membrane (MOM). Recent studies suggest that VDAC activity is regulated via post-translational modifications (PTMs). Yet the nature and effect of these modifications is not understood. Herein, single channel currents of wild-type, nitrosated, and phosphorylated VDAC are analyzed using a generalized continuous-time Markov chain Monte Carlo (MCMC) method. This developed method describes three distinct conducting states (open, half-open, and closed) of VDAC activity. Lipid bilayer experiments are also performed to record single VDAC activity under un-phosphorylated and phosphorylated conditions, and are analyzed using the developed stochastic search method. Experimental data show significant alteration in VDAC gating kinetics and conductance as a result of PTMs. The effect of PTMs on VDAC kinetics is captured in the parameters associated with the identified Markov model. Stationary distributions of the Markov model suggest that nitrosation of VDAC not only decreased its conductance but also significantly locked VDAC in a closed state. On the other hand, stationary distributions of the model associated with un-phosphorylated and phosphorylated VDAC suggest a reversal in channel conformation from relatively closed state to an open state. Model analyses of the nitrosated data suggest that faster reaction of nitric oxide with Cys-127 thiol group might be responsible for the biphasic effect of nitric oxide on basal VDAC conductance.
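For illustration, the stationary distribution of a three-state continuous-time Markov chain like the open/half-open/closed gating model can be obtained by solving πQ = 0 with Σπ = 1. The rate values below are invented placeholders, not fitted VDAC parameters:

```python
import numpy as np

# Hypothetical gating-rate generator matrix (1/s) over the three conducting
# states open (O), half-open (H), closed (C); off-diagonal entries are
# transition rates, diagonals make each row sum to zero. Illustrative only.
Q = np.array([
    [-30.0,  20.0,  10.0],   # from O: O->H = 20, O->C = 10
    [ 15.0, -40.0,  25.0],   # from H: H->O = 15, H->C = 25
    [  5.0,  10.0, -15.0],   # from C: C->O = 5,  C->H = 10
])

def stationary(Q):
    """Solve pi @ Q = 0 subject to sum(pi) = 1 via least squares."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])   # stack the normalization constraint
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

pi = stationary(Q)
print("stationary occupancies (O, H, C):", np.round(pi, 3))
```

A post-translational modification that, say, increases the rates into the closed state shifts this stationary distribution toward C, which is the kind of effect the fitted Markov models in the study quantify.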
Calculation of Credit Valuation Adjustment Based on Least Square Monte Carlo Methods
Directory of Open Access Journals (Sweden)
Qian Liu
2015-01-01
Counterparty credit risk has become one of the highest-profile risks facing participants in the financial markets. Despite this, relatively little is known about how counterparty credit risk is actually priced mathematically. We examine this issue using interest rate swaps. This widely traded financial product allows us to identify well the risk profiles of both institutions and their counterparties. Concretely, the Hull-White model for the interest rate and a mean-reverting model for the default intensity have proven to correspond well with reality and to be well suited for financial institutions. Besides, we find that the least squares Monte Carlo method is quite efficient in the calculation of credit valuation adjustment (CVA for short), as it avoids the redundant step of generating inner scenarios. As a result, it accelerates the convergence speed of the CVA estimators. In the second part, we propose a new method to calculate bilateral CVA that avoids the double counting found in the existing literature, where several copula functions are adopted to describe the dependence of the two first-to-default times.
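A toy sketch of the least-squares Monte Carlo idea (not the paper's model or calibration): simulate short-rate paths, regress the discounted terminal payoff on a polynomial basis of the current rate to obtain the conditional exposure without nested ("inner") simulation, and accumulate CVA against a flat hazard rate. A Vasicek process stands in for Hull-White, and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

# --- illustrative parameters, not calibrated to any market ---
n_paths, n_steps, T = 20000, 50, 5.0
dt = T / n_steps
a, sigma, r0, theta = 0.1, 0.01, 0.02, 0.03   # Vasicek stand-in for Hull-White
recovery, h = 0.4, 0.02                        # recovery rate, flat hazard rate

# Simulate short-rate paths
r = np.empty((n_paths, n_steps + 1))
r[:, 0] = r0
for t in range(n_steps):
    r[:, t + 1] = (r[:, t] + a * (theta - r[:, t]) * dt
                   + sigma * np.sqrt(dt) * rng.standard_normal(n_paths))

# Discounted-to-0 terminal payoff of a toy receiver position
disc = np.exp(-np.cumsum(r[:, 1:] * dt, axis=1))           # path discount factors
payoff = np.maximum(theta - r[:, -1], 0.0) * disc[:, -1]

cva = 0.0
for t in range(1, n_steps + 1):
    # LSMC step: regress the discounted payoff on a polynomial basis of r_t
    # to estimate the conditional (discounted) exposure without inner scenarios
    X = np.vander(r[:, t], 4)                              # basis: r^3, r^2, r, 1
    beta, *_ = np.linalg.lstsq(X, payoff, rcond=None)
    ee = np.maximum(X @ beta, 0.0).mean()                  # expected exposure
    dpd = np.exp(-h * (t - 1) * dt) - np.exp(-h * t * dt)  # default prob in step
    cva += (1.0 - recovery) * ee * dpd

print(f"unilateral CVA (toy): {cva:.6f}")
```

The single regression per date is what replaces the nested simulation the abstract calls "inner scenarios", which is where the speed-up comes from.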
Monte Carlo based unit commitment procedures for the deregulated market environment
Energy Technology Data Exchange (ETDEWEB)
Granelli, G.P.; Marannino, P.; Montagna, M.; Zanellini, F. [Universita di Pavia, Pavia (Italy). Dipartimento di Ingegneria Elettrica
2006-12-15
The unit commitment problem, originally conceived in the framework of short term operation of vertically integrated utilities, needs a thorough re-examination in the light of the ongoing transition towards the open electricity market environment. In this work the problem is re-formulated to adapt unit commitment to the viewpoint of a generation company (GENCO) which is no longer bound to satisfy its load, but is willing to maximize its profits. Moreover, with reference to the present day situation in many countries, the presence of a GENCO (the former monopolist) which is in the position of exerting market power requires a careful analysis to be carried out considering the different perspectives of a price taker and of a price maker GENCO. Unit commitment is thus shown to lead to two distinct, yet slightly different problems. The unavoidable uncertainties in load profile and price behaviour over the time period of interest are also taken into account by means of a Monte Carlo simulation. Both the forecasted loads and prices are handled as random variables with a normal multivariate distribution. The correlation between the random input variables corresponding to successive hours of the day was considered by carrying out a statistical analysis of actual load and price data. The whole procedure was tested making use of reasonable approximations of the actual data of the thermal generation units available to some actual GENCOs operating in Italy. (author)
Dosimetric investigation of proton therapy on CT-based patient data using Monte Carlo simulation
Chongsan, T.; Liamsuwan, T.; Tangboonduangjit, P.
2016-03-01
The aim of radiotherapy is to deliver a high radiation dose to the tumor with a low radiation dose to healthy tissues. Protons have Bragg peaks that give a high radiation dose to the tumor but a low exit dose or dose tail. Therefore, proton therapy is promising for treating deep-seated tumors and tumors located close to organs at risk. Moreover, the physical characteristics of protons are suitable for treating cancer in pediatric patients. This work developed a computational platform for calculating proton dose distributions using the Monte Carlo (MC) technique and the patient's anatomical data. The studied case is a pediatric patient with a primary brain tumor. PHITS will be used for the MC simulation. Therefore, patient-specific CT-DICOM files were converted to the PHITS input. A MATLAB optimization program was developed to create a beam delivery control file for this study. The optimization program requires the proton beam data. All these data were calculated in this work using analytical formulas, and the calculation accuracy was tested before the beam delivery control file is used for MC simulation. This study will be useful for researchers aiming to investigate proton dose distributions in patients but who do not have access to proton therapy machines.
Comparison of polynomial approximations to speed up planewave-based quantum Monte Carlo calculations
Parker, William D; Alfè, Dario; Hennig, Richard G; Wilkins, John W
2013-01-01
The computational cost of quantum Monte Carlo (QMC) calculations of realistic periodic systems depends strongly on the method of storing and evaluating the many-particle wave function. Previous work [A. J. Williamson et al., Phys. Rev. Lett. 87, 246406 (2001); D. Alfè and M. J. Gillan, Phys. Rev. B 70, 161101 (2004)] has demonstrated the reduction of the O(N^3) cost of evaluating the Slater determinant with planewaves to O(N^2) using localized basis functions. We compare four polynomial approximations as basis functions: interpolating Lagrange polynomials, interpolating piecewise-polynomial-form (pp-) splines, and basis-form (B-) splines (interpolating and smoothing). All these basis functions provide a similar speedup relative to the planewave basis. The pp-splines have eight times the memory requirement of the other methods. To test the accuracy of the basis functions, we apply them to the ground state structures of Si, Al, and MgO. The polynomial approximations differ in accuracy most strongly for MgO ...
Albin, T.; Koschny, D.; Soja, R.; Srama, R.; Poppe, B.
2016-01-01
The Canary Islands Long-Baseline Observatory (CILBO) is a double station meteor camera system (Koschny et al., 2013; Koschny et al., 2014) that consists of 5 cameras. The two cameras considered in this report are ICC7 and ICC9, and are installed on Tenerife and La Palma. They point to the same atmospheric volume between both islands allowing stereoscopic observation of meteors. Since its installation in 2011 and the start of operation in 2012 CILBO has detected over 15000 simultaneously observed meteors. Koschny and Diaz (2002) developed the Meteor Orbit and Trajectory Software (MOTS) to compute the trajectory of such meteors. The software uses the astrometric data from the detection software MetRec (Molau, 1998) and determines the trajectory in geodetic coordinates. This work presents a Monte-Carlo based extension of the MOTS code to compute the orbital elements of simultaneously detected meteors by CILBO.
Ye, Hong-zhou; Jiang, Hong
2014-01-01
Materials with spin-crossover (SCO) properties hold great potential in information storage and have therefore received much attention in recent decades. The hysteresis phenomena accompanying SCO are attributed to intermolecular cooperativity, whose underlying mechanism may have a vibronic origin. In this work, a new vibronic Ising-like model, in which the elastic coupling between SCO centers is included by considering harmonic stretching and bending (SAB) interactions, is proposed and solved by Monte Carlo simulations. The key parameters in the new model, $k_1$ and $k_2$, corresponding to the elastic constants of the stretching and bending modes, respectively, can be directly related to the macroscopic bulk and shear moduli of the material under study, which can be readily estimated either from experimental measurements or first-principles calculations. The convergence issue in the MC simulations of the thermal hysteresis has been carefully checked, and it was found that the stable hysteresis loop can...
Tseung, H Wan Chan; Beltran, C
2014-01-01
Purpose: Very fast Monte Carlo (MC) simulations of proton transport have been implemented recently on GPUs. However, these usually use simplified models for non-elastic (NE) proton-nucleus interactions. Our primary goal is to build a GPU-based proton transport MC with detailed modeling of elastic and NE collisions. Methods: Using CUDA, we implemented GPU kernels for these tasks: (1) Simulation of spots from our scanning nozzle configurations, (2) Proton propagation through CT geometry, considering nuclear elastic scattering, multiple scattering, and energy loss straggling, (3) Modeling of the intranuclear cascade stage of NE interactions, (4) Nuclear evaporation simulation, and (5) Statistical error estimates on the dose. To validate our MC, we performed: (1) Secondary particle yield calculations in NE collisions, (2) Dose calculations in homogeneous phantoms, (3) Re-calculations of head and neck plans from a commercial treatment planning system (TPS), and compared with Geant4.9.6p2/TOPAS. Results: Yields, en...
Monte Carlo based approach to the LS–NaI 4πβ–γ anticoincidence extrapolation and uncertainty.
Fitzgerald, R
2016-03-01
The 4πβ–γ anticoincidence method is used for the primary standardization of β−, β+, electron capture (EC), α, and mixed-mode radionuclides. Efficiency extrapolation using one or more γ ray coincidence gates is typically carried out by a low-order polynomial fit. The approach presented here is to use a Geant4-based Monte Carlo simulation of the detector system to analyze the efficiency extrapolation. New code was developed to account for detector resolution, direct γ ray interaction with the PMT, and implementation of experimental β-decay shape factors. The simulation was tuned to 57Co and 60Co data, then tested with 99mTc data, and used in measurements of 18F, 129I, and 124I. The analysis method described here offers a more realistic activity value and uncertainty than those indicated from a least-squares fit alone.
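The low-order polynomial extrapolation that the Monte Carlo analysis supplements can be sketched as follows on synthetic data; the response curve, noise level, and activity value are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic anticoincidence data: beta-channel rate R versus gamma-gated
# inefficiency x = (1 - efficiency). The activity N0 is recovered by
# extrapolating a low-order polynomial fit to x = 0.
N0_true = 5000.0                                   # true decay rate (1/s)
x = np.linspace(0.05, 0.30, 12)                    # experimentally varied inefficiency
rate = N0_true * (1.0 - 0.8 * x + 0.3 * x**2)      # assumed smooth response
rate += rng.normal(0.0, 2.0, x.size)               # counting noise

for deg in (1, 2):
    coeffs = np.polyfit(x, rate, deg)
    n0_est = np.polyval(coeffs, 0.0)               # value at zero inefficiency
    print(f"degree {deg}: extrapolated N0 = {n0_est:.1f} 1/s")
```

The simulation described in the abstract plays the role of checking whether such a fit shape is physically justified for a given decay scheme, and of assigning a more realistic uncertainty to the extrapolated value than the fit covariance alone.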
Bznuni, S A; Zhamkochyan, V M; Polanski, A; Sosnin, A N; Khudaverdyan, A H
2001-01-01
Parameters are studied for a subcritical cascade reactor driven by a proton accelerator and based on a primary lead-bismuth target, a main reactor constructed analogously to the molten salt breeder reactor (MSBR) core, and a booster reactor analogous to the core of the BN-350 liquid metal cooled fast breeder reactor (LMFBR). It is shown by means of Monte Carlo modeling that the reactor under study provides safe operation modes (k_{eff}=0.94-0.98), is capable of transmuting radioactive nuclear waste effectively, and reduces the requirements on the accelerator beam current by an order of magnitude. Calculations show that the maximal neutron flux is 10^{14} cm^{-2}·s^{-1} in the thermal zone and 5.12·10^{15} cm^{-2}·s^{-1} in the fast booster zone at k_{eff}=0.98 and a proton beam current of I=2.1 mA.
Gong, Y.; Yu, Y. J.; Zhang, W. Y.
2016-08-01
This study established a set of methodological systems for the optimization of watershed non-point source pollution control by simulating loads and analyzing the integrity of optimization strategies. First, the sources of watershed agricultural non-point source pollution are divided into four aspects: agricultural land, natural land, livestock breeding, and rural residential land. Secondly, different pollution control measures at the source, midway and ending stages are chosen. Thirdly, the optimization effect of pollution load control in the three stages is simulated, based on Monte Carlo simulation. The method described above is applied to the Ashi River watershed in Heilongjiang Province, China. Case study results indicate that the three types of control measures combined can be implemented only if the government promotes the optimized plan and gradually improves implementation efficiency. This method for optimizing strategy integrity for watershed non-point source pollution control has significant reference value.
Townson, Reid W.; Zavgorodni, Sergei
2014-12-01
In GPU-based Monte Carlo simulations for radiotherapy dose calculation, source modelling from a phase-space source can be an efficiency bottleneck. Previously, this has been addressed using phase-space-let (PSL) sources, which provided significant efficiency enhancement. We propose that additional speed-up can be achieved through the use of a hybrid primary photon point source model combined with a secondary PSL source. A novel phase-space derived and histogram-based implementation of this model has been integrated into gDPM v3.0. Additionally, a simple method for approximately deriving target photon source characteristics from a phase-space that does not contain inheritable particle history variables (LATCH) has been demonstrated to succeed in selecting over 99% of the true target photons with only ~0.3% contamination (for a Varian 21EX 18 MV machine). The hybrid source model was tested using an array of open fields for various Varian 21EX and TrueBeam energies, and all cases achieved greater than 97% chi-test agreement (the mean was 99%) above the 2% isodose with 1% / 1 mm criteria. The root mean square deviations (RMSDs) were less than 1%, with a mean of 0.5%, and the source generation time was 4-5 times faster. A seven-field intensity modulated radiation therapy patient treatment achieved 95% chi-test agreement above the 10% isodose with 1% / 1 mm criteria, 99.8% for 2% / 2 mm, a RMSD of 0.8%, and source generation speed-up factor of 2.5. Presented as part of the International Workshop on Monte Carlo Techniques in Medical Physics
Performance Analysis of Korean Liquid metal type TBM based on Monte Carlo code
Energy Technology Data Exchange (ETDEWEB)
Kim, C. H.; Han, B. S.; Park, H. J.; Park, D. K. [Seoul National Univ., Seoul (Korea, Republic of)
2007-01-15
The objective of this project is to analyze the nuclear performance of the Korean HCML (Helium Cooled Molten Lithium) TBM (Test Blanket Module), which will be installed in ITER (International Thermonuclear Experimental Reactor). This project is intended to analyze the neutronic design and nuclear performance of the Korean HCML ITER TBM through transport calculations with MCCARD. In detail, we will conduct numerical experiments for analyzing the neutronic design of the Korean HCML TBM and the DEMO fusion blanket, and for improving the nuclear performance. The results of the numerical experiments performed in this project will be utilized further for a design optimization of the Korean HCML TBM. In this project, Monte Carlo transport calculations for evaluating the TBR (Tritium Breeding Ratio) and EMF (Energy Multiplication Factor) were conducted to analyze the nuclear performance of the Korean HCML TBM. The activation characteristics and shielding performances for the Korean HCML TBM were analyzed using ORIGEN and MCCARD. We proposed neutronic methodologies for analyzing the nuclear characteristics of the fusion blanket, which were applied to the blanket analysis of a DEMO fusion reactor. In the results, the TBR of the Korean HCML ITER TBM is 0.1352 and the EMF is 1.362. Taking into account the limitation on the Li amount in the ITER TBM, it is expected that the tritium self-sufficiency condition can be satisfied through a change of the Li quantity and enrichment. In the results of the activation and shielding analysis, the activity drops to 1.5% of the initial value and the decay heat drops to 0.02% of the initial amount after 10 years from plasma shutdown.
Monte-Carlo based Uncertainty Analysis For CO2 Laser Microchanneling Model
Prakash, Shashi; Kumar, Nitish; Kumar, Subrata
2016-09-01
CO2 laser microchanneling has emerged as a potential technique for the fabrication of microfluidic devices on PMMA (poly-methyl-methacrylate). PMMA directly vaporizes when subjected to a high-intensity focused CO2 laser beam. This process results in a clean cut and an acceptable surface finish on the microchannel walls. Overall, the CO2 laser microchanneling process is cost effective and easy to implement. While fabricating microchannels on PMMA using a CO2 laser, the maximum depth of the fabricated microchannel is the key feature. There are a few analytical models available to predict the maximum depth of the microchannels and the cut channel profile on a PMMA substrate using a CO2 laser. These models depend upon the values of the thermophysical properties of PMMA and the laser beam parameters. There are a number of variants of transparent PMMA available in the market with different values of thermophysical properties. Therefore, for applying such analytical models, the values of these thermophysical properties are required to be known exactly. Although the values of the laser beam parameters are readily available, extensive experiments are required to determine the values of the thermophysical properties of PMMA. The unavailability of exact values of these property parameters restricts proper control over the microchannel dimensions for a given power and scanning speed of the laser beam. In order to have dimensional control over the maximum depth of fabricated microchannels, it is necessary to have an idea of the uncertainty associated with the predicted microchannel depth. In this research work, the uncertainty associated with the maximum depth dimension has been determined using the Monte Carlo method (MCM). The propagation of uncertainty at different powers and scanning speeds has been predicted. The relative impact of each thermophysical property has been determined using sensitivity analysis.
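A minimal sketch of the Monte Carlo method (MCM) for uncertainty propagation, using an assumed simplified energy-balance depth model rather than the paper's actual analytical model; all property means, spreads, and beam settings are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Simplified energy-balance depth model (an assumption for illustration):
# all absorbed laser power heats PMMA to its vaporization temperature and
# then vaporizes it along the scanned groove of width w.
P, v, w = 10.0, 0.10, 200e-6          # laser power (W), scan speed (m/s), spot (m)

# Thermophysical properties of PMMA, sampled as normal variates to reflect
# the spread between commercial PMMA grades (means/stds are illustrative).
rho = rng.normal(1190.0, 30.0, n)     # density (kg/m^3)
cp  = rng.normal(1466.0, 80.0, n)     # specific heat (J/(kg K))
dT  = rng.normal(360.0, 20.0, n)      # rise to vaporization temperature (K)
Lv  = rng.normal(1.0e6, 1.0e5, n)     # latent heat of vaporization (J/kg)

depth = P / (rho * v * w * (cp * dT + Lv))   # channel depth in metres
mean, std = depth.mean(), depth.std()
print(f"depth = {mean * 1e6:.0f} +/- {std * 1e6:.0f} um")
```

Repeating the sampling at different (P, v) pairs gives the propagation of uncertainty with power and scanning speed, and freezing all but one property at its mean gives a crude one-at-a-time sensitivity analysis.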
Graves, Yan Jiang; Jia, Xun; Jiang, Steve B
2013-03-21
The γ-index test has been commonly adopted to quantify the degree of agreement between a reference dose distribution and an evaluation dose distribution. Monte Carlo (MC) simulation has been widely used for the radiotherapy dose calculation for both clinical and research purposes. The goal of this work is to investigate both theoretically and experimentally the impact of the MC statistical fluctuation on the γ-index test when the fluctuation exists in the reference, the evaluation, or both dose distributions. To the first order approximation, we theoretically demonstrated in a simplified model that the statistical fluctuation tends to overestimate γ-index values when existing in the reference dose distribution and underestimate γ-index values when existing in the evaluation dose distribution, provided the original γ-index is relatively large compared with the statistical fluctuation. Our numerical experiments using realistic clinical photon radiation therapy cases have shown that (1) when performing a γ-index test between an MC reference dose and a non-MC evaluation dose, the average γ-index is overestimated and the gamma passing rate decreases with the increase of the statistical noise level in the reference dose; (2) when performing a γ-index test between a non-MC reference dose and an MC evaluation dose, the average γ-index is underestimated when they are within the clinically relevant range and the gamma passing rate increases with the increase of the statistical noise level in the evaluation dose; (3) when performing a γ-index test between an MC reference dose and an MC evaluation dose, the gamma passing rate is overestimated due to the statistical noise in the evaluation dose and underestimated due to the statistical noise in the reference dose. We conclude that the γ-index test should be used with caution when comparing dose distributions computed with MC simulation.
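A simple 1-D global γ-index implementation illustrates how statistical noise in the evaluation dose affects the passing rate; the tolerances, test distribution, and noise level are arbitrary choices, not the paper's clinical cases:

```python
import numpy as np

def gamma_1d(x, d_ref, d_eval, dta=1.0, dd=0.01):
    """Global 1-D gamma index: for each evaluation point, minimise the
    combined dose-difference / distance-to-agreement metric over all
    reference points. dta in mm, dd as a fraction of the max ref dose."""
    dd_abs = dd * d_ref.max()
    # pairwise terms: rows = evaluation points, cols = reference points
    dist2 = ((x[:, None] - x[None, :]) / dta) ** 2
    dose2 = ((d_eval[:, None] - d_ref[None, :]) / dd_abs) ** 2
    return np.sqrt((dist2 + dose2).min(axis=1))

# Identical noiseless distributions pass exactly (gamma = 0 everywhere)
x = np.linspace(0.0, 100.0, 201)                  # positions in mm
ref = np.exp(-((x - 50.0) / 20.0) ** 2)
assert np.allclose(gamma_1d(x, ref, ref), 0.0)

# Add MC-like noise to the evaluation dose and inspect the passing rate
rng = np.random.default_rng(3)
noisy = ref + rng.normal(0.0, 0.01, ref.size)     # ~1% statistical noise
g = gamma_1d(x, ref, noisy)
print(f"passing rate (gamma < 1): {100.0 * (g < 1.0).mean():.1f}%")
```

Swapping which distribution carries the noise (reference versus evaluation) reproduces the asymmetry the abstract describes, because the minimisation is taken over the reference points only.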
Rizzo, Robert C; Udier-Blagović, Marina; Wang, De-Ping; Watkins, Edward K; Kroeger Smith, Marilyn B; Smith, Richard H; Tirado-Rives, Julian; Jorgensen, William L
2002-07-04
Results of Monte Carlo (MC) simulations for more than 200 nonnucleoside inhibitors of HIV-1 reverse transcriptase (NNRTIs) representing eight diverse chemotypes have been correlated with their anti-HIV activities in an effort to establish simulation protocols and methods that can be used in the development of more effective drugs. Each inhibitor was modeled in a complex with the protein and by itself in water, and potentially useful descriptors of binding affinity were collected during the MC simulations. A viable regression equation was obtained for each data set using an extended linear response approach, which yielded r(2) values between 0.54 and 0.85 and an average unsigned error of only 0.50 kcal/mol. The most common descriptors confirm that a good geometrical match between the inhibitor and the protein is important and that the net loss of hydrogen bonds with the inhibitor upon binding is unfavorable. Other physically reasonable descriptors of binding are needed on a chemotype case-by-case basis. By including descriptors in common from the individual fits, combination regressions that include multiple data sets were also developed. This procedure led to a refined "master" regression for 210 NNRTIs with an r(2) of 0.60 and a cross-validated q(2) of 0.55. The computed activities show an rms error of 0.86 kcal/mol in comparison with experiment and an average unsigned error of 0.69 kcal/mol. Encouraging results were obtained for the predictions of 27 NNRTIs, representing a new chemotype not included in the development of the regression model. Predictions for this test set using the master regression yielded a q(2) value of 0.51 and an average unsigned error of 0.67 kcal/mol. Finally, additional regression analysis reveals that use of ligand-only descriptors leads to models with much diminished predictive ability.
GPU-based Monte Carlo Dust Radiative Transfer Scheme Applied to Active Galactic Nuclei
Heymann, Frank; Siebenmorgen, Ralf
2012-05-01
A three-dimensional parallel Monte Carlo (MC) dust radiative transfer code is presented. To overcome the huge computing-time requirements of MC treatments, the computational power of vectorized hardware is used, utilizing either multi-core computer power or graphics processing units. The approach is a self-consistent way to solve the radiative transfer equation in arbitrary dust configurations. The code calculates the equilibrium temperatures of two populations of large grains and stochastically heated polycyclic aromatic hydrocarbons. Anisotropic scattering is treated applying the Henyey-Greenstein phase function. The spectral energy distribution (SED) of the object is derived at low spatial resolution by a photon counting procedure and at high spatial resolution by a vectorized ray tracer. The latter allows computation of high signal-to-noise images of the objects at any frequencies and arbitrary viewing angles. We test the robustness of our approach against other radiative transfer codes. The SED and dust temperatures of one- and two-dimensional benchmarks are reproduced at high precision. The parallelization capability of various MC algorithms is analyzed and included in our treatment. We utilize the Lucy algorithm for the optical thin case where the Poisson noise is high, the iteration-free Bjorkman & Wood method to reduce the calculation time, and the Fleck & Canfield diffusion approximation for extreme optical thick cells. The code is applied to model the appearance of active galactic nuclei (AGNs) at optical and infrared wavelengths. The AGN torus is clumpy and includes fluffy composite grains of various sizes made up of silicates and carbon. The dependence of the SED on the number of clumps in the torus and the viewing angle is studied. The appearance of the 10 μm silicate features in absorption or emission is discussed. The SED of the radio-loud quasar 3C 249.1 is fit by the AGN model and a cirrus component to account for the far-infrared emission.
Messina, Luca; Castin, Nicolas; Domain, Christophe; Olsson, Pär
2017-02-01
The quality of kinetic Monte Carlo (KMC) simulations of microstructure evolution in alloys relies on the parametrization of point-defect migration rates, which are complex functions of the local chemical composition and can be calculated accurately with ab initio methods. However, constructing reliable models that ensure the best possible transfer of physical information from ab initio to KMC is a challenging task. This work presents an innovative approach, where the transition rates are predicted by artificial neural networks trained on a database of 2000 migration barriers, obtained with density functional theory (DFT) in place of interatomic potentials. The method is tested on copper precipitation in thermally aged iron alloys, by means of a hybrid atomistic-object KMC model. For the object part of the model, the stability and mobility properties of copper-vacancy clusters are analyzed by means of independent atomistic KMC simulations, driven by the same neural networks. The cluster diffusion coefficients and mean free paths are found to increase with size, confirming the dominant role of coarsening of medium- and large-sized clusters in the precipitation kinetics. The evolution under thermal aging is in better agreement with experiments with respect to a previous interatomic-potential model, especially concerning the experiment time scales. However, the model underestimates the solubility of copper in iron due to the excessively high solution energy predicted by the chosen DFT method. Nevertheless, this work proves the capability of neural networks to transfer complex ab initio physical properties to higher-scale models, and facilitates the extension to systems with increasing chemical complexity, setting the ground for reliable microstructure evolution simulations in a wide range of alloys and applications.
Ross, M. N.; Toohey, D.
2008-12-01
Emissions from solid and liquid propellant rocket engines reduce global stratospheric ozone levels. Currently ~ one kiloton of payloads are launched into earth orbit annually by the global space industry. Stratospheric ozone depletion from present day launches is a small fraction of the ~ 4% globally averaged ozone loss caused by halogen gases. Thus rocket engine emissions are currently considered a minor, if poorly understood, contributor to ozone depletion. Proposed space-based geoengineering projects designed to mitigate climate change would require order of magnitude increases in the amount of material launched into earth orbit. The increased launches would result in comparable increases in the global ozone depletion caused by rocket emissions. We estimate global ozone loss caused by three space-based geoengineering proposals to mitigate climate change: (1) mirrors, (2) sunshade, and (3) space-based solar power (SSP). The SSP concept does not directly engineer climate, but is touted as a mitigation strategy in that SSP would reduce CO2 emissions. We show that launching the mirrors or sunshade would cause global ozone loss between 2% and 20%. Ozone loss associated with an economically viable SSP system would be at least 0.4% and possibly as large as 3%. It is not clear which, if any, of these levels of ozone loss would be acceptable under the Montreal Protocol. The large uncertainties are mainly caused by a lack of data or validated models regarding liquid propellant rocket engine emissions. Our results offer four main conclusions. (1) The viability of space-based geoengineering schemes could well be undermined by the relatively large ozone depletion that would be caused by the required rocket launches. (2) Analysis of space-based geoengineering schemes should include the difficult tradeoff between the gain of long-term (~ decades) climate control and the loss of short-term (~ years) deep ozone loss. (3) The trade can be properly evaluated only if our
Monte Carlo-based diode design for correction-less small field dosimetry
Charles, P. H.; Crowe, S. B.; Kairn, T.; Knight, R. T.; Hill, B.; Kenny, J.; Langton, C. M.; Trapp, J. V.
2013-07-01
Due to their small collecting volume, diodes are commonly used in small field dosimetry. However, the relative sensitivity of a diode increases with decreasing small field size. Conversely, small air gaps have been shown to cause a significant decrease in the sensitivity of a detector as the field size is decreased. Therefore, this study uses Monte Carlo simulations to look at introducing air upstream of diodes such that they measure with a constant sensitivity across all field sizes in small field dosimetry. Varying thicknesses of air were introduced onto the upstream end of two commercial diodes (PTW 60016 photon diode and PTW 60017 electron diode), as well as a theoretical unenclosed silicon chip, using field sizes as small as 5 mm × 5 mm. The metric D_{w,Q}/D_{Det,Q} used in this study represents the ratio of the dose to a point of water to the dose to the diode active volume, for a particular field size and location. The optimal thickness of air required to provide a constant sensitivity across all small field sizes was found by plotting D_{w,Q}/D_{Det,Q} as a function of introduced air gap size for various field sizes, and finding the intersection point of these plots. That is, the point at which D_{w,Q}/D_{Det,Q} was constant for all field sizes was found. The optimal thickness of air was calculated to be 3.3, 1.15 and 0.10 mm for the photon diode, electron diode and unenclosed silicon chip, respectively. The variation in these results was due to the different design of each detector. When calculated with the new diode design incorporating the upstream air gap, the output correction factor k_{Q_clin,Q_msr}^{f_clin,f_msr} was equal to unity to within statistical uncertainty (0.5%) for all three diodes. Cross-axis profile measurements were also improved with the new detector design. The upstream air gap could be implemented on the commercial diodes via a cap consisting of the air cavity surrounded by water equivalent material.
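The intersection-finding step can be sketched as follows; the ratio values below are invented to illustrate the procedure and are not the paper's Monte Carlo data:

```python
import numpy as np

# Illustrative only: dose ratio D_w/D_det versus introduced upstream air
# gap (mm) for two field sizes. Real values would come from MC simulation.
gap = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
ratio_5mm  = np.array([0.92, 0.95, 0.98, 1.01, 1.04, 1.07])    # 5 mm field
ratio_30mm = np.array([1.00, 1.005, 1.01, 1.015, 1.02, 1.025]) # 30 mm field

# Fit straight lines and solve a1*g + b1 = a2*g + b2 for the air gap at
# which the detector response ratio is the same for both field sizes.
a1, b1 = np.polyfit(gap, ratio_5mm, 1)
a2, b2 = np.polyfit(gap, ratio_30mm, 1)
g_opt = (b2 - b1) / (a1 - a2)
print(f"field-size-independent air gap ~ {g_opt:.2f} mm")
```

With more than two field sizes (as in the study), one would look for the common crossing point of all fitted curves, or minimise the spread of the ratios over field size as a function of the gap.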
Evaluation of a commercial electron treatment planning system based on Monte Carlo techniques (eMC).
Pemler, Peter; Besserer, Jürgen; Schneider, Uwe; Neuenschwander, Hans
2006-01-01
A commercial electron beam treatment planning system based on a Monte Carlo algorithm (Varian Eclipse, eMC V7.2.35) was evaluated. Measured dose distributions were used for comparison with dose distributions predicted by eMC calculations. Tests were carried out for various applicators and field sizes, irregularly shaped cut-outs and an inhomogeneity phantom for energies between 6 MeV and 22 MeV. Monitor units were calculated for all applicator/energy combinations and field sizes down to 3 cm diameter and source-to-surface distances of 100 cm and 110 cm. A mass-density-to-Hounsfield-units calibration was performed to compare dose distributions calculated with a default and an individual calibration. The relationship between the calculation parameters of the eMC and the resulting dose distribution was studied in detail. Finally, the algorithm was also applied to a clinical case (boost treatment of the breast) to reveal possible problems in the implementation. For standard geometries there was good agreement between measurements and calculations, except for profiles at low energy (6 MeV) and high energies (18 MeV, 22 MeV), where the algorithm overestimated the dose off-axis in the high-dose region. For energies of 12 MeV and higher there were oscillations in the plateau region of the corresponding depth dose curves calculated with a grid size of 1 mm. With irregular cut-outs, an overestimation of the dose was observed for small slits and low energies (4% for 6 MeV), as well as for asymmetric cases and extended source-to-surface distances (12% for SSD = 120 cm). While all monitor unit calculations for SSD = 100 cm were within 3% of measurements, there were large deviations for small cut-outs and source-to-surface distances larger than 100 cm (7% for a 3 cm diameter cut-out at a source-to-surface distance of 110 cm).
DEFF Research Database (Denmark)
Klösgen, Beate; Bruun, Sara; Hansen, Søren;
The presence of a depletion layer of water along extended hydrophobic interfaces, and the possibly related formation of nanobubbles, is an ongoing discussion. The phenomenon was initially reported when we, years ago, chose thick films (~300-400 Å) of polystyrene as cushions between a crystalline … giving rise to depletion layers, and the mechanisms and boundary conditions that control their presence and extension still require clarification. Recently, careful systematic reflectivity experiments were repeated on the same system. No depletion layers were found, and it was conjectured that the whole …
Jin, Shengye; Tamura, Masayuki
2013-10-01
The Monte Carlo Ray Tracing (MCRT) method is a versatile tool for simulating the radiative transfer regime of the Solar-Atmosphere-Landscape system. Moreover, it can be used to compute the radiation distribution over a complex landscape configuration such as a forest area. Due to its robustness to changes in 3-D scene complexity, the MCRT method is also employed for simulating the canopy radiative transfer regime as a validation source for other radiative transfer models. In MCRT modeling of vegetation, one basic step is setting up the canopy scene. 3-D scanning can represent canopy structure very accurately, but it is time consuming. A botanical growth function can model the growth of a single tree, but cannot express the interaction among trees. The L-system is also a function-controlled tree growth simulation model, but it requires a large amount of computing memory; in addition, it models only the current tree pattern rather than tree growth while the radiative transfer regime is being simulated. It is therefore much more practical to use regular solids such as ellipsoids, cones and cylinders to represent individual canopies. Considering the allelopathy apparent in some open-forest optical images, each tree repels other trees within its own 'domain'. Based on this assumption, a stochastic circle packing algorithm is developed in this study to generate the 3-D canopy scene. The canopy coverage (%) and the number of trees (N) in the 3-D scene are declared first, similar to a random open-forest image. Accordingly, each canopy radius (rc) is generated at random. The circle centres are then placed on the XY-plane while the circle packing algorithm keeps the circles separate from each other. To model the individual trees, Ishikawa's tree growth regression model is used to set the tree parameters, including DBH (dt) and tree height (H). However, the relationship between canopy height (Hc) and trunk height (Ht) is
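The scene-generation step can be sketched as follows. This is a minimal rejection-sampling version of a stochastic circle packing, assuming the declared inputs (coverage fraction and tree count); the radius distribution and the `pack_circles` helper are illustrative, not the authors' exact algorithm.

```python
import math
import random

def pack_circles(n_trees, coverage, extent=100.0, max_tries=10000, seed=1):
    """Place n_trees non-overlapping canopy circles on an extent x extent
    plot, with total crown area matching the requested coverage fraction.
    Illustrative sketch, not the paper's exact procedure."""
    rng = random.Random(seed)
    target_area = coverage * extent * extent
    # draw random radii, then rescale so the crown areas sum to the target
    radii = [rng.uniform(1.0, 4.0) for _ in range(n_trees)]
    scale = (target_area / sum(math.pi * r * r for r in radii)) ** 0.5
    radii = [r * scale for r in radii]
    circles = []
    for r in sorted(radii, reverse=True):       # place large crowns first
        for _ in range(max_tries):
            x = rng.uniform(r, extent - r)
            y = rng.uniform(r, extent - r)
            # accept only if the new crown touches no previously placed crown
            if all((x - cx) ** 2 + (y - cy) ** 2 >= (r + cr) ** 2
                   for cx, cy, cr in circles):
                circles.append((x, y, r))
                break
        else:
            raise RuntimeError("could not place all crowns; lower coverage")
    return circles
```

Placing large crowns first makes the rejection loop far less likely to get stuck at realistic open-forest coverages.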
Directory of Open Access Journals (Sweden)
Tuija Kangasmaa
2012-01-01
Simultaneous Tl-201/Tc-99m dual isotope myocardial perfusion SPECT is seriously hampered by down-scatter from Tc-99m into the Tl-201 energy window. This paper presents and optimises an ordered-subsets expectation-maximisation (OS-EM) based reconstruction algorithm which corrects the down-scatter using an efficient Monte Carlo (MC) simulator. The algorithm starts by reconstructing the Tc-99m image with attenuation, collimator response, and MC-based scatter correction. The reconstructed Tc-99m image is then used as input for an efficient MC-based simulation of the down-scatter of Tc-99m photons into the Tl-201 window. This down-scatter estimate is finally used in the Tl-201 reconstruction to correct the crosstalk between the two isotopes. The mathematical 4D NCAT phantom and physical cardiac phantoms were used to optimise the number of OS-EM iterations at which the scatter estimate is updated and the number of MC-simulated photons. The results showed that two scatter-update iterations and 10^5 simulated photons are enough for the Tc-99m and Tl-201 reconstructions, whereas 10^6 simulated photons are needed to generate good-quality down-scatter estimates. With these parameters, the entire Tl-201/Tc-99m dual isotope reconstruction can be accomplished in less than 3 minutes.
Kangasmaa, Tuija; Kuikka, Jyrki; Sohlberg, Antti
2012-01-01
Simultaneous Tl-201/Tc-99m dual isotope myocardial perfusion SPECT is seriously hampered by down-scatter from Tc-99m into the Tl-201 energy window. This paper presents and optimises an ordered-subsets expectation-maximisation (OS-EM) based reconstruction algorithm, which corrects the down-scatter using an efficient Monte Carlo (MC) simulator. The algorithm starts by first reconstructing the Tc-99m image with attenuation, collimator response, and MC-based scatter correction. The reconstructed Tc-99m image is then used as an input for an efficient MC-based down-scatter simulation of Tc-99m photons into the Tl-201 window. This down-scatter estimate is finally used in the Tl-201 reconstruction to correct the crosstalk between the two isotopes. The mathematical 4D NCAT phantom and physical cardiac phantoms were used to optimise the number of OS-EM iterations where the scatter estimate is updated and the number of MC simulated photons. The results showed that two scatter update iterations and 10^5 simulated photons are enough for the Tc-99m and Tl-201 reconstructions, whereas 10^6 simulated photons are needed to generate good quality down-scatter estimates. With these parameters, the entire Tl-201/Tc-99m dual isotope reconstruction can be accomplished in less than 3 minutes.
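The crosstalk correction amounts to carrying the MC down-scatter estimate as an additive term in the forward model during the iterative reconstruction. A toy ML-EM sketch on a dense 4x3 system (not the paper's OS-EM implementation) illustrates the idea:

```python
import numpy as np

# Toy sketch: the MC-simulated down-scatter enters as an additive term s in
# the forward model y ~ Poisson(A x + s), so the update divides the measured
# data by (forward projection + down-scatter estimate).

def mlem_with_scatter(A, y, s, n_iter=5000):
    """ML-EM with an additive scatter/crosstalk estimate s."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])     # sensitivity (back-projected ones)
    for _ in range(n_iter):
        proj = A @ x + s                 # forward project + down-scatter
        x *= (A.T @ (y / proj)) / sens   # multiplicative EM update
    return x

rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(4, 3))   # toy system matrix
x_true = np.array([2.0, 0.5, 1.0])       # "true" activity distribution
s = np.full(4, 0.2)                      # down-scatter estimate
y = A @ x_true + s                       # noise-free projections
x_rec = mlem_with_scatter(A, y, s)
```

Because the scatter term is inside the denominator rather than subtracted from the data, the update keeps the non-negativity of the EM iteration.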
An OpenCL-based Monte Carlo dose calculation engine (oclMC) for coupled photon-electron transport
Tian, Zhen; Folkerts, Michael; Qin, Nan; Jiang, Steve B; Jia, Xun
2015-01-01
The Monte Carlo (MC) method is recognized as the most accurate dose calculation method for radiotherapy. However, its extremely long computation time impedes clinical application. Recently, much effort has been made to realize fast MC dose calculation on GPUs. Nonetheless, most GPU-based MC dose engines were developed in the NVIDIA CUDA environment, which limits code portability to other platforms and hinders the introduction of GPU-based MC simulations into clinical practice. The objective of this paper is to develop a fast cross-platform MC dose engine, oclMC, using the OpenCL environment for external beam photon and electron radiotherapy in the MeV energy range. Coupled photon-electron MC simulation was implemented with analogue simulation for photon transport and a Class II condensed history scheme for electron transport. To test the accuracy and efficiency of our dose engine oclMC, we compared dose calculation results of oclMC and gDPM, our previously developed GPU-based MC code, for a 15 MeV electron ...
Pan, J.; Durand, M. T.; Vanderjagt, B. J.
2015-12-01
The Markov Chain Monte Carlo (MCMC) method is a retrieval algorithm based on Bayes' rule: it starts from an initial state of snow/soil parameters and updates it to a series of new states by comparing the posterior probabilities of the simulated snow microwave signals before and after each random-walk step. It thereby approximates the probability of the snow/soil parameters conditioned on the brightness temperatures (TB) measured at different bands. Although this method can solve for all snow parameters (depth, density, grain size and temperature) at the same time, it still needs prior information on these parameters for the posterior probability calculation, and how the priors influence the SWE retrieval is a major concern. Therefore, in this paper a sensitivity test is first carried out to study how accurate the snow emission models and how explicit the snow priors need to be to keep the SWE error within a certain amount. Synthetic TB simulated from the measured snow properties plus a 2-K observation error is used for this purpose, the aim being to provide guidance on MCMC application under different circumstances. The method is then applied to snowpits at different sites, including Sodankyla (Finland), Churchill (Canada) and Colorado (USA), using TB measured by ground-based radiometers at different bands. Building on the previous work, the error in these practical cases is studied, and the error sources are separated and quantified.
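The random-walk update described above can be sketched as a standard Metropolis sampler. The one-parameter forward model and the prior below are stand-ins for illustration only, not a real snow emission model:

```python
import numpy as np

def forward_tb(depth):
    """Hypothetical stand-in for a snow microwave emission model:
    TB (K) decreasing linearly with snow depth (m)."""
    return 250.0 - 20.0 * depth

def log_posterior(depth, tb_obs, sigma_obs=2.0, prior_mu=1.0, prior_sd=0.5):
    """Gaussian prior on depth times Gaussian TB likelihood (2-K error)."""
    if depth <= 0:
        return -np.inf
    log_prior = -0.5 * ((depth - prior_mu) / prior_sd) ** 2
    log_like = -0.5 * ((forward_tb(depth) - tb_obs) / sigma_obs) ** 2
    return log_prior + log_like

def mcmc(tb_obs, n=20000, step=0.1, seed=0):
    rng = np.random.default_rng(seed)
    depth, chain = 1.0, []
    lp = log_posterior(depth, tb_obs)
    for _ in range(n):
        prop = depth + step * rng.standard_normal()   # random-walk proposal
        lp_prop = log_posterior(prop, tb_obs)
        if np.log(rng.random()) < lp_prop - lp:       # Metropolis rule
            depth, lp = prop, lp_prop
        chain.append(depth)
    return np.array(chain)

# Synthetic observation at a true depth of 1.2 m, as in the sensitivity test.
chain = mcmc(tb_obs=forward_tb(1.2))
```

After discarding burn-in, the chain mean sits between the prior mean (1.0 m) and the true depth (1.2 m), weighted by their precisions, which is exactly how an over-tight prior would bias an SWE retrieval.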
Lysak, Y. V.; Klimanov, V. A.; Narkevich, B. Ya
2017-01-01
One of the most difficult problems of modern radionuclide therapy (RNT) is control of the absorbed dose in the pathological volume. This research presents a new approach based on estimating the accumulated activity of the radiopharmaceutical (RP) in the tumor volume from planar scintigraphic images of the patient, together with Monte Carlo calculation of the radiation transport, including absorption and scattering in the biological tissues of the patient and in the elements of the gamma camera itself. To obtain the data, we modeled scintigraphy of a vial containing the activity of RP administered to the patient, with the vial placed at a certain distance from the collimator; a similar study was then performed in identical geometry, with the same activity of the radiopharmaceutical in the pathological target inside the body of the patient. For correct calculation results, an adapted Fisher-Snyder human phantom was simulated in the MCNP program. Within this technique, calculations were performed for different sizes of pathological targets and various tumor depths inside the patient's body, using radiopharmaceuticals based on mixed β-γ-emitting (131I, 177Lu) and pure β-emitting (89Sr, 90Y) therapeutic radionuclides. The presented method can be used to implement, in clinical practice, estimation of absorbed doses in the regions of interest on the basis of planar scintigraphy of the patient with sufficient accuracy.
CRDIAC: Coupled Reactor Depletion Instrument with Automated Control
Energy Technology Data Exchange (ETDEWEB)
Steven K. Logan
2012-08-01
When modeling the behavior of a nuclear reactor over time, it is important to understand how the isotopes in the reactor will change, or transmute, over that time. This is especially important in the reactor fuel itself. Many nuclear physics codes model how particles interact in the system, but do not model this over time; thus, another code is used in conjunction with the nuclear physics code to accomplish this. In our work, the Monte Carlo N-Particle (MCNP) code and the Multi Reactor Transmutation Analysis Utility (MRTAU) were chosen. In this way, MCNP produces the reaction rates of the different isotopes present, and MRTAU uses cross sections generated from these reaction rates to determine how the mass of each isotope is lost or gained. Between these two codes, the information must be converted and edited for use, and a Python 2.7 script was developed to aid the user in getting the information into the correct forms. This newly developed methodology was called the Coupled Reactor Depletion Instrument with Automated Control (CRDIAC). As with any newly developed methodology for modeling physical phenomena, CRDIAC needed to be verified against similar methodology and validated against data taken from an experiment, in our case AFIP-3, a reduced-enrichment plate-type fuel tested in the ATR. We verified our methodology against the MCNP Coupled with ORIGEN2 (MCWO) method and validated our work against the Post Irradiation Examination (PIE) data. When compared to MCWO, the difference in concentration of U-235 throughout Cycle 144A was about 1%. When compared to the PIE data, the average bias for end-of-life U-235 concentration was about 2%. These results from CRDIAC therefore agree with the MCWO and PIE data, verifying and validating CRDIAC. CRDIAC provides an alternative to ORIGEN-based methodology, which is useful because CRDIAC's depletion code, MRTAU, uses every available isotope in its
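The coupling loop itself can be sketched as alternating transport and depletion steps. In CRDIAC the two stand-in functions below would be actual MCNP and MRTAU runs driven through input/output files by the Python script; the isotopes, cross sections and power normalization here are illustrative toy numbers:

```python
import math

# One-group absorption cross sections (cm^2); illustrative values only.
SIGMA_ABS = {"U235": 6e-22, "U238": 3e-24}

def transport_step(n_dens, power):
    """Stand-in for an MCNP run: a flux that holds power roughly constant,
    so the flux rises as the fissile inventory depletes."""
    return power / max(n_dens["U235"], 1e-30)

def depletion_step(n_dens, flux, dt):
    """Stand-in for MRTAU: analytic burnout N(t) = N0*exp(-sigma*phi*dt)
    per isotope (decay chains and ingrowth omitted for brevity)."""
    return {iso: n * math.exp(-SIGMA_ABS[iso] * flux * dt)
            for iso, n in n_dens.items()}

def couple(n0, power, dt, n_steps):
    """Alternate transport and depletion over the burnup history."""
    n = dict(n0)
    for _ in range(n_steps):
        flux = transport_step(n, power)    # reaction rates from "MCNP"
        n = depletion_step(n, flux, dt)    # new compositions from "MRTAU"
    return n

# Ten one-day burnup steps; number densities in atoms/cm^3 (toy values).
n_end = couple({"U235": 1.0e21, "U238": 2.0e22},
               power=1.0e35, dt=86400.0, n_steps=10)
```

The essential point is the data hand-off each step: reaction rates out of the transport solver become one-group cross sections for the depletion solver, and the updated compositions are written back into the next transport input.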
Energy Technology Data Exchange (ETDEWEB)
Adams, M.B. [United States Dept. of Agriculture Forest Service, Parsons, WV (United States); Burger, J.A. [Virginia Tech University, Blacksburg, VA (United States)
2010-07-01
This study tested the hypothesis that soil base-cation depletion is an effect of acidic deposition in forests of the central Appalachians. The effects of experimentally induced base-cation depletion were evaluated in relation to long-term soil productivity and the sustainability of forest stands. Whole-tree harvesting was conducted, along with removal of dead wood and litter, in order to remove all aboveground nutrients. Ammonium sulfate fertilizer was added at annual rates of 40.6 kg S/ha and 35.4 kg N/ha in order to increase the leaching of calcium (Ca) and magnesium (Mg) from the soil. A randomized complete block design with 4 or 5 treatments was used in a mixed-hardwood experimental forest and in a cherry-maple forest in a national forest, both located in West Virginia. Soils were sampled over a 10-year period. The study showed that significant changes in soil Mg, N and some other nutrients occurred over time; however, biomass did not differ significantly among the treatments.
Addressing Ozone Layer Depletion
Access information on EPA's efforts to address ozone layer depletion through regulations, collaborations with stakeholders, international treaties, partnerships with the private sector, and enforcement actions under Title VI of the Clean Air Act.
Reply to "Comment on 'A study on tetrahedron-based inhomogeneous Monte-Carlo optical simulation'".
Shen, Haiou; Wang, Ge
2011-04-19
We compare the accuracy of TIM-OS and MMCM in response to the recent analysis made by Fang [Biomed. Opt. Express 2, 1258 (2011)]. Our results show that the tetrahedron-based energy deposition algorithm used in TIM-OS is more accurate than the node-based energy deposition algorithm used in MMCM.
Multi-period mean–variance portfolio optimization based on Monte-Carlo simulation
Cong, F.; Oosterlee, C.W.
2016-01-01
We propose a simulation-based approach for solving the constrained dynamic mean–variance portfolio management problem. For this dynamic optimization problem, we first consider a sub-optimal strategy, called the multi-stage strategy, which can be utilized in a forward fashion. Then, based on this fa
Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun
2017-01-01
Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6 ± 15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size.
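The core of the APS idea, as described here, is a non-uniform allocation of the MC history budget across pencil-beam spots. The `allocate_histories` helper and its floor rule below are assumptions for illustration, not the paper's exact scheme:

```python
import numpy as np

# Sketch of intensity-weighted history allocation: high-intensity spots,
# which dominate the delivered dose, receive proportionally more simulated
# particles, while every spot keeps at least a minimal number of histories.

def allocate_histories(spot_intensities, total_histories, floor=1):
    """Distribute a fixed MC history budget across pencil-beam spots in
    proportion to the current spot intensities."""
    w = np.maximum(np.asarray(spot_intensities, dtype=float), 0.0)
    n = np.floor(total_histories * w / w.sum()).astype(int)
    return np.maximum(n, floor)       # keep a few histories everywhere

# Four spots with very different optimization intensities.
n = allocate_histories([10.0, 1.0, 0.0, 5.0], total_histories=16000)
```

Re-running this allocation as the optimizer updates the intensities is what makes the sampling "adaptive": the expensive MC effort follows the spots that currently matter.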
Demidov, A.; Eschlböck-Fuchs, S.; Kazakov, A. Ya.; Gornushkin, I. B.; Kolmhofer, P. J.; Pedarnig, J. D.; Huber, N.; Heitz, J.; Schmid, T.; Rössler, R.; Panne, U.
2016-11-01
An improved Monte-Carlo (MC) method for standard-less analysis in laser-induced breakdown spectroscopy (LIBS) is presented. Concentrations in MC LIBS are found by fitting model-generated synthetic spectra to experimental spectra. The current version of MC LIBS is based on graphics processing unit (GPU) computation and reduces the analysis time to several seconds per spectrum/sample; the previous version, based on central processing unit (CPU) computation, required unacceptably long analysis times of tens of minutes per spectrum/sample. The reduction in computational time is achieved through massively parallel computing on the GPU, which embeds thousands of co-processors. It is shown that the number of iterations on the GPU exceeds that on the CPU by a factor > 1000 for the 5-dimensional parameter space, yet requires a > 10-fold shorter computational time. The improved GPU-based MC LIBS outperforms the CPU-based MC LIBS in terms of accuracy, precision, and analysis time. The performance is tested on LIBS spectra obtained from pelletized powders of metal oxides consisting of CaO, Fe2O3, MgO, and TiO2 that simulate by-products of the steel industry, steel slags. It is demonstrated that GPU-based MC LIBS is capable of rapid multi-element analysis with relative errors between 1 and a few tens of percent, which is sufficient for industrial applications (e.g. steel slag analysis). The results of the improved GPU-based MC LIBS compare favorably with those of the CPU-based MC LIBS, as well as with the results of standard calibration-free (CF) LIBS based on the Boltzmann plot method.
Shypailo, R. J.; Ellis, K. J.
2011-05-01
During construction of the whole body counter (WBC) at the Children's Nutrition Research Center (CNRC), efficiency calibration was needed to translate acquired counts of ⁴⁰K to actual grams of potassium for measurement of total body potassium (TBK) in a diverse subject population. The MCNP Monte Carlo n-particle simulation program was used to describe the WBC (54 detectors plus shielding), test individual detector counting response, and create a series of virtual anthropomorphic phantoms based on national reference anthropometric data. Each phantom included an outer layer of adipose tissue and an inner core of lean tissue. Phantoms were designed for both genders representing ages 3.5 to 18.5 years with body sizes from the 5th to the 95th percentile based on body weight. In addition, a spherical surface source surrounding the WBC was modeled in order to measure the effects of subject mass on room background interference. Individual detector measurements showed good agreement with the MCNP model. The background source model came close to agreement with empirical measurements, but showed a trend deviating from unity with increasing subject size. Results from the MCNP simulation of the CNRC WBC agreed well with empirical measurements using BOMAB phantoms. Individual detector efficiency corrections were used to improve the accuracy of the model. Nonlinear multiple regression efficiency calibration equations were derived for each gender. Room background correction is critical in improving the accuracy of the WBC calibration.
Simakov, S P; Moellendorff, U V; Schmuck, I; Konobeev, A Y; Korovin, Y A; Pereslavtsev, P
2002-01-01
A newly developed computational procedure is presented for the generation of d-Li source neutrons in Monte Carlo transport calculations based on the use of evaluated double-differential d+⁶,⁷Li cross section data. A new code, McDeLicious, was developed as an extension to MCNP4C to enable neutronics design calculations for the d-Li based IFMIF neutron source making use of the evaluated deuteron data files. The McDeLicious code was checked against available experimental data and calculation results of McDeLi and MCNPX, both of which use built-in analytical models for the Li(d, xn) reaction. It is shown that McDeLicious along with the newly evaluated d+⁶,⁷Li data is superior in predicting the characteristics of the d-Li neutron source. As this approach makes use of tabulated Li(d, xn) cross sections, the accuracy of the IFMIF d-Li neutron source term can be steadily improved with more advanced and validated data.
Stamenkovic, Dragan D.; Popovic, Vladimir M.
2015-02-01
Warranty is a powerful marketing tool, but it always involves additional costs to the manufacturer. In order to reduce these costs and exploit warranty's marketing potential, the manufacturer needs to master techniques for predicting warranty cost from the reliability characteristics of the product. In this paper a combined free-replacement and pro-rata warranty policy is analysed as the warranty model for one type of light bulb. Since operating conditions have a great impact on product reliability, they need to be considered in such an analysis. A neural network model is used to predict light bulb reliability characteristics based on data from tests of light bulbs under various operating conditions; compared with a linear regression model used in the literature for similar tasks, the neural network proved to be the more accurate method for such prediction. The reliability parameters obtained in this way are then used in a Monte Carlo simulation to predict the times to failure needed for the warranty cost calculation. The results of the analysis make it possible for the manufacturer to choose the optimal warranty policy based on expected product operating conditions and, in this way, to lower the costs and increase the profit.
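The warranty-cost step can be sketched directly: draw times to failure from a fitted life distribution and price each failure under the combined policy. A Weibull distribution and single-failure accounting are assumed here for illustration; in the paper the reliability parameters come from the neural network model:

```python
import random

def warranty_cost_mc(shape, scale, w1, w2, unit_cost, n_sim=200000, seed=0):
    """Monte Carlo estimate of the manufacturer's expected warranty cost
    per sold unit under a combined policy: free replacement for failures
    before w1, pro-rata rebate for failures between w1 and w2
    (single-failure approximation; times to failure are Weibull)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sim):
        t = rng.weibullvariate(scale, shape)            # time to failure
        if t < w1:
            total += unit_cost                          # free replacement
        elif t < w2:
            total += unit_cost * (w2 - t) / (w2 - w1)   # pro-rata rebate
    return total / n_sim

# Illustrative parameters: characteristic life 2000 h, warranty split at
# 500 h (free replacement) and 1000 h (end of pro-rata period).
cost = warranty_cost_mc(shape=1.5, scale=2000.0, w1=500.0, w2=1000.0,
                        unit_cost=10.0)
```

Rerunning the estimate with shape/scale pairs predicted for different operating conditions is what lets the manufacturer compare candidate policies.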
Bruser, Christoph; Strutz, Nils; Winter, Stefan; Leonhardt, Steffen; Walter, Marian
2015-06-01
Unobtrusive, long-term monitoring of cardiac (and respiratory) rhythms using only non-invasive vibration sensors mounted in beds promises to unlock new applications in home and low-acuity monitoring. This paper presents a novel concept for such a system based on an array of near-infrared (NIR) sensors placed underneath a regular bed mattress. We focus on modeling and analyzing the underlying technical measurement principle with the help of a 2D model of a polyurethane foam mattress and Monte-Carlo simulations of the opto-mechanical interaction responsible for signal genesis. Furthermore, a test rig that automatically and repeatably impresses mechanical vibrations onto a mattress is introduced and used to identify the properties of a prototype implementation of the proposed measurement principle. Results show that NIR-based sensing is capable of registering minuscule deformations of the mattress with high spatial specificity. As a final outlook, proof-of-concept measurements with the sensor array are presented, which demonstrate that cardiorespiratory movements of the body can be detected and that automatic heart rate estimation at competitive error levels is feasible with the proposed approach.
Quan, Guotao; Gong, Hui; Deng, Yong; Fu, Jianwei; Luo, Qingming
2011-02-01
High-speed fluorescence molecular tomography (FMT) reconstruction for 3-D heterogeneous media is still one of the most challenging problems in diffuse optical fluorescence imaging. In this paper, we propose a fast FMT reconstruction method that is based on Monte Carlo (MC) simulation and accelerated by a cluster of graphics processing units (GPUs). Based on the Message Passing Interface standard, we modified the MC code for fast FMT reconstruction, and the different Green's functions representing the flux distribution in the media are calculated simultaneously by different GPUs in the cluster. A load-balancing method was also developed to increase the computational efficiency. By applying the Fréchet derivative, a Jacobian matrix is formed to reconstruct the distribution of the fluorochromes using the calculated Green's functions. Phantom experiments have shown that only 10 min are required to obtain reconstruction results with a cluster of 6 GPUs, rather than 6 h with a cluster of multiple dual-Opteron CPU nodes. Because of the MC simulation's advantages of high accuracy and suitability for 3-D heterogeneous media with refractive-index-mismatched boundaries, the GPU-cluster-accelerated method provides a reliable approach to high-speed reconstruction for FMT imaging.
Energy Technology Data Exchange (ETDEWEB)
Shypailo, R J; Ellis, K J, E-mail: shypailo@bcm.edu [USDA/ARS Children' s Nutrition Research Center, Baylor College of Medicine, 1100 Bates Street, Houston, TX 77030 (United States)
2011-05-21
During construction of the whole body counter (WBC) at the Children's Nutrition Research Center (CNRC), efficiency calibration was needed to translate acquired counts of ⁴⁰K to actual grams of potassium for measurement of total body potassium (TBK) in a diverse subject population. The MCNP Monte Carlo n-particle simulation program was used to describe the WBC (54 detectors plus shielding), test individual detector counting response, and create a series of virtual anthropomorphic phantoms based on national reference anthropometric data. Each phantom included an outer layer of adipose tissue and an inner core of lean tissue. Phantoms were designed for both genders representing ages 3.5 to 18.5 years with body sizes from the 5th to the 95th percentile based on body weight. In addition, a spherical surface source surrounding the WBC was modeled in order to measure the effects of subject mass on room background interference. Individual detector measurements showed good agreement with the MCNP model. The background source model came close to agreement with empirical measurements, but showed a trend deviating from unity with increasing subject size. Results from the MCNP simulation of the CNRC WBC agreed well with empirical measurements using BOMAB phantoms. Individual detector efficiency corrections were used to improve the accuracy of the model. Nonlinear multiple regression efficiency calibration equations were derived for each gender. Room background correction is critical in improving the accuracy of the WBC calibration.
Depletion potential of colloids: a direct simulation study
Institute of Scientific and Technical Information of China (English)
李卫华; 薛松; 马红孺
2001-01-01
The depletion interaction between a big sphere and a hard wall, and between two big hard spheres, in a hard-sphere colloidal system was studied by the Monte Carlo method. Direct simulation of the free energy difference was performed by means of the Acceptance Ratio Method (ARM).
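The free-energy estimation behind an acceptance-ratio method can be illustrated on a solvable toy system: two harmonic "states" whose exact free-energy difference is ln 2. This uses Bennett's acceptance-ratio estimator as a generic stand-in; the paper's hard-sphere application differs in detail (the "potentials" there are overlap constraints):

```python
import numpy as np

def u0(x):
    return 0.5 * x ** 2        # state 0: harmonic well, spring constant 1

def u1(x):
    return 2.0 * x ** 2        # state 1: harmonic well, spring constant 4

def fermi(x):
    return 1.0 / (1.0 + np.exp(x))

def bennett_df(x0, x1, lo=-5.0, hi=5.0, tol=1e-10):
    """Bisect for dF in Bennett's self-consistent acceptance-ratio
    equation <f(dU - dF)>_0 = <f(dF - dU)>_1 (equal sample sizes, kT=1)."""
    du0 = u1(x0) - u0(x0)      # energy difference on state-0 samples
    du1 = u1(x1) - u0(x1)      # energy difference on state-1 samples

    def g(df):                 # increasing in df, with its root at dF
        return fermi(du0 - df).mean() - fermi(df - du1).mean()

    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(2)
x0 = rng.normal(0.0, 1.0, 200000)   # Boltzmann samples from state 0
x1 = rng.normal(0.0, 0.5, 200000)   # Boltzmann samples from state 1
df = bennett_df(x0, x1)             # exact answer: ln 2 ~ 0.6931
```

For the Gaussian wells the partition functions are known, so the estimator can be checked against the analytic result before being trusted on a hard-sphere system.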
Energy Technology Data Exchange (ETDEWEB)
Garcia, Marie-Paule, E-mail: marie-paule.garcia@univ-brest.fr; Villoing, Daphnée [UMR 1037 INSERM/UPS, CRCT, 133 Route de Narbonne, 31062 Toulouse (France); McKay, Erin [St George Hospital, Gray Street, Kogarah, New South Wales 2217 (Australia); Ferrer, Ludovic [ICO René Gauducheau, Boulevard Jacques Monod, St Herblain 44805 (France); Cremonesi, Marta; Botta, Francesca; Ferrari, Mahila [European Institute of Oncology, Via Ripamonti 435, Milano 20141 (Italy); Bardiès, Manuel [UMR 1037 INSERM/UPS, CRCT, 133 Route de Narbonne, Toulouse 31062 (France)
2015-12-15
Purpose: The TestDose platform was developed to generate scintigraphic imaging protocols and associated dosimetry by Monte Carlo modeling. TestDose is part of a broader project (www.dositest.com) whose aim is to identify the biases induced by different clinical dosimetry protocols. Methods: The TestDose software handles the whole pipeline from virtual patient generation to the resulting planar and SPECT images and dosimetry calculations. The originality of the approach lies in the implementation of functional segmentation for the anthropomorphic model representing a virtual patient. Two anthropomorphic models are currently available: 4D XCAT and ICRP 110. A pharmacokinetic model describes the biodistribution of a given radiopharmaceutical in each defined compartment at various time-points. The Monte Carlo simulation toolkit GATE offers the possibility to accurately simulate scintigraphic images and absorbed doses in volumes of interest. The TestDose platform relies on GATE to reproduce precisely any imaging protocol and to provide reference dosimetry. For image generation, TestDose stores the user's imaging requirements and automatically generates the command files used as input for GATE. Each compartment is simulated only once and the resulting output is weighted using pharmacokinetic data; the resulting compartment projections are aggregated to obtain the final image. For dosimetry computation, emission data are stored in the platform database and the relevant GATE input files are generated for the virtual patient model and associated pharmacokinetics. Results: Two sample software runs are given to demonstrate the potential of TestDose. A clinical imaging protocol for Octreoscan™ therapeutic treatment was implemented using the 4D XCAT model. Whole-body “step and shoot” acquisitions at different times post-injection and one SPECT acquisition were generated within reasonable computation times. Based on the same Octreoscan™ kinetics, a dosimetry
MC-Net: a method for the construction of phylogenetic networks based on the Monte-Carlo method
Directory of Open Access Journals (Sweden)
Eslahchi Changiz
2010-08-01
Abstract Background A phylogenetic network is a generalization of phylogenetic trees that allows the representation of conflicting signals or alternative evolutionary histories in a single diagram. There are several methods for constructing these networks, some of which are based on distances among taxa; in practice, distance-based methods run faster than the alternatives. Neighbor-Net (N-Net) is a distance-based method: it produces a circular ordering from a distance matrix and then constructs a collection of weighted splits using that circular ordering. SplitsTree, a program that uses these weighted splits, then builds the phylogenetic network. In general, finding an optimal circular ordering is an NP-hard problem; N-Net is a heuristic for it based on the neighbor-joining algorithm. Results In this paper, we present a heuristic algorithm, called MC-Net, that searches for an optimal circular ordering using the Monte-Carlo method. To show that MC-Net performs better than N-Net, we apply both algorithms to different data sets, draw the phylogenetic networks corresponding to their outputs using SplitsTree, and compare the results. Conclusions We find that the circular ordering produced by MC-Net is closer to the optimal circular ordering than that of N-Net. Furthermore, the networks built by SplitsTree from MC-Net outputs are simpler than those from N-Net.
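The search for a low-cost circular ordering can be illustrated with a minimal Monte Carlo sketch (not the MC-Net algorithm itself, whose proposal and acceptance rules are more elaborate): starting from some ordering, random pairwise swaps are proposed and kept whenever they do not increase the sum of distances between circularly adjacent taxa.

```python
import random

def ordering_cost(order, D):
    """Sum of distances between circularly adjacent taxa."""
    n = len(order)
    return sum(D[order[i]][order[(i + 1) % n]] for i in range(n))

def mc_circular_ordering(D, steps=20000, seed=0):
    """Monte Carlo search for a low-cost circular ordering.

    Repeatedly proposes a random pairwise swap and accepts it when the
    adjacency cost does not increase (a simplified illustration of a
    Monte-Carlo search over circular orderings).
    """
    rng = random.Random(seed)
    n = len(D)
    order = list(range(n))
    best = ordering_cost(order, D)
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)
        order[i], order[j] = order[j], order[i]
        cost = ordering_cost(order, D)
        if cost <= best:
            best = cost
        else:
            order[i], order[j] = order[j], order[i]  # reject the swap
    return order, best
```

For taxa lying on a line, the cost of a minimal cycle is twice the spread of the points, which gives a simple sanity check on the search.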
Energy Technology Data Exchange (ETDEWEB)
Visvikis, D. [INSERM U650, LaTIM, University Hospital Medical School, F-29609 Brest (France)]. E-mail: Visvikis.Dimitris@univ-brest.fr; Lefevre, T. [INSERM U650, LaTIM, University Hospital Medical School, F-29609 Brest (France); Lamare, F. [INSERM U650, LaTIM, University Hospital Medical School, F-29609 Brest (France); Kontaxakis, G. [ETSI Telecomunicacion Universidad Politecnica de Madrid, Ciudad Universitaria, s/n 28040, Madrid (Spain); Santos, A. [ETSI Telecomunicacion Universidad Politecnica de Madrid, Ciudad Universitaria, s/n 28040, Madrid (Spain); Darambara, D. [Department of Physics, School of Engineering and Physical Sciences, University of Surrey, Guildford (United Kingdom)
2006-12-20
The majority of present positron emission tomography (PET) animal systems are based on the coupling of high-density scintillators and light detectors. A disadvantage of these detector configurations is the compromise between image resolution, sensitivity and energy resolution. In addition, current combined imaging devices are built by simply placing different apparatus back-to-back in axial alignment, without any significant level of software or hardware integration. The use of semiconductor CdZnTe (CZT) detectors is a promising alternative to scintillators for gamma-ray imaging systems. At the same time, CZT detectors have the properties necessary for the construction of a truly integrated imaging device (PET/SPECT/CT). The aim of this study was to assess the performance of different small animal PET scanner architectures based on CZT pixellated detectors and compare their performance with that of state-of-the-art PET animal scanners. Different scanner architectures were modelled using GATE (Geant4 Application for Tomographic Emission). Particular scanner design characteristics included an overall cylindrical scanner format of 8 and 24 cm in axial and transaxial field of view, respectively, and a temporal coincidence window of 8 ns. Different individual detector modules were investigated, considering pixel pitches down to 0.625 mm and detector thicknesses from 1 to 5 mm. Modified NEMA NU2-2001 protocols were used in order to simulate performance under mouse, rat and monkey imaging conditions. These protocols allowed us to directly compare the performance of the proposed geometries with the latest generation of current small animal systems. The results demonstrate the potential for higher NECR with CZT-based scanners in comparison to scintillator-based animal systems.
Yuan, Jiankui; Zheng, Yiran; Wessels, Barry; Lo, Simon S; Ellis, Rodney; Machtay, Mitchell; Yao, Min
2016-12-01
A virtual source model for Monte Carlo simulations of helical TomoTherapy was developed previously by the authors. The purpose of this work is to perform experiments in an anthropomorphic (RANDO) phantom, with the same order of complexity as clinical treatments, to validate the virtual source model for use in quality assurance secondary checks of TomoTherapy patient planning dose. Helical TomoTherapy involves a complex delivery pattern with irregular beam apertures and couch movement during irradiation. Monte Carlo simulation, as the most accurate dose algorithm, is desirable in radiation dosimetry. Current Monte Carlo simulations for helical TomoTherapy adopt the full Monte Carlo model, which includes detailed modeling of each individual machine component, and thus large phase space files are required at different scoring planes. As an alternative approach, we previously developed a virtual source model that does not require large phase space files for patient dose calculations. In this work, we apply the simulation system to recompute patient doses, which were generated by the treatment planning system, in an anthropomorphic phantom to mimic real patient treatments. We performed thermoluminescence dosimeter point dose and film measurements for comparison with the Monte Carlo results. The thermoluminescence dosimeter measurements show that the relative difference between Monte Carlo and the treatment planning system is within 3%, with the largest difference less than 5%, for both test plans. The film measurements demonstrated 85.7% and 98.4% passing rates using the 3 mm/3% acceptance criterion for the head and neck and lung cases, respectively. Over 95% passing rate is achieved if a 4 mm/4% criterion is applied. For the dose-volume histograms, very good agreement is obtained between the Monte Carlo and treatment planning system methods for both cases. The experimental results demonstrate that the virtual source model Monte Carlo system can be a viable option for the
Shafiei, M.; Gharari, S.; Pande, S.; Bhulai, S.
2014-01-01
Posterior sampling methods are increasingly being used to describe parameter and model predictive uncertainty in hydrologic modelling. This paper proposes an alternative to random walk chains (such as DREAM-zs). We propose a sampler based on independence chains with an embedded feature of standardiz
Cully, William P.L.; Cotton, Simon L.; Scanlon, William G.
2012-01-01
The range of potential applications for indoor and campus based personnel localisation has led researchers to create a wide spectrum of different algorithmic approaches and systems. However, the majority of the proposed systems overlook the unique radio environment presented by the human body leadin
Energy Technology Data Exchange (ETDEWEB)
Chi, Y; Li, Y; Tian, Z; Gu, X; Jiang, S; Jia, X [UT Southwestern Medical Center, Dallas, TX (United States)
2015-06-15
Purpose: Pencil-beam or superposition-convolution type dose calculation algorithms are routinely used in inverse plan optimization for intensity modulated radiation therapy (IMRT). However, due to their limited accuracy in some challenging cases, e.g. lung, the resulting dose may lose its optimality after being recomputed using an accurate algorithm, e.g. Monte Carlo (MC). The objective of this study is to evaluate the feasibility and advantages of a new method to include MC in the treatment planning process. Methods: We developed a scheme to iteratively perform MC-based beamlet dose calculations and plan optimization. In the MC stage, a GPU-based dose engine was used, and the number of particles sampled from a beamlet was proportional to its optimized fluence from the previous step. We tested this scheme on four lung cancer IMRT cases. For each case, the original plan dose, the plan dose re-computed by MC, and the dose optimized by our scheme were obtained. Clinically relevant dosimetric quantities in these three plans were compared. Results: Although the original plan achieved satisfactory PTV dose coverage, after re-computing doses using the MC method it was found that the PTV D95% was reduced by 4.60%–6.67%. After re-optimizing these cases with our scheme, the PTV coverage was improved to the same level as in the original plan, while the critical OAR doses were maintained at clinically acceptable levels. Regarding computation time, it took on average 144 sec per case using only one GPU card, including both MC-based beamlet dose calculation and treatment plan optimization. Conclusion: The achieved dosimetric gains and high computational efficiency indicate the feasibility and advantages of the proposed MC-based IMRT optimization method. Comprehensive validations in more patient cases are in progress.
Niccolini, G.; Alcolea, J.
Solving the radiative transfer problem is common to many fields in astrophysics. With the increasing angular resolution of space- and ground-based telescopes (VLTI, HST), but also with the instruments of the next decade (NGST, ALMA, ...), astrophysical objects reveal, and will certainly continue to reveal, complex spatial structures. Consequently, it is necessary to develop numerical tools able to solve the radiative transfer equation in three dimensions in order to model and interpret these observations. I present a 3D radiative transfer program, using a new method for the construction of an adaptive spatial grid, based on the Monte Carlo method. With this tool, one can solve the continuum radiative transfer problem (e.g. in a dusty medium), compute the temperature structure of the considered medium and obtain the flux of the object (SED and images).
Scalability tests of R-GMA based Grid job monitoring system for CMS Monte Carlo data production
Bonacorsi, D; Field, L; Fisher, S; Grandi, C; Hobson, P R; Kyberd, P; MacEvoy, B; Nebrensky, J J; Tallini, H; Traylen, S
2004-01-01
High Energy Physics experiments such as CMS (Compact Muon Solenoid) at the Large Hadron Collider have unprecedented, large-scale data processing computing requirements, with data accumulating at around 1 Gbyte/s. The Grid distributed computing paradigm has been chosen as the solution to provide the requisite computing power. The demanding nature of CMS software and computing requirements, such as the production of large quantities of Monte Carlo simulated data, makes them an ideal test case for the Grid and a major driver for the development of Grid technologies. One important challenge when using the Grid for large-scale data analysis is the ability to monitor the large numbers of jobs that are being executed simultaneously at multiple remote sites. R-GMA is a monitoring and information management service for distributed resources based on the Grid Monitoring Architecture of the Global Grid Forum. In this paper we report on the first measurements of R-GMA as part of a monitoring architecture to be used for b...
Investigation of the CRT performance of a PET scanner based on liquid xenon: A Monte Carlo study
Gomez-Cadenas, J J; Ferrario, P; Monrabal, F; Rodríguez, J; Toledo, J F
2016-01-01
The measurement of the time of flight of the two 511 keV gammas recorded in coincidence in a PET scanner provides an effective way of reducing the random background and therefore increases the scanner sensitivity, provided that the coincidence resolving time (CRT) of the gammas is sufficiently good. Existing commercial systems based on LYSO crystals, such as the GEMINIS of Philips, reach CRT values of ~ 600 ps (FWHM). In this paper we present a Monte Carlo investigation of the CRT performance of a PET scanner exploiting the scintillating properties of liquid xenon. We find that an excellent CRT of 60-70 ps (depending on the PDE of the sensor) can be obtained if the scanner is instrumented with silicon photomultipliers (SiPMs) sensitive to the ultraviolet light emitted by xenon. Alternatively, a CRT of 120 ps can be obtained by instrumenting the scanner with (much cheaper) blue-sensitive SiPMs coated with a suitable wavelength shifter. These results show the excellent time of flight capabilities of a PET device b...
Joshi, Kaushik; Chaudhuri, Santanu
2016-10-01
The ability to accelerate the morphological evolution of nanoscale precipitates is a fundamental challenge for atomistic simulations. Kinetic Monte Carlo (KMC) is an effective methodology for accelerating the evolution of nanoscale systems that are dominated by so-called rare events. The quality and accuracy of the energy landscape used in KMC calculations can be significantly improved using DFT-informed interatomic potentials. Using a newly developed computational framework that uses the molecular simulator LAMMPS as a library function inside the KMC solver SPPARKS, we investigated the formation and growth of Guinier–Preston (GP) zones in dilute Al–Cu alloys at different temperatures and copper concentrations. The KMC simulations with the angular dependent potential (ADP) predict the formation of coherent disc-shaped monolayers of copper atoms (GPI zones) in the early stage. Such monolayers are then gradually transformed into the energetically favored GPII phase, which has two aluminum layers sandwiched between copper layers. We analyzed the growth kinetics of the KMC trajectory using Johnson–Mehl–Avrami (JMA) theory and obtained a phase transformation index close to 1.0. In the presence of grain boundaries, the KMC calculations predict the segregation of copper atoms near the grain boundaries instead of the formation of GP zones. The computational framework presented in this work is based on open-source potentials and an open-source MD simulator, and can predict morphological changes during the evolution of the alloys in the bulk and around grain boundaries.
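The rejection-free event selection at the heart of a KMC solver such as SPPARKS can be sketched in a few lines. This is a generic BKL/Gillespie step with made-up event rates, not the actual Al–Cu event catalog: an event is drawn with probability proportional to its rate, and the clock advances by an exponentially distributed waiting time.

```python
import math
import random

def kmc_step(rates, rng):
    """One rejection-free kinetic Monte Carlo (BKL/Gillespie) step.

    rates: list of event rates (1/s).
    Returns (index of the chosen event, time increment).
    """
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    for i, k in enumerate(rates):
        acc += k
        if r < acc:
            break
    dt = -math.log(rng.random()) / total  # exponential waiting time
    return i, dt

rng = random.Random(1)
t, counts = 0.0, [0, 0, 0]
rates = [1.0, 0.1, 0.01]  # hypothetical hop rates for three event classes
for _ in range(10000):
    i, dt = kmc_step(rates, rng)
    counts[i] += 1
    t += dt
```

Over many steps the fast event class is chosen roughly ten times as often as the next one, while the simulated clock still advances through the slow events; this is what lets KMC reach the long times that direct molecular dynamics cannot.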
Energy Technology Data Exchange (ETDEWEB)
Brunetti, Antonio; Golosio, Bruno [Universita degli Studi di Sassari, Dipartimento di Scienze Politiche, Scienze della Comunicazione e Ingegneria dell' Informazione, Sassari (Italy); Melis, Maria Grazia [Universita degli Studi di Sassari, Dipartimento di Storia, Scienze dell' Uomo e della Formazione, Sassari (Italy); Mura, Stefania [Universita degli Studi di Sassari, Dipartimento di Agraria e Nucleo di Ricerca sulla Desertificazione, Sassari (Italy)
2014-11-08
X-ray fluorescence (XRF) is a well-known nondestructive technique. It is also applied to multilayer characterization, since it can estimate both the composition and the thickness of the layers. Several kinds of cultural heritage samples can be considered complex multilayers, such as paintings, decorated objects or some types of metallic samples. Furthermore, they often have rough surfaces, which makes a precise determination of the structure and composition harder. The standard quantitative XRF approach does not take this aspect into account. In this paper, we propose a novel approach based on the combined use of X-ray measurements performed with a polychromatic beam and Monte Carlo simulations. All the information contained in an X-ray spectrum is used. This approach yields a very good estimate of the sample contents, both in terms of chemical elements and material thickness, and in this sense extends the capabilities of XRF measurements. Some examples are examined and discussed. (orig.)
Wierling, Christoph; Kühn, Alexander; Hache, Hendrik; Daskalaki, Andriani; Maschke-Dutz, Elisabeth; Peycheva, Svetlana; Li, Jian; Herwig, Ralf; Lehrach, Hans
2012-08-15
Cancer is known to be a complex disease and its therapy is difficult. Much information is available on molecules and pathways involved in cancer onset and progression and this data provides a valuable resource for the development of predictive computer models that can help to identify new potential drug targets or to improve therapies. Modeling cancer treatment has to take into account many cellular pathways usually leading to the construction of large mathematical models. The development of such models is complicated by the fact that relevant parameters are either completely unknown, or can at best be measured under highly artificial conditions. Here we propose an approach for constructing predictive models of such complex biological networks in the absence of accurate knowledge on parameter values, and apply this strategy to predict the effects of perturbations induced by anti-cancer drug target inhibitions on an epidermal growth factor (EGF) signaling network. The strategy is based on a Monte Carlo approach, in which the kinetic parameters are repeatedly sampled from specific probability distributions and used for multiple parallel simulations. Simulation results from different forms of the model (e.g., a model that expresses a certain mutation or mutation pattern or the treatment by a certain drug or drug combination) can be compared with the unperturbed control model and used for the prediction of the perturbation effects. This framework opens the way to experiment with complex biological networks in the computer, likely to save costs in drug development and to improve patient therapy.
Study on the Uncertainty of the Available Time Under Ship Fire Based on Monte Carlo Sampling Method
Institute of Scientific and Technical Information of China (English)
WANG Jin-hui; CHU Guan-quan; LI Kai-yuan
2013-01-01
Available safety egress time under ship fire (SFAT) is critical to ship fire safety assessment, design and emergency rescue. Although SFAT can be determined using fire models such as the two-zone fire model CFAST and the field model FDS, none of these models can address the uncertainties involved in the input parameters. To solve this problem, the current study presents a framework of uncertainty analysis for SFAT. Firstly, a deterministic model estimating SFAT is built. The uncertainties of the input parameters are regarded as random variables with given probability distribution functions. Subsequently, the deterministic SFAT model is coupled with a Monte Carlo sampling method to investigate the uncertainties of SFAT. Spearman's rank-order correlation coefficient (SRCC) is used to examine the sensitivity of each uncertain input parameter on SFAT. To illustrate the proposed approach in detail, a case study is performed. Based on the proposed approach, the probability density function and cumulative density function of SFAT are obtained. Furthermore, a sensitivity analysis with regard to SFAT is conducted. The results show a strong negative correlation between SFAT and the fire growth coefficient, whereas the effect of the other parameters is so weak that they can be neglected.
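The propagation-plus-sensitivity loop described above can be sketched as follows. The fire model here is a deliberately simplified stand-in (a t-squared growth law with hypothetical parameter ranges), not the CFAST/FDS-calibrated SFAT model of the paper; the structure, sampling inputs from their PDFs, collecting the output distribution, and ranking sensitivities with Spearman's coefficient, is the same.

```python
import random

def spearman(x, y):
    """Spearman rank-order correlation (assumes no tied values)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical stand-in model: time to reach a critical heat release rate
# for a t-squared fire, t = sqrt(Q_crit / alpha), where alpha is the fire
# growth coefficient and Q_crit the (uncertain) critical heat release rate.
rng = random.Random(42)
alphas, times = [], []
for _ in range(5000):
    alpha = rng.uniform(0.01, 0.10)    # fire growth coefficient (kW/s^2)
    q_crit = rng.gauss(1000.0, 100.0)  # critical heat release rate (kW)
    alphas.append(alpha)
    times.append((max(q_crit, 1.0) / alpha) ** 0.5)

times_sorted = sorted(times)
p05 = times_sorted[int(0.05 * len(times))]   # 5th percentile of SFAT
p95 = times_sorted[int(0.95 * len(times))]   # 95th percentile of SFAT
rho_alpha = spearman(alphas, times)          # sensitivity of SFAT to alpha
```

Because the growth coefficient spans a decade while the critical heat release varies by only about ten percent, the rank correlation with alpha comes out strongly negative, mirroring the qualitative finding reported above.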
Specification for the VERA Depletion Benchmark Suite
Energy Technology Data Exchange (ETDEWEB)
Kim, Kang Seog [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2015-12-17
CASL-X-2015-1014-000, Consortium for Advanced Simulation of LWRs. EXECUTIVE SUMMARY: The CASL neutronics simulator MPACT is under development for the neutronics and T-H coupled simulation of the pressurized water reactor. MPACT includes the ORIGEN-API and an internal depletion module to perform depletion calculations based upon neutron-material reactions and radioactive decay. It is a challenge to validate the depletion capability because of insufficient measured data. One indirect way to validate it is to perform a code-to-code comparison on benchmark problems. In this study a depletion benchmark suite has been developed, and a detailed guideline is provided to obtain meaningful computational outcomes which can be used in the validation of the MPACT depletion capability.
Directory of Open Access Journals (Sweden)
S. Maiti
2011-03-01
The Koyna region is well known for its triggered seismic activity since the hazardous earthquake of M=6.3 that occurred around the Koyna reservoir on 10 December 1967. Understanding the shallow distribution of the resistivity pattern in such a seismically critical area is vital for mapping faults, fractures and lineaments. However, deducing the true resistivity distribution from apparent resistivity data is difficult due to the intrinsic non-linearity in the data structures. Here we present a new technique based on Bayesian neural network (BNN) theory using a Hybrid Monte Carlo (HMC)/Markov Chain Monte Carlo (MCMC) simulation scheme. The new method is applied to invert one- and two-dimensional Direct Current (DC) vertical electrical sounding (VES) data acquired around the Koyna region in India. Before being applied to actual resistivity data, the new method was tested on simulated synthetic signals. In this approach the objective/cost function is optimized following the HMC/MCMC sampling-based algorithm, and each trajectory is updated by approximating the Hamiltonian differential equations through a leapfrog discretization scheme. The stability of the new inversion technique was tested in the presence of correlated red noise, and the uncertainty of the result was estimated using the BNN code. The estimated true resistivity distribution was compared with the results of singular value decomposition (SVD)-based conventional resistivity inversion. Comparative results based on the HMC-based Bayesian neural network are in good agreement with the existing model results; in some cases it also provides more detailed and precise results, which appear to be justified by local geological and structural details. The new BNN approach based on HMC is faster and has proved to be a promising inversion scheme for interpreting complex and non-linear resistivity problems. The HMC-based BNN results
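The leapfrog-within-Metropolis update used in HMC can be sketched for a one-dimensional toy target (a standard normal, not the BNN posterior of the paper): momentum is refreshed from a Gaussian, Hamilton's equations are integrated with the leapfrog scheme, and the trajectory endpoint is accepted with the usual Metropolis probability.

```python
import math
import random

def leapfrog(q, p, grad_u, eps, n_steps):
    """Leapfrog discretization of Hamilton's equations used in HMC."""
    p = p - 0.5 * eps * grad_u(q)      # initial half step in momentum
    for _ in range(n_steps - 1):
        q = q + eps * p                # full step in position
        p = p - eps * grad_u(q)        # full step in momentum
    q = q + eps * p
    p = p - 0.5 * eps * grad_u(q)      # final half step in momentum
    return q, p

def hmc(u, grad_u, q0, n_samples, eps=0.1, n_steps=20, seed=0):
    """Minimal one-dimensional Hybrid/Hamiltonian Monte Carlo sampler."""
    rng = random.Random(seed)
    q, samples = q0, []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)        # refresh momentum
        q_new, p_new = leapfrog(q, p, grad_u, eps, n_steps)
        h_old = u(q) + 0.5 * p * p
        h_new = u(q_new) + 0.5 * p_new * p_new
        if rng.random() < math.exp(min(0.0, h_old - h_new)):
            q = q_new                  # Metropolis acceptance
        samples.append(q)
    return samples

# Standard normal target: U(q) = q^2 / 2, so grad U(q) = q.
samples = hmc(lambda q: 0.5 * q * q, lambda q: q, q0=0.0, n_samples=2000)
```

Because leapfrog is symplectic, the Hamiltonian drifts only slightly along a trajectory, so acceptance stays high even for fairly long trajectories; this is the property that makes HMC faster-mixing than plain random-walk MCMC on smooth posteriors.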
Monte Carlo integration on GPU
Kanzaki, J.
2010-01-01
We use a graphics processing unit (GPU) for fast computation of Monte Carlo integrations. Two widely used Monte Carlo integration programs, VEGAS and BASES, are parallelized on the GPU. Using $W^{+}$ plus multi-gluon production processes at the LHC, we test the integrated cross sections and the execution times of the programs in FORTRAN and C on the CPU and of those on the GPU. The integrated results agree with each other within statistical errors. Programs on the GPU run about 50 times faster than those in C...
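The phrase "within statistical errors" refers to the one-standard-deviation error of a Monte Carlo estimate. A minimal CPU-side sketch (plain uniform sampling, without the importance-sampling grids that VEGAS and BASES actually use):

```python
import math
import random

def mc_integrate(f, n, seed=0):
    """Plain Monte Carlo estimate of the integral of f over [0, 1].

    Returns the estimate and its one-standard-deviation statistical
    error, the figure used when comparing CPU and GPU results.
    """
    rng = random.Random(seed)
    vals = [f(rng.random()) for _ in range(n)]
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / (n - 1)
    return mean, math.sqrt(var / n)

# Integral of x^2 over [0, 1]; exact value is 1/3.
est, err = mc_integrate(lambda x: x * x, 100000)
```

Two independent estimates (e.g. one from a CPU run and one from a GPU run) "agree within statistical errors" when their difference is comparable to the quadrature sum of their `err` values.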
An automated Monte-Carlo based method for the calculation of cascade summing factors
Jackson, M. J.; Britton, R.; Davies, A. V.; McLarty, J. L.; Goodwin, M.
2016-10-01
A versatile method has been developed to calculate cascade summing factors for use in quantitative gamma-spectrometry analysis procedures. The proposed method is based solely on Evaluated Nuclear Structure Data File (ENSDF) nuclear data, an X-ray energy library, and accurate efficiency characterisations for single detector counting geometries. The algorithm, which accounts for γ-γ, γ-X, γ-511 and γ-e- coincidences, can be applied to any design of gamma spectrometer and can be expanded to incorporate any number of nuclides. Efficiency characterisations can be derived from measured or mathematically modelled functions, and can accommodate both point and volumetric source types. The calculated results are shown to be consistent with an industry standard gamma-spectrometry software package. Additional benefits including calculation of cascade summing factors for all gamma and X-ray emissions, not just the major emission lines, are also highlighted.
Depletion Interactions in a Cylindric Pipeline with a Small Shape Change
Institute of Scientific and Technical Information of China (English)
LI Chun-Shu; GAO Hai-Xia; XIAO Chang-Ming
2007-01-01
Stressed by external forces, a cylindric pipeline may deform into an elliptic one. To expose the effect of a small shape change of the pipeline on the depletion interactions, both the depletion potentials and the depletion forces in hard sphere systems confined by a cylindric or an elliptic pipeline are studied by Monte Carlo simulations. The numerical results show that the depletion interactions are strongly affected by a small change in the shape of the pipeline. Furthermore, it is also found that the depletion interactions are strengthened if the short axis of the elliptic pipeline is decreased.
Steckiewicz, M.; Garnier, P.; André, N.; Mitchell, D. L.; Andersson, L.; Penou, E.; Beth, A.; Fedorov, A.; Sauvaud, J.-A.; Mazelle, C.; Brain, D. A.; Espley, J. R.; McFadden, J.; Halekas, J. S.; Larson, D. E.; Lillis, R. J.; Luhmann, J. G.; Soobiah, Y.; Jakosky, B. M.
2017-01-01
Nightside suprathermal electron depletions have been observed at Mars by three spacecraft to date: Mars Global Surveyor, Mars Express, and the Mars Atmosphere and Volatile EvolutioN (MAVEN) mission. This spatial and temporal diversity of measurements allows us to propose here a comprehensive view of the Martian electron depletions through the first multispacecraft study of the phenomenon. We have analyzed data recorded by the three spacecraft from 1999 to 2015 in order to better understand the distribution of the electron depletions and their creation mechanisms. Three simple criteria adapted to each mission have been implemented to identify more than 134,500 electron depletions observed between 125 and 900 km altitude. The geographical distribution maps of the electron depletions detected by the three spacecraft confirm the strong link existing between electron depletions and crustal magnetic field at altitudes greater than 170 km. At these altitudes, the distribution of electron depletions is strongly different in the two hemispheres, with a far greater chance to observe an electron depletion in the Southern Hemisphere, where the strongest crustal magnetic sources are located. However, the unique MAVEN observations reveal that below a transition region near 160-170 km altitude the distribution of electron depletions is the same in both hemispheres, with no particular dependence on crustal magnetic fields. This result supports the suggestion made by previous studies that these low-altitude events are produced through electron absorption by atmospheric CO2.
Simulation of Ni-63 based nuclear micro battery using Monte Carlo modeling
Energy Technology Data Exchange (ETDEWEB)
Kim, Tae Ho; Kim, Ji Hyun [Ulsan National Institute of Science and Technology, Ulsan (Korea, Republic of)
2013-10-15
Radioisotope batteries have an energy density 100-10000 times greater than chemical batteries. Moreover, Li-ion batteries have fundamental problems such as short lifetimes and the need for a recharging system, and existing batteries are hard to operate inside the human body, in national defense arms, or in space environments. With the development of semiconductor processes and materials technology, micro devices have become much more integrated, and it is expected that, based on new semiconductor technology, the conversion efficiency of betavoltaic batteries will be greatly increased. Furthermore, beta particles cannot penetrate the skin of the human body, so a betavoltaic battery is safer than a Li battery, which carries a risk of explosion. In other words, interest in radioisotope batteries has increased because they are applicable to artificial internal organ power sources requiring no recharge or replacement, micro sensors for arctic and other special environments, small military equipment, and the space industry. However, there are not enough data on the beta particle fluence from the radioisotope source used in a nuclear battery. The beta particle fluence directly influences the battery efficiency, and it is seriously affected by the radioisotope source thickness because of the self-absorption effect. Therefore, in this article, we present a basic design of a Ni-63 nuclear battery and simulation data of the beta particle fluence for various thicknesses of the radioisotope source and designs of the battery.
Wen, Xiulan; Xu, Youxiong; Li, Hongsheng; Wang, Fenglin; Sheng, Danghong
2012-09-01
Straightness error is an important parameter in measuring high-precision shafts. New generation geometrical product specification (GPS) requires that the measurement uncertainty characterizing the reliability of the results be given together with the measurement result. Nowadays most research on straightness focuses on error calculation, and only a few projects evaluate the measurement uncertainty based on "The Guide to the Expression of Uncertainty in Measurement (GUM)". In order to compute the spatial straightness error (SSE) accurately and rapidly and to overcome the limitations of the GUM, a quasi particle swarm optimization (QPSO) is proposed to solve the minimum zone SSE, and the Monte Carlo Method (MCM) is developed to estimate the measurement uncertainty. The mathematical model of the minimum zone SSE is formulated. In QPSO, quasi-random sequences are applied to the generation of the initial positions and velocities of the particles, and the velocities are modified by the constriction factor approach. The flow of measurement uncertainty evaluation based on the MCM is proposed, the heart of which is repeated sampling from the probability density function (PDF) of every input quantity and evaluation of the model in each case. The minimum zone SSE of a shaft measured on a Coordinate Measuring Machine (CMM) is calculated by QPSO, and the measurement uncertainty is evaluated by the MCM on the basis of an analysis of the uncertainty contributors. The results show that the uncertainty directly influences the product judgment result. Therefore it is scientific and reasonable to consider the influence of the uncertainty in judging whether parts are accepted or rejected, especially for those located in the uncertainty zone. The proposed method is especially suitable when the PDF of the measurand cannot adequately be approximated by a Gaussian distribution or a scaled and shifted t-distribution and the measurement model is non-linear.
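The MCM evaluation flow, repeated sampling from the PDF of every input quantity followed by evaluation of the model in each case, can be sketched generically. The measurement model and input distributions below are hypothetical illustrations (a length reading corrected by a Gaussian scale error and a uniformly distributed thermal term), not the CMM straightness model of the paper:

```python
import random

def mcm_uncertainty(model, samplers, m=100000, coverage=0.95, seed=0):
    """GUM Supplement 1 style Monte Carlo evaluation of uncertainty.

    Draws every input quantity from its PDF, evaluates the measurement
    model m times, and reports the mean, standard uncertainty, and
    the shortest-index coverage interval from the sorted results.
    """
    rng = random.Random(seed)
    ys = sorted(model(*[s(rng) for s in samplers]) for _ in range(m))
    mean = sum(ys) / m
    u = (sum((y - mean) ** 2 for y in ys) / (m - 1)) ** 0.5
    lo = ys[int(m * (1 - coverage) / 2)]
    hi = ys[int(m * (1 + coverage) / 2)]
    return mean, u, (lo, hi)

# Hypothetical measurement model: a 100 mm indicated length plus a
# Gaussian scale error and a linear thermal-expansion correction.
mean, u, (lo, hi) = mcm_uncertainty(
    model=lambda e, dt: 100.0 + e + 0.00115 * dt,   # mm
    samplers=[lambda r: r.gauss(0.0, 0.002),        # scale error, mm
              lambda r: r.uniform(-0.5, 0.5)],      # temperature offset, K
)
```

The advantage over the GUM's first-order propagation is exactly the one named in the abstract: nothing here assumes the output PDF is Gaussian or that the model is linear, since the interval is read directly from the empirical distribution.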
Li, Dong; Chen, Bin; Ran, Wei Yu; Wang, Guo Xiang; Wu, Wen Juan
2015-01-01
The voxel-based Monte Carlo method (VMC) is now a gold standard in the simulation of light propagation in turbid media. For complex tissue structures, however, the computational cost is higher when small voxels are used to improve the smoothness of tissue interfaces and a large number of photons are used to obtain accurate results. To reduce the computational cost, criteria were proposed to determine the voxel size and photon number in 3-dimensional VMC simulations with acceptable accuracy and computation time. The selection of the voxel size can be expressed as a function of tissue geometry and optical properties. The photon number should be at least 5 times the total voxel number. These criteria are further applied in developing a photon ray splitting scheme within a local grid refinement technique to reduce the computational cost for a nonuniform tissue structure with significantly varying optical properties. In the proposed technique, a nonuniform refined grid system is used, where fine grids are used for tissue with high absorption and complex geometry, and coarse grids are used for the other parts. The total photon number is selected based on the voxel size of the coarse grid. Furthermore, the photon-splitting scheme is developed to satisfy the statistical accuracy requirement in the fine grid area. The results show that the local grid refinement technique with the photon ray splitting scheme can accelerate the computation by 7.6 times (reducing time consumption from 17.5 to 2.3 h) in the simulation of laser light energy deposition in skin tissue containing port wine stain lesions.
Zhang, Junlong; Li, Yongping; Huang, Guohe; Chen, Xi; Bao, Anming
2016-07-01
Without a realistic assessment of parameter uncertainty, decision makers may encounter difficulties in accurately describing hydrologic processes and assessing relationships between model parameters and watershed characteristics. In this study, a Markov-Chain-Monte-Carlo-based multilevel-factorial-analysis (MCMC-MFA) method is developed, which can not only generate samples of parameters from a well constructed Markov chain and assess parameter uncertainties with straightforward Bayesian inference, but also investigate the individual and interactive effects of multiple parameters on model output through measuring the specific variations of hydrological responses. A case study is conducted for addressing parameter uncertainties in the Kaidu watershed of northwest China. Effects of multiple parameters and their interactions are quantitatively investigated using the MCMC-MFA with a three-level factorial experiment (81 runs in total). A variance-based sensitivity analysis method is used to validate the results of the parameters' effects. Results disclose that (i) the soil conservation service runoff curve number for moisture condition II (CN2) and the fraction of snow volume corresponding to 50% snow cover (SNO50COV) are the most significant factors for hydrological responses, implying that infiltration-excess overland flow and snow water equivalent represent important water input to the hydrological system of the Kaidu watershed; (ii) saturated hydraulic conductivity (SOL_K) and the soil evaporation compensation factor (ESCO) have obvious effects on hydrological responses, implying that the processes of percolation and evaporation impact the hydrological process in this watershed; (iii) the interactions of ESCO and SNO50COV as well as CN2 and SNO50COV have an obvious effect, implying that snow cover can impact the generation of runoff on the land surface and the extraction of soil evaporative demand in lower soil layers. These findings can help enhance the hydrological model
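The factorial-analysis half of MCMC-MFA, measuring individual and interactive effects from a multilevel factorial experiment, can be illustrated with a two-factor sketch using the standard ANOVA sum-of-squares decomposition. The toy response below is a hypothetical stand-in for the hydrological model:

```python
from itertools import product

def factorial_effects(model, levels_a, levels_b):
    """Main and interaction effects from a two-factor multilevel
    factorial experiment, via the ANOVA sum-of-squares decomposition."""
    runs = {(a, b): model(a, b) for a, b in product(levels_a, levels_b)}
    na, nb = len(levels_a), len(levels_b)
    grand = sum(runs.values()) / (na * nb)
    mean_a = {a: sum(runs[(a, b)] for b in levels_b) / nb for a in levels_a}
    mean_b = {b: sum(runs[(a, b)] for a in levels_a) / na for b in levels_b}
    ss_a = nb * sum((mean_a[a] - grand) ** 2 for a in levels_a)   # main effect of A
    ss_b = na * sum((mean_b[b] - grand) ** 2 for b in levels_b)   # main effect of B
    ss_ab = sum((runs[(a, b)] - mean_a[a] - mean_b[b] + grand) ** 2
                for a, b in runs)                                 # A x B interaction
    return ss_a, ss_b, ss_ab

# Toy response with an interaction term: y = 2*a + b + a*b,
# evaluated on a three-level full factorial (9 runs).
ss_a, ss_b, ss_ab = factorial_effects(lambda a, b: 2 * a + b + a * b,
                                      [-1, 0, 1], [-1, 0, 1])
```

A purely additive response yields a zero interaction sum of squares, so a nonzero `ss_ab` is direct evidence of the kind of parameter interaction the study reports (e.g. between ESCO and SNO50COV); the paper's three-parameter, three-level design extends this decomposition to 81 runs.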
Monte Carlo simulation of moderator and reflector in coal analyzer based on a D-T neutron generator.
Shan, Qing; Chu, Shengnan; Jia, Wenbao
2015-11-01
Coal is one of the most widely used fuels in the world. The use of coal not only produces carbon dioxide, but also contributes to environmental pollution by heavy metals. In a prompt gamma-ray neutron activation analysis (PGNAA)-based coal analyzer, the characteristic gamma rays of C and O are mainly induced by fast neutrons, whereas thermal neutrons can be used to induce the characteristic gamma rays of H, Si, and heavy metals. Therefore, an appropriate balance of thermal and fast neutrons improves the measurement accuracy for heavy metals while ensuring that the measurement accuracy for the main elements meets the requirements of the industry. Once the required yield of the deuterium-tritium (d-T) neutron generator is determined, appropriate thermal and fast neutron fluxes can be obtained by optimizing the neutron source term. In this article, the Monte Carlo N-Particle (MCNP) transport code and the Evaluated Nuclear Data File (ENDF) database are used to optimize the neutron source term in a PGNAA-based coal analyzer, including the material and shape of the moderator and neutron reflector. The optimization has two targets: (1) the ratio of thermal to fast neutrons is 1:1, and (2) the total neutron flux from the optimized neutron source in the sample increases by at least 100% compared with the initial one. The simulation results show that the total neutron flux in the sample increases by 102%, 102%, 85%, 72%, and 62% with Pb, Bi, Nb, W, and Be reflectors, respectively. Both targets are best met when the moderator is a 3-cm-thick lead layer coupled with a 3-cm-thick high-density polyethylene (HDPE) layer, and the neutron reflector is a 27-cm-thick hemispherical lead layer.
Tian, Zhen; Li, Yongbao; Shi, Feng; Jiang, Steve B; Jia, Xun
2015-01-01
We recently built an analytical source model for a GPU-based MC dose engine. In this paper, we present a sampling strategy to efficiently utilize this source model in GPU-based dose calculation. Our source model was based on the concept of a phase-space ring (PSR). This ring structure makes it effective to account for beam rotational symmetry, but it is not suitable for dose calculations with rectangular jaw settings. Hence, we first convert the PSR source model to its phase-space-let (PSL) representation. Then, in dose calculation, the different types of sub-sources are sampled separately. Source sampling and particle transport are iterated so that the particles being sampled and transported simultaneously are of the same type and close in energy, which alleviates GPU thread divergence. We also present an automatic commissioning approach to adjust the model for a good representation of a clinical linear accelerator. Weighting factors were introduced to adjust the relative weights of PSRs, determined by solving a quadratic minimization ...
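The divergence-reduction idea described above, transporting batches of particles that share a type and a narrow energy range, can be sketched on the host side as a grouping step (a simplified illustration; the particle record layout and bin width are hypothetical, not the gPMC data structures):

```python
import random
from collections import defaultdict

# Hypothetical particle record: (kind, energy in MeV); not the gPMC format.
def sample_particles(n):
    kinds = ["photon", "electron"]
    return [(random.choice(kinds), random.uniform(0.1, 6.0)) for _ in range(n)]

def batch_by_type_and_energy(particles, bin_width=1.0):
    """Group particles so that each batch holds a single particle type and a
    narrow energy band, mimicking the sort that keeps GPU threads of one warp
    on the same code path (less thread divergence)."""
    batches = defaultdict(list)
    for kind, energy in particles:
        batches[(kind, int(energy / bin_width))].append((kind, energy))
    return list(batches.values())

particles = sample_particles(1000)
batches = batch_by_type_and_energy(particles)
```

On an actual GPU each batch would be launched so that neighbouring threads execute the same transport kernel on similar energies.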
Shear-affected depletion interaction
July, C.; Kleshchanok, D.; Lang, P.R.
2012-01-01
We investigate the influence of flow fields on the strength of the depletion interaction caused by disc-shaped depletants. At low mass concentration of discs, it is possible to continuously decrease the depth of the depletion potential by increasing the applied shear rate until the depletion force i
Validation of a GPU-based Monte Carlo code (gPMC) for proton radiation therapy: clinical cases study
Giantsoudi, Drosoula; Schuemann, Jan; Jia, Xun; Dowdell, Stephen; Jiang, Steve; Paganetti, Harald
2015-03-01
Monte Carlo (MC) methods are recognized as the gold standard for dose calculation; however, they have not yet replaced analytical methods because of their lengthy calculation times. GPU-based applications allow MC dose calculations to be performed on time scales comparable to conventional analytical algorithms. This study focuses on validating our GPU-based MC code for proton dose calculation (gPMC) against an experimentally validated multi-purpose MC code (TOPAS) and comparing their performance for clinical patient cases. Clinical cases from five treatment sites were selected, covering the full range from very homogeneous patient geometries (liver) to patients with high geometrical complexity (air cavities and density heterogeneities in head-and-neck and lung patients), and from short beam range (breast) to large beam range (prostate). Both gPMC and TOPAS were used to calculate 3D dose distributions for all patients. Comparisons were performed based on target coverage indices (mean dose, V95, D98, D50, D02) and gamma index distributions. Dosimetric indices differed by less than 2% between TOPAS and gPMC dose distributions for most cases. Gamma index analysis with a 1%/1 mm criterion resulted in a passing rate of more than 94% of all patient voxels receiving more than 10% of the mean target dose, for all patients except the prostate cases. Although clinically insignificant, gPMC systematically underestimated the target dose for prostate cases by 1-2% compared to TOPAS. Correspondingly, the gamma index analysis with the 1%/1 mm criterion failed for most beams at this site, while for the 2%/1 mm criterion passing rates of more than 94.6% of all patient voxels were observed. For the same initial number of simulated particles, the calculation time for a single beam of a typical head-and-neck patient plan decreased from 4 CPU hours per million particles (2.8-2.9 GHz Intel X5600) for TOPAS to 2.4 s per million particles (NVIDIA TESLA C2075) for gPMC. Excellent agreement was
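The gamma-index comparison used above can be illustrated with a minimal 1-D global gamma computation (a didactic sketch, not the clinical 3D implementation; the profiles and tolerances are invented):

```python
import math

def gamma_index_1d(ref, measured, spacing, dose_frac, dist_tol_mm):
    """Simplified 1-D global gamma analysis: for each reference point, the
    gamma value is the minimum over measured points of
    sqrt((dose diff / dose tolerance)^2 + (distance / distance tolerance)^2),
    with the dose tolerance taken as a fraction of the reference maximum."""
    dose_tol = dose_frac * max(ref)
    gammas = []
    for i, d_ref in enumerate(ref):
        best = float("inf")
        for j, d_meas in enumerate(measured):
            dd = (d_meas - d_ref) / dose_tol
            dx = (j - i) * spacing / dist_tol_mm
            best = min(best, math.hypot(dd, dx))
        gammas.append(best)
    return gammas

# Made-up 5-point profiles, 1 mm spacing, 1%/1 mm criterion
ref = [10.0, 50.0, 100.0, 50.0, 10.0]
measured = [10.0, 50.5, 100.0, 50.0, 10.0]
gammas = gamma_index_1d(ref, measured, spacing=1.0, dose_frac=0.01, dist_tol_mm=1.0)
passing_rate = sum(g <= 1.0 for g in gammas) / len(gammas)
```

A point passes the criterion when its gamma value is at most 1; the passing rate over voxels above a dose threshold is the figure reported in studies like the one above.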
Brake, te B.; Hanssen, R.F.; Ploeg, van der M.J.; Rooij, de G.H.
2013-01-01
Satellite-based radar interferometry is a technique capable of measuring small surface elevation changes at large scales and with a high resolution. In vadose zone hydrology, it has been recognized for a long time that surface elevation changes due to swell and shrinkage of clayey soils can serve as
Barbeiro, A. R.; Ureba, A.; Baeza, J. A.; Linares, R.; Perucha, M.; Jiménez-Ortega, E.; Velázquez, S.; Mateos, J. C.
2016-01-01
A model based on a specific phantom, called QuAArC, has been designed for the evaluation of planning and verification systems of complex radiotherapy treatments, such as volumetric modulated arc therapy (VMAT). This model uses the high accuracy provided by the Monte Carlo (MC) simulation of log files and allows the experimental feedback from the high spatial resolution of films hosted in QuAArC. This cylindrical phantom was specifically designed to host films rolled at different radial distances able to take into account the entrance fluence and the 3D dose distribution. Ionization chamber measurements are also included in the feedback process for absolute dose considerations. In this way, automated MC simulation of treatment log files is implemented to calculate the actual delivery geometries, while the monitor units are experimentally adjusted to reconstruct the dose-volume histogram (DVH) on the patient CT. Prostate and head and neck clinical cases, previously planned with Monaco and Pinnacle treatment planning systems and verified with two different commercial systems (Delta4 and COMPASS), were selected in order to test operational feasibility of the proposed model. The proper operation of the feedback procedure was proved through the achieved high agreement between reconstructed dose distributions and the film measurements (global gamma passing rates > 90% for the 2%/2 mm criteria). The necessary discretization level of the log file for dose calculation and the potential mismatching between calculated control points and detection grid in the verification process were discussed. Besides the effect of dose calculation accuracy of the analytic algorithm implemented in treatment planning systems for a dynamic technique, it was discussed the importance of the detection density level and its location in VMAT specific phantom to obtain a more reliable DVH in the patient CT. The proposed model also showed enough robustness and efficiency to be considered as a pre
Muller, Anouk E; Schmitt-Hoffmann, Anne H; Punt, Nieko; Mouton, Johan W
2013-05-01
Monte Carlo simulation (MCS) of antimicrobial dosage regimens during drug development to derive predicted target attainment values is frequently used to choose the optimal dose for the treatment of patients in phase 2 and 3 studies. A criticism is that pharmacokinetic (PK) parameter estimates and variability in healthy volunteers are smaller than those in patients. In this study, the initial estimates of exposure from MCS were compared with actual exposure data in patients treated with ceftobiprole in a phase 3 nosocomial-pneumonia (NP) study (NTC00210964). Results of MCS using population PK data from ceftobiprole derived from 12 healthy volunteers were used (J. W. Mouton, A. Schmitt-Hoffmann, S. Shapiro, N. Nashed, N. C. Punt, Antimicrob. Agents Chemother. 48:1713-1718, 2004). Actual individual exposures in patients were derived after building a population pharmacokinetic model and were used to calculate the individual exposure to ceftobiprole (the percentage of time the unbound concentration exceeds the MIC [percent fT > MIC]) for a range of MIC values. For the ranges of percent fT > MIC used to determine the dosage schedule in the phase 3 NP study, the MCS using data from a single phase 1 study in healthy volunteers accurately predicted the actual clinical exposure to ceftobiprole. The difference at 50% fT > MIC at an MIC of 4 mg/liter was 3.5% for PK-sampled patients. For higher values of percent fT > MIC and MICs, the MCS slightly underestimated the target attainment, probably due to extreme values in the PK profile distribution used in the simulations. The probability of target attainment based on MCS in healthy volunteers adequately predicted the actual exposures in a patient population, including severely ill patients.
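The target-attainment calculation behind such an MCS can be sketched with a one-compartment bolus model and log-normal between-subject variability (all parameter values are illustrative placeholders, not the published ceftobiprole population PK model):

```python
import math
import random

def simulate_ft_above_mic(dose_mg, tau_h, mic, f_unbound=0.84,
                          n_subjects=10000, seed=1):
    """Fraction of simulated subjects reaching a 50% fT>MIC target under a
    one-compartment IV bolus model with log-normal between-subject
    variability. Clearance/volume distributions are invented for illustration."""
    random.seed(seed)
    attain = 0
    for _ in range(n_subjects):
        cl = random.lognormvariate(math.log(5.0), 0.25)   # clearance, L/h
        v = random.lognormvariate(math.log(18.0), 0.20)   # volume, L
        k = cl / v                                        # elimination rate, 1/h
        c0 = f_unbound * dose_mg / v                      # unbound peak, mg/L
        # Time until the unbound concentration decays to the MIC
        t_above = math.log(c0 / mic) / k if c0 > mic else 0.0
        ft = min(t_above, tau_h) / tau_h
        if ft >= 0.5:                                     # 50% fT>MIC target
            attain += 1
    return attain / n_subjects

pta = simulate_ft_above_mic(dose_mg=500, tau_h=8, mic=4.0)
```

The probability of target attainment (`pta`) would then be tabulated over a range of MIC values, as in the study above.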
Phase space modulation method for EPID-based Monte Carlo dosimetry of IMRT and RapidArc plans
Energy Technology Data Exchange (ETDEWEB)
Berman, Avery; Townson, Reid; Bush, Karl; Zavgorodni, Sergei, E-mail: szavgorodni@bccancer.bc.c
2010-11-01
Quality assurance for IMRT and VMAT requires 3D evaluation of the dose distributions from the treatment planning system against the distributions reconstructed from signals acquired during plan delivery. This study presents the results of dose reconstruction based on a novel method of Monte Carlo (MC) phase space modulation. Typically, in MC dose calculations the linear accelerator (linac) is modelled for each field in the plan, and a phase space file (PSF) containing all relevant particle information is written for each field. Particles from the PSFs are then used in the dose calculation. This study investigates a method of omitting the modelling of the linac in cases where the treatment has been measured by an electronic portal imaging device. In this method each portal image is deconvolved using an empirically fitted scatter kernel to obtain the primary photon fluence. The Phase Space Modulation (PSM) method consists of simulating the linac just once to create a large PSF for an open field and then modulating it using the delivered primary particle fluence. Reconstructed dose distributions in phantoms were produced using MC and the modulated PSFs. The kernel derived for this method accurately reproduced the dose distributions for 3x3, 10x10, and 15x15 cm² field sizes (the mean relative dose difference along the beam central axis is under 1%). The method has been applied to IMRT pre-treatment verification of 10 patients (including one RapidArc™ case); the mean dose in the structures of interest agreed with that calculated by MC directly within 1%, and 95% of the voxels passed the 2%/2 mm criteria.
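The deconvolution step, recovering primary fluence by dividing out a scatter kernel in the frequency domain, can be illustrated in 1-D (a toy circular-convolution example with an invented kernel, not the empirically fitted kernel of the study):

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * i * k / n) for k in range(n))
            for i in range(n)]

def idft(spec):
    n = len(spec)
    return [sum(spec[i] * cmath.exp(2j * cmath.pi * i * k / n)
                for i in range(n)).real / n
            for k in range(n)]

def deconvolve_circular(image, kernel):
    """Recover primary fluence by spectral division (circular convolution model).
    The toy kernel below is chosen so its frequency response has no zeros."""
    img_spec, ker_spec = dft(image), dft(kernel)
    return idft([a / b for a, b in zip(img_spec, ker_spec)])

# Invented 1-D primary fluence and scatter kernel (centred at index 0, sums to 1)
primary = [0.0, 0.0, 1.0, 4.0, 1.0, 0.0, 0.0, 0.0]
kernel = [0.6, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.2]
# Forward-blur the primary fluence by multiplying spectra, then deconvolve
blurred = idft([a * b for a, b in zip(dft(primary), dft(kernel))])
recovered = deconvolve_circular(blurred, kernel)
```

Real portal-image deconvolution is 2-D and must regularize near-zero kernel frequencies; this sketch only shows the spectral-division principle.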
Ozone depletion, paradigms, and politics
Energy Technology Data Exchange (ETDEWEB)
Iman, R.L.
1993-10-01
The destruction of the Earth's protective ozone layer is a prime environmental concern. Industry has responded to this environmental problem by: implementing conservation techniques to reduce the emission of ozone-depleting chemicals (ODCs); using alternative cleaning solvents that have lower ozone depletion potentials (ODPs); developing new, non-ozone-depleting solvents, such as terpenes; and developing low-residue soldering processes. This paper presents an overview of a joint testing program at Sandia and Motorola to evaluate a low-residue (no-clean) soldering process for printed wiring boards (PWBs). Such processes are in widespread use in commercial applications because they eliminate the cleaning operation. The goal of this testing program was to develop a database that could be used to support changes in the mil-specs. In addition, a joint task force involving industry and the military has been formed to conduct a follow-up evaluation of low-residue processes that encompasses the concerns of the tri-services. The goal of the task force is to gain final approval of the low-residue technology for use in military applications.
A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC)
Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B.; Jia, Xun
2015-09-01
Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, much effort has been made to realize fast MC dose calculation on graphics processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVIDIA's CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations into clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon-electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783-97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence can be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in the EGSnrc MC package and PENELOPE's random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogeneous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48-0.53% for the electron beam cases and 0.15-0.17% for the photon beam cases. In terms of efficiency, goMC was ~4-16% slower than gDPM when running on the same NVIDIA TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was validated by
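The reported comparison metric, the average dose difference restricted to voxels above 10% of the maximum dose, is straightforward to compute; a sketch with made-up dose arrays:

```python
def mean_dose_difference(dose_a, dose_b, threshold_frac=0.10):
    """Mean dose difference, in percent of the maximum dose, over the voxels
    that receive at least `threshold_frac` of the maximum dose of `dose_a`."""
    d_max = max(dose_a)
    region = [(a, b) for a, b in zip(dose_a, dose_b) if a >= threshold_frac * d_max]
    return 100.0 * sum(abs(a - b) for a, b in region) / (d_max * len(region))

# Invented 1-D dose arrays standing in for two hypothetical engines
diff_percent = mean_dose_difference([100.0, 50.0, 20.0, 5.0, 1.0],
                                    [100.4, 49.8, 20.1, 5.0, 1.0])
```

Restricting to the high-dose region avoids letting near-zero voxels, where relative differences are noisy and clinically irrelevant, dominate the average.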
Semenenko, V A; Stewart, R D
2005-08-01
Clustered damage sites other than double-strand breaks (DSBs) have the potential to contribute to deleterious effects of ionizing radiation, such as cell killing and mutagenesis. In the companion article (Semenenko et al., Radiat. Res. 164, 180-193, 2005), a general Monte Carlo framework to simulate key steps in the base and nucleotide excision repair of DNA damage other than DSBs is proposed. In this article, model predictions are compared to measured data for selected low- and high-LET radiations. The Monte Carlo model reproduces experimental observations for the formation of enzymatic DSBs in Escherichia coli and cells of two Chinese hamster cell lines (V79 and xrs5). Comparisons of model predictions with experimental values for low-LET radiation suggest that an inhibition of DNA backbone incision at the sites of base damage by opposing strand breaks is active over longer distances between the damaged base and the strand break in hamster cells (8 bp) compared to E. coli (3 bp). Model estimates for the induction of point mutations in the human hypoxanthine guanine phosphoribosyl transferase (HPRT) gene by ionizing radiation are of the same order of magnitude as the measured mutation frequencies. Trends in the mutation frequency for low- and high-LET radiation are predicted correctly by the model. The agreement between selected experimental data sets and simulation results provides some confidence in postulated mechanisms for excision repair of DNA damage other than DSBs and suggests that the proposed Monte Carlo scheme is useful for predicting repair outcomes.
Institute of Scientific and Technical Information of China (English)
Jiang Wei; Xiang Haige
2004-01-01
This paper addresses the issues of channel estimation in a Multiple-Input/Multiple-Output (MIMO) system. Markov Chain Monte Carlo (MCMC) method is employed to jointly estimate the Channel State Information (CSI) and the transmitted signals. The deduced algorithms can work well under circumstances of low Signal-to-Noise Ratio (SNR). Simulation results are presented to demonstrate their effectiveness.
Muller, A.E.; Schmitt-Hoffmann, A.H.; Punt, N.; Mouton, J.W.
2013-01-01
Monte Carlo simulation (MCS) of antimicrobial dosage regimens during drug development to derive predicted target attainment values is frequently used to choose the optimal dose for the treatment of patients in phase 2 and 3 studies. A criticism is that pharmacokinetic (PK) parameter estimates and va
Ren, Lixia; He, Li; Lu, Hongwei; Chen, Yizhong
2016-08-01
A new Monte Carlo-based interval transformation analysis (MCITA) is used in this study for multi-criteria decision analysis (MCDA) of naphthalene-contaminated groundwater management strategies. The analysis can be conducted when input data such as total cost, contaminant concentration and health risk are represented as intervals. Compared to traditional MCDA methods, MCITA-MCDA has the advantages of (1) dealing with the inexactness of input data represented as intervals, (2) mitigating computational time through the introduction of a Monte Carlo sampling method, and (3) identifying the most desirable management strategies under data uncertainty. A real-world case study is employed to demonstrate the performance of this method. A set of inexact management alternatives is considered for each duration on the basis of four criteria. Results indicated that the most desirable management strategy was action 15 for the 5-year, action 8 for the 10-year, action 12 for the 15-year, and action 2 for the 20-year management horizon.
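The interval-sampling idea in such an analysis can be sketched as follows: draw crisp values uniformly from each criterion interval, score the alternatives, and tally how often each ranks first (a minimal illustration with invented alternatives and weights, not the management actions of the case study):

```python
import random

def interval_mcda(alternatives, weights, n_draws=5000, seed=7):
    """Rank alternatives whose criterion scores are intervals (lo, hi):
    draw crisp values uniformly from each interval, score with a weighted
    sum, and report how often each alternative comes out best."""
    random.seed(seed)
    wins = {name: 0 for name in alternatives}
    for _ in range(n_draws):
        scores = {
            name: sum(w * random.uniform(lo, hi)
                      for w, (lo, hi) in zip(weights, crits))
            for name, crits in alternatives.items()
        }
        wins[max(scores, key=scores.get)] += 1
    return {name: wins[name] / n_draws for name in wins}

# Illustrative two-criterion problem (higher = better for both criteria)
probs = interval_mcda(
    {"action A": [(0.6, 0.9), (0.5, 0.7)],
     "action B": [(0.2, 0.4), (0.3, 0.6)]},
    weights=[0.5, 0.5])
```

With overlapping intervals the win frequencies quantify how robust a ranking is to the data uncertainty; here action A dominates, so it wins every draw.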
Obot, I. B.; Kaya, Savaş; Kaya, Cemal; Tüzün, Burak
2016-06-01
DFT and Monte Carlo simulation were performed on three Schiff bases namely, 4-(4-bromophenyl)-N‧-(4-methoxybenzylidene)thiazole-2-carbohydrazide (BMTC), 4-(4-bromophenyl)-N‧-(2,4-dimethoxybenzylidene)thiazole-2-carbohydrazide (BDTC), 4-(4-bromophenyl)-N‧-(4-hydroxybenzylidene)thiazole-2-carbohydrazide (BHTC) recently studied as corrosion inhibitor for steel in acid medium. Electronic parameters relevant to their inhibition activity such as EHOMO, ELUMO, Energy gap (ΔE), hardness (η), softness (σ), the absolute electronegativity (χ), proton affinity (PA) and nucleophilicity (ω) etc., were computed and discussed. Monte Carlo simulations were applied to search for the most stable configuration and adsorption energies for the interaction of the inhibitors with Fe (110) surface. The theoretical data obtained are in most cases in agreement with experimental results.
Casas, Ricard; Cardiel-Sas, Laia; Castander, Francisco J.; Jiménez, Jorge; de Vicente, Juan
2014-08-01
The focal plane of the PAU camera is composed of eighteen 2K x 4K CCDs. These devices, plus four spares, were provided by the Japanese company Hamamatsu Photonics K.K. with type no. S10892-04(X). These detectors are 200 μm thick, fully depleted and back-illuminated, with an n-type silicon base. They have been built with a specific coating to be sensitive in the range from 300 to 1,100 nm. Their square pixel size is 15 μm. The read-out system consists of a Monsoon controller (NOAO) and the panVIEW software package. The default CCD read-out speed is 133 kpixel/s. This is the value used in the calibration process. Before installing these devices in the camera focal plane, they were characterized using the facilities of the ICE (CSIC-IEEC) and IFAE on the UAB Campus in Bellaterra (Barcelona, Catalonia, Spain). The basic tests performed for all CCDs were to obtain the photon transfer curve (PTC), the charge transfer efficiency (CTE) using X-rays and the EPER method, linearity, read-out noise, dark current, persistence, cosmetics and quantum efficiency. The X-ray images were also used for the analysis of the charge diffusion for different substrate voltages (VSUB). Regarding the cosmetics, and in addition to white and dark pixels, some patterns were also found. The first one, which appears in all devices, is the presence of half circles at the external edges. The origin of this pattern may be related to the assembly process. A second one appears in the dark images and shows bright arcs connecting corners along the vertical axis of the CCD. This feature appears in all CCDs in exactly the same position, so our guess is that the pattern is due to electrical fields. Finally, and in just two devices, there is a spot with wavelength dependence whose origin could be the result of a defective coating process.
Seipt, D; Marklund, M; Bulanov, S S
2016-01-01
The interaction of charged particles and photons with intense electromagnetic fields gives rise to multi-photon Compton and Breit-Wheeler processes. These are usually described in the framework of the external field approximation, where the electromagnetic field is assumed to have infinite energy. However, the multi-photon nature of these processes implies the absorption of a significant number of photons, which scales as the external field amplitude cubed. As a result, the interaction of a highly charged electron bunch with an intense laser pulse can lead to significant depletion of the laser pulse energy, thus rendering the external field approximation invalid. We provide relevant estimates for this depletion and find it to become important in the interaction between fields of amplitude $a_0 \sim 10^3$ and electron bunches with charges of the order of nC.
Monte Carlo techniques in radiation therapy
Verhaegen, Frank
2013-01-01
Modern cancer treatment relies on Monte Carlo simulations to help radiotherapists and clinical physicists better understand and compute radiation dose from imaging devices as well as exploit four-dimensional imaging data. With Monte Carlo-based treatment planning tools now available from commercial vendors, a complete transition to Monte Carlo-based dose calculation methods in radiotherapy could likely take place in the next decade. Monte Carlo Techniques in Radiation Therapy explores the use of Monte Carlo methods for modeling various features of internal and external radiation sources, including light ion beams. The book-the first of its kind-addresses applications of the Monte Carlo particle transport simulation technique in radiation therapy, mainly focusing on external beam radiotherapy and brachytherapy. It presents the mathematical and technical aspects of the methods in particle transport simulations. The book also discusses the modeling of medical linacs and other irradiation devices; issues specific...
Institute of Scientific and Technical Information of China (English)
ZHAO Hong-bin; KONG Xiao-xiao; LI Quan-feng; LIN Xiao-qi; BAO Shang-lian
2009-01-01
Objective: In this study, we try to establish an initial electron beam model by combining the Monte Carlo simulation method with a particle dynamics calculation (TRSV) for the single 6 MV X-ray accelerating waveguide of the BJ-6 medical linac. Methods and Materials: 1. We adopted the treatment head configuration of the BJ-6 medical linac made by the Beijing Medical Equipment Institute (BMEI) as the radiation system for this study. 2. We used the particle dynamics calculation code TRSV to derive the initial electron beam parameters: the energy spectrum, the spatial intensity distribution, and the beam incidence angle. 3. We analyzed the 6 MV X-ray beam characteristics PDDc and OARc in a water phantom using Monte Carlo simulation (BEAMnrc, DOSXYZnrc) for the initial electron beam parameters determined by TRSV, compared them with the measured results PDDm and OARm in a real water phantom, and then used the deviations between calculated and measured results to adjust the initial electron beam model iteratively until the deviations were less than 2%. Results: The deviations between the Monte Carlo simulation results of percentage depth doses (PDDc) and off-axis ratios (OARc) and the measured results (PDDm, OARm) in a water phantom were within 2%. Conclusion: When performing a Monte Carlo simulation to determine the parameters of an initial electron beam for a particular medical linac such as the BJ-6, adjusting some parameters based on the particle dynamics calculation code gives more reasonable and more acceptable results.
Iba, Yukito
2000-01-01
"Extended Ensemble Monte Carlo" is a generic term that indicates a set of algorithms which are now popular in a variety of fields in physics and statistical information processing. Exchange Monte Carlo (Metropolis-Coupled Chain, Parallel Tempering), Simulated Tempering (Expanded Ensemble Monte Carlo), and Multicanonical Monte Carlo (Adaptive Umbrella Sampling) are typical members of this family. Here we give a cross-disciplinary survey of these algorithms with special emphasis on the great f...
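Exchange Monte Carlo, the first family member listed above, can be condensed to a few lines: independent Metropolis walkers at several temperatures plus occasional replica swaps accepted with a detailed-balance rule (a toy 1-D bimodal example, not drawn from the survey itself):

```python
import math
import random

def log_target(x):
    """Bimodal toy target: mixture of two unit Gaussians at +/-4
    (log density, computed stably via the max trick)."""
    a = -0.5 * (x - 4.0) ** 2
    b = -0.5 * (x + 4.0) ** 2
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def parallel_tempering(log_pi, temps, n_sweeps, step=1.0, seed=3):
    """Exchange Monte Carlo: one Metropolis walker per temperature, sampling
    pi(x)^(1/T), plus neighbour swaps accepted with the detailed-balance rule."""
    random.seed(seed)
    states = [0.0] * len(temps)
    cold_chain = []
    for _ in range(n_sweeps):
        for i, t in enumerate(temps):          # within-temperature updates
            cand = states[i] + random.gauss(0.0, step)
            if math.log(random.random()) < (log_pi(cand) - log_pi(states[i])) / t:
                states[i] = cand
        i = random.randrange(len(temps) - 1)   # propose one neighbour swap
        delta = (1.0 / temps[i] - 1.0 / temps[i + 1]) * \
                (log_pi(states[i + 1]) - log_pi(states[i]))
        if math.log(random.random()) < delta:
            states[i], states[i + 1] = states[i + 1], states[i]
        cold_chain.append(states[0])
    return cold_chain

chain = parallel_tempering(log_target, temps=[1.0, 2.0, 4.0, 8.0], n_sweeps=20000)
```

A single Metropolis chain at T=1 with the same step size would almost never cross between the modes at ±4; the hot replicas tunnel freely and the swaps carry that mobility down to the cold chain.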
Asadi, Somayeh; Masoudi, S Farhad; Rahmani, Faezeh
2014-01-01
Materials of high atomic number, such as gold, provide a high probability of photon interaction through the photoelectric effect during radiation therapy. In cancer therapy, the object of brachytherapy, as a kind of radiotherapy, is to deliver an adequate radiation dose to the tumor while sparing surrounding healthy tissue. Several studies have demonstrated that the preferential accumulation of gold nanoparticles (GNPs) within the tumor can enhance the dose absorbed by the tumor without increasing the radiation dose delivered externally. Accordingly, the required irradiation time decreases, since the adequate radiation dose for the tumor is reached sooner with this method. The dose delivered to healthy tissue is reduced when the irradiation time is decreased. Here, GNP effects on choroidal melanoma dosimetry are discussed in a Monte Carlo study. Monte Carlo ophthalmic brachytherapy dosimetry is usually studied by simulation of a water phantom. Considering the composition and density of eye material instead of water in thes...
Energy Technology Data Exchange (ETDEWEB)
Wuerl, Matthias
2016-08-01
Matthias Wuerl presents two essential steps to implement offline PET monitoring of proton dose delivery at a clinical facility, namely the setting up of an accurate Monte Carlo model of the clinical beamline and the experimental validation of positron emitter production cross-sections. In the first part, the field size dependence of the dose output is described for scanned proton beams. Both the Monte Carlo and an analytical computational beam model were able to accurately predict target dose, while the latter tends to overestimate dose in normal tissue. In the second part, the author presents PET measurements of different phantom materials, which were activated by the proton beam. The results indicate that for an irradiation with a high number of protons for the sake of good statistics, dead time losses of the PET scanner may become important and lead to an underestimation of positron-emitter production yields.
Ozone-depleting Substances (ODS)
U.S. Environmental Protection Agency — This site includes all of the ozone-depleting substances (ODS) recognized by the Montreal Protocol. The data include ozone depletion potentials (ODP), global warming...
DEFF Research Database (Denmark)
Strunk, Astrid; Knudsen, Mads Faurschou; Larsen, Nicolaj Krog;
We investigate the landscape history in eastern and western Greenland by applying a novel Markov Chain Monte Carlo (MCMC) inversion approach to the existing 10Be-26Al data from these regions. The new MCMC approach allows us to constrain the most likely landscape history based on comparisons between simulated ... into account global changes in climate. The other free parameters include the glacial and interglacial erosion rates as well as the timing of the Holocene deglaciation. The model essentially simulates numerous different landscape scenarios based on these four parameters and zooms in on the most plausible ...
Energy Technology Data Exchange (ETDEWEB)
Zhuang Guilin, E-mail: glzhuang@zjut.edu.cn [Institute of Industrial Catalysis, College of Chemical Engineering and Materials Science, Zhejiang University of Technology, Hangzhou 310032 (China); Chen Wulin [Institute of Industrial Catalysis, College of Chemical Engineering and Materials Science, Zhejiang University of Technology, Hangzhou 310032 (China); Zheng Jun [Center of Modern Experimental Technology, Anhui University, Hefei 230039 (China); Yu Huiyou [Institute of Industrial Catalysis, College of Chemical Engineering and Materials Science, Zhejiang University of Technology, Hangzhou 310032 (China); Wang Jianguo, E-mail: jgw@zjut.edu.cn [Institute of Industrial Catalysis, College of Chemical Engineering and Materials Science, Zhejiang University of Technology, Hangzhou 310032 (China)
2012-08-15
A series of lanthanide coordination polymers have been obtained through the hydrothermal reaction of N-(sulfoethyl)iminodiacetic acid (H₃SIDA) and Ln(NO₃)₃ (Ln=La, 1; Pr, 2; Nd, 3; Gd, 4). Crystal structure analysis shows that the lanthanide ions affect the coordination number, bond length and dimensionality of compounds 1-4, revealing that their structural diversity can be attributed to the effect of lanthanide contraction. Furthermore, the combination of magnetic measurements with quantum Monte Carlo (QMC) studies shows that the coupling parameters between two adjacent Gd³⁺ ions for anti-anti and syn-anti carboxylate bridges are -1.0 × 10⁻³ and -5.0 × 10⁻³ cm⁻¹, respectively, revealing weak antiferromagnetic interaction in 4. - Graphical abstract: Four lanthanide coordination polymers with N-(sulfoethyl)iminodiacetic acid were obtained under hydrothermal conditions and reveal weak antiferromagnetic coupling between two Gd³⁺ ions by quantum Monte Carlo studies. Highlights: ► Four lanthanide coordination polymers of the H₃SIDA ligand were obtained. ► Lanthanide ions play an important role in their structural diversity. ► Magnetic measurements show that compound 4 features antiferromagnetic properties. ► Quantum Monte Carlo studies reveal the coupling parameters of two Gd³⁺ ions.
García, Marcos Fernández; Echeverría, Richard Jaramillo; Moll, Michael; Santos, Raúl Montero; Moya, David; Pinto, Rogelio Palomo; Vila, Iván
2016-01-01
For the first time, the deep n-well (DNW) depletion space of a High Voltage CMOS sensor has been characterized using a Transient Current Technique based on the simultaneous absorption of two photons. This novel approach has made it possible to resolve the DNW implant boundaries and therefore to accurately determine the real depleted volume and the effective doping concentration of the substrate. The unprecedented spatial resolution of this new method comes from the fact that measurable free-carrier generation in two-photon mode only occurs in a micrometric-scale voxel around the focus of the beam. Real three-dimensional spatial resolution is achieved by scanning the beam focus within the sample.
Borrmann, Robin
2010-01-01
This article examines whether the use of Depleted Uranium (DU) munitions can be considered illegal under current public international law. The analysis covers the law of arms control and focuses in particular on international humanitarian law. The article argues that DU ammunition cannot be addressed adequately under existing treaty-based weapon bans, such as the Chemical Weapons Convention, due to the fact that DU does not meet the criteria required to trigger the applicability of those treaties. Furthermore, it is argued that continuing uncertainties regarding the effects of DU munitions impede a reliable review of the legality of their use under various principles of international law, including the prohibition on employing indiscriminate weapons; the prohibition on weapons that are intended, or may be expected, to cause widespread, long-term and severe damage to the natural environment; and the prohibition on causing unnecessary suffering or superfluous injury. All of these principles require complete knowledge of the effects of the weapon in question. Nevertheless, the author argues that the same uncertainty places restrictions on the use of DU under the precautionary principle. The paper concludes with an examination of whether or not there is a need for--and if so whether there is a possibility of achieving--a Convention that comprehensively outlaws the use, transfer and stockpiling of DU weapons, as proposed by some non-governmental organisations (NGOs).
Jalayer, Fatemeh; Ebrahimian, Hossein
2014-05-01
Introduction The first few days elapsed after the occurrence of a strong earthquake and in the presence of an ongoing aftershock sequence are quite critical for emergency decision-making purposes. Epidemic Type Aftershock Sequence (ETAS) models are used frequently for forecasting the spatio-temporal evolution of seismicity in the short-term (Ogata, 1988). The ETAS models are epidemic stochastic point process models in which every earthquake is a potential triggering event for subsequent earthquakes. The ETAS model parameters are usually calibrated a priori and based on a set of events that do not belong to the on-going seismic sequence (Marzocchi and Lombardi 2009). However, adaptive model parameter estimation, based on the events in the on-going sequence, may have several advantages such as, tuning the model to the specific sequence characteristics, and capturing possible variations in time of the model parameters. Simulation-based methods can be employed in order to provide a robust estimate for the spatio-temporal seismicity forecasts in a prescribed forecasting time interval (i.e., a day) within a post-main shock environment. This robust estimate takes into account the uncertainty in the model parameters expressed as the posterior joint probability distribution for the model parameters conditioned on the events that have already occurred (i.e., before the beginning of the forecasting interval) in the on-going seismic sequence. The Markov Chain Monte Carlo simulation scheme is used herein in order to sample directly from the posterior probability distribution for ETAS model parameters. Moreover, the sequence of events that is going to occur during the forecasting interval (and hence affecting the seismicity in an epidemic type model like ETAS) is also generated through a stochastic procedure. The procedure leads to two spatio-temporal outcomes: (1) the probability distribution for the forecasted number of events, and (2) the uncertainty in estimating the
Directory of Open Access Journals (Sweden)
J. D. Rösevall
2007-01-01
Full Text Available The objective of this study is to demonstrate how polar ozone depletion can be mapped and quantified by assimilating ozone data from satellites into the wind-driven transport model DIAMOND (Dynamical Isentropic Assimilation Model for OdiN Data). By assimilating a large set of satellite data into a transport model, ozone fields can be built up that are less noisy than the individual satellite ozone profiles. The transported fields can subsequently be compared to later sets of incoming satellite data so that the rates and geographical distribution of ozone depletion can be determined. By tracing the amounts of solar irradiation received by different air parcels in a transport model, it is furthermore possible to study the photolytic reactions that destroy ozone. In this study, destruction of ozone that took place in the Antarctic winter of 2003 and in the Arctic winter of 2002/2003 has been examined by assimilating ozone data from the ENVISAT/MIPAS and Odin/SMR satellite instruments. Large-scale depletion of ozone was observed in the Antarctic polar vortex of 2003 when sunlight returned after the polar night. By mid October, ENVISAT/MIPAS data indicate vortex ozone depletion in the ranges 80–100% and 70–90% on the 425 and 475 K potential temperature levels respectively, while the Odin/SMR data indicate depletion in the ranges 70–90% and 50–70%. The discrepancy between the two instruments has been attributed to systematic errors in the Odin/SMR data. Assimilated fields of ENVISAT/MIPAS data indicate ozone depletion in the range 10–20% on the 475 K potential temperature level (~19 km altitude) in the central regions of the 2002/2003 Arctic polar vortex. Assimilated fields of Odin/SMR data on the other hand indicate ozone depletion in the range 20–30%.
Directory of Open Access Journals (Sweden)
Kohei Arai
2013-04-01
Full Text Available A comparative study of linear and nonlinear mixed-pixel models, in which pixels of remote sensing satellite images are composed of several ground cover materials mixed together, is conducted for remote sensing satellite image analysis. The mixed-pixel models are based on Cierniewski's ground surface reflectance model. The comparative study is conducted using Monte Carlo Ray Tracing (MCRT) simulations. Through the simulation study, the difference between linear and nonlinear mixed-pixel models is clarified. It is also found that the simulation model is validated.
Proton Upset Monte Carlo Simulation
O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.
2009-01-01
The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.
Physics of Fully Depleted CCDs
Holland, S E; Kolbe, W F; Lee, J S
2014-01-01
In this work we present simple, physics-based models for two effects that have been noted in the fully depleted CCDs that are presently used in the Dark Energy Survey Camera. The first effect is the observation that the point-spread function increases slightly with the signal level. This is explained by considering the effect on charge-carrier diffusion due to the reduction in the magnitude of the channel potential as collected signal charge acts to partially neutralize the fixed charge in the depleted channel. The resulting reduced voltage drop across the carrier drift region decreases the vertical electric field and increases the carrier transit time. The second effect is the observation of low-level, concentric ring patterns seen in uniformly illuminated images. This effect is shown to be most likely due to lateral deflection of charge during the transit of the photogenerated carriers to the potential wells as a result of lateral electric fields. The lateral fields are a result of space charge in the fully...
Fast sequential Monte Carlo methods for counting and optimization
Rubinstein, Reuven Y; Vaisman, Radislav
2013-01-01
A comprehensive account of the theory and application of Monte Carlo methods Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the
Buyukada, Musa
2017-02-01
The aim of the present study is to investigate the thermogravimetric behaviour of the co-combustion of hazelnut hull (HH) and coal blends using three approaches: (1) multi non-linear regression (MNLR) modeling based on Box-Behnken design (BBD), (2) optimization based on response surface methodology (RSM), and (3) probabilistic uncertainty analysis based on Monte Carlo simulation as a function of blend ratio, heating rate, and temperature. The response variable was predicted by the best-fit MNLR model with a predicted regression coefficient (R{sup 2}{sub pred}) of 99.5%. A blend ratio of 90/10 (HH to coal, %wt), a temperature of 405°C, and a heating rate of 44°C min{sup -1} were determined as the RSM-optimized conditions, with a mass loss of 87.4%. Validation experiments with three replications were performed to justify the predicted mass loss percentage, and a mass loss of 87.5%±0.2 was obtained under the RSM-optimized conditions. The probabilistic uncertainty analysis was performed using Monte Carlo simulations.
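The third approach above, probabilistic uncertainty analysis by Monte Carlo, can be sketched by propagating assumed input uncertainties through a surrogate model. The linear coefficients and input distributions below are invented for demonstration and are not the paper's fitted MNLR model:

```python
import random
import statistics

def predicted_mass_loss(blend_ratio, temperature, heating_rate):
    # Hypothetical linear surrogate standing in for the fitted MNLR model
    # (coefficients are invented for illustration).
    return 0.5 * blend_ratio + 0.08 * temperature + 0.2 * heating_rate

random.seed(42)
samples = []
for _ in range(10_000):
    # Assumed input uncertainties, centered on the RSM-optimized conditions.
    br = random.gauss(90.0, 2.0)   # blend ratio, %wt HH
    t = random.gauss(405.0, 5.0)   # temperature, deg C
    hr = random.gauss(44.0, 1.5)   # heating rate, deg C/min
    samples.append(predicted_mass_loss(br, t, hr))

mean = statistics.mean(samples)
std = statistics.stdev(samples)
ranked = sorted(samples)
lo, hi = ranked[250], ranked[9750]  # approximate central 95% interval
print(f"mean={mean:.1f}% std={std:.2f} 95% interval=({lo:.1f}, {hi:.1f})")
```

Summarizing the sampled outputs by mean, spread, and an empirical interval is the essence of the uncertainty analysis; the real study would replace the surrogate with the fitted regression.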
Power distributions in fresh and depleted LEU and HEU cores of the MITR reactor.
Energy Technology Data Exchange (ETDEWEB)
Wilson, E.H.; Horelik, N.E.; Dunn, F.E.; Newton, T.H., Jr.; Hu, L.; Stevens, J.G. (Nuclear Engineering Division); (2MIT Nuclear Reactor Laboratory and Nuclear Science and Engineering Department)
2012-04-04
The Massachusetts Institute of Technology Reactor (MITR-II) is a research reactor in Cambridge, Massachusetts, designed primarily for experiments using neutron beam and in-core irradiation facilities. It delivers a neutron flux comparable to current LWR power reactors in a compact 6 MW core using Highly Enriched Uranium (HEU) fuel. In the framework of its non-proliferation policies, the international community presently aims to minimize the amount of nuclear material available that could be used for nuclear weapons. In this geopolitical context, most domestic and international research and test reactors have started a program of conversion to the use of Low Enriched Uranium (LEU) fuel. A new type of LEU fuel based on an alloy of uranium and molybdenum (UMo) is expected to allow the conversion of U.S. domestic high performance reactors like the MITR-II. Toward this goal, core geometry and power distributions are presented. Power distributions are calculated for LEU cores depleted with MCODE using an MCNP5 Monte Carlo model. The MCNP5 HEU and LEU MITR models were previously compared to experimental benchmark data for the MITR-II. The same model was used with a finer spatial depletion in order to generate power distributions for the LEU cores. The objective of this work is to generate and characterize a series of fresh and depleted core peak power distributions, and to provide a thermal hydraulic evaluation of the geometry, which should be considered in subsequent thermal hydraulic safety analyses.
Diffenderfer, Eric S; Dolney, Derek; Schaettler, Maximilian; Sanzari, Jenine K; McDonough, James; Cengel, Keith A
2014-03-01
The space radiation environment imposes increased dangers of exposure to ionizing radiation, particularly during a solar particle event (SPE). These events consist primarily of low energy protons that produce a highly inhomogeneous dose distribution. Due to this inherent dose heterogeneity, experiments designed to investigate the radiobiological effects of SPE radiation present difficulties in evaluating and interpreting dose to sensitive organs. To address this challenge, we used the Geant4 Monte Carlo simulation framework to develop dosimetry software that uses computed tomography (CT) images and provides radiation transport simulations incorporating all relevant physical interaction processes. We found that this simulation accurately predicts measured data in phantoms and can be applied to model dose in radiobiological experiments with animal models exposed to charged particle (electron and proton) beams. This study clearly demonstrates the value of Monte Carlo radiation transport methods for two critically interrelated uses: (i) determining the overall dose distribution and dose levels to specific organ systems for animal experiments with SPE-like radiation, and (ii) interpreting the effect of random and systematic variations in experimental variables (e.g. animal movement during long exposures) on the dose distributions and consequent biological effects from SPE-like radiation exposure. The software developed and validated in this study represents a critically important new tool that allows integration of computational and biological modeling for evaluating the biological outcomes of exposures to inhomogeneous SPE-like radiation dose distributions, and has potential applications for other environmental and therapeutic exposure simulations.
New Approach For Prediction Groundwater Depletion
Moustafa, Mahmoud
2017-01-01
Current approaches to quantifying groundwater depletion involve water balance and satellite gravity. However, the water balance technique includes uncertain estimation of parameters such as evapotranspiration and runoff, while the satellite method consumes time and effort. The work reported in this paper proposes using failure theory in a novel way to predict groundwater saturated thickness depletion. An important issue in the proposed failure theory is to determine the failure point (depletion case). The proposed technique uses depth of water, as the net result of recharge/discharge processes in the aquifer, to calculate the remaining saturated thickness resulting from the applied pumping rates in an area, in order to evaluate groundwater depletion. The Weibull function and Bayesian analysis were used to model and analyze data collected from 1962 to 2009. The proposed methodology was tested in a nonrenewable aquifer with no recharge. Consequently, the continuous decline in water depth has been the main criterion used to estimate the depletion. The value of the proposed approach is in predicting the probable effect of the currently applied pumping rates on the saturated thickness, based on the remaining saturated thickness data. The limitation of the suggested approach is that it assumes the applied management practices are constant during the prediction period. The study predicted that after 300 years there would be an 80% probability that the saturated aquifer would be depleted. Lifetime (failure) theory can thus provide a simple alternative way to predict the remaining saturated thickness depletion without time-consuming processing or sophisticated software.
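The failure-theory idea can be sketched with a two-parameter Weibull lifetime model. The shape and scale values below are hypothetical placeholders, chosen only so that the depletion probability reaches roughly 80% at 300 years, as the study reports; an actual analysis would fit them to the 1962-2009 water-depth records:

```python
import math

def weibull_cdf(t, shape, scale):
    # Probability that "failure" (aquifer depletion) has occurred by time t.
    return 1.0 - math.exp(-((t / scale) ** shape))

# Hypothetical parameters; a real analysis would estimate these from the
# observed water-depth decline (e.g. via Bayesian inference).
shape, scale = 2.0, 230.0
for years in (100, 200, 300):
    print(f"P(depleted by year {years}) = {weibull_cdf(years, shape, scale):.2f}")
```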
Bardenet, R.
2012-01-01
ISBN:978-2-7598-1032-1; International audience; Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that allow one to compute these integrals numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among which rejection sampling, importance sampling and Markov chain Monte Carlo (MCMC) methods. We give intuition on the theoretic...
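Of the algorithms listed in this review, rejection sampling is the simplest to sketch. The minimal Python example below draws from a Beta(2,2) target using a uniform proposal; the target density and envelope constant are chosen purely for illustration:

```python
import random

def rejection_sample(target_pdf, proposal_draw, proposal_pdf, env_const, n):
    """Draw n samples from target_pdf using envelope env_const * proposal_pdf."""
    out = []
    while len(out) < n:
        x = proposal_draw()
        # Accept x with probability target(x) / (M * proposal(x)), which the
        # envelope condition guarantees is at most 1.
        if random.random() < target_pdf(x) / (env_const * proposal_pdf(x)):
            out.append(x)
    return out

# Example target: Beta(2, 2), whose pdf 6x(1-x) is bounded by 1.5 on [0, 1],
# so a Uniform(0, 1) proposal with M = 1.5 is a valid envelope.
random.seed(0)
xs = rejection_sample(lambda x: 6.0 * x * (1.0 - x), random.random,
                      lambda x: 1.0, env_const=1.5, n=50_000)
print(round(sum(xs) / len(xs), 3))  # Beta(2, 2) has mean 0.5
```

The acceptance rate equals 1/M, so tight envelopes matter; here two of every three proposals are accepted.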
Dunn, William L
2012-01-01
Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle proble
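The truncated reference above is to the classic Buffon's needle problem, itself a compact worked example of Monte Carlo estimation: dropping needles on a lined floor and counting crossings yields an estimate of π. A minimal sketch, with needle length and line spacing chosen for illustration:

```python
import math
import random

def buffon_pi(n, needle_len=1.0, line_gap=1.0):
    # For needle_len <= line_gap, P(cross) = 2 * needle_len / (pi * line_gap),
    # so pi can be estimated from the observed crossing frequency.
    hits = 0
    for _ in range(n):
        center = random.uniform(0.0, line_gap / 2.0)  # distance to nearest line
        theta = random.uniform(0.0, math.pi / 2.0)    # needle orientation
        if center <= (needle_len / 2.0) * math.sin(theta):
            hits += 1
    return 2.0 * needle_len * n / (line_gap * hits)

random.seed(1)
print(round(buffon_pi(1_000_000), 3))
```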
Depleted zinc: Properties, application, production.
Borisevich, V D; Pavlov, A V; Okhotina, I A
2009-01-01
The addition of ZnO, depleted in the Zn-64 isotope, to the water of boiling water nuclear reactors lessens the accumulation of Co-60 on the reactor interior surfaces, reduces radioactive wastes and increases the reactor service life because of the inhibitory action of zinc on inter-granular stress corrosion cracking. To the same effect, depleted zinc in the form of acetate dihydrate is used in pressurized water reactors. The gas centrifuge isotope separation method is applied for production of depleted zinc on the industrial scale. More than 20 years of depleted zinc application history demonstrates its benefits for reducing NPP personnel radiation exposure and combating construction material corrosion.
A GPU-based Large-scale Monte Carlo Simulation Method for Systems with Long-range Interactions
Liang, Yihao; Li, Yaohang
2016-01-01
In this work we present an efficient implementation of Canonical Monte Carlo simulation for Coulomb many body systems on graphics processing units (GPU). Our method takes advantage of the GPU Single Instruction, Multiple Data (SIMD) architectures. It adopts the sequential updating scheme of Metropolis algorithm, and makes no approximation in the computation of energy. It reaches a remarkable 440-fold speedup, compared with the serial implementation on CPU. We use this method to simulate primitive model electrolytes. We measure very precisely all ion-ion pair correlation functions at high concentrations, and extract renormalized Debye length, renormalized valences of constituent ions, and renormalized dielectric constants. These results demonstrate unequivocally physics beyond the classical Poisson-Boltzmann theory.
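The sequential (particle-by-particle) Metropolis updating scheme described above can be illustrated on a CPU with a toy system: repelling particles in a one-dimensional harmonic trap, standing in for the primitive-model electrolyte. The potential, inverse temperature, and step size below are invented for demonstration; as in the paper's scheme, energy differences are computed exactly, with no approximation:

```python
import math
import random

def total_energy(xs):
    # Harmonic confinement plus pairwise repulsion; a toy stand-in for the
    # primitive-model electrolyte energy (computed exactly, no approximation).
    e = sum(0.5 * x * x for x in xs)
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            e += 1.0 / (abs(xs[i] - xs[j]) + 1e-9)
    return e

def metropolis_sweep(xs, beta, step):
    # Sequential updating: particles are visited one by one, and each trial
    # move is accepted with probability min(1, exp(-beta * dE)).
    for i in range(len(xs)):
        old = xs[i]
        e_old = total_energy(xs)
        xs[i] = old + random.uniform(-step, step)
        d_e = total_energy(xs) - e_old
        if d_e > 0.0 and random.random() >= math.exp(-beta * d_e):
            xs[i] = old  # reject the move

random.seed(3)
xs = [random.uniform(-1.0, 1.0) for _ in range(8)]
for _ in range(2000):
    metropolis_sweep(xs, beta=1.0, step=0.5)
print(sorted(round(x, 2) for x in xs))
```

On a GPU, the SIMD gain comes from evaluating the many pairwise energy terms of each trial move in parallel while keeping this same sequential acceptance logic.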
Institute of Scientific and Technical Information of China (English)
Zhang Zhi-Dong; Chang Chun-Rui; Ma Dong-Lai
2009-01-01
Hybrid nematic films have been studied by Monte Carlo simulations using a lattice spin model, in which the pair potential is spatially anisotropic and dependent on the elastic constants of the liquid crystal. We confirm the existence in the thin hybrid nematic film of a biaxially nonbent structure and the structural transition from the biaxial to the bent-director structure, which is similar to the result obtained using the Lebwohl-Lasher model. However, the step-like director profile, characteristic of the biaxial structure, is spatially asymmetric in the film because the pair potential leads to K1≠K3. We estimate the upper cell thickness at which the biaxial structure can be found to be 69 spin layers.
Monte Carlo simulation on kinetic behavior of one-pot hyperbranched polymerization based on AA*+CB2
Institute of Scientific and Technical Information of China (English)
(no author listed)
2010-01-01
Monte Carlo simulation was applied to investigate the kinetic behavior of the AA*+CB2 system. The algorithm consisted of two procedures to simulate the in-situ synthesis of the AB2-like intermediate and the subsequent polymerization, respectively. In order to improve the accuracy of the prediction, the mobility distinction between molecules of different scales in the polymerization was taken into account by relating the reaction rate constants to the collision possibility of each pair of species. The feed ratio of the initial monomers and the activity difference between the two functional groups within AA* were studied systematically to catch the essential features of the reaction. Simulation results have revealed that the achievable maximum conversion primarily depends on the extent of the reactivity difference between the A- and A*-groups, and it is suggested that the A*-group should be at least 10 times more active than the A-group to achieve a high number-average degree of polymerization.
Zagrebin, M. A.; Sokolovskiy, V. V.; Buchelnikov, V. D.
2016-09-01
Structural, magnetic and electronic properties of stoichiometric Co2YZ Heusler alloys (Y = Cr, Fe, Mn and Z = Al, Si, Ge) have been studied by means of ab initio calculations and Monte Carlo simulations. The investigations were performed in dependence on different levels of approximation in DFT (FP and ASA modes, as well as GGA and GGA + U schemes) and on external pressure. It is shown that in the case of the GGA scheme the half-metallic behavior is clearly observed for compounds containing Cr and Mn transition metals, while Co2FeZ alloys demonstrate pseudo half-metallic behavior. It is demonstrated that an applied pressure and an account of the Coulomb repulsion (U) lead to the stabilization of the half-metallic nature of Co2YZ alloys. The strong ferromagnetic inter-sublattice (Co-Y) interactions together with intra-sublattice (Co-Co and Y-Y) interactions explain the high values of the Curie temperature obtained by Monte Carlo simulations using the Heisenberg model. It is observed that a decrease in the number of valence electrons of the Y atoms (i.e. substitution of Fe by Mn and Cr) leads to a weakening of the exchange interactions and to a reduction of the Curie temperature. Besides, in the case of the FP mode the Curie temperatures were found to be in good agreement with available experimental and theoretical data, where the latter were obtained by applying the empirical relation between the Curie temperature and the total magnetic moment.
GPU-based Parallel Monte Carlo Simulation for Radiotherapy Dose Calculation
Institute of Scientific and Technical Information of China (English)
甘旸谷; 黄斐增
2012-01-01
Objective: Monte Carlo simulation is commonly considered to be the most accurate dose calculation method in radiotherapy. However, its efficiency still requires improvement for many routine clinical applications. Methods: This paper presents recent progress in GPU-based Monte Carlo dose calculation. We utilize the parallel computation ability of a GPU, programmed in CUDA, to compute photon dose deposition with high efficiency, while maintaining the same particle transport physics as in the original Monte Carlo simulation code and therefore obtaining the same level of simulation accuracy. Results: On an NVIDIA GTX460 (1 GB DDR5) paired with an INTEL i5 2300, with all 336 processor cores working together, the speed-up factor reaches 116.6 when computing one million photon dose depositions and 127.5 for ten million incident photons. Conclusions: Using a GPU and CUDA to run a Monte Carlo simulation can greatly improve the efficiency of dose calculation.
A Novel Depletion-Mode MOS Gated Emitter Shorted Thyristor
Institute of Scientific and Technical Information of China (English)
张鹤鸣; 戴显英; 张义门; 马晓华; 林大松
2000-01-01
A novel MOS-gated thyristor, the depletion-mode MOS gated emitter shorted thyristor (DMST), and two structures for it are proposed. In the DMST, the channel of the depletion-mode MOS inherently shorts the thyristor's emitter-base junction. The operation of the device is controlled by the interruption and recovery of the depletion-mode MOS P-channel. Its favorable properties have been demonstrated by 2-D numerical simulations and by tests on the fabricated chips.
11th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing
Nuyens, Dirk
2016-01-01
This book presents the refereed proceedings of the Eleventh International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing that was held at the University of Leuven (Belgium) in April 2014. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems, arising, in particular, in finance, statistics and computer graphics.
Error in Monte Carlo, quasi-error in Quasi-Monte Carlo
Kleiss, R H
2006-01-01
While the Quasi-Monte Carlo method of numerical integration achieves smaller integration error than standard Monte Carlo, its use in particle physics phenomenology has been hindered by the absence of a reliable way to estimate that error. The standard Monte Carlo error estimator relies on the assumption that the points are generated independently of each other and, therefore, fails to account for the error improvement advertised by the Quasi-Monte Carlo method. We advocate the construction of an estimator of stochastic nature, based on the ensemble of pointsets with a particular discrepancy value. We investigate the consequences of this choice and give some first empirical results on the suggested estimators.
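The error behavior under discussion can be demonstrated on a one-dimensional integral. The sketch below compares plain Monte Carlo against a van der Corput low-discrepancy sequence for the integral of x² over [0,1], which equals 1/3 exactly; the integrand and sample size are chosen only for illustration:

```python
import random

def van_der_corput(n, base=2):
    # First n points of the base-2 van der Corput low-discrepancy sequence,
    # built by reflecting the digits of i about the radix point.
    seq = []
    for i in range(1, n + 1):
        x, denom, k = 0.0, 1.0, i
        while k > 0:
            denom *= base
            x += (k % base) / denom
            k //= base
        seq.append(x)
    return seq

f = lambda x: x * x  # integral over [0, 1] is exactly 1/3
n = 4096
random.seed(0)
mc_est = sum(f(random.random()) for _ in range(n)) / n
qmc_est = sum(f(x) for x in van_der_corput(n)) / n
print(f"MC error:  {abs(mc_est - 1/3):.2e}")
print(f"QMC error: {abs(qmc_est - 1/3):.2e}")
```

The quasi-random error shrinks roughly like log(n)/n rather than the 1/√n of independent sampling, which is exactly the improvement the standard independent-points error estimator cannot see.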
2015-01-01
We present a sophisticated likelihood reconstruction algorithm for shower-image analysis of imaging Cherenkov telescopes. The reconstruction algorithm is based on the comparison of the camera pixel amplitudes with the predictions from a Monte Carlo based model. Shower parameters are determined by a maximisation of a likelihood function. Maximisation of the likelihood as a function of shower fit parameters is performed using a numerical non-linear optimisation technique. A related reconstruction technique has already been developed by the CAT and the H.E.S.S. experiments, and provides a more precise direction and energy reconstruction of the photon induced shower compared to the second moment of the camera image analysis. Examples are shown of the performance of the analysis on simulated gamma-ray data from the VERITAS array.
DEFF Research Database (Denmark)
Holm, Bent
2005-01-01
A contextualization of the San Carlo opera house within a cultural-historical framework of representation, with particular focus on the concept of napolalità.
Roessel, Robert A., Jr.
The first section of this book covers the historical and cultural background of the San Carlos Apache Indians, as well as an historical sketch of the development of their formal educational system. The second section is devoted to the problems of teachers of the Indian children in Globe and San Carlos, Arizona. It is divided into three parts--(1)…
Ego depletion impairs implicit learning.
Thompson, Kelsey R; Sanchez, Daniel J; Wesley, Abigail H; Reber, Paul J
2014-01-01
Implicit skill learning occurs incidentally and without conscious awareness of what is learned. However, the rate and effectiveness of learning may still be affected by decreased availability of central processing resources. Dual-task experiments have generally found impairments in implicit learning; however, these studies have also shown that certain characteristics of the secondary task (e.g., timing) can complicate the interpretation of these results. To avoid this problem, the current experiments used a novel method to impose resource constraints prior to engaging in skill learning. Ego depletion theory states that humans possess a limited store of cognitive resources that, when depleted, results in deficits in self-regulation and cognitive control. In a first experiment, we used a standard ego depletion manipulation prior to performance of the Serial Interception Sequence Learning (SISL) task. Depleted participants exhibited poorer test performance than did non-depleted controls, indicating that reducing available executive resources may adversely affect implicit sequence learning, expression of sequence knowledge, or both. In a second experiment, depletion was administered either prior to or after training. Participants who reported higher levels of depletion before or after training again showed less sequence-specific knowledge on the post-training assessment. However, the results did not allow for clear separation of ego depletion effects on learning versus subsequent sequence-specific performance. These results indicate that performance on an implicitly learned sequence can be impaired by a reduction in executive resources, in spite of learning taking place outside of awareness and without conscious intent.
Depletable resources and the economy.
Heijman, W.J.M.
1991-01-01
The subject of this thesis is the depletion of scarce resources. The main question to be answered is how to avoid future resource crises. After dealing with the complex relation between nature and economics, three important concepts in relation with resource depletion are discussed: steady state, ti
Maiti, Saumen; Tiwari, R. K.
2009-11-01
Identification of rock boundaries and structural features from well log response is a fundamental problem in geological field studies. However, in a complex geologic situation, such as in the presence of crystalline rocks where metamorphisms lead to facies changes, it is not easy to discern accurate information from well log data using conventional artificial neural network (ANN) methods. Moreover, inferences drawn by such methods are also found to be ambiguous because of the strong overlapping of well log signals, which are generally tainted with deceptive noise. Here, we have developed an alternative ANN approach based on Bayesian statistics using the concept of a Hybrid Monte Carlo (HMC)/Markov chain Monte Carlo (MCMC) inversion scheme for modeling the German Continental Deep Drilling Program (KTB) well log data. The MCMC algorithm draws an independent and identically distributed (i.i.d.) sample by Markov chain simulation from the posterior probability distribution, using the principle of statistical mechanics in Hamiltonian dynamics. In this algorithm, each trajectory is updated by approximating the Hamiltonian differential equations through a leapfrog discretization scheme. We examined the stability and efficiency of the HMC-based approach on "noisy" data contaminated with different levels of colored noise. We also perform uncertainty analysis by estimating the standard deviation (STD) error map of the a posteriori covariance matrix at the network output for three types of lithofacies over the entire length of the litho-section of the KTB. Our analyses demonstrate that the HMC-based approach provides a robust means for classification of complex lithofacies successions from the noisy KTB borehole signals, and hence may provide a useful guide for understanding the crustal inhomogeneity and structural discontinuity in many other tectonically critical and complex regions.
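The leapfrog scheme at the heart of HMC can be sketched in a few lines of Python. The example below samples a standard normal target rather than the KTB lithofacies posterior; the step size and trajectory length are illustrative choices:

```python
import math
import random

def leapfrog(q, p, grad_u, eps, steps):
    # Leapfrog discretization of Hamiltonian dynamics: half momentum kick,
    # alternating full position/momentum updates, final half kick.
    p -= 0.5 * eps * grad_u(q)
    for _ in range(steps - 1):
        q += eps * p
        p -= eps * grad_u(q)
    q += eps * p
    p -= 0.5 * eps * grad_u(q)
    return q, p

def hmc_sample(n, eps=0.2, steps=20):
    # Toy target: standard normal, so U(q) = q^2 / 2 and grad U(q) = q.
    u = lambda q: 0.5 * q * q
    grad_u = lambda q: q
    q, out = 0.0, []
    for _ in range(n):
        p = random.gauss(0.0, 1.0)                 # resample momentum
        h0 = u(q) + 0.5 * p * p
        q_new, p_new = leapfrog(q, p, grad_u, eps, steps)
        h1 = u(q_new) + 0.5 * p_new * p_new
        if random.random() < math.exp(min(0.0, h0 - h1)):  # Metropolis step
            q = q_new
        out.append(q)
    return out

random.seed(7)
draws = hmc_sample(5000)
m = sum(draws) / len(draws)
v = sum((x - m) ** 2 for x in draws) / len(draws)
print(round(m, 2), round(v, 2))
```

Because the leapfrog integrator is reversible and volume-preserving, the Metropolis correction on the Hamiltonian error is all that is needed to make the chain target the exact posterior.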
Energy Technology Data Exchange (ETDEWEB)
Chen, X; Xing, L; Luxton, G; Bush, K [Stanford University, Palo Alto, CA (United States); Azcona, J [Clinica Universidad de Navarra, Pamplona (Spain)
2014-06-01
Purpose: Patient-specific QA for VMAT is incapable of providing full 3D dosimetric information and is labor intensive in the case of severe heterogeneities or small-aperture beams. A cloud-based Monte Carlo dose reconstruction method described here can perform the evaluation in the entire 3D space and rapidly reveal the source of discrepancies between measured and planned dose. Methods: This QA technique consists of two integral parts: measurement using a phantom containing an array of dosimeters, and a cloud-based voxel Monte Carlo algorithm (cVMC). After a VMAT plan was approved by a physician, a dose verification plan was created and delivered to the phantom using our Varian Trilogy or TrueBeam system. Actual delivery parameters (i.e., dose fraction, gantry angle, and MLC positions at control points) were extracted from Dynalog or trajectory files. Based on the delivery parameters, the 3D dose distributions in the phantom containing the detector were recomputed using the Eclipse dose calculation algorithms (AAA and AXB) and cVMC. Comparison and Gamma analysis were then conducted to evaluate the agreement between measured, recomputed, and planned dose distributions. To test the robustness of this method, we examined several representative VMAT treatments. Results: (1) The accuracy of cVMC dose calculation was validated via comparative studies. For cases that passed the patient-specific QAs using commercial dosimetry systems such as Delta-4, MAPCheck, and PTW Seven29 array, agreement between cVMC-recomputed, Eclipse-planned, and measured doses was obtained with >90% of the points satisfying the 3%-and-3mm gamma index criteria. (2) The cVMC method incorporating Dynalog files was effective in revealing the root causes of the dosimetric discrepancies between Eclipse-planned and measured doses and provided a basis for solutions. Conclusion: The proposed method offers a highly robust and streamlined patient-specific QA tool and provides a feasible solution for the rapidly increasing use of VMAT.
Directory of Open Access Journals (Sweden)
J. Tonttila
2013-08-01
A new method for parameterizing the subgrid variations of vertical velocity and cloud droplet number concentration (CDNC) is presented for general circulation models (GCMs). These parameterizations build on existing parameterizations that create stochastic subgrid cloud columns inside the GCM grid cells, which can be employed by the Monte Carlo independent column approximation approach for radiative transfer. The new model version adds a description of vertical velocity in individual subgrid columns, which can be used to compute cloud activation and the subgrid distribution of the number of cloud droplets explicitly. Autoconversion is also treated explicitly in the subcolumn space. This provides a consistent way of simulating the cloud radiative effects with two-moment cloud microphysical properties defined at the subgrid scale. The primary impact of the new parameterizations is to decrease the CDNC over polluted continents, while over the oceans the impact is smaller. Moreover, the lower CDNC induces stronger autoconversion of cloud water to rain. The strongest reduction in CDNC and cloud water content over the continental areas promotes weaker shortwave cloud radiative effects (SW CREs), even after retuning the model. However, compared to the reference simulation, a slightly stronger SW CRE is seen, e.g., over mid-latitude oceans, where the CDNC remains similar to the reference simulation and the in-cloud liquid water content is slightly increased after retuning the model.
Contrasts between Antarctic and Arctic ozone depletion.
Solomon, Susan; Portmann, Robert W; Thompson, David W J
2007-01-09
This work surveys the depth and character of ozone depletion in the Antarctic and Arctic using available long-term balloon-borne and ground-based records that cover multiple decades. Such data reveal changes in the range of ozone values, including the extremes observed as polar air passes over the stations. Antarctic ozone observations reveal widespread and massive local depletion in the heart of the ozone "hole" region near 18 km, frequently exceeding 90%. Although some ozone losses are apparent in the Arctic during particular years, the depth of the ozone losses in the Arctic is considerably smaller, and their occurrence is far less frequent. Many Antarctic total integrated column ozone observations in spring since approximately the 1980s show values considerably below those ever observed in earlier decades. For the Arctic, there is evidence of some spring-season depletion of total ozone at particular stations, but the changes are much less pronounced compared with the range of past data. Thus, the observations demonstrate that the widespread and deep ozone depletion that characterizes the Antarctic ozone hole is a unique feature on the planet.
Energy Technology Data Exchange (ETDEWEB)
Baba, Justin S [ORNL]; Koju, Vijay [ORNL]; John, Dwayne O [ORNL]
2016-01-01
The modulation of the state of polarization of photons due to scatter generates an associated geometric phase that is being investigated as a means of decreasing the degree of uncertainty in back-projecting the paths traversed by photons detected in backscattered geometry. In our previous work, we established that the polarimetrically detected Berry phase correlates with the mean photon penetration depth of the backscattered photons collected for image formation. In this work, we report on the impact of state-of-linear-polarization (SOLP) filtering on both the magnitude and the population distributions of image-forming detected photons as a function of the absorption coefficient of the scattering sample. The results, based on a Berry-phase-tracking implementation of a polarized Monte Carlo code, indicate that sample absorption plays a significant role in the mean depth attained by the image-forming backscattered detected photons.
Lazos, Dimitrios; Pokhrel, Damodar; Su, Zhong; Lu, Jun; Williamson, Jeffrey F.
2008-03-01
Fast and accurate modeling of cone-beam CT (CBCT) x-ray projection data can improve CBCT image quality either directly, by linearizing projection data for each patient prior to image reconstruction (thereby mitigating detector blur/lag, spectral hardening, and scatter artifacts), or indirectly, by supporting rigorous comparative simulation studies of competing image reconstruction and processing algorithms. In this study, we compare Monte Carlo-computed x-ray projections with projections experimentally acquired from our Varian Trilogy CBCT imaging system for phantoms of known design. Our recently developed Monte Carlo photon-transport code, PTRAN, was used to compute primary and scatter projections for a cylindrical phantom of known diameter (NA model 76-410), with and without bow-tie filter and antiscatter grid, for both full- and half-fan geometries. These simulations were based upon measured 120 kVp spectra, beam profiles, and the flat-panel detector (4030CB) point-spread function. Compound Poisson-process noise was simulated based upon measured beam output. Computed projections were compared to flat- and dark-field corrected 4030CB images, where scatter profiles were estimated by subtracting narrow-axial from full-axial-width 4030CB profiles. In agreement with the literature, the difference between simulated and measured projection data is of the order of 6-8%. The measurement of the scatter profiles is affected by the long tails of the detector PSF. Higher accuracy can be achieved mainly by improving the beam modeling and correcting the nonlinearities induced by the detector PSF.
Energy Technology Data Exchange (ETDEWEB)
Cramer, S.N.
1984-01-01
The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.
Energy Technology Data Exchange (ETDEWEB)
Dong, Han; Sharma, Diksha; Badano, Aldo, E-mail: aldo.badano@fda.hhs.gov [Division of Imaging, Diagnostics, and Software Reliability, Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, Maryland 20993 (United States)
2014-12-15
Purpose: Monte Carlo simulations play a vital role in understanding the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridMANTIS, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load between dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webMANTIS and visualMANTIS, that facilitate the setup of computational experiments via hybridMANTIS. Methods: The visualization tools visualMANTIS and webMANTIS enable the user to control simulation properties through a user interface. In the case of webMANTIS, control via a web browser allows access through mobile devices such as smartphones or tablets. webMANTIS acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. Results: The output consists of the point response, the pulse-height spectrum, and optical transport statistics generated by hybridMANTIS. The users can download the output images and statistics as a zip file for future reference. In addition, webMANTIS provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns and allows the user to trace the history of the optical photons. Conclusions: The visualization tools visualMANTIS and webMANTIS provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers while allowing users to save simulation parameters and results from prior experiments. The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying
Jia, Zhen-Yi; Xia, Yang; Tong, Danian; Yao, Jing; Chen, Hong-Qi; Yang, Jun
2014-06-01
Complex communities of microorganisms play important roles in human health, and alterations in the intestinal microbiota may induce intestinal inflammation and numerous diseases. The purpose of this study was to identify the key genes and processes affected by depletion of the intestinal microbiota in a murine model. The Affymetrix microarray dataset GSE22648 was downloaded from the Gene Expression Omnibus database, and differentially expressed genes (DEGs) were identified using the limma package in R. A protein-protein interaction (PPI) network was constructed for the DEGs using the Cytoscape software, and the network was divided into several modules using the MCODE plugin. Furthermore, the modules were functionally annotated using the PiNGO plugin, and DEG-related pathways were retrieved and analyzed using the GenMAPP software. A total of 53 DEGs were identified, of which 26 were upregulated and 27 were downregulated. The PPI network of these DEGs comprised 3 modules. The most significant module-related DEGs were the cytochrome P450 (CYP) 4B1 isozyme gene (CYP4B1) in module 1, CYP4F14 in module 2, and the tachykinin precursor 1 gene (TAC1) in module 3. The majority of enriched pathways of modules 1 and 2 were oxidation-reduction pathways (metabolism of xenobiotics by CYPs) and lipid metabolism-related pathways, including linoleic acid and arachidonic acid metabolism. The neuropeptide signaling pathway was the most significantly enriched functional pathway of module 3. In conclusion, our findings strongly suggest that intestinal microbiota depletion affects cellular metabolism and oxidation-reduction pathways. In addition, this is the first time, to the best of our knowledge, that the neuropeptide signaling pathway has been reported to be affected by intestinal microbiota depletion in mice. The present study provides a list of candidate genes and processes related to the interaction of the microbiota with the intestinal tract.
You, Jong-Bum; Park, Miran; Park, Jeong-Woo; Kim, Gyungock
2008-10-27
We present high-speed optical modulation using the carrier-depletion effect in an asymmetric silicon p-n diode resonator. To optimize the coupling efficiency and reduce bending loss, a two-step-etched waveguide is used in the racetrack resonator with a directional coupler. The quality factor of the resonator, with a circumference of 260 μm, is 9,482, and the DC on/off ratio is 8 dB at -12 V. The device shows a 3 dB bandwidth of approximately 8 GHz and data transmission at up to 12.5 Gbit/s.
Node Localization Algorithm for WSN Based on Time Sequence Monte Carlo
Institute of Scientific and Technical Information of China (English)
田浩杉; 李翠然; 谢健骊; 梁樱馨
2016-01-01
A new node localization algorithm for WSNs (Wireless Sensor Networks), named TSMCL (feedback time-series-based Monte Carlo localization), is proposed; it combines a feedback time sequence with Monte Carlo localization. The possible initial sampling region R1 of the target node is determined by the order in which feedback signals arrive from at least three anchor nodes that are one-hop neighbors of the target node. To shrink the sampling region and improve sampling efficiency, the final sampling region R is limited to the overlap of R1 and the Monte Carlo sampling region R2. Simulation results demonstrate that, compared with Monte Carlo localization, the proposed TSMCL algorithm reduces the positioning error by approximately 38%, and the convergence speed is also improved significantly, especially when node mobility is high.
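The R = R1 ∩ R2 sampling idea can be illustrated with a simplified 2-D sketch. Everything concrete below is an assumption for illustration: the anchor coordinates, the maximum node speed, and the use of a distance ranking as a stand-in for the feedback time order; this is not the paper's implementation.

```python
import math
import random

def feedback_order(pos, anchors):
    """Rank anchors by distance: a proxy for the order in which their
    feedback signals would arrive at the node (an assumption of this sketch)."""
    d = [math.dist(pos, a) for a in anchors]
    return sorted(range(len(anchors)), key=lambda i: d[i])

def tsmcl_step(prev_est, anchors, observed_order, v_max, n_samples=300, seed=7):
    """One localization step: draw candidates from the Monte Carlo region R2
    (disc of radius v_max around the previous estimate) and keep only those
    in R1, i.e. candidates consistent with the observed feedback order."""
    rng = random.Random(seed)
    kept, attempts = [], 0
    while len(kept) < n_samples and attempts < 50_000:
        attempts += 1
        r = v_max * math.sqrt(rng.random())          # uniform over the disc
        theta = rng.uniform(0.0, 2.0 * math.pi)
        cand = (prev_est[0] + r * math.cos(theta), prev_est[1] + r * math.sin(theta))
        if feedback_order(cand, anchors) == observed_order:
            kept.append(cand)
    # Position estimate: mean of the accepted samples
    xs, ys = zip(*kept)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

anchors = [(0.0, 0.0), (10.0, 0.0), (5.0, 9.0)]      # three one-hop anchor nodes
true_pos = (4.0, 3.0)
est = tsmcl_step(prev_est=(3.0, 2.0), anchors=anchors,
                 observed_order=feedback_order(true_pos, anchors), v_max=3.0)
```

Restricting samples to the intersection discards candidates inconsistent with the anchor feedback, which is the source of the claimed efficiency gain.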
MCNPX Monte Carlo burnup simulations of the isotope correlation experiments in the NPP Obrigheim
Energy Technology Data Exchange (ETDEWEB)
Cao Yan, E-mail: ycao@anl.go [Nuclear Engineering Division, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, IL 60439 (United States); Gohar, Yousry [Nuclear Engineering Division, Argonne National Laboratory, 9700 South Cass Avenue, Argonne, IL 60439 (United States); Broeders, Cornelis H.M. [Forschungszentrum Karlsruhe, Institute for Neutron Physics and Reactor Technology, P.O. Box 3640, 76021 Karlsruhe (Germany)
2010-10-15
This paper describes the simulation of the Isotope Correlation Experiment (ICE) using the MCNPX Monte Carlo computer code package. The Monte Carlo simulation results are compared with the ICE experimental measurements for burnup up to 30 GWD/t. The comparison shows the good capabilities of the MCNPX computer code package for predicting the depletion of the uranium fuel and the buildup of the plutonium isotopes in a PWR thermal reactor. The Monte Carlo simulation results also show good agreement with the experimental data for several long-lived and stable fission products. However, for the americium and curium actinides, it is difficult to judge the prediction capabilities due to the large uncertainties in the ICE experimental data. In the MCNPX numerical simulations, a pin-cell model is utilized to simulate the fuel lattice of the nuclear power reactor. Temperature-dependent libraries based on JEFF3.1 nuclear data files are utilized for the calculations. In addition, temperature-dependent libraries based on ENDF/B-VII nuclear data files are utilized, and the obtained results are very close to the JEFF3.1 results, except for ~10% differences in the prediction of the minor actinide isotope buildup.
Enhancements in Continuous-Energy Monte Carlo Capabilities for SCALE 6.2
Energy Technology Data Exchange (ETDEWEB)
Rearden, Bradley T [ORNL]; Petrie Jr, Lester M [ORNL]; Peplow, Douglas E. [ORNL]; Bekar, Kursat B [ORNL]; Wiarda, Dorothea [ORNL]; Celik, Cihangir [ORNL]; Perfetti, Christopher M [ORNL]; Dunn, Michael E [ORNL]
2014-01-01
SCALE is a widely used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, industry, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a plug-and-play framework that includes three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE's graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 provides several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis, and improved fidelity in nuclear data libraries. A brief overview of SCALE capabilities is provided with emphasis on new features for SCALE 6.2.
Simulation of Impact Position Based on the Monte Carlo Method
Institute of Scientific and Technical Information of China (English)
路航; 石全; 胡起伟; 朱战飞
2011-01-01
Simulation of impact-position dispersion is an important step in damage simulation. Based on an analysis of the components of artillery firing error and their calculation methods, an impact-position dispersion model is built with the Monte Carlo method, and a visual simulation of the impact dispersion produced jointly by concentrated fire, optimum-width fire, and three-distance fire is realized. The results accord with the relevant artillery tactical data, so the model can support damage simulation studies. The case of an artillery battalion firing at a towed-gun company position is analyzed; the results show that using the impact-dispersion model to calculate the damage probability of arbitrarily shaped point-target groups is convenient, reliable, and general.
Quantum Monte Carlo simulation
Wang, Yazhen
2011-01-01
Contemporary scientific studies often rely on the understanding of complex quantum systems via computer simulation. This paper initiates the statistical study of quantum simulation and proposes a Monte Carlo method for estimating analytically intractable quantities. We derive the bias and variance for the proposed Monte Carlo quantum simulation estimator and establish the asymptotic theory for the estimator. The theory is used to design a computational scheme for minimizing the mean square er...
Monte Carlo transition probabilities
Lucy, L. B.
2001-01-01
Transition probabilities governing the interaction of energy packets and matter are derived that allow Monte Carlo NLTE transfer codes to be constructed without simplifying the treatment of line formation. These probabilities are such that the Monte Carlo calculation asymptotically recovers the local emissivity of a gas in statistical equilibrium. Numerical experiments with one-point statistical equilibrium problems for Fe II and Hydrogen confirm this asymptotic behaviour. In addition, the re...
Energy Technology Data Exchange (ETDEWEB)
Petrizzi, L.; Batistoni, P.; Migliori, S. [Associazione EURATOM ENEA sulla Fusione, Frascati (Roma) (Italy); Chen, Y.; Fischer, U.; Pereslavtsev, P. [Association FZK-EURATOM Forschungszentrum Karlsruhe (Germany); Loughlin, M. [EURATOM/UKAEA Fusion Association, Culham Science Centre, Abingdon, Oxfordshire, OX (United Kingdom); Secco, A. [Nice Srl Via Serra 33 Camerano Casasco AT (Italy)
2003-07-01
In deuterium-deuterium (D-D) and deuterium-tritium (D-T) fusion plasmas, neutrons are produced, causing activation of JET machine components. For safe operation and maintenance it is important to be able to predict the induced activation and the resulting shutdown dose rates. This requires a suitable system of codes capable of simulating both the neutron-induced material activation during operation and the decay gamma radiation transport after shutdown in the proper 3-D geometry. Two methodologies to calculate the dose rate in fusion devices have been developed recently and applied to fusion machines, both using the MCNP Monte Carlo code. FZK has developed the more classical approach, the rigorous two-step (R2S) system, in which MCNP is coupled to the FISPACT inventory code with automated routing. ENEA, in collaboration with the ITER Team, has developed an alternative approach, the direct one-step (D1S) method. Neutron and decay gamma transport are handled in one single MCNP run, using an ad hoc cross-section library. The intention was to tightly couple the neutron-induced production of a radio-isotope and the emission of its decay gammas, for an accurate spatial distribution and a reliable calculated statistical error. The two methods have been used by the two Associations to calculate the dose rate at five positions of the JET machine, two inside the vacuum chamber and three outside, at cooling times between 1 second and 1 year after shutdown. The same MCNP model and irradiation conditions have been assumed. The exercise has been proposed and financed in the frame of the Fusion Technological Program of the JET machine. The scope is to supply the designers with the most reliable tool and data to calculate the dose rate on fusion machines. Results showed good agreement: the differences range between 5 and 35%. The next step, to be considered in 2003, will be an exercise in which the comparison will be done with dose-rate data from JET taken during and
Zhang, Shixun; Yamagia, Shinichi; Yunoki, Seiji
2013-08-01
Models of fermions interacting with classical degrees of freedom are applied to a large variety of systems in condensed matter physics. For this class of models, Weiße [Phys. Rev. Lett. 102, 150604 (2009)] has recently proposed a very efficient numerical method, called the O(N) Green-Function-Based Monte Carlo (GFMC) method, in which a kernel polynomial expansion technique is used to avoid the full numerical diagonalization of the fermion Hamiltonian matrix of size N, which usually costs O(N^3) computational complexity. Motivated by this background, in this paper we apply the GFMC method to the double exchange model in three spatial dimensions. We mainly focus on the implementation of the GFMC method using both MPI on a CPU-based cluster and Nvidia's Compute Unified Device Architecture (CUDA) programming techniques on a GPU (Graphics Processing Unit) based cluster. The time complexity of the algorithm and the parallel implementation details on the clusters are discussed. We also show the performance scaling for increasing Hamiltonian matrix size and increasing number of nodes, respectively. The performance evaluation indicates that for a 32^3 Hamiltonian, a single GPU delivers performance equivalent to more than 30 CPU cores parallelized using MPI.
Running Out Of and Into Oil. Analyzing Global Oil Depletion and Transition Through 2050
Energy Technology Data Exchange (ETDEWEB)
Greene, David L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Hopson, Janet L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Li, Jia [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2003-10-01
This report presents a risk analysis of world conventional oil resource production, depletion, expansion, and a possible transition to unconventional oil resources such as oil sands, heavy oil and shale oil over the period 2000 to 2050. Risk analysis uses Monte Carlo simulation methods to produce a probability distribution of outcomes rather than a single value.
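The report's core point, that Monte Carlo risk analysis yields a probability distribution of outcomes rather than a single value, can be sketched with a toy model. The resource, production, and growth figures below are invented for illustration and are not taken from the report.

```python
import random
import statistics

def depletion_year(ultimate_resource, initial_production, growth_rate):
    """Year in which cumulative production exhausts the resource (toy model)."""
    cumulative, year, production = 0.0, 2000, initial_production
    while cumulative < ultimate_resource and year < 2100:
        cumulative += production
        production *= 1 + growth_rate
        year += 1
    return year

random.seed(42)
# Monte Carlo: draw uncertain inputs, build a distribution of outcomes
# rather than a single point estimate.
outcomes = [
    depletion_year(
        ultimate_resource=random.uniform(2000, 4000),   # Gb, assumed range
        initial_production=random.uniform(25, 30),      # Gb/yr, assumed range
        growth_rate=random.uniform(0.0, 0.03),          # assumed range
    )
    for _ in range(10_000)
]
print(statistics.quantiles(outcomes, n=4))  # quartiles of the depletion-year distribution
```

Reporting quantiles of the outcome distribution, rather than one deterministic year, is what distinguishes the risk-analysis framing.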
Adaptive Multilevel Monte Carlo Simulation
Hoel, H
2011-08-23
This work generalizes a multilevel forward Euler Monte Carlo method introduced by Giles (Oper. Res. 56(3):607–617, 2008) for the approximation of expected values depending on the solution to an Itô stochastic differential equation. Giles proposed and analyzed a forward Euler multilevel Monte Carlo method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a standard, single-level forward Euler Monte Carlo method. This work introduces an adaptive hierarchy of non-uniform time discretizations, generated by an adaptive algorithm introduced in (Dzougoutov et al., Lect. Notes Comput. Sci. Eng. 44:59–88, Springer, Berlin, 2005; Moon et al., Stoch. Anal. Appl. 23(3):511–558, 2005; Moon et al., Contemp. Math. 383:325–343, Amer. Math. Soc., Providence, RI, 2005). This form of the adaptive algorithm generates stochastic, path-dependent time steps and is based on a posteriori error expansions first developed by Szepessy et al. (Comm. Pure Appl. Math. 54(10):1169–1214, 2001). Our numerical results for a stopped diffusion problem exhibit savings in the computational cost to achieve an accuracy of O(TOL): from O(TOL^-3) for a single-level version of the adaptive algorithm to O((TOL^-1 log(TOL))^2).
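A minimal sketch of the uniform-hierarchy (Giles-style) multilevel forward Euler method that this work generalizes, applied to geometric Brownian motion. The drift, volatility, and per-level sample counts are illustrative choices, and the paper's adaptive, path-dependent time stepping is not reproduced here.

```python
import math
import random

def euler_level_diff(n_samples, level, T=1.0, x0=1.0, mu=0.05, sigma=0.2):
    """Mean of (fine - coarse) payoffs for GBM dX = mu X dt + sigma X dW.
    Level l uses 2**l uniform time steps; the coarse path (level l-1) is
    driven by the same Brownian increments, acting as a control variate."""
    nf = 2 ** level
    dt_f = T / nf
    total = 0.0
    for _ in range(n_samples):
        xf = xc = x0
        dw_coarse = 0.0
        for step in range(nf):
            dw = random.gauss(0.0, math.sqrt(dt_f))
            xf += mu * xf * dt_f + sigma * xf * dw
            dw_coarse += dw
            if level > 0 and step % 2 == 1:   # two fine steps = one coarse step
                xc += mu * xc * 2 * dt_f + sigma * xc * dw_coarse
                dw_coarse = 0.0
        total += xf - (xc if level > 0 else 0.0)
    return total / n_samples

random.seed(0)
# Telescoping sum: E[X_L] = E[X_0] + sum_l E[X_l - X_{l-1}],
# with more samples on the cheap coarse levels than on the fine ones.
estimate = sum(euler_level_diff(n, l) for l, n in enumerate([20000, 5000, 1000, 250]))
print(estimate)  # for GBM, the exact value is x0 * exp(mu*T) ≈ 1.0513
```

The variance of each correction term shrinks with level, which is why the fine levels need far fewer samples than a single-level estimator of the same accuracy.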
Dou, Tai H.; Min, Yugang; Neylon, John; Thomas, David; Kupelian, Patrick; Santhanam, Anand P.
2016-03-01
Deformable image registration (DIR) is an important step in radiotherapy treatment planning. An optimal input registration parameter set is critical to achieving the best registration performance with a specific algorithm. Methods: In this paper, we investigated a parameter optimization strategy for optical-flow based DIR of the 4DCT lung anatomy. A novel fast simulated annealing with adaptive Monte Carlo sampling algorithm (FSA-AMC) was investigated for solving the complex non-convex parameter optimization problem. The registration error metric for a given parameter set was computed as the landmark-based mean target registration error (mTRE) between a given volumetric image pair. To reduce the computational time of the parameter optimization process, a GPU-based 3D dense optical-flow algorithm was employed for registering the lung volumes. Numerical analyses of the parameter optimization for the DIR were performed using 4DCT datasets generated with breathing motion models and open-source 4DCT datasets. Results showed that the proposed method efficiently estimated the optimum parameters for optical flow and closely matched the best registration parameters obtained using an exhaustive parameter search.
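The simulated annealing core of such a parameter search can be sketched as follows. The objective below is a hypothetical stand-in for the landmark-based mTRE of a registration run, and the cooling schedule and step sizes are assumptions; the paper's adaptive Monte Carlo candidate sampling is not reproduced.

```python
import math
import random

def simulated_annealing(objective, x0, bounds, n_iter=2000, t0=1.0, seed=1):
    """Minimize `objective` over a box-bounded parameter set.
    A plain fast-annealing sketch: Gaussian proposals, Metropolis acceptance,
    and a 1/k cooling schedule."""
    rng = random.Random(seed)
    x, fx = list(x0), objective(x0)
    best, fbest = list(x), fx
    for k in range(1, n_iter + 1):
        temp = t0 / k                      # fast-annealing cooling schedule
        cand = [min(hi, max(lo, xi + rng.gauss(0, 0.1 * (hi - lo))))
                for xi, (lo, hi) in zip(x, bounds)]
        fc = objective(cand)
        # Accept improvements always; accept worse moves with Boltzmann probability
        if fc < fx or rng.random() < math.exp(-(fc - fx) / temp):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
    return best, fbest

# Hypothetical stand-in for the mTRE of a two-parameter registration run:
mtre = lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2 + 0.01
params, err = simulated_annealing(mtre, [0.5, 0.5], [(0.0, 1.0), (0.0, 1.0)])
```

In the paper's setting each objective evaluation is a full GPU registration run, which is why reducing the number of evaluations relative to an exhaustive search matters.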
Directory of Open Access Journals (Sweden)
P. Li
2013-01-01
The growth of global population and economies continually increases waste volumes and consequently creates challenges in handling and disposing of solid wastes. The challenge is greater in mixed rural-urban areas (i.e., areas of mixed land use for rural and urban purposes), where both agricultural waste (e.g., manure) and municipal solid waste are generated. The efficiency and confidence of decisions in current management practices rely heavily on accurate information and subjective judgments, which are usually compromised by uncertainties. This study proposed a resource-oriented solid waste management system for mixed rural-urban areas. The system features a novel Monte Carlo simulation-based fuzzy programming approach. The developed system was tested on a real-world case with consideration of various resource-oriented treatment technologies and the associated uncertainties. The modeling results indicated that community-based bio-coal and household-based CH4 facilities are necessary and would become predominant in the waste management system. The 95% confidence intervals of waste loadings to the CH4 and bio-coal facilities were [387, 450] and [178, 215] tonne/day (mixed flow), respectively. In general, the developed system has a high capability to support solid waste management for mixed rural-urban areas in a cost-efficient and sustainable manner under uncertainty.
Liu, Tianyu; Du, Xining; Ji, Wei; Xu, X. George; Brown, Forrest B.
2014-06-01
For nuclear reactor analysis such as the neutron eigenvalue calculations, the time consuming Monte Carlo (MC) simulations can be accelerated by using graphics processing units (GPUs). However, traditional MC methods are often history-based, and their performance on GPUs is affected significantly by the thread divergence problem. In this paper we describe the development of a newly designed event-based vectorized MC algorithm for solving the neutron eigenvalue problem. The code was implemented using NVIDIA's Compute Unified Device Architecture (CUDA), and tested on a NVIDIA Tesla M2090 GPU card. We found that although the vectorized MC algorithm greatly reduces the occurrence of thread divergence thus enhancing the warp execution efficiency, the overall simulation speed is roughly ten times slower than the history-based MC code on GPUs. Profiling results suggest that the slow speed is probably due to the memory access latency caused by the large amount of global memory transactions. Possible solutions to improve the code efficiency are discussed.
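The history-based versus event-based distinction can be illustrated schematically with a toy 1-D absorption/scattering walk (not the paper's neutron eigenvalue solver). The event-based variant processes one event type for the whole particle bank at once, which is the pattern that vectorizes well and avoids per-particle branch divergence.

```python
import numpy as np

rng = np.random.default_rng(0)
N, absorb_prob = 100_000, 0.3

def history_based():
    """Follow each particle history to completion: a divergent branch per particle."""
    collisions = np.zeros(N, dtype=int)
    for i in range(N):
        while rng.random() >= absorb_prob:   # survive this collision, keep walking
            collisions[i] += 1
    return collisions

def event_based():
    """Process one event type for all live particles at once (vectorizes well)."""
    collisions = np.zeros(N, dtype=int)
    alive = np.arange(N)
    while alive.size:
        survived = rng.random(alive.size) >= absorb_prob
        alive = alive[survived]              # absorbed particles leave the bank
        collisions[alive] += 1
    return collisions

# Both estimate the same physics: mean scattering collisions before
# absorption follows a geometric law with mean (1-p)/p ≈ 2.33.
mean_collisions = event_based().mean()
print(mean_collisions)
```

On a GPU, the event-based inner loop maps onto wide SIMD lanes, but as the abstract notes, the gather/scatter of the particle bank generates the global-memory traffic that can dominate the runtime.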
Depleting depletion: Polymer swelling in poor solvent mixtures
Mukherji, Debashish; Marques, Carlos; Stuehn, Torsten; Kremer, Kurt
A polymer collapses in a solvent when the solvent particles dislike monomers more than the repulsion between monomers. This leads to an effective attraction between monomers, also referred to as depletion induced attraction. This attraction is the key factor behind standard polymer collapse in poor solvents. Strikingly, even if a polymer exhibits poor solvent condition in two different solvents, it can also swell in mixtures of these two poor solvents. This collapse-swelling-collapse scenario is displayed by poly(methyl methacrylate) (PMMA) in aqueous alcohol. Using molecular dynamics simulations of a thermodynamically consistent generic model and theoretical arguments, we unveil the microscopic origin of this phenomenon. Our analysis suggests that a subtle interplay of the bulk solution properties and the local depletion forces reduces depletion effects, thus dictating polymer swelling in poor solvent mixtures.
Smart detectors for Monte Carlo radiative transfer
Baes, Maarten
2008-01-01
Many optimization techniques have been invented to reduce the noise that is inherent in Monte Carlo radiative transfer simulations. As the typical detectors used in Monte Carlo simulations do not take into account all the information contained in the impacting photon packages, there is still room to optimize this detection process and the corresponding estimate of the surface brightness distributions. We want to investigate how all the information contained in the distribution of impacting photon packages can be optimally used to decrease the noise in the surface brightness distributions and hence to increase the efficiency of Monte Carlo radiative transfer simulations. We demonstrate that the estimate of the surface brightness distribution in a Monte Carlo radiative transfer simulation is similar to the estimate of the density distribution in an SPH simulation. Based on this similarity, a recipe is constructed for smart detectors that take full advantage of the exact location of the impact of the photon pack...
Monte Carlo methods for particle transport
Haghighat, Alireza
2015-01-01
The Monte Carlo method has become the de facto standard in radiation transport. Although powerful, if not understood and used appropriately, the method can give misleading results. Monte Carlo Methods for Particle Transport teaches appropriate use of the Monte Carlo method, explaining the method's fundamental concepts as well as its limitations. Concise yet comprehensive, this well-organized text: * Introduces the particle importance equation and its use for variance reduction * Describes general and particle-transport-specific variance reduction techniques * Presents particle transport eigenvalue issues and methodologies to address these issues * Explores advanced formulations based on the author's research activities * Discusses parallel processing concepts and factors affecting parallel performance Featuring illustrative examples, mathematical derivations, computer algorithms, and homework problems, Monte Carlo Methods for Particle Transport provides nuclear engineers and scientists with a practical guide ...
Mainhagu, J.; Brusseau, M. L.
2016-09-01
The mass of contaminant present at a site, particularly in the source zones, is one of the key parameters for assessing the risk posed by contaminated sites, and for setting and evaluating remediation goals and objectives. This quantity is rarely known and is challenging to estimate accurately. This work investigated the efficacy of fitting mass-depletion functions to temporal contaminant mass discharge (CMD) data as a means of estimating initial mass. Two common mass-depletion functions, exponential and power functions, were applied to historic soil vapor extraction (SVE) CMD data collected from 11 contaminated sites for which the SVE operations are considered to be at or close to essentially complete mass removal. The functions were applied to the entire available data set for each site, as well as to the early-time data (the initial 1/3 of the data available). Additionally, a complete differential-time analysis was conducted. The latter two analyses were conducted to investigate the impact of limited data on method performance, given that the primary mode of application would be to use the method during the early stages of a remediation effort. The estimated initial masses were compared to the total masses removed for the SVE operations. The mass estimates obtained from application to the full data sets were reasonably similar to the measured masses removed for both functions (13 and 15% mean error). The use of the early-time data resulted in a minimally higher variation for the exponential function (17%) but a much higher error (51%) for the power function. These results suggest that the method can produce reasonable estimates of initial mass useful for planning and assessing remediation efforts.
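A minimal sketch of the exponential-function variant of the method, using synthetic CMD data; the rate constant, initial discharge, and noise level below are invented for illustration and are not the sites' data.

```python
import numpy as np

# Synthetic CMD record: discharge decays exponentially, C(t) = C0 * exp(-k*t).
# C0, k, and the noise level are assumed values for this sketch.
rng = np.random.default_rng(3)
t = np.arange(0.0, 36.0)                       # months of SVE operation
C0, k = 120.0, 0.15                            # kg/month and 1/month (assumed)
cmd = C0 * np.exp(-k * t) * rng.normal(1.0, 0.02, t.size)

# Log-linear least squares recovers k and C0 from the CMD series alone.
slope, intercept = np.polyfit(t, np.log(cmd), 1)
k_hat, C0_hat = -slope, np.exp(intercept)

# Integrating C(t) from 0 to infinity gives the initial-mass estimate
# for the exponential model: M0 = C0 / k.
M0_hat = C0_hat / k_hat
print(M0_hat)   # the synthetic ground truth is 120/0.15 = 800 kg
```

Fitting only the first third of the series, as the study does to mimic early-time application, amounts to truncating `t` and `cmd` before the fit.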
2009-01-01
Carlo Rubbia turned 75 on March 31, and CERN held a symposium to mark his birthday and pay tribute to his impressive contribution to both CERN and science. Carlo Rubbia, 4th from right, together with the speakers at the symposium.On 7 April CERN hosted a celebration marking Carlo Rubbia’s 75th birthday and 25 years since he was awarded the Nobel Prize for Physics. "Today we will celebrate 100 years of Carlo Rubbia" joked CERN’s Director-General, Rolf Heuer in his opening speech, "75 years of his age and 25 years of the Nobel Prize." Rubbia received the Nobel Prize along with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. During the symposium, which was held in the Main Auditorium, several eminent speakers gave lectures on areas of science to which Carlo Rubbia made decisive contributions. Among those who spoke were Michel Spiro, Director of the French National Insti...
Tseung, H Wan Chan; Kreofsky, C R; Ma, D; Beltran, C
2016-01-01
Purpose: To demonstrate the feasibility of fast Monte Carlo (MC) based inverse biological planning for the treatment of head and neck tumors in spot-scanning proton therapy. Methods: Recently, a fast and accurate Graphics Processor Unit (GPU)-based MC simulation of proton transport was developed and used as the dose calculation engine in a GPU-accelerated IMPT optimizer. Besides dose, the dose-averaged linear energy transfer (LETd) can be simultaneously scored, which makes biological dose (BD) optimization possible. To convert from LETd to BD, a linear relation was assumed. Using this novel optimizer, inverse biological planning was applied to 4 patients: 2 small and 1 large thyroid tumor targets, and 1 glioma case. To create these plans, constraints were placed to maintain the physical dose (PD) within 1.25 times the prescription while maximizing target BD. For comparison, conventional IMRT and IMPT plans were created for each case in Eclipse (Varian, Inc). The same critical structure PD constraints were use...
Energy Technology Data Exchange (ETDEWEB)
Craig Kruschwitz, Ming Wu, Ken Moy, Greg Rochau
2008-10-31
We present here results of continued efforts to understand the performance of microchannel plate (MCP)–based, high-speed, gated, x-ray detectors. This work involves the continued improvement of a Monte Carlo simulation code to describe MCP performance coupled with experimental efforts to better characterize such detectors. Our goal is a quantitative description of MCP saturation behavior in both static and pulsed modes. We have developed a new model of charge buildup on the walls of the MCP channels and measured its effect on MCP gain. The results are compared to experimental data obtained with a short-pulse, high-intensity ultraviolet laser; these results clearly demonstrate MCP saturation behavior in both DC and pulsed modes. The simulations compare favorably to the experimental results. The dynamic range of the detectors in pulsed operation is of particular interest when fielding an MCP–based camera. By adjusting the laser flux we study the linear range of the camera. These results, too, are compared to our simulations.
Mountris, K. A.; Bert, J.; Noailly, J.; Rodriguez Aguilera, A.; Valeri, A.; Pradier, O.; Schick, U.; Promayon, E.; Gonzalez Ballester, M. A.; Troccaz, J.; Visvikis, D.
2017-03-01
Prostate volume changes due to edema occurrence during transperineal permanent brachytherapy should be taken into consideration to ensure optimal dose delivery. Available edema models, based on prostate volume observations, face several limitations. Therefore, patient-specific models need to be developed to accurately account for the impact of edema. In this study we present a biomechanical model developed to reproduce edema resolution patterns documented in the literature. Using the biphasic mixture theory and finite element analysis, the proposed model takes into consideration the mechanical properties of the pubic area tissues in the evolution of prostate edema. The model's computed deformations are incorporated in a Monte Carlo simulation to investigate their effect on post-operative dosimetry. The comparison of Day1 and Day30 dosimetry results demonstrates the capability of the proposed model for patient-specific dosimetry improvements, considering the edema dynamics. The proposed model shows excellent ability to reproduce previously described edema resolution patterns and was validated based on previous findings. According to our results, for a prostate volume increase of 10–20% the Day30 urethra D10 dose metric is higher by 4.2%–10.5% compared to the Day1 value. The introduction of the edema dynamics in Day30 dosimetry reveals a significant global dose overestimation in the conventional static Day30 dosimetry. In conclusion, the proposed edema biomechanical model can improve the treatment planning of transperineal permanent brachytherapy by accounting for post-implant dose alterations during the planning procedure.
Babilas, Rafał; Mariola, Kądziołka-Gaweł; Burian, Andrzej; Temleitner, László
2016-05-01
Selected soft magnetic amorphous alloys Fe80B20, Fe70Nb10B20 and Fe62Nb8B30 were produced by melt-spinning and characterized by X-ray diffraction (XRD), transmission Mössbauer spectroscopy (MS), Reverse Monte Carlo (RMC) modeling and relative magnetic permeability measurements. Mössbauer spectroscopy allowed us to study the local environments of the Fe-centered atoms in the amorphous structure of the binary and ternary glassy alloys. The MS also provided information about changes in the amorphous structure due to modification of the chemical composition by varying the boron and niobium content. RMC simulation based on the structure factors determined by synchrotron XRD measurements was also used to model the atomic arrangements and short-range order in the Fe-based model alloys. The addition of boron and niobium in the ternary model alloys affected the disorder in the as-cast state and consequently influenced the number of nearest-neighbor Fe-Fe atoms. The distributions of Fe- and B-centered coordination numbers showed that N = 10, 9 and 8 dominate around Fe atoms and N = 9, 8 and 7 have the largest population around B atoms in the examined amorphous alloys. Moreover, the relationship between the content of the alloying elements, the local atomic ordering and the magnetic permeability (magnetic after-effects) was discussed.
Analysis of Investment Risk Based on Monte Carlo Method
Institute of Scientific and Technical Information of China (English)
王霞; 张本涛; 马庆
2011-01-01
Using economic net present value (NPV) as the evaluation index, this paper measures the risk of investment projects. The probability distribution of each influencing factor is determined, and a stochastic risk-evaluation model based on the triangular distribution is established. The model is simulated with the Monte Carlo method and implemented in MATLAB, yielding the frequency-distribution histogram and cumulative frequency curve of the project's net present value. Statistical analysis of the simulation results gives the mean predicted NPV and the risk rate, providing a theoretical basis for evaluating the risk of investment projects.
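The workflow this abstract describes (triangular input distributions, Monte Carlo sampling, NPV statistics and risk rate) can be sketched in a few lines. The original work used MATLAB; this is an illustrative Python equivalent with invented cash-flow figures, discount rate and investment outlay, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000          # number of Monte Carlo trials
rate = 0.10          # discount rate (assumed for illustration)
invest = 900.0       # initial outlay (assumed)

# Each yearly net cash flow drawn from triangular(min, mode, max) -- invented values.
years = 5
cash = rng.triangular(180.0, 260.0, 340.0, size=(N, years))

# Discount each year's cash flow and sum to the net present value per trial.
discount = (1.0 + rate) ** -np.arange(1, years + 1)
npv = cash @ discount - invest

print(f"mean NPV : {npv.mean():8.1f}")
print(f"risk rate: {np.mean(npv < 0.0):6.3f}")   # estimated P(NPV < 0)
```

A histogram of `npv` and the cumulative frequency curve follow directly from `np.histogram`, matching the paper's two plots.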
Zhang, Rong; Verkruysse, Wim; Aguilar, Guillermo; Nelson, J Stuart
2005-09-07
Both diffusion approximation (DA) and Monte Carlo (MC) models have been used to simulate light distribution in multilayered human skin with or without discrete blood vessels. However, no detailed comparison of the light distribution, heat generation and induced thermal damage between these two models has been done for discrete vessels. Three models were constructed: (1) MC-based finite element method (FEM) model, referred to as MC-FEM; (2) DA-based FEM with simple scaling factors according to chromophore concentrations (SFCC) in the epidermis and vessels, referred to as DA-FEM-SFCC; and (3) DA-FEM with improved scaling factors (ISF) obtained by equalizing the total light energy depositions that are solved from the DA and MC models in the epidermis and vessels, respectively, referred to as DA-FEM-ISF. The results show that DA-FEM-SFCC underestimates the light energy deposition in the epidermis and vessels when compared to MC-FEM. The difference is nonlinearly dependent on wavelength, dermal blood volume fraction, vessel size and depth, etc. Thus, the temperature and damage profiles are also dramatically different. DA-FEM-ISF achieves much better results in calculating heat generation and induced thermal damage when compared to MC-FEM, and has the advantages of both calculation speed and accuracy. The disadvantage is that a multidimensional ISF table is needed for DA-FEM-ISF to be a practical modelling tool.
Directory of Open Access Journals (Sweden)
Biniam Yohannes Tesfamicael
2014-03-01
Purpose: To construct a dose monitoring system based on an endorectal balloon coupled to thin scintillating fibers to study the dose to the rectum in proton therapy of prostate cancer. Method: The Geant4 Monte Carlo toolkit was used to simulate proton therapy of prostate cancer, with an endorectal balloon and a set of scintillating fibers for immobilization and dosimetry measurements, respectively. Results: A linear response of the fibers to the dose delivered was observed to within less than 2%. Fibers close to the prostate recorded a higher dose, with the closest fiber recording about one-third of the dose to the target. A 1/r2 decrease (r is defined as the center-to-center distance between the prostate and the fibers) was observed going toward the frontal and distal regions. A very low dose was recorded by the fibers beneath the balloon, a clear indication that the overall volume of the rectal wall exposed to a higher dose is relatively minimized. Further analysis showed a relatively linear relationship between the dose to the target and the dose to the top fibers (17 in total), with a slope of (-0.07 ± 0.07) at a large number of events per degree of rotation of the modulator wheel (i.e., dose). Conclusion: Thin (1 mm × 1 mm), long (1 m) scintillating fibers were found to be ideal for real-time in vivo dose measurement to the rectum during proton therapy of prostate cancer. The linear response of the fibers to the dose delivered makes them good candidates as dosimeters. With thorough calibration and the ability to define a good correlation between the dose to the target and the dose to the fibers, such dosimeters can be used for real-time dose verification to the target. Cite this article as: Tesfamicael BY, Avery S, Gueye P, Lyons D, Mahesh M. Scintillating fiber based in-vivo dose monitoring system to the rectum in proton therapy of prostate cancer: A Geant4 Monte Carlo
Rotational Mixing and Lithium Depletion
Pinsonneault, M H
2010-01-01
I review basic observational features in Population I stars which strongly implicate rotation as a mixing agent; these include dispersion at fixed temperature in coeval populations and main sequence lithium depletion for a range of masses at a rate which decays with time. New developments related to the possible suppression of mixing at late ages, close binary mergers and their lithium signature, and an alternate origin for dispersion in young cool stars tied to radius anomalies observed in active young stars are discussed. I highlight uncertainties in models of Population II lithium depletion and dispersion related to the treatment of angular momentum loss. Finally, the origins of rotation are tied to conditions in the pre-main sequence, and there is thus some evidence that environment and planet formation could impact stellar rotational properties. This may be related to recent observational evidence for cluster to cluster variations in lithium depletion and a connection between the presence of planets and s...
Reducing quasi-ergodicity in a double well potential by Tsallis Monte Carlo simulation
Iwamatsu, Masao; Okabe, Yutaka
2000-01-01
A new Monte Carlo scheme based on the system of Tsallis's generalized statistical mechanics is applied to a simple double well potential to calculate the canonical thermal average of potential energy. Although we observed serious quasi-ergodicity when using the standard Metropolis Monte Carlo algorithm, this problem is largely reduced by the use of the new Monte Carlo algorithm. Therefore the ergodicity is guaranteed even for short Monte Carlo steps if we use this new canonical Monte Carlo sc...
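A minimal sketch of the idea may help: Metropolis sampling in a double well where the Boltzmann factor exp(−βE) is replaced by the generalized Tsallis weight [1 − (1−q)βE]^(1/(1−q)), whose heavy tails make barrier crossings far more frequent. The potential, the value of q, the temperature and the step size below are illustrative choices, not parameters taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def V(x):                       # symmetric double well with minima at x = +/-1
    return (x * x - 1.0) ** 2

def tsallis_run(q, beta=5.0, steps=200_000, step=0.5):
    """Metropolis sampling with the generalized Tsallis weight
    W(E) = [1 - (1-q)*beta*E]^(1/(1-q));  q -> 1 recovers exp(-beta*E)."""
    def weight(E):
        if abs(q - 1.0) < 1e-12:
            return np.exp(-beta * E)
        base = 1.0 - (1.0 - q) * beta * E
        return base ** (1.0 / (1.0 - q)) if base > 0.0 else 0.0

    x, visits = -1.0, [0, 0]    # start in the left well; count well occupations
    for _ in range(steps):
        xp = x + rng.uniform(-step, step)
        if rng.random() < min(1.0, weight(V(xp)) / weight(V(x))):
            x = xp
        visits[int(x > 0.0)] += 1
    return visits

left, right = tsallis_run(q=1.5)
frac = right / (left + right)
print(f"fraction of time in the right well (q=1.5): {frac:.3f}")
```

By symmetry the exact occupation fraction is 0.5; with q = 1 the same chain tends to stay trapped in the starting well, which is the quasi-ergodicity the abstract refers to.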
Replacements For Ozone-Depleting Foaming Agents
Blevins, Elana; Sharpe, Jon B.
1995-01-01
Fluorinated ethers used in place of chlorofluorocarbons and hydrochlorofluorocarbons. Replacement necessary because CFC's and HCFC's found to contribute to depletion of ozone from upper atmosphere, and manufacture and use of them by law phased out in near future. Two fluorinated ethers do not have ozone-depletion potential and used in existing foam-producing equipment, designed to handle liquid blowing agents soluble in chemical ingredients that mixed to make foam. Any polyurethane-based foams and several cellular plastics blown with these fluorinated ethers used in processes as diverse as small batch pours, large sprays, or double-band lamination to make insulation for private homes, commercial buildings, shipping containers, and storage tanks. Fluorinated ethers proved useful as replacements for CFC refrigerants and solvents.
Institute of Scientific and Technical Information of China (English)
徐小波; 张鹤鸣; 胡辉勇
2011-01-01
The SiGe heterojunction bipolar transistor (HBT) on thin-film SOI is successfully integrated with SOI CMOS by a "folded collector". This paper deals with the collector depletion charge and capacitance of this structure. Based on our previous work and the actual operating conditions of the device, the depletion charge and capacitance models are extended and optimized. The results show that the charge model is smoother, and that the capacitance model, which accounts for different current-flow areas, consists of the vertical and lateral depletion capacitances in series. In the fully depleted operating mode the SOI device exhibits a smaller collector depletion capacitance than a conventional bulk device, and therefore a larger forward Early voltage. At the bias point where operation changes from the vertical to the lateral mode, the trends of the depletion charge and capacitance change. This collector depletion charge and capacitance model for the vertical SiGe HBT on thin-film SOI provides a valuable reference for the design of key bipolar device parameters, such as Early voltage and transit frequency, in millimeter-wave SOI BiCMOS technology, including the latest 0.13 μm SOI BiCMOS process.
Pain, F.; Dhenain, M.; Gurden, H.; Routier, A. L.; Lefebvre, F.; Mastrippolito, R.; Lanièce, P.
2008-10-01
The β-microprobe is a simple and versatile technique complementary to small animal positron emission tomography (PET). It relies on local measurements of the concentration of positron-labeled molecules. So far, it has been successfully used in anesthetized rats for pharmacokinetics experiments and for the study of brain energetic metabolism. However, the ability of the technique to provide accurate quantitative measurements using 18F, 11C and 15O tracers is likely to suffer from the contribution of the 511 keV gamma-ray background to the signal and from the contribution of positrons from brain loci surrounding the locus of interest. The aim of the present paper is to provide a method of evaluating several parameters, which are supposed to affect the quantification of recordings performed in vivo with this methodology. We have developed realistic voxelized phantoms of the rat whole body and brain, and used them as input geometries for Monte Carlo simulations of previous β-microprobe reports. In the context of realistic experiments (binding of 11C-Raclopride to D2 dopaminergic receptors in the striatum; local glucose metabolic rate measurement with 18F-FDG and H215O blood flow measurements in the somatosensory cortex), we have calculated the detection efficiencies and corresponding contribution of 511 keV gammas from peripheral organs accumulation. We confirmed that the 511 keV gamma-ray background does not impair quantification. To evaluate the contribution of positrons from adjacent structures, we have developed β-Assistant, a program based on a rat brain voxelized atlas and matrices of local detection efficiencies calculated by Monte Carlo simulations for several probe geometries. This program was used to calculate the 'apparent sensitivity' of the probe for each brain structure included in the detection volume. For a given localization of a probe within the brain, this allows us to quantify the different sources of beta signal. Finally, since stereotaxic accuracy is
Pineda Rojas, Andrea L.; Venegas, Laura E.; Mazzeo, Nicolás A.
2016-09-01
A simple urban air quality model [MODelo de Dispersión Atmosférica Urbana - Generic Reaction Set (DAUMOD-GRS)] was recently developed. One-hour peak O3 concentrations in the Metropolitan Area of Buenos Aires (MABA) during the summer estimated with the DAUMOD-GRS model have shown values lower than 20 ppb (the regional background concentration) in the urban area and levels greater than 40 ppb in its surroundings. Due to the lack of measurements outside the MABA, these relatively high modelled ozone concentrations constitute the only estimate for the area. In this work, a methodology based on Monte Carlo analysis is implemented to evaluate the uncertainty in these modelled concentrations associated with possible errors in the model input data. Results show that the larger 1-h peak O3 levels in the MABA during the summer present larger uncertainties (up to 47 ppb). On the other hand, multiple linear regression analysis is applied at selected receptors in order to identify the variables explaining most of the obtained variance. Although their relative contributions vary spatially, the uncertainty of the regional background O3 concentration dominates at all the analysed receptors (34.4-97.6%), indicating that improving its estimation would enhance the ability of the model to simulate peak O3 concentrations in the MABA.
Montanari, Davide; Silvestri, Chiara; Graves, Yan J; Yan, Hao; Cervino, Laura; Rice, Roger; Jiang, Steve B; Jia, Xun
2013-01-01
Cone beam CT (CBCT) has been widely used for patient setup in image guided radiation therapy (IGRT). Radiation dose from CBCT scans has become a clinical concern. The purposes of this study are 1) to commission a GPU-based Monte Carlo (MC) dose calculation package gCTD for Varian On-Board Imaging (OBI) system and test the calculation accuracy, and 2) to quantitatively evaluate CBCT dose from the OBI system in typical IGRT scan protocols. We first conducted dose measurements in a water phantom. X-ray source model parameters used in gCTD are obtained through a commissioning process. gCTD accuracy is demonstrated by comparing calculations with measurements in water and in CTDI phantoms. 25 brain cancer patients are used to study dose in a standard-dose head protocol, and 25 prostate cancer patients are used to study dose in pelvis protocol and pelvis spotlight protocol. Mean dose to each organ is calculated. Mean dose to 2% voxels that have the highest dose is also computed to quantify the maximum dose. It is fo...
Shao, Dongguo; Yang, Haidong; Xiao, Yi; Liu, Biyu
2014-01-01
A new method is proposed based on the finite difference method (FDM), a differential evolution algorithm and Markov Chain Monte Carlo (MCMC) simulation to identify the water quality model parameters of an open channel in a long-distance water transfer project. First, the parameter identification problem is cast as a Bayesian estimation problem: the forward numerical model is solved by FDM and the posterior probability density function of the parameters is deduced. The parameters are then estimated using a sampling method that combines the differential evolution algorithm with MCMC simulation. Finally, the proposed method is compared with FDM-MCMC in a twin experiment. The results show that, compared with FDM-MCMC, the proposed method identifies the water quality model parameters better under different scenarios, with fewer iterations, higher reliability and better anti-noise capability. It therefore provides a new idea and method for solving the traceability problem in sudden water pollution accidents.
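The Bayesian-estimation core of such a method can be sketched generically. Everything below is a placeholder for illustration: the forward model is reduced to first-order pollutant decay C(t) = C0·exp(−kt) with a flat prior and a plain random-walk Metropolis sampler, whereas the paper's forward model is a finite-difference water quality model and its sampler combines differential evolution with MCMC.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical forward model: first-order decay, C(t) = C0 * exp(-k t).
C0, true_k, sigma = 10.0, 0.4, 0.3
t = np.linspace(0.0, 8.0, 25)
data = C0 * np.exp(-true_k * t) + rng.normal(0.0, sigma, t.size)

def log_post(k):
    """Gaussian likelihood with a flat prior on k > 0."""
    if k <= 0.0:
        return -np.inf
    resid = data - C0 * np.exp(-k * t)
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

# Random-walk Metropolis sampling of the posterior over the decay coefficient k.
k, lp, chain = 1.0, log_post(1.0), []
for i in range(20_000):
    kp = k + rng.normal(0.0, 0.05)
    lpp = log_post(kp)
    if np.log(rng.random()) < lpp - lp:
        k, lp = kp, lpp
    if i >= 5_000:                       # discard burn-in
        chain.append(k)

print(f"posterior mean of k: {np.mean(chain):.3f}")
```

In the twin-experiment setting of the paper, the "data" are likewise generated from known parameters so that the recovered posterior can be checked against the truth.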
Sahu, Nityananda; Gadre, Shridhar R; Rakshit, Avijit; Bandyopadhyay, Pradipta; Miliordos, Evangelos; Xantheas, Sotiris S
2014-10-28
We report new global minimum candidate structures for the (H2O)25 cluster that are lower in energy than the ones reported previously and correspond to hydrogen bonded networks with 42 hydrogen bonds and an interior, fully coordinated water molecule. These were obtained as a result of a hierarchical approach based on initial Monte Carlo Temperature Basin Paving sampling of the cluster's Potential Energy Surface with the Effective Fragment Potential, subsequent geometry optimization using the Molecular Tailoring Approach with the fragments treated at the second order Møller-Plesset (MP2) perturbation (MTA-MP2) and final refinement of the entire cluster at the MP2 level of theory. The MTA-MP2 optimized cluster geometries, constructed from the fragments, were found to be within <0.5 kcal/mol from the minimum geometries obtained from the MP2 optimization of the entire (H2O)25 cluster. In addition, the grafting of the MTA-MP2 energies yields electronic energies that are within <0.3 kcal/mol from the MP2 energies of the entire cluster while preserving their energy rank order. Finally, the MTA-MP2 approach was found to reproduce the MP2 harmonic vibrational frequencies, constructed from the fragments, quite accurately when compared to the MP2 ones of the entire cluster in both the HOH bending and the OH stretching regions of the spectra.
Energy Technology Data Exchange (ETDEWEB)
Kurosu, Keita [Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, Suita, Osaka 565-0871 (Japan); Department of Radiation Oncology, Osaka University Graduate School of Medicine, Suita, Osaka 565-0871 (Japan); Takashina, Masaaki; Koizumi, Masahiko [Department of Medical Physics and Engineering, Osaka University Graduate School of Medicine, Suita, Osaka 565-0871 (Japan); Das, Indra J. [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN 46202 (United States); Moskvin, Vadim P., E-mail: vadim.p.moskvin@gmail.com [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN 46202 (United States)
2014-10-01
Although three general-purpose Monte Carlo (MC) simulation tools, Geant4, FLUKA and PHITS, have been used extensively, differences in calculation results have been reported. The major causes are the implementation of the physical model, the preset value of the ionization potential, and the definition of the maximum step size. In order to achieve artifact-free MC simulation, an optimized parameter list for each simulation system is required. Several authors have already proposed optimized lists, but those studies were performed with simple systems such as a water phantom alone. Since particle beams undergo transport, interaction and electromagnetic processes during beam delivery, establishing an optimized parameter list for the whole beam delivery system is therefore of major importance. The purpose of this study was to determine the optimized parameter list for GATE and PHITS using a computational model of a proton treatment nozzle. The simulation was performed with a broad scanning proton beam. The influence of the customizing parameters on the percentage depth dose (PDD) profile and the proton range was investigated by comparison with the FLUKA results, and the optimal parameters were then determined. The PDD profile and the proton range obtained from our optimized parameter list showed different characteristics from the results obtained with the simple system. This led to the conclusion that the physical model, particle transport mechanics and different geometry-based descriptions need accurate customization when planning computational experiments for artifact-free MC simulation.
Kadoura, Ahmad Salim
2013-06-01
In this work, a method to estimate solid elemental sulfur solubility in pure and gas mixtures using Monte Carlo (MC) molecular simulation is proposed. This method is based on Isobaric-Isothermal (NPT) ensemble and the Widom insertion technique for the gas phase and a continuum model for the solid phase. This method avoids the difficulty of having to deal with high rejection rates that are usually encountered when simulating using Gibbs ensemble. The application of this method is tested with a system made of pure hydrogen sulfide gas (H2S) and solid elemental sulfur. However, this technique may be used for other solid-vapor systems provided the fugacity of the solid phase is known (e.g., through experimental work). Given solid fugacity at the desired pressure and temperature, the mole fraction of the solid dissolved in gas that would be in chemical equilibrium with the solid phase might be obtained. In other words a set of MC molecular simulation experiments is conducted on a single box given the pressure and temperature and for different mole fractions of the solute. The fugacity of the gas mixture is determined using the Widom insertion method and is compared with that predetermined for the solid phase until one finds the mole fraction which achieves the required fugacity. In this work, several examples of MC have been conducted and compared with experimental data. The Lennard-Jones parameters related to the sulfur molecule model (ɛ, σ) have been optimized to achieve better match with the experimental work.
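The Widom insertion step at the heart of this method can be demonstrated on a toy system with a known answer. The sketch below is not the paper's model of H2S and sulfur: the "solvent" is an uncorrelated ideal gas and the test particle a hard sphere, chosen so that the Widom average ⟨exp(−βΔU)⟩ has the exact value exp(−ρ·4πr³/3) to check against.

```python
import numpy as np

rng = np.random.default_rng(3)

# Widom test-particle insertion: beta * mu_excess = -ln < exp(-beta * dU) >.
# Toy system with an exact answer: the "solvent" is an ideal gas (uncorrelated
# points), the test particle a hard sphere of radius r, so
# <exp(-beta*dU)> = P(no solvent point within r) = exp(-rho * 4/3 * pi * r^3).
L, N, r = 10.0, 300, 0.5          # box length, solvent count, probe radius
rho = N / L ** 3

def widom_boltzmann_factor(n_insert=20_000):
    hits = 0
    for _ in range(n_insert):
        solvent = rng.uniform(0.0, L, (N, 3))     # fresh ideal-gas snapshot
        probe = rng.uniform(0.0, L, 3)            # random insertion point
        d = solvent - probe
        d -= L * np.round(d / L)                  # minimum-image convention
        if np.all(np.einsum('ij,ij->i', d, d) > r * r):
            hits += 1                             # no overlap: exp(-beta*dU) = 1
    return hits / n_insert                        # overlap contributes 0

est = -np.log(widom_boltzmann_factor())
exact = rho * 4.0 / 3.0 * np.pi * r ** 3
print(f"beta*mu_excess  Widom: {est:.3f}   exact: {exact:.3f}")
```

With a Lennard-Jones solvent, as in the paper, the hard-sphere test `exp(-beta*dU) in {0, 1}` is replaced by the actual Boltzmann factor of the insertion energy, and the fugacity follows from the chemical potential.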
Global depletion of groundwater resources
Wada, Y.; Beek, L.P.H. van; van Kempen, C.M.; Reckman, J.W.T.M.; Vasak, S.; Bierkens, M.F.P.
2010-01-01
In regions with frequent water stress and large aquifer systems groundwater is often used as an additional water source. If groundwater abstraction exceeds the natural groundwater recharge for extensive areas and long times, overexploitation or persistent groundwater depletion occurs. Here we provid
Energy Technology Data Exchange (ETDEWEB)
Jin, L; Eldib, A; Li, J; Price, R; Ma, C [Fox Chase Cancer Center, Philadelphia, PA (United States)
2015-06-15
Purpose: Uneven nose surfaces, air cavities underneath, and the use of bolus introduce complexity and dose uncertainty when a single electron energy beam is used to plan treatments of nose skin with a pencil beam-based planning system. This work demonstrates more accurate dose calculation and more optimal planning using energy- and intensity-modulated electron radiotherapy (MERT) delivered with a pMLC. Methods: An in-house developed Monte Carlo (MC)-based dose calculation/optimization planning system was employed for treatment planning. Phase space data (6, 9, 12 and 15 MeV) were used as the input source for MC dose calculations for the linac. To reduce the scatter-caused penumbra, a short SSD (61 cm) was used. Our previous work demonstrates good agreement in percentage depth dose and off-axis dose between calculations and film measurements for various field sizes. A MERT plan was generated for treating the nose skin using a patient geometry, and a dose volume histogram (DVH) was obtained. The work also compares the 2D dose distributions of a clinically used conventional single-electron-energy plan and the MERT plan. Results: The MERT plan resulted in improved target dose coverage compared to the conventional plan, which demonstrated a target dose deficit at the field edge. The conventional plan delivered a higher dose to normal tissue underneath the nose skin, while the MERT plan improved conformity and thus reduced the normal tissue dose. Conclusion: This preliminary work illustrates that MC-based MERT planning is a promising technique for treating nose skin, not only providing more accurate dose calculation, but also offering improved target dose coverage and conformity. In addition, this technique may eliminate the necessity of bolus, which often produces dose delivery uncertainty due to the air gaps that may exist between the bolus and the skin.
Energy Technology Data Exchange (ETDEWEB)
Guberina, Nika; Suntharalingam, Saravanabavaan; Nassenstein, Kai; Forsting, Michael; Theysohn, Jens; Wetter, Axel; Ringelstein, Adrian [University Hospital Essen, Institute of Diagnostic and Interventional Radiology and Neuroradiology, Essen (Germany)
2016-10-15
The aim of this study was to verify the results of a dose monitoring software tool based on Monte Carlo Simulation (MCS) in assessment of eye lens doses for cranial CT scans. In cooperation with the Federal Office for Radiation Protection (Neuherberg, Germany), phantom measurements were performed with thermoluminescence dosimeters (TLD LiF:Mg,Ti) using cranial CT protocols: (I) CT angiography; (II) unenhanced, cranial CT scans with gantry angulation at a single and (III) without gantry angulation at a dual source CT scanner. Eye lens doses calculated by the dose monitoring tool based on MCS and assessed with TLDs were compared. Eye lens doses are summarized as follows: (I) CT angiography (a) MCS 7 mSv, (b) TLD 5 mSv; (II) unenhanced, cranial CT scan with gantry angulation, (c) MCS 45 mSv, (d) TLD 5 mSv; (III) unenhanced, cranial CT scan without gantry angulation (e) MCS 38 mSv, (f) TLD 35 mSv. Intermodality comparison shows an inaccurate calculation of eye lens doses in unenhanced cranial CT protocols at the single source CT scanner due to the disregard of gantry angulation. On the contrary, the dose monitoring tool showed an accurate calculation of eye lens doses at the dual source CT scanner without gantry angulation and for CT angiography examinations. The dose monitoring software tool based on MCS gave accurate estimates of eye lens doses in cranial CT protocols. However, knowledge of protocol and software specific influences is crucial for correct assessment of eye lens doses in routine clinical use. (orig.)
Monte Carlo Techniques for Nuclear Systems - Theory Lectures
Energy Technology Data Exchange (ETDEWEB)
Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Methods, Codes, and Applications Group; Univ. of New Mexico, Albuquerque, NM (United States). Nuclear Engineering Dept.
2016-11-29
These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations
Leonardo Rossi
Carlo Caso (1940 - 2007) Our friend and colleague Carlo Caso passed away on July 7th, after several months of a courageous fight against cancer. Carlo spent most of his scientific career at CERN, taking an active part in the experimental programme of the laboratory. His long and fruitful involvement in particle physics started in the sixties, in the Genoa group led by G. Tomasini. He then carried out several experiments using the CERN liquid hydrogen bubble chambers (first the 2000HBC and later BEBC) to study various facets of the production and decay of meson and baryon resonances. He later formed his own group and joined the NA27 Collaboration to exploit the EHS Spectrometer with a rapid cycling bubble chamber as vertex detector. Amongst their many achievements, they were the first to measure, with excellent precision, the lifetime of the charmed D mesons. At the start of the LEP era, Carlo and his group moved to the DELPHI experiment, participating in the construction and running of the HPC electromagnetic c...
Pedro Medina Avendaño
1981-01-01
Carlos Vega Duarte had the simplicity of elemental and pure beings. His heart was as clean as alluvial gold. His direct, colloquial manner revealed a man from Santander without contaminations, who loved the gleam of weapons and was dazzled by the sparkle of perfect phrases
Discrete diffusion Monte Carlo for frequency-dependent radiative transfer
Energy Technology Data Exchange (ETDEWEB)
Densmore, Jeffrey D [Los Alamos National Laboratory]; Thompson, Kelly G [Los Alamos National Laboratory]; Urbatsch, Todd J [Los Alamos National Laboratory]
2010-11-17
Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations. In this paper, we develop an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold. Above this threshold we employ standard Monte Carlo. With a frequency-dependent test problem, we confirm the increased efficiency of our new DDMC technique.
Alternative Monte Carlo Approach for General Global Illumination
Institute of Scientific and Technical Information of China (English)
徐庆; 李朋; 徐源; 孙济洲
2004-01-01
An alternative Monte Carlo strategy for the computation of the global illumination problem is presented. The proposed approach provides a new and optimal way of solving Monte Carlo global illumination based on the zero-variance importance sampling procedure. A new importance-driven Monte Carlo global illumination algorithm within the framework of the new computing scheme was developed and implemented. Results obtained by rendering test scenes show that this new framework and the newly derived algorithm are effective and promising.
Mitochondrial DNA depletion analysis by pseudogene ratioing.
Swerdlow, Russell H; Redpath, Gerard T; Binder, Daniel R; Davis, John N; VandenBerg, Scott R
2006-01-30
The mitochondrial DNA (mtDNA) depletion status of rho(0) cell lines is typically assessed by hybridization or polymerase chain reaction (PCR) experiments, in which the failure to hybridize mtDNA or to amplify mtDNA using mtDNA-directed primers suggests thorough mitochondrial genome removal. Here, we report the use of an mtDNA pseudogene ratioing technique for additional confirmation of rho(0) status. Total genomic DNA from a U251 human glioma cell line treated with ethidium bromide was amplified using primers designed to anneal to either mtDNA or a previously described nuclear DNA-embedded mtDNA pseudogene (mtDNApsi). The resultant PCR product was used to generate plasmid clones. Sixty-two plasmid clones were genotyped, and all arose from mtDNApsi template. These data allowed us to determine with 95% confidence that the resultant mtDNA-depleted cell line contains less than one copy of mtDNA per 10 cells. Unlike previous hybridization or PCR-based analyses of mtDNA depletion, this mtDNApsi ratioing technique does not rely on the interpretation of a negative result, and may prove useful as an adjunct for the determination of rho(0) status or mtDNA copy number.
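The 95% confidence statement can be understood through a simple binomial bound: if a fraction p of amplifiable templates were true mtDNA, the probability that all 62 genotyped clones arise from the pseudogene is (1 - p)^62. The calculation below is our illustration of that bound, not the authors' exact statistical treatment:

```python
# Upper 95% confidence bound on the mtDNA template fraction p,
# given that 0 of n = 62 clones were mtDNA-derived:
# solve (1 - p)^n = 0.05 for p.
n_clones = 62
alpha = 0.05
p_upper = 1.0 - alpha ** (1.0 / n_clones)
print(f"95% upper bound on mtDNA template fraction: {p_upper:.4f}")
# p_upper is about 0.047, i.e. fewer than ~1 mtDNA template per 21
```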
Bromberger, B; Brandis, M; Dangendorf, V; Goldberg, M B; Kaufmann, F; Mor, I; Nolte, R; Schmiedel, M; Tittelmeier, K; Vartsky, D; Wershofen, H
2012-01-01
An air cargo inspection system combining two nuclear reaction based techniques, namely Fast-Neutron Resonance Radiography and Dual-Discrete-Energy Gamma Radiography is currently being developed. This system is expected to allow detection of standard and improvised explosives as well as special nuclear materials. An important aspect for the applicability of nuclear techniques in an airport inspection facility is the inventory and lifetimes of radioactive isotopes produced by the neutron and gamma radiation inside the cargo, as well as the dose delivered by these isotopes to people in contact with the cargo during and following the interrogation procedure. Using MCNPX and CINDER90 we have calculated the activation levels for several typical inspection scenarios. One example is the activation of various metal samples embedded in a cotton-filled container. To validate the simulation results, a benchmark experiment was performed, in which metal samples were activated by fast-neutrons in a water-filled glass jar. T...
Boswell, Melissa; Detwiler, Jason A; Finnerty, Padraic; Henning, Reyco; Gehman, Victor M; Johnson, Rob A; Jordan, David V; Kazkaz, Kareem; Knapp, Markus; Kröninger, Kevin; Lenz, Daniel; Leviner, Lance; Liu, Jing; Liu, Xiang; MacMullin, Sean; Marino, Michael G; Mokhtarani, Akbar; Pandola, Luciano; Schubert, Alexis G; Schubert, Jens; Tomei, Claudia; Volynets, Oleksandr
2010-01-01
We describe a physics simulation software framework, MAGE, that is based on the GEANT4 simulation toolkit. MAGE is used to simulate the response of ultra-low radioactive background radiation detectors to ionizing radiation, specifically the MAJORANA and GERDA neutrinoless double-beta decay experiments. MAJORANA and GERDA use high-purity germanium detectors to search for the neutrinoless double-beta decay of 76Ge, and MAGE is jointly developed between these two collaborations. The MAGE framework contains the geometry models of common objects, prototypes, test stands, and the actual experiments. It also implements customized event generators, GEANT4 physics lists, and output formats. All of these features are available as class libraries that are typically compiled into a single executable. The user selects the particular experimental setup implementation at run-time via macros. The combination of all these common classes into one framework reduces duplication of efforts, eases comparison between simulated data...
Energy Technology Data Exchange (ETDEWEB)
Semenenko, Vladimir; Stewart, Robert D.; Ackerman, Eric J.
2005-12-31
Single-cell irradiators and new experimental assays are rapidly expanding our ability to quantify the molecular mechanisms responsible for phenomena such as toxicant-induced adaptations in DNA repair and signal-mediated changes to the genome stability of cells not directly damaged by radiation (i.e., bystander cells). To advance our understanding of, and ability to predict and mitigate, the potentially harmful effects of radiological agents, effective strategies must be devised to incorporate information from molecular and cellular studies into mechanism-based, hierarchical models. A key advantage of the hierarchical modeling approach is that information from DNA repair and other in vitro assays can be systematically integrated into higher-level cell transformation and, eventually, carcinogenesis models. This presentation will outline the hierarchical modeling strategy used to integrate information from in vitro studies into the Virtual Cell (VC) radiobiology software (see Endnote). A new multi-path genomic instability model will be introduced and used to link biochemical processing of double strand breaks (DSBs) to neoplastic cell transformation. Bystander and directly damaged cells are treated explicitly in the model using a microdosimetric approach, although many of the details of the bystander response model are of a necessarily preliminary nature. The new model will be tested against several published radiobiological datasets. Results illustrating how hypothesized bystander mechanisms affect the shape of dose-response curves for neoplastic transformation as a function of Linear Energy Transfer (LET) will be presented. EndNote: R.D. Stewart, Virtual Cell (VC) Radiobiology Software. PNNL-13579, July 2001. Available at http://www.pnl.gov/berc/kbem/vc/ The DNA repair model used in the VC computer program is based on the Two-Lesion Kinetic (TLK) model [Radiat. Res. 156(4), 365-378 October 2001].
Edimo, P; Clermont, C; Kwato, M G; Vynckier, S
2009-09-01
In the present work, Monte Carlo (MC) models of electron beams (energies 4, 12 and 18 MeV) from an Elekta SL25 medical linear accelerator were simulated using the EGSnrc/BEAMnrc user code. The calculated dose distributions were benchmarked by comparison with measurements made in a water phantom for a wide range of open field sizes and insert combinations, at a single source-to-surface distance (SSD) of 100 cm. These BEAMnrc models were used to evaluate the accuracy of a commercial MC dose calculation engine for electron beam treatment planning (Oncentra MasterPlan Treatment Planning System (OMTPS) version 1.4, Nucletron) for two energies, 4 and 12 MeV. Output factors were also measured in the water phantom and compared to BEAMnrc and OMTPS. The overall agreement between predicted and measured output factors was comparable for both BEAMnrc and OMTPS, except for a few asymmetric and/or small insert cutouts, where larger deviations between measurements and the values predicted by both BEAMnrc and OMTPS were recorded. However, in the heterogeneous phantom, differences between BEAMnrc and measurements ranged from 0.5 to 2.0% between two ribs and 0.6-1.0% below the ribs, whereas the difference between OMTPS and measurements was the same (0.5-4.0%) in both areas. With respect to output factors, the overall agreement between BEAMnrc and measurements was usually within 1.0%, whereas differences up to nearly 3.0% were observed for OMTPS. This paper focuses on a comparison for clinical cases, including the effects of electron beam attenuation in a heterogeneous phantom. It therefore complements previously reported data (based only on measurements) in one other paper on commissioning of the VMC++ dose calculation engine. These results demonstrate that the VMC++ algorithm is more robust in predicting dose distributions than pencil-beam-based algorithms for the electron beams investigated.
Image quality assessment of LaBr{sub 3}-based whole-body 3D PET scanners: a Monte Carlo evaluation
Energy Technology Data Exchange (ETDEWEB)
Surti, S [Department of Radiology, University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104 (United States); Karp, J S [Department of Radiology, University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104 (United States); Muehllehner, G [Philips Medical Systems, Philadelphia, PA 19104 (United States)
2004-10-07
The main thrust of this work is the investigation and design of a whole-body PET scanner based on new lanthanum bromide scintillators. We use Monte Carlo simulations to generate data for a 3D PET scanner based on LaBr{sub 3} detectors, and to assess the count-rate capability and the reconstructed image quality of phantoms with hot and cold spheres using contrast and noise parameters. Previously we have shown that LaBr{sub 3} has very high light output, excellent energy resolution and fast timing properties which can lead to the design of a time-of-flight (TOF) whole-body PET camera. The data presented here illustrate the performance of LaBr{sub 3} without the additional benefit of TOF information, although our intention is to develop a scanner with TOF measurement capability. The only drawbacks of LaBr{sub 3} are the lower stopping power and photo-fraction, which affect both sensitivity and spatial resolution. However, in 3D PET imaging, where energy resolution is very important for reducing scattered coincidences in the reconstructed image, the image quality attained in a non-TOF LaBr{sub 3} scanner can potentially equal or surpass that achieved with other high sensitivity scanners. Our results show that there is a gain in NEC arising from the reduced scatter and random fractions in a LaBr{sub 3} scanner. The reconstructed image resolution is slightly worse than a high-Z scintillator, but at increased count-rates, reduced pulse pileup leads to an image resolution similar to that of LSO. Image quality simulations predict reduced contrast for small hot spheres compared to an LSO scanner, but improved noise characteristics at similar clinical activity levels.
Associative Interactions in Crowded Solutions of Biopolymers Counteract Depletion Effects.
Groen, Joost; Foschepoth, David; te Brinke, Esra; Boersma, Arnold J; Imamura, Hiromi; Rivas, Germán; Heus, Hans A; Huck, Wilhelm T S
2015-10-14
The cytosol of Escherichia coli is an extremely crowded environment, containing high concentrations of biopolymers which occupy 20-30% of the available volume. Such conditions are expected to yield depletion forces, which strongly promote macromolecular complexation. However, crowded macromolecule solutions, like the cytosol, are very prone to nonspecific associative interactions that can potentially counteract depletion. It remains unclear how the cytosol balances these opposing interactions. We used a FRET-based probe to systematically study depletion in vitro in different crowded environments, including a cytosolic mimic, E. coli lysate. We also studied bundle formation of FtsZ protofilaments under identical crowded conditions as a probe for depletion interactions at much larger overlap volumes of the probe molecule. The FRET probe showed a more compact conformation in synthetic crowding agents, suggesting strong depletion interactions. However, depletion was completely negated in cell lysate and other protein crowding agents, where the FRET probe even occupied slightly more volume. In contrast, bundle formation of FtsZ protofilaments proceeded as readily in E. coli lysate and other protein solutions as in synthetic crowding agents. Our experimental results and model suggest that, in crowded biopolymer solutions, associative interactions counterbalance depletion forces for small macromolecules. Furthermore, the net effects of macromolecular crowding will be dependent on both the size of the macromolecule and its associative interactions with the crowded background.
Ozone Depletion from Nearby Supernovae
Gehrels, Neil; Laird, Claude M.; Jackman, Charles H.; Cannizzo, John K.; Mattson, Barbara J.; Chen, Wan; Bhartia, P. K. (Technical Monitor)
2002-01-01
Estimates made in the 1970s indicated that a supernova occurring within tens of parsecs of Earth could have significant effects on the ozone layer. Since that time, improved tools for detailed modeling of atmospheric chemistry have been developed to calculate ozone depletion, and advances have also been made in theoretical modeling of supernovae and of the resultant gamma ray spectra. In addition, one now has better knowledge of the occurrence rate of supernovae in the galaxy, and of the spatial distribution of progenitors to core-collapse supernovae. We report here the results of two-dimensional atmospheric model calculations that take as input the spectral energy distribution of a supernova, adopting various distances from Earth and various latitude impact angles. In separate simulations we calculate the ozone depletion due to both gamma rays and cosmic rays. We find that for the combined ozone depletion from these effects roughly to double the 'biologically active' UV flux received at the surface of the Earth, the supernova must occur at approximately or less than 8 parsecs.
Ozone Depletion from Nearby Supernovae
Gehrels, N; Jackman, C H; Cannizzo, J K; Mattson, B J; Chen, W; Gehrels, Neil; Laird, Claude M.; Jackman, Charles H.; Cannizzo, John K.; Mattson, Barbara J.; Chen, Wan
2003-01-01
Estimates made in the 1970s indicated that a supernova occurring within tens of parsecs of Earth could have significant effects on the ozone layer. Since that time, improved tools for detailed modeling of atmospheric chemistry have been developed to calculate ozone depletion, and advances have been made in theoretical modeling of supernovae and of the resultant gamma-ray spectra. In addition, one now has better knowledge of the occurrence rate of supernovae in the galaxy, and of the spatial distribution of progenitors to core-collapse supernovae. We report here the results of two-dimensional atmospheric model calculations that take as input the spectral energy distribution of a supernova, adopting various distances from Earth and various latitude impact angles. In separate simulations we calculate the ozone depletion due to both gamma-rays and cosmic rays. We find that for the combined ozone depletion roughly to double the ``biologically active'' UV flux received at the surface of the Earth, the supernova mu...
HD depletion in starless cores
Sipilä, O; Harju, J
2013-01-01
Aims: We aim to investigate the abundances of light deuterium-bearing species such as HD, H2D+ and D2H+ in a gas-grain chemical model including an extensive description of deuterium and spin-state chemistry, in physical conditions appropriate to the very centers of starless cores. Methods: We combine a gas-grain chemical model with radiative transfer calculations to simulate the density and temperature structure in starless cores. The chemical model includes deuterated forms of species with up to 4 atoms and the spin states of the light species H2, H2+ and H3+ and their deuterated forms. Results: We find that HD eventually depletes from the gas phase because deuterium is efficiently incorporated into grain-surface HDO, resulting in inefficient HD production on grains. HD depletion affects not only the abundances of e.g. H2D+ and D2H+, whose production depends on the abundance of HD, but also the spin-state abundance ratios of the various light species, when compared with the complete depletion model ...
Monte Carlo and nonlinearities
Dauchet, Jérémi; Blanco, Stéphane; Caliot, Cyril; Charon, Julien; Coustet, Christophe; Hafi, Mouna El; Eymet, Vincent; Farges, Olivier; Forest, Vincent; Fournier, Richard; Galtier, Mathieu; Gautrais, Jacques; Khuong, Anaïs; Pelissier, Lionel; Piaud, Benjamin; Roger, Maxime; Terrée, Guillaume; Weitz, Sebastian
2016-01-01
The Monte Carlo method is widely used to numerically predict system behaviour. However, its powerful incremental design assumes a strong premise which has so far severely limited its application: the estimation process must combine linearly over dimensions. Here we show that this premise can be alleviated by projecting nonlinearities onto a polynomial basis and increasing the configuration-space dimension. Considering phytoplankton growth in light-limited environments, radiative transfer in planetary atmospheres, electromagnetic scattering by particles and concentrated-solar-power-plant production, we prove the real-world usability of this advance on four test cases that were previously regarded as impracticable for Monte Carlo approaches. We also illustrate an outstanding feature of our method when applied to sharp problems with interacting particles: handling rare events is now straightforward. Overall, our extension preserves the features that made the method popular: addressing nonlinearities does not compromise o...
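The core idea, recasting a nonlinear term as a linear estimation in a higher-dimensional configuration space, can be sketched for the simplest polynomial nonlinearity: (E[X])^2 equals E[X1·X2] for independent copies X1, X2, so squaring an expectation becomes an ordinary linear Monte Carlo average over pairs. This toy example is ours, not one of the paper's four test cases:

```python
import random

rng = random.Random(7)

# Nonlinear target: (E[U])^2 with U ~ Uniform(0,1), true value 0.25.
# Squaring the sample mean directly is a nonlinear (biased) operation;
# instead, estimate E[U1 * U2] over independent pairs (U1, U2) -- a
# plain linear Monte Carlo average in a doubled configuration space.
n = 200_000
estimate = sum(rng.random() * rng.random() for _ in range(n)) / n
print(f"estimate of (E[U])^2: {estimate:.4f} (exact: 0.25)")
```

Higher-degree polynomial terms follow the same pattern with more independent copies, which is why projecting a nonlinearity onto a polynomial basis restores the linear-combination premise.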
He, Li; Huang, Gordon; Lu, Hongwei; Wang, Shuo; Xu, Yi
2012-06-15
This paper presents a global uncertainty and sensitivity analysis (GUSA) framework based on global sensitivity analysis (GSA) and generalized likelihood uncertainty estimation (GLUE) methods. Quasi-Monte Carlo (QMC) sampling is employed by GUSA to obtain realizations of uncertain parameters, which are then input to the simulation model for analysis. Compared to GLUE, GUSA can not only evaluate the global sensitivity and uncertainty of modeling parameter sets, but also quantify the uncertainty in modeling prediction sets. Moreover, another advantage of GUSA lies in its reduced computational effort, since globally insensitive parameters can be identified and removed from the uncertain-parameter set. GUSA is applied to a practical petroleum-contaminated site in Canada to investigate free product migration and recovery processes under aquifer remediation operations. Results from global sensitivity analysis show that (1) initial free product thickness has the most significant impact on total recovery volume but the least impact on residual free product thickness and recovery rate; (2) total recovery volume and recovery rate are sensitive to residual LNAPL phase saturations and soil porosity. Results from uncertainty predictions reveal that the residual thickness would remain high and almost unchanged after about half a year of the skimmer-well scheme; the rather high residual thickness (0.73-1.56 m 20 years later) indicates that natural attenuation would not be suitable for the remediation. The largest total recovery volume would be from water pumping, followed by vacuum pumping, and then the skimmer. The recovery rates of the three schemes would rapidly decrease after 2 years (to less than 0.05 m(3)/day); thus short-term remediation is not suggested.
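Quasi-Monte Carlo replaces pseudo-random draws with low-discrepancy sequences that fill the parameter space more evenly than random points. A minimal sketch using a 2-D Halton sequence (our illustration; the paper does not specify which QMC construction it uses):

```python
def van_der_corput(i, base):
    """Radical inverse of integer i in the given base: the base-b digits
    of i are mirrored around the radix point, yielding a low-discrepancy
    point in [0, 1)."""
    x, denom = 0.0, 1.0
    while i > 0:
        i, digit = divmod(i, base)
        denom *= base
        x += digit / denom
    return x

def halton_2d(n):
    """First n points of the 2-D Halton sequence (coprime bases 2 and 3)."""
    return [(van_der_corput(i, 2), van_der_corput(i, 3)) for i in range(1, n + 1)]

# Use the sequence to integrate f(x, y) = x * y over the unit square
# (exact value 0.25); QMC error decays close to O(1/n) for smooth
# integrands, versus O(1/sqrt(n)) for plain Monte Carlo.
points = halton_2d(10_000)
estimate = sum(x * y for x, y in points) / len(points)
print(f"QMC estimate of E[x*y]: {estimate:.5f} (exact: 0.25)")
```

In a GUSA-style workflow, each low-discrepancy point would be mapped through the inverse CDFs of the uncertain parameters before being fed to the simulation model.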
Factor Graph Equalization Based on Markov Chain Monte Carlo
Institute of Scientific and Technical Information of China (English)
巩克现; 董政; 葛临东
2012-01-01
Three different effective solutions, together with a parallel implementation method, for computing the posterior probability of the received signal are proposed to overcome the high computational complexity of factor-graph-based iterative equalization for nonlinear channel distortion. In factor graph equalization, the equalizer and decoder work together iteratively, improving overall system performance, but the computational complexity grows exponentially with the channel memory length. The multidimensional integration is carried out with a Markov chain Monte Carlo algorithm, and parallel Gibbs sampling is implemented by partitioning the factor graph, reducing the computational complexity. Simulations demonstrate that the method overcomes the nonlinear distortion of wideband high-order modulation over satellite channels and is suitable for hardware or multi-core implementation.
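Gibbs sampling, the engine behind the parallelized equalizer, draws each variable in turn from its conditional distribution given the others. A generic sketch on a bivariate Gaussian (our illustration; the paper partitions a factor graph of channel symbols instead):

```python
import math
import random

def gibbs_bivariate_normal(rho, n_samples, rng):
    """Gibbs sampler for a standard bivariate normal with correlation rho:
    each full conditional is x | y ~ N(rho*y, 1 - rho^2), and symmetrically
    for y | x, so we simply alternate the two conditional draws."""
    sd = math.sqrt(1.0 - rho * rho)
    x = y = 0.0
    samples = []
    for _ in range(n_samples):
        x = rng.gauss(rho * y, sd)
        y = rng.gauss(rho * x, sd)
        samples.append((x, y))
    return samples

rng = random.Random(0)
samples = gibbs_bivariate_normal(0.8, 50_000, rng)
xs, ys = zip(*samples)
n = len(samples)
# Both marginals have unit variance, so the covariance estimates rho.
cov = sum(x * y for x, y in samples) / n - (sum(xs) / n) * (sum(ys) / n)
print(f"empirical covariance ~ {cov:.3f} (target rho = 0.8)")
```

Partitioning for parallelism, as the abstract describes, amounts to grouping variables so that conditionally independent blocks can be sampled simultaneously.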
Fransson, Martin Niclas; Barregard, Lars; Sallsten, Gerd; Akerstrom, Magnus; Johanson, Gunnar
2014-10-01
The health effects of low-level chronic exposure to cadmium are increasingly recognized. To improve the risk assessment, it is essential to know the relation between cadmium intake, body burden, and biomarker levels of cadmium. We combined a physiologically-based toxicokinetic (PBTK) model for cadmium with a data set from healthy kidney donors to re-estimate the model parameters and to test the effects of gender and serum ferritin on systemic uptake. Cadmium levels in whole blood, blood plasma, kidney cortex, and urinary excretion from 82 men and women were used to calculate posterior distributions for model parameters using Markov-chain Monte Carlo analysis. For never- and ever-smokers combined, the daily systemic uptake was estimated at 0.0063 μg cadmium/kg body weight in men, with 35% increased uptake in women and a daily uptake of 1.2 μg for each pack-year per calendar year of smoking. The rate of urinary excretion of cadmium accumulated in the kidney was estimated at 0.000042 day(-1), corresponding to a half-life of 45 years in the kidneys. We have provided an improved model of cadmium kinetics. As the new parameter estimates derive from a single study with measurements in several compartments in each individual, these new estimates are likely to be more accurate than the previous ones, where the data originated from unrelated data sets. The estimated urinary excretion of cadmium accumulated in the kidneys was much lower than previous estimates; neglecting this finding may result in a marked under-prediction of the true kidney burden.
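Posterior estimation by Markov-chain Monte Carlo, as used for the PBTK parameters above, can be sketched with a random-walk Metropolis sampler. The toy model below (normal data with known variance and a flat prior on the mean) is our stand-in for the multi-compartment kinetic model:

```python
import math
import random

def log_posterior(mu, data, sigma=1.0):
    """Log posterior for the mean of normal data with known sigma and a
    flat prior: up to a constant, the sum of Gaussian log-likelihoods."""
    return -0.5 * sum((x - mu) ** 2 for x in data) / sigma**2

def metropolis(data, n_steps, step=0.2, rng=None):
    """Random-walk Metropolis: propose mu' = mu + N(0, step), accept with
    probability min(1, posterior ratio)."""
    rng = rng or random.Random()
    mu = 0.0
    lp = log_posterior(mu, data)
    chain = []
    for _ in range(n_steps):
        prop = mu + rng.gauss(0.0, step)
        lp_prop = log_posterior(prop, data)
        if math.log(rng.random()) < lp_prop - lp:
            mu, lp = prop, lp_prop
        chain.append(mu)
    return chain

rng = random.Random(1)
data = [rng.gauss(2.0, 1.0) for _ in range(100)]
chain = metropolis(data, 20_000, rng=rng)
burned = chain[5_000:]          # discard burn-in
post_mean = sum(burned) / len(burned)
print(f"posterior mean ~ {post_mean:.2f} (sample mean: {sum(data)/len(data):.2f})")
```

With a flat prior the posterior mean should agree with the sample mean, which gives a quick sanity check on the sampler.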
Ku, B.; Nam, M.
2012-12-01
Neutron logging has been widely used to estimate neutron porosity for evaluating formation properties in the oil industry. More recently, neutron logging has been highlighted for monitoring the behavior of CO2 injected into reservoirs for geological CO2 sequestration. For a better understanding of neutron log interpretation, the Monte Carlo N-Particle (MCNP) code is used to illustrate the response of a neutron tool. In order to obtain calibration curves for the neutron tool, neutron responses are simulated in water-filled limestone, sandstone and dolomite formations of various porosities. Since the salinities (NaCl concentrations) of the borehole fluid and formation water are important factors for estimating formation porosity, we first compute and analyze neutron responses for brine-filled formations with different porosities. Further, we consider changes in the brine saturation of a reservoir due to hydrocarbon production or geological CO2 sequestration and simulate the corresponding neutron logging data. As gas saturation decreases, the measured neutron porosity confirms gas effects on neutron logging, which is attributed to the fact that gas contains slightly fewer hydrogen atoms than brine. Meanwhile, an increase in CO2 saturation due to CO2 injection reduces the measured neutron porosity, giving a clue for estimating the CO2 saturation, since the injected CO2 substitutes for the brine. A further analysis of the reduction yields a strategy for estimating CO2 saturation based on time-lapse neutron logging. This strategy can help monitor not only geological CO2 sequestration but also CO2 flooding for enhanced oil recovery. Acknowledgements: This work was supported by the Energy Efficiency & Resources program of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant funded by the Korea government Ministry of Knowledge Economy (No. 2012T100201588). Myung Jin Nam was partially supported by the National Research Foundation of Korea (NRF) grant funded by the Korea
Setiani, Tia Dwi; Suprijadi; Haryanto, Freddy
2016-03-01
Monte Carlo (MC) is one of the most powerful techniques for simulation in x-ray imaging. The MC method can simulate radiation transport within matter with high accuracy and provides a natural way to simulate radiation transport in complex systems. One of the MC-based codes widely used for radiographic image simulation is MC-GPU, a code developed by Andreu Badal. This study aimed to investigate the computation time of x-ray imaging simulation on a GPU (Graphics Processing Unit) compared to a standard CPU (Central Processing Unit). Furthermore, the effect of physical parameters on the quality of radiographic images, and a comparison of the image quality resulting from simulation on the GPU and CPU, are evaluated in this paper. The simulations were run on a CPU in serial mode and on two GPUs with 384 and 2304 cores. In the GPU simulations, each core computes one photon, so a large number of photons are computed simultaneously. Results show that the simulations on the GPU were significantly faster than on the CPU. Simulations on the 2304-core GPU ran about 64-114 times faster than on the CPU, while simulations on the 384-core GPU ran about 20-31 times faster than on a single CPU core. Another result shows that the optimum image quality was obtained for 10{sup 8} or more histories and energies from 60 keV to 90 keV. By statistical analysis, the quality of the GPU and CPU images is essentially the same.
Energy Technology Data Exchange (ETDEWEB)
Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-16
This is a PowerPoint presentation that serves as lecture material for the Parallel Computing summer school. It covers the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (the Law of Large Numbers, the Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
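The "simple example: estimating π" in the outline above is the classic dartboard calculation, reproduced here as a minimal sketch:

```python
import random

def estimate_pi(n_points, rng):
    """Estimate pi by sampling points uniformly in the unit square and
    counting the fraction that land inside the quarter circle
    x^2 + y^2 <= 1; by the Law of Large Numbers that fraction
    converges to the area ratio pi/4."""
    hits = sum(1 for _ in range(n_points)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n_points

rng = random.Random(2024)
print(f"pi ~ {estimate_pi(1_000_000, rng):.4f}")
```

The Central Limit Theorem mentioned in the outline then gives the 1/sqrt(N) convergence rate of this estimator.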
2009-01-01
On 7 April CERN will be holding a symposium to mark the 75th birthday of Carlo Rubbia, who shared the 1984 Nobel Prize for Physics with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. Following a presentation by Rolf Heuer, lectures will be given by eminent speakers on areas of science to which Carlo Rubbia has made decisive contributions. Michel Spiro, Director of the French National Institute of Nuclear and Particle Physics (IN2P3) of the CNRS, Lyn Evans, sLHC Project Leader, and Alan Astbury of the TRIUMF Laboratory will talk about the physics of the weak interaction and the discovery of the W and Z bosons. Former CERN Director-General Herwig Schopper will lecture on CERN’s accelerators from LEP to the LHC. Giovanni Bignami, former President of the Italian Space Agency, will speak about his work with Carlo Rubbia. Finally, Hans Joachim Schellnhuber of the Potsdam Institute for Climate Research and Sven Kul...
Directory of Open Access Journals (Sweden)
Charlie Samuya Veric
2001-12-01
Full Text Available The importance of Carlos Bulosan in Filipino and Filipino-American radical history and literature is indisputable. His eminence spans the Pacific, and he is known, diversely, as a radical poet, fictionist, novelist, and labor organizer. Author of the canonical America Is in the Heart, Bulosan is celebrated for chronicling the conditions in America in his time, such as racism and unemployment. In the history of criticism on Bulosan's life and work, however, there is an undeclared general consensus that views Bulosan and his work as coherent permanent texts of radicalism and anti-imperialism. Central to the existence of such a tradition of critical reception are the generations of critics who, in more ways than one, control the discourse on and of Carlos Bulosan. This essay inquires into the sphere of the critical reception that orders, for our time and for the time ahead, the reading and interpretation of Bulosan. What eye and seeing, the essay asks, determine the perception of Bulosan as the angel of radicalism? What is obscured in constructing Bulosan as an immutable figure of the political? What light does the reader conceive when the personal is brought into the open and situated against the political? The essay explores the answers to these questions in Bulosan's loving letters to various friends, strangers, and white American women. The presence of these interrogations, the essay believes, will ultimately secure the continuing importance of Carlos Bulosan to radical literature and history.
Neutron shielding material design based on Monte Carlo simulation
Institute of Scientific and Technical Information of China (English)
陈飞达; 汤晓斌; 王鹏; 陈达
2012-01-01
Based on the Monte Carlo particle transport program MCNP, a novel glass fiber/B4C/epoxy resin composite for neutron shielding with high strength and low density was developed. Its neutron transmissivity under an Am-Be neutron source was calculated to study how its neutron shielding performance differs from that of traditional shielding materials, and how the B4C mass fraction affects that performance. The shielding performance was also analyzed for neutrons in different energy ranges (slow, intermediate, and fast). The results show that the composite with a 10% B4C mass fraction outperforms traditional boron-containing polyethylene and Al-B4C alloy, especially for slow neutron shielding; however, further increasing the B4C content yields no remarkable improvement. The Monte Carlo method is thus demonstrated to be feasible for the optimization design of neutron shielding materials, and the results provide a theoretical basis for the design and preparation of a new neutron shielding composite.
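As a hedged illustration of the principle behind a Monte Carlo transmissivity estimate (a one-group toy model, not the authors' MCNP composite model; the cross-section and thickness values below are assumptions), one can sample free-flight distances from the exponential attenuation law:

```python
import math
import random

def slab_transmissivity(sigma_t, thickness, n_histories=100_000, seed=1):
    """Estimate the uncollided transmissivity of a normally incident
    neutron beam through a homogeneous slab by sampling the free-flight
    distance from the exponential attenuation law."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_histories):
        # Distance to first collision: s = -ln(xi) / sigma_t
        s = -math.log(rng.random()) / sigma_t
        if s > thickness:  # crosses the slab without colliding
            transmitted += 1
    return transmitted / n_histories

# Assumed one-group macroscopic cross-section (cm^-1) and thickness (cm)
estimate = slab_transmissivity(sigma_t=0.5, thickness=2.0)
exact = math.exp(-0.5 * 2.0)  # analytic uncollided transmission exp(-sigma_t * x)
```

The Monte Carlo estimate converges to the analytic Beer-Lambert value as the number of histories grows; a full transport calculation would add scattering and energy dependence on top of this skeleton.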
Energy Technology Data Exchange (ETDEWEB)
Fragoso, Margarida; Wen Ning; Kumar, Sanath; Liu Dezhi; Ryu, Samuel; Movsas, Benjamin; Munther, Ajlouni; Chetty, Indrin J, E-mail: ichetty1@hfhs.or [Henry Ford Health System, Detroit, MI (United States)
2010-08-21
Modern cancer treatment techniques, such as intensity-modulated radiation therapy (IMRT) and stereotactic body radiation therapy (SBRT), have greatly increased the demand for more accurate treatment planning (structure definition, dose calculation, etc) and dose delivery. The ability to use fast and accurate Monte Carlo (MC)-based dose calculations within a commercial treatment planning system (TPS) in the clinical setting is now becoming more of a reality. This study describes the dosimetric verification and initial clinical evaluation of a new commercial MC-based photon beam dose calculation algorithm, within the iPlan v.4.1 TPS (BrainLAB AG, Feldkirchen, Germany). Experimental verification of the MC photon beam model was performed with film and ionization chambers in water phantoms and in heterogeneous solid-water slabs containing bone and lung-equivalent materials for a 6 MV photon beam from a Novalis (BrainLAB) linear accelerator (linac) with a micro-multileaf collimator (m3 MLC). The agreement between calculated and measured dose distributions in the water phantom verification tests was, on average, within 2%/1 mm (high dose/high gradient) and was within ±4%/2 mm in the heterogeneous slab geometries. Example treatment plans in the lung show significant differences between the MC and one-dimensional pencil beam (PB) algorithms within iPlan, especially for small lesions in the lung, where electronic disequilibrium effects are emphasized. Other user-specific features in the iPlan system, such as options to select dose to water or dose to medium, and the mean variance level, have been investigated. Timing results for typical lung treatment plans show the total computation time (including that for processing and I/O) to be less than 10 min for 1-2% mean variance (running on a single PC with 8 Intel Xeon X5355 CPUs, 2.66 GHz).
Overall, the iPlan MC algorithm is demonstrated to be an accurate and efficient dose algorithm, incorporating robust tools for MC-based
Partridge, D.G.; Vrugt, J.A.; Tunved, P.; Ekman, A.M.L.; Struthers, H.; Sooroshian, A.
2012-01-01
This paper presents a novel approach to investigate cloud-aerosol interactions by coupling a Markov chain Monte Carlo (MCMC) algorithm to an adiabatic cloud parcel model. Despite the number of numerical cloud-aerosol sensitivity studies previously conducted few have used statistical analysis tools t
Monte Carlo simulation of neutron scattering instruments
Energy Technology Data Exchange (ETDEWEB)
Seeger, P.A.
1995-12-31
A library of Monte Carlo subroutines has been developed for the purpose of design of neutron scattering instruments. Using small-angle scattering as an example, the philosophy and structure of the library are described and the programs are used to compare instruments at continuous wave (CW) and long-pulse spallation source (LPSS) neutron facilities. The Monte Carlo results give a count-rate gain of a factor between 2 and 4 using time-of-flight analysis. This is comparable to scaling arguments based on the ratio of wavelength bandwidth to resolution width.
Adiabatic optimization versus diffusion Monte Carlo methods
Jarret, Michael; Jordan, Stephen P.; Lackey, Brad
2016-10-01
Most experimental and theoretical studies of adiabatic optimization use stoquastic Hamiltonians, whose ground states are expressible using only real nonnegative amplitudes. This raises a question as to whether classical Monte Carlo methods can simulate stoquastic adiabatic algorithms with polynomial overhead. Here we analyze diffusion Monte Carlo algorithms. We argue that, based on differences between L1- and L2-normalized states, these algorithms suffer from certain obstructions preventing them from efficiently simulating stoquastic adiabatic evolution in generality. In practice, however, we obtain good performance by introducing a method that we call Substochastic Monte Carlo. In fact, our simulations are good classical optimization algorithms in their own right, competitive with the best previously known heuristic solvers for MAX-k-SAT at k = 2, 3, 4.
CosmoPMC: Cosmology Population Monte Carlo
Kilbinger, Martin; Cappe, Olivier; Cardoso, Jean-Francois; Fort, Gersende; Prunet, Simon; Robert, Christian P; Wraith, Darren
2011-01-01
We present the public release of the Bayesian sampling algorithm for cosmology, CosmoPMC (Cosmology Population Monte Carlo). CosmoPMC explores the parameter space of various cosmological probes, and also provides a robust estimate of the Bayesian evidence. CosmoPMC is based on an adaptive importance sampling method called Population Monte Carlo (PMC). Various cosmology likelihood modules are implemented, and new modules can be added easily. The importance-sampling algorithm is written in C, and fully parallelised using the Message Passing Interface (MPI). Due to very little overhead, the wall-clock time required for sampling scales approximately with the number of CPUs. The CosmoPMC package contains post-processing and plotting programs, and in addition a Monte-Carlo Markov chain (MCMC) algorithm. The sampling engine is implemented in the library pmclib, and can be used independently. The software is available for download at http://www.cosmopmc.info.
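CosmoPMC's adaptive PMC machinery is beyond a short sketch, but the importance-sampling idea at its core can be illustrated generically. The toy Gaussian target below stands in for a cosmology likelihood; the function names and the fixed proposal are assumptions for illustration (PMC would adapt the proposal over iterations):

```python
import math
import random

def importance_sampling_evidence(log_target, sample_q, logpdf_q, n=50_000, seed=0):
    """Estimate the evidence Z = integral of target(x) dx by importance
    sampling: Z ~= (1/n) * sum of target(x_i) / q(x_i), with x_i drawn from q."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = sample_q(rng)
        total += math.exp(log_target(x) - logpdf_q(x))
    return total / n

# Toy target: unnormalised Gaussian exp(-x^2/2), whose evidence is sqrt(2*pi)
log_target = lambda x: -0.5 * x * x

def logpdf_q(x):  # proposal density N(0, 2^2)
    return -0.5 * (x / 2.0) ** 2 - math.log(2.0 * math.sqrt(2.0 * math.pi))

z = importance_sampling_evidence(log_target, lambda r: r.gauss(0.0, 2.0), logpdf_q)
```

The same weights target/proposal also yield posterior expectations; PMC's refinement is to update the proposal so those weights become progressively less variable.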
Shell model the Monte Carlo way
Energy Technology Data Exchange (ETDEWEB)
Ormand, W.E.
1995-03-01
The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined.
U.S. Geological Survey, Department of the Interior — This data release includes a polygon shapefile of grid cells attributed with values representing the simulated base-flow, evapotranspiration, and groundwater-storage...
Monte Carlo methods beyond detailed balance
Schram, Raoul D.; Barkema, Gerard T.
2015-01-01
Monte Carlo algorithms are nearly always based on the concept of detailed balance and ergodicity. In this paper we focus on algorithms that do not satisfy detailed balance. We introduce a general method for designing non-detailed balance algorithms, starting from a conventional algorithm satisfying
Monte Carlo Simulation of Counting Experiments.
Ogden, Philip M.
A computer program to perform a Monte Carlo simulation of counting experiments was written. The program was based on a mathematical derivation which started with counts in a time interval. The time interval was subdivided to form a binomial distribution with no two counts in the same subinterval. Then the number of subintervals was extended to…
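The derivation sketched in the abstract, subdividing the time interval so that no two counts share a subinterval, can be illustrated as follows. The original program is not available; the rate, duration, and subdivision count below are hypothetical:

```python
import random

def simulate_counts(rate, duration, n_subintervals=2000, rng=None):
    """One counting experiment: subdivide the interval so finely that at
    most one count is expected per subinterval; the total is then binomial
    and approaches a Poisson distribution as the subdivision is refined."""
    rng = rng or random.Random()
    p = rate * duration / n_subintervals  # count probability per subinterval
    return sum(1 for _ in range(n_subintervals) if rng.random() < p)

rng = random.Random(7)
counts = [simulate_counts(5.0, 2.0, rng=rng) for _ in range(1000)]
mean = sum(counts) / len(counts)  # expected close to rate * duration = 10
```

With 2000 subintervals the per-subinterval probability is 0.005, so the binomial totals are already very close to Poisson with mean 10.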
Energy Technology Data Exchange (ETDEWEB)
Lee, C [Division of Cancer Epidemiology and Genetics, National Cancer Institute, Bethesda, MD (United States); Badal, A [U.S. Food & Drug Administration (CDRH/OSEL), Silver Spring, MD (United States)
2014-06-15
Purpose: Computational voxel phantoms provide realistic anatomy, but the voxel structure may result in dosimetric error compared to real anatomy composed of smooth surfaces. We analyzed the dosimetric error caused by the voxel structure in hybrid computational phantoms by comparing the voxel-based doses at different resolutions with triangle mesh-based doses. Methods: We incorporated the existing adult male UF/NCI hybrid phantom in mesh format into a Monte Carlo transport code, penMesh, that supports triangle meshes. We calculated energy deposition to selected organs of interest for parallel photon beams with three mono energies (0.1, 1, and 10 MeV) in antero-posterior geometry. We also calculated organ energy deposition using three voxel phantoms with different voxel resolutions (1, 5, and 10 mm) using MCNPX2.7. Results: Comparison of organ energy deposition between the two methods showed that agreement overall improved for higher voxel resolution, but for many organs the differences were small. Difference in the energy deposition for 1 MeV, for example, decreased from 11.5% to 1.7% in muscle but only from 0.6% to 0.3% in liver as voxel resolution increased from 10 mm to 1 mm. The differences were smaller at higher energies. The number of photon histories processed per second in voxels were 6.4×10^4, 3.3×10^4, and 1.3×10^4, for 10, 5, and 1 mm resolutions at 10 MeV, respectively, while meshes ran at 4.0×10^4 histories/sec. Conclusion: The combination of hybrid mesh phantom and penMesh proved to be accurate and of similar speed compared to the voxel phantom and MCNPX. The lowest voxel resolution caused a maximum dosimetric error of 12.6% at 0.1 MeV and 6.8% at 10 MeV, but the error was insignificant in some organs. We will apply the tool to calculate dose to very thin layer tissues (e.g., the radiosensitive layer in the gastrointestinal tract) which cannot be modeled by voxel phantoms.
Optical Fiber Turbidity Sensor Based on Monte Carlo Simulations
Institute of Scientific and Technical Information of China (English)
吴刚; 刘月明; 许宏志; 陈飞华; 黄杰
2014-01-01
Based on the backscattering turbidity measurement method, a turbidity sensor with a Y-shaped optical fiber bundle probe, used in conjunction with a plane mirror as a reflecting target, was designed using optical fiber sensing technology. Turbidity is estimated in terms of the total interaction (extinction) coefficient, a parameter that carries a strong signature of the turbidity of a solution; its linear relationship with turbidity was verified experimentally according to the Beer-Lambert law. A scattered-light model based on Monte Carlo simulations was used to estimate the power collected by the fiber optic probe under different detection conditions and to optimize the distance between the fiber bundle and the mirror, and the calibrated relationship between received power and extinction coefficient is then used for measurement. The method is simple and efficient, detecting suspended impurities in water even in small quantities, down to an extinction coefficient of 0.059 cm^-1. With the reasonable use of the mirror, the sensitivity of the sensor is improved more than tenfold. The proposed sensor can be used for portable measurements, and on-line monitoring can be realized by combining space-division and time-division multiplexing.
Lambright, W. Henry
2005-01-01
While the National Aeronautics and Space Administration (NASA) is widely perceived as a space agency, since its inception NASA has had a mission dedicated to the home planet. Initially, this mission involved using space to better observe and predict weather and to enable worldwide communication. Meteorological and communication satellites showed the value of space for earthly endeavors in the 1960s. In 1972, NASA launched Landsat, and the era of earth-resource monitoring began. At the same time, in the late 1960s and early 1970s, the environmental movement swept throughout the United States and most industrialized countries. The first Earth Day event took place in 1970, and the government generally began to pay much more attention to issues of environmental quality. Mitigating pollution became an overriding objective for many agencies. NASA's existing mission to observe planet Earth was augmented in these years and directed more toward environmental quality. In the 1980s, NASA sought to plan and establish a new environmental effort that eventuated in the 1990s with the Earth Observing System (EOS). The Agency was able to make its initial mark via atmospheric monitoring, specifically ozone depletion. An important policy stimulus in many respects, ozone depletion spawned the Montreal Protocol of 1987 (the most significant international environmental treaty then in existence). It also was an issue critical to NASA's history that served as a bridge linking NASA's weather and land-resource satellites to NASA's concern for the global changes affecting the home planet. Significantly, as a global environmental problem, ozone depletion underscored the importance of NASA's ability to observe Earth from space. Moreover, the NASA management team's ability to apply large-scale research efforts and mobilize the talents of other agencies and the private sector illuminated its role as a lead agency capable of crossing organizational boundaries as well as the science-policy divide.
Montesinos A, Fernando; Facultad de Farmacia y Bioquímica de la Universidad Nacional Mayor de San Marcos, Lima, Perú.
2014-01-01
This figure was an extraordinary researcher devoted, for many years, to the study of the potato, a tuber of the genus Solanum, and to the countless species and varieties that cover the territories of Peru, Bolivia and Chile, and possibly other countries. Originally wild, today, as a result of scientific progress, it constitutes a food of great value throughout the world, from every point of view. Carlos M. Ochoa was born in Cusco; he moved to Bolivia, where he carried out his initial studies,...
Salmon, Stefanie J.; Adriaanse, Marieke A.; De Vet, Emely; Fennis, Bob M.; De Ridder, Denise T. D.
2014-01-01
Self-control relies on a limited resource that can get depleted, a phenomenon that has been labeled ego-depletion. We argue that individuals may differ in their sensitivity to depleting tasks, and that consequently some people deplete their self-control resource at a faster rate than others. In thre
Energy Technology Data Exchange (ETDEWEB)
Barrera, C A; Moran, M J
2007-08-21
The Neutron Imaging System (NIS) is one of seven ignition target diagnostics under development for the National Ignition Facility. The NIS is required to record hot-spot (13-15 MeV) and downscattered (6-10 MeV) images with a resolution of 10 microns and a signal-to-noise ratio (SNR) of 10 at the 20% contour. The NIS is a valuable diagnostic since the downscattered neutrons reveal the spatial distribution of the cold fuel during an ignition attempt, providing important information in the case of a failed implosion. The present study explores the parameter space of several line-of-sight (LOS) configurations that could serve as the basis for the final design. Six commercially available organic scintillators were experimentally characterized for their light emission decay profile and neutron sensitivity. The samples showed a long lived decay component that makes direct recording of a downscattered image impossible. The two best candidates for the NIS detector material are: EJ232 (BC422) plastic fibers or capillaries filled with EJ399B. A Monte Carlo-based end-to-end model of the NIS was developed to study the imaging capabilities of several LOS configurations and verify that the recovered sources meet the design requirements. The model includes accurate neutron source distributions, aperture geometries (square pinhole, triangular wedge, mini-penumbral, annular and penumbral), their point spread functions, and a pixelated scintillator detector. The modeling results show that a useful downscattered image can be obtained by recording the primary peak and the downscattered images, and then subtracting a decayed version of the former from the latter. The difference images need to be deconvolved in order to obtain accurate source distributions. The images are processed using a frequency-space modified-regularization algorithm and low-pass filtering. The resolution and SNR of these sources are quantified by using two surrogate sources. The simulations show that all LOS
Directory of Open Access Journals (Sweden)
Pedro Pablo Ferrer Gallego
2012-07-01
Epistolary correspondence between Carlos Vicioso and Carlos Pau during a stay in Bicorp (Valencia). A set of letters sent from Carlos Vicioso to Carlos Pau during a stay in Bicorp (Valencia) between 1914 and 1915 are presented and discussed here. The letters are held in the Archivo Histórico del Instituto Botánico de Barcelona. This correspondence marks the beginning of the scientific relationship between Vicioso and Pau: at first it was based on the consultations Vicioso addressed to Pau for the determination of the species he sent from Bicorp as herbarium sheets. Nowadays, these voucher sheets are preserved in national and international herbaria, thanks to the exchange of botanical material between Vicioso and other botanists of the time, mainly Pau, Sennen and Font Quer.
Institute of Scientific and Technical Information of China (English)
张杰梁; 黄洪; 姜苏娜; 杭晨哲; 余时帆
2016-01-01
To analyze the measurement uncertainty of air compressor energy efficiency, in view of the complicated mathematical model and the difficulty of approximating it with a linear model, an uncertainty evaluation method based on Mathcad and the Monte Carlo method is proposed. The Monte Carlo simulation histogram verifies that the selected distribution types are correct, and the relative expanded uncertainty of the measurement results is obtained. Finally, the Monte Carlo results are compared with those of the GUM method. The comparison shows that the relative expanded uncertainties of the two methods are close, and both satisfy the "1/3" requirement of the air compressor energy efficiency grading interval.
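The Monte Carlo uncertainty evaluation described above can be sketched generically. The measurement model below (P = V·I with assumed normal inputs) is a stand-in for illustration, not the compressor energy-efficiency model:

```python
import random
import statistics

def mc_uncertainty(model, inputs, n=50_000, seed=3):
    """Propagate input uncertainties through a measurement model by Monte
    Carlo: sample each input from its assigned distribution, evaluate the
    model, and summarise the output distribution."""
    rng = random.Random(seed)
    ys = sorted(model(*(draw(rng) for draw in inputs)) for _ in range(n))
    mean = statistics.fmean(ys)
    u = statistics.stdev(ys)  # standard uncertainty of the output
    coverage = (ys[int(0.025 * n)], ys[int(0.975 * n)])  # ~95% interval
    return mean, u, coverage

# Hypothetical model P = V * I with V ~ N(230 V, 2 V) and I ~ N(5 A, 0.1 A)
mean, u, ci = mc_uncertainty(lambda v, i: v * i,
                             [lambda r: r.gauss(230.0, 2.0),
                              lambda r: r.gauss(5.0, 0.1)])
```

The sorted output sample gives the coverage interval directly, with no linearization; this is exactly where the Monte Carlo approach sidesteps the GUM's linear-model assumption.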
Institute of Scientific and Technical Information of China (English)
张向东; 董胜; 张磊; 张国伟
2012-01-01
The construction cost of breakwaters is large, and once destroyed, the consequences are very serious; correctly calculating breakwater reliability is therefore of great significance. With the rapid development of artificial neural network theory, its application to breakwater reliability analysis is gradually attracting more attention. The probabilistic meaning is definite when using the artificial neural network-based Monte Carlo method to calculate the failure probability of vertical breakwaters. A breakwater in Qinhuangdao is taken as an example to verify the method. All parameters in the sliding failure limit state function and the overturning limit state function are taken as variables, and the failure probability and reliability index are calculated using the artificial neural network-based Monte Carlo method. The results are compared with those calculated using the variable-independent JC method and Monte Carlo simulation (including the direct sampling and importance sampling variants). The reliability indexes calculated using the artificial neural network-based Monte Carlo method are similar to those from Monte Carlo simulation, but slightly lower than those from the variable-independent JC method.
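A crude (non-neural-network) Monte Carlo failure-probability estimate of the kind the neural network surrogate accelerates might look like the sketch below. The resistance/load limit state and its distributions are hypothetical stand-ins for the breakwater sliding and overturning functions:

```python
import math
import random

def failure_probability(limit_state, sample_inputs, n=200_000, seed=11):
    """Crude Monte Carlo estimate of a failure probability: the fraction
    of samples for which the limit state function g(X) is negative."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n) if limit_state(*sample_inputs(rng)) < 0.0)
    return failures / n

# Hypothetical resistance/load pair: g = R - S with R ~ N(10, 1), S ~ N(6, 1)
pf = failure_probability(lambda res, load: res - load,
                         lambda r: (r.gauss(10.0, 1.0), r.gauss(6.0, 1.0)))
# For independent normals the exact answer is Phi(-(10 - 6) / sqrt(2)),
# i.e. a reliability index beta = 4 / sqrt(2) ~= 2.83
```

In practice the limit state evaluation is the expensive part, which is why replacing it with a trained neural network surrogate makes the sampling affordable.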
Research on Calculating Definite Integrals Based on the Monte-Carlo Method
Institute of Scientific and Technical Information of China (English)
马海峰; 刘宇熹
2011-01-01
The Monte-Carlo method is a very important class of numerical methods guided by probability and statistics theory. This paper presents an algorithmic implementation of the Monte-Carlo method for calculating definite integrals, and compares it with interpolation-based quadrature in terms of accuracy and time efficiency. The experimental results show that the Monte-Carlo method for calculating definite integrals is widely applicable and computationally efficient.
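A minimal sample-mean Monte Carlo integrator, as an illustration of the technique the paper implements (the paper's own code is not reproduced here; the integrand is a toy example):

```python
import math
import random

def mc_integral(f, a, b, n=100_000, seed=42):
    """Sample-mean Monte Carlo estimate of the definite integral of f on
    [a, b]: (b - a) times the average of f at uniform random points."""
    rng = random.Random(seed)
    return (b - a) * sum(f(rng.uniform(a, b)) for _ in range(n)) / n

estimate = mc_integral(math.sin, 0.0, math.pi)  # true value is 2
```

The error shrinks as 1/sqrt(n) regardless of dimension, which is why the method's applicability is so broad compared with interpolation rules whose cost grows with dimension.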
Nasir, M.; Pratama, D.; Anam, C.; Haryanto, F.
2016-03-01
The aim of this research was to calculate Size Specific Dose Estimates (SSDE) generated by the Varian OBI CBCT v1.4 X-ray tube working at 100 kV using EGSnrc Monte Carlo simulations. The EGSnrc Monte Carlo code used in this simulation was divided into two parts. Phase space file data resulted by the first part simulation became an input to the second part. This research was performed with varying phantom diameters of 5 to 35 cm and varying phantom lengths of 10 to 25 cm. Dose distribution data were used to calculate SSDE values using the trapezoidal rule (trapz) function in a Matlab program. SSDE obtained from this calculation was compared to that in the AAPM report and experimental data. It was obtained that the normalization of SSDE value for each phantom diameter was between 1.00 and 3.19. The normalization of SSDE value for each phantom length was between 0.96 and 1.07. The statistical error in this simulation was 4.98% for varying phantom diameters and 5.20% for varying phantom lengths. This study demonstrated the accuracy of the Monte Carlo technique in simulating the dose calculation. In the future, the influence of cylindrical phantom material to SSDE would be studied.
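The trapezoidal integration step (Matlab's trapz) can be sketched in Python; the dose profile values below are hypothetical illustrations, not the simulated OBI data:

```python
def trapz(y, x):
    """Composite trapezoidal rule over a (possibly non-uniform) grid,
    in the spirit of Matlab's trapz."""
    return sum((x[i + 1] - x[i]) * (y[i] + y[i + 1]) / 2.0
               for i in range(len(x) - 1))

# Hypothetical relative dose profile sampled along the phantom axis (cm)
z_cm = [0.0, 2.5, 5.0, 7.5, 10.0]
dose = [0.2, 0.8, 1.0, 0.8, 0.2]  # made-up values for illustration
integral = trapz(dose, z_cm)
mean_dose = integral / (z_cm[-1] - z_cm[0])  # average over the scan length
```

Dividing the integrated dose by the scan length yields the average dose along the axis, the quantity that enters SSDE-style normalizations.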
Global Depletion of Groundwater Resources: Past and Future Analyses
Bierkens, M. F.; de Graaf, I. E. M.; Van Beek, L. P.; Wada, Y.
2014-12-01
Globally, about 17% of the crops are irrigated, yet irrigation accounts for 40% of the global food production. As more than 40% of irrigation water comes from groundwater, groundwater abstraction rates are large and exceed natural recharge rates in many regions of the world, thus leading to groundwater depletion. In this paper we provide an overview of recent research on global groundwater depletion. We start with presenting various estimates of global groundwater depletion, both from flux based as well as volume based methods. We also present estimates of the contribution of non-renewable groundwater to irrigation water consumption and how this contribution developed during the last 50 years. Next, using a flux based method, we provide projections of groundwater depletion for the coming century under various socio-economic and climate scenarios. As groundwater depletion contributes to sea-level rise, we also provide estimates of this contribution from the past as well as for future scenarios. Finally, we show recent results of groundwater level changes and change in river flow as a result of global groundwater abstractions as obtained from a global groundwater flow model.
Energy Technology Data Exchange (ETDEWEB)
Ureba, A.; Palma, B. A.; Leal, A.
2011-07-01
To develop a more time-efficient optimization method based on linear programming, implementing a multi-objective penalty function that also permits an integrated simultaneous-boost solution considering two target volumes at the same time.
Geometrical and Monte Carlo projectors in 3D PET reconstruction
Aguiar, Pablo; Rafecas López, Magdalena; Ortuno, Juan Enrique; Kontaxakis, George; Santos, Andrés; Pavía, Javier; Ros, Domènec
2010-01-01
Purpose: In the present work, the authors compare geometrical and Monte Carlo projectors in detail. The geometrical projectors considered were the conventional geometrical Siddon ray-tracer (S-RT) and the orthogonal distance-based ray-tracer (OD-RT), based on computing the orthogonal distance from the center of image voxel to the line-of-response. A comparison of these geometrical projectors was performed using different point spread function (PSF) models. The Monte Carlo-based method under c...
García de la Torre, Nuria; Durán, Alejandra; Del Valle, Laura; Fuentes, Manuel; Barca, Idoya; Martín, Patricia; Montañez, Carmen; Perez-Ferre, Natalia; Abad, Rosario; Sanz, Fuencisla; Galindo, Mercedes; Rubio, Miguel A; Calle-Pascual, Alfonso L
2013-08-01
The aims are to define the regression rate in newly diagnosed type 2 diabetes after lifestyle intervention and pharmacological therapy based on a SMBG (self-monitoring of blood glucose) strategy in routine practice as compared to standard HbA1c-based treatment and to assess whether a supervised exercise program has additional effects. St Carlos study is a 3-year, prospective, randomized, clinic-based, interventional study with three parallel groups. Hundred and ninety-five patients were randomized to the SMBG intervention group [I group; n = 130; Ia: SMBG (n = 65) and Ib: SMBG + supervised exercise (n = 65)] and to the HbA1c control group (C group) (n = 65). The primary outcome was to estimate the regression rate of type 2 diabetes (HbA1c 4 kg was 3.6 (1.8-7); p < 0.001. This study shows that the use of SMBG in an educational program effectively increases the regression rate in newly diagnosed type 2 diabetic patients after 3 years of follow-up. These data suggest that SMBG-based programs should be extended to primary care settings where diabetic patients are usually attended.
Energy Technology Data Exchange (ETDEWEB)
Arreola V, G. [IPN, Escuela Superior de Fisica y Matematicas, Posgrado en Ciencias Fisicomatematicas, area en Ingenieria Nuclear, Unidad Profesional Adolfo Lopez Mateos, Edificio 9, Col. San Pedro Zacatenco, 07730 Mexico D. F. (Mexico); Vazquez R, R.; Guzman A, J. R., E-mail: energia.arreola.uam@gmail.com [Universidad Autonoma Metropolitana, Unidad Iztapalapa, Area de Ingenieria en Recursos Energeticos, Av. San Rafael Atlixco 186, Col. Vicentina, 09340 Mexico D. F. (Mexico)
2012-10-15
In this work a comparative analysis of the results for neutron dispersion in a non-multiplying semi-infinite medium is presented. One boundary of this medium is located at the origin of coordinates, where there is also a neutron source in beam form, i.e., μ₀ = 1. The neutron dispersion is studied with the statistical Monte Carlo method and through one-dimensional, one-energy-group transport theory. Transport theory gives a semi-analytic solution for this problem, while the statistical solution for the flux was obtained with the MCNPX code. Dispersion in light water and in heavy water was studied. A first remarkable result is that both methods locate the maximum of the neutron distribution at less than two transport mean free paths for heavy water and less than ten transport mean free paths for light water; the differences between the methods are larger in the light water case. A second remarkable result is that the two distributions behave similarly at small numbers of mean free paths, while at large numbers the transport theory solution tends to an asymptotic value and the statistical solution tends to zero. The existence of a low-energy neutron current directed toward the source is demonstrated, opposite in sense to the high-energy neutron current coming from the source itself. (Author)
Futter, M. N.; Lucas, R. W.; Egnell, G.; Holmström, H.; Laudon, H.; Nilsson, U.; Oni, S. K.; Lämâs, T.
2012-04-01
Intensive forest harvesting has the potential to remove base cations (BC; Ca, K, Mg and Na) from ecosystems more rapidly than they can be replaced through mineral weathering. For this reason, whole tree harvesting (i.e. branches and needles harvested) for biofuel production in Sweden and elsewhere may not be ecologically sustainable. Under some circumstances, excessive BC removal may lead to re-acidification of soil and surface waters and a reduction of the growth potential in subsequent forest rotations. There is considerable uncertainty in all components of stand-scale BC mass balance estimates associated with forest harvests. Estimates of weathering rates from a single site can range over more than an order of magnitude, deposition estimates are often poorly constrained and tree element concentrations can show considerable variation. Despite these uncertainties, BC dynamics play a key role in forest management and planning. The Heureka decision support system has been developed in Sweden for multi-criteria analysis of forest management scenarios. Heureka can be used to estimate timber production and economic return under a series of user-specified constraints. Here, we present a model application based on Heureka, a database of tree element concentrations, published weathering rate estimates and long-term monitoring data to estimate BC budgets and their associated uncertainty under a series of forest harvest scenarios at the Strömsjöliden production park in northern Sweden. We evaluated BC budgets under four long term forest management scenarios associated with "business as usual", more intensive production, nature conservation and reindeer husbandry. Despite the large amount of uncertainty, a number of trends emerged. Nature conservation and reindeer husbandry scenarios were, in general, more sustainable than the other scenarios. Model results suggested that stem-only harvest could remove BC more rapidly than they could be replaced by weathering at some
Depleted uranium disposal options evaluation
Energy Technology Data Exchange (ETDEWEB)
Hertzler, T.J.; Nishimoto, D.D.; Otis, M.D. [Science Applications International Corp., Idaho Falls, ID (United States). Waste Management Technology Div.
1994-05-01
The Department of Energy (DOE), Office of Environmental Restoration and Waste Management, has chartered a study to evaluate alternative management strategies for depleted uranium (DU) currently stored throughout the DOE complex. Historically, DU has been maintained as a strategic resource because of uses for DU metal and potential uses for further enrichment or for uranium oxide as breeder reactor blanket fuel. This study has focused on evaluating the disposal options for DU if it were considered a waste. This report is in no way declaring these DU reserves a "waste," but is intended to provide baseline data for comparison with other management options for use of DU. Topics considered in this report include: retrievable disposal; permanent disposal; health hazards; radiation toxicity and chemical toxicity.
Quantum Monte Carlo using a Stochastic Poisson Solver
Energy Technology Data Exchange (ETDEWEB)
Das, D; Martin, R M; Kalos, M H
2005-05-06
Quantum Monte Carlo (QMC) is an extremely powerful method to treat many-body systems. Usually quantum Monte Carlo has been applied in cases where the interaction potential has a simple analytic form, like the 1/r Coulomb potential. However, in a complicated environment such as a semiconductor heterostructure, the evaluation of the interaction itself becomes a non-trivial problem. Obtaining the potential from any grid-based finite-difference method, for every walker and every step, is infeasible. We demonstrate an alternative approach of solving the Poisson equation by a classical Monte Carlo within the overall quantum Monte Carlo scheme. We have developed a modified "Walk On Spheres" algorithm using Green's function techniques, which can efficiently account for the interaction energy of walker configurations typical of quantum Monte Carlo algorithms. This stochastically obtained potential can be easily incorporated within popular quantum Monte Carlo techniques like variational Monte Carlo (VMC) or diffusion Monte Carlo (DMC). We demonstrate the validity of this method by studying a simple problem, the polarization of a helium atom in the electric field of an infinite capacitor.
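The "Walk On Spheres" idea can be illustrated in a few lines. The sketch below is not the paper's modified Green's-function variant for the Poisson equation; it is the plain Walk-on-Spheres solver for Laplace's equation on the unit disk (the boundary data and all parameter choices are illustrative assumptions), which is the core the paper builds on.

```python
import math
import random

def walk_on_spheres(x, y, boundary_value, eps=1e-3, rng=random):
    """Estimate u(x, y) for Laplace's equation on the unit disk by one
    Walk-on-Spheres trajectory: repeatedly jump to a uniformly random
    point on the largest circle centred at the walker that fits inside
    the domain, until the walker is within eps of the boundary."""
    while True:
        r = math.hypot(x, y)
        d = 1.0 - r          # distance from the walker to the disk boundary
        if d < eps:
            # Project onto the boundary and read off the boundary condition.
            return boundary_value(x / r, y / r)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        x += d * math.cos(phi)
        y += d * math.sin(phi)

rng = random.Random(42)
g = lambda bx, by: bx                # boundary data g = x, so u(x, y) = x
n = 4000
estimate = sum(walk_on_spheres(0.3, 0.2, g, rng=rng) for _ in range(n)) / n
print(estimate)                      # close to the exact value u(0.3, 0.2) = 0.3
```

Averaging many independent walks gives the harmonic-measure expectation of the boundary data, which is exactly the solution of Laplace's equation at the starting point.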
Krakowska, B; Custers, D; Deconinck, E; Daszykowski, M
2016-02-07
The aim of this work was to develop a general framework for the validation of discriminant models, based on the Monte Carlo approach, used in the context of authenticity studies based on chromatographic impurity profiles. The validation approach was applied to evaluate the usefulness of the diagnostic logic rule obtained from the partial least squares discriminant model (PLS-DA) that was built to discriminate authentic Viagra® samples from counterfeits (a two-class problem). The major advantage of the proposed validation framework stems from the possibility of obtaining distributions for different figures of merit that describe the PLS-DA model, such as sensitivity, specificity, correct classification rate and area under the curve, as a function of model complexity. Therefore, one can quickly evaluate their uncertainty estimates. Moreover, Monte Carlo model validation allows balanced sets of training samples to be designed, which is required at the stage of the construction of PLS-DA and is recommended in order to obtain fair estimates that are based on an independent set of samples. In this study, as an illustrative example, 46 authentic Viagra® samples and 97 counterfeit samples were analyzed and described by their impurity profiles, determined using high-performance liquid chromatography with photodiode array detection, and further discriminated using the PLS-DA approach. In addition, we demonstrated how to extend the Monte Carlo validation framework with four different variable selection schemes: the elimination of uninformative variables, the importance of a variable in projections, the selectivity ratio and significance multivariate correlation. The best PLS-DA model was based on a subset of variables that were selected using the variable importance in the projection approach. For an independent test set, average estimates with the corresponding standard deviation (based on 1000 Monte Carlo runs) of the correct
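The central idea, obtaining a whole distribution of a figure of merit by repeated random resampling, can be sketched without chromatographic data. The toy below substitutes a nearest-centroid classifier for PLS-DA and synthetic two-class data for the impurity profiles; both are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data (a hypothetical stand-in for the chromatographic
# impurity profiles in the paper): 40 samples per class, 10 variables.
X = np.vstack([rng.normal(0.0, 1.0, (40, 10)),
               rng.normal(1.2, 1.0, (40, 10))])
y = np.array([0] * 40 + [1] * 40)

def nearest_centroid_accuracy(X_tr, y_tr, X_te, y_te):
    """Fit class centroids on the training split, score the test split."""
    c0 = X_tr[y_tr == 0].mean(axis=0)
    c1 = X_tr[y_tr == 1].mean(axis=0)
    pred = (np.linalg.norm(X_te - c1, axis=1)
            < np.linalg.norm(X_te - c0, axis=1)).astype(int)
    return (pred == y_te).mean()

# Monte Carlo validation: many random balanced train/test splits yield a
# whole distribution of the figure of merit, not a single point estimate.
accs = []
for _ in range(1000):
    idx = np.concatenate([rng.permutation(40), 40 + rng.permutation(40)])
    tr = np.concatenate([idx[:30], idx[40:70]])    # 30 per class for training
    te = np.concatenate([idx[30:40], idx[70:80]])  # 10 per class for testing
    accs.append(nearest_centroid_accuracy(X[tr], y[tr], X[te], y[te]))

accs = np.array(accs)
print(accs.mean(), accs.std())  # mean correct classification rate and spread
```

The standard deviation of `accs` is precisely the kind of uncertainty estimate for the correct classification rate that the framework delivers.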
Energy Technology Data Exchange (ETDEWEB)
Marcus, Ryan C. [Los Alamos National Laboratory
2012-07-25
MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.
Monte Carlo methods for electromagnetics
Sadiku, Matthew N. O.
2009-01-01
Until now, novices had to painstakingly dig through the literature to discover how to use Monte Carlo techniques for solving electromagnetic problems. Written by one of the foremost researchers in the field, Monte Carlo Methods for Electromagnetics provides a solid understanding of these methods and their applications in electromagnetic computation. Including much of his own work, the author brings together essential information from several different publications. Using a simple, clear writing style, the author begins with a historical background and review of electromagnetic theory. After addressing probability and statistics, he introduces the finite difference method as well as the fixed and floating random walk Monte Carlo methods. The text then applies the Exodus method to Laplace's and Poisson's equations and presents Monte Carlo techniques for handling Neumann problems. It also deals with whole field computation using the Markov chain, applies Monte Carlo methods to time-varying diffusion problems, and ...
Energy Technology Data Exchange (ETDEWEB)
Nomoto, S. [Japan Oil Development Co. Ltd., Tokyo (Japan); Fujita, K. [The University of Tokyo, Tokyo (Japan)
1997-05-01
Data for large giant oil fields with minable reserves of one billion barrels or more were accumulated to structure a new oil field depletion model and estimate production in each oil field. Analysis of events recognized in large giant oil fields made clear the necessity of correcting the conventional oil depletion model. The newly proposed model changes the definitions of the depletion period, the depletion rate, build-up production (a period in which the production rate increases) and plateau production (a period in which production remains constant). Two hundred and twenty-five large giant oil fields were classified into those in a depletion period, an initial development phase, and a plateau period. The following findings were obtained from trial calculations using the new model: under an assumption of a demand growth rate of 1.5%, oil field groups in the initial development phase will reach plateau production in the year 2002, and oil fields in the depletion period will continue their production decline, hence overall production growth after that year will slow down. Because the oil field groups in the plateau period will shift into decline in 2014, overall production will then decrease. The year 2014 is about ten years later than the estimate given recently by Campbell. Undiscovered resources are outside these discussions. 11 refs., 9 figs., 2 tabs.
Energy Technology Data Exchange (ETDEWEB)
Zhang Di; Zankl, Maria; DeMarco, John J.; Cagnon, Chris H.; Angel, Erin; Turner, Adam C.; McNitt-Gray, Michael F. [David Geffen School of Medicine at UCLA, Los Angeles, California 90024 (United States); German Research Center for Environmental Health (GmbH), Institute of Radiation Protection, Helmholtz Zentrum Muenchen, Ingolstaedter Landstrasse 1, 85764 Neuherberg (Germany); David Geffen School of Medicine at UCLA, Los Angeles, California 90024 (United States)
2009-12-15
Purpose: Previous work has demonstrated that there are significant dose variations with a sinusoidal pattern on the periphery of a CTDI 32 cm phantom or on the surface of an anthropomorphic phantom when helical CT scanning is performed, resulting in the creation of "hot" spots or "cold" spots. The purpose of this work was to perform preliminary investigations into the feasibility of exploiting these variations to reduce dose to selected radiosensitive organs solely by varying the tube start angle in CT scans. Methods: Radiation doses to several radiosensitive organs (including breasts, thyroid, uterus, gonads, and eye lenses) resulting from MDCT scans were estimated using Monte Carlo simulation methods on voxelized patient models, including GSF's Baby, Child, and Irene. Dose to the fetus was also estimated using four pregnant female models based on CT images of the pregnant patients. Whole-body scans were simulated using 120 kVp, 300 mAs, both 28.8 and 40 mm nominal collimations, and pitch values of 1.5, 1.0, and 0.75 under a wide range of start angles (0 deg. - 340 deg. in 20 deg. increments). The relationship between tube start angle and organ dose was examined for each organ, and the potential dose reduction was calculated. Results: Some organs exhibit a strong dose variation, depending on the tube start angle. For small peripheral organs (e.g., the eye lenses of the Baby phantom at pitch 1.5 with 40 mm collimation), the minimum dose can be 41% lower than the maximum dose, depending on the tube start angle. In general, larger dose reductions occur for smaller peripheral organs in smaller patients when wider collimation is used. Pitch 1.5 and pitch 0.75 have different mechanisms of dose reduction. For pitch 1.5 scans, the dose is usually lowest when the tube start angle is such that the x-ray tube is posterior to the patient when it passes the longitudinal location of the organ. For pitch 0.75 scans, the dose is lowest
DEPLETION POTENTIAL OF COLLOIDS: A DIRECT SIMULATION STUDY
Institute of Scientific and Technical Information of China (English)
LI Wei-hua
2001-01-01
[1] Asakura S, Oosawa F. Surface tension of high-polymer solutions [J]. J Chem Phys, 1954, 22: 1255-1255. [2] Ye X, Narayanan T, Tong P, et al. Depletion interactions in colloid-polymer mixtures [J]. Phys Rev E, 1996, 54: 6500-6510. [3] Kaplan P D, Faucheux L P, Libchaber A J. Direct observation of the entropic potential in a binary suspension [J]. Phys Rev Lett, 1994, 73: 2793-2796. [4] Ohshima Y N, Sakagami H, Okumoto K, et al. Direct measurement of infinitesimal depletion force in a colloid-polymer mixture by laser radiation pressure [J]. Phys Rev Lett, 1997, 78: 3963-3966. [5] Dinsmore A D, Yodh A G, Pine D J. Entropic control of particle motion using passive surface microstructures [J]. Nature (London), 1996, 383: 239-242. [6] Dinsmore A D, Wong D T, Nelson P, et al. Hard spheres in vesicles: curvature-induced forces and particle-induced curvature [J]. Phys Rev Lett, 1998, 80: 409-412. [7] Götzelmann B, Evans R, Dietrich S. Depletion forces in fluids [J]. Phys Rev E, 1998, 57: 6785-6800. [8] Miao Y, Cates M E, Lekkerkerker H N W. Depletion force in colloidal systems [J]. Physica A, 1995, 222: 10-24. [9] Biben J, Bladon P, Frenkel D. Depletion effects in binary hard-sphere fluids [J]. J Phys: Condens Matter, 1996, 8: 10799-10821. [10] Dickman R, Attard P, Simonian V. Entropic forces in binary hard sphere mixtures: Theory and simulation [J]. J Chem Phys, 1997, 107: 205-213. [11] Bennett C H. Efficient estimation of free energy differences from Monte Carlo data [J]. J Comput Phys, 1976, 22: 245-268; see also Allen M P, Tildesley D J. Computer Simulation of Liquids (Chap. 7) [M]. Oxford: Clarendon Press, 1994.
Monte Carlo methods for light propagation in biological tissues
Vinckenbosch, Laura; Lacaux, Céline; Tindel, Samy; Thomassin, Magalie; Obara, Tiphaine
2016-01-01
Light propagation in turbid media is driven by the equation of radiative transfer. We give a formal probabilistic representation of its solution in the framework of biological tissues and we implement algorithms based on Monte Carlo methods in order to estimate the quantity of light that is received by a homogeneous tissue when emitted by an optic fiber. A variance reduction method is studied and implemented, as well as a Markov chain Monte Carlo method based on the Metropolis–Hastings algori...
Ego depletion increases risk-taking.
Fischer, Peter; Kastenmüller, Andreas; Asal, Kathrin
2012-01-01
We investigated how the availability of self-control resources affects risk-taking inclinations and behaviors. We proposed that risk-taking often occurs from suboptimal decision processes and heuristic information processing (e.g., when a smoker suppresses or neglects information about the health risks of smoking). Research revealed that depleted self-regulation resources are associated with reduced intellectual performance and reduced abilities to regulate spontaneous and automatic responses (e.g., control aggressive responses in the face of frustration). The present studies transferred these ideas to the area of risk-taking. We propose that risk-taking is increased when individuals find themselves in a state of reduced cognitive self-control resources (ego-depletion). Four studies supported these ideas. In Study 1, ego-depleted participants reported higher levels of sensation seeking than non-depleted participants. In Study 2, ego-depleted participants showed higher levels of risk-tolerance in critical road traffic situations than non-depleted participants. In Study 3, we ruled out two alternative explanations for these results: neither cognitive load nor feelings of anger mediated the effect of ego-depletion on risk-taking. Finally, Study 4 clarified the underlying psychological process: ego-depleted participants feel more cognitively exhausted than non-depleted participants and thus are more willing to take risks. Discussion focuses on the theoretical and practical implications of these findings.
Metropolis Methods for Quantum Monte Carlo Simulations
Ceperley, D. M.
2003-01-01
Since its first description fifty years ago, the Metropolis Monte Carlo method has been used in a variety of different ways for the simulation of continuum quantum many-body systems. This paper will consider some of the generalizations of the Metropolis algorithm employed in quantum Monte Carlo: variational Monte Carlo, dynamical methods for projector Monte Carlo (i.e., diffusion Monte Carlo with rejection), multilevel sampling in path integral Monte Carlo, the sampling of permutations, ...
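For readers new to the method, a minimal Metropolis sampler for a one-dimensional standard normal target (an illustrative toy, not one of the quantum Monte Carlo variants discussed in the paper) looks like this:

```python
import math
import random

def metropolis_normal(n_samples, step=1.0, seed=0):
    """Metropolis sampling of a standard normal via a symmetric
    random-walk proposal: accept x' with probability
    min(1, pi(x') / pi(x)), where pi(x) is proportional to exp(-x^2/2)."""
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_samples):
        x_new = x + rng.uniform(-step, step)
        # log pi(x') - log pi(x) for the standard normal target
        log_ratio = 0.5 * (x * x - x_new * x_new)
        if log_ratio >= 0 or rng.random() < math.exp(log_ratio):
            x = x_new                 # accept the move
        samples.append(x)             # a rejected move repeats the old x
    return samples

s = metropolis_normal(100_000)
mean = sum(s) / len(s)
var = sum(v * v for v in s) / len(s) - mean * mean
print(mean, var)  # near 0 and 1 for the standard normal target
```

The quantum variants named in the abstract keep this accept/reject core and change only the proposal and the target density.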
Energy Technology Data Exchange (ETDEWEB)
Arribas de Paz, L. M.; Garcia Barquero, C.; Navarro Montesinos, J.; Cuerva Tejero, A.; Cruz Cruz, I.; Roque Lopez, V.; Marti Perez, I. [Ciemat. Madrid (Spain)
2000-07-01
The objective of this work is to model the wind field in the surroundings of the Spanish Antarctic Base (BAE in the following). The need for such work comes from the necessity of an energy source able to supply the energy demand at the BAE during the Antarctic winter. When the BAE is in operation (in the Antarctic summer) the energy supply comes from a diesel engine. In the Antarctic winter the base is closed, but the demand for energy supply is growing every year because of the increase in the number of technical and scientific equipment that remains at the BAE taking different measurements. For this purpose the top of a closed hill called Pico Radio, not perturbed by close obstacles, has been chosen as the best site for the measurements. The measurement station is made up of a sonic anemometer and a small wind generator to supply the energy needed by the sensor-head heating of the anemometer. This way, it will also be used as a proof of the suitability of a wind generator at the newly chosen site, under those special climatic conditions. (Author) 3 refs.
Depleted argon from underground sources
Energy Technology Data Exchange (ETDEWEB)
Back, H.O.; /Princeton U.; Alton, A.; /Augustana U. Coll.; Calaprice, F.; Galbiati, C.; Goretti, A.; /Princeton U.; Kendziora, C.; /Fermilab; Loer, B.; /Princeton U.; Montanari, D.; /Fermilab; Mosteiro, P.; /Princeton U.; Pordes, S.; /Fermilab
2011-09-01
Argon is a powerful scintillator and an excellent medium for detection of ionization. Its high discrimination power against minimum ionizing tracks, in favor of selection of nuclear recoils, makes it an attractive medium for direct detection of WIMP dark matter. However, cosmogenic {sup 39}Ar contamination in atmospheric argon limits the size of liquid argon dark matter detectors due to pile-up. The cosmic ray shielding by the earth means that argon from deep underground is depleted in {sup 39}Ar. In Cortez, Colorado, a CO{sub 2} well has been discovered to contain approximately 500 ppm of argon as a contamination in the CO{sub 2}. In order to produce argon for dark matter detectors we first concentrate the argon locally to 3-5% in an Ar, N{sub 2}, and He mixture, extracted from the CO{sub 2} through chromatographic gas separation. The N{sub 2} and He will be removed by continuous cryogenic distillation in the Cryogenic Distillation Column recently built at Fermilab. In this talk we will discuss the entire extraction and purification process, with emphasis on the recent commissioning and initial performance of the cryogenic distillation column purification.
Energy Technology Data Exchange (ETDEWEB)
Moradmand Jalali, Hamed; Bashiri, Hadis, E-mail: hbashiri@kashanu.ac.ir; Rasa, Hossein
2015-05-01
In the present study, the mechanism of free radical production by light-reflective agents in sunscreens (TiO{sub 2}, ZnO and ZrO{sub 2}) was obtained by applying kinetic Monte Carlo simulation. The values of the rate constants for each step of the suggested mechanism have been obtained by simulation. The effect of the initial concentration of mineral oxides and uric acid on the rate of uric acid photo-oxidation by irradiation of some sun care agents has been studied. The kinetic Monte Carlo simulation results agree qualitatively with the existing experimental data for the production of free radicals by sun care agents. - Highlights: • The mechanism and kinetics of uric acid photo-oxidation by irradiation of sun care agents have been obtained by simulation. • The mechanism has been used for free radical production of TiO{sub 2} (rutile and anatase), ZnO and ZrO{sub 2}. • The ratios of the photo-activity of ZnO to anatase, rutile and ZrO{sub 2} have been obtained. • By doubling the initial concentration of mineral oxide, the rate of reaction was doubled. • The optimum ratio of the initial concentration of mineral oxides to uric acid has been obtained.
Heinrich, Josué Miguel; Niizawa, Ignacio; Botta, Fausto Adrián; Trombert, Alejandro Raúl; Irazoqui, Horacio Antonio
2012-01-01
In a previous study, we developed a methodology to assess the intrinsic optical properties governing the radiation field in algae suspensions. With these properties at our disposal, a Monte Carlo simulation program is developed and used in this study as a predictive autonomous program applied to the simulation of experiments that reproduce the common illumination conditions found in processes of large-scale production of microalgae, especially when using open ponds such as raceway ponds. The simulation module is validated by comparing the results of experimental measurements made on an artificially illuminated algal suspension with those predicted by the Monte Carlo program. This experiment deals with a situation that resembles that of an open pond or a raceway pond, except for the fact that, for convenience, the experimental arrangement appears as if those reactors were turned upside down. It serves the purpose of assessing to what extent the scattering phenomena are important for the prediction of the spatial distribution of the radiant energy density. The simulation module developed can be applied to compute the local energy density inside photobioreactors with the goal of optimizing their design and operating conditions.
Information-Geometric Markov Chain Monte Carlo Methods Using Diffusions
Directory of Open Access Journals (Sweden)
Samuel Livingstone
2014-06-01
Recent work incorporating geometric ideas in Markov chain Monte Carlo is reviewed in order to highlight these advances and their possible application in a range of domains beyond statistics. A full exposition of Markov chains and their use in Monte Carlo simulation for statistical inference and molecular dynamics is provided, with particular emphasis on methods based on Langevin diffusions. After this, geometric concepts in Markov chain Monte Carlo are introduced. A full derivation of the Langevin diffusion on a Riemannian manifold is given, together with a discussion of the appropriate Riemannian metric choice for different problems. A survey of applications is provided, and some open questions are discussed.
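The Langevin-diffusion methods the review surveys reduce, in the flat Euclidean case, to the Metropolis-adjusted Langevin algorithm (MALA): the proposal drifts along the gradient of the log-target and a Metropolis-Hastings correction keeps the chain exact. A minimal sketch for a standard normal target (the target and step size are illustrative choices, not taken from the review):

```python
import math
import random

def mala_normal(n_samples, eps=0.5, seed=1):
    """MALA for a standard normal target: propose
    x' = x + (eps^2 / 2) * grad_log_pi(x) + eps * xi,  xi ~ N(0, 1),
    then apply the Metropolis-Hastings accept/reject correction."""
    rng = random.Random(seed)
    log_pi = lambda x: -0.5 * x * x          # log target, up to a constant
    grad = lambda x: -x                      # d/dx log pi
    def log_q(x_from, x_to):                 # log proposal density q(x_to | x_from)
        m = x_from + 0.5 * eps * eps * grad(x_from)
        return -((x_to - m) ** 2) / (2.0 * eps * eps)
    x, out = 0.0, []
    for _ in range(n_samples):
        x_new = x + 0.5 * eps * eps * grad(x) + eps * rng.gauss(0.0, 1.0)
        log_alpha = (log_pi(x_new) - log_pi(x)
                     + log_q(x_new, x) - log_q(x, x_new))
        if log_alpha >= 0 or rng.random() < math.exp(log_alpha):
            x = x_new
        out.append(x)
    return out

s = mala_normal(100_000)
print(sum(s) / len(s))  # sample mean, near 0 for this target
```

The Riemannian-manifold versions discussed in the review replace the identity metric implicit here with a position-dependent one.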
Accelerated Monte Carlo by Embedded Cluster Dynamics
Brower, R. C.; Gross, N. A.; Moriarty, K. J. M.
1991-07-01
We present an overview of the new methods for embedding Ising spins in continuous fields to achieve accelerated cluster Monte Carlo algorithms. The methods of Brower and Tamayo and of Wolff are summarized, and variations are suggested for the O(N) models based on multiple embedded Z2 spin components and/or correlated projections. Topological features are discussed for the XY model and numerical simulations presented for d=2, d=3 and mean field theory lattices.
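The embedding methods summarized here build on cluster updates for the plain Ising model. The sketch below shows the underlying Wolff single-cluster move for the 2D Ising model (lattice size, temperature and seed are illustrative assumptions; the embedded-spin generalization for continuous fields is not shown):

```python
import math
import random

def wolff_step(spins, L, beta, rng):
    """One Wolff cluster update of the 2D Ising model on an L x L torus:
    grow a cluster of aligned spins, adding each aligned neighbour with
    probability p = 1 - exp(-2*beta), then flip the whole cluster."""
    p_add = 1.0 - math.exp(-2.0 * beta)
    start = (rng.randrange(L), rng.randrange(L))
    s0 = spins[start]
    cluster, stack = {start}, [start]
    while stack:
        i, j = stack.pop()
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            n = (ni % L, nj % L)
            if n not in cluster and spins[n] == s0 and rng.random() < p_add:
                cluster.add(n)
                stack.append(n)
    for site in cluster:
        spins[site] = -s0

rng = random.Random(7)
L, beta = 16, 1.0 / 1.5          # T = 1.5, well below T_c ~ 2.269
spins = {(i, j): 1 for i in range(L) for j in range(L)}  # ordered start
mags = []
for sweep in range(300):
    wolff_step(spins, L, beta, rng)
    if sweep >= 100:             # discard burn-in
        mags.append(abs(sum(spins.values())) / (L * L))
print(sum(mags) / len(mags))     # |m| stays close to 1 in the ordered phase
```

Because whole clusters flip at once, critical slowing down is dramatically reduced compared with single-spin Metropolis updates, which is the acceleration the embedded-cluster methods carry over to continuous-field models.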
Energy Technology Data Exchange (ETDEWEB)
Rinkel, J.; Dinten, J.M.; Tabary, J.
2004-07-01
The use of focused anti-scatter grids on digital radiographic systems with two-dimensional detectors produces acquisitions with a decreased scatter-to-primary ratio and thus improved contrast and resolution. Simulation software is of great interest for optimizing grid configuration according to a specific application. Classical simulators are based on complete, detailed geometric descriptions of the grid. They are accurate but very time consuming, since they use Monte Carlo code to simulate scatter within the high-frequency grids. We propose a new practical method which couples an analytical simulation of the grid interaction with a radiographic system simulation program. First, a two-dimensional matrix of probability depending on the grid is created offline, in which the first dimension represents the angle of impact with respect to the normal to the grid lines and the other the energy of the photon. This matrix of probability is then used by the Monte Carlo simulation software in order to provide the final scattered-flux image. To evaluate the gain in CPU time, we define the increase factor as the increase in the CPU time of the simulation with, as opposed to without, the grid. Increase factors were calculated with the new model and with classical methods representing the grid, through its CAD model, as part of the object. With the new method, increase factors are smaller by one to two orders of magnitude than with the classical approach. These results were obtained with a difference in calculated scatter of less than five percent between the new and the classical method. (authors)
The Chemistry and Toxicology of Depleted Uranium
Directory of Open Access Journals (Sweden)
Sidney A. Katz
2014-03-01
Natural uranium is comprised of three radioactive isotopes: 238U, 235U, and 234U. Depleted uranium (DU) is a byproduct of the processes for the enrichment of the naturally occurring 235U isotope. The worldwide stockpile contains some 1½ million tons of depleted uranium. Some of it has been used to dilute weapons-grade uranium (~90% 235U) down to reactor-grade uranium (~5% 235U), and some of it has been used for heavy tank armor and for the fabrication of armor-piercing bullets and missiles. Such weapons were used by the military in the Persian Gulf, the Balkans and elsewhere. The testing of depleted uranium weapons and their use in combat have resulted in environmental contamination and human exposure. Although the chemical and the toxicological behaviors of depleted uranium are essentially the same as those of natural uranium, the respective chemical forms and isotopic compositions in which they usually occur are different. The chemical and radiological toxicity of depleted uranium can injure biological systems. Normal functioning of the kidney, liver, lung, and heart can be adversely affected by depleted uranium intoxication. The focus of this review is on the chemical and toxicological properties of depleted and natural uranium and some of the possible consequences from long-term, low-dose exposure to depleted uranium in the environment.
Rallapalli, Arjun
A RET network consists of a network of photo-active molecules called chromophores that can participate in inter-molecular energy transfer called resonance energy transfer (RET). RET networks are used in a variety of applications including cryptographic devices, storage systems, light harvesting complexes, biological sensors, and molecular rulers. In this dissertation, we focus on creating a RET device called closed-diffusive exciton valve (C-DEV) in which the input to output transfer function is controlled by an external energy source, similar to a semiconductor transistor like the MOSFET. Due to their biocompatibility, molecular devices like the C-DEVs can be used to introduce computing power in biological, organic, and aqueous environments such as living cells. Furthermore, the underlying physics in RET devices are stochastic in nature, making them suitable for stochastic computing in which true random distribution generation is critical. In order to determine a valid configuration of chromophores for the C-DEV, we developed a systematic process based on user-guided design space pruning techniques and built-in simulation tools. We show that our C-DEV is 15x better than C-DEVs designed using ad hoc methods that rely on limited data from prior experiments. We also show ways in which the C-DEV can be improved further and how different varieties of C-DEVs can be combined to form more complex logic circuits. Moreover, the systematic design process can be used to search for valid chromophore network configurations for a variety of RET applications. We also describe a feasibility study for a technique used to control the orientation of chromophores attached to DNA. Being able to control the orientation can expand the design space for RET networks because it provides another parameter to tune their collective behavior. While results showed limited control over orientation, the analysis required the development of a mathematical model that can be used to determine the
Institute of Scientific and Technical Information of China (English)
张乐成; 邵梅; 迟津愉; 宁宁宁
2012-01-01
The Monte Carlo method is an important numerical method guided by probability and statistics theory, and Monte Carlo evaluation of definite integrals is one of the most common techniques for the approximate calculation of definite integrals. To compute the value of the mathematical constant e (the base of the natural logarithm), this paper selects a special definite integral and evaluates it both by the Monte Carlo method and by the Newton-Leibniz formula; comparing the two results yields a Monte Carlo algorithm for the constant e. Experimental results show that the algorithm is effective and achieves good accuracy and time efficiency.
Institute of Scientific and Technical Information of China (English)
叶圣永; 王晓茹; 周曙; 刘志刚; 钱清泉
2012-01-01
This paper studies a probabilistic transient stability assessment method for power systems. A Markov chain Monte Carlo method is proposed to simulate load levels; because it accounts for the correlation between successive random samples, it is better suited to actual power systems. During the simulation, a transient stability assessment method based on AdaBoost-DT is proposed that takes fault information as the input features. Simulations on the New England 39-bus test system show that the Markov chain Monte Carlo method converges faster than the traditional Monte Carlo method, while AdaBoost-DT dramatically reduces simulation time and can effectively predict transient stability.
Energy Technology Data Exchange (ETDEWEB)
Kurosu, K [Department of Radiation Oncology, Osaka University Graduate School of Medicine, Osaka (Japan); Department of Medical Physics & Engineering, Osaka University Graduate School of Medicine, Osaka (Japan); Takashina, M; Koizumi, M [Department of Medical Physics & Engineering, Osaka University Graduate School of Medicine, Osaka (Japan); Das, I; Moskvin, V [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN (United States)]
2014-06-01
Purpose: Monte Carlo codes are becoming important tools for proton beam dosimetry. However, the relationships between the customizing parameters and the percentage depth dose (PDD) of the GATE and PHITS codes have not been reported; these are studied here for PDD and proton range, compared with the FLUKA code and experimental data. Methods: The beam delivery system of the Indiana University Health Proton Therapy Center was modeled for the uniform scanning beam in FLUKA and transferred identically into GATE and PHITS. This computational model was built from the blueprint and validated with the commissioning data. The three parameters evaluated are the maximum step size, the cut-off energy, and the physics and transport model. The dependence of the PDDs on the customizing parameters was compared with the published results of previous studies. Results: The optimal parameters for the simulation of the whole beam delivery system were defined by referring to the calculation results obtained with each parameter. Although the PDDs from FLUKA and the experimental data show good agreement, those of GATE and PHITS obtained with our optimal parameters show a minor discrepancy. The measured proton range R90 was 269.37 mm, compared to calculated ranges of 269.63 mm, 268.96 mm, and 270.85 mm with FLUKA, GATE and PHITS, respectively. Conclusion: We evaluated the dependence of the PDDs obtained with the GATE and PHITS general-purpose Monte Carlo codes on the customizing parameters, using the whole computational model of the treatment nozzle. The optimal parameters for the simulation were then defined by referring to the calculation results. The physics model, particle transport mechanics and the different geometry-based descriptions need accurate customization in all three simulation codes to agree with experimental data for artifact-free Monte Carlo simulation. This study was supported by Grants-in-Aid for Cancer Research (H22-3rd Term Cancer Control-General-043) from the Ministry of Health
High homocysteine induces betaine depletion.
Imbard, Apolline; Benoist, Jean-François; Esse, Ruben; Gupta, Sapna; Lebon, Sophie; de Vriese, An S; de Baulny, Helene Ogier; Kruger, Warren; Schiff, Manuel; Blom, Henk J
2015-04-28
Betaine is the substrate of the liver- and kidney-specific betaine-homocysteine (Hcy) methyltransferase (BHMT), an alternate pathway for Hcy remethylation. We hypothesized that BHMT is a major pathway for homocysteine removal in cases of hyperhomocysteinaemia (HHcy). Therefore, we measured betaine in plasma and tissues from patients and animal models of HHcy of genetic and acquired cause. Plasma was collected from patients presenting HHcy without any Hcy interfering treatment. Plasma and tissues were collected from rat models of HHcy induced by diet and from a mouse model of cystathionine β-synthase (CBS) deficiency. S-adenosyl-methionine (AdoMet), S-adenosyl-homocysteine (AdoHcy), methionine, betaine and dimethylglycine (DMG) were quantified by ESI-LC-MS/MS. mRNA expression was quantified using quantitative real-time (QRT)-PCR. For all patients with diverse causes of HHcy, plasma betaine concentrations were below the normal values of our laboratory. In the diet-induced HHcy rat model, betaine was decreased in all tissues analysed (liver, brain, heart). In the mouse CBS deficiency model, betaine was decreased in plasma, liver, heart and brain, but was conserved in kidney. Surprisingly, BHMT expression and activity was decreased in liver. However, in kidney, BHMT and SLC6A12 expression was increased in CBS-deficient mice. Chronic HHcy, irrespective of its cause, induces betaine depletion in plasma and tissues (liver, brain and heart), indicating a global decrease in the body betaine pool. In kidney, betaine concentrations were not affected, possibly due to overexpression of the betaine transporter SLC6A12 where betaine may be conserved because of its crucial role as an osmolyte.
Reddell, Brandon
2015-01-01
Designing hardware to operate in the space radiation environment is a very difficult and costly activity. Ground based particle accelerators can be used to test for exposure to the radiation environment, one species at a time, however, the actual space environment cannot be duplicated because of the range of energies and isotropic nature of space radiation. The FLUKA Monte Carlo code is an integrated physics package based at CERN that has been under development for the last 40+ years and includes the most up-to-date fundamental physics theory and particle physics data. This work presents an overview of FLUKA and how it has been used in conjunction with ground based radiation testing for NASA and improve our understanding of secondary particle environments resulting from the interaction of space radiation with matter.
EXTENDED MONTE CARLO LOCALIZATION ALGORITHM FOR MOBILE SENSOR NETWORKS
Institute of Scientific and Technical Information of China (English)
(no author listed)
2008-01-01
A real-world localization system for wireless sensor networks that adapts to mobility and an irregular radio propagation model is considered. Traditional range-based techniques and recent range-free localization schemes are not well suited for localization in mobile sensor networks, while the probabilistic approach of Bayesian filtering with particle-based density representations provides a comprehensive solution to such a localization problem. Monte Carlo localization is a Bayesian filtering method that approximates the mobile node's location by a set of weighted particles. In this paper, an enhanced Monte Carlo localization algorithm, Extended Monte Carlo Localization (Ext-MCL), is proposed that is suitable for practical wireless network environments where the radio propagation model is irregular. Simulation results show that the proposal achieves better localization accuracy and a higher number of localizable nodes than previously proposed Monte Carlo localization schemes, not only for the ideal radio model but also for irregular ones.
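The weighted-particle approximation described above can be sketched as a minimal predict-weight-resample loop. This is generic Monte Carlo localization, not the Ext-MCL algorithm itself; the anchors, noise model and all parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def mcl_step(particles, anchors, ranges, v_max, sigma):
    """One Monte Carlo localization step: predict, weight, resample."""
    n = len(particles)
    # Predict: unknown motion, bounded by the node's maximum speed
    particles = particles + rng.uniform(-v_max, v_max, size=(n, 2))
    # Weight: agreement between measured and particle-implied anchor ranges
    d = np.linalg.norm(particles[:, None, :] - anchors[None, :, :], axis=2)
    w = np.exp(-0.5 * np.sum((d - ranges) ** 2, axis=1) / sigma**2)
    w /= w.sum()
    # Resample: draw a new particle set proportional to the weights
    idx = rng.choice(n, size=n, p=w)
    return particles[idx]

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 7.0])            # static node, for simplicity
particles = rng.uniform(0, 10, size=(1000, 2))
for _ in range(5):
    ranges = np.linalg.norm(anchors - true_pos, axis=1)  # noise-free here
    particles = mcl_step(particles, anchors, ranges, v_max=1.0, sigma=0.5)
est = particles.mean(axis=0)
```

Ext-MCL replaces the idealized range weighting with one tolerant of irregular radio propagation.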
Conversation with Juan Carlos Negrete.
Negrete, Juan Carlos
2013-08-01
Juan Carlos Negrete is Emeritus Professor of Psychiatry, McGill University; Founding Director, Addictions Unit, Montreal General Hospital; former President, Canadian Society of Addiction Medicine; and former WHO/PAHO Consultant on Alcoholism, Drug Addiction and Mental Health.
TARC: Carlo Rubbia's Energy Amplifier
Laurent Guiraud
1997-01-01
Transmutation by Adiabatic Resonance Crossing (TARC) is Carlo Rubbia's energy amplifier. This CERN experiment demonstrated that long-lived fission fragments, such as 99-TC, can be efficiently destroyed.
Beland, Laurent Karim; El-Mellouhi, Fedwa; Mousseau, Normand
2010-03-01
Using a topological classification of events [B. D. McKay, Congressus Numerantium 30, 45 (1981)] combined with the Activation-Relaxation Technique (ART nouveau) for the generation of diffusion pathways, the kinetic ART (k-ART) [F. El-Mellouhi, N. Mousseau and L. J. Lewis, Phys. Rev. B, 78, 15 (2008)] lifts many restrictions generally associated with standard kinetic Monte Carlo algorithms. In particular, it can treat on- and off-lattice atomic positions and handles long-range elastic deformation exactly. Here we introduce a set of modifications to k-ART that reduce the computational cost of the algorithm to near order 1, and show applications of the algorithm to the diffusion of vacancy and interstitial complexes in large models of crystalline Si (100,000 atoms).
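Standard kinetic Monte Carlo, which k-ART builds on, advances the system with the residence-time (BKL) algorithm: pick an event with probability proportional to its rate, then advance the clock by an exponential waiting time. A minimal sketch (the two rates are arbitrary illustrative numbers):

```python
import numpy as np

rng = np.random.default_rng(1)

def kmc_step(rates):
    """One residence-time (n-fold way / BKL) kinetic Monte Carlo step:
    choose an event with probability proportional to its rate and
    advance time by an exponentially distributed waiting time."""
    rates = np.asarray(rates, dtype=float)
    total = rates.sum()
    event = np.searchsorted(np.cumsum(rates), rng.uniform(0, total))
    dt = -np.log(rng.uniform()) / total
    return event, dt

# Two competing events with rates 1 and 3 (arbitrary units)
counts = np.zeros(2)
t = 0.0
for _ in range(20000):
    e, dt = kmc_step([1.0, 3.0])
    counts[e] += 1
    t += dt
```

k-ART's contribution is to build and update this event catalog on the fly from topological classification rather than from a fixed lattice.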
Yang, Ye; Soyemi, Olusola O.; Landry, Michelle R.; Soller, Babs R.
2005-01-01
The influence of fat thickness on the diffuse reflectance spectra of muscle in the near infrared (NIR) region is studied by Monte Carlo simulations of a two-layer structure and with phantom experiments. A polynomial relationship was established between the fat thickness and the detected diffuse reflectance. The influence of a range of optical coefficients (absorption and reduced scattering) for fat and muscle over the known range of human physiological values was also investigated. Subject-to-subject variation in the fat optical coefficients and thickness can be ignored if the fat thickness is less than 5 mm. A method was proposed to correct the fat thickness influence. © 2005 Optical Society of America.
Li, Cheng; Zhao, Tianlun; Li, Cong; Mei, Lei; Yu, En; Dong, Yating; Chen, Jinhong; Zhu, Shuijin
2017-04-15
Near infrared (NIR) spectroscopy combined with Monte Carlo uninformative variable elimination (MC-UVE) and nonlinear calibration methods was investigated for determining gossypol content in cottonseeds. The reference method was high performance liquid chromatography coupled to an ultraviolet detector (HPLC-UV). MC-UVE was employed to extract the effective information from the full NIR spectra. Nonlinear calibration methods were applied to establish the models and were compared with the linear method. The optimal model for gossypol content was obtained by MC-UVE-WLS-SVM, with a root mean square error of prediction (RMSEP) of 0.0422, a coefficient of determination (R(2)) of 0.9331, and a residual predictive deviation (RPD) of 3.8374, which is accurate and robust enough to substitute for traditional gossypol measurements. The nonlinear methods performed more reliably than the linear method during the development of calibration models. Furthermore, MC-UVE could provide better and simpler calibration models than the full spectra.
Qin, Mingpu; Zhang, Shiwei
2016-01-01
The vast majority of quantum Monte Carlo (QMC) calculations in interacting fermion systems require a constraint to control the sign problem. The constraint involves an input trial wave function which restricts the random walks. We introduce a systematically improvable constraint which relies on the fundamental role of the density or one-body density matrix. An independent-particle calculation is coupled to an auxiliary-field QMC calculation. The independent-particle solution is used as the constraint in QMC, which then produces the input density or density matrix for the next iteration. The constraint is optimized by the self-consistency between the many-body and independent-particle calculations. The approach is demonstrated in the two-dimensional Hubbard model by accurately determining the spin densities when collective modes separated by tiny energy scales are present in the magnetic and charge correlations. Our approach also provides an ab initio way to predict effective "U" parameters for independent-par...
Steinczinger, Zsuzsanna; Jóvári, Pál; Pusztai, László
2017-01-01
Neutron- and x-ray weighted total structure factors of liquid water have been calculated on the basis of the intermolecular parts of partial radial distribution functions resulting from various computer simulations. The approach includes reverse Monte Carlo (RMC) modelling of these partials, using realistic flexible molecules, and the calculation of experimental diffraction data, including the intramolecular contributions, from the RMC particle configurations. The procedure has been applied to ten sets of intermolecular partial radial distribution functions obtained from various computer simulations, including one set from an ab initio molecular dynamics, of water. It is found that modern polarizable water potentials, such as SWM4-DP and BK3 are the most successful in reproducing measured diffraction data.
Directory of Open Access Journals (Sweden)
Hai-Feng Zhang
2016-08-01
In this paper, the properties of photonic band gaps (PBGs) in two types of two-dimensional plasma-dielectric photonic crystals (2D PPCs) under a transverse-magnetic (TM) wave are theoretically investigated by a modified plane wave expansion (PWE) method into which a Monte Carlo method is introduced. The proposed PWE method can be used to calculate the band structures of 2D PPCs that possess arbitrarily shaped fillers and any lattice. The efficiency and convergence of the present method are discussed by a numerical example. The configuration of the 2D PPCs is a square lattice with a fractal Sierpinski gasket structure whose constituents are homogeneous and isotropic. The type-1 PPC is filled with dielectric cylinders in a plasma background, while its complementary structure is called the type-2 PPC, in which plasma cylinders act as the fillers in a dielectric background. The calculated results reveal that sufficient accuracy and good convergence can be obtained if the number of random sampling points of the Monte Carlo method is large enough. The band structures of the two types of PPCs with different fractal orders of the Sierpinski gasket structure are also computed for comparison. It is demonstrated that the PBGs in the higher frequency region are more easily produced in the type-1 PPCs than in the type-2 PPCs. The Sierpinski gasket structure introduced in the 2D PPCs leads to a larger cutoff frequency and enhances and induces more PBGs in the high frequency region. The effects of the configurational parameters of the two types of PPCs on the PBGs are also investigated in detail. The results show that the PBGs of the PPCs can be easily manipulated by tuning those parameters. The present type-1 PPCs are more suitable for designing compact tunable devices.
Zhang, Hai-Feng; Liu, Shao-Bin
2016-08-01
In this paper, the properties of photonic band gaps (PBGs) in two types of two-dimensional plasma-dielectric photonic crystals (2D PPCs) under a transverse-magnetic (TM) wave are theoretically investigated by a modified plane wave expansion (PWE) method into which a Monte Carlo method is introduced. The proposed PWE method can be used to calculate the band structures of 2D PPCs that possess arbitrarily shaped fillers and any lattice. The efficiency and convergence of the present method are discussed by a numerical example. The configuration of the 2D PPCs is a square lattice with a fractal Sierpinski gasket structure whose constituents are homogeneous and isotropic. The type-1 PPC is filled with dielectric cylinders in a plasma background, while its complementary structure is called the type-2 PPC, in which plasma cylinders act as the fillers in a dielectric background. The calculated results reveal that sufficient accuracy and good convergence can be obtained if the number of random sampling points of the Monte Carlo method is large enough. The band structures of the two types of PPCs with different fractal orders of the Sierpinski gasket structure are also computed for comparison. It is demonstrated that the PBGs in the higher frequency region are more easily produced in the type-1 PPCs than in the type-2 PPCs. The Sierpinski gasket structure introduced in the 2D PPCs leads to a larger cutoff frequency and enhances and induces more PBGs in the high frequency region. The effects of the configurational parameters of the two types of PPCs on the PBGs are also investigated in detail. The results show that the PBGs of the PPCs can be easily manipulated by tuning those parameters. The present type-1 PPCs are more suitable for designing compact tunable devices.
Ojala, Jarkko; Kapanen, Mika; Hyödynmaa, Simo
2016-06-01
A new version, 13.6.23, of the electron Monte Carlo (eMC) algorithm in the Varian Eclipse™ treatment planning system has a model for the 4 MeV electron beam and some general improvements for dose calculation. This study provides the first overall accuracy assessment of this algorithm against full Monte Carlo (MC) simulations for electron beams from 4 MeV to 16 MeV, with most emphasis on the lower energy range. Beams in a homogeneous water phantom and clinical treatment plans were investigated, including measurements in the water phantom. Two different material sets were used with full MC: (1) the one applied in the eMC algorithm and (2) the one included in Eclipse™ for other algorithms. The results of clinical treatment plans were also compared to those of the older eMC version 11.0.31. In the water phantom the dose differences against the full MC were mostly less than 3%, with distance-to-agreement (DTA) values within 2 mm. Larger discrepancies were obtained in build-up regions, at depths near the maximum electron ranges and with small apertures. For the clinical treatment plans the overall dose differences were mostly within 3% or 2 mm with the first material set. Larger differences were observed for a large 4 MeV beam entering a curved patient surface with extended SSD and also in regions of large dose gradients. Still, the DTA values were within 3 mm. The discrepancies between the eMC and the full MC were generally larger for the second material set. Version 11.0.31 always performed worse than version 13.6.23.
de Finetti Priors using Markov chain Monte Carlo computations.
Bacallado, Sergio; Diaconis, Persi; Holmes, Susan
2015-07-01
Recent advances in Monte Carlo methods allow us to revisit work by de Finetti who suggested the use of approximate exchangeability in the analyses of contingency tables. This paper gives examples of computational implementations using Metropolis Hastings, Langevin and Hamiltonian Monte Carlo to compute posterior distributions for test statistics relevant for testing independence, reversible or three way models for discrete exponential families using polynomial priors and Gröbner bases.
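A bare-bones random-walk Metropolis-Hastings sampler of the kind referred to above can be sketched as follows; the standard-normal target is a toy stand-in for the posteriors over test statistics discussed in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def metropolis(log_target, x0, n_steps, step=1.0):
    """Random-walk Metropolis sampler for an unnormalized log-density."""
    x = x0
    lp = log_target(x)
    out = np.empty(n_steps)
    for i in range(n_steps):
        prop = x + rng.normal(0.0, step)
        lp_prop = log_target(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept / reject
            x, lp = prop, lp_prop
        out[i] = x
    return out

# Toy posterior: standard normal (known answer, to check the sampler)
samples = metropolis(lambda x: -0.5 * x * x, 0.0, 50000)
```

Langevin and Hamiltonian Monte Carlo replace the blind random-walk proposal with gradient-informed proposals, improving mixing on the polynomial-prior posteriors the paper considers.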
Monte Carlo Simulation of Optical Properties of Wake Bubbles
Institute of Scientific and Technical Information of China (English)
CAO Jing; WANG Jiang-An; JIANG Xing-Zhou; SHI Sheng-Wei
2007-01-01
Based on Mie scattering theory and the theory of multiple light scattering, the light scattering properties of air bubbles in a wake are analysed by Monte Carlo simulation. The results show that backscattering is enhanced obviously due to the existence of bubbles, especially with the increase of bubble density, and that it is feasible to use the Monte Carlo method to study the properties of light scattering by air bubbles.
Confidence and efficiency scaling in variational quantum Monte Carlo calculations
Delyon, F.; Bernu, B.; Holzmann, Markus
2017-02-01
Based on the central limit theorem, we discuss the problem of evaluation of the statistical error of Monte Carlo calculations using a time-discretized diffusion process. We present a robust and practical method to determine the effective variance of general observables and show how to verify the equilibrium hypothesis by the Kolmogorov-Smirnov test. We then derive scaling laws of the efficiency illustrated by variational Monte Carlo calculations on the two-dimensional electron gas.
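The effective variance of correlated Monte Carlo samples is commonly estimated by blocking (binning), one simple realization of the idea above; the AR(1) chain below is an illustrative stand-in for a Monte Carlo time series:

```python
import numpy as np

def blocked_error(samples, block_size):
    """Standard error of the mean of a correlated sequence, estimated by
    averaging over non-overlapping blocks (a common MC variance estimator)."""
    n = len(samples) // block_size
    blocks = np.asarray(samples[: n * block_size]).reshape(n, block_size).mean(axis=1)
    return blocks.std(ddof=1) / np.sqrt(n)

rng = np.random.default_rng(3)
# AR(1) chain: correlated samples mimicking a Monte Carlo time series
x, chain = 0.0, []
for _ in range(100000):
    x = 0.9 * x + rng.normal()
    chain.append(x)
chain = np.array(chain)
naive = chain.std(ddof=1) / np.sqrt(len(chain))     # ignores correlation
blocked = blocked_error(chain, 1000)                # accounts for it
```

For correlated data the naive estimate badly understates the true error; once the block size exceeds the correlation time, the blocked estimate stabilizes at the effective value.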
Confidence and efficiency scaling in Variational Quantum Monte Carlo calculations
Delyon, François; Holzmann, Markus
2016-01-01
Based on the central limit theorem, we discuss the problem of evaluation of the statistical error of Monte Carlo calculations using a time-discretized diffusion process. We present a robust and practical method to determine the effective variance of general observables and show how to verify the equilibrium hypothesis by the Kolmogorov-Smirnov test. We then derive scaling laws of the efficiency illustrated by Variational Monte Carlo calculations on the two-dimensional electron gas.
Public Infrastructure for Monte Carlo Simulation: publicMC@BATAN
Waskita, A A; Akbar, Z; Handoko, L T; 10.1063/1.3462759
2010-01-01
The first cluster-based public computing system for Monte Carlo simulation in Indonesia is introduced. The system has been developed to enable the public to perform Monte Carlo simulations on a parallel computer through an integrated and user-friendly dynamic web interface. The beta version, called publicMC@BATAN, has been released and implemented for internal users at the National Nuclear Energy Agency (BATAN). In this paper the concept and architecture of publicMC@BATAN are presented.
Monte Carlo method for solving a parabolic problem
Directory of Open Access Journals (Sweden)
Tian Yi
2016-01-01
In this paper, we present a numerical method based on random sampling for a parabolic problem. The method combines the Crank-Nicolson method with the Monte Carlo method. In the numerical algorithm, we first discretize the governing equations by the Crank-Nicolson method, obtaining a large sparse system of linear algebraic equations, and then use the Monte Carlo method to solve that linear system. To illustrate the usefulness of this technique, we apply it to some test problems.
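One classical way to solve a sparse linear system by Monte Carlo, in the spirit of the abstract (the paper's exact scheme may differ), is to sample the Neumann series of a Jacobi splitting with random walks:

```python
import numpy as np

rng = np.random.default_rng(4)

def mc_jacobi_solve(A, b, n_walks=20000, n_terms=30):
    """Estimate the solution of A x = b by sampling the Neumann series of
    the Jacobi splitting x = H x + c, with H = I - D^{-1} A, c = D^{-1} b.
    Converges when the spectral radius of H is below one."""
    A = np.asarray(A, float)
    d = np.diag(A)
    H = np.eye(len(b)) - A / d[:, None]
    c = np.asarray(b, float) / d
    n = len(b)
    x = np.zeros(n)
    for i in range(n):
        acc = 0.0
        for _ in range(n_walks):
            state, w, est = i, 1.0, c[i]
            for _ in range(n_terms):
                j = rng.integers(n)           # uniform next state
                w *= n * H[state, j]          # importance weight
                if w == 0.0:
                    break                     # all later terms vanish
                state = j
                est += w * c[state]
            acc += est
        x[i] = acc / n_walks
    return x

# Small diagonally dominant system (e.g., from a Crank-Nicolson step)
A = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x_mc = mc_jacobi_solve(A, b)
```

The appeal for large sparse systems is that individual solution components can be estimated independently, without factoring the matrix.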
An enhanced Monte Carlo outlier detection method.
Zhang, Liangxiao; Li, Peiwu; Mao, Jin; Ma, Fei; Ding, Xiaoxia; Zhang, Qi
2015-09-30
Outlier detection is crucial in building a highly predictive model. In this study, we proposed an enhanced Monte Carlo outlier detection method that establishes cross-prediction models based on determinate normal samples and analyzes the distribution of prediction errors individually for dubious samples. One simulated and three real datasets were used to illustrate and validate the performance of our method, and the results indicated that it outperformed standard Monte Carlo outlier detection in outlier diagnosis. After these outliers were removed, the root mean square error of prediction for validation by Kovats retention indices decreased from 3.195 to 1.655, and the average cross-validation prediction error decreased from 2.0341 to 1.2780. This method helps establish a good model by eliminating outliers. © 2015 Wiley Periodicals, Inc.
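The core idea, collecting each sample's prediction-error distribution over many random train/test splits, can be sketched as follows; the linear model, split ratio and planted outlier are illustrative assumptions, not the paper's datasets:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic data: y = 2x + noise, with one planted outlier at index 0
n = 60
X = rng.uniform(0, 1, n)
y = 2.0 * X + rng.normal(0, 0.05, n)
y[0] += 1.5                      # the outlier

# Monte Carlo cross-prediction: many random train/test splits,
# accumulating each sample's prediction error when it is held out
errs = [[] for _ in range(n)]
for _ in range(500):
    idx = rng.permutation(n)
    train, test = idx[: int(0.7 * n)], idx[int(0.7 * n):]
    slope, intercept = np.polyfit(X[train], y[train], 1)
    for i in test:
        errs[i].append(abs(y[i] - (slope * X[i] + intercept)))

mean_err = np.array([np.mean(e) for e in errs])
suspect = int(np.argmax(mean_err))   # sample with the worst error profile
```

The enhancement described in the abstract goes further by building the cross-prediction models only from samples already judged normal, so that dubious samples do not contaminate the reference error distribution.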
Evaluation of three high abundance protein depletion kits for umbilical cord serum proteomics
Directory of Open Access Journals (Sweden)
Nie Jing
2011-05-01
Background: High abundance protein depletion is a major challenge in the study of serum/plasma proteomics. Prior to this study, most commercially available kits for depletion of highly abundant proteins had only been tested and evaluated in adult serum/plasma, while the depletion efficiency on umbilical cord serum/plasma had not been clarified. Structural differences between some adult and fetal proteins (such as albumin) make it likely that depletion approaches for adult and umbilical cord serum/plasma will vary. Therefore, the primary purposes of the present study are to investigate the efficiencies of several commonly used commercial kits for high abundance protein depletion from umbilical cord serum and to determine which kit yields the most effective and reproducible results for further proteomics research on umbilical cord serum. Results: The immunoaffinity-based kits (PROTIA-Sigma and 5185-Agilent) displayed higher depletion efficiency than the immobilized-dye-based kit (PROTBA-Sigma) in umbilical cord serum samples. Both the PROTIA-Sigma and 5185-Agilent kits maintained high depletion efficiency when used three consecutive times. Depletion by the PROTIA-Sigma kit improved 2DE gel quality by reducing the smeared bands produced by the presence of high abundance proteins and increasing the intensity of other protein spots. During image analysis using identical detection parameters, 411 ± 18 spots were detected in crude serum gels, while 757 ± 43 spots were detected in depleted serum gels. Eight spots unique to depleted serum gels were identified by MALDI-TOF/TOF MS, seven of which were low abundance proteins. Conclusions: The immunoaffinity-based kits exceeded the immobilized-dye-based kit in high abundance protein depletion of umbilical cord serum samples and dramatically improved 2DE gel quality for detection of trace biomarkers.
DURABILITY OF DEPLETED URANIUM AGGREGATES (DUAGG) IN DUCRETE SHIELDING APPLICATIONS
Energy Technology Data Exchange (ETDEWEB)
Mattus, Catherine H.; Dole, Leslie R.
2003-02-27
The depleted uranium (DU) inventory in the United States exceeds 500,000 metric tonnes. To evaluate the possibilities for reuse of this stockpile of DU, the U.S. Department of Energy (DOE) has created a research and development program to address the disposition of its DU(1). One potential use for this stockpile material is in the fabrication of nuclear shielding casks for the storage, transport, and disposal of spent nuclear fuels. The use of the DU-based shielding would reduce the size and weight of the casks while allowing a level of protection from neutrons and gamma rays comparable to that afforded by steel and concrete. DUAGG (depleted uranium aggregate) is formed of depleted uranium dioxide (DUO2) sintered with a synthetic-basalt-based binder. This study was designed to investigate possible deleterious reactions that could occur between the cement paste and the DUAGG. After 13 months of exposure to a cement pore solution, no deleterious expansive mineral phases were observed to form either with the DUO2 or with the simulated-basalt sintering phases. In the early stages of these exposure tests, Oak Ridge National Laboratory preliminary results confirm that the surface reactions of this aggregate proceed more slowly than expected. This finding may indicate that DUAGG/DUCRETE (depleted uranium concrete) casks could have service lives sufficient to meet the projected needs of DOE and the commercial nuclear power industry.
DOUBLE SHELL TANK (DST) HYDROXIDE DEPLETION MODEL FOR CARBON DIOXIDE ABSORPTION
Energy Technology Data Exchange (ETDEWEB)
OGDEN DM; KIRCH NW
2007-10-31
This document generates a supernatant hydroxide ion depletion model based on mechanistic principles. The carbon dioxide absorption mechanistic model is developed in this report. The report also benchmarks the model against historical tank supernatant hydroxide data and vapor space carbon dioxide data. A comparison of the newly generated mechanistic model with previously applied empirical hydroxide depletion equations is also performed.
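A mechanistic hydroxide-depletion balance of the general kind described can be sketched from the net reaction CO2 + 2OH- → CO3^2- + H2O; the first-order absorption rate and all constants below are illustrative assumptions, not the model benchmarked in the report:

```python
def hydroxide_depletion(oh0, k_abs, t_end, dt=0.01):
    """Explicit-Euler mass balance for hydroxide consumed by absorbed CO2,
    assuming CO2 + 2 OH- -> CO3^2- + H2O and an absorption rate that
    scales with the remaining hydroxide concentration (illustrative only)."""
    oh, co3, t = oh0, 0.0, 0.0
    while t < t_end:
        r_co2 = k_abs * oh          # mol CO2 absorbed per unit time
        oh = max(oh - 2.0 * r_co2 * dt, 0.0)   # 2 mol OH- per mol CO2
        co3 += r_co2 * dt
        t += dt
    return oh, co3

oh_final, co3_final = hydroxide_depletion(oh0=1.0, k_abs=0.05, t_end=10.0)
```

The stoichiometric link between hydroxide loss and carbonate formation is what allows such a model to be benchmarked against both supernatant hydroxide data and vapor-space CO2 data, as the report does.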
Growth and pigment accumulation in nutrient-depleted Isochrysis aff. galbana T-ISO
Mulders, K.J.M.; Weesepoel, Y.J.A.; Lamers, P.P.; Vincken, J.P.; Martens, D.E.; Wijffels, R.H.
2013-01-01
The effect of three different nutrient depletions (nitrogen, sulphur and magnesium) on the growth and pigment accumulation of the haptophyte Isochrysis aff. galbana (clone T-ISO) has been studied. Pigments were quantified based on RP-UHPLC-PDA-MSn analysis. All nutrient depletions led to reduced max
Three-dimensional modeling of the neutral gas depletion effect in a helicon discharge plasma
Kollasch, Jeffrey; Schmitz, Oliver; Norval, Ryan; Reiter, Detlev; Sovinec, Carl
2016-10-01
Helicon discharges provide an attractive radio-frequency driven regime for plasma, but neutral-particle dynamics present a challenge to extending performance. A neutral gas depletion effect occurs when neutrals in the plasma core are not replenished at a sufficient rate to sustain a higher plasma density. The Monte Carlo neutral particle tracking code EIRENE was set up for the MARIA helicon experiment at UW-Madison to study its neutral particle dynamics. Prescribed plasma temperature and density profiles similar to those in the MARIA device are used in EIRENE to investigate the main causes of the neutral gas depletion effect. The most dominant plasma-neutral interactions are included so far, namely electron impact ionization of neutrals, charge exchange interactions of neutrals with plasma ions, and recycling at the wall. Parameter scans show how the neutral depletion effect depends on parameters such as the Knudsen number, plasma density and temperature, and gas-surface interaction accommodation coefficients. Results are compared to similar analytic studies in the low Knudsen number limit. Plans to incorporate a similar Monte Carlo neutral model into a larger helicon modeling framework are discussed. This work is funded by NSF CAREER Award PHY-1455210.
Institute of Scientific and Technical Information of China (English)
李满仓; 王侃; 姚栋
2012-01-01
In the two-step reactor physics calculation scheme, assembly-homogenized group constants significantly affect the accuracy of core calculations. Compared with deterministic methods, continuous-energy Monte Carlo homogenization describes lattices of arbitrary geometry exactly, avoids tedious resonance self-shielding calculations, and retains more continuous-energy information; it therefore not only produces more accurate group constants but is also more generally applicable, since the same code and data library serve a wide range of applications. As a first step toward continuous-energy Monte Carlo assembly homogenization, this paper applies the track-length method to tally general group cross sections and group constants, proposes and uses a scattering-event method to obtain the group-to-group scattering cross sections and higher-order Legendre coefficients that cannot be computed directly by the track-length estimator, and computes diffusion coefficients from P1 cross sections. To restore the critical state of the assembly within the core, as required by the two-step scheme, BN theory is applied to perform a leakage correction on the homogenized group constants. The Monte Carlo homogenized group constants are numerically verified on four assembly types and a simplified PWR core. The results show that continuous-energy Monte Carlo assembly homogenization has good geometric flexibility and significantly improves the accuracy of core calculations.
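The track-length estimator mentioned above tallies flux-weighted group constants as sigma_g = Σ w·l·σ(E) / Σ w·l over tracks whose energy falls in group g. A toy sketch with synthetic tracks and an assumed 1/v-type cross section (not reactor data):

```python
import numpy as np

def group_xs_track_length(tracks, group_edges, sigma):
    """Flux-weighted group cross section from a list of particle tracks,
    each given as (energy, track_length, weight):
    sigma_g = sum(w * l * sigma(E)) / sum(w * l) over tracks in group g."""
    n_g = len(group_edges) - 1
    num = np.zeros(n_g)
    den = np.zeros(n_g)
    for E, l, w in tracks:
        g = np.searchsorted(group_edges, E) - 1
        if 0 <= g < n_g:
            num[g] += w * l * sigma(E)   # reaction-rate tally
            den[g] += w * l              # flux (track-length) tally
    return num / den

# Toy 1/v-like cross section and synthetic tracks (illustrative only)
sigma = lambda E: 2.0 / np.sqrt(E)
edges = np.array([0.1, 1.0, 10.0])          # two energy groups
rng = np.random.default_rng(7)
tracks = [(rng.uniform(0.1, 10.0), rng.exponential(1.0), 1.0)
          for _ in range(5000)]
xs = group_xs_track_length(tracks, edges, sigma)
```

Group-to-group scattering matrices cannot be tallied this way, which is why the paper introduces a separate scattering-event method for them.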
Depletion and capture: revisiting "the source of water derived from wells".
Konikow, L F; Leake, S A
2014-09-01
A natural consequence of groundwater withdrawals is the removal of water from subsurface storage, but the overall rates and magnitude of groundwater depletion and capture relative to groundwater withdrawals (extraction or pumpage) have not previously been well characterized. This study assesses the partitioning of long-term cumulative withdrawal volumes into fractions derived from storage depletion and capture, where capture includes both increases in recharge and decreases in discharge. Numerical simulation of a hypothetical groundwater basin is used to further illustrate some of Theis' (1940) principles, particularly when capture is constrained by insufficient available water. Most prior studies of depletion and capture have assumed that capture is unconstrained through boundary conditions that yield linear responses. Examination of real systems indicates that capture and depletion fractions are highly variable in time and space. For a large sample of long-developed groundwater systems, the depletion fraction averages about 0.15 and the capture fraction averages about 0.85 based on cumulative volumes. Higher depletion fractions tend to occur in more arid regions, but the variation is high and the correlation coefficient between average annual precipitation and depletion fraction for individual systems is only 0.40. Because 85% of long-term pumpage is derived from capture in these real systems, capture must be recognized as a critical factor in assessing water budgets, groundwater storage depletion, and sustainability of groundwater development. Most capture translates into streamflow depletion, so it can detrimentally impact ecosystems.
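The depletion and capture fractions are simple ratios of cumulative volumes; with illustrative numbers chosen to match the reported averages:

```python
# Cumulative water budget for a pumped groundwater system:
# withdrawals are balanced by storage depletion plus capture
# (increased recharge + decreased discharge). Numbers are illustrative.
withdrawal = 120.0        # cumulative pumpage, km^3
storage_depletion = 18.0  # cumulative storage loss, km^3

capture = withdrawal - storage_depletion
depletion_fraction = storage_depletion / withdrawal
capture_fraction = capture / withdrawal
print(depletion_fraction, capture_fraction)  # → 0.15 0.85
```

Because the two fractions sum to one, a small depletion fraction necessarily implies that most pumpage is being supplied by capture, and hence, in most systems, by streamflow depletion.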
Using Supervised Learning to Improve Monte Carlo Integral Estimation
Tracey, Brendan; Alonso, Juan J
2011-01-01
Monte Carlo (MC) techniques are often used to estimate integrals of a multivariate function using randomly generated samples of the function. In light of the increasing interest in uncertainty quantification and robust design applications in aerospace engineering, the calculation of expected values of such functions (e.g. performance measures) becomes important. However, MC techniques often suffer from high variance and slow convergence as the number of samples increases. In this paper we present Stacked Monte Carlo (StackMC), a new method for post-processing an existing set of MC samples to improve the associated integral estimate. StackMC is based on the supervised learning techniques of fitting functions and cross validation. It should reduce the variance of any type of Monte Carlo integral estimate (simple sampling, importance sampling, quasi-Monte Carlo, MCMC, etc.) without adding bias. We report on an extensive set of experiments confirming that the StackMC estimate of an integral is more accurate than ...
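The fit-and-correct idea behind StackMC can be sketched as a control-variate estimate: integrate the fitted surrogate exactly and add the sample mean of the residual. This sketch omits the cross-validation step StackMC uses to guard against in-sample bias, and the integrand and polynomial surrogate are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)

def stacked_mc(f, xs, degree=3):
    """Post-process MC samples with a fitted surrogate (control variate):
    estimate = exact integral of the fitted polynomial on [0, 1]
             + sample mean of the residual f(x) - g(x)."""
    ys = f(xs)
    coeffs = np.polyfit(xs, ys, degree)
    g = np.poly1d(coeffs)
    g_integral = np.polyint(g)(1.0) - np.polyint(g)(0.0)
    return g_integral + np.mean(ys - g(xs))

f = np.exp                      # integral over [0, 1] is e - 1
xs = rng.uniform(0, 1, 2000)
plain = np.mean(f(xs))          # simple-sampling MC estimate
stacked = stacked_mc(f, xs)     # surrogate-corrected estimate
```

When the surrogate fits well, most of the integrand's variance is captured by the exactly integrated polynomial, and only the small residual is estimated by sampling.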
Accelerating Monte Carlo Renderers by Ray Histogram Fusion
Directory of Open Access Journals (Sweden)
Mauricio Delbracio
2015-03-01
This paper details the recently introduced Ray Histogram Fusion (RHF) filter for accelerating Monte Carlo renderers [M. Delbracio et al., Boosting Monte Carlo Rendering by Ray Histogram Fusion, ACM Transactions on Graphics, 33 (2014)]. In this filter, each pixel in the image is characterized by the colors of the rays that reach its surface. Pixels are compared using a statistical distance on the associated ray color distributions. Based on this distance, the filter decides whether two pixels can share their rays or not. The RHF filter is consistent: as the number of samples increases, more evidence is required to average two pixels. The algorithm provides a significant gain in PSNR, or equivalently accelerates the rendering process, by using many fewer Monte Carlo samples without observable bias. Since the RHF filter depends only on the Monte Carlo sample color values, it can be naturally combined with all rendering effects.
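The pixel comparison relies on a statistical distance between ray-color histograms; a chi-square-style distance (the paper's exact form may differ) can be sketched as:

```python
import numpy as np

def chi2_distance(h1, h2):
    """Symmetric chi-square distance between two ray-color histograms,
    of the kind used to decide whether two pixels may share their rays."""
    h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
    denom = h1 + h2
    mask = denom > 0                      # skip empty bins
    return np.sum((h1[mask] - h2[mask]) ** 2 / denom[mask])

# Two pixels with similar ray histograms, and one clearly different
p_a = np.array([10, 30, 40, 20], float)
p_b = np.array([12, 28, 41, 19], float)
p_c = np.array([40, 10, 5, 45], float)

threshold = 5.0   # illustrative acceptance threshold
share_ab = chi2_distance(p_a, p_b) < threshold   # True: rays may be shared
share_ac = chi2_distance(p_a, p_c) < threshold   # False: keep separate
```

Because the histograms accumulate more counts as sampling proceeds, the same threshold demands progressively closer agreement, which is the consistency property noted in the abstract.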
Alonso, Patricia; Iriondo, José María
2014-01-01
The Germplasm Bank of Universidad Rey Juan Carlos was created in 2008 and currently holds 235 accessions and 96 species. This bank focuses on the conservation of wild-plant communities and aims to conserve ex situ a representative sample of the plant biodiversity present in a habitat, emphasizing priority ecosystems identified by the Habitats Directive. It is also used to store plant material for research and teaching purposes. The collection consists of three subcollections: two representative of typical habitats in the center of the Iberian Peninsula, high-mountain pastures (psicroxerophylous pastures) and semi-arid habitats (gypsophylic steppes), and a third representative of the genus Lupinus. The high-mountain subcollection currently holds 153 accessions (63 species), the semi-arid subcollection has 76 accessions (29 species) and the Lupinus subcollection has 6 accessions (4 species). All accessions are stored in a freezer at -18 °C in Kilner jars with silica gel. The Germplasm Bank of Universidad Rey Juan Carlos follows a quality control protocol that describes the workflow performed with seeds from collection to storage. All collectors are members of research groups with extensive experience in species identification. Herbarium specimens associated with seed accessions are preserved, and 63% of the records have been georeferenced with GPS and radio points. The dataset provides unique information concerning the location of populations of plant species that form part of the psicroxerophylous pastures and gypsophylic steppes of Central Spain, as well as populations of the genus Lupinus in the Iberian Peninsula. It also provides relevant information concerning mean seed weight and seed germination values under specific incubation conditions. This dataset has already been used by researchers of the Area of Biodiversity and Conservation of URJC as a source of information for the design and implementation of experimental designs in these plant communities. Since
Polar stratospheric clouds and ozone depletion
Toon, Owen B.; Turco, Richard P.
1991-01-01
A review is presented of investigations into the correlation between the depletion of ozone and the formation of polar stratospheric clouds (PSCs). Satellite measurements from Nimbus 7 showed that over the years the depletion from austral spring to austral spring has generally worsened. Approximately 70 percent of the ozone above Antarctica, which equals about 3 percent of the earth's ozone, is lost during September and October. Various hypotheses for ozone depletion are discussed including the theory suggesting that chlorine compounds might be responsible for the ozone hole, whereby chlorine enters the atmosphere as a component of chlorofluorocarbons produced by humans. The three types of PSCs, nitric acid trihydrate, slowly cooling water-ice, and rapidly cooling water-ice clouds act as important components of the Antarctic ozone depletion. It is indicated that destruction of the ozone will be more severe each year for the next few decades, leading to a doubling in area of the Antarctic ozone hole.
Institute of Scientific and Technical Information of China (English)
[No author listed]
2008-01-01
A matrix stripping method is proposed for the conversion of an in-situ gamma-ray spectrum, obtained with a portable Ge detector, to a photon flux energy distribution. The detector response is fully described by its stripping matrix and full-absorption efficiency curve. A charge collection efficiency function is introduced in the simulation to account for a transition zone of increasing charge collection after the inactive Ge layer. Good agreement is obtained between simulated and experimental full-absorption efficiencies. The characteristic stripping matrix is determined by Monte Carlo simulation for different incident photon energies using the Geant4 toolkit. The photon flux energy distribution is deduced by stripping the measured spectrum of the partial-absorption and cosmic-ray events and then applying the full-absorption efficiency curve. The stripping method is applied to a measured in-situ spectrum. The absorbed dose rate in air deduced from the corresponding flux energy distribution agrees well with the value measured directly in-situ.
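The stripping step can be sketched in a few lines. The response-matrix convention and all numbers below are invented for illustration, not the paper's Geant4-derived data: working from the highest-energy channel downward, each channel's counts are converted to flux via the full-absorption efficiency, and that channel's partial-absorption tail is subtracted from the lower channels.

```python
import numpy as np

def strip_spectrum(measured, response, full_eff):
    """Convert a measured pulse-height spectrum to incident photon flux.

    measured : counts per channel
    response : response[i, j] = expected counts in lower channel i per
               incident photon of energy j (partial-absorption tail, i < j)
    full_eff : full-absorption-peak counts per incident photon of energy j
    """
    n = len(measured)
    counts = np.asarray(measured, dtype=float).copy()
    flux = np.zeros(n)
    for j in range(n - 1, -1, -1):               # strip from the top down
        flux[j] = counts[j] / full_eff[j]
        counts[:j] -= flux[j] * response[:j, j]  # remove its low-energy tail
    return flux
```

With a consistent synthetic spectrum (true flux folded through the same response), the routine recovers the input flux exactly.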
Seichter, Felicia; Vogt, Josef; Radermacher, Peter; Mizaikoff, Boris
2017-01-25
The calibration of analytical systems is time-consuming, and the effort for daily calibration routines should therefore be minimized while maintaining analytical accuracy and precision. The 'calibration transfer' approach proposes to combine previously recorded calibration data with actual calibration measurements. However, this strategy was developed for the multivariate, linear analysis of spectroscopic data and thus cannot be applied to sensors with a single response channel and/or a non-linear relationship between signal and analyte concentration. To fill this gap for a non-linear calibration equation, we assume that the coefficients of the equation, collected over several calibration runs, are normally distributed. Considering that the coefficients of an actual calibration are a sample from this distribution, only a few standards are needed for a complete calibration data set. The resulting calibration transfer approach is demonstrated for a fluorescence oxygen sensor and implemented as a hierarchical Bayesian model, combined with a Lagrange multiplier technique and Markov-chain Monte-Carlo sampling. The latter provides realistic estimates for coefficients and predictions, together with accurate error bounds, by simulating known measurement errors and system fluctuations. Performance criteria for validation and for optimal selection of a reduced set of calibration samples were developed and lead to a setup that maintains the analytical performance of a full calibration. Strategies for rapid detection of problems occurring in a daily calibration routine are proposed, opening the possibility of correcting the problem just in time.
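The core idea, historic calibrations supplying a normal prior on the coefficients so that only a few fresh standards anchor today's calibration, can be sketched with a Metropolis sampler. The Stern-Volmer-type model form, the prior values and the noise level are illustrative assumptions, not the paper's sensor model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Prior from historic calibrations on theta = (I0, Ksv) of an assumed
# oxygen-quenching model: signal = I0 / (1 + Ksv * [O2])
prior_mean = np.array([100.0, 0.05])
prior_sd = np.array([5.0, 0.005])
noise_sd = 1.0

def model(theta, conc):
    return theta[0] / (1.0 + theta[1] * conc)

def log_post(theta, conc, y):
    if theta[1] <= 0:
        return -np.inf
    log_prior = -0.5 * np.sum(((theta - prior_mean) / prior_sd) ** 2)
    log_like = -0.5 * np.sum((y - model(theta, conc)) ** 2) / noise_sd ** 2
    return log_prior + log_like

# Only two fresh standards are measured for the actual calibration
conc = np.array([0.0, 50.0])
true_theta = np.array([102.0, 0.052])
y = model(true_theta, conc) + rng.normal(0.0, noise_sd, size=2)

# Random-walk Metropolis sampling of the posterior
theta = prior_mean.copy()
samples = []
for _ in range(20_000):
    prop = theta + rng.normal(0.0, [0.5, 0.0005])
    if np.log(rng.uniform()) < log_post(prop, conc, y) - log_post(theta, conc, y):
        theta = prop
    samples.append(theta.copy())
post = np.array(samples[5000:])
post_mean = post.mean(axis=0)
```

The posterior spread also yields the error bounds on predicted concentrations that the abstract mentions.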
Directory of Open Access Journals (Sweden)
Bin Zhou
It is difficult to evaluate and compare interventions for reducing exposure to air pollutants, including polycyclic aromatic hydrocarbons (PAHs), which are widely found in both indoor and outdoor air. This study presents the first application of a Monte Carlo population exposure assessment model to quantify the effects of different intervention strategies on inhalation exposure to PAHs and the associated lung cancer risk. The method was applied to the population of Beijing, China, in the year 2006. Several intervention strategies were designed and studied, including atmospheric cleaning, indoor smoking prohibition, use of clean cooking fuel, enhanced ventilation while cooking and use of indoor air cleaners. Their performance was quantified by the population attributable fraction (PAF) and the potential impact fraction (PIF) of lung cancer risk, and the changes in indoor PAH concentrations and annual inhalation doses were also calculated and compared. The results showed that atmospheric cleaning and use of indoor air cleaners were the two most effective interventions. The sensitivity analysis showed that several input parameters had a major influence on the modeled PAH inhalation exposure and on the rankings of the interventions; the ranking was reasonably robust for the remaining majority of parameters. The method itself can be extended to other pollutants and other locations. It enables quantitative comparison of intervention strategies and would benefit intervention design and relevant policy making.
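The PIF calculation behind such a comparison can be sketched as follows. All distribution parameters, the time-activity weight and the unit risk value are illustrative placeholders, not the Beijing survey data; the PAF is the same fraction computed against the theoretical-minimum exposure scenario instead of a specific intervention.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # simulated individuals

# Hypothetical lognormal BaP-equivalent PAH concentrations (ng/m^3)
outdoor = rng.lognormal(mean=1.0, sigma=0.5, size=n)
indoor = rng.lognormal(mean=1.5, sigma=0.6, size=n)
time_indoors = 0.7
exposure = time_indoors * indoor + (1 - time_indoors) * outdoor

unit_risk = 8.7e-5  # illustrative lung-cancer unit risk per ng/m^3

baseline_risk = np.mean(unit_risk * exposure)

# Intervention scenario: indoor air cleaners halve indoor concentrations
cleaned = time_indoors * (0.5 * indoor) + (1 - time_indoors) * outdoor
intervened_risk = np.mean(unit_risk * cleaned)

# Potential impact fraction of the intervention
pif = (baseline_risk - intervened_risk) / baseline_risk
```

Competing interventions are then ranked by their PIF values under the same sampled population.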
5.0. Depletion, activation, and spent fuel source terms
Energy Technology Data Exchange (ETDEWEB)
Wieselquist, William A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2016-04-01
SCALE’s general depletion, activation, and spent fuel source terms analysis capabilities are enabled through a family of modules related to the main ORIGEN depletion/irradiation/decay solver. The nuclide tracking in ORIGEN is based on the principle of explicitly modeling all available nuclides and transitions in the current fundamental nuclear data for decay and neutron-induced transmutation, and relies on fundamental cross-section and decay data in ENDF/B-VII. Cross-section data for materials and reaction processes not available in ENDF/B-VII are obtained from the JEFF-3.0/A special-purpose European activation library containing 774 materials and 23 reaction channels with 12,617 neutron-induced reactions below 20 MeV. Resonance cross-section corrections in the resolved and unresolved range are performed using a continuous-energy treatment by data modules in SCALE. All nuclear decay data, fission product yields, and gamma-ray emission data are developed from ENDF/B-VII.1 evaluations. Decay data include all ground and metastable state nuclides with half-lives greater than 1 millisecond. Using these data sources, ORIGEN currently tracks 174 actinides, 1149 fission products, and 974 activation products. The purpose of this chapter is to describe the stand-alone capabilities and underlying methodology of ORIGEN, as opposed to the integrated depletion capability it provides in all coupled neutron transport/depletion sequences in SCALE, which is described in other chapters.
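ORIGEN integrates the full coupled transmutation/decay system for thousands of nuclides; the textbook kernel underlying such solvers is the analytic Bateman solution for a linear decay chain, sketched here (a simplification, not ORIGEN's actual matrix-exponential machinery):

```python
import numpy as np

def bateman(n0, lambdas, t):
    """Atoms in each member of a linear chain N1 -> N2 -> ... at time t,
    starting from n0 atoms of N1 only (classic Bateman solution; the
    decay constants must be pairwise distinct)."""
    lam = np.asarray(lambdas, dtype=float)
    n = np.zeros(len(lam))
    for i in range(len(lam)):
        coef = np.prod(lam[:i])  # product of upstream production rates
        total = 0.0
        for j in range(i + 1):
            denom = np.prod([lam[p] - lam[j] for p in range(i + 1) if p != j])
            total += np.exp(-lam[j] * t) / denom
        n[i] = n0 * coef * total
    return n
```

For a single nuclide this reduces to simple exponential decay; for two members it reproduces the familiar parent-daughter formula.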
Identifying water mass depletion in Northern Iraq observed by GRACE
Directory of Open Access Journals (Sweden)
G. Mulder
2014-10-01
Observations acquired by the Gravity Recovery And Climate Experiment (GRACE) mission indicate a mass loss of 31 ± 3 km3, or 130 ± 14 mm, in Northern Iraq between 2007 and 2009. These data are used as an independent validation of a hydrologic model of the region including lake mass variations. We developed a rainfall-runoff model for five tributaries of the Tigris River, based on local geology and climate conditions. Model inputs are precipitation from Tropical Rainfall Measurement Mission (TRMM) observations and potential evaporation from GLDAS model parameters. Our model includes a representation of the karstified aquifers that cause large natural groundwater variations in this region. Observed river discharges were used to calibrate the model. To obtain the total mass variations, we corrected for lake mass variations derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) in combination with satellite altimetry and some in-situ data. Our rainfall-runoff model confirms that Northern Iraq suffered a drought between 2007 and 2009 and is consistent with the mass loss observed by GRACE over that period. GRACE also observed the annual cycle predicted by the rainfall-runoff model. The total mass depletion seen by GRACE between 2007 and 2009 is mainly explained by a lake mass depletion of 74 ± 4 mm and a natural groundwater depletion of 37 ± 6 mm. Our findings indicate that man-made groundwater extraction has a minor influence in this region, while depletion of lake mass and geology play a key role.
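A conceptual rainfall-runoff model of this kind can be sketched as a single storage bucket; the structure and all parameter values below are illustrative stand-ins for the paper's calibrated karst-aquifer model:

```python
import numpy as np

def bucket_model(precip, pet, capacity=150.0, k_base=0.05, s0=50.0):
    """Single-store monthly water balance in mm.  Storage gains
    precipitation, loses actual evaporation limited by available water,
    spills above `capacity`, and drains slowly as baseflow (a toy analog
    of karst drainage).  Returns runoff, actual evaporation and storage."""
    s = s0
    runoff, aet, storage = [], [], []
    for p, e in zip(precip, pet):
        s += p
        evap = min(e, s)            # evaporation limited by storage
        s -= evap
        spill = max(0.0, s - capacity)
        s -= spill
        base = k_base * s           # slow baseflow release
        s -= base
        runoff.append(spill + base)
        aet.append(evap)
        storage.append(s)
    return np.array(runoff), np.array(aet), np.array(storage)
```

Calibration against observed discharge would adjust `capacity` and `k_base`; summing storage changes over the basin gives the model's contribution to the GRACE-style mass balance.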
Multilevel sequential Monte Carlo samplers
Beskos, Alexandros
2016-08-29
In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h_0 > h_1 > ⋯ > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context, relative to exact sampling and Monte Carlo for the distribution at the finest level h_L. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.
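The telescoping identity E[P_L] = E[P_0] + Σ_l E[P_l − P_{l-1}] can be illustrated with plain MLMC (the SMC variant of the paper is not reproduced here). The "discretization" below is a toy bias term h_l·X² with h_l = 2^{-l} standing in for a PDE solver's step-size error; the level differences are computed on coupled (shared) samples so their variance shrinks with h_l, allowing far fewer samples on the expensive fine levels:

```python
import numpy as np

rng = np.random.default_rng(1)

def level_estimate(l, n):
    """Monte Carlo estimate of E[P_l - P_{l-1}] with coupled samples
    (P_{-1} := 0).  P_l = sin(X) + h_l * X^2 mimics a quantity whose
    discretization bias decays like the step size h_l = 2**-l."""
    x = rng.normal(size=n)
    h = 2.0 ** -l
    p_fine = np.sin(x) + h * x**2
    if l == 0:
        return p_fine.mean()
    p_coarse = np.sin(x) + 2.0 * h * x**2  # same samples, level l-1
    return (p_fine - p_coarse).mean()

# Telescoping sum with sample counts decreasing on the finer levels
L = 5
mlmc_estimate = sum(level_estimate(l, 200_000 // 4**l + 100)
                    for l in range(L + 1))
```

For a standard normal X, E[sin X] = 0 and E[X²] = 1, so the exact value of E[P_L] is simply h_L = 2^{-5}.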
Depleted Bulk Heterojunction Colloidal Quantum Dot Photovoltaics
Barkhouse, D. Aaron R.
2011-05-26
The first solution-processed depleted bulk heterojunction colloidal quantum dot solar cells are presented. The architecture allows for high absorption with full depletion, thereby breaking the photon absorption/carrier extraction compromise inherent in planar devices. A record power conversion of 5.5% under simulated AM 1.5 illumination conditions is reported. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A theoretical model of atmospheric ozone depletion
Midya, S. K.; Jana, P. K.; Lahiri, T.
1994-01-01
A critical study on different ozone depletion and formation processes has been made and following important results are obtained: (i) From analysis it is shown that O3 concentration will decrease very minutely with time for normal atmosphere when [O], [O2] and UV-radiation remain constant. (ii) An empirical equation is established theoretically between the variation of ozone concentration and time. (iii) Special ozone depletion processes are responsible for the dramatic decrease of O3-concentration at Antarctica.
Depleted bulk heterojunction colloidal quantum dot photovoltaics
Energy Technology Data Exchange (ETDEWEB)
Barkhouse, D.A.R. [Department of Electrical and Computer Engineering, University of Toronto, 10 King's College Road, Toronto, Ontario M5S 3G4 (Canada); IBM Thomas J. Watson Research Center, Kitchawan Road, Yorktown Heights, NY, 10598 (United States); Debnath, Ratan; Kramer, Illan J.; Zhitomirsky, David; Levina, Larissa; Sargent, Edward H. [Department of Electrical and Computer Engineering, University of Toronto, 10 King's College Road, Toronto, Ontario M5S 3G4 (Canada); Pattantyus-Abraham, Andras G. [Department of Electrical and Computer Engineering, University of Toronto, 10 King's College Road, Toronto, Ontario M5S 3G4 (Canada); Quantum Solar Power Corporation, 1055 W. Hastings, Ste. 300, Vancouver, BC, V6E 2E9 (Canada); Etgar, Lioz; Graetzel, Michael [Laboratory for Photonics and Interfaces, Institute of Chemical Sciences and Engineering, School of Basic Sciences, Swiss Federal Institute of Technology, CH-1015 Lausanne (Switzerland)
2011-07-26
The first solution-processed depleted bulk heterojunction colloidal quantum dot solar cells are presented. The architecture allows for high absorption with full depletion, thereby breaking the photon absorption/carrier extraction compromise inherent in planar devices. A record power conversion of 5.5% under simulated AM 1.5 illumination conditions is reported. (Copyright 2011 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
Anatomy of Depleted Interplanetary Coronal Mass Ejections
Kocher, M.; Lepri, S. T.; Landi, E.; Zhao, L.; Manchester, W. B., IV
2017-01-01
We report a subset of interplanetary coronal mass ejections (ICMEs) containing distinct periods of anomalous heavy-ion charge state composition and peculiar ion thermal properties measured by ACE/SWICS from 1998 to 2011. We label them “depleted ICMEs,” identified by the presence of intervals where C6+/C5+ and O7+/O6+ depart from the direct correlation expected after their freeze-in heights. These anomalous intervals within the depleted ICMEs are referred to as “Depletion Regions.” We find that a depleted ICME would be indistinguishable from all other ICMEs in the absence of the Depletion Region, which has the defining property of significantly low abundances of fully charged species of helium, carbon, oxygen, and nitrogen. Similar anomalies in the slow solar wind were discussed by Zhao et al. We explore two possibilities for the source of the Depletion Region associated with magnetic reconnection in the tail of a CME, using CME simulations of the evolution of two Earth-bound CMEs described by Manchester et al.
MONTE-CARLO BURNUP CALCULATION UNCERTAINTY QUANTIFICATION AND PROPAGATION DETERMINATION
Energy Technology Data Exchange (ETDEWEB)
Nichols, T.; Sternat, M.; Charlton, W.
2011-05-08
MONTEBURNS is a Monte-Carlo depletion routine utilizing MCNP and ORIGEN 2.2. Uncertainties exist in the MCNP transport calculation, but this information is neither passed to the depletion calculation in ORIGEN nor saved. To quantify this transport uncertainty and determine how it propagates between burnup steps, a statistical analysis of multiple repeated depletion runs was performed. The reactor model chosen is the Oak Ridge Research Reactor (ORR) in a single-assembly, infinite-lattice configuration. This model was burned for a 25.5-day cycle broken into three steps. The output isotopics as well as the effective multiplication factor (k-effective) were tabulated, and histograms were created at each burnup step using the Scott method to determine the bin width. It was expected that the gram-quantity and k-effective histograms would be normally distributed since they were produced by a Monte-Carlo routine, but some of the results are not. The standard deviation at each burnup step was consistent between fission product isotopes, as expected, while the uranium isotopes produced some unique results. The variation in the quantity of uranium was small enough that round-off error in the reaction-rate MCNP tally produced sets of repeated results with only slight variation. Statistical analyses were performed using the {chi}{sup 2} test against a normal distribution for several isotopes and for the k-effective results. While the isotopes failed to reject the null hypothesis of normality, the {chi}{sup 2} statistic grew through the steps in the k-effective test, and the null hypothesis was rejected in the later steps. These results suggest that, for a high-accuracy solution, MCNP cell material quantities of less than 100 grams and larger kcode parameters are needed to minimize uncertainty propagation and round-off effects.
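The two statistical tools used here, Scott's rule for the histogram bin width and a chi-square comparison against a fitted normal distribution, can be sketched directly (the mock "isotope mass" data below are invented, not the ORR results):

```python
import numpy as np
from math import erf, sqrt

def scott_bin_width(x):
    """Scott's rule for histogram bin width: h = 3.49 * s * n**(-1/3)."""
    return 3.49 * np.std(x, ddof=1) * len(x) ** (-1.0 / 3.0)

def chi2_normality_stat(x, edges):
    """Chi-square statistic of observed vs expected bin counts under a
    normal distribution fitted to the sample (expected counts from the
    normal CDF evaluated at the bin edges)."""
    mu, sd = x.mean(), x.std(ddof=1)
    obs, _ = np.histogram(x, bins=edges)
    cdf = np.array([0.5 * (1.0 + erf((v - mu) / (sd * sqrt(2.0))))
                    for v in edges])
    expected = len(x) * np.diff(cdf)
    return np.sum((obs - expected) ** 2 / expected)

rng = np.random.default_rng(2)
masses = rng.normal(loc=50.0, scale=0.05, size=5000)  # mock gram quantities
width = scott_bin_width(masses)
edges = np.arange(masses.mean() - 3 * masses.std(),
                  masses.mean() + 3 * masses.std(), width)
stat = chi2_normality_stat(masses, edges)
```

For truly normal data the statistic stays near the number of bins; a statistic far above that, as seen for k-effective in the later burnup steps, rejects normality.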
Qin, Jianguo; Jiang, Li; Liu, Rong; Zhang, Xinwei; Ye, Bangjiao; Zhu, Tonghua
2015-01-01
The prompt gamma-ray spectrum from depleted uranium (DU) spherical shells induced by 14 MeV D-T neutrons is measured. Monte Carlo (MC) simulation gives the largest prompt gamma flux with an optimal DU shell thickness of 3-5 cm and an optimal neutron pulse frequency of 1 MHz. The method of time of flight and pulse shape coincidence with energy (DC-TOF) is proposed, and the subtraction of the background gamma-rays is discussed in detail. The electron recoil spectrum and time spectrum of the prompt gamma-rays are obtained with a 2″×2″ BC501A liquid scintillator detector. The energy spectrum and time spectrum of the prompt gamma-rays are obtained with an iterative unfolding method that can remove the influence of the gamma-ray response matrix and the pulsed neutron shape. The measured time spectrum and the calculated results are roughly consistent with each other. The experimental prompt gamma-ray spectrum in the 0.4-3 MeV energy region agrees well with MC simulation based on the ENDF/BVI.5 library, and ...
Qin, Jian-Guo; Lai, Cai-Feng; Jiang, Li; Liu, Rong; Zhang, Xin-Wei; Ye, Bang-Jiao; Zhu, Tong-Hua
2016-01-01
The prompt γ-ray spectrum from depleted uranium (DU) spherical shells induced by 14 MeV D-T neutrons is measured. Monte Carlo (MC) simulation gives the largest prompt γ flux with the optimal thickness of the DU spherical shells 3-5 cm and the optimal frequency of neutron pulse 1 MHz. The method of time of flight and pulse shape coincidence with energy (DC-TOF) is proposed, and the subtraction of the background γ-rays discussed in detail. The electron recoil spectrum and time spectrum of the prompt γ-rays are obtained based on a 2″×2″ BC501A liquid scintillator detector. The energy spectrum and time spectrum of prompt γ-rays are obtained based on an iterative unfolding method that can remove the influence of γ-rays response matrix and pulsed neutron shape. The measured time spectrum and the calculated results are roughly consistent with each other. Experimental prompt γ-ray spectrum in the 0.4-3 MeV energy region agrees well with MC simulation based on the ENDF/BVI.5 library, and the discrepancies for the integral quantities of γ-rays of energy 0.4-1 MeV and 1-3 MeV are 9.2% and 1.1%, respectively. Supported by National Special Magnetic Confinement Fusion Energy Research, China (2015GB108001) and National Natural Science Foundation of China (91226104)
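An iterative unfolding of a measured spectrum through a known response matrix can be sketched with a Richardson-Lucy-type multiplicative update; the specific algorithm and the toy three-bin response below are illustrative, not the authors' unfolding code or the BC501A response:

```python
import numpy as np

def unfold_rl(measured, response, n_iter=5000):
    """Richardson-Lucy-type iterative unfolding.  Columns of `response`
    are normalized detector responses (column j = measured distribution
    for true bin j); the multiplicative update keeps the solution
    non-negative and drives response @ x toward the measurement."""
    r = np.asarray(response, dtype=float)
    y = np.asarray(measured, dtype=float)
    x = np.full(r.shape[1], y.sum() / r.shape[1])  # flat starting guess
    for _ in range(n_iter):
        pred = r @ x
        x = x * (r.T @ (y / np.maximum(pred, 1e-30)))
    return x

# Column-normalized toy response with modest bin-to-bin smearing
R = np.array([[0.8, 0.2, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.2, 0.8]])
true = np.array([100.0, 50.0, 80.0])
measured = R @ true
unfolded = unfold_rl(measured, R)
```

For consistent, noise-free data and a well-conditioned response, the iteration converges to the true spectrum; with real counting noise one stops early or regularizes.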
Equilibrium Statistics: Monte Carlo Methods
Kröger, Martin
Monte Carlo methods use random numbers, or 'random' sequences, to sample from a known shape of a distribution, to extract a distribution by other means, and, in the context of this book, to (i) generate representative equilibrated samples prior to being subjected to external fields, or (ii) evaluate high-dimensional integrals. Recipes for both topics, and some more general methods, are summarized in this chapter. It is important to realize that Monte Carlo should be as artificial as possible to be efficient and elegant. Advanced Monte Carlo 'moves', required to optimize the speed of algorithms for a particular problem at hand, are outside the scope of this brief introduction. One particular modern example is the wavelet-accelerated MC sampling of polymer chains [406].
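Use case (ii), evaluating a high-dimensional integral, can be sketched in a few lines; the integrand is chosen so the exact answer factorizes and can be checked:

```python
import numpy as np
from math import erf, pi, sqrt

rng = np.random.default_rng(7)

# Plain Monte Carlo estimate of the 10-dimensional integral
#   I = \int_{[0,1]^10} exp(-|x|^2) dx,
# whose exact value factorizes into (sqrt(pi)/2 * erf(1))**10.
d, n = 10, 200_000
x = rng.uniform(size=(n, d))
samples = np.exp(-np.sum(x**2, axis=1))
estimate = samples.mean()
stderr = samples.std(ddof=1) / sqrt(n)
exact = (sqrt(pi) / 2.0 * erf(1.0)) ** d
```

The standard error shrinks as n^{-1/2} regardless of dimension, which is the reason Monte Carlo is preferred over product quadrature rules in high dimensions.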
The TESS camera: modeling and measurements with deep depletion devices
Woods, Deborah F.; Vanderspek, Roland; MacDonald, Robert; Morgan, Edward; Villasenor, Joel; Thayer, Carolyn; Burke, Barry; Chesbrough, Christian; Chrisp, Michael; Clark, Kristin; Furesz, Gabor; Gonzales, Alexandria; Nguyen, Tam; Prigozhin, Gregory; Primeau, Brian; Ricker, George; Sauerwein, Timothy; Suntharalingam, Vyshnavi
2016-07-01
The Transiting Exoplanet Survey Satellite, a NASA Explorer-class mission in development, will discover planets around nearby stars, most notably Earth-like planets with potential for follow up characterization. The all-sky survey requires a suite of four wide field-of-view cameras with sensitivity across a broad spectrum. Deep depletion CCDs with a silicon layer of 100 μm thickness serve as the camera detectors, providing enhanced performance in the red wavelengths for sensitivity to cooler stars. The performance of the camera is critical for the mission objectives, with both the optical system and the CCD detectors contributing to the realized image quality. Expectations for image quality are studied using a combination of optical ray tracing in Zemax and simulations in Matlab to account for the interaction of the incoming photons with the 100 μm silicon layer. The simulations include a probabilistic model to determine the depth of travel in the silicon before the photons are converted to photo-electrons, and a Monte Carlo approach to charge diffusion. The charge diffusion model varies with the remaining depth for the photo-electron to traverse and the strength of the intermediate electric field. The simulations are compared with laboratory measurements acquired by an engineering unit camera with the TESS optical design and deep depletion CCDs. In this paper we describe the performance simulations and the corresponding measurements taken with the engineering unit camera, and discuss where the models agree well in predicted trends and where there are differences compared to observations.
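The two stochastic ingredients named above, the probabilistic conversion depth and the Monte Carlo charge diffusion, can be sketched as follows. The absorption length and the diffusion scaling are illustrative placeholders, not the TESS calibration values or the actual field model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy Monte Carlo of photon conversion and charge diffusion in a
# 100-um-thick deep-depletion CCD
thickness = 100.0    # silicon thickness, um
abs_length = 30.0    # photon absorption length, um (wavelength dependent)
n_photons = 50_000

# Depth of conversion follows exponential attenuation; photons that
# would convert beyond the silicon are not detected
depth = rng.exponential(abs_length, n_photons)
converted = depth < thickness
frac_converted = converted.mean()
depth = depth[converted]

# Lateral charge diffusion: Gaussian spread growing with the remaining
# drift distance to the collection gates (illustrative scaling law)
remaining = thickness - depth
sigma = 0.5 * np.sqrt(remaining)  # um
dx = rng.normal(0.0, sigma)
```

Binning the resulting `dx` offsets into pixels and convolving with the optical point-spread function yields the simulated image quality that is compared with the engineering-unit measurements.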
Energy Technology Data Exchange (ETDEWEB)
Parent, L [Joint Department of Physics, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, Sutton (United Kingdom); Fielding, A L [School of Physical and Chemical Sciences, Queensland University of Technology, Brisbane (Australia); Dance, D R [Joint Department of Physics, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, London (United Kingdom); Seco, J [Department of Radiation Oncology, Francis Burr Proton Therapy Center, Massachusetts General Hospital, Harvard Medical School, Boston (United States); Evans, P M [Joint Department of Physics, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, Sutton (United Kingdom)
2007-07-21
For EPID dosimetry, the calibration should ensure that all pixels have a similar response to a given irradiation. A calibration method (MC), using an analytical fit of a Monte Carlo simulated flood-field EPID image to correct for the flood-field image pixel intensity shape, was proposed. It was compared with the standard flood-field calibration (FF), with the use of a water slab placed in the beam to flatten the flood field (WS), and with a multiple-field calibration in which the EPID was irradiated with a fixed 10 x 10 field at 16 different positions (MF). The EPID was used in its normal configuration (clinical setup) and with an additional 3 mm copper slab (modified setup). Beam asymmetry measured with a diode array was taken into account in the MC and WS methods. For both setups, the MC method provided pixel sensitivity values within 3% of those obtained with the MF and WS methods (mean difference <1%, standard deviation <2%). The difference in pixel sensitivity between the MC and FF methods was up to 12.2% (clinical setup) and 11.8% (modified setup). MC calibration provided images of open fields (5 x 5 to 20 x 20 cm{sup 2}) and IMRT fields within 3% of those obtained with WS and MF calibrations, while differences from images calibrated with the FF method for fields larger than 10 x 10 cm{sup 2} were up to 8%. The MC, WS and MF methods all provided a major improvement on the FF method. Advantages and drawbacks of each method are reviewed.
Soligo, Riccardo
In this work, the insight provided by our sophisticated Full Band Monte Carlo simulator is used to analyze the behavior of state-of-the-art devices such as GaN High Electron Mobility Transistors and Hot Electron Transistors. Chapter 1 is dedicated to the description of the simulation tool used to obtain the results shown in this work. Moreover, a separate section is dedicated to the setup of a procedure to validate the tunneling algorithm recently implemented in the simulator. Chapter 2 introduces High Electron Mobility Transistors (HEMTs), state-of-the-art devices characterized by highly non-linear transport phenomena that require the use of advanced simulation methods. The techniques for device modeling are described and applied to a recent GaN HEMT, and they are validated with experimental measurements. The main characterization techniques are also described, including the original contribution provided by this work. Chapter 3 focuses on a popular technique to enhance HEMT performance: the down-scaling of the device dimensions. In particular, this chapter is dedicated to lateral scaling and the calculation of a limiting cutoff frequency for a device of vanishing length. Finally, Chapter 4 and Chapter 5 describe the modeling of Hot Electron Transistors (HETs). The simulation approach is validated by matching the current characteristics with the experimental ones before variations of the layouts are proposed to increase the current gain to values suitable for amplification. The frequency response of these layouts is calculated and modeled by a small-signal circuit. For this purpose, a method to directly calculate the capacitance is developed, which provides a graphical picture of the capacitive phenomena that limit the frequency response of devices. In Chapter 5 the properties of the hot electrons are investigated for different injection energies, which are obtained by changing the layout of the emitter barrier. Moreover, the large signal characterization of the
Monte Carlo-based pricing of convertible bonds
Institute of Scientific and Technical Information of China (English)
赵洋; 赵立臣
2009-01-01
The paper applies the least-squares Monte Carlo method proposed by Longstaff et al. to price convertible bonds, solving the problem of pricing the path-dependent clauses and American option features embedded in convertible bonds. Convertible bonds are complex hybrid securities subject to equity risk, credit risk, and interest rate risk. In the pricing model established here, the assumption of constant volatility is relaxed and volatility is estimated with GARCH(1,1); following the TF model, credit risk is represented by a constant credit spread; and the yield curve is estimated with the Nelson-Siegel method. Empirical tests find that convertible bonds in the Chinese market are underpriced by 2% to 3%.
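The least-squares Monte Carlo backbone can be sketched for a plain Bermudan put; the convertible-bond application layers conversion features, the credit spread and the GARCH volatility on top of this, and all parameter values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Longstaff-Schwartz least-squares Monte Carlo for a Bermudan put
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_paths, n_steps = 100_000, 50
dt = T / n_steps

# Simulate geometric Brownian motion paths under the risk-neutral measure
z = rng.normal(size=(n_paths, n_steps))
log_s = np.log(S0) + np.cumsum(
    (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
s = np.exp(log_s)

# Backward induction: regress continuation value on a polynomial basis
# of the stock price, using in-the-money paths only
payoff = np.maximum(K - s[:, -1], 0.0)
for t in range(n_steps - 2, -1, -1):
    payoff *= np.exp(-r * dt)            # discount one step
    itm = (K - s[:, t]) > 0
    if itm.any():
        x = s[itm, t]
        coef = np.polyfit(x, payoff[itm], 3)
        continuation = np.polyval(coef, x)
        exercise = K - x
        payoff[itm] = np.where(exercise > continuation, exercise, payoff[itm])
price = np.exp(-r * dt) * payoff.mean()
```

The regression replaces the unknown conditional expectation of continuing, which is what makes early-exercise (and, by extension, conversion) features tractable in a simulation framework.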
Ferretti, A; Martignano, A; Simonato, F; Paiusco, M
2014-02-01
The aim of the present work was the validation of the VMC++ Monte Carlo (MC) engine implemented in Oncentra Masterplan (OMTPS) and used to calculate the dose distribution produced by the electron beams (energy 5-12 MeV) generated by the linear accelerator (linac) Primus (Siemens), shaped by a digital variable applicator (DEVA). The BEAMnrc/DOSXYZnrc (EGSnrc package) MC model of the linac head was used as a benchmark. Commissioning results for both MC codes were evaluated by means of 1D gamma analysis (2%, 2 mm), calculated with a home-made Matlab (The MathWorks) program, comparing the calculations with the measured profiles. The results of the commissioning of OMTPS were good [average gamma index (γ) > 97%]; some mismatches were found with large beams (size ≥ 15 cm). The optimization of the BEAMnrc model required increasing the beam exit window to match the calculated and measured profiles (final average γ > 98%). OMTPS dose distribution maps were then compared with DOSXYZnrc with a 2D gamma analysis (3%, 3 mm) in three virtual water phantoms: (a) with an air step, (b) with an air insert, and (c) with a bone insert. The OMTPS and EGSnrc dose distributions for the air-water step phantom were in very high agreement (γ ∼ 99%), while for the heterogeneous phantoms there were differences of about 9% in the air insert and about 10-15% in the bone region. This is due to the Masterplan implementation of VMC++, which reports the dose as "dose to water" instead of "dose to medium".
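The 1D gamma analysis used for commissioning can be sketched directly (this is the standard global gamma-index definition, not the authors' Matlab code; the dose profile is a toy Gaussian):

```python
import numpy as np

def gamma_index_1d(x_ref, d_ref, x_eval, d_eval, dd=0.02, dta=2.0):
    """Global 1D gamma index (default 2%/2 mm).  For each reference
    point, take the minimum over evaluated points of the combined
    dose-difference / distance-to-agreement metric; gamma <= 1 passes."""
    d_norm = dd * d_ref.max()   # global dose criterion
    gam = np.empty(len(x_ref))
    for i in range(len(x_ref)):
        dose_term = ((d_eval - d_ref[i]) / d_norm) ** 2
        dist_term = ((x_eval - x_ref[i]) / dta) ** 2
        gam[i] = np.sqrt(np.min(dose_term + dist_term))
    return gam

# Identical profiles pass trivially; a 1 mm shift stays inside the
# 2 mm distance-to-agreement tolerance (gamma = 0.5)
x = np.linspace(-10.0, 10.0, 201)   # position, mm
dose = np.exp(-x**2 / 50.0)         # toy dose profile
gamma_same = gamma_index_1d(x, dose, x, dose)
gamma_shift = gamma_index_1d(x, dose, x + 1.0, dose)
```

The pass rate reported in the abstract is simply the fraction of reference points with gamma ≤ 1.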
Directory of Open Access Journals (Sweden)
D. G. Partridge
2012-03-01
This paper presents a novel approach to investigate cloud-aerosol interactions by coupling a Markov chain Monte Carlo (MCMC) algorithm to an adiabatic cloud parcel model. Despite the number of numerical cloud-aerosol sensitivity studies previously conducted, few have used statistical analysis tools to investigate the global sensitivity of a cloud model to input aerosol physicochemical parameters. Using numerically generated cloud droplet number concentration (CDNC) distributions (i.e., synthetic data) as cloud observations, this inverse modelling framework is shown to successfully estimate the correct calibration parameters and their underlying posterior probability distribution.
The employed analysis method provides a new, integrative framework for evaluating the global sensitivity of the derived CDNC distribution to the input parameters describing the lognormal properties of the accumulation-mode aerosol and the particle chemistry. To a large extent, results from prior studies are confirmed, but the present study also provides some additional insights. There is a transition in relative sensitivity: in very clean marine Arctic conditions, the lognormal aerosol parameters representing the accumulation-mode number concentration and mean radius are found to be most important for determining the CDNC distribution, whereas in very polluted continental environments (aerosol concentration in the accumulation mode >1000 cm^{−3}), particle chemistry is more important than both the number concentration and the size of the accumulation mode.
The competition and compensation between the cloud model input parameters illustrates that if the soluble mass fraction is reduced, the aerosol number concentration, geometric standard deviation and mean radius of the accumulation mode must increase in order to achieve the same CDNC distribution.
This study demonstrates that inverse modelling provides a flexible, transparent and
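The synthetic-data inverse-modelling loop described above can be sketched with a toy activation parameterization standing in for the adiabatic parcel model; the functional form, parameter values and noise level are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy activation law: CDNC from accumulation-mode number N (cm^-3)
# and mean radius r (um).  A stand-in for the parcel model.
def cdnc(theta):
    n, r = theta
    return n * (1.0 - np.exp(-(r / 0.05) ** 2))

# Synthetic "observations" generated from known true parameters
true_theta = np.array([500.0, 0.08])
obs = cdnc(true_theta) + rng.normal(0.0, 5.0, size=20)

def log_post(theta):
    n, r = theta
    if not (0.0 < n < 5000.0 and 0.01 < r < 0.5):
        return -np.inf   # flat prior with physical bounds
    return -0.5 * np.sum((obs - cdnc(theta)) ** 2) / 5.0**2

# Random-walk Metropolis chain
theta = np.array([300.0, 0.15])
chain = []
for _ in range(30_000):
    prop = theta + rng.normal(0.0, [10.0, 0.01])
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta.copy())
post = np.array(chain[10_000:])
post_pred = np.array([cdnc(t) for t in post])
```

Because N and r compensate each other in this toy law, the posterior is a ridge rather than a point, which is exactly the kind of parameter competition the paper's global sensitivity analysis is designed to expose; the posterior predictive CDNC is nevertheless tightly constrained.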
Punt, Ans; Paini, Alicia; Spenkelink, Albertus; Scholz, Gabriele; Schilter, Benoit; van Bladeren, Peter J; Rietjens, Ivonne M C M
2016-04-18
Estragole is a known hepatocarcinogen in rodents at high doses following metabolic conversion to the DNA-reactive metabolite 1'-sulfooxyestragole. The aim of the present study was to model possible levels of DNA adduct formation in (individual) humans upon exposure to estragole. This was done by extending a previously defined PBK model for estragole in humans to include (i) new data on interindividual variation in the kinetics for the major PBK model parameters influencing the formation of 1'-sulfooxyestragole, (ii) an equation describing the relationship between 1'-sulfooxyestragole and DNA adduct formation, (iii) Monte Carlo modeling to simulate interindividual human variation in DNA adduct formation in the population, and (iv) a comparison of the predictions made to human data on DNA adduct formation for the related alkenylbenzene methyleugenol. Adequate model predictions could be made, with the predicted DNA adduct levels at the estimated daily intake of estragole of 0.01 mg/kg bw ranging between 1.6 and 8.8 adducts in 10(8) nucleotides (nts) (50th and 99th percentiles, respectively). This is somewhat lower than values reported in the literature for the related alkenylbenzene methyleugenol in surgical human liver samples. The predicted levels seem to be below DNA adduct levels that are linked with tumor formation by alkenylbenzenes in rodents, which were estimated to amount to 188-500 adducts per 10(8) nts at the BMD10 values of estragole and methyleugenol. Although this does not seem to point to a significant health concern for human dietary exposure, drawing firm conclusions may have to await further validation of the model's predictions.
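Step (iii), Monte Carlo propagation of interindividual variability into a population distribution of DNA adduct levels, can be sketched with a drastically simplified dose-to-adduct relation. The lognormal spreads and the proportionality constant are illustrative assumptions, not the published PBK parameters, so the percentiles below are not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(5)

n = 100_000
dose = 0.01  # mg/kg bw per day estragole (estimated daily intake)

# Per-individual multipliers for bioactivation (sulfotransferase) and
# competing detoxification, sampled around a population median of 1
activation = rng.lognormal(0.0, 0.4, n)
detox = rng.lognormal(0.0, 0.3, n)

# Adducts per 10^8 nucleotides, assumed proportional to dose and to the
# activation/detoxification ratio (illustrative coefficient)
adducts = 4.0 * (dose / 0.01) * activation / detox

p50, p99 = np.percentile(adducts, [50, 99])
```

Reporting the 50th and 99th percentiles of the simulated population mirrors how the study summarizes sensitive versus typical individuals.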
Monte Carlo Hamiltonian: Linear Potentials
Institute of Scientific and Technical Information of China (English)
LUO Xiang-Qian; LIU Jin-Jiang; HUANG Chun-Qing; JIANG Jun-Qin; Helmut KROGER
2002-01-01
We further study the validity of the Monte Carlo Hamiltonian method. The advantage of the method, in comparison with the standard Monte Carlo Lagrangian approach, is its capability to study the excited states. We consider two quantum mechanical models: a symmetric one, V(x) = |x|/2, and an asymmetric one, V(x) = ∞ for x < 0 and V(x) = x for x ≥ 0. The results for the spectrum, wave functions and thermodynamical observables are in agreement with the analytical or Runge-Kutta calculations.
Depleted uranium hexafluoride: The source material for advanced shielding systems
Energy Technology Data Exchange (ETDEWEB)
Quapp, W.J.; Lessing, P.A. [Idaho National Engineering Lab., Idaho Falls, ID (United States); Cooley, C.R. [Department of Technology, Germantown, MD (United States)
1997-02-01
The U.S. Department of Energy (DOE) has a management challenge and financial liability problem in the form of 50,000 cylinders containing 555,000 metric tons of depleted uranium hexafluoride (UF{sub 6}) that are stored at the gaseous diffusion plants. DOE is evaluating several options for the disposition of this UF{sub 6}, including continued storage, disposal, and recycle into a product. Based on studies conducted to date, the most feasible recycle option for the depleted uranium is shielding in low-level waste, spent nuclear fuel, or vitrified high-level waste containers. Estimates for the cost of disposal, using existing technologies, range between $3.8 and $11.3 billion depending on factors such as the disposal site and the applicability of the Resource Conservation and Recovery Act (RCRA). Advanced technologies can reduce these costs, but UF{sub 6} disposal still represents large future costs. This paper describes an application for depleted uranium in which depleted uranium hexafluoride is converted into an oxide and then into a heavy aggregate. The heavy uranium aggregate is combined with conventional concrete materials to form an ultra high density concrete, DUCRETE, weighing more than 400 lb/ft{sup 3}. DUCRETE can be used as shielding in spent nuclear fuel/high-level waste casks at a cost comparable to the lower of the disposal cost estimates. Consequently, the case can be made that DUCRETE shielded casks are an alternative to disposal. In this case, a beneficial long term solution is attained for much less than the combined cost of independently providing shielded casks and disposing of the depleted uranium. Furthermore, if disposal is avoided, the political problems associated with selection of a disposal location are also avoided. Other studies have also shown cost benefits for low level waste shielded disposal containers.
Institute of Scientific and Technical Information of China (English)
杨波; 孔旭东; 魏贤顶; 孟东; 陈建江
2014-01-01
Objective: To explore the value of the Monte Carlo algorithm in the dosimetric optimization of an intraoperative radiotherapy (IORT) model for breast cancer. Methods: The IORT model for breast cancer was established with the MCBEAM program of MCTP. Dose calculation for the intraoperative image simulated from preoperative CT was then performed with the MCSIM program to analyze the dosimetric characteristics and optimize the target dose. Results: Based on the Monte Carlo calculations, the optimized IORT scheme for breast cancer was as follows: add 2-3 mm of equivalent material on the target surface, and add 5 mm of equivalent material plus a 2 mm lead plate behind the target. With this configuration, the 90% isodose line surrounds the entire target region, hot spots above 110% are eliminated, and the maximum lung dose is less than 1 Gy. Conclusion: Applying the Monte Carlo algorithm to the dose optimization of the IORT model for breast cancer can significantly improve the calculation precision of the target dose and optimize the dose distribution, and is worth promoting in clinical practice.
Fully depleted back-illuminated p-channel CCD development
Energy Technology Data Exchange (ETDEWEB)
Bebek, Chris J.; Bercovitz, John H.; Groom, Donald E.; Holland, Stephen E.; Kadel, Richard W.; Karcher, Armin; Kolbe, William F.; Oluseyi, Hakeem M.; Palaio, Nicholas P.; Prasad, Val; Turko, Bojan T.; Wang, Guobin
2003-07-08
An overview of CCD development efforts at Lawrence Berkeley National Laboratory is presented. Operation of fully depleted, back-illuminated CCDs fabricated on high-resistivity silicon is described, along with results on the use of such CCDs at ground-based observatories. Radiation damage and point-spread-function measurements are described, as well as a discussion of CCD fabrication technologies.
Banerjee, Debapriya; Yang, Jian; Schweizer, Kenneth S
2015-12-21
We employ a hybrid Monte Carlo plus integral equation theory approach to study how dense fluids of small nanoparticles or polymer chains mediate entropic depletion interactions between topographically rough particles where all interaction potentials are hard core repulsion. The corrugated particle surfaces are composed of densely packed beads which present variable degrees of controlled topographic roughness and free volume associated with their geometric crevices. This pure entropy problem is characterized by competing ideal translational and (favorable and unfavorable) excess entropic contributions. Surface roughness generically reduces particle depletion aggregation relative to the smooth hard sphere case. However, the competition between ideal and excess packing entropy effects in the bulk, near the particle surface and in the crevices, results in a non-monotonic variation of the particle-monomer packing correlation function as a function of the two dimensionless length scale ratios that quantify the effective surface roughness. As a result, the inter-particle potential of mean force (PMF), second virial coefficient, and spinodal miscibility volume fraction vary non-monotonically with the surface bead to monomer diameter and particle core to surface bead diameter ratios. A miscibility window is predicted corresponding to an optimum degree of surface roughness that completely destroys depletion attraction resulting in a repulsive PMF. Variation of the (dense) matrix packing fraction can enhance or suppress particle miscibility depending upon the amount of surface roughness. Connecting the monomers into polymer chains destabilizes the system via