Energy Technology Data Exchange (ETDEWEB)
Motalab, Mohammad Abdul; Kim, Woosong; Kim, Yonghee, E-mail: yongheekim@kaist.ac.kr
2015-12-15
Highlights: • The PCR of the CANDU6 reactor is slightly negative at low power, e.g. <80% P. • Doppler broadening of scattering resonances noticeably improves the FTC and makes the PCR more negative or less positive in CANDU6. • An elevated inlet coolant condition can significantly worsen the PCR of CANDU6. • Improved design tools are needed for the safety evaluation of the CANDU6 reactor. - Abstract: The power coefficient of reactivity (PCR) is a very important parameter for the inherent safety and stability of nuclear reactors. The combined effect of a relatively less negative fuel temperature coefficient and a positive coolant temperature coefficient makes the CANDU6 (CANada Deuterium Uranium) PCR very close to zero. In the original CANDU6 design, the PCR was calculated to be clearly negative. However, the latest physics design tools predict that the PCR is slightly positive over a wide operational range of reactor power. It is upon this contradictory observation that the CANDU6 PCR is re-evaluated in this work. In our previous study, the CANDU6 PCR was evaluated through a standard lattice analysis at mid-burnup and was found to be negative at low power. In this paper, the study is extended to a detailed 3-D CANDU6 whole-core model using the Monte Carlo code Serpent2. The Doppler broadening rejection correction (DBRC) method was implemented in Serpent2 in order to take into account the thermal motion of the heavy uranium nucleus in neutron-U scattering reactions. A time-average equilibrium core was considered for the evaluation of the representative PCR of CANDU6. Two thermal-hydraulic models were considered in this work: one at the design condition and the other at the operating condition. Bundle-wise distributions of the coolant properties are modeled, and the bundle-wise fuel temperature is also considered in this study. The evaluated nuclear data library ENDF/B-VII.0 was used throughout this Serpent2 evaluation. In these Monte Carlo calculations, a large number
Geometrical and Monte Carlo projectors in 3D PET reconstruction
Aguiar, Pablo; Rafecas López, Magdalena; Ortuno, Juan Enrique; Kontaxakis, George; Santos, Andrés; Pavía, Javier; Ros, Domènec
2010-01-01
Purpose: In the present work, the authors compare geometrical and Monte Carlo projectors in detail. The geometrical projectors considered were the conventional Siddon ray-tracer (S-RT) and the orthogonal distance-based ray-tracer (OD-RT), based on computing the orthogonal distance from the center of the image voxel to the line of response. A comparison of these geometrical projectors was performed using different point spread function (PSF) models. The Monte Carlo-based method under c...
A highly heterogeneous 3D PWR core benchmark: deterministic and Monte Carlo method comparison
Jaboulay, J.-C.; Damian, F.; Douce, S.; Lopez, F.; Guenaut, C.; Aggery, A.; Poinot-Salanon, C.
2014-06-01
Physical analyses of the LWR potential performances with regard to fuel utilization require that an important part of the work be dedicated to the validation of the deterministic models used for these analyses. Advances in both codes and computer technology give the opportunity to perform the validation of these models on complex 3D core configurations close to the physical situations encountered (both steady-state and transient configurations). In this paper, we used the Monte Carlo transport code TRIPOLI-4® to describe a whole 3D large-scale and highly heterogeneous LWR core. The aim of this study is to validate the deterministic CRONOS2 code against the Monte Carlo code TRIPOLI-4® in a relevant PWR core configuration. As a consequence, a 3D pin-by-pin model with a consistent number of volumes (4.3 million) and media (around 23,000) is established to precisely characterize the core at the equilibrium cycle, namely using refined burn-up and moderator density maps. The configuration selected for this analysis is a very heterogeneous PWR high-conversion core with fissile (MOX fuel) and fertile zones (depleted uranium). Furthermore, a tight-pitch lattice is selected (to increase conversion of 238U into 239Pu), which leads to a harder neutron spectrum compared to a standard PWR assembly. In these conditions, two main subjects will be discussed: the Monte Carlo variance calculation and the assessment of the diffusion operator with two energy groups for the core calculation.
3D Monte Carlo radiation transfer modelling of photodynamic therapy
Campbell, C. Louise; Christison, Craig; Brown, C. Tom A.; Wood, Kenneth; Valentine, Ronan M.; Moseley, Harry
2015-06-01
The effects of ageing and skin type on Photodynamic Therapy (PDT) for different treatment methods have been theoretically investigated. A multilayered Monte Carlo Radiation Transfer model is presented in which both daylight-activated PDT and conventional PDT are compared. It was found that light penetrates deeper through older skin with a lighter complexion, which translates into a deeper effective treatment depth. The effect of ageing was found to be larger for darker skin types. The investigation further supports the use of daylight as a potential light source for PDT, with which effective treatment depths of about 2 mm can be achieved.
Fang, Qianqian; Boas, David A
2009-10-26
We report a parallel Monte Carlo algorithm accelerated by graphics processing units (GPU) for modeling time-resolved photon migration in arbitrary 3D turbid media. By taking advantage of the massively parallel threads and low-memory latency, this algorithm allows many photons to be simulated simultaneously in a GPU. To further improve the computational efficiency, we explored two parallel random number generators (RNG), including a floating-point-only RNG based on a chaotic lattice. An efficient scheme for boundary reflection was implemented, along with the functions for time-resolved imaging. For a homogeneous semi-infinite medium, good agreement was observed between the simulation output and the analytical solution from the diffusion theory. The code was implemented with CUDA programming language, and benchmarked under various parameters, such as thread number, selection of RNG and memory access pattern. With a low-cost graphics card, this algorithm has demonstrated an acceleration ratio above 300 when using 1792 parallel threads over conventional CPU computation. The acceleration ratio drops to 75 when using atomic operations. These results render the GPU-based Monte Carlo simulation a practical solution for data analysis in a wide range of diffuse optical imaging applications, such as human brain or small-animal imaging.
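The record above describes a massively parallel GPU implementation; the physics each thread runs is an ordinary serial photon random walk. The sketch below is a minimal single-threaded Python version (not the authors' CUDA code; the optical coefficients and the Henyey-Greenstein phase function are standard textbook choices, not values taken from the paper) showing the loop that the GPU replicates across thousands of threads:

```python
import math
import random

def photon_migration(n_photons=2000, mu_a=0.1, mu_s=10.0, g=0.9, seed=1):
    """Toy Monte Carlo photon migration in a semi-infinite turbid medium.

    Photons enter at the surface heading into the medium (+z). Each step
    samples an exponential free path, reduces the packet weight by the
    single-scattering albedo, and scatters with Henyey-Greenstein
    anisotropy g. Weight escaping through z < 0 is tallied as diffuse
    reflectance.
    """
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    reflectance = 0.0
    for _ in range(n_photons):
        x = y = z = 0.0
        ux, uy, uz = 0.0, 0.0, 1.0
        w = 1.0
        while w > 1e-4:
            s = -math.log(rng.random()) / mu_t       # free path length
            x, y, z = x + s * ux, y + s * uy, z + s * uz
            if z < 0.0:                              # escaped: tally and stop
                reflectance += w
                break
            w *= mu_s / mu_t                         # absorption weighting
            # Henyey-Greenstein sampling of the deflection cosine
            if g != 0.0:
                tmp = (1 - g * g) / (1 - g + 2 * g * rng.random())
                cos_t = (1 + g * g - tmp * tmp) / (2 * g)
            else:
                cos_t = 2 * rng.random() - 1
            sin_t = math.sqrt(max(0.0, 1 - cos_t * cos_t))
            phi = 2 * math.pi * rng.random()
            # rotate the direction vector (standard MCML-style update)
            if abs(uz) > 0.99999:
                ux = sin_t * math.cos(phi)
                uy = sin_t * math.sin(phi)
                uz = math.copysign(cos_t, uz)
            else:
                den = math.sqrt(1 - uz * uz)
                ux, uy, uz = (
                    sin_t * (ux * uz * math.cos(phi) - uy * math.sin(phi)) / den + ux * cos_t,
                    sin_t * (uy * uz * math.cos(phi) + ux * math.sin(phi)) / den + uy * cos_t,
                    -sin_t * math.cos(phi) * den + uz * cos_t,
                )
    return reflectance / n_photons
```

Since each photon history is independent, mapping one history per GPU thread is what yields the ~300x speedup reported above; the main complication on the GPU is the shared random number generator state and atomic tallies.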
Vectorized Monte Carlo methods for reactor lattice analysis
Brown, F. B.
1984-01-01
Some of the new computational methods and equivalent mathematical representations of physics models used in the MCV code, a vectorized continuous-energy Monte Carlo code for use on the CYBER-205 computer, are discussed. While the principal application of MCV is the neutronics analysis of repeating reactor lattices, the new methods used in MCV should be generally useful for vectorizing Monte Carlo for other applications. For background, a brief overview of the vector processing features of the CYBER-205 is included, followed by a discussion of the fundamentals of Monte Carlo vectorization. The physics models used in the MCV vectorized Monte Carlo code are then summarized. The new methods used in scattering analysis are presented along with details of several key, highly specialized computational routines. Finally, speedups relative to CDC-7600 scalar Monte Carlo are discussed.
Bayesian phylogeny analysis via stochastic approximation Monte Carlo.
Cheon, Sooyoung; Liang, Faming
2009-11-01
Monte Carlo methods have received much attention in the recent literature of phylogeny analysis. However, the conventional Markov chain Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, tend to get trapped in a local mode in simulating from the posterior distribution of phylogenetic trees, rendering the inference ineffective. In this paper, we apply an advanced Monte Carlo algorithm, the stochastic approximation Monte Carlo algorithm, to Bayesian phylogeny analysis. Our method is compared with two popular Bayesian phylogeny software, BAMBE and MrBayes, on simulated and real datasets. The numerical results indicate that our method outperforms BAMBE and MrBayes. Among the three methods, SAMC produces the consensus trees which have the highest similarity to the true trees, and the model parameter estimates which have the smallest mean square errors, but costs the least CPU time.
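The core of the SAMC algorithm above is its adaptive log-weight update, which discourages the chain from lingering in any one region of the energy landscape. The following toy sketch applies the same mechanics to a 1D double-well energy rather than phylogenetic tree space (the energy function, bin layout, and gain sequence are illustrative choices, not from the paper):

```python
import math
import random

def samc_1d(n_iter=20000, n_bins=10, seed=7):
    """Toy stochastic approximation Monte Carlo (SAMC) sampler.

    Target energy U(x) = (x^2 - 1)^2 on [-2, 2] has two modes; the
    adaptive log-weights theta penalize frequently visited energy bins,
    helping the chain cross the barrier between modes.
    """
    rng = random.Random(seed)
    U = lambda x: (x * x - 1.0) ** 2
    lo, hi = -2.0, 2.0

    def bin_of(x):
        e = min(U(x), 3.999)            # cap energy, bin uniformly on [0, 4)
        return int(e / 4.0 * n_bins)

    theta = [0.0] * n_bins              # adaptive log-weights
    visits = [0] * n_bins
    x = 0.5
    for t in range(1, n_iter + 1):
        y = min(hi, max(lo, x + rng.gauss(0.0, 0.5)))   # clamped proposal
        bx, by = bin_of(x), bin_of(y)
        # weighted Metropolis ratio: energy change plus weight difference
        log_r = (theta[bx] - theta[by]) + (U(x) - U(y))
        if math.log(rng.random()) < log_r:
            x, bx = y, by
        gain = 10.0 / max(10.0, t)      # decreasing gain sequence
        for i in range(n_bins):
            theta[i] += gain * ((1.0 if i == bx else 0.0) - 1.0 / n_bins)
        visits[bx] += 1
    return visits, theta
```

In the real application the state is a phylogenetic tree and the proposal is a tree rearrangement, but the weight-adaptation step is the same.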
Accelerated 3D Monte Carlo light dosimetry using a graphics processing unit (GPU) cluster
Lo, William Chun Yip; Lilge, Lothar
2010-11-01
This paper presents a basic computational framework for real-time, 3-D light dosimetry on graphics processing unit (GPU) clusters. The GPU-based approach offers a direct solution to overcome the long computation time preventing Monte Carlo simulations from being used in complex optimization problems such as treatment planning, particularly if simulated annealing is employed as the optimization algorithm. The current multi-GPU implementation is validated using commercial light modelling software (ASAP from Breault Research Organization). It also supports the latest Fermi GPU architecture and features an interactive 3-D visualization interface. The software is available for download at http://code.google.com/p/gpu3d.
Monte Carlo techniques for time-dependent radiative transfer in 3-D supernovae
Lucy, L B
2004-01-01
Monte Carlo techniques based on indivisible energy packets are described for computing light curves and spectra for 3-D supernovae. The radiative transfer is time-dependent and includes all effects of O(v/c). Monte Carlo quantization is achieved by discretizing the initial distribution of 56Ni into radioactive pellets. Each pellet decays with the emission of a single energy packet comprising gamma-ray photons representing one line from either the 56Ni or the 56Co decay spectrum. Subsequently, these energy packets propagate through the homologously-expanding ejecta with appropriate changes in the nature of their contained energy as they undergo Compton scatterings and pure absorptions. The 3-D code is tested by applying it to a spherically-symmetric SN in which the transfer of optical radiation is treated with a grey absorption coefficient. This 1-D problem is separately solved using Castor's co-moving frame moment equations. Satisfactory agreement is obtained. The Monte Carlo code is a platform onto which mor...
Implementation of 3D Lattice Monte Carlo Simulation on a Cluster of Symmetric Multiprocessors
Institute of Scientific and Technical Information of China (English)
Lei Yongmei; Jiang Ying; et al.
2002-01-01
This paper presents a new approach to parallelizing 3D lattice Monte Carlo algorithms used in the numerical simulation of polymers on ZiQiang 2000, a cluster of symmetric multiprocessors (SMPs). The combined load for cell and energy calculations over the time step is balanced together to form a single spatial decomposition. Basic aspects and strategies of running Monte Carlo calculations on parallel computers are studied. The different steps involved in porting the software to a parallel architecture based on ZiQiang 2000 running under Linux and MPI are described briefly. It is found that parallelization becomes more advantageous when either the lattice is very large or the model contains many cells and chains.
Adaptive Multi-GPU Exchange Monte Carlo for the 3D Random Field Ising Model
Navarro, C A; Deng, Youjin
2015-01-01
The study of disordered spin systems through Monte Carlo simulations has proven to be a hard task due to the adverse energy landscape present at the low temperature regime, making it difficult for the simulation to escape from a local minimum. Replica-based algorithms such as Exchange Monte Carlo (also known as parallel tempering) are effective at overcoming this problem, reaching equilibrium on disordered spin systems such as Spin Glass or Random Field models, by exchanging information between replicas at neighboring temperatures. In this work we present a multi-GPU Exchange Monte Carlo method designed for the simulation of the 3D Random Field Model. The implementation is based on a two-level parallelization scheme that allows the method to scale its performance in the presence of faster GPUs as well as multiple GPUs. In addition, we modified the original algorithm by adapting the set of temperatures according to the exchange rate observed from short trial runs, leading to an increased exchange rate...
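The replica-exchange idea described above can be shown on a much smaller problem than the 3D Random Field Model. The sketch below (a toy 1D rugged energy, illustrative temperatures, not the paper's GPU code) runs several Metropolis chains at different temperatures and attempts the standard neighbor swap:

```python
import math
import random

def exchange_mc(temps=(0.1, 0.3, 1.0, 3.0), n_sweeps=5000, seed=3):
    """Toy replica-exchange (parallel tempering) on the rugged 1D energy
    U(x) = (x^2 - 4)^2 + x, where a cold chain alone tends to stick in
    one well. Neighboring replicas swap states with the Metropolis
    exchange probability min(1, exp((1/T_j - 1/T_{j+1}) (U_j - U_{j+1}))).
    """
    rng = random.Random(seed)
    U = lambda x: (x * x - 4.0) ** 2 + x
    xs = [2.0 for _ in temps]               # all replicas start in one well
    swaps = attempts = 0
    for _ in range(n_sweeps):
        for i, T in enumerate(temps):       # one Metropolis move per replica
            y = xs[i] + rng.gauss(0.0, 0.8)
            if rng.random() < math.exp(min(0.0, (U(xs[i]) - U(y)) / T)):
                xs[i] = y
        j = rng.randrange(len(temps) - 1)   # attempt one neighbor swap
        attempts += 1
        d = (1.0 / temps[j] - 1.0 / temps[j + 1]) * (U(xs[j]) - U(xs[j + 1]))
        if rng.random() < math.exp(min(0.0, d)):
            xs[j], xs[j + 1] = xs[j + 1], xs[j]
            swaps += 1
    return swaps / attempts, xs
```

The adaptive temperature-set idea in the abstract corresponds to inserting extra temperatures wherever the measured swap acceptance rate between neighbors collapses.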
TART97 a coupled neutron-photon 3-D, combinatorial geometry Monte Carlo transport code
Energy Technology Data Exchange (ETDEWEB)
Cullen, D.E.
1997-11-22
TART97 is a coupled neutron-photon, 3-dimensional, combinatorial geometry, time-dependent Monte Carlo transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART97 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART97 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART97 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART97 and its data files.
The impact of Monte Carlo simulation: a scientometric analysis of scholarly literature
Pia, Maria Grazia; Bell, Zane W; Dressendorfer, Paul V
2010-01-01
A scientometric analysis of Monte Carlo simulation and Monte Carlo codes has been performed over a set of representative scholarly journals related to radiation physics. The results of this study are reported and discussed. They document and quantitatively appraise the role of Monte Carlo methods and codes in scientific research and engineering applications.
IMPROVEMENT OF 3D MONTE CARLO LOCALIZATION USING A DEPTH CAMERA AND TERRESTRIAL LASER SCANNER
Directory of Open Access Journals (Sweden)
S. Kanai
2015-05-01
An effective and accurate localization method in three-dimensional indoor environments is a key requirement for indoor navigation and lifelong robotic assistance. So far, Monte Carlo Localization (MCL) has provided one of the most promising solutions for indoor localization. Previous work on MCL has mostly been limited to 2D motion estimation in a planar map, and only a few 3D MCL approaches have recently been proposed. However, their localization accuracy and efficiency either remain at an unsatisfactory level (a few hundred millimetres of error at up to a few FPS) or have not been fully verified against precise ground truth. Therefore, the purpose of this study is to improve the accuracy and efficiency of 6DOF motion estimation in 3D MCL for indoor localization. Firstly, a terrestrial laser scanner is used to create a precise 3D mesh model as the environment map, and a professional-level depth camera is installed as the outer sensor. GPU scene simulation is also introduced to speed up the prediction phase of MCL. Moreover, for further improvement, GPGPU programming is implemented to further accelerate the likelihood estimation phase, and anisotropic particle propagation is introduced into MCL based on observations from an inertia sensor. The improvements in localization accuracy and efficiency are verified by comparison with a previous MCL method. As a result, it was confirmed that the GPGPU-based algorithm was effective in increasing the computational efficiency to 10-50 FPS when the number of particles remains below a few hundred. On the other hand, the inertia sensor-based algorithm reduced the localization error to a median of 47 mm even with fewer particles. The results show that our proposed 3D MCL method outperforms the previous one in accuracy and efficiency.
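MCL is a particle filter over the robot pose: predict particles with noisy odometry, weight them by measurement likelihood against the map, then resample. A minimal 1D sketch (single landmark range sensor, known initial position, made-up noise levels; the real system above does this in 6DOF against a 3D mesh) illustrates the cycle that the GPU and inertia-sensor improvements accelerate:

```python
import math
import random

def mcl_1d(true_path, landmark=10.0, n_particles=500, seed=11):
    """Minimal 1D Monte Carlo localization (particle filter).

    Particles are propagated with noisy odometry, weighted by a noisy
    range measurement to a single landmark, and systematically
    resampled. Returns the per-step position estimates.
    """
    rng = random.Random(seed)
    # assume the initial position is roughly known
    particles = [true_path[0] + rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    est = []
    prev = true_path[0]
    for pos in true_path:
        u = pos - prev                                    # odometry input
        prev = pos
        particles = [p + u + rng.gauss(0.0, 0.2) for p in particles]
        z = abs(landmark - pos) + rng.gauss(0.0, 0.1)     # noisy range
        w = [math.exp(-0.5 * ((abs(landmark - p) - z) / 0.1) ** 2) + 1e-12
             for p in particles]
        # systematic resampling
        total = sum(w)
        step = total / n_particles
        r = rng.uniform(0.0, step)
        c, i, new = w[0], 0, []
        for m in range(n_particles):
            while r + m * step > c:
                i += 1
                c += w[i]
            new.append(particles[i])
        particles = new
        est.append(sum(particles) / n_particles)
    return est
```

The likelihood evaluation (here one `exp` per particle) is the expensive step in 3D, since each particle requires rendering the expected depth image against the mesh map; that is exactly what the GPGPU stage in the abstract parallelizes.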
Monte Carlo study of a 3D Compton imaging device with GEANT4
Energy Technology Data Exchange (ETDEWEB)
Lenti, M., E-mail: lenti@fi.infn.it [Sezione dell'INFN di Firenze, via G. Sansone 1, I-50019 Sesto F. (Italy); Veltri, M., E-mail: michele.veltri@uniurb.it [Sezione dell'INFN di Firenze, via G. Sansone 1, I-50019 Sesto F. (Italy); Dipartimento di Matematica, Fisica e Informatica, Università di Urbino, via S. Chiara 27, I-61029 Urbino (Italy)]
2011-10-21
In this paper we investigate, with a detailed Monte Carlo simulation based on Geant4, the novel approach of Lenti (2008) to 3D imaging with photon scattering. A monochromatic and well collimated gamma beam is used to illuminate the object to be imaged and the photons Compton scattered are detected by means of a surrounding germanium strip detector. The impact position and the energy of the photons are measured with high precision and the scattering position along the beam axis is calculated. We study as an application of this technique the case of brain imaging but the results can be applied as well to situations where a lighter object, with localized variations of density, is embedded in a denser container. We report here the attainable sensitivity in the detection of density variations as a function of the beam energy, the depth inside the object and size and density of the inclusions. Using a 600 keV gamma beam, for an inclusion with a density increase of 30% with respect to the surrounding tissue and thickness along the beam of 5 mm, we obtain at midbrain position a resolution of about 2 mm and a contrast of 12%. In addition the simulation indicates that for the same gamma beam energy a complete brain scan would result in an effective dose of about 1 mSv.
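The reconstruction step described above (computing the scattering position along the beam axis from the measured photon) follows directly from Compton kinematics: the measured scattered energy fixes the scattering angle, and the line from the detector hit back along that angle intersects the beam axis. A small sketch under a simplified 2D geometry (beam along z, cylindrical detector at radius `r_det_mm`; parameter names are illustrative):

```python
import math

M_E_C2 = 511.0  # electron rest energy, keV


def scatter_z(e0_kev, e_meas_kev, r_det_mm, z_hit_mm):
    """Locate the Compton-scatter position along the beam axis.

    The Compton formula E' = E0 / (1 + (E0/mc^2)(1 - cos(theta)))
    gives cos(theta) from the measured scattered energy; the scatter
    point is where the back-projected ray from the detector hit meets
    the beam axis.
    """
    cos_theta = 1.0 - M_E_C2 * (1.0 / e_meas_kev - 1.0 / e0_kev)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("energies inconsistent with Compton kinematics")
    theta = math.acos(cos_theta)
    return z_hit_mm - r_det_mm / math.tan(theta)
```

For a 600 keV beam, a photon measured at 276 keV has scattered by about 90 degrees, so the scatter point lies essentially at the same z as the detector hit; this energy-angle coupling is why the germanium detector's energy resolution sets the ~2 mm axial resolution quoted above.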
Monte Carlo studies of 3d N=6 SCFT via localization method
Honda, Masazumi; Honma, Yoshinori; Nishimura, Jun; Shiba, Shotaro; Yoshida, Yutaka
2012-01-01
We perform a Monte Carlo study of the 3d N=6 superconformal U(N)×U(N) Chern-Simons gauge theory (ABJM theory), which is conjectured to be dual to M-theory or type IIA superstring theory on certain AdS backgrounds. Our approach is based on a localization method, which reduces the problem to the simulation of a simple matrix model. This enables us to circumvent the difficulties in the original theory such as the sign problem and the SUSY breaking on a lattice. The new approach opens up the possibility of probing the quantum aspects of M-theory and testing the AdS_4/CFT_3 duality at the quantum level. Here we calculate the free energy, and confirm the N^{3/2} scaling in the M-theory limit predicted from the gravity side. We also find that our results nicely interpolate between the analytical formulae proposed previously in the M-theory and type IIA regimes.
Adaptive multi-GPU Exchange Monte Carlo for the 3D Random Field Ising Model
Navarro, Cristóbal A.; Huang, Wei; Deng, Youjin
2016-08-01
This work presents an adaptive multi-GPU Exchange Monte Carlo approach for the simulation of the 3D Random Field Ising Model (RFIM). The design is based on a two-level parallelization. The first level, spin-level parallelism, maps the parallel computation as optimal 3D thread-blocks that simulate blocks of spins in shared memory with minimal halo surface, assuming a constant block volume. The second level, replica-level parallelism, uses multi-GPU computation to handle the simulation of an ensemble of replicas. CUDA's concurrent kernel execution feature is used in order to fill the occupancy of each GPU with many replicas, providing a performance boost that is more pronounced at the smallest values of L. In addition to the two-level parallel design, the work proposes an adaptive multi-GPU approach that dynamically builds a proper temperature set free of exchange bottlenecks. The strategy is based on mid-point insertions at the temperature gaps where the exchange rate is most compromised. The extra work generated by the insertions is balanced across the GPUs independently of where the mid-point insertions were performed. Performance results show that spin-level performance is approximately two orders of magnitude faster than a single-core CPU version and one order of magnitude faster than a parallel multi-core CPU version running on 16 cores. Multi-GPU performance is highly convenient under a weak scaling setting, reaching up to 99% efficiency as long as the number of GPUs and L increase together. The combination of the adaptive approach with the parallel multi-GPU design has extended our possibilities of simulation to sizes of L = 32, 64 for a workstation with two GPUs. Sizes beyond L = 64 can eventually be studied using larger multi-GPU systems.
Eigenvalue analysis using a full-core Monte Carlo method
Energy Technology Data Exchange (ETDEWEB)
Okafor, K.C.; Zino, J.F. (Westinghouse Savannah River Co., Aiken, SC (United States))
1992-01-01
The reactor physics codes used at the Savannah River Site (SRS) to predict reactor behavior have been continually benchmarked against experimental and operational data. A particular benchmark variable is the observed initial critical control rod position. Historically, there has been some difficulty predicting this position because of the difficulties inherent in using computer codes to model experimental or operational data. The Monte Carlo method is applied in this paper to study the initial critical control rod positions for the SRS K Reactor. A three-dimensional, full-core MCNP model of the reactor was developed for this analysis.
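Monte Carlo eigenvalue calculations like the MCNP analysis above rest on power iteration over fission generations: fission sites produced in one generation seed the next, and k-eff is the generation-to-generation production ratio. A toy sketch (1D homogeneous slab, one energy group, isotropic scattering, made-up cross sections; nothing here models the SRS K Reactor) shows the scheme:

```python
import math
import random

def k_eigenvalue(width=10.0, sigma_t=1.0, sigma_s=0.3, sigma_f=0.4,
                 nu=2.5, n_per_gen=2000, n_gen=30, n_skip=10, seed=5):
    """Toy Monte Carlo power iteration for k-eff in a 1D slab with
    vacuum boundaries. Each generation transports the fission bank,
    records new fission sites, and renormalizes the bank size."""
    rng = random.Random(seed)
    sites = [rng.uniform(0.0, width) for _ in range(n_per_gen)]
    k_tally = []
    for _ in range(n_gen):
        new_sites = []
        for x in sites:
            mu = 2.0 * rng.random() - 1.0          # isotropic direction
            while True:
                x += mu * (-math.log(rng.random()) / sigma_t)
                if x < 0.0 or x > width:
                    break                          # leaked out
                xi = rng.random() * sigma_t
                if xi < sigma_s:
                    mu = 2.0 * rng.random() - 1.0  # scattered
                elif xi < sigma_s + sigma_f:
                    n = int(nu + rng.random())     # integer fission yield
                    new_sites.extend([x] * n)
                    break
                else:
                    break                          # captured
        k_tally.append(len(new_sites) / len(sites))
        # renormalize the fission bank to a constant population
        sites = [new_sites[rng.randrange(len(new_sites))]
                 for _ in range(n_per_gen)]
    active = k_tally[n_skip:]                      # discard inactive cycles
    return sum(active) / len(active)
```

The skipped initial cycles let the fission source converge before tallying, the same source-convergence concern that makes benchmark problems like initial critical rod position delicate in full-core models.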
Analysis of Positioning Accuracy in 3D Printer Based on Monte Carlo Method
Institute of Scientific and Technical Information of China (English)
Yuan Maoqiang; Wang Yongqiang; Wang Li; Zhao Weigang; Guo Lijie
2016-01-01
In order to study the positioning accuracy of the header of a three-dimensional (3D) printer with respect to the hot bed under the influence of all errors, a Monte Carlo method was used to evaluate the positioning accuracy. The topological construction model of the 3D printer was built according to multi-body system theory. Based on instrument precision theory, an integrated error propagation model of the 3D printer was built to analyze the combined effect of the various error sources, and the error vectors introduced in the x, y and z directions during stepper motor motion were analyzed. The positioning accuracy of the header with respect to the hot bed was calculated by Monte Carlo simulation. The results show that, at a 95% confidence level, the position outputs and expanded uncertainties of the printer head with respect to the hot bed in the x, y and z directions are x = (70 ± 0.099) mm, y = (50 ± 0.100) mm, z = (20 ± 0.518) mm. This method can be used to analyze the main error sources influencing the positioning accuracy and to compensate for them so as to improve the positioning accuracy of the 3D printer. The method can also be used in precision machine design to specify subsystem precision requirements and achieve an overall balance among the accuracy indices of a multi-body rigid structure.
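Monte Carlo tolerance analysis of this kind amounts to sampling each error source, summing them through the error propagation model, and reading the 95% expanded uncertainty off the simulated distribution. A sketch in that spirit follows; the per-axis error magnitudes are made-up placeholders for illustration, not the paper's values, and the propagation here is a simple sum rather than the paper's multi-body model:

```python
import random

def positioning_uncertainty(n_trials=20000, seed=2):
    """Illustrative Monte Carlo tolerance analysis for a 3D printer
    head: nominal position plus several independent Gaussian error
    sources per axis; returns (nominal, 95% expanded uncertainty)."""
    rng = random.Random(seed)
    nominal = {"x": 70.0, "y": 50.0, "z": 20.0}   # mm
    # hypothetical per-axis error source sigmas (mm): step resolution,
    # belt backlash / leadscrew error, frame straightness
    sources = {"x": (0.0125, 0.03, 0.02),
               "y": (0.0125, 0.03, 0.02),
               "z": (0.0020, 0.15, 0.05)}
    result = {}
    for axis, nom in nominal.items():
        samples = sorted(
            nom + sum(rng.gauss(0.0, s) for s in sources[axis])
            for _ in range(n_trials)
        )
        lo = samples[int(0.025 * n_trials)]       # 2.5th percentile
        hi = samples[int(0.975 * n_trials)]       # 97.5th percentile
        result[axis] = (nom, (hi - lo) / 2.0)     # half-width ≈ U95
    return result
```

Resampling with one source zeroed out at a time identifies the dominant contributor per axis, which is the compensation guidance the abstract describes.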
Stratified source-sampling techniques for Monte Carlo eigenvalue analysis.
Energy Technology Data Exchange (ETDEWEB)
Mohamed, A.
1998-07-10
In 1995, at a conference on criticality safety, a special session was devoted to the Monte Carlo "Eigenvalue of the World" problem. Argonne presented a paper at that session in which the anomalies originally observed in that problem were reproduced in a much simplified model-problem configuration, and removed by a version of stratified source-sampling. In this paper, stratified source-sampling techniques are generalized and applied to three different Eigenvalue of the World configurations which take into account real-world statistical noise sources not included in the model problem, but which differ in the amount of neutronic coupling among the constituents of each configuration. It is concluded that, in Monte Carlo eigenvalue analysis of loosely coupled arrays, the use of stratified source-sampling reduces the probability of encountering an anomalous result compared to conventional source-sampling methods. However, this gain in reliability is substantially less than that observed in the model-problem results.
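The variance-reduction principle behind stratified source-sampling can be shown on a scalar integral: instead of drawing all samples from one distribution, the domain is split into equal strata that each receive their share of samples, eliminating the between-strata component of the variance. A minimal sketch (a smooth toy integrand, not a fission-source problem):

```python
import math
import random
import statistics

def estimate(f, n, rng, strata=1):
    """One Monte Carlo estimate of the integral of f over [0, 1] with n
    samples spread evenly over `strata` equal sub-intervals
    (strata=1 is plain, unstratified sampling)."""
    per = n // strata
    total = 0.0
    for s in range(strata):
        a = s / strata
        total += sum(f(a + rng.random() / strata) for _ in range(per)) / per
    return total / strata

def compare(n=256, reps=400, seed=9):
    """Empirical spread of plain vs stratified estimators of ∫ e^x dx."""
    rng = random.Random(seed)
    f = math.exp                      # true value is e - 1
    plain = [estimate(f, n, rng, strata=1) for _ in range(reps)]
    strat = [estimate(f, n, rng, strata=16) for _ in range(reps)]
    return statistics.pstdev(plain), statistics.pstdev(strat)
```

In the eigenvalue setting the "strata" are the constituents of the configuration, and forcing each to receive its expected share of source neutrons plays the role of the even per-stratum allocation here.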
DEFF Research Database (Denmark)
Caroli, E.; De Cesare, G.; Curado da Silva, R. M.
2015-01-01
The detector, with 3D spatial resolution, is based on a CZT spectrometer in a highly segmented configuration designed to operate simultaneously as a high-performance scattering polarimeter. Herein, we report the results of a Monte Carlo study devoted to optimizing the configuration of the detector for polarimetry, with particular focus on event selection filters able to increase the polarimetric performance. This preliminary analysis shows that a procedure to optimize the polarization response of a 3D spectrometer should first of all determine the best tradeoff between the statistical significance and the quality...
Criticality accident detector coverage analysis using the Monte Carlo Method
Energy Technology Data Exchange (ETDEWEB)
Zino, J.F.; Okafor, K.C.
1993-12-31
As a result of the need for a more accurate computational methodology, the Los Alamos-developed Monte Carlo code MCNP is used to show the implementation of a more advanced and accurate methodology in criticality accident detector analysis. This paper will detail the application of MCNP for the analysis of the areas of coverage of a criticality accident alarm detector located inside a concrete storage vault at the Savannah River Site. The paper will discuss: (1) the generation of fixed-source representations of various criticality fission sources (for spherical geometries); (2) the normalization of these sources to the "minimum criticality of concern" as defined by ANS 8.3; (3) the optimization process used to determine which source produces the lowest total detector response for a given set of conditions; and (4) the use of this minimum source for the analysis of the areas of coverage of the criticality accident alarm detector.
Critical Exponents of the Classical 3D Heisenberg Model A Single-Cluster Monte Carlo Study
Holm, Christian; Janke, Wolfhard
1993-01-01
We have simulated the three-dimensional Heisenberg model on simple cubic lattices, using the single-cluster Monte Carlo update algorithm. The expected pronounced reduction of critical slowing down at the phase transition is verified. This allows simulations on significantly larger lattices than in previous studies and consequently a better control over systematic errors. In one set of simulations we employ the usual finite-size scaling methods to compute the critical exponents...
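The single-cluster update beats critical slowing down by flipping whole correlated regions at once rather than single spins. A compact sketch for the 2D Ising model as a simpler stand-in for the Heisenberg case (where the cluster is grown with respect to a random reflection direction, but the structure is the same):

```python
import math
import random

def wolff_ising(L=16, beta=0.5, n_updates=200, seed=4):
    """Single-cluster (Wolff) updates for a 2D Ising model on an L x L
    periodic lattice: grow a cluster from a random seed spin, adding
    aligned neighbors with probability 1 - exp(-2*beta), then flip the
    whole cluster. Returns |magnetization| per site at the end."""
    rng = random.Random(seed)
    p_add = 1.0 - math.exp(-2.0 * beta)
    spin = [[1] * L for _ in range(L)]
    for _ in range(n_updates):
        i, j = rng.randrange(L), rng.randrange(L)
        s0 = spin[i][j]
        stack, cluster = [(i, j)], {(i, j)}
        while stack:
            a, b = stack.pop()
            for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                na, nb = (a + da) % L, (b + db) % L
                if (na, nb) not in cluster and spin[na][nb] == s0 \
                        and rng.random() < p_add:
                    cluster.add((na, nb))
                    stack.append((na, nb))
        for a, b in cluster:
            spin[a][b] = -s0            # flip the whole cluster
    m = abs(sum(sum(row) for row in spin)) / (L * L)
    return m
```

Near the critical point the clusters span a finite fraction of the lattice, which is why the dynamic exponent, and hence critical slowing down, is drastically reduced compared with single-spin Metropolis updates.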
Efficient 3D Kinetic Monte Carlo Method for Modeling of Molecular Structure and Dynamics
DEFF Research Database (Denmark)
Panshenskov, Mikhail; Solov'yov, Ilia; Solov'yov, Andrey V.
2014-01-01
Self-assembly of molecular systems is an important and general problem that intertwines physics, chemistry, biology, and material sciences. Through an understanding of the physical principles of self-organization, it often becomes feasible to control the process and to obtain complex structures with... the kinetic Monte Carlo approach in a three-dimensional space. We describe the computational side of the developed code, discuss its efficiency, and apply it to the study of an exemplary system.
Applicability of 3D Monte Carlo simulations for local values calculations in a PWR core
Bernard, Franck; Cochet, Bertrand; Jinaphanh, Alexis; Jacquet, Olivier
2014-06-01
As technical support of the French Nuclear Safety Authority, IRSN has been developing the MORET Monte Carlo code for many years in the framework of criticality safety assessment and is now working to extend its application to reactor physics. For that purpose, besides the validation for criticality safety (more than 2000 benchmarks from the ICSBEP Handbook have been modeled and analyzed), a complementary validation phase for reactor physics has been started, with benchmarks from the IRPhEP Handbook and others. In particular, to evaluate the applicability of MORET and other Monte Carlo codes for local flux or power density calculations in large power reactors, it has been decided to contribute to the "Monte Carlo Performance Benchmark" (hosted by OECD/NEA). The aim of this benchmark is to monitor, in forthcoming decades, the performance progress of detailed Monte Carlo full-core calculations. More precisely, it measures their advancement towards achieving high statistical accuracy in reasonable computation time for local power at the fuel pellet level. A full PWR reactor core is modeled to compute local power densities for more than 6 million fuel regions. This paper presents the results obtained at IRSN for this benchmark with MORET and comparisons with MCNP. The number of fuel elements is so large that source convergence as well as statistical convergence issues could cause large errors in local tallies, especially in peripheral zones. Various sampling or tracking methods have been implemented in MORET, and their operational effects on such a complex case have been studied. Beyond convergence issues, computing local values in so many fuel regions could cause prohibitive slowing down of neutron tracking. To avoid this, energy grid unification and tally preparation before tracking have been implemented, tested and proved to be successful. In this particular case, IRSN obtained promising results with MORET compared to MCNP, in terms of local power densities, standard
Sensitivity analysis for oblique incidence reflectometry using Monte Carlo simulations
DEFF Research Database (Denmark)
Kamran, Faisal; Andersen, Peter E.
2015-01-01
Oblique incidence reflectometry has developed into an effective, noncontact, and noninvasive measurement technology for the quantification of both the reduced scattering and absorption coefficients of a sample. The optical properties are deduced by analyzing only the shape of the reflectance profiles. This article presents a sensitivity analysis of the technique in turbid media. Monte Carlo simulations are used to investigate the technique and its potential to distinguish the small changes between different levels of scattering. We present various regions of the dynamic range of optical properties in which system demands vary in order to detect subtle changes in the structure of the medium, translated as measured optical properties. Effects of variation in anisotropy are discussed and results presented. Finally, experimental data of milk products with different fat content are considered...
Implementation and analysis of an adaptive multilevel Monte Carlo algorithm
Hoel, Hakon
2014-01-01
We present an adaptive multilevel Monte Carlo (MLMC) method for weak approximations of solutions to Itô stochastic differential equations (SDEs). The work [11] proposed and analyzed an MLMC method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a single-level Euler-Maruyama Monte Carlo method from O(TOL^-3) to O(TOL^-2 log(TOL^-1)^2) for a mean square error of O(TOL^2). Later, the work [17] presented an MLMC method using a hierarchy of adaptively refined, non-uniform time discretizations, and, as such, it may be considered a generalization of the uniform time discretization MLMC method. This work improves the adaptive MLMC algorithms presented in [17] and it also provides mathematical analysis of the improved algorithms. In particular, we show that under some assumptions our adaptive MLMC algorithms are asymptotically accurate and essentially have the correct complexity but with improved control of the complexity constant factor in the asymptotic analysis. Numerical tests include one case with singular drift and one with stopped diffusion, where the complexity of a uniform single-level method is O(TOL^-4). For both these cases the results confirm the theory, exhibiting savings in the computational cost for achieving the accuracy O(TOL) from O(TOL^-3) for the adaptive single-level algorithm to essentially O(TOL^-2 log(TOL^-1)^2) for the adaptive MLMC algorithm. © 2014 by Walter de Gruyter Berlin/Boston 2014.
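The telescoping estimator underlying both the uniform and adaptive MLMC variants can be sketched as follows. This is a minimal uniform-timestep illustration for a geometric Brownian motion; the drift, volatility, level count and path counts are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Minimal uniform-timestep MLMC sketch for E[g(X_T)] of the GBM SDE
# dX = a*X dt + b*X dW, discretized by Euler-Maruyama. Level l uses 2**l
# steps; fine and coarse paths share Brownian increments (the control variate).

def euler_pair(a, b, x0, T, level, n_paths, rng):
    """Return coupled fine (2**level steps) and coarse (half) samples of X_T."""
    n_fine = 2 ** level
    h = T / n_fine
    dW = rng.normal(0.0, np.sqrt(h), size=(n_paths, n_fine))
    xf = np.full(n_paths, x0)
    for k in range(n_fine):
        xf = xf + a * xf * h + b * xf * dW[:, k]
    if level == 0:
        return xf, np.zeros(n_paths)            # no coarser level below l = 0
    xc = np.full(n_paths, x0)
    for k in range(n_fine // 2):                # one coarse step sums two dW's
        dWc = dW[:, 2 * k] + dW[:, 2 * k + 1]
        xc = xc + a * xc * (2 * h) + b * xc * dWc
    return xf, xc

def mlmc_estimate(a, b, x0, T, g, max_level, n_paths, seed=0):
    """Telescoping sum: E[g(X_0)] + sum_l E[g(X_l) - g(X_{l-1})]."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for level in range(max_level + 1):
        xf, xc = euler_pair(a, b, x0, T, level, n_paths, rng)
        corr = g(xf) - (g(xc) if level > 0 else 0.0)
        total += corr.mean()
    return total

est = mlmc_estimate(a=0.05, b=0.2, x0=1.0, T=1.0, g=lambda x: x,
                    max_level=5, n_paths=20000)
```

In a full MLMC implementation the number of paths per level would be chosen from the estimated level variances (and, in the adaptive variant, the time grids refined per path); here a fixed count per level keeps the telescoping structure visible.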
Monte Carlo analysis of radiative transport in oceanographic lidar measurements
Energy Technology Data Exchange (ETDEWEB)
Cupini, E.; Ferro, G. [ENEA, Divisione Fisica Applicata, Centro Ricerche Ezio Clementel, Bologna (Italy); Ferrari, N. [Bologna Univ., Bologna (Italy). Dipt. Ingegneria Energetica, Nucleare e del Controllo Ambientale
2001-07-01
The analysis of oceanographic lidar system measurements is often carried out with semi-empirical methods, since there is only a rough understanding of the effects of many environmental variables. The development of techniques for interpreting the accuracy of lidar measurements is needed to evaluate the effects of various environmental situations, as well as of different experimental geometric configurations and boundary conditions. A Monte Carlo simulation model represents a tool that is particularly well suited for answering these important questions. The PREMAR-2F Monte Carlo code has been developed taking into account the main molecular and non-molecular components of the marine environment. The laser radiation interaction processes of diffusion, re-emission, refraction and absorption are treated. In particular, the following are considered: Rayleigh elastic scattering, produced by atoms and molecules small with respect to the laser emission wavelength (i.e. water molecules); Mie elastic scattering, arising from particles with dimensions comparable to the laser wavelength (hydrosols); Raman inelastic scattering, typical of water; absorption by water and by inorganic (sediments) and organic (phytoplankton and CDOM) hydrosols; and fluorescence re-emission of chlorophyll and yellow substances. PREMAR-2F is an extension of a code for the simulation of radiative transport in atmospheric environments (PREMAR-2). The approach followed in PREMAR-2 was to combine conventional Monte Carlo techniques with analytical estimates of the probability that the receiver collects a contribution from photons coming back after an interaction within the field of view of the lidar fluorosensor collecting apparatus. This offers an effective means for modelling a lidar system with realistic geometric constraints. The retrieved semianalytic Monte Carlo radiative transfer model has been developed in the frame of the Italian Research Program for Antarctica (PNRA) and it is
The Monte Carlo Simulation Method for System Reliability and Risk Analysis
Zio, Enrico
2013-01-01
Monte Carlo simulation is one of the best tools for performing realistic analysis of complex systems, as it allows most of the limiting assumptions on system behavior to be relaxed. The Monte Carlo Simulation Method for System Reliability and Risk Analysis comprehensively illustrates the Monte Carlo simulation method and its application to reliability and system engineering. Readers are given a sound understanding of the fundamentals of Monte Carlo sampling and simulation and its application for realistic system modeling. Whilst many of the topics rely on a high-level understanding of calculus, probability and statistics, simple academic examples are provided in support of the explanation of the theoretical foundations to facilitate comprehension of the subject matter. Case studies are introduced to convey the practical value of the most advanced techniques. This detailed approach makes The Monte Carlo Simulation Method for System Reliability and Risk Analysis a key reference for senior undergra...
Energy Technology Data Exchange (ETDEWEB)
Cullen, D E
1998-11-22
TART98 is a coupled neutron-photon, 3 Dimensional, combinatorial geometry, time dependent Monte Carlo radiation transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART98 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART98 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART98 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART98 and its data files.
Langmore, Ian; Davis, Anthony B.; Bal, Guillaume; Marzouk, Youssef M.
2012-01-01
We describe a method for accelerating a 3D Monte Carlo forward radiative transfer model to the point where it can be used in a new kind of Bayesian retrieval framework. The remote sensing challenge is to detect and quantify a chemical effluent of a known absorbing gas produced by an industrial facility in a deep valley. The available data is a single low-resolution noisy image of the scene in the near IR at an absorbing wavelength for the gas of interest. The detected sunlight has been multiply reflected by the variable terrain and/or scattered by an aerosol that is assumed partially known and partially unknown. We thus introduce a new class of remote sensing algorithms best described as "multi-pixel" techniques that necessarily call for a 3D radiative transfer model (but demonstrated here in 2D); they can be added to conventional ones that typically exploit multi- or hyper-spectral data, sometimes with multi-angle capability, with or without information about polarization. The novel Bayesian inference methodology uses adaptively, with efficiency in mind, the fact that a Monte Carlo forward model has a known and controllable uncertainty depending on the number of sun-to-detector paths used.
Development of a randomized 3D cell model for Monte Carlo microdosimetry simulations
Energy Technology Data Exchange (ETDEWEB)
Douglass, Michael; Bezak, Eva; Penfold, Scott [School of Chemistry and Physics, University of Adelaide, North Terrace, Adelaide 5005, South Australia (Australia) and Department of Medical Physics, Royal Adelaide Hospital, North Terrace, Adelaide 5000, South Australia (Australia)
2012-06-15
Purpose: The objective of the current work was to develop an algorithm for growing a macroscopic tumor volume from individual randomized quasi-realistic cells. The major physical and chemical components of the cell need to be modeled. It is intended to import the tumor volume into GEANT4 (and potentially other Monte Carlo packages) to simulate ionization events within the cell regions. Methods: A MATLAB© code was developed to produce a tumor coordinate system consisting of individual ellipsoidal cells randomized in their spatial coordinates, sizes, and rotations. An eigenvalue method using a mathematical equation to represent individual cells was used to detect overlapping cells. GEANT4 code was then developed to import the coordinate system into GEANT4 and populate it with individual cells of varying sizes, composed of the membrane, cytoplasm, reticulum, nucleus, and nucleolus. Each region is composed of chemically realistic materials. Results: The in-house developed MATLAB© code was able to grow semi-realistic cell distributions (≈2 × 10^8 cells in 1 cm^3) in under 36 h. The cell distribution can be used in any number of Monte Carlo particle tracking toolkits including GEANT4, which has been demonstrated in this work. Conclusions: Using the cell distribution and GEANT4, the authors were able to simulate ionization events in the individual cell components resulting from 80 keV gamma radiation (the code is applicable to other particles and a wide range of energies). This virtual microdosimetry tool will allow for a more complete picture of cell damage to be developed.
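The overlap-rejection idea behind growing such a cell distribution can be sketched with a deliberately simplified model: spheres stand in for the paper's randomized ellipsoids and eigenvalue overlap test, and the box size, cell count and radii are illustrative assumptions only:

```python
import numpy as np

# Toy rejection packing of randomized "cells": accept a candidate sphere only
# if it overlaps no previously placed sphere. The paper's algorithm uses
# ellipsoids with random rotations and an eigenvalue-based overlap test; the
# sphere test below (centre distance vs. sum of radii) is a simplification.
rng = np.random.default_rng(5)
box = 100.0                      # cubic sub-volume edge (um), illustrative
cells = []                       # list of (centre, radius)
while len(cells) < 200:
    c = rng.uniform(0.0, box, 3)
    r = rng.uniform(4.0, 7.0)    # randomized cell radius (um)
    if all(np.linalg.norm(c - c2) > r + r2 for c2, r2 in cells):
        cells.append((c, r))
```

Rejection packing like this is simple but slows down as the volume fills; the paper's eigenvalue test plays the same role for ellipsoids, where a distance check alone cannot decide overlap.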
Accuracy Analysis of Assembly Success Rate with Monte Carlo Simulations
Institute of Scientific and Technical Information of China (English)
仲昕; 杨汝清; 周兵
2003-01-01
Monte Carlo simulation was applied to Assembly Success Rate (ASR) analysis. The ASR of two peg-in-hole robot assemblies was used as an example, taking component parts' sizes, manufacturing tolerances and robot repeatability into account. A statistical arithmetic expression was proposed and deduced in this paper, which offers an alternative method of estimating the accuracy of ASR without having to repeat the simulations. This statistical method also helps to choose a suitable sample size if error reduction is desired. Monte Carlo simulation results demonstrated the feasibility of the method.
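The ASR estimate, and the binomial accuracy expression that avoids repeating the simulation, can be sketched as follows for a single peg-in-hole insertion; the tolerances and repeatability values are illustrative assumptions, not those of the paper:

```python
import numpy as np

# Monte Carlo Assembly Success Rate (ASR) sketch for one peg-in-hole task.
rng = np.random.default_rng(42)
n = 200_000

peg_d  = rng.normal(9.98, 0.01, n)    # peg diameter: nominal, tolerance sigma (mm)
hole_d = rng.normal(10.02, 0.01, n)   # hole diameter
# radial placement error from robot repeatability (x, y independent)
ex = rng.normal(0.0, 0.008, n)
ey = rng.normal(0.0, 0.008, n)
offset = np.hypot(ex, ey)

# insertion succeeds when the radial clearance exceeds the placement offset
success = (hole_d - peg_d) / 2.0 > offset
asr = success.mean()

# binomial standard error: the accuracy of the ASR estimate follows from the
# sample size alone, without repeating the simulation, which is the spirit of
# the paper's statistical expression
se = np.sqrt(asr * (1.0 - asr) / n)
```

Inverting the standard-error formula, n = asr(1 - asr)/se^2, is how a suitable sample size is chosen for a target accuracy.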
Monte Carlo Alpha Iteration Algorithm for a Subcritical System Analysis
Directory of Open Access Journals (Sweden)
Hyung Jin Shim
2015-01-01
The α-k iteration method, which searches for the fundamental-mode alpha-eigenvalue via iterative updates of the fission source distribution, has been successfully used for the Monte Carlo (MC) alpha-static calculations of supercritical systems. However, the α-k iteration method for deep subcritical system analysis suffers from a gigantic number of neutron generations or a huge neutron weight, which leads to an abnormal termination of the MC calculations. In order to stably estimate the prompt neutron decay constant (α) of prompt subcritical systems regardless of subcriticality, we propose a new MC alpha-static calculation method named the α iteration algorithm. The new method is derived by directly applying the power method to the α-mode eigenvalue equation, and its calculation stability is achieved by controlling the number of time source neutrons, which are generated in proportion to α divided by the neutron speed in MC neutron transport simulations. The effectiveness of the α iteration algorithm is demonstrated for two-group homogeneous problems with varying subcriticality by comparisons with analytic solutions. The applicability of the proposed method is evaluated for an experimental benchmark of the thorium-loaded accelerator-driven system.
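A deterministic two-group analogue of the α-mode power method described above can be sketched as follows. The group speeds and cross sections are illustrative assumptions (chosen to give a subcritical system), and a fixed positive shift plays the stabilizing role that time-source control plays in the MC algorithm:

```python
import numpy as np

# Two-group alpha-eigenvalue sketch: (F - L) phi = (alpha / v) phi, i.e. a
# power iteration on M = V (F - L), whose dominant real eigenvalue is the
# prompt decay constant alpha. All nuclear data below are illustrative.
v = np.array([1.0e7, 2.2e5])                  # group speeds (cm/s)

# (F - L): fast row = -removal, +nu*Sigma_f from thermal fissions (born fast);
# thermal row = +downscatter from fast, -thermal absorption (units cm^-1)
FmL = np.array([[-0.027, 0.1715],
                [ 0.015, -0.100]])

M = np.diag(v) @ FmL                          # alpha-mode operator

phi = np.ones(2)
shift = 1.0e6                                 # positive shift so the fundamental
A = M + shift * np.eye(2)                     # (largest real) mode dominates
for _ in range(300):                          # plain power iteration
    phi = A @ phi
    phi /= np.linalg.norm(phi)

alpha = phi @ (M @ phi)                       # Rayleigh quotient (phi normalized)
# alpha < 0 here: this illustrative system is subcritical
```

The MC α iteration replaces the matrix-vector products with neutron histories, but the fixed point it seeks is the same fundamental α-mode.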
Monte Carlo Simulations for Likelihood Analysis of the PEN experiment
Glaser, Charles; PEN Collaboration
2017-01-01
The PEN collaboration performed a precision measurement of the π+ ->e+νe(γ) branching ratio with the goal of obtaining a relative uncertainty of 5 ×10-4 or better at the Paul Scherrer Institute. A precision measurement of the branching ratio Γ(π -> e ν (γ)) / Γ(π -> μ ν (γ)) can be used to give mass bounds on ``new'', or non V -A, particles and interactions. This ratio also proves to be one of the most sensitive tests for lepton universality. The PEN detector consists of beam counters, an active target, a mini-time projection chamber, multi-wire proportional chamber, a plastic scintillating hodoscope, and a CsI electromagnetic calorimeter. The Geant4 Monte Carlo simulation is used to construct ultra-realistic events by digitizing energies and times, creating synthetic target waveforms, and fully accounting for photo-electron statistics. We focus on the detailed detector response to specific decay and background processes in order to sharpen the discrimination between them in the data analysis. Work supported by NSF grants PHY-0970013, 1307328, and others.
Monte-Carlo Application for Nondestructive Nuclear Waste Analysis
Carasco, C.; Engels, R.; Frank, M.; Furletov, S.; Furletova, J.; Genreith, C.; Havenith, A.; Kemmerling, G.; Kettler, J.; Krings, T.; Ma, J.-L.; Mauerhofer, E.; Neike, D.; Payan, E.; Perot, B.; Rossbach, M.; Schitthelm, O.; Schumann, M.; Vasquez, R.
2014-06-01
Radioactive waste has to undergo a process of quality checking in order to verify its conformance with national regulations prior to its transport, intermediate storage and final disposal. Within the quality checking of radioactive waste packages, non-destructive assays are required to characterize their radio-toxic and chemo-toxic contents. The Institute of Energy and Climate Research - Nuclear Waste Management and Reactor Safety of the Forschungszentrum Jülich develops, in the framework of cooperations, nondestructive analytical techniques for the routine characterization of radioactive waste packages at industrial scale. During the research and development phase, Monte Carlo techniques are used to simulate the transport of particles, especially photons, electrons and neutrons, through matter and to obtain the response of detection systems. The radiological characterization of low and intermediate level radioactive waste drums is performed by segmented γ-scanning (SGS). To precisely and accurately reconstruct the isotope-specific activity content in waste drums from SGS measurements, an innovative method called SGSreco was developed. The Geant4 code was used to simulate the response of the collimated detection system for waste drums with different activity and matrix configurations. These simulations allow a far more detailed optimization, validation and benchmarking of SGSreco, since the construction of test drums covering a broad range of activity and matrix properties is time consuming and cost intensive. The MEDINA (Multi Element Detection based on Instrumental Neutron Activation) test facility was developed to identify and quantify non-radioactive elements and substances in radioactive waste drums. MEDINA is based on prompt and delayed gamma neutron activation analysis (P&DGNAA) using a 14 MeV neutron generator. MCNP simulations were carried out to study the response of the MEDINA facility in terms of gamma spectra and the time dependence of the neutron energy spectrum
Monte Carlo Study of Topological Defects in the 3D Heisenberg Model
Holm, C; Holm, Christian; Janke, Wolfhard
1994-01-01
We use single-cluster Monte Carlo simulations to study the role of topological defects in the three-dimensional classical Heisenberg model on simple cubic lattices of size up to $80^3$. By applying reweighting techniques to time series generated in the vicinity of the approximate infinite volume transition point $K_c$, we obtain clear evidence that the temperature derivative of the average defect density $d\\langle n \\rangle/dT$ behaves qualitatively like the specific heat, i.e., both observables are finite in the infinite volume limit. This is in contrast to results by Lau and Dasgupta [{\\em Phys. Rev.\\/} {\\bf B39} (1989) 7212] who extrapolated a divergent behavior of $d\\langle n \\rangle/dT$ at $K_c$ from simulations on lattices of size up to $16^3$. We obtain weak evidence that $d\\langle n \\rangle/dT$ scales with the same critical exponent as the specific heat. As a byproduct of our simulations, we obtain a very accurate estimate for the ratio $\\alpha/\
Algebraic Monte Carlo procedure reduces statistical analysis time and cost factors
Africano, R. C.; Logsdon, T. S.
1967-01-01
Algebraic Monte Carlo procedure statistically analyzes performance parameters in large, complex systems. The individual effects of input variables can be isolated and individual input statistics can be changed without having to repeat the entire analysis.
Benchmark of Atucha-2 PHWR RELAP5-3D control rod model by Monte Carlo MCNP5 core calculation
Energy Technology Data Exchange (ETDEWEB)
Pecchia, M.; D'Auria, F. [San Piero A Grado Nuclear Research Group GRNSPG, Univ. of Pisa, via Diotisalvi, 2, 56122 - Pisa (Italy); Mazzantini, O. [Nucleo-electrica Argentina Societad Anonima NA-SA, Buenos Aires (Argentina)
2012-07-01
Atucha-2 is a Siemens-designed PHWR reactor under construction in the Republic of Argentina. Its geometrical complexity and peculiarities require the adoption of advanced Monte Carlo codes for performing realistic neutronic simulations. Therefore, core models of the Atucha-2 PHWR were developed using MCNP5. In this work a methodology was set up to collect the flux in the hexagonal mesh by which the Atucha-2 core is represented. The scope of this activity is to evaluate the effect of the obliquely inserted control rods on the neutron flux, in order to validate the RELAP5-3D{sup C}/NESTLE three-dimensional neutron kinetics coupled thermal-hydraulic model applied by GRNSPG/UNIPI for performing selected transients of Chapter 15 of the Atucha-2 FSAR. (authors)
pyNSMC: A Python Module for Null-Space Monte Carlo Uncertainty Analysis
White, J.; Brakefield, L. K.
2015-12-01
The null-space Monte Carlo technique is a non-linear uncertainty analysis technique that is well suited to high-dimensional inverse problems. While the technique is powerful, the existing workflow for completing null-space Monte Carlo is cumbersome, requiring the use of multiple command-line utilities, several sets of intermediate files and even a text editor. pyNSMC is an open-source Python module that automates the workflow of null-space Monte Carlo uncertainty analyses. The module is fully compatible with the PEST and PEST++ software suites and leverages existing functionality of pyEMU, a Python framework for linear-based uncertainty analyses. pyNSMC greatly simplifies the existing workflow for null-space Monte Carlo by taking advantage of object-oriented design facilities in Python. The core of pyNSMC is the ensemble class, which draws and stores realized random vectors and also provides functionality for exporting and visualizing results. By relieving users of the tedium associated with file handling and command-line utility execution, pyNSMC instead focuses the user on the important steps and assumptions of null-space Monte Carlo analysis. Furthermore, pyNSMC facilitates learning through flow charts and results visualization, which are available at many points in the algorithm. The ease of use of the pyNSMC workflow is compared to the existing workflow for null-space Monte Carlo for a synthetic groundwater model with hundreds of estimable parameters.
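The null-space projection at the heart of the technique (independent of the pyNSMC API) can be sketched with plain linear algebra; the Jacobian and parameter dimensions below are arbitrary stand-ins for a calibrated model:

```python
import numpy as np

# Null-space Monte Carlo sketch: random parameter perturbations are projected
# onto the null space of the Jacobian J, so that, to first order, each
# realization reproduces the calibrated observations while exploring the
# directions the data cannot constrain.
rng = np.random.default_rng(1)

n_par, n_obs = 12, 5                 # over-parameterized: 12 params, 5 obs
J = rng.normal(size=(n_obs, n_par))  # stand-in for a model Jacobian
p_cal = rng.normal(size=n_par)       # calibrated parameter vector

# columns of V beyond the rank of J span the calibration null space
U, s, Vt = np.linalg.svd(J)
rank = int(np.sum(s > 1e-10))
V_null = Vt[rank:].T                 # n_par x (n_par - rank)

def draw_realization():
    dp = rng.normal(size=n_par)              # raw random perturbation
    dp_null = V_null @ (V_null.T @ dp)       # keep only the null-space part
    return p_cal + dp_null

ens = np.array([draw_realization() for _ in range(100)])
```

In practice each realization is then re-run (and possibly re-calibrated) through the forward model, which is the expensive step pyNSMC automates around PEST/PEST++.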
Improved analysis of bias in Monte Carlo criticality safety
Haley, Thomas C.
2000-08-01
Criticality safety, the prevention of nuclear chain reactions, depends on Monte Carlo computer codes for most commercial applications. One major shortcoming of these codes is the limited accuracy of the atomic and nuclear data files they depend on. In order to apply a code and its data files to a given criticality safety problem, the code must first be benchmarked against similar problems for which the answer is known. The difference between a code prediction and the known solution is termed the "bias" of the code. Traditional calculations of the bias for application to commercial criticality problems are generally full of assumptions and lead to large uncertainties which must be conservatively factored into the bias as statistical tolerances. Recent trends in storing commercial nuclear fuel---narrowed regulatory margins of safety, degradation of neutron absorbers, the desire to use higher enrichment fuel, etc.---push the envelope of criticality safety. They make it desirable to minimize uncertainty in the bias to accommodate these changes, and they make it vital to understand what assumptions are safe to make under what conditions. A set of improved procedures is proposed for (1) developing multivariate regression bias models, and (2) applying multivariate regression bias models. These improved procedures lead to more accurate estimates of the bias and much smaller uncertainties about this estimate, while also generally providing more conservative results. The drawback is that the procedures are not trivial and are highly labor intensive to implement. The payback in savings in margin to criticality and conservatism for calculations near regulatory and safety limits may be worth this cost. To develop these procedures, a bias model using the statistical technique of weighted least squares multivariate regression is developed in detail. Problems that can occur from a weak statistical analysis are highlighted, and a solid statistical method for developing the bias
Gauge Potts model with generalized action: A Monte Carlo analysis
Energy Technology Data Exchange (ETDEWEB)
Fanchiotti, H.; Canal, C.A.G.; Sciutto, S.J.
1985-08-15
Results of a Monte Carlo calculation on the q-state gauge Potts model in d dimensions with a generalized action involving planar 1 × 1 (plaquette) and 2 × 1 (fenêtre) loop interactions are reported. For d = 3 and q = 2, first- and second-order phase transitions are detected. The phase diagram for q = 3 presents only first-order phase transitions. For d = 2, a comparison with analytical results is made. Here also, the behavior of the numerical simulation in the vicinity of a second-order transition is analyzed.
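For readers unfamiliar with Potts dynamics, a minimal Metropolis sketch of the ordinary spin q-state Potts model (not the gauge model with plaquette and fenêtre terms studied in the paper) conveys the basic simulation loop; lattice size, q and coupling are illustrative:

```python
import numpy as np

# Metropolis simulation of the 2D q-state (spin) Potts model,
# E = -J * sum over nearest-neighbour pairs of delta(s_i, s_j), J = 1.
rng = np.random.default_rng(3)
L, q, beta = 16, 3, 2.0                  # lattice size, states, inverse temperature
s = rng.integers(q, size=(L, L))

def energy(s):
    # count equal nearest-neighbour pairs once per bond (periodic boundaries)
    return -(np.sum(s == np.roll(s, 1, axis=0)) +
             np.sum(s == np.roll(s, 1, axis=1)))

def sweep(s):
    for _ in range(s.size):
        i, j = rng.integers(L), rng.integers(L)
        new = rng.integers(q)
        nb = [s[(i + 1) % L, j], s[(i - 1) % L, j],
              s[i, (j + 1) % L], s[i, (j - 1) % L]]
        # dE = (matches lost) - (matches gained)
        dE = sum(v == s[i, j] for v in nb) - sum(v == new for v in nb)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i, j] = new

e0 = energy(s)                # random start
for _ in range(30):
    sweep(s)
e1 = energy(s)                # beta = 2.0 is deep in the ordered phase
```

The gauge version replaces site spins by link variables and the nearest-neighbour bond energy by the loop (plaquette/fenêtre) terms of the generalized action, but the accept/reject structure is the same.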
Monte Carlo Criticality Methods and Analysis Capabilities in SCALE
Energy Technology Data Exchange (ETDEWEB)
Goluoglu, Sedat [ORNL; Petrie Jr, Lester M [ORNL; Dunn, Michael E [ORNL; Hollenbach, Daniel F [ORNL; Rearden, Bradley T [ORNL
2011-01-01
This paper describes the Monte Carlo codes KENO V.a and KENO-VI in SCALE that are primarily used to calculate multiplication factors and flux distributions of fissile systems. Both codes allow explicit geometric representation of the target systems and are used internationally for safety analyses involving fissile materials. KENO V.a has limiting geometric rules such as no intersections and no rotations. These limitations make KENO V.a execute very efficiently and run very fast. On the other hand, KENO-VI allows very complex geometric modeling. Both KENO codes can utilize either continuous-energy or multigroup cross-section data and have been thoroughly verified and validated with ENDF libraries through ENDF/B-VII.0, which was first distributed with SCALE 6. Development of the Monte Carlo solution technique and solution methodology as applied in both KENO codes is explained in this paper. Available options and proper application of the options and techniques are also discussed. Finally, performance of the codes is demonstrated using published benchmark problems.
An Advanced Neutronic Analysis Toolkit with Inline Monte Carlo capability for BHTR Analysis
Energy Technology Data Exchange (ETDEWEB)
William R. Martin; John C. Lee
2009-12-30
Monte Carlo capability has been combined with a production LWR lattice physics code to allow analysis of high temperature gas reactor configurations, accounting for the double heterogeneity due to the TRISO fuel. The Monte Carlo code MCNP5 has been used in conjunction with CPM3, which was the testbench lattice physics code for this project. MCNP5 is used to perform two calculations for the geometry of interest, one with homogenized fuel compacts and the other with heterogeneous fuel compacts, where the TRISO fuel kernels are resolved by MCNP5.
Energy Technology Data Exchange (ETDEWEB)
Aleshin, Sergey S.; Gorodkov, Sergey S.; Shcherenko, Anna I. [National Research Centre 'Kurchatov Institute', Moscow (Russian Federation)
2016-09-15
Burn-up calculation of large systems with the Monte Carlo code MCU is a complex process that requires large computational cost. Previously prepared isotopic compositions are proposed to be used in Monte Carlo code calculations of different system states with burnt fuel. The isotopic compositions are calculated by an approximation method based on the use of a spectral functional and reference isotopic compositions calculated by the engineering codes (TVS-M, BIPR-7A and PERMAK-A). The multiplication factors and power distributions of FAs from a 3-D reactor core are calculated in this work by the Monte Carlo code MCU using the previously prepared isotopic compositions. Separate states of the burnt core are considered. The results of the MCU calculations were compared with those obtained by the engineering codes.
Scaling/LER study of Si GAA nanowire FET using 3D finite element Monte Carlo simulations
Elmessary, Muhammad A.; Nagy, Daniel; Aldegunde, Manuel; Seoane, Natalia; Indalecio, Guillermo; Lindberg, Jari; Dettmer, Wulf; Perić, Djordje; García-Loureiro, Antonio J.; Kalna, Karol
2017-02-01
A 3D Finite Element (FE) Monte Carlo (MC) simulation toolbox incorporating 2D Schrödinger equation quantum corrections is employed to simulate ID-VG characteristics of a 22 nm gate length gate-all-around (GAA) Si nanowire (NW) FET, demonstrating an excellent agreement against experimental data at both low and high drain biases. We then scale the Si GAA NW according to the ITRS specifications to a gate length of 10 nm, predicting that the NW FET will deliver the required on-current of above 1 mA/μm and a superior electrostatic integrity with a nearly ideal sub-threshold slope of 68 mV/dec and a DIBL of 39 mV/V. In addition, we use a calibrated 3D FE quantum corrected drift-diffusion (DD) toolbox to investigate the effects of NW line-edge roughness (LER) induced variability on the sub-threshold characteristics (threshold voltage (VT), OFF-current (IOFF), sub-threshold slope (SS) and drain-induced-barrier-lowering (DIBL)) for the 22 nm and 10 nm gate length GAA NW FETs at low and high drain biases. We simulate variability with two LER correlation lengths (CL = 20 nm and 10 nm) and three root mean square values (RMS = 0.6, 0.7 and 0.85 nm).
Kudrolli, Haris A.
2001-04-01
A three dimensional (3D) reconstruction procedure for Positron Emission Tomography (PET) based on inverse Monte Carlo analysis is presented. PET is a medical imaging modality which employs a positron emitting radio-tracer to give functional images of an organ's metabolic activity. This makes PET an invaluable tool in the detection of cancer and for in-vivo biochemical measurements. There are a number of analytical and iterative algorithms for image reconstruction of PET data. Analytical algorithms are computationally fast, but the assumptions intrinsic in the line integral model limit their accuracy. Iterative algorithms can apply accurate models for reconstruction and give improvements in image quality, but at an increased computational cost. These algorithms require the explicit calculation of the system response matrix, which may not be easy to calculate. This matrix gives the probability that a photon emitted from a certain source element will be detected in a particular detector line of response. The ``Three Dimensional Stochastic Sampling'' (SS3D) procedure implements iterative algorithms in a manner that does not require the explicit calculation of the system response matrix. It uses Monte Carlo techniques to simulate the process of photon emission from a source distribution and interaction with the detector. This technique has the advantage of being able to model complex detector systems and also take into account the physics of gamma ray interaction within the source and detector systems, which leads to an accurate image estimate. A series of simulation studies was conducted to validate the method using the Maximum Likelihood - Expectation Maximization (ML-EM) algorithm. The accuracy of the reconstructed images was improved by using an algorithm that required a priori knowledge of the source distribution. Means to reduce the computational time for reconstruction were explored by using parallel processors and algorithms that had faster convergence rates
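The ML-EM update named in the abstract has a compact multiplicative form, sketched here with a small toy system matrix rather than one obtained by Monte Carlo simulation of the detector:

```python
import numpy as np

# ML-EM for emission tomography: x_{k+1} = x_k / (A^T 1) * A^T (y / (A x_k)).
# A[i, j] plays the role of the system response matrix entry: the probability
# that an emission in pixel j is detected in line of response (LOR) i.
rng = np.random.default_rng(7)

n_pix, n_lor = 16, 40
A = rng.uniform(0.0, 1.0, size=(n_lor, n_pix))   # toy system matrix
x_true = rng.uniform(1.0, 5.0, size=n_pix)       # true activity distribution
y = rng.poisson(A @ x_true)                      # noisy measured counts

sens = A.sum(axis=0)                             # sensitivity image A^T 1
x = np.ones(n_pix)                               # flat initial estimate
for _ in range(200):
    proj = A @ x                                 # forward projection
    ratio = y / np.maximum(proj, 1e-12)          # compare with measured counts
    x = x / sens * (A.T @ ratio)                 # multiplicative EM update
```

The SS3D procedure described above avoids forming A explicitly: the forward projection and its transpose are replaced by Monte Carlo photon transport through the source and detector model, which is what lets it capture complex detector physics.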
Niccolini, G.; Alcolea, J.
Solving the radiative transfer problem is a problem common to many fields in astrophysics. With the increasing angular resolution of space- or ground-based telescopes (VLTI, HST), and with the next decade's instruments (NGST, ALMA, ...), astrophysical objects reveal, and will certainly reveal, complex spatial structures. Consequently, it is necessary to develop numerical tools able to solve the radiative transfer equation in three dimensions in order to model and interpret these observations. I present a 3D radiative transfer program, using a new method for the construction of an adaptive spatial grid, based on the Monte Carlo method. With the help of this tool, one can solve the continuum radiative transfer problem (e.g. a dusty medium), compute the temperature structure of the considered medium and obtain the flux of the object (SED and images).
Energy Technology Data Exchange (ETDEWEB)
Zhang, Y; Yang, J; Liu, H [Cangzhou People's Hospital, Cangzhou, Hebei (China); Liu, D [The Fourth Hospital of Hebei Medical University, Shijiazhuang, Hebei (China)
2014-06-01
Purpose: The purpose of this work is to compare the verification results of three solutions (2D/3D ionization chamber array measurement and Monte Carlo simulation); the results will help us make a clinical decision on how to perform our cervical IMRT verification. Methods: Seven cervical cases were planned with Pinnacle 8.0m to meet the clinical acceptance criteria. The plans were recalculated in the Matrixx and Delta4 phantoms with the exact plan parameters. The plans were also recalculated by Monte Carlo using the leaf sequences and MUs of the individual plans for every patient and for the Matrixx and Delta4 phantoms. All Matrixx and Delta4 phantom plans were delivered and measured. The dose distribution of the iso slice, dose profiles and gamma maps of every beam were used to evaluate the agreement. Dose-volume histograms were also compared. Results: The dose distribution of the iso slice and the dose profiles from the Pinnacle calculation were in agreement with the Monte Carlo simulation and the Matrixx and Delta4 measurements. A 95.2%/91.3% gamma pass ratio was obtained between the Matrixx/Delta4 measurements and the Pinnacle distributions within the 3 mm/3% gamma criteria. A 96.4%/95.6% gamma pass ratio was obtained between the Matrixx/Delta4 measurements and the Monte Carlo simulation within the 2 mm/2% gamma criteria, and almost a 100% gamma pass ratio within the 3 mm/3% gamma criteria. The DVH plots show slight differences between Pinnacle and the Delta4 measurement, as well as between Pinnacle and the Monte Carlo simulation, but excellent agreement between the Delta4 measurement and the Monte Carlo simulation. Conclusion: It was shown that Matrixx/Delta4 and Monte Carlo simulation can be used very efficiently to verify cervical IMRT delivery. In terms of gamma values the pass ratio of Matrixx was a little higher; however, Delta4 revealed more problem fields. The primary advantage of Delta4 is the fact that it can measure true 3D dosimetry, while Monte Carlo can simulate in patient CT images rather than in a phantom.
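The gamma pass ratios quoted above combine a dose-difference criterion with a distance-to-agreement criterion; a simplified 1D version of the test can be sketched as follows (clinical tools evaluate full 3D dose grids, and the Gaussian profiles here are purely illustrative):

```python
import numpy as np

# Simplified 1D gamma-index analysis with global dose normalization.
def gamma_pass_ratio(x_mm, dose_ref, dose_eval, dd=0.03, dta_mm=3.0):
    """Fraction of reference points with gamma <= 1 (3%/3 mm by default)."""
    d_norm = dd * dose_ref.max()
    passed = 0
    for xr, dr in zip(x_mm, dose_ref):
        # combined distance-to-agreement and dose-difference metric,
        # minimized over all evaluated positions
        gamma_sq = ((x_mm - xr) / dta_mm) ** 2 + ((dose_eval - dr) / d_norm) ** 2
        if np.sqrt(gamma_sq.min()) <= 1.0:
            passed += 1
    return passed / len(dose_ref)

x = np.linspace(0.0, 100.0, 201)                   # positions (mm)
ref = np.exp(-((x - 50.0) / 20.0) ** 2)            # reference dose profile
meas = 1.02 * np.exp(-((x - 50.5) / 20.0) ** 2)    # 2% scaled, 0.5 mm shifted
ratio = gamma_pass_ratio(x, ref, meas)
```

With a 2% dose scaling and 0.5 mm shift, well inside the 3%/3 mm tolerance, essentially every point passes; tightening dd and dta_mm to 2%/2 mm reproduces the stricter criterion used in the abstract.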
NEPHTIS: 2D/3D validation elements using MCNP4c and TRIPOLI4 Monte-Carlo codes
Energy Technology Data Exchange (ETDEWEB)
Courau, T.; Girardi, E. [EDF R and D/SINETICS, 1av du General de Gaulle, F92141 Clamart CEDEX (France); Damian, F.; Moiron-Groizard, M. [DEN/DM2S/SERMA/LCA, CEA Saclay, F91191 Gif-sur-Yvette CEDEX (France)
2006-07-01
High Temperature Reactors (HTRs) appear as a promising concept for the next generation of nuclear power applications. The CEA, in collaboration with AREVA-NP and EDF, is developing a core modeling tool dedicated to the prismatic block-type reactor. NEPHTIS (Neutronics Process for HTR Innovating System) is a deterministic code system based on a standard two-step transport-diffusion approach (APOLLO2/CRONOS2). Validation of such deterministic schemes usually relies on Monte-Carlo (MC) codes used as a reference. However, when dealing with large HTR cores, the fission source stabilization is rather poor with MC codes. In spite of this, it is shown in this paper that MC simulations may be used as a reference for a wide range of configurations. The first part of the paper is devoted to 2D and 3D MC calculations of a HTR core with control devices. Comparisons between the MCNP4c and TRIPOLI4 MC codes are performed and show very consistent results. Finally, the last part of the paper is devoted to the code-to-code validation of the NEPHTIS deterministic scheme. (authors)
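The fission-source stabilization issue mentioned above is commonly diagnosed by tracking the Shannon entropy of the binned source distribution from cycle to cycle. A toy sketch, with a deliberately poor initial source and a purely illustrative jump kernel (no transport physics):

```python
import math, random

def shannon_entropy(positions, n_bins=20, width=100.0):
    """Shannon entropy (bits) of source sites binned over the slab."""
    counts = [0] * n_bins
    for x in positions:
        counts[min(int(x / width * n_bins), n_bins - 1)] += 1
    n = len(positions)
    return -sum(c / n * math.log2(c / n) for c in counts if c > 0)

def iterate_source(n_particles=2000, n_cycles=30, width=100.0, seed=1):
    """Toy fission-source iteration: each new site is sampled near a
    randomly chosen parent site, with reflection at the slab boundaries."""
    random.seed(seed)
    # deliberately poor initial guess: all sites crowded at one end
    sites = [random.uniform(0.0, 1.0) for _ in range(n_particles)]
    history = []
    for _ in range(n_cycles):
        new_sites = []
        for _ in range(n_particles):
            x = abs(random.choice(sites) + random.gauss(0.0, 5.0))
            if x > width:
                x = 2.0 * width - x           # reflective boundary
            new_sites.append(x)
        sites = new_sites
        history.append(shannon_entropy(sites))
    return history

h = iterate_source()
print(round(h[0], 2), round(h[-1], 2))  # low initial entropy vs. later cycles
```

In production codes the same diagnostic (source entropy plateauing) is what determines how many inactive cycles to discard before tallying.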
Accuracy Analysis for 6-DOF PKM with Sobol Sequence Based Quasi Monte Carlo Method
Institute of Scientific and Technical Information of China (English)
Jianguang Li; Jian Ding; Lijie Guo; Yingxue Yao; Zhaohong Yi; Huaijing Jing; Honggen Fang
2015-01-01
To improve the precision of pose error analysis for a 6-DOF parallel kinematic mechanism (PKM) during assembly quality control, a Sobol-sequence-based quasi-Monte Carlo (QMC) method is introduced and implemented in the pose accuracy analysis for the PKM in this paper. Owing to the regularity and uniformity of its samples in high dimensions, the Sobol-sequence-based quasi-Monte Carlo method can outperform the traditional Monte Carlo method, with up to 98.59% and 98.25% enhancement in the computational precision of the pose error statistics. A PKM tolerance design system integrating this method is then developed, and with it the pose error distributions of the PKM within a prescribed workspace are finally obtained and analyzed.
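Quasi-Monte Carlo integration can be sketched with any low-discrepancy sequence; a Halton sequence is used below for a dependency-free illustration (the paper uses Sobol points), and the integrand is illustrative:

```python
import random

def van_der_corput(n, base=2):
    """n-th term of the van der Corput low-discrepancy sequence."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        n, r = divmod(n, base)
        q += r * bk
        bk /= base
    return q

def halton(n_points, bases=(2, 3)):
    """2-D Halton points: one van der Corput sequence per dimension."""
    return [[van_der_corput(i + 1, b) for b in bases] for i in range(n_points)]

def f(x, y):
    return x * y          # known integral over the unit square: 1/4

n = 4096
qmc_est = sum(f(x, y) for x, y in halton(n)) / n
random.seed(0)
mc_est = sum(f(random.random(), random.random()) for _ in range(n)) / n
print(abs(qmc_est - 0.25), abs(mc_est - 0.25))
```

For smooth integrands the QMC error decays close to O(1/n) versus O(1/sqrt(n)) for plain Monte Carlo, which is the mechanism behind the precision gains reported in the abstract.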
A study of the earth radiation budget using a 3D Monte-Carlo radiative transfer code
Okata, M.; Nakajima, T.; Sato, Y.; Inoue, T.; Donovan, D. P.
2013-12-01
The purpose of this study is to evaluate the earth's radiation budget when data are available from satellite-borne active sensors, i.e. cloud profiling radar (CPR) and lidar, and a multi-spectral imager (MSI) in the project of the Earth Explorer/EarthCARE mission. For this purpose, we first developed forward and backward 3D Monte Carlo radiative transfer codes that can treat a broadband solar flux calculation including thermal infrared emission calculation by the k-distribution parameters of Sekiguchi and Nakajima (2008). In order to construct the 3D cloud field, we tried the following three methods: 1) stochastic cloud fields generated by randomized layer-by-layer optical thickness distributions and regularly distributed tilted clouds, 2) numerical simulation by a non-hydrostatic model with a bin cloud microphysics model and 3) the Minimum cloud Information Deviation Profiling Method (MIDPM), as explained later. For method 2 (the numerical modeling method), we employed numerical simulation results of Californian summer stratus clouds simulated by a non-hydrostatic atmospheric model with a bin-type cloud microphysics model based on the JMA NHM model (Iguchi et al., 2008; Sato et al., 2009, 2012) with horizontal (vertical) grid spacings of 100 m (20 m) and 300 m (20 m) in a domain of 30 km (x), 30 km (y), 1.5 km (z) and with a horizontally periodic lateral boundary condition. Two different cell systems were simulated depending on the cloud condensation nuclei (CCN) concentration. In the case of the 100 m horizontal resolution, the regionally averaged cloud optical thickness (COT) and its standard deviation were 3.0 and 4.3 for the pristine case and 8.5 and 7.4 for the polluted case, respectively. In the MIDPM method, we first construct a library of pairs of observed vertical profiles from the active sensors and collocated imager products at the nadir footprint, i.e. spectral imager radiances, cloud optical thickness (COT), effective particle radius (RE) and cloud top temperature (Tc). We then select a best
Advanced Mesh-Enabled Monte Carlo Capability for Multi-Physics Reactor Analysis
Energy Technology Data Exchange (ETDEWEB)
Wilson, Paul; Evans, Thomas; Tautges, Tim
2012-12-24
This project will accumulate high-precision fluxes throughout reactor geometry on a non-orthogonal grid of cells to support multi-physics coupling, in order to more accurately calculate parameters such as reactivity coefficients and to generate multi-group cross sections. This work will be based upon recent developments to incorporate advanced geometry and mesh capability in a modular Monte Carlo toolkit with computational science technology that is in use in related reactor simulation software development. Coupling this capability with production-scale Monte Carlo radiation transport codes can provide advanced and extensible test-beds for these developments. Continuous energy Monte Carlo methods are generally considered to be the most accurate computational tool for simulating radiation transport in complex geometries, particularly neutron transport in reactors. Nevertheless, there are several limitations for their use in reactor analysis. Most significantly, there is a trade-off between the fidelity of results in phase space, statistical accuracy, and the amount of computer time required for simulation. Consequently, to achieve an acceptable level of statistical convergence in high-fidelity results required for modern coupled multi-physics analysis, the required computer time makes Monte Carlo methods prohibitive for design iterations and detailed whole-core analysis. More subtly, the statistical uncertainty is typically not uniform throughout the domain, and the simulation quality is limited by the regions with the largest statistical uncertainty. In addition, the formulation of neutron scattering laws in continuous energy Monte Carlo methods makes it difficult to calculate adjoint neutron fluxes required to properly determine important reactivity parameters. Finally, most Monte Carlo codes available for reactor analysis have relied on orthogonal hexahedral grids for tallies that do not conform to the geometric boundaries and are thus generally not well
Reliability analysis of tunnel surrounding rock stability by Monte-Carlo method
Institute of Scientific and Technical Information of China (English)
XI Jia-mi; YANG Geng-she
2008-01-01
Discussed are the advantages of the improved Monte-Carlo method and the feasibility of the proposed approach for the reliability analysis of tunnel surrounding rock stability. On the basis of a deterministic analysis of the tunnel surrounding rock, a reliability computing method for surrounding rock stability was derived from the improved Monte-Carlo method. The computing method treats the related parameters as random variables and accounts for the correlation among them. The proposed method can reasonably determine the reliability of surrounding rock stability. Calculation results show that this method is a scientific method for discriminating and checking surrounding rock stability.
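The core of such a reliability computation is a Monte-Carlo estimate of the failure probability of a performance function. A minimal sketch follows; the resistance/load statistics are hypothetical, not the paper's tunnel parameters:

```python
import random, statistics

def failure_probability(n=200_000, seed=42):
    """MC estimate of P(g < 0) for the performance function g = R - S,
    with normal resistance R and load S (numbers are hypothetical)."""
    random.seed(seed)
    mu_r, sd_r = 10.0, 1.5
    mu_s, sd_s = 6.0, 1.0
    failures = sum(
        1 for _ in range(n)
        if random.gauss(mu_r, sd_r) - random.gauss(mu_s, sd_s) < 0.0
    )
    return failures / n

pf = failure_probability()
beta = -statistics.NormalDist().inv_cdf(pf)   # reliability index
print(round(pf, 4), round(beta, 2))           # analytic beta = 4/sqrt(3.25) ≈ 2.22
```

For this linear performance function with independent normal variables the answer is available in closed form, which makes it a convenient check of the sampling machinery before moving to correlated or non-normal parameters.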
Dumenci, Levent; Windle, Michael
2001-01-01
Used Monte Carlo methods to evaluate the adequacy of cluster analysis to recover group membership based on simulated latent growth curve (LGC) models. Cluster analysis failed to recover growth subtypes adequately when the difference between growth curves was shape only. Discusses circumstances under which it was more successful. (SLD)
Maucec, M.; Rigollet, C.
2004-01-01
The performance of a detection system based on the pulsed fast/thermal neutron analysis technique was assessed using Monte Carlo simulations. The aim was to develop and implement simulation methods, to support and advance the data analysis techniques of the characteristic gamma-ray spectra, potentia
A Markov Chain Monte Carlo Approach to Confirmatory Item Factor Analysis
Edwards, Michael C.
2010-01-01
Item factor analysis has a rich tradition in both the structural equation modeling and item response theory frameworks. The goal of this paper is to demonstrate a novel combination of various Markov chain Monte Carlo (MCMC) estimation routines to estimate parameters of a wide variety of confirmatory item factor analysis models. Further, I show…
Barbeiro, A. R.; Ureba, A.; Baeza, J. A.; Linares, R.; Perucha, M.; Jiménez-Ortega, E.; Velázquez, S.; Mateos, J. C.
2016-01-01
A model based on a specific phantom, called QuAArC, has been designed for the evaluation of planning and verification systems of complex radiotherapy treatments, such as volumetric modulated arc therapy (VMAT). This model uses the high accuracy provided by the Monte Carlo (MC) simulation of log files and allows the experimental feedback from the high spatial resolution of films hosted in QuAArC. This cylindrical phantom was specifically designed to host films rolled at different radial distances able to take into account the entrance fluence and the 3D dose distribution. Ionization chamber measurements are also included in the feedback process for absolute dose considerations. In this way, automated MC simulation of treatment log files is implemented to calculate the actual delivery geometries, while the monitor units are experimentally adjusted to reconstruct the dose-volume histogram (DVH) on the patient CT. Prostate and head and neck clinical cases, previously planned with Monaco and Pinnacle treatment planning systems and verified with two different commercial systems (Delta4 and COMPASS), were selected in order to test operational feasibility of the proposed model. The proper operation of the feedback procedure was proved through the achieved high agreement between reconstructed dose distributions and the film measurements (global gamma passing rates > 90% for the 2%/2 mm criteria). The necessary discretization level of the log file for dose calculation and the potential mismatching between calculated control points and detection grid in the verification process were discussed. Besides the effect of dose calculation accuracy of the analytic algorithm implemented in treatment planning systems for a dynamic technique, it was discussed the importance of the detection density level and its location in VMAT specific phantom to obtain a more reliable DVH in the patient CT. The proposed model also showed enough robustness and efficiency to be considered as a pre
Ainscow, E K; Brand, M D
1998-09-21
The errors associated with experimental application of metabolic control analysis are difficult to assess. In this paper, we give examples where Monte-Carlo simulations of published experimental data are used in error analysis. Data was simulated according to the mean and error obtained from experimental measurements and the simulated data was used to calculate control coefficients. Repeating the simulation 500 times allowed an estimate to be made of the error implicit in the calculated control coefficients. In the first example, state 4 respiration of isolated mitochondria, Monte-Carlo simulations based on the system elasticities were performed. The simulations gave error estimates similar to the values reported within the original paper and those derived from a sensitivity analysis of the elasticities. This demonstrated the validity of the method. In the second example, state 3 respiration of isolated mitochondria, Monte-Carlo simulations were based on measurements of intermediates and fluxes. A key feature of this simulation was that the distribution of the simulated control coefficients did not follow a normal distribution, despite simulation of the original data being based on normal distributions. Consequently, the error calculated using simulation was greater and more realistic than the error calculated directly by averaging the original results. The Monte-Carlo simulations are also demonstrated to be useful in experimental design. The individual data points that should be repeated in order to reduce the error in the control coefficients can be highlighted.
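The resampling scheme described above can be sketched as follows; the elasticity values and the two-step control-coefficient formula C1 = e2/(e2 - e1) are illustrative stand-ins, not the paper's measurements:

```python
import random, statistics

def simulate_control_coefficient(n_sim=500, seed=7):
    """Resample elasticities from mean +/- SD and recompute the flux
    control coefficient each time (all numbers are hypothetical)."""
    random.seed(seed)
    e1_mean, e1_sd = -1.2, 0.2    # elasticity of step 1 to the intermediate
    e2_mean, e2_sd = 0.8, 0.15    # elasticity of step 2 to the intermediate
    samples = []
    for _ in range(n_sim):
        e1 = random.gauss(e1_mean, e1_sd)
        e2 = random.gauss(e2_mean, e2_sd)
        samples.append(e2 / (e2 - e1))   # two-step pathway: C1 = e2/(e2 - e1)
    return samples

c = simulate_control_coefficient()
print(round(statistics.mean(c), 3), round(statistics.stdev(c), 3))
```

Inspecting the whole empirical distribution of the 500 recomputed coefficients, rather than just its standard deviation, is exactly what let the authors detect the non-normal error distributions mentioned in the abstract.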
Energy Technology Data Exchange (ETDEWEB)
Marcus, Ryan C. [Los Alamos National Laboratory
2012-07-25
MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.
Energy Technology Data Exchange (ETDEWEB)
Palau, J.M. [CEA Cadarache, Service de Physique des Reacteurs et du Cycle, Lab. de Projets Nucleaires, 13 - Saint-Paul-lez-Durance (France)
2005-07-01
This paper presents how Monte-Carlo calculations (French TRIPOLI4 poly-kinetic code with an appropriate pre-processing and post-processing software called OVNI) are used in the case of 3-dimensional heterogeneous benchmarks (slab reactor cores) to reduce model biases and enable a thorough and detailed analysis of the performances of deterministic methods and their associated data libraries with respect to key neutron parameters (reactivity, local power). Outstanding examples of application of these tools are presented regarding the new numerical methods implemented in the French lattice code APOLLO2 (advanced self-shielding models, new IDT characteristics method implemented within the discrete-ordinates flux solver model) and the JEFF3.1 nuclear data library (checked against the previous JEF2.2 file). In particular we have pointed out, by performing multigroup/point-wise TRIPOLI4 (assembly and core) calculations, the efficiency (in terms of accuracy and computation time) of the new IDT method developed in APOLLO2. In addition, by performing 3-dimensional TRIPOLI4 calculations of the whole slab core (a few million elementary volumes), the high quality of the new JEFF3.1 nuclear data files and revised evaluations (U{sup 235}, U{sup 238}, Hf) for reactivity prediction of slab core critical experiments has been stressed. As a feedback of the whole validation process, improvements in terms of nuclear data (mainly Hf capture cross-sections) and numerical methods (advanced quadrature formulas accounting for validation results, validation of new self-shielding models, parallelization) are suggested to improve even more the APOLLO2-CRONOS2 standard calculation route. (author)
Energy Technology Data Exchange (ETDEWEB)
Chan, Mark K.H. [Tuen Mun Hospital, Department of Clinical Oncology, Hong Kong (S.A.R) (China); Werner, Rene [The University Medical Center Hamburg-Eppendorf, Department of Computational Neuroscience, Hamburg (Germany); Ayadi, Miriam [Leon Berard Cancer Center, Department of Radiation Oncology, Lyon (France); Blanck, Oliver [University Clinic of Schleswig-Holstein, Department of Radiation Oncology, Luebeck (Germany); CyberKnife Center Northern Germany, Guestrow (Germany)
2014-09-20
To investigate the adequacy of three-dimensional (3D) Monte Carlo (MC) optimization (3DMCO) and the potential of four-dimensional (4D) dose renormalization (4DMC{sub renorm}) and optimization (4DMCO) for CyberKnife (Accuray Inc., Sunnyvale, CA) radiotherapy planning in lung cancer. For 20 lung tumors, 3DMCO and 4DMCO plans were generated with planning target volume (PTV{sub 5} {sub mm}) = gross tumor volume (GTV) plus 5 mm, assuming 3 mm for tracking errors (PTV{sub 3} {sub mm}) and 2 mm for residual organ deformations. Three fractions of 60 Gy were prescribed to ≥ 95 % of the PTV{sub 5} {sub mm}. Each 3DMCO plan was recalculated by 4D MC dose calculation (4DMC{sub recal}) to assess the dosimetric impact of organ deformations. The 4DMC{sub recal} plans were renormalized (4DMC{sub renorm}) to 95 % dose coverage of the PTV{sub 5} {sub mm} for comparisons with the 4DMCO plans. A 3DMCO plan was considered adequate if the 4DMC{sub recal} plan showed ≥ 95 % of the PTV{sub 3} {sub mm} receiving 60 Gy and doses to other organs at risk (OARs) were below the limits. In seven lesions, 3DMCO was inadequate, providing < 95 % dose coverage to the PTV{sub 3} {sub mm}. Comparison of 4DMC{sub recal} and 3DMCO plans showed that organ deformations resulted in lower OAR doses. Renormalizing the 4DMC{sub recal} plans could produce OAR doses higher than the tolerances in some 4DMC{sub renorm} plans. Dose conformity of the 4DMC{sub renorm} plans was inferior to that of the 3DMCO and 4DMCO plans. The 4DMCO plans did not always achieve OAR dose reductions compared to 3DMCO and 4DMC{sub renorm} plans. This study indicates that 3DMCO with 2 mm margins for organ deformations may be inadequate for CyberKnife-based lung stereotactic body radiotherapy (SBRT). Renormalizing the 4DMC{sub recal} plans could produce degraded dose conformity and increased OAR doses; 4DMCO can resolve this problem. (orig.)
Holm, C
1992-01-01
We report measurements of the critical exponents of the classical three-dimensional Heisenberg model on simple cubic lattices of size $L^3$ with $L$ = 12, 16, 20, 24, 32, 40, and 48. The data were obtained from a few long single-cluster Monte Carlo simulations near the phase transition. We compute high precision estimates of the critical coupling $K_c$, Binder's parameter $U^*$, and the critical exponents
Comparative Criticality Analysis of Two Monte Carlo Codes on Centrifugal Atomizer: MCNP5 and SCALE
Energy Technology Data Exchange (ETDEWEB)
Kang, H-S; Jang, M-S; Kim, S-R [NESS, Daejeon (Korea, Republic of); Park, J-M; Kim, K-N [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2015-10-15
There are two well-known Monte Carlo codes for criticality analysis, MCNP5 and SCALE. MCNP5 is a general-purpose Monte Carlo N-Particle code that can be used for neutron, photon, electron or coupled neutron/photon/electron transport, including the capability to calculate eigenvalues for critical systems, as a main analysis code. SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor physics, radiation shielding, radioactive source term characterization, and sensitivity and uncertainty analysis. SCALE was conceived and funded by the US NRC to perform standardized computer analyses for licensing evaluation and is used widely in the world. We performed a validation test of MCNP5 and a comparative analysis of the Monte Carlo codes MCNP5 and SCALE in terms of the criticality analysis of a centrifugal atomizer. In the criticality analysis using the MCNP5 code, we obtained statistically reliable results by using a large number of source histories per cycle and performing an uncertainty analysis.
SAFETY ANALYSIS AND RISK ASSESSMENT FOR BRIDGES HEALTH MONITORING WITH MONTE CARLO METHODS
2016-01-01
With the increasing requirements for building safety in the past few decades, health monitoring and risk assessment of structures have become more and more important. Especially since traffic loads are heavier, risk assessment for bridges is essential. In this paper we take advantage of Monte Carlo methods to analyze the safety of bridges and to monitor the risk of destructive failure. One main goal of health monitoring is to reduce the risk of unexpected damage of artificial objects
Further analysis of multilevel Monte Carlo methods for elliptic PDEs with random coefficients
Teckentrup, A. L.; Scheichl, R.; Giles, M. B.; Ullmann, E
2012-01-01
We consider the application of multilevel Monte Carlo methods to elliptic PDEs with random coefficients. We focus on models of the random coefficient that lack uniform ellipticity and boundedness with respect to the random parameter, and that only have limited spatial regularity. We extend the finite element error analysis for this type of equation, carried out recently by Charrier, Scheichl and Teckentrup, to more difficult problems, posed on non-smooth domains and with discontinuities in t...
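The multilevel idea itself can be sketched on a much simpler toy problem: estimating E[S_T] for an Euler-discretized geometric Brownian motion, with fine and coarse paths coupled through shared Brownian increments. All parameters below are illustrative:

```python
import math, random

def euler_terminal(dws, dt, s0=1.0, r=0.05, sigma=0.2):
    """Terminal value of an Euler-discretized geometric Brownian motion."""
    s = s0
    for dw in dws:
        s += r * s * dt + sigma * s * dw
    return s

def mlmc_estimate(levels=4, n_per_level=20000, T=1.0, seed=3):
    """Telescoping MLMC estimator of E[S_T]: level 0 plus coupled
    fine-minus-coarse corrections that share Brownian increments."""
    random.seed(seed)
    total = 0.0
    for lev in range(levels):
        nf = 2 ** lev
        dt = T / nf
        acc = 0.0
        for _ in range(n_per_level):
            dws = [random.gauss(0.0, math.sqrt(dt)) for _ in range(nf)]
            fine = euler_terminal(dws, dt)
            if lev == 0:
                acc += fine
            else:
                # coarse path reuses the same randomness, pairwise summed
                coarse_dws = [dws[2 * i] + dws[2 * i + 1] for i in range(nf // 2)]
                acc += fine - euler_terminal(coarse_dws, 2.0 * dt)
        total += acc / n_per_level
    return total

est = mlmc_estimate()
print(round(est, 3))  # analytic value: exp(0.05) ≈ 1.051
```

The variance of each fine-minus-coarse correction shrinks as the levels refine, so most samples can be spent on the cheap coarse levels; that cost shift is the entire point of MLMC, whether the fine solver is an SDE step or a finite element solve.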
Directory of Open Access Journals (Sweden)
Xisheng Yu
2014-01-01
The paper by Liu (2010) introduces a method termed the canonical least-squares Monte Carlo (CLM), which combines a martingale-constrained entropy model and a least-squares Monte Carlo algorithm to price American options. In this paper, we first provide the convergence results of CLM and numerically examine the convergence properties. Then, a comparative analysis is empirically conducted using a large sample of the S&P 100 Index (OEX) puts and IBM puts. The results on the convergence show that choosing the shifted Legendre polynomials with four regressors is more appropriate considering the pricing accuracy and the computational cost. With this choice, the CLM method is empirically demonstrated to be superior to the benchmark methods of binomial tree and finite difference with historical volatilities.
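The least-squares Monte Carlo machinery underlying CLM can be sketched with a plain Longstaff-Schwartz pricer for an American put. The regression basis is deliberately reduced to {1, x} for brevity (the paper favors four shifted Legendre regressors), and all market parameters are hypothetical:

```python
import math, random

def linear_fit(xs, ys):
    """Closed-form least squares for y ≈ a + b*x (basis {1, x})."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx if sxx else 0.0
    return my - b * mx, b

def lsm_american_put(s0=1.0, k=1.0, r=0.06, sigma=0.2, T=1.0,
                     n_paths=10000, n_steps=50, seed=11):
    """Longstaff-Schwartz least-squares Monte Carlo for an American put."""
    random.seed(seed)
    dt = T / n_steps
    drift, vol = (r - 0.5 * sigma ** 2) * dt, sigma * math.sqrt(dt)
    paths = []
    for _ in range(n_paths):
        s, path = s0, [s0]
        for _ in range(n_steps):
            s *= math.exp(drift + vol * random.gauss(0.0, 1.0))
            path.append(s)
        paths.append(path)
    disc = math.exp(-r * dt)
    value = [max(k - p[-1], 0.0) for p in paths]      # payoff at maturity
    for t in range(n_steps - 1, 0, -1):
        cont = [v * disc for v in value]              # discounted to step t
        itm = [i for i in range(n_paths) if k > paths[i][t]]
        if len(itm) > 2:
            a, b = linear_fit([paths[i][t] for i in itm],
                              [cont[i] for i in itm])
            for i in itm:
                exercise = k - paths[i][t]
                if exercise > a + b * paths[i][t]:    # exercise beats continuation
                    cont[i] = exercise
        value = cont
    return sum(value) / n_paths * disc                # discount step 1 -> 0

price = lsm_american_put()
print(round(price, 3))
```

The regression is fitted only on in-the-money paths, and realized (not fitted) cashflows are carried backward, both standard features of the Longstaff-Schwartz scheme that richer bases such as the shifted Legendre polynomials refine.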
Stolarski, R. S.; Butler, D. M.; Rundel, R. D.
1977-01-01
A concise stratospheric model was used in a Monte-Carlo analysis of the propagation of reaction rate uncertainties through the calculation of an ozone perturbation due to the addition of chlorine. Two thousand Monte-Carlo cases were run with 55 reaction rates being varied. Excellent convergence was obtained in the output distributions because the model is sensitive to the uncertainties in only about 10 reactions. For a 1 ppbv chlorine perturbation added to a 1.5 ppbv chlorine background, the resultant 1 sigma uncertainty on the ozone perturbation is a factor of 1.69 on the high side and 1.80 on the low side. The corresponding 2 sigma factors are 2.86 and 3.23. Results are also given for the uncertainties, due to reaction rates, in the ambient concentrations of stratospheric species.
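Propagating multiplicative rate uncertainties of this kind can be sketched as follows; the three uncertainty factors and the toy sensitivity are hypothetical, not the study's 55-reaction model:

```python
import math, random, statistics

def propagate_rate_uncertainty(n=20000, seed=5):
    """Sample multiplicative (lognormal) rate uncertainties and propagate
    them through a toy sensitivity o3_change ∝ k1*k2/k3; the 1-sigma
    uncertainty *factors* below are hypothetical, not the paper's values."""
    random.seed(seed)
    f1, f2, f3 = 1.3, 1.4, 1.25      # 1-sigma uncertainty factors
    logs = []
    for _ in range(n):
        k1 = random.lognormvariate(0.0, math.log(f1))
        k2 = random.lognormvariate(0.0, math.log(f2))
        k3 = random.lognormvariate(0.0, math.log(f3))
        logs.append(math.log(k1 * k2 / k3))
    return math.exp(statistics.stdev(logs))   # uncertainty factor of result

factor = propagate_rate_uncertainty()
print(round(factor, 2))  # analytic: exp(sqrt(sum of ln(fi)^2)) ≈ 1.62
```

Quoting the result as a multiplicative factor (e.g. "1.69 high, 1.80 low") follows naturally from this lognormal treatment, since rate-constant uncertainties compound in the log domain.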
Monte Carlo Calculation for Landmine Detection using Prompt Gamma Neutron Activation Analysis
Energy Technology Data Exchange (ETDEWEB)
Park, Seungil; Kim, Seong Bong; Yoo, Suk Jae [Plasma Technology Research Center, Gunsan (Korea, Republic of); Shin, Sung Gyun; Cho, Moohyun [POSTECH, Pohang (Korea, Republic of); Han, Seunghoon; Lim, Byeongok [Samsung Thales, Yongin (Korea, Republic of)
2014-05-15
Identification and demining of landmines are very important issues for the safety of people and for economic development. To solve the issue, several methods have been proposed in the past. In Korea, the National Fusion Research Institute (NFRI) is developing a landmine detector using prompt gamma neutron activation analysis (PGNAA) as a part of the complex sensor-based landmine detection system. In this paper, the Monte Carlo calculation results for this system are presented. A Monte Carlo calculation was carried out for the design of the landmine detector using PGNAA. To consider the soil effect, the average soil composition was analyzed and applied to the calculation. These results have been used to determine the specification of the landmine detector.
Monte Carlo Analysis as a Trajectory Design Driver for the TESS Mission
Nickel, Craig; Lebois, Ryan; Lutz, Stephen; Dichmann, Donald; Parker, Joel
2016-01-01
The Transiting Exoplanet Survey Satellite (TESS) will be injected into a highly eccentric Earth orbit and fly 3.5 phasing loops followed by a lunar flyby to enter a mission orbit with lunar 2:1 resonance. Through the phasing loops and mission orbit, the trajectory is significantly affected by lunar and solar gravity. We have developed a trajectory design to achieve the mission orbit and meet mission constraints, including eclipse avoidance and a 30-year geostationary orbit avoidance requirement. A parallelized Monte Carlo simulation was performed to validate the trajectory after injecting common perturbations, including launch dispersions, orbit determination errors, and maneuver execution errors. The Monte Carlo analysis helped identify mission risks and is used in the trajectory selection process.
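A dispersion-style Monte Carlo of the kind described above can be sketched in a few lines; the error magnitudes, the miss sensitivity, and the 15 km limit are illustrative placeholders, not TESS values:

```python
import random

def dispersion_analysis(n_cases=10000, seed=9):
    """Toy dispersion Monte Carlo: perturb the injection state and the
    correction maneuver, propagate a linearized miss distance, and count
    constraint violations. All numbers are illustrative, not TESS values."""
    random.seed(seed)
    miss_limit = 15.0        # km, hypothetical constraint
    sensitivity = 120.0      # km of miss per m/s of residual velocity error
    violations = 0
    for _ in range(n_cases):
        injection_err = random.gauss(0.0, 1.5)   # m/s launch dispersion
        od_err = random.gauss(0.0, 0.3)          # m/s orbit-determination error
        # a correction maneuver removes ~98% of the error,
        # then adds its own execution noise
        residual = 0.02 * (injection_err + od_err) + random.gauss(0.0, 0.05)
        if abs(residual) * sensitivity > miss_limit:
            violations += 1
    return violations / n_cases

rate = dispersion_analysis()
print(rate)
```

In a real campaign the linearized miss model is replaced by full trajectory propagation per sample, which is why the abstract's simulation was parallelized.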
Number of iterations needed in Monte Carlo Simulation using reliability analysis for tunnel supports
Directory of Open Access Journals (Sweden)
E. Bukaçi
2016-06-01
There are many methods in geotechnical engineering which could take advantage of Monte Carlo simulation to establish the probability of failure, since closed-form solutions are almost impossible to use in most cases. The problem that arises with using Monte Carlo simulation is the number of iterations needed for a particular simulation. This article shows why it is important to calculate the number of iterations needed for a Monte Carlo simulation used in reliability analysis for tunnel supports with the convergence-confinement method. The number of iterations needed is calculated with two methods. In the first method, the analyst has to assume a distribution function for the performance function. The other method suggested by this article is to calculate the number of iterations based on the convergence of the quantity the analyst is interested in. Reliability analysis is performed for the diversion tunnel in Rrëshen, Albania, using both methods, and the results are compared
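The article's second method, stopping once the quantity of interest has converged, can be sketched as a confidence-interval-based stopping rule; the performance function and its statistics below are hypothetical:

```python
import math, random

def mc_until_converged(g, sampler, rel_tol=0.05, z=1.96,
                       batch=10000, min_iter=20000, max_iter=2_000_000):
    """Draw samples until the 95% confidence half-width of the estimated
    failure probability drops below rel_tol of the estimate itself."""
    n = fails = 0
    while n < max_iter:
        fails += sum(1 for _ in range(batch) if g(sampler()) < 0.0)
        n += batch
        if n >= min_iter and fails > 0:
            p = fails / n
            if z * math.sqrt(p * (1.0 - p) / n) <= rel_tol * p:
                break
    return fails / n, n

random.seed(2)
# hypothetical performance function g = capacity - load for a tunnel support
pf, n_used = mc_until_converged(
    g=lambda x: x[0] - x[1],
    sampler=lambda: (random.gauss(8.0, 1.0), random.gauss(5.0, 1.0)),
)
print(round(pf, 4), n_used)
```

Note how the required iteration count grows roughly as (1 - p)/p for small failure probabilities, which is precisely why fixing the count in advance (the first method) needs an assumed distribution.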
Stanica, Nicolae; Cimpoesu, Fanica; Radu, Cosmin; Chihaia, Viorel; Suh, Soong-Hyuck
2015-01-01
For systematic investigations of magnetic behavior and its related properties, computer simulations of extended quantum spin networks have been performed via the generalized Ising model using the Monte Carlo Metropolis algorithm, with proven efficiency. The present work, starting from a real magnetic system, provides detailed insights into the finite-size effects and the ferrimagnetic properties in various 1D, 2D and 3D geometries, such as the magnetic moment, ordering temperature, and magnetocaloric effects, with different values of spins localized on the different coordinated sites.
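The Metropolis scheme referenced above can be sketched for the simplest case, the ferromagnetic 2D Ising model (the paper treats a generalized, ferrimagnetic variant); the lattice size and temperature are illustrative:

```python
import math, random

def metropolis_sweep(spins, L, beta, rng):
    """One Metropolis sweep of the 2D Ising model (J = 1, periodic b.c.)."""
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2.0 * spins[i][j] * nb          # energy cost of flipping (i, j)
        if dE <= 0.0 or rng.random() < math.exp(-beta * dE):
            spins[i][j] *= -1

def magnetization(L=16, beta=0.6, sweeps=400, seed=4):
    rng = random.Random(seed)
    spins = [[1] * L for _ in range(L)]      # ordered start
    for _ in range(sweeps):
        metropolis_sweep(spins, L, beta, rng)
    return abs(sum(map(sum, spins))) / (L * L)

m = magnetization()
print(round(m, 2))  # beta = 0.6 is in the ordered phase (beta_c ≈ 0.44)
```

Sweeping beta across the critical point and recording the magnetization and energy per spin is how finite-size effects and ordering temperatures, as in the abstract, are extracted.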
Mission Command Analysis Using Monte Carlo Tree Search
2013-06-14
Lessons learned: when implementing MCTS into partially observable games, we must be able to produce a fully-specified state based on available information. (See Ohman, COMBATXXI, defined, COMBAT XXI online documentation, Training and Doctrine Command Analysis Center, White Sands Missile Range (TRAC-WSMR), Martin Luther King Drive, White Sands Missile Range, NM 88002-5502, 28 September 2011.)
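An MCTS/UCT loop can be sketched on a toy, fully observable game (Nim: take 1-3 stones, taking the last stone wins); this is a generic illustration, not the report's COMBATXXI integration:

```python
import math, random

class Node:
    def __init__(self, stones, player, parent=None, move=None):
        self.stones, self.player = stones, player     # `player` moves next
        self.parent, self.move = parent, move
        self.children, self.visits, self.wins = [], 0, 0.0
        self.untried = [m for m in (1, 2, 3) if m <= stones]

def uct_select(node, c=1.4):
    """Pick the child maximizing the UCT upper-confidence score."""
    return max(node.children,
               key=lambda ch: ch.wins / ch.visits
               + c * math.sqrt(math.log(node.visits) / ch.visits))

def rollout(stones, player, rng):
    """Random playout: the player who takes the last stone wins."""
    while stones > 0:
        stones -= rng.choice([m for m in (1, 2, 3) if m <= stones])
        player = 1 - player
    return 1 - player          # the player who just moved took the last stone

def mcts(stones, iters=4000, seed=0):
    rng = random.Random(seed)
    root = Node(stones, player=0)
    for _ in range(iters):
        node = root
        while not node.untried and node.children:          # selection
            node = uct_select(node)
        if node.untried:                                   # expansion
            m = node.untried.pop(rng.randrange(len(node.untried)))
            child = Node(node.stones - m, 1 - node.player, node, m)
            node.children.append(child)
            node = child
        winner = (1 - node.player if node.stones == 0      # simulation
                  else rollout(node.stones, node.player, rng))
        while node:                                        # backpropagation
            node.visits += 1
            if node.parent and node.parent.player == winner:
                node.wins += 1                             # credit the mover
            node = node.parent
    return max(root.children, key=lambda ch: ch.visits).move

print(mcts(5), mcts(6))  # optimal Nim play leaves a multiple of 4
```

The partial-observability lesson quoted above corresponds to the expansion/simulation steps here: both require a fully specified state, so hidden information must first be filled in (e.g. by sampling a determinization) before a playout can run.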
Elbast, M; Saudo, A; Franck, D; Petitot, F; Desbrée, A
2012-07-01
Microdosimetry using Monte Carlo simulation is a suitable technique to describe the stochastic nature of energy deposition by alpha particles at the cellular level. Because of its short range, the energy imparted by this particle to the targets is highly non-uniform. Thus, to achieve accurate dosimetric results, the modelling of the geometry should be as realistic as possible. The objectives of the present study were to validate the use of the MCNPX and Geant4 Monte Carlo codes for microdosimetric studies using simple and three-dimensional voxelised geometries and to study their limit of validity in the latter case. To that aim, the specific energy (z) deposited in the cell nucleus, the single-hit density of specific energy f(1)(z) and the mean specific energy were calculated. Results show a good agreement when compared with the literature using simple geometry. The maximum percentage difference found is …; the calculation time is 10 times higher with Geant4 than with the MCNPX code under the same conditions.
Energy Technology Data Exchange (ETDEWEB)
Fallahpoor, M; Abbasi, M [Tehran University of Medical Sciences, Vali-Asr Hospital, Tehran, Tehran (Iran, Islamic Republic of); Sen, A [University of Houston, Houston, TX (United States); Parach, A [Shahid Sadoughi University of Medical Sciences, Yazd, Yazd (Iran, Islamic Republic of); Kalantari, F [UT Southwestern Medical Center, Dallas, TX (United States)
2015-06-15
Purpose: Patient-specific 3-dimensional (3D) internal dosimetry in targeted radionuclide therapy is essential for efficient treatment. Two major steps to achieve reliable results are: 1) generating quantitative 3D images of the radionuclide distribution and attenuation coefficients and 2) using a reliable method for dose calculation based on the activity and attenuation maps. In this research, internal dosimetry for 153-Samarium (153-Sm) was performed with SPECT-CT images coupled to the GATE Monte Carlo package. Methods: A 50-year-old woman with bone metastases from breast cancer was prescribed 153-Sm treatment (gamma: 103 keV; beta: 0.81 MeV). A SPECT/CT scan was performed with the Siemens Symbia T scanner. SPECT and CT images were registered using the default registration software. SPECT quantification was achieved by compensating for all image-degrading factors, including body attenuation, Compton scattering and the collimator-detector response (CDR). The triple energy window method was used to estimate and eliminate the scattered photons. Iterative ordered-subsets expectation maximization (OSEM) with correction for attenuation and distance-dependent CDR was used for image reconstruction. Bilinear energy mapping was used to convert Hounsfield units in the CT image to the attenuation map. Organ borders were defined by segmentation with the itk-SNAP toolkit on the CT image. GATE was then used for internal dose calculation. The specific absorbed fractions (SAFs) and S-values were reported following the MIRD schema. Results: The results showed that the largest SAFs and S-values are in osseous organs, as expected. The S-value for the lung is the highest after the spine, which can be important in 153-Sm therapy. Conclusion: We presented the utility of SPECT-CT images and Monte Carlo simulation for patient-specific dosimetry as a reliable and accurate method. It has several advantages over template-based methods or simplified dose estimation methods. With the advent of high-speed computers, Monte Carlo can be used for treatment planning
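The triple energy window (TEW) scatter estimate mentioned in the methods can be sketched directly; the window widths and counts below are illustrative, not values from this study:

```python
def tew_scatter_correction(c_peak, c_lower, c_upper,
                           w_peak=30.0, w_lower=6.0, w_upper=6.0):
    """Triple-energy-window estimate (Ogawa-style): scatter inside the
    photopeak window is approximated by a trapezoid spanned by the two
    narrow flanking windows. Window widths (keV) are illustrative."""
    scatter = (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0
    primary = max(c_peak - scatter, 0.0)
    return primary, scatter

primary, scatter = tew_scatter_correction(10000.0, 800.0, 200.0)
print(round(primary), round(scatter))
```

Applied pixel-by-pixel before reconstruction, this is what "estimate and eliminate the scattered photons" amounts to in the quantification chain described above.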
A Monte Carlo based spent fuel analysis safeguards strategy assessment
Energy Technology Data Exchange (ETDEWEB)
Fensin, Michael L [Los Alamos National Laboratory; Tobin, Stephen J [Los Alamos National Laboratory; Swinhoe, Martyn T [Los Alamos National Laboratory; Menlove, Howard O [Los Alamos National Laboratory; Sandoval, Nathan P [Los Alamos National Laboratory
2009-01-01
assessment process, the techniques employed to automate the coupled facets of the assessment process, and the standard burnup/enrichment/cooling time dependent spent fuel assembly library. We also clearly define the diversion scenarios that will be analyzed during the standardized assessments. Though this study is currently limited to generic PWR assemblies, it is expected that the results of the assessment will yield an adequate spent fuel analysis strategy knowledge that will help the down-select process for other reactor types.
Kim, Kyeong-Hyeon; Kim, Dong-Su; Kim, Tae-Ho; Kang, Seong-Hee; Cho, Min-Seok; Suh, Tae Suk
2015-11-01
The phantom-alignment error is one of the factors affecting delivery quality assurance (QA) accuracy in intensity-modulated radiation therapy (IMRT). Accordingly, a possibility of inadequate use of spatial information in gamma evaluation may exist for patient-specific IMRT QA. The influence of the phantom-alignment error on gamma evaluation can be demonstrated experimentally by using the gamma passing rate and the gamma value. However, such experimental methods have a limitation regarding the intrinsic verification of the influence of the phantom set-up error, because measuring the phantom-alignment error accurately by experiment is impossible. To overcome this limitation, we aimed to verify the effect of the phantom set-up error within the gamma evaluation formula by using a Monte Carlo simulation. Artificial phantom set-up errors were simulated, and the concept of the true point (TP) was used to represent the actual coordinates of the measurement point for the mathematical modeling of these effects on the gamma value. Using dose distributions acquired from the Monte Carlo simulation, we performed gamma evaluations in 2D and 3D. The results of the gamma evaluations and the dose difference at the TP were classified to verify the degrees of dose reflection at the TP. The 2D and 3D gamma errors were defined by comparing gamma values between the case of the imposed phantom set-up error and the TP, in order to investigate the effect of the set-up error on the gamma value. According to the results for the gamma errors, the 3D gamma evaluation reflected the dose at the TP better than the 2D one. Moreover, the gamma passing rates were higher for 3D than for 2D, as is widely known. Thus, the 3D gamma evaluation can increase the precision of patient-specific IMRT QA by applying stringent acceptance criteria and setting a reasonable action level for the 3D gamma passing rate.
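The gamma formula underlying both the 2D and 3D evaluations combines a dose-difference criterion with a distance-to-agreement (DTA) criterion and takes the minimum over nearby evaluation points. A minimal 1-D illustration (illustrative only; clinical QA uses 2-D/3-D dose grids and criteria such as 3%/3 mm):

```python
import numpy as np

def gamma_index(ref, eval_, spacing, dta=3.0, dd=0.03):
    """Global gamma index on a 1-D dose profile (simplified sketch).

    ref, eval_ : 1-D dose arrays on the same grid
    spacing    : grid spacing in mm
    dta        : distance-to-agreement criterion in mm
    dd         : dose-difference criterion as a fraction of the max ref dose
    """
    x = np.arange(len(ref)) * spacing
    dmax = ref.max()
    gammas = np.empty(len(ref))
    for i, (xi, di) in enumerate(zip(x, ref)):
        dist2 = ((x - xi) / dta) ** 2          # normalized distance term
        dose2 = ((eval_ - di) / (dd * dmax)) ** 2  # normalized dose term
        gammas[i] = np.sqrt(np.min(dist2 + dose2))
    return gammas
```

The gamma passing rate is then the fraction of points with gamma <= 1; identical profiles give gamma = 0 everywhere.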
Nizenkov, Paul; Noeding, Peter; Konopka, Martin; Fasoulas, Stefanos
2017-03-01
The in-house direct simulation Monte Carlo solver PICLas, which enables parallel, three-dimensional simulations of rarefied gas flows, is verified and validated. Theoretical aspects of the method and the employed schemes are briefly discussed. The considered cases include simple reservoir simulations and complex re-entry geometries, which were selected from the literature and simulated with PICLas. First, the chemistry module is verified using simple numerical and analytical solutions. Second, simulation results of the rarefied gas flow around a 70° blunted cone, the REX Free-Flyer, as well as multiple points of the re-entry trajectory of the Orion capsule are presented in terms of drag and heat flux. A comparison with experimental measurements as well as other numerical results shows excellent agreement across the different simulation cases. An outlook on future code development and applications is given.
Momennezhad, Mehdi; Nasseri, Shahrokh; Zakavi, Seyed Rasoul; Parach, Ali Asghar; Ghorbani, Mahdi; Asl, Ruhollah Ghahraman
2016-01-01
Single-photon emission computed tomography (SPECT)-based tracers are easily available and more widely used than positron emission tomography (PET)-based tracers, and SPECT imaging still remains the most prevalent nuclear medicine imaging modality worldwide. The aim of this study is to implement an image-based Monte Carlo method for patient-specific three-dimensional (3D) absorbed dose calculation in patients after injection of (99m)Tc-hydrazinonicotinamide (hynic)-Tyr(3)-octreotide as a SPECT radiotracer. (99m)Tc patient-specific S values and the absorbed doses were calculated with the GATE code for each source-target organ pair in four patients who were imaged for suspected neuroendocrine tumors. Each patient underwent multiple whole-body planar scans as well as SPECT imaging over a period of 1-24 h after intravenous injection of (99m)Tc-hynic-Tyr(3)-octreotide. The patient-specific S values calculated by the GATE Monte Carlo code and the corresponding S values obtained by the MIRDOSE program differed within 4.3% on average for self-irradiation, and within 69.6% on average for cross-irradiation. However, the agreement between total organ doses calculated by the GATE code and the MIRDOSE program for all patients was reasonably good (the percentage difference was about 4.6% on average). Normal and tumor absorbed doses calculated with GATE were slightly higher than those calculated with the MIRDOSE program. The average ratio of GATE absorbed doses to MIRDOSE was 1.07 ± 0.11 (ranging from 0.94 to 1.36). According to the results, it is proposed that when cross-organ irradiation is dominant, a comprehensive approach such as GATE Monte Carlo dosimetry be used, since it provides more reliable dosimetric results.
Error propagation in the computation of volumes in 3D city models with the Monte Carlo method
Biljecki, F.; Ledoux, H.; Stoter, J.
2014-01-01
This paper describes the analysis of the propagation of positional uncertainty in 3D city models to the uncertainty in the computation of their volumes. Current work related to error propagation in GIS is limited to 2D data and 2D GIS operations, especially of rasters. In this research we have (1) d
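The Monte Carlo approach to propagating positional uncertainty into computed volumes amounts to repeatedly perturbing vertex coordinates and recomputing the volume. A toy example on a single tetrahedron (the paper works with full 3D city models; the i.i.d. Gaussian noise model here is an assumption for illustration):

```python
import numpy as np

def tet_volume(v):
    """Volume of a tetrahedron given its 4 vertices as a (4, 3) array."""
    a, b, c, d = v
    return abs(np.dot(np.cross(b - a, c - a), d - a)) / 6.0

def mc_volume_uncertainty(vertices, sigma, n=10000, rng=None):
    """Propagate i.i.d. Gaussian positional noise (std `sigma` per
    coordinate) through the volume computation by Monte Carlo; return
    the sample mean and standard deviation of the volume."""
    rng = np.random.default_rng(rng)
    vols = np.array([tet_volume(vertices + rng.normal(0, sigma, vertices.shape))
                     for _ in range(n)])
    return vols.mean(), vols.std()
```

For a unit tetrahedron (volume 1/6), small positional noise yields a volume distribution centered near 1/6 whose spread quantifies the propagated uncertainty.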
Monte-Carlo Analysis of the Flavour Changing Neutral Current b → sγ at BaBar
Energy Technology Data Exchange (ETDEWEB)
Smith, D. [Imperial College, London (United Kingdom)
2001-09-01
The main theme of this thesis is a Monte-Carlo analysis of the rare Flavour Changing Neutral Current (FCNC) decay b→sγ. The analysis develops techniques that could be applied to real data, to discriminate between signal and background events in order to make a measurement of the branching ratio of this rare decay using the BaBar detector. Also included in this thesis is a description of the BaBar detector and the work I have undertaken in the development of the electronic data acquisition system for the Electromagnetic calorimeter (EMC), a subsystem of the BaBar detector.
First Monte Carlo analysis of fragmentation functions from single-inclusive e+e- annihilation
Sato, Nobuo; Ethier, J. J.; Melnitchouk, W.; Hirai, M.; Kumano, S.; Accardi, A.; Jefferson Lab Angular Momentum Collaboration
2016-12-01
We perform the first iterative Monte Carlo (IMC) analysis of fragmentation functions constrained by all available data from single-inclusive e+e- annihilation into pions and kaons. The IMC method eliminates potential bias in traditional analyses based on single fits introduced by fixing parameters not well constrained by the data and provides a statistically rigorous determination of uncertainties. Our analysis reveals specific features of fragmentation functions using the new IMC methodology and those obtained from previous analyses, especially for light quarks and for strange quark fragmentation to kaons.
First Monte Carlo analysis of fragmentation functions from single-inclusive $e^+ e^-$ annihilation
Sato, N; Melnitchouk, W; Hirai, M; Kumano, S; Accardi, A
2016-01-01
We perform the first iterative Monte Carlo (IMC) analysis of fragmentation functions constrained by all available data from single-inclusive $e^+ e^-$ annihilation into pions and kaons. The IMC method eliminates potential bias in traditional analyses based on single fits introduced by fixing parameters not well constrained by the data and provides a statistically rigorous determination of uncertainties. Our analysis reveals specific features of fragmentation functions using the new IMC methodology and those obtained from previous analyses, especially for light quarks and for strange quark fragmentation to kaons.
A Monte Carlo Uncertainty Analysis of Ozone Trend Predictions in a Two Dimensional Model. Revision
Considine, D. B.; Stolarski, R. S.; Hollandsworth, S. M.; Jackman, C. H.; Fleming, E. L.
1998-01-01
We use Monte Carlo analysis to estimate the uncertainty in predictions of total O3 trends between 1979 and 1995 made by the Goddard Space Flight Center (GSFC) two-dimensional (2D) model of stratospheric photochemistry and dynamics. The uncertainty is caused by gas-phase chemical reaction rates, photolysis coefficients, and heterogeneous reaction parameters which are model inputs. The uncertainty represents a lower bound to the total model uncertainty, assuming the input parameter uncertainties are characterized correctly. Each of the Monte Carlo runs was initialized in 1970 and integrated for 26 model years through the end of 1995. This was repeated 419 times using input parameter sets generated by Latin Hypercube Sampling. The standard deviation (σ) of the Monte Carlo ensemble of total O3 trend predictions is used to quantify the model uncertainty. The 34% difference between the model trend in globally and annually averaged total O3 using nominal inputs and atmospheric trends calculated from Nimbus 7 and Meteor 3 total ozone mapping spectrometer (TOMS) version 7 data is less than the calculated 1σ model uncertainty of 46%, so there is no significant difference between the modeled and observed trends. In the northern hemisphere midlatitude spring the modeled and observed total O3 trends differ by more than 1σ but less than 2σ, which we refer to as marginal significance. We perform a multiple linear regression analysis of the runs which suggests that only a few of the model reactions contribute significantly to the variance in the model predictions. The lack of significance in these comparisons suggests that they are of questionable use as guides for continuing model development. Large model/measurement differences which are many multiples of the input parameter uncertainty are seen in the meridional gradients of the trend and the peak-to-peak variations in the trends over an annual cycle. These discrepancies unambiguously indicate model formulation
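The 419 input-parameter sets were generated by Latin Hypercube Sampling: each parameter's range is divided into equal-probability strata, one sample is drawn per stratum, and the strata are randomly paired across parameters. A minimal sketch on the unit hypercube (mapping to physical parameter distributions would be a further step):

```python
import numpy as np

def latin_hypercube(n_samples, n_params, rng=None):
    """Latin Hypercube Sampling on the unit hypercube: for each of the
    n_params dimensions, the unit interval is split into n_samples equal
    strata, exactly one point is drawn in each stratum, and the strata
    are randomly permuted independently per dimension."""
    rng = np.random.default_rng(rng)
    # one point per stratum: (U + stratum index) / n_samples
    u = (rng.random((n_samples, n_params)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_params):
        u[:, j] = u[rng.permutation(n_samples), j]  # decouple the dimensions
    return u
```

Each column then covers every stratum exactly once, which is what gives LHS better space-filling than plain random sampling for the same number of model runs.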
Full-Band Monte Carlo Analysis of Hot-Carrier Light Emission in GaAs
Ferretti, I.; Abramo, A.; Brunetti, R.; Jacobini, C.
1997-11-01
A computational analysis of light emission from hot carriers in GaAs due to direct intraband conduction-conduction (c-c) transitions is presented. The emission rates have been evaluated by means of a Full-Band Monte-Carlo simulator (FBMC). Results have been obtained for the emission rate as a function of the photon energy, for the emitted and absorbed light polarization along and perpendicular to the electric field direction. Comparison has been made with available experimental data in MESFETs.
Predictive uncertainty analysis of a saltwater intrusion model using null-space Monte Carlo
DEFF Research Database (Denmark)
Herckenrath, Daan; Langevin, Christian D.; Doherty, John
2011-01-01
Because of the extensive computational burden and perhaps a lack of awareness of existing methods, rigorous uncertainty analyses are rarely conducted for variable-density flow and transport models. For this reason, a recently developed null-space Monte Carlo (NSMC) method for quantifying prediction uncertainty was applied to a saltwater intrusion model. Random noise was added to the observations to approximate realistic field conditions. The NSMC method was used to calculate 1000 calibration-constrained parameter fields. If the dimensionality of the solution space was set appropriately, the estimated uncertainty range from the NSMC analysis encompassed ... The NSMC uncertainty estimate was compared with that computed through linear analysis. The results were in good agreement, with the NSMC method estimate showing a slightly smaller range of prediction uncertainty than was calculated by the linear method.
Use of Monte Carlo simulations for cultural heritage X-ray fluorescence analysis
Energy Technology Data Exchange (ETDEWEB)
Brunetti, Antonio, E-mail: brunetti@uniss.it [Polcoming Department, University of Sassari (Italy); Golosio, Bruno [Polcoming Department, University of Sassari (Italy); Schoonjans, Tom; Oliva, Piernicola [Chemical and Pharmaceutical Department, University of Sassari (Italy)
2015-06-01
The analytical study of Cultural Heritage objects often requires merely a qualitative determination of composition and manufacturing technology. However, sometimes a qualitative estimate is not sufficient, for example when dealing with multilayered metallic objects. Under such circumstances a quantitative estimate of the chemical contents of each layer is sometimes required in order to determine the technology that was used to produce the object. A quantitative analysis is often complicated by the surface state: roughness, corrosion, and incrustations that remain even after restoration, due to efforts to preserve the patina. Furthermore, restorers will often add a protective layer on the surface. In all these cases standard quantitative methods such as fundamental-parameter-based approaches are generally not applicable. An alternative approach is presented based on the use of Monte Carlo simulations for quantitative estimation. - Highlights: • We present an application of fast Monte Carlo codes for Cultural Heritage artifact analysis. • We show applications to complex multilayer structures. • The methods allow estimating both the composition and the thickness of multilayers, such as bronze with a patina. • The performance in terms of accuracy and uncertainty is described for the bronze samples.
Directory of Open Access Journals (Sweden)
Shchekoturova S. D.
2015-04-01
The article presents an analysis of the innovative activity of four Russian metallurgical enterprises: "Ruspolimet", JSC "Ural Smithy", JSC "Stupino Metallurgical Company", and JSC "VSMPO", via mathematical modeling using the Monte Carlo method. The innovative activity of the Russian metallurgical companies was assessed over a five-year period. The current innovative activity was assessed by calculating an integral index of innovative activity, based on six indicators analyzed from 2007 to 2011: the proportion of staff employed in R&D; the level of development of new technology; the degree of development of new products; the share of material resources for R&D; the degree of protection of enterprise intellectual property; and the share of investment in innovative projects. On the basis of these data, the integral indicator of the innovative activity of the metallurgical companies was calculated by the well-known method of weighting coefficients. The comparative analysis of the integral indicators of the considered companies made it possible to rank their levels of innovative activity and to characterize the current state of their business. Based on the Monte Carlo method, a variation interval of the integral indicator was obtained, and detailed recommendations for choosing the strategy of innovative development of metallurgical enterprises were given as well
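The integral indicator described above is a weighted sum of normalized indicators, and the Monte Carlo step yields a variation interval for it. A small sketch with illustrative uniform weights and a uniform relative-error model (both are assumptions for illustration, not the authors' exact choices):

```python
import numpy as np

def integral_indicator(indicators, weights):
    """Weighted-sum integral indicator of innovative activity.
    `indicators` and `weights` are equal-length sequences; weights sum to 1."""
    indicators = np.asarray(indicators, float)
    weights = np.asarray(weights, float)
    assert np.isclose(weights.sum(), 1.0)
    return float(indicators @ weights)

def mc_interval(indicators, weights, rel_err=0.1, n=20000, rng=None):
    """Monte Carlo variation interval: perturb each indicator with a
    uniform relative error and report the 5th/95th percentiles of the
    resulting integral indicator."""
    rng = np.random.default_rng(rng)
    ind = np.asarray(indicators, float)
    noise = rng.uniform(1 - rel_err, 1 + rel_err, (n, len(ind)))
    samples = (noise * ind) @ np.asarray(weights, float)
    return np.percentile(samples, [5, 95])
```

With all six indicators at 0.5 and uniform weights, the point estimate is 0.5 and the Monte Carlo interval brackets it symmetrically.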
Energy Technology Data Exchange (ETDEWEB)
O'Brien, M J; Procassini, R J; Joy, K I
2009-03-09
Validation of the problem definition and analysis of the results (tallies) produced during a Monte Carlo particle transport calculation can be a complicated, time-intensive process. The time required for a person to create an accurate, validated combinatorial geometry (CG) or mesh-based representation of a complex problem, free of common errors such as gaps and overlapping cells, can range from days to weeks. The ability to interrogate the internal structure of a complex, three-dimensional (3-D) geometry, prior to running the transport calculation, can improve the user's confidence in the validity of the problem definition. With regard to the analysis of results, the process of extracting tally data from printed tables within a file is laborious and not an intuitive approach to understanding the results. The ability to display tally information overlaid on top of the problem geometry can decrease the time required for analysis and increase the user's understanding of the results. To this end, our team has integrated VisIt, a parallel, production-quality visualization and data analysis tool, into Mercury, a massively parallel Monte Carlo particle transport code. VisIt provides an API for real-time visualization of a simulation as it is running. The user may select which plots to display from the VisIt GUI, or by sending VisIt a Python script from Mercury. The frequency at which plots are updated can be set, and the user can visualize the results as the simulation is running.
Monte Carlo simulation for slip rate sensitivity analysis in Cimandiri fault area
Energy Technology Data Exchange (ETDEWEB)
Pratama, Cecep, E-mail: great.pratama@gmail.com [Graduate Program of Earth Science, Faculty of Earth Science and Technology, ITB, JalanGanesa no. 10, Bandung 40132 (Indonesia); Meilano, Irwan [Geodesy Research Division, Faculty of Earth Science and Technology, ITB, JalanGanesa no. 10, Bandung 40132 (Indonesia); Nugraha, Andri Dian [Global Geophysical Group, Faculty of Mining and Petroleum Engineering, ITB, JalanGanesa no. 10, Bandung 40132 (Indonesia)
2015-04-24
The slip rate is used to estimate the earthquake recurrence relationship, which has the greatest influence on the hazard level. We examine the slip-rate contribution to Peak Ground Acceleration (PGA) in probabilistic seismic hazard maps (10% probability of exceedance in 50 years, or a 500-year return period). Hazard curves of PGA have been investigated for Sukabumi using PSHA (Probabilistic Seismic Hazard Analysis). We observe that the largest influence on the hazard estimate comes from crustal faults. A Monte Carlo approach has been developed to assess the sensitivity, and the properties of the Monte Carlo simulations have been assessed. The uncertainty and coefficient of variation of the slip rate for the Cimandiri Fault area have been calculated. We observe that the seismic hazard estimates are sensitive to the fault slip rate, with a seismic hazard uncertainty of about 0.25 g. For a specific site, we found that the seismic hazard estimate for Sukabumi is between 0.4904 and 0.8465 g, with an uncertainty between 0.0847 and 0.2389 g and a COV between 17.7% and 29.8%.
Monte Carlo simulation for slip rate sensitivity analysis in Cimandiri fault area
Pratama, Cecep; Meilano, Irwan; Nugraha, Andri Dian
2015-04-01
The slip rate is used to estimate the earthquake recurrence relationship, which has the greatest influence on the hazard level. We examine the slip-rate contribution to Peak Ground Acceleration (PGA) in probabilistic seismic hazard maps (10% probability of exceedance in 50 years, or a 500-year return period). Hazard curves of PGA have been investigated for Sukabumi using PSHA (Probabilistic Seismic Hazard Analysis). We observe that the largest influence on the hazard estimate comes from crustal faults. A Monte Carlo approach has been developed to assess the sensitivity, and the properties of the Monte Carlo simulations have been assessed. The uncertainty and coefficient of variation of the slip rate for the Cimandiri Fault area have been calculated. We observe that the seismic hazard estimates are sensitive to the fault slip rate, with a seismic hazard uncertainty of about 0.25 g. For a specific site, we found that the seismic hazard estimate for Sukabumi is between 0.4904 and 0.8465 g, with an uncertainty between 0.0847 and 0.2389 g and a COV between 17.7% and 29.8%.
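The Monte Carlo sensitivity step amounts to sampling the uncertain slip rate from an assumed distribution, pushing each sample through the hazard calculation, and summarizing the spread of the resulting PGA. A toy sketch (the lognormal slip-rate model and the logarithmic hazard relation below are illustrative assumptions, not the study's PSHA machinery):

```python
import numpy as np

def pga_cov_from_slip_rate(mean_slip, cov_slip, hazard, n=50000, rng=None):
    """Propagate slip-rate uncertainty (lognormal with given mean and
    coefficient of variation) through a hazard function returning PGA;
    report the PGA mean, standard deviation, and COV."""
    rng = np.random.default_rng(rng)
    sigma = np.sqrt(np.log(1 + cov_slip ** 2))       # lognormal shape
    mu = np.log(mean_slip) - 0.5 * sigma ** 2        # matches the target mean
    slip = rng.lognormal(mu, sigma, n)
    pga = hazard(slip)
    return pga.mean(), pga.std(), pga.std() / pga.mean()

# illustrative hazard relation (assumed): PGA grows with log slip rate
hazard = lambda s: 0.3 + 0.2 * np.log1p(s)
```

The returned COV plays the same role as the 17.7%-29.8% range reported above: it expresses how strongly the hazard estimate inherits the slip-rate uncertainty.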
Variational Monte Carlo analysis of Bose-Einstein condensation in a two-dimensional trap
Institute of Scientific and Technical Information of China (English)
Zheng Rong-Jie; Jin Jing; Tang Yi
2006-01-01
The ground-state properties of a system with a small number of interacting bosons over a wide range of densities are investigated. The system is confined in a two-dimensional isotropic harmonic trap, where the interaction between bosons is treated as a hard-core potential. By using the variational Monte Carlo method, we diagonalize the one-body density matrix of the system to obtain the ground-state energy, condensate wavefunction and condensate fraction. We find that in the dilute limit the depletion of the central condensate in the 2D system is larger than in a 3D system for the same interaction strength; however, as the density increases, the depletion at the centre of the 2D trap becomes equal to or even lower than that at the centre of the 3D trap, which is in agreement with what is anticipated in the Thomas-Fermi approximation. In addition, in the 2D system the total condensate depletion is still larger than in a 3D system for the same scattering length.
Hoffmann, Max J; Matera, Sebastian
2016-01-01
Lattice kinetic Monte Carlo simulations have become a vital tool for predictive-quality atomistic understanding of complex surface chemical reaction kinetics over a wide range of reaction conditions. In order to expand their practical value in terms of giving guidelines for atomic-level design of catalytic systems, it is very desirable to be able to readily evaluate a sensitivity analysis for a given model. The result of such a sensitivity analysis quantitatively expresses the dependency of the turnover frequency, the main output variable, on the rate constants entering the model. In the past, the application of sensitivity analysis, such as Degree of Rate Control, has been hampered by the exorbitant computational effort required to accurately sample numerical derivatives of a property that is obtained from a stochastic simulation method. In this study we present an efficient and robust three-stage approach that is capable of reliably evaluating the sensitivity measures for stiff microkinetic models, as we demonstrat...
Monte Carlo based statistical power analysis for mediation models: methods and software.
Zhang, Zhiyong
2014-12-01
The existing literature on statistical power analysis for mediation models often assumes data normality and is based on a less powerful Sobel test instead of the more powerful bootstrap test. This study proposes to estimate statistical power to detect mediation effects on the basis of the bootstrap method through Monte Carlo simulation. Nonnormal data with excessive skewness and kurtosis are allowed in the proposed method. A free R package called bmem is developed to conduct the power analysis discussed in this study. Four examples, including a simple mediation model, a multiple-mediator model with a latent mediator, a multiple-group mediation model, and a longitudinal mediation model, are provided to illustrate the proposed method.
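The proposed procedure, simulating many datasets from the mediation model, applying the bootstrap test to each, and taking the rejection rate as the power estimate, can be sketched as follows (a minimal X -> M -> Y model with normal errors and a percentile bootstrap; the bmem package supports far more general models and nonnormal data):

```python
import numpy as np

def slope(x, y):
    """OLS slope of y on x (single predictor with intercept)."""
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

def mediation_power(a, b, n=100, reps=200, boot=200, alpha=0.05, rng=None):
    """Monte Carlo power for the indirect effect a*b in a simple
    mediation model X -> M -> Y, using a percentile bootstrap CI.
    Power = share of simulated datasets whose CI excludes 0."""
    rng = np.random.default_rng(rng)
    hits = 0
    for _ in range(reps):
        x = rng.normal(size=n)
        m = a * x + rng.normal(size=n)       # mediator equation
        y = b * m + rng.normal(size=n)       # outcome equation
        ab = np.empty(boot)
        for k in range(boot):                # percentile bootstrap of a*b
            idx = rng.integers(0, n, n)
            ab[k] = slope(x[idx], m[idx]) * slope(m[idx], y[idx])
        lo, hi = np.percentile(ab, [100 * alpha / 2, 100 * (1 - alpha / 2)])
        hits += (lo > 0) or (hi < 0)
    return hits / reps
```

With a = b = 0.5 and n = 100 the power is high, while under the null (a = b = 0) the rejection rate stays near the nominal alpha.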
Core-scale solute transport model selection using Monte Carlo analysis
Malama, Bwalya; James, Scott C
2013-01-01
Model applicability to core-scale solute transport is evaluated using breakthrough data from column experiments conducted with conservative tracers tritium (H-3) and sodium-22, and the retarding solute uranium-232. The three models considered are single-porosity, double-porosity with single-rate mobile-immobile mass-exchange, and the multirate model, which is a deterministic model that admits the statistics of a random mobile-immobile mass-exchange rate coefficient. The experiments were conducted on intact Culebra Dolomite core samples. Previously, data were analyzed using single- and double-porosity models although the Culebra Dolomite is known to possess multiple types and scales of porosity, and to exhibit multirate mobile-immobile-domain mass transfer characteristics at field scale. The data are reanalyzed here and null-space Monte Carlo analysis is used to facilitate objective model selection. Prediction (or residual) bias is adopted as a measure of the model structural error. The analysis clearly shows ...
Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations.
Arampatzis, Georgios; Katsoulakis, Markos A
2014-03-28
In this paper we propose a new class of coupling methods for the sensitivity analysis of high-dimensional stochastic systems and in particular for lattice kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by developing a strongly correlated ("coupled") stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc.; hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled continuous-time Markov chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz-Kalos-Lebowitz algorithm's philosophy, where events are divided into classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples, including adsorption, desorption, and diffusion kinetic Monte Carlo, that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB
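The variance-reduction idea behind coupling can be seen in miniature with common random numbers: a finite-difference sensitivity estimated from two simulations that share their random inputs has far lower variance than one built from independent simulations. A toy example with an exponential random variable (this is the baseline CRN idea the paper improves upon, not its observable-dependent KMC coupling):

```python
import numpy as np

def fd_sensitivity(theta, h, n, coupled, rng=None):
    """Finite-difference estimate of d/dtheta E[X] for X ~ Exp(mean=theta),
    whose exact value is 1. With coupled=True, the perturbed and
    unperturbed samples share the same uniforms (common random numbers),
    which drastically reduces the estimator variance."""
    rng = np.random.default_rng(rng)
    u1 = rng.random(n)
    u2 = u1 if coupled else rng.random(n)
    x_plus = -(theta + h) * np.log(u1)   # inverse-CDF sampling of Exp
    x_base = -theta * np.log(u2)
    return np.mean((x_plus - x_base) / h)
```

In the coupled case the difference collapses to -log(u) exactly, so the variance is independent of the step size h, whereas the uncoupled estimator's variance blows up like 1/h².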
Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations
Arampatzis, Georgios; Katsoulakis, Markos A.
2014-03-01
In this paper we propose a new class of coupling methods for the sensitivity analysis of high-dimensional stochastic systems and in particular for lattice kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by developing a strongly correlated ("coupled") stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc.; hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled continuous-time Markov chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz-Kalos-Lebowitz algorithm's philosophy, where events are divided into classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples, including adsorption, desorption, and diffusion kinetic Monte Carlo, that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB
Energy Technology Data Exchange (ETDEWEB)
Davis, J E; Eddy, M J; Sutton, T M; Altomari, T J
2007-03-01
Solid modeling computer software systems provide for the design of three-dimensional solid models used in the design and analysis of physical components. The current state-of-the-art in solid modeling representation uses a boundary representation format in which geometry and topology are used to form three-dimensional boundaries of the solid. The geometry representation used in these systems is cubic B-spline curves and surfaces--a network of cubic B-spline functions in three-dimensional Cartesian coordinate space. Many Monte Carlo codes, however, use a geometry representation in which geometry units are specified by intersections and unions of half-spaces. This paper describes an algorithm for converting from a boundary representation to a half-space representation.
System Level Numerical Analysis of a Monte Carlo Simulation of the E. Coli Chemotaxis
Siettos, Constantinos I
2010-01-01
Over the past few years it has been demonstrated that "coarse timesteppers" establish a link between traditional numerical analysis and microscopic/stochastic simulation. The underlying assumption of the associated lift-run-restrict-estimate procedure is that macroscopic models exist and close in terms of a few governing moments of microscopically evolving distributions, but they are unavailable in closed form. This leads to a system-identification-based computational approach that sidesteps the necessity of deriving explicit closures. Two-level codes are constructed; the outer code performs macroscopic, continuum-level numerical tasks, while the inner code estimates, through appropriately initialized bursts of microscopic simulation, the quantities required for continuum numerics. Such quantities include residuals, time derivatives, and the action of coarse slow Jacobians. We demonstrate how these coarse timesteppers can be applied to perform equation-free computations of a kinetic Monte Carlo simulation of...
Ligand-receptor binding kinetics in surface plasmon resonance cells: A Monte Carlo analysis
Carroll, Jacob; Forsten-Williams, Kimberly; Täuber, Uwe C
2016-01-01
Surface plasmon resonance (SPR) chips are widely used to measure association and dissociation rates for the binding kinetics between two species of chemicals, e.g., cell receptors and ligands. It is commonly assumed that ligands are spatially well mixed in the SPR region, and hence a mean-field rate equation description is appropriate. This approximation however ignores the spatial fluctuations as well as temporal correlations induced by multiple local rebinding events, which become prominent for slow diffusion rates and high binding affinities. We report detailed Monte Carlo simulations of ligand binding kinetics in an SPR cell subject to laminar flow. We extract the binding and dissociation rates by means of the techniques frequently employed in experimental analysis that are motivated by the mean-field approximation. We find major discrepancies in a wide parameter regime between the thus extracted rates and the known input simulation values. These results underscore the crucial quantitative importance of s...
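The mean-field rate-equation description that the Monte Carlo simulations put to the test has a closed-form solution for simple Langmuir binding. A sketch (single ligand species at a constant, well-mixed concentration; exactly the assumption the spatial simulations relax):

```python
import numpy as np

def bound_fraction(t, kon, koff, L, R0):
    """Mean-field Langmuir kinetics for SPR, dB/dt = kon*L*(R0 - B) - koff*B,
    solved analytically for the bound-receptor fraction B(t)/R0 starting
    from B(0) = 0, with ligand concentration L held constant."""
    kobs = kon * L + koff                    # observed relaxation rate
    beq = kon * L * R0 / kobs                # equilibrium bound amount
    return (beq / R0) * (1 - np.exp(-kobs * np.asarray(t, float)))
```

Fitting this exponential to SPR sensorgrams is how kon and koff are usually extracted; the paper's point is that rebinding and slow diffusion make the extracted rates deviate from the true microscopic ones.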
Analysis of Far-Field Radiation from Apertures Using Monte Carlo Integration Technique
Directory of Open Access Journals (Sweden)
Mohammad Mehdi Fakharian
2014-12-01
An integration technique based on the use of Monte Carlo Integration (MCI) is proposed for the analysis of electromagnetic radiation from apertures. The approach applies the equivalence principle followed by physical optics to compute far-field antenna radiation patterns. However, this technique is often mathematically complex, because it requires integration over a closed surface. This paper presents an extremely simple formulation to calculate the far fields from some types of aperture radiators by using the MCI technique. The accuracy and effectiveness of this technique are demonstrated in three cases of radiation from apertures, and the results are compared with solutions using FE simulation and Gaussian quadrature rules.
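The MCI idea, replacing the radiation integral by an average of the integrand at random sample points over the aperture, can be shown on the simplest case: a uniformly illuminated 1-D aperture, whose pattern is known in closed form (this toy case is an assumption for illustration; the paper treats 2-D apertures via the equivalence principle):

```python
import numpy as np

def far_field_mc(k, a, theta, n=200000, rng=None):
    """Monte Carlo estimate of the far-field pattern of a uniformly
    illuminated 1-D aperture of half-width a: the integral of
    exp(j*k*x*sin(theta)) dx over [-a, a], estimated by averaging the
    integrand at n uniform random points and scaling by the aperture size."""
    rng = np.random.default_rng(rng)
    x = rng.uniform(-a, a, n)
    return 2 * a * np.mean(np.exp(1j * k * x * np.sin(theta)))
```

The exact pattern is 2a*sin(u)/u with u = k*a*sin(theta), so the MCI estimate can be checked directly against it; the error shrinks as 1/sqrt(n), independent of the integrand's dimensionality.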
Heat-Flux Analysis of Solar Furnace Using the Monte Carlo Ray-Tracing Method
Energy Technology Data Exchange (ETDEWEB)
Lee, Hyun Jin; Kim, Jong Kyu; Lee, Sang Nam; Kang, Yong Heack [Korea Institute of Energy Research, Daejeon (Korea, Republic of)
2011-10-15
An understanding of the concentrated solar flux is critical for the analysis and design of solar-energy-utilization systems. The current work focuses on the development of an algorithm that uses the Monte Carlo ray-tracing method, which offers excellent flexibility and expandability; the method considers both solar limb darkening and the surface slope error of the reflectors in analyzing the solar flux. A comparison of the modeling results with measurements at the solar furnace of the Korea Institute of Energy Research (KIER) shows good agreement within a measurement uncertainty of 10%. The model evaluates the concentration performance of the KIER solar furnace, with a tracking accuracy of 2 mrad and a maximum attainable concentration ratio of 4400 suns. Flux variations according to measurement position and flux distributions depending on acceptance angles provide detailed information for the design of chemical reactors or secondary concentrators.
Dubecký, Matúš; Jurečka, Petr; Mitas, Lubos; Hobza, Pavel; Otyepka, Michal
2014-01-01
Reliable theoretical predictions of noncovalent interaction energies, which are important, e.g., in drug design and hydrogen storage applications, are among the longstanding challenges of contemporary quantum chemistry. In this respect, fixed-node diffusion Monte Carlo (FN-DMC) is a promising alternative to the commonly used "gold standard" coupled-cluster CCSD(T)/CBS method, offering benchmark accuracy and favourable scaling in contrast to other correlated wave function approaches. This work focuses on the analysis of protocols and possible tradeoffs for FN-DMC estimation of noncovalent interaction energies, and proposes a significantly more efficient yet accurate computational protocol using simplified explicit correlation terms. Its performance is illustrated on a number of weakly bound complexes, including the water dimer, benzene/hydrogen, the T-shaped benzene dimer and the stacked adenine-thymine DNA base pair. The proposed protocol achieves excellent agreement ($\sim$0.2 kcal/mol) with respect to the reli...
Quasi-Monte Carlo Simulation-Based SFEM for Slope Reliability Analysis
Institute of Scientific and Technical Information of China (English)
Yu Yuzhen; Xie Liquan; Zhang Bingyin
2005-01-01
Considering the stochastic spatial variation of geotechnical parameters over the slope, a Stochastic Finite Element Method (SFEM) is established based on the combination of the Shear Strength Reduction (SSR) concept and quasi-Monte Carlo simulation. The shear strength reduction FEM is superior in many ways to the slice method based on limit equilibrium theory, so it is more powerful for assessing the reliability of global slope stability when combined with probability theory. To illustrate the performance of the proposed method, it is applied to a simple slope example. The simulation results show that the proposed method effectively performs reliability analysis of global slope stability without presupposing a potential slip surface.
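A minimal sketch of the quasi-Monte Carlo idea, using a Halton low-discrepancy sequence and an infinite-slope factor-of-safety model in place of the SSR-FEM; the parameter ranges are illustrative assumptions, not values from the paper:

```python
import math

def halton(i, base):
    """i-th element (i >= 1) of the van der Corput sequence in the given base."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def slope_failure_prob(n=20_000):
    """Quasi-Monte Carlo estimate of P(factor of safety < 1) for an
    infinite-slope model with uniformly distributed strength parameters."""
    beta = math.radians(30.0)   # slope angle
    gamma, depth = 18.0, 10.0   # unit weight (kN/m^3), slip depth (m)
    failures = 0
    for i in range(1, n + 1):
        # 2D Halton point mapped onto the parameter ranges
        c = 5.0 + 10.0 * halton(i, 2)                   # cohesion ~ U(5, 15) kPa
        phi = math.radians(20.0 + 10.0 * halton(i, 3))  # friction ~ U(20, 30) deg
        fs = (math.tan(phi) / math.tan(beta)
              + c / (gamma * depth * math.sin(beta) * math.cos(beta)))
        failures += fs < 1.0
    return failures / n
```

Because the Halton points fill the parameter square more evenly than pseudo-random draws, the failure-probability estimate converges faster than plain Monte Carlo for this smooth limit-state function.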
Outlier detection in near-infrared spectroscopic analysis by using Monte Carlo cross-validation
Institute of Scientific and Technical Information of China (English)
LIU ZhiChao; CAI WenSheng; SHAO XueGuang
2008-01-01
An outlier detection method is proposed for near-infrared spectral analysis. The underlying philosophy of the method is that, in random test (Monte Carlo) cross-validation, the probability of outliers presenting in good models with smaller prediction residual error sum of squares (PRESS), or in bad models with larger PRESS, should differ markedly from that of normal samples. The method first builds a large number of PLS models using random test cross-validation, then sorts the models by PRESS, and finally recognizes the outliers according to the accumulative probability of each sample in the sorted models. For validation of the proposed method, four data sets, including three published data sets and a large data set of tobacco lamina, were investigated. The proposed method proved to be highly efficient and accurate compared with the conventional leave-one-out (LOO) cross-validation method.
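The sorting-and-accumulation scheme described above can be sketched with ordinary least squares standing in for PLS; the sample counts, split fraction, and quartile cutoffs are illustrative assumptions:

```python
import random

def fit_line(pts):
    """Ordinary least-squares fit y = b0 + b1*x (stand-in for a PLS model)."""
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    sxx = sum((x - mx) ** 2 for x, _ in pts)
    sxy = sum((x - mx) * (y - my) for x, y in pts)
    b1 = sxy / sxx
    return my - b1 * mx, b1

def mccv_outlier_scores(data, n_models=600, cal_frac=0.7, seed=7):
    """For each sample, the difference between its calibration-set frequency
    in the best quartile of models (small PRESS) and in the worst quartile.
    Scores far from zero flag candidate outliers."""
    rng = random.Random(seed)
    n = len(data)
    n_cal = int(cal_frac * n)
    models = []
    for _ in range(n_models):
        idx = list(range(n))
        rng.shuffle(idx)                      # random calibration/validation split
        cal, val = idx[:n_cal], idx[n_cal:]
        b0, b1 = fit_line([data[i] for i in cal])
        press = sum((data[i][1] - b0 - b1 * data[i][0]) ** 2 for i in val)
        models.append((press, set(cal)))
    models.sort(key=lambda m: m[0])           # sort models by PRESS
    q = n_models // 4
    good, bad = models[:q], models[-q:]
    return [sum(i in cal for _, cal in good) / q
            - sum(i in cal for _, cal in bad) / q for i in range(n)]
```

A gross outlier lands in the validation set of almost every bad model (its huge residual inflates PRESS), so its calibration-frequency difference stands out sharply from the normal samples.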
Marathon: An Open Source Software Library for the Analysis of Markov-Chain Monte Carlo Algorithms.
Rechner, Steffen; Berger, Annabell
2016-01-01
We present the software library marathon, which is designed to support the analysis of sampling algorithms based on the Markov-Chain Monte Carlo principle. The main application of this library is the computation of properties of so-called state graphs, which represent the structure of Markov chains. We demonstrate applications and the usefulness of marathon by investigating the quality of several bounding methods on four well-known Markov chains for sampling perfect matchings and bipartite graphs. In a set of experiments, we compute the total mixing time and several of its bounds for a large number of input instances. We find that the upper bound gained by the famous canonical path method is often several orders of magnitude larger than the total mixing time and deteriorates with growing input size. In contrast, the spectral bound is found to be a precise approximation of the total mixing time.
Nuclear reactor transient analysis via a quasi-static kinetics Monte Carlo method
Energy Technology Data Exchange (ETDEWEB)
Jo, YuGwon; Cho, Bumhee; Cho, Nam Zin, E-mail: nzcho@kaist.ac.kr [Korea Advanced Institute of Science and Technology 291 Daehak-ro, Yuseong-gu, Daejeon, Korea 305-701 (Korea, Republic of)
2015-12-31
The predictor-corrector quasi-static (PCQS) method is applied to the Monte Carlo (MC) calculation for reactor transient analysis. To solve the transient fixed-source problem of the PCQS method, fission source iteration is used, and a linear approximation of the fission source distribution during a macro-time step is introduced to provide the delayed neutron source. The conventional particle-tracking procedure is modified to solve the transient fixed-source problem via MC calculation. The PCQS method with MC calculation is compared with the direct time-dependent method of characteristics (MOC) on a TWIGL two-group problem for verification of the computer code. Then, results for a continuous-energy problem are presented.
Techno-economic and Monte Carlo probabilistic analysis of microalgae biofuel production system.
Batan, Liaw Y; Graff, Gregory D; Bradley, Thomas H
2016-11-01
This study characterizes the technical and economic feasibility of an enclosed photobioreactor microalgae system with an annual production of 37.85 million liters (10 million gallons) of biofuel. The analysis breaks down the capital investment and operating costs and the production cost per unit of algal diesel. The economic modelling shows total production costs for raw algal oil and diesel of $3.46 and $3.69 per liter, respectively. Additionally, the effects of co-product credits and their impact on the economic performance of the algae-to-biofuel system are discussed. The Monte Carlo methodology is used to address price and cost projections and to simulate scenarios with probabilities of financial performance and profits for the analyzed model. Different markets for the allocation of co-products show significant shifts in the economic viability of the algal biofuel system.
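A hedged sketch of the Monte Carlo cost-projection step, with triangular cost distributions; all dollar figures are illustrative assumptions (only the 37.85 million liter annual output is taken from the abstract):

```python
import random

def production_cost_mc(n=20_000, seed=42):
    """Monte Carlo distribution of the unit production cost of algal diesel.
    Annualized capital and operating costs are drawn from triangular
    distributions (low, high, mode); numbers are illustrative only."""
    random.seed(seed)
    output_l = 37.85e6                                 # annual production, litres
    costs = []
    for _ in range(n):
        capex = random.triangular(40e6, 60e6, 50e6)    # annualized capital, $
        opex = random.triangular(70e6, 90e6, 80e6)     # operating costs, $
        costs.append((capex + opex) / output_l)        # unit cost, $/litre
    mean = sum(costs) / n
    p_below = sum(c < 3.50 for c in costs) / n         # prob. of beating a target
    return mean, p_below
```

Repeating the draw yields a full cost distribution rather than a point estimate, which is what lets the study quote probabilities of financial performance instead of single-scenario figures.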
Iba, Yukito
2000-01-01
"Extended Ensemble Monte Carlo" is a generic term for a set of algorithms now popular in a variety of fields in physics and statistical information processing. Exchange Monte Carlo (Metropolis-Coupled Chains, Parallel Tempering), Simulated Tempering (Expanded Ensemble Monte Carlo), and Multicanonical Monte Carlo (Adaptive Umbrella Sampling) are typical members of this family. Here we give a cross-disciplinary survey of these algorithms with special emphasis on the great f...
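A minimal exchange Monte Carlo (parallel tempering) sketch on a one-dimensional double-well potential; the temperature ladder, step size, and swap schedule are assumed values chosen for illustration:

```python
import math
import random

def parallel_tempering(n_sweeps=20_000, seed=3):
    """Exchange Monte Carlo on the double-well potential U(x) = (x^2 - 1)^2.
    Replicas at several temperatures evolve by Metropolis moves and
    periodically attempt configuration swaps between neighbouring levels."""
    rng = random.Random(seed)
    temps = [0.2, 0.5, 1.0, 2.0]
    xs = [1.0] * len(temps)            # all replicas start in the right-hand well
    u = lambda x: (x * x - 1.0) ** 2
    cold_samples = []
    for sweep in range(n_sweeps):
        # local Metropolis update for every replica
        for i, t in enumerate(temps):
            prop = xs[i] + rng.gauss(0.0, 0.5)
            if rng.random() < math.exp(min(0.0, -(u(prop) - u(xs[i])) / t)):
                xs[i] = prop
        # neighbour swap attempts every 10 sweeps
        if sweep % 10 == 0:
            for i in range(len(temps) - 1):
                d = (1.0 / temps[i] - 1.0 / temps[i + 1]) * (u(xs[i]) - u(xs[i + 1]))
                if rng.random() < math.exp(min(0.0, d)):
                    xs[i], xs[i + 1] = xs[i + 1], xs[i]
        cold_samples.append(xs[0])
    return cold_samples
```

The hot replica crosses the barrier freely, and the swap moves let those crossings percolate down the ladder, so the cold chain samples both wells, which a single low-temperature Metropolis chain would do only rarely.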
The effect of load imbalances on the performance of Monte Carlo algorithms in LWR analysis
Energy Technology Data Exchange (ETDEWEB)
Siegel, A.R., E-mail: siegela@mcs.anl.gov [Argonne National Laboratory, Nuclear Engineering Division (United States); Argonne National Laboratory, Mathematics and Computer Science Division (United States); Smith, K., E-mail: kord@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering (United States); Romano, P.K., E-mail: romano7@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering (United States); Forget, B., E-mail: bforget@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering (United States); Felker, K., E-mail: felker@mcs.anl.gov [Argonne National Laboratory, Mathematics and Computer Science Division (United States)
2013-02-15
A model is developed to predict the impact of particle load imbalances on the performance of domain-decomposed Monte Carlo neutron transport algorithms. Expressions for upper bound performance “penalties” are derived in terms of simple machine characteristics, material characterizations and initial particle distributions. The hope is that these relations can be used to evaluate tradeoffs among different memory decomposition strategies in next generation Monte Carlo codes, and perhaps as a metric for triggering particle redistribution in production codes.
Taheriyoun, Masoud; Moradinejad, Saber
2015-01-01
The reliability of a wastewater treatment plant is a critical issue when the effluent is reused or discharged to water resources. The main factors affecting plant performance are variation of the influent, inherent variability in the treatment processes, deficiencies in design, mechanical equipment, and operational failures. Thus, meeting the established reuse/discharge criteria requires assessment of plant reliability. Among the many techniques developed for system reliability analysis, fault tree analysis (FTA) is one of the most popular and efficient. FTA is a top-down, deductive failure analysis in which an undesired state of a system is analyzed. In this study, reliability was studied for the Tehran West Town wastewater treatment plant. This plant is a conventional activated sludge process, and the effluent is reused in landscape irrigation. The fault tree diagram was established with violation of the allowable effluent BOD as the top event, and the deficiencies of the system were identified based on the developed model. Some basic events are operator mistakes, physical damage, and design problems. The analysis methods are minimal cut sets (based on numerical probability) and Monte Carlo simulation. Basic event probabilities were calculated according to available data and experts' opinions. The results showed that human factors, especially human error, had a great effect on top event occurrence. Mechanical, climate, and sewer system factors were in the next tier. FTA has seldom been applied in past wastewater treatment plant (WWTP) risk analysis studies; thus, the FTA model developed in this study considerably improves insight into causal failure analysis of a WWTP. It provides an efficient tool for WWTP operators and decision makers to achieve the standard limits in wastewater reuse and discharge to the environment.
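The minimal-cut-set and Monte Carlo calculations can be sketched on a toy fault tree; the basic events and probabilities below are hypothetical, not the Tehran West Town model:

```python
import random

def top_event_mc(n=200_000, seed=11):
    """Monte Carlo estimate of the top-event probability of a small fault tree
    with minimal cut sets {A}, {B, C}, {B, D} (hypothetical event set)."""
    p = {"A": 0.05, "B": 0.10, "C": 0.20, "D": 0.30}
    random.seed(seed)
    hits = 0
    for _ in range(n):
        s = {e: random.random() < q for e, q in p.items()}  # sample basic events
        # the top event fires if any minimal cut set is fully present
        if s["A"] or (s["B"] and s["C"]) or (s["B"] and s["D"]):
            hits += 1
    return hits / n

def top_event_exact():
    """Same tree evaluated analytically: A OR (B AND (C OR D)),
    assuming independent basic events."""
    p_cd = 1 - (1 - 0.20) * (1 - 0.30)
    return 1 - (1 - 0.05) * (1 - 0.10 * p_cd)
```

For trees whose cut sets share basic events, the simulation handles the dependence automatically, while naive multiplication of cut-set probabilities would double-count event B.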
Plante, Ianik; Ponomarev, Artem; Cucinotta, Francis A
2011-02-01
The description of energy deposition by high charge and energy (HZE) nuclei is important for space radiation risk assessment and for hadrontherapy. Such ions deposit a large fraction of their energy within the so-called core of the track and a smaller proportion in the penumbra (or track periphery). We study the stochastic patterns of the radial dependence of energy deposition using the Monte Carlo track structure codes RITRACKS and RETRACKS, which were used to simulate HZE tracks and calculate energy deposition in voxels of 40 nm. The simulation of a (56)Fe(26+) ion of 1 GeV u(-1) revealed zones of high-energy deposition which may be found as far as a few millimetres away from the track core in some simulations. The calculation also showed that ∼43 % of the energy was deposited in the penumbra. These 3D stochastic simulations combined with a visualisation interface are a powerful tool for biophysicists, which may be used to study radiation-induced biological effects such as double strand breaks and oxidative damage and the subsequent cellular and tissue damage processing and signalling.
Li, Haoyuan
2017-01-16
The electrical properties of organic field-effect transistors (OFETs) are usually characterized by applying models initially developed for inorganic-based devices, which often implies the use of approximations that might be inappropriate for organic semiconductors. These approximations have brought limitations to the understanding of the device physics associated with organic materials. A strategy to overcome this issue is to establish straightforward connections between the macroscopic current characteristics and microscopic charge transport in OFETs. Here, a 3D kinetic Monte Carlo model is developed that goes beyond both the conventional assumption of zero channel thickness and the gradual channel approximation to simulate carrier transport and current. Using parallel computing and a new algorithm that significantly improves the evaluation of electric potential within the device, this methodology allows the simulation of micrometer-sized OFETs. The current characteristics of representative OFET devices are well reproduced, which provides insight into the validity of the gradual channel approximation in the case of OFETs, the impact of the channel thickness, and the nature of microscopic charge transport.
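The hopping-transport core of a kinetic Monte Carlo model can be sketched in one dimension; the Miller-Abrahams-like rates, disorder strength, and field value below are illustrative assumptions, far simpler than the 3D OFET model described:

```python
import math
import random

def kmc_drift(n_hops=5_000, field=0.2, sigma=0.05, seed=5):
    """1D kinetic Monte Carlo sketch of field-driven carrier hopping with
    Gaussian site-energy disorder; each step picks a hop from the rate
    catalogue and advances time by an exponential waiting time."""
    rng = random.Random(seed)
    site_energy = [rng.gauss(0.0, sigma) for _ in range(512)]  # periodic disorder
    e = lambda i: site_energy[i % 512]
    pos, t = 0, 0.0
    for _ in range(n_hops):
        # Miller-Abrahams-like rates: uphill hops exponentially suppressed;
        # the field tilts the energy landscape toward positive x
        k_r = math.exp(-max(0.0, e(pos + 1) - e(pos) - field))
        k_l = math.exp(-max(0.0, e(pos - 1) - e(pos) + field))
        k_tot = k_r + k_l
        pos += 1 if rng.random() < k_r / k_tot else -1
        t += -math.log(1.0 - rng.random()) / k_tot  # exponential time increment
    return pos, t
```

Dividing the mean displacement by the elapsed time and the field gives a mobility estimate; the full 3D device model adds gate-induced carrier density, Coulomb interactions, and the electrode boundary conditions on top of this kernel.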
Derivation of landslide-triggering thresholds by Monte Carlo simulation and ROC analysis
Peres, David Johnny; Cancelliere, Antonino
2015-04-01
Rainfall thresholds for landslide triggering are useful in early warning systems implemented in prone areas. Direct statistical analysis of historical records of rainfall and landslide data presents several shortcomings, typically due to incompleteness of landslide historical archives, imprecise knowledge of the triggering instants, unavailability of a rain gauge located near the landslides, etc. In this work, a Monte Carlo approach to derive and evaluate landslide-triggering thresholds is presented. Such an approach helps overcome some of the above-mentioned shortcomings of direct empirical analysis of observed data. The proposed Monte Carlo framework combines a stochastic rainfall model with a hydrological and a slope-stability model. Specifically, 1000-year-long hourly synthetic rainfall and related slope-stability factor-of-safety data are generated by coupling the Neyman-Scott rectangular pulses model with the TRIGRS unsaturated model (Baum et al., 2008) and a linear-reservoir water table recession model. Triggering and non-triggering rainfall events are then distinguished and analyzed to derive stochastic-input physically based thresholds that optimize the trade-off between correct and wrong predictions. For this purpose, receiver operating characteristic (ROC) indices are used. An application of the method to the highly landslide-prone area of the Peloritani mountains in north-eastern Sicily (Italy) is carried out. A threshold for the area is derived and successfully validated by comparison with thresholds proposed by other researchers. Moreover, the uncertainty in threshold derivation due to variability of rainfall intensity within events and to antecedent rainfall is investigated. Results indicate that variability of intensity during rainfall events significantly influences the rainfall intensity and duration associated with landslide triggering. A representation of rainfall as constant-intensity hyetographs globally leads to
Improving Markov Chain Monte Carlo algorithms in LISA Pathfinder Data Analysis
Karnesis, N.; Nofrarias, M.; Sopuerta, C. F.; Lobo, A.
2012-06-01
The LISA Pathfinder mission (LPF) aims to test key technologies for the future LISA mission. The LISA Technology Package (LTP) on board LPF will consist of an exhaustive suite of experiments, and its outcome will be crucial for the future detection of gravitational waves. In order to achieve maximum sensitivity, we need to understand every instrument on board and parametrize the properties of the underlying noise models. The Data Analysis team has developed algorithms for parameter estimation of the system. A very promising one implemented for LISA Pathfinder data analysis is Markov chain Monte Carlo. A series of experiments will take place during flight operations, and each experiment will provide essential information for the next in the sequence. Therefore, it is a priority to optimize and improve the tools available for data analysis during the mission. A Bayesian framework allows us to apply prior knowledge for each experiment, meaning we can efficiently use our prior estimates for the parameters, making the method more accurate and significantly faster. This, together with other algorithm improvements, will lead us to our main goal: a robust and reliable tool for parameter estimation during the LPF mission.
Empirical Markov Chain Monte Carlo Bayesian analysis of fMRI data.
de Pasquale, F; Del Gratta, C; Romani, G L
2008-08-01
In this work an Empirical Markov Chain Monte Carlo Bayesian approach to analyse fMRI data is proposed. The Bayesian framework is appealing since complex models can be adopted in the analysis both for the image and noise model. Here, the noise autocorrelation is taken into account by adopting an AutoRegressive model of order one and a versatile non-linear model is assumed for the task-related activation. Model parameters include the noise variance and autocorrelation, activation amplitudes and the hemodynamic response function parameters. These are estimated at each voxel from samples of the Posterior Distribution. Prior information is included by means of a 4D spatio-temporal model for the interaction between neighbouring voxels in space and time. The results show that this model can provide smooth estimates from low SNR data while important spatial structures in the data can be preserved. A simulation study is presented in which the accuracy and bias of the estimates are addressed. Furthermore, some results on convergence diagnostic of the adopted algorithm are presented. To validate the proposed approach a comparison of the results with those from a standard GLM analysis, spatial filtering techniques and a Variational Bayes approach is provided. This comparison shows that our approach outperforms the classical analysis and is consistent with other Bayesian techniques. This is investigated further by means of the Bayes Factors and the analysis of the residuals. The proposed approach applied to Blocked Design and Event Related datasets produced reliable maps of activation.
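The voxel-wise posterior sampling in approaches like the one above rests on Metropolis-Hastings updates; a one-parameter toy version (Gaussian data with unit noise variance, Gaussian prior, assumed random-walk step size) shows the mechanics:

```python
import math
import random

def mh_posterior_mean(y, prior_var=100.0, n_iter=20_000, step=0.3, seed=2):
    """Random-walk Metropolis-Hastings for the mean 'a' of Gaussian data with
    unit noise variance and an N(0, prior_var) prior; a toy stand-in for the
    voxel-wise posterior sampling described above."""
    rng = random.Random(seed)
    log_post = lambda a: (-0.5 * sum((yi - a) ** 2 for yi in y)
                          - 0.5 * a * a / prior_var)
    a, lp = 0.0, log_post(0.0)
    total, kept = 0.0, 0
    for i in range(n_iter):
        prop = a + rng.gauss(0.0, step)          # symmetric random-walk proposal
        lp_prop = log_post(prop)
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):  # Metropolis accept
            a, lp = prop, lp_prop
        if i >= n_iter // 4:                      # discard burn-in
            total += a
            kept += 1
    return total / kept
```

For this conjugate toy case the posterior mean is available in closed form, so the sampler can be checked directly; the fMRI models trade that analytic tractability for nonlinear hemodynamic terms and spatio-temporal priors, which is precisely why MCMC is used.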
Institute of Scientific and Technical Information of China (English)
ZHANG Jun; GUO Fan
2015-01-01
Tooth modification is widely used in the gear industry to improve the meshing performance of gear systems. However, few of the present studies on tooth modification consider the influence of inevitable random errors on modification effects. In order to investigate the effect of tooth modification amount variations on the dynamic behavior of helical planetary gears, an analytical dynamic model including tooth modification parameters is proposed to carry out a deterministic analysis of the dynamics of a helical planetary gear. The dynamic meshing forces as well as the dynamic transmission errors of the sun-planet 1 gear pair with and without tooth modifications are computed and compared to show the effectiveness of tooth modifications in enhancing gear dynamics. Using the response surface method, a fitted regression model for the dynamic transmission error (DTE) fluctuations is established to quantify the relationship between modification amounts and DTE fluctuations. By shifting the inevitable random errors arising from the manufacturing and installation process to tooth modification amount variations, a statistical tooth modification model is developed and a methodology combining Monte Carlo simulation and the response surface method is presented for uncertainty analysis of tooth modifications. The uncertainty analysis reveals that the system's dynamic behavior does not obey the normal distribution rule even though the design variables are normally distributed. In addition, a deterministic modification amount will not necessarily achieve an optimal result for both static and dynamic transmission error fluctuation reduction simultaneously.
Core-scale solute transport model selection using Monte Carlo analysis
Malama, Bwalya; Kuhlman, Kristopher L.; James, Scott C.
2013-06-01
Model applicability to core-scale solute transport is evaluated using breakthrough data from column experiments conducted with conservative tracers tritium (3H) and sodium-22 (22Na), and the retarding solute uranium-232 (232U). The three models considered are single-porosity, double-porosity with single-rate mobile-immobile mass-exchange, and the multirate model, which is a deterministic model that admits the statistics of a random mobile-immobile mass-exchange rate coefficient. The experiments were conducted on intact Culebra Dolomite core samples. Previously, data were analyzed using single-porosity and double-porosity models although the Culebra Dolomite is known to possess multiple types and scales of porosity, and to exhibit multirate mobile-immobile-domain mass transfer characteristics at field scale. The data are reanalyzed here and null-space Monte Carlo analysis is used to facilitate objective model selection. Prediction (or residual) bias is adopted as a measure of the model structural error. The analysis clearly shows single-porosity and double-porosity models are structurally deficient, yielding late-time residual bias that grows with time. On the other hand, the multirate model yields unbiased predictions consistent with the late-time -5/2 slope diagnostic of multirate mass transfer. The analysis indicates the multirate model is better suited to describing core-scale solute breakthrough in the Culebra Dolomite than the other two models.
Risk analysis of gravity dam instability using credibility theory Monte Carlo simulation model.
Xin, Cao; Chongshi, Gu
2016-01-01
Risk analysis of gravity dam stability involves complicated uncertainty in many design parameters and measured data. The stability failure risk ratio described jointly by probability and possibility is deficient in characterizing the influence of fuzzy factors and in representing the likelihood of risk occurrence in practical engineering. In this article, credibility theory is applied to the stability failure risk analysis of gravity dams. Stability of a gravity dam is viewed as a hybrid event considering both the fuzziness and randomness of the failure criterion, design parameters and measured data. A credibility distribution function is constructed as a novel way to represent the uncertainty of the factors influencing gravity dam stability. Combining this with Monte Carlo simulation, the corresponding calculation method and procedure are proposed. Based on a dam section, a detailed application of the modeling approach to risk calculation for both the dam foundation and double sliding surfaces is provided. The results show that the present method is feasible for analyzing the stability failure risk of gravity dams. The risk assessment obtained reflects the influence of both sorts of uncertainty and is suitable as an index value.
Gifford, Kent A; Wareing, Todd A; Failla, Gregory; Horton, John L; Eifel, Patricia J; Mourtada, Firas
2009-12-03
A patient dose distribution was calculated by a 3D multi-group S{sub N} particle transport code for intracavitary brachytherapy of the cervix uteri and compared to previously published Monte Carlo results. A Cs-137 LDR intracavitary brachytherapy CT data set was chosen from our clinical database. MCNPX version 2.5.c was used to calculate the dose distribution. A 3D multi-group S{sub N} particle transport code, Attila version 6.1.1, was used to simulate the same patient. Each patient applicator was built in SolidWorks, a mechanical design package, and then assembled with a coordinate transformation and rotation for the patient. The SolidWorks exported applicator geometry was imported into Attila for calculation. Dose matrices were overlaid on the patient CT data set. Dose volume histograms and point doses were compared. The MCNPX calculation required 14.8 hours, whereas the Attila calculation required 22.2 minutes on a 1.8 GHz AMD Opteron CPU. Agreement between Attila and MCNPX dose calculations at the ICRU 38 points was within +/- 3%. Calculated doses to the 2 cc and 5 cc volumes of highest dose differed by no more than +/- 1.1% between the two codes. Dose and DVH overlays agreed well qualitatively. Attila can calculate dose accurately and efficiently for this Cs-137 CT-based patient geometry. Our data showed that a three-group cross-section set is adequate for Cs-137 computations. Future work is aimed at implementing an optimized version of Attila for radiotherapy calculations.
Busse, Harald; Bublat, Martin; Ratering, Ralf; Rassek, Margarethe; Schwarzmaier, Hans-Joachim; Kahn, Thomas
2000-05-01
Minimally invasive techniques often require special biomedical monitoring schemes. In the case of laser coagulation of tumors, accurate temperature mapping is desirable for therapy control. While magnetic resonance (MR)-based thermometry can easily yield qualitative results, it is still difficult to calibrate this technique with independent temperature probes over the entire 2D field of view. Calculated temperature maps derived from Monte Carlo simulations (MCS), on the other hand, are suitable for therapy planning and dosimetry but typically cannot account for the exact individual tissue parameters and physiological changes upon heating. In this work, online thermometry was combined with MCS techniques to explore the feasibility and potential of such a bimodal approach for surgical assist systems. For the first time, the results of a 3D simulation were evaluated with MR techniques. An MR thermometry system was used to monitor the temperature evolution during laser-induced thermal treatment of bovine liver using a commercially available water-cooled applicator. A systematic comparison between MR-derived 2D temperature maps in different orientations and corresponding snapshots of a 3D MCS of the laser-induced processes is presented. The MCS is capable of resolving the complex temperature patterns observed in the MR-derived images and yields good agreement with respect to absolute temperatures and damage volume dimensions: around 10 degrees C and on the order of 10 percent, respectively. The integrated simulation-and-monitoring approach has the potential to improve surgical assistance during thermal interventions.
Image quality assessment of LaBr{sub 3}-based whole-body 3D PET scanners: a Monte Carlo evaluation
Energy Technology Data Exchange (ETDEWEB)
Surti, S [Department of Radiology, University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104 (United States); Karp, J S [Department of Radiology, University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104 (United States); Muehllehner, G [Philips Medical Systems, Philadelphia, PA 19104 (United States)
2004-10-07
The main thrust of this work is the investigation and design of a whole-body PET scanner based on new lanthanum bromide scintillators. We use Monte Carlo simulations to generate data for a 3D PET scanner based on LaBr{sub 3} detectors, and to assess the count-rate capability and the reconstructed image quality of phantoms with hot and cold spheres using contrast and noise parameters. Previously we have shown that LaBr{sub 3} has very high light output, excellent energy resolution and fast timing properties which can lead to the design of a time-of-flight (TOF) whole-body PET camera. The data presented here illustrate the performance of LaBr{sub 3} without the additional benefit of TOF information, although our intention is to develop a scanner with TOF measurement capability. The only drawbacks of LaBr{sub 3} are the lower stopping power and photo-fraction, which affect both sensitivity and spatial resolution. However, in 3D PET imaging, where energy resolution is very important for reducing scattered coincidences in the reconstructed image, the image quality attained in a non-TOF LaBr{sub 3} scanner can potentially equal or surpass that achieved with other high sensitivity scanners. Our results show that there is a gain in NEC arising from the reduced scatter and random fractions in a LaBr{sub 3} scanner. The reconstructed image resolution is slightly worse than with a high-Z scintillator, but at increased count-rates, reduced pulse pileup leads to an image resolution similar to that of LSO. Image quality simulations predict reduced contrast for small hot spheres compared to an LSO scanner, but improved noise characteristics at similar clinical activity levels.
Ma, Jianzhong; Amos, Christopher I; Warwick Daw, E
2007-09-01
Although extended pedigrees are often sampled through probands with extreme levels of a quantitative trait, Markov chain Monte Carlo (MCMC) methods for segregation and linkage analysis have not been able to perform ascertainment corrections. Further, the extent to which ascertainment of pedigrees leads to biases in the estimation of segregation and linkage parameters has not been previously studied for MCMC procedures. In this paper, we studied these issues with a Bayesian MCMC approach for joint segregation and linkage analysis, as implemented in the package Loki. We first simulated pedigrees ascertained through individuals with extreme values of a quantitative trait in the spirit of the sequential sampling theory of Cannings and Thompson [Cannings and Thompson [1977] Clin. Genet. 12:208-212]. Using our simulated data, we detected no bias in estimates of the trait locus location. However, in addition to allele frequencies, when the ascertainment threshold was higher than or close to the true value of the highest genotypic mean, bias was also found in the estimation of this parameter. When there were multiple trait loci, this bias destroyed the additivity of the effects of the trait loci and caused biases in the estimation of all genotypic means when a purely additive model was used for analyzing the data. To account for pedigree ascertainment with sequential sampling, we developed a Bayesian ascertainment approach and implemented Metropolis-Hastings updates in the MCMC samplers used in Loki. Ascertainment correction greatly reduced biases in parameter estimates. Our method is designed for multiple, but a fixed number of, trait loci.
Monte Carlo analysis of a control technique for a tunable white lighting system
DEFF Research Database (Denmark)
Chakrabarti, Maumita; Thorseth, Anders; Jepsen, Jørgen
2017-01-01
A simulated colour control mechanism for a multi-coloured LED lighting system is presented. The system achieves adjustable and stable white light output and allows for system-to-system reproducibility after application of the control mechanism. The control unit works using a pre-calibrated lookup table for an experimentally realized system, with a calibrated tristimulus colour sensor. A Monte Carlo simulation is used to examine the system performance concerning the variation of luminous flux and chromaticity of the light output. The inputs to the Monte Carlo simulation are variations of the LED peak wavelength, the LED rated luminous flux bin, the influence of the operating conditions, ambient temperature, driving current, and the spectral response of the colour sensor. The system performance is investigated by evaluating the outputs from the Monte Carlo simulation. The outputs show...
Cluster Monte Carlo and numerical mean field analysis for the water liquid-liquid phase transition
Mazza, Marco G.; Stokely, Kevin; Strekalova, Elena G.; Stanley, H. Eugene; Franzese, Giancarlo
2009-04-01
Using Wolff's cluster Monte Carlo simulations and numerical minimization within a mean field approach, we study the low temperature phase diagram of water, adopting a cell model that reproduces the known properties of water in its fluid phases. Both methods allow us to study the thermodynamic behavior of water at temperatures where other numerical approaches - both Monte Carlo and molecular dynamics - are seriously hampered by the large increase of the correlation times. The cluster algorithm also allows us to emphasize that the liquid-liquid phase transition corresponds to the percolation transition of tetrahedrally ordered water molecules.
A spectral analysis of the domain decomposed Monte Carlo method for linear systems
Energy Technology Data Exchange (ETDEWEB)
Slattery, Stuart R., E-mail: slatterysr@ornl.gov [Oak Ridge National Laboratory, 1 Bethel Valley Road, Oak Ridge, TN 37831 (United States); Evans, Thomas M., E-mail: evanstm@ornl.gov [Oak Ridge National Laboratory, 1 Bethel Valley Road, Oak Ridge, TN 37831 (United States); Wilson, Paul P.H., E-mail: wilsonp@engr.wisc.edu [University of Wisconsin - Madison, 1500 Engineering Dr., Madison, WI 53706 (United States)
2015-12-15
The domain decomposed behavior of the adjoint Neumann-Ulam Monte Carlo method for solving linear systems is analyzed using the spectral properties of the linear operator. Relationships for the average length of the adjoint random walks, a measure of convergence speed and serial performance, are made with respect to the eigenvalues of the linear operator. In addition, relationships for the effective optical thickness of a domain in the decomposition are presented based on the spectral analysis and diffusion theory. Using the effective optical thickness, the Wigner rational approximation and the mean chord approximation are applied to estimate the leakage fraction of random walks from a domain in the decomposition as a measure of parallel performance and potential communication costs. The one-speed, two-dimensional neutron diffusion equation is used as a model problem in numerical experiments to test the models for symmetric operators with spectral qualities similar to light water reactor problems. In general, the derived approximations show good agreement with random walk lengths and leakage fractions computed by the numerical experiments.
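The random-walk solver analyzed above can be sketched in a few lines. The toy 2x2 system below is our own illustrative construction (not the paper's neutron diffusion model problem), and the sketch uses the forward (direct) variant of the Neumann-Ulam scheme: walks transition with probabilities given by the entries of a nonnegative H with row sums below 1, terminate with the remaining probability, and tally b over the visited states.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy system x = Hx + b; H is nonnegative with row sums < 1, so the
# Neumann series x = sum_k H^k b converges and walks terminate w.p. 1.
H = np.array([[0.1, 0.3],
              [0.2, 0.2]])
b = np.array([1.0, 2.0])

def mc_solve(H, b, n_walks=20000, max_steps=200):
    """Estimate each x_i by forward random walks (Neumann-Ulam scheme).

    From state s, move to state j with probability H[s, j] and terminate
    with probability 1 - sum_j H[s, j]; the estimator is the sum of b
    over all visited states, whose expectation solves x = Hx + b.
    """
    n = len(b)
    x = np.zeros(n)
    for i in range(n):
        total = 0.0
        for _ in range(n_walks):
            s, tally = i, b[i]
            for _ in range(max_steps):
                u = rng.random()
                cum = np.cumsum(H[s])
                j = int(np.searchsorted(cum, u))
                if j >= n:          # u fell past the row sum: walk absorbed
                    break
                s = j
                tally += b[s]
            total += tally
        x[i] = total / n_walks
    return x

x_mc = mc_solve(H, b)
x_exact = np.linalg.solve(np.eye(2) - H, b)   # deterministic reference
```

The average walk length here is short because the absorption probabilities are large; in the paper's terms, this corresponds to an operator whose spectral radius is well below one.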
Bardenet, R.
2012-01-01
ISBN:978-2-7598-1032-1; Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that allow one to compute these integrals numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among which are rejection sampling, importance sampling, and Markov chain Monte Carlo (MCMC) methods. We give intuition on the theoretic...
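Two of the reviewed algorithms can be sketched in a minimal toy example of our own (not from the review): both rejection sampling and self-normalized importance sampling estimate E[x^2] under an unnormalized target density, here a standard normal truncated to [-1, 1].

```python
import math
import random

random.seed(1)

# Target: unnormalized density p(x) = exp(-x^2/2) on [-1, 1]
# (a truncated standard normal). Both samplers estimate E[x^2] under p.

def rejection_sample(n):
    """Draw from p by proposing uniformly on [-1, 1] and accepting with
    probability exp(-x^2/2), i.e. p(x) over its bound M = 1 on the interval."""
    out = []
    while len(out) < n:
        x = random.uniform(-1.0, 1.0)
        if random.random() < math.exp(-x * x / 2):
            out.append(x)
    return out

def importance_estimate(n):
    """Self-normalized importance sampling with a uniform proposal:
    E[x^2] ~ sum(w_i x_i^2) / sum(w_i) with weights w_i = p(x_i)/q(x_i)."""
    num = den = 0.0
    for _ in range(n):
        x = random.uniform(-1.0, 1.0)
        w = math.exp(-x * x / 2)   # q is constant on [-1, 1], so it cancels
        num += w * x * x
        den += w
    return num / den

samples = rejection_sample(50000)
rej_est = sum(x * x for x in samples) / len(samples)
imp_est = importance_estimate(50000)
```

Both estimators converge to the same value (about 0.291 for this target); rejection sampling returns exact draws at the cost of discarded proposals, while importance sampling keeps every proposal but weights it.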
Iiyama, Taku; Hagi, Kousuke; Urushibara, Takafumi; Ozeki, Sumio
2009-01-01
The intermolecular structure of C(2)H(5)OH molecules confined in the slit-shaped graphitic micropores of activated carbon fiber was investigated by in situ X-ray diffraction (XRD) measurement and reverse Monte Carlo (RMC) analysis. The pseudo-3-dimensional intermolecular structure of C(2)H(5)OH adsorbed in the micropores was determined by applying the RMC analysis to the XRD data, assuming a simple slit-shaped space composed of double graphene sheets. The results were consistent with conventional Mont...
Dunn, William L
2012-01-01
Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle proble
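The famous Buffon's needle problem mentioned at the end of the abstract is the classic entry point to Monte Carlo; a minimal simulation (our own, not from the book) estimates pi from the crossing probability 2L/(pi d) for a needle of length L dropped on lines spaced d >= L apart.

```python
import math
import random

random.seed(7)

def buffon_pi(n_throws, needle_len=1.0, line_gap=2.0):
    """Estimate pi from Buffon's needle: the needle crosses a line with
    probability 2L / (pi * d), so pi ~ 2 * L * n / (d * hits)."""
    hits = 0
    for _ in range(n_throws):
        center = random.uniform(0.0, line_gap / 2)   # distance to nearest line
        theta = random.uniform(0.0, math.pi / 2)     # needle orientation
        if center <= (needle_len / 2) * math.sin(theta):
            hits += 1
    return 2 * needle_len * n_throws / (line_gap * hits)

pi_est = buffon_pi(200000)
```

With 200,000 throws the estimate is typically within a few hundredths of pi; the slow 1/sqrt(n) convergence is itself a standard teaching point of the example.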
Shimada, M.; Yamada, Y.; Itoh, M.; Yatagai, T.
2001-09-01
Measurement of melanin and blood concentration in human skin is needed in the medical and the cosmetic fields because human skin colour is mainly determined by the colours of melanin and blood. It is difficult to measure these concentrations in human skin because skin has a multi-layered structure and scatters light strongly throughout the visible spectrum. The Monte Carlo simulation currently used for the analysis of skin colour requires long calculation times and knowledge of the specific optical properties of each skin layer. A regression analysis based on the modified Beer-Lambert law is presented as a method of measuring melanin and blood concentration in human skin in a shorter period of time and with fewer calculations. The accuracy of this method is assessed using Monte Carlo simulations.
Computer program uses Monte Carlo techniques for statistical system performance analysis
Wohl, D. P.
1967-01-01
Computer program with Monte Carlo sampling techniques determines the effect of a component part of a unit upon the overall system performance. It utilizes the full statistics of the disturbances and misalignments of each component to provide unbiased results through simulated random sampling.
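The approach described, propagating full component statistics to system performance through simulated random sampling, can be sketched with a hypothetical example (a simple voltage divider; this is our illustration, not the original 1967 program).

```python
import random
import statistics

random.seed(3)

# Hypothetical system: a voltage divider Vout = Vin * R2 / (R1 + R2).
# Each component value is drawn from its tolerance distribution, and
# repeated random sampling yields unbiased output statistics.
def simulate(n_trials, vin=10.0, r1=1000.0, r2=1000.0, tol=0.05):
    outs = []
    for _ in range(n_trials):
        a = random.gauss(r1, tol * r1 / 3)   # 3-sigma at the tolerance limit
        b = random.gauss(r2, tol * r2 / 3)
        outs.append(vin * b / (a + b))
    return outs

outs = simulate(100000)
mean_v = statistics.fmean(outs)   # nominal output is 5.0 V
sd_v = statistics.stdev(outs)     # spread induced by component tolerances
```

Unlike worst-case stack-up, the sampled distribution shows how unlikely the tolerance extremes actually are in combination.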
A comparison of Bayesian and Monte Carlo sensitivity analysis for unmeasured confounding.
McCandless, Lawrence C; Gustafson, Paul
2017-04-06
Bias from unmeasured confounding is a persistent concern in observational studies, and sensitivity analysis has been proposed as a solution. In recent years, probabilistic sensitivity analysis using either Monte Carlo sensitivity analysis (MCSA) or Bayesian sensitivity analysis (BSA) has emerged as a practical analytic strategy when there are multiple bias parameter inputs. BSA uses Bayes' theorem to formally combine evidence from the prior distribution and the data. In contrast, MCSA samples bias parameters directly from the prior distribution. Intuitively, one would think that BSA and MCSA ought to give similar results. Both methods use similar models and the same (prior) probability distributions for the bias parameters. In this paper, we illustrate the surprising finding that BSA and MCSA can give very different results. Specifically, we demonstrate that MCSA can give inaccurate uncertainty assessments (e.g. 95% intervals) that do not reflect the data's influence on uncertainty about unmeasured confounding. Using a data example from epidemiology and simulation studies, we show that certain combinations of data and prior distributions can result in dramatic prior-to-posterior changes in uncertainty about the bias parameters. This occurs because the application of Bayes' theorem in a non-identifiable model can sometimes rule out certain patterns of unmeasured confounding that are not compatible with the data. Consequently, the MCSA approach may give 95% intervals that are either too wide or too narrow and that do not have 95% frequentist coverage probability. Based on our findings, we recommend that analysts use BSA for probabilistic sensitivity analysis. Copyright © 2017 John Wiley & Sons, Ltd.
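MCSA as described, sampling bias parameters directly from their priors without conditioning on the data, can be sketched as follows. The bias factor is the standard external-adjustment expression for a binary unmeasured confounder; the prior choices and the observed odds ratio are illustrative assumptions, not values from the paper.

```python
import math
import random

random.seed(11)

# Monte Carlo sensitivity analysis (MCSA) sketch for unmeasured confounding.
# Adjustment: OR_adj = OR_obs / B with bias factor
#   B = (RR_UD * p1 + 1 - p1) / (RR_UD * p0 + 1 - p0),
# where RR_UD is the confounder-outcome risk ratio and p1, p0 are the
# prevalences of the confounder among exposed/unexposed (all illustrative).
def mcsa_interval(or_obs, n_draws=50000):
    adjusted = []
    for _ in range(n_draws):
        rr_ud = math.exp(random.gauss(math.log(2.0), 0.2))  # prior on RR_UD
        p1 = random.betavariate(6, 4)                       # prevalence, exposed
        p0 = random.betavariate(4, 6)                       # prevalence, unexposed
        bias = (rr_ud * p1 + 1 - p1) / (rr_ud * p0 + 1 - p0)
        adjusted.append(or_obs / bias)
    adjusted.sort()
    return (adjusted[int(0.025 * n_draws)],    # 2.5th percentile
            adjusted[int(0.975 * n_draws)])    # 97.5th percentile

lo, hi = mcsa_interval(or_obs=1.8)
```

Note how the interval depends only on the priors and the point estimate: this is exactly the feature the paper criticizes, since BSA would additionally let the data reshape the distribution of the bias parameters.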
Mountris, K. A.; Bert, J.; Noailly, J.; Rodriguez Aguilera, A.; Valeri, A.; Pradier, O.; Schick, U.; Promayon, E.; Gonzalez Ballester, M. A.; Troccaz, J.; Visvikis, D.
2017-03-01
Prostate volume changes due to edema occurrence during transperineal permanent brachytherapy should be taken into consideration to ensure optimal dose delivery. Available edema models, based on prostate volume observations, face several limitations. Therefore, patient-specific models need to be developed to accurately account for the impact of edema. In this study we present a biomechanical model developed to reproduce edema resolution patterns documented in the literature. Using the biphasic mixture theory and finite element analysis, the proposed model takes into consideration the mechanical properties of the pubic area tissues in the evolution of prostate edema. The model's computed deformations are incorporated into a Monte Carlo simulation to investigate their effect on post-operative dosimetry. The comparison of Day1 and Day30 dosimetry results demonstrates the capability of the proposed model for patient-specific dosimetry improvements, considering the edema dynamics. The proposed model shows excellent ability to reproduce previously described edema resolution patterns and was validated based on previous findings. According to our results, for a prostate volume increase of 10–20% the Day30 urethra D10 dose metric is higher by 4.2%–10.5% compared to the Day1 value. The introduction of the edema dynamics into Day30 dosimetry reveals a significant global dose overestimation in the conventional static Day30 dosimetry. In conclusion, the proposed edema biomechanical model can improve the treatment planning of transperineal permanent brachytherapy by accounting for post-implant dose alterations during the planning procedure.
Lai, Bo-Lun; Sheu, Rong-Jiun; Lin, Uei-Tyng
2015-05-01
Monte Carlo simulations are generally considered the most accurate method for complex accelerator shielding analysis. Simplified models based on point-source line-of-sight approximation are often preferable in practice because they are intuitive and easy to use. A set of shielding data, including source terms and attenuation lengths for several common targets (iron, graphite, tissue, and copper) and shielding materials (concrete, iron, and lead) were generated by performing Monte Carlo simulations for 100-300 MeV protons. Possible applications and a proper use of the data set were demonstrated through a practical case study, in which shielding analysis on a typical proton treatment room was conducted. A thorough and consistent comparison between the predictions of our point-source line-of-sight model and those obtained by Monte Carlo simulations for a 360° dose distribution around the room perimeter showed that the data set can yield fairly accurate or conservative estimates for the transmitted doses, except for those near the maze exit. In addition, this study demonstrated that appropriate coupling between the generated source term and empirical formulae for radiation streaming can be used to predict a reasonable dose distribution along the maze. This case study proved the effectiveness and advantage of applying the data set to a quick shielding design and dose evaluation for proton therapy accelerators.
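The point-source line-of-sight approximation described above can be sketched in a few lines; the source term, attenuation length, and geometry below are illustrative placeholders, not values from the generated data set.

```python
import math

# Point-source line-of-sight sketch: transmitted dose rate behind a shield,
#   H(d, r) = H0 * exp(-d_slant / lam) / r**2,
# where H0 is the source term normalized to 1 m, d_slant the slant shield
# thickness along the ray, lam the attenuation length in the shield material,
# and r the source-to-point distance. All numbers are illustrative.
def dose_rate(h0, lam, shield_thickness, r, angle_deg):
    """Transmitted dose rate for a ray crossing a flat shield at an angle."""
    slant = shield_thickness / math.cos(math.radians(angle_deg))
    return h0 * math.exp(-slant / lam) / r**2

# 2 m of shield at normal incidence, scoring point 5 m from the target
h_normal = dose_rate(h0=1.0e4, lam=0.45, shield_thickness=2.0, r=5.0,
                     angle_deg=0.0)
# same geometry at 45 degrees: longer slant path, lower transmission
h_oblique = dose_rate(h0=1.0e4, lam=0.45, shield_thickness=2.0, r=5.0,
                      angle_deg=45.0)
```

The appeal noted in the abstract is visible here: the model is a single closed-form expression per ray, against hours of Monte Carlo transport, at the price of ignoring scattered paths such as those dominating near a maze exit.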
Hoffmann, Max J.; Engelmann, Felix; Matera, Sebastian
2017-01-01
Lattice kinetic Monte Carlo simulations have become a vital tool for predictive-quality atomistic understanding of complex surface chemical reaction kinetics over a wide range of reaction conditions. In order to expand their practical value in terms of giving guidelines for the atomic level design of catalytic systems, it is very desirable to readily evaluate a sensitivity analysis for a given model. The result of such a sensitivity analysis quantitatively expresses the dependency of the turnover frequency, being the main output variable, on the rate constants entering the model. In the past, the application of sensitivity analysis, such as degree of rate control, has been hampered by the exorbitant computational effort required to accurately sample numerical derivatives of a property that is obtained from a stochastic simulation method. In this study, we present an efficient and robust three-stage approach that is capable of reliably evaluating the sensitivity measures for stiff microkinetic models, as we demonstrate using CO oxidation on RuO2(110) as a prototypical reaction. In the first step, we utilize the Fisher information matrix for filtering out elementary processes which only yield negligible sensitivity. Then we employ an estimator based on linear response theory for calculating the sensitivity measure for non-critical conditions, which covers the majority of cases. Finally, we adapt a method for sampling coupled finite differences for evaluating the sensitivity measure for lattice-based models. This allows for an efficient evaluation even in critical regions near a second-order phase transition that are hitherto difficult to handle. The combined approach leads to significant computational savings over straightforward numerical derivatives and should aid in accelerating the nano-scale design of heterogeneous catalysts.
A Monte Carlo approach to Beryllium-7 solar neutrino analysis with KamLAND
Grant, Christopher Peter
Terrestrial measurements of neutrinos produced by the Sun have been of great interest for over half a century because of their ability to test the accuracy of solar models. The first solar neutrinos detected with KamLAND provided a measurement of the 8B solar neutrino interaction rate above an analysis threshold of 5.5 MeV. This work describes efforts to extend KamLAND's detection sensitivity to solar neutrinos below 1 MeV, more specifically, those produced with an energy of 0.862 MeV by the 7Be electron-capture decay. Many of the difficulties in measuring solar neutrinos below 1 MeV arise from abundant backgrounds caused by both naturally occurring and man-made radioactive nuclides. The primary nuclides of concern were 210Bi, 85Kr, and 39Ar. Since May of 2007, the KamLAND experiment has undergone two separate purification campaigns. During both campaigns a total of 5.4 ktons (about 6440 m^3) of scintillator was circulated through a purification system, which utilized fractional distillation and nitrogen purging. After the purification campaign, reduction factors of 1.5 x 10^3 for 210Bi and 6.5 x 10^4 for 85Kr were observed. The reduction of the backgrounds provided a unique opportunity to observe the 7Be solar neutrino rate in KamLAND. Such an observation required detailed knowledge of the detector response at low energies, and to accomplish this, a full detector Monte Carlo simulation, called KLG4sim, was utilized. The optical model of the simulation was tuned to match the detector response observed in data after purification, and the software was optimized for the simulation of internal backgrounds used in the 7Be solar neutrino analysis. The results of this tuning and estimates from simulations of the internal backgrounds and external backgrounds caused by radioactivity on the detector components are presented. The first KamLAND analysis based on Monte Carlo simulations in the energy region below 2 MeV is shown here. The comparison of the chi2 between the null
Performance Analysis of Korean Liquid metal type TBM based on Monte Carlo code
Energy Technology Data Exchange (ETDEWEB)
Kim, C. H.; Han, B. S.; Park, H. J.; Park, D. K. [Seoul National Univ., Seoul (Korea, Republic of)
2007-01-15
The objective of this project is to analyze the nuclear performance of the Korean HCML (Helium Cooled Molten Lithium) TBM (Test Blanket Module), which will be installed in ITER (International Thermonuclear Experimental Reactor). The neutronic design and nuclear performance of the Korean HCML ITER TBM are analyzed through transport calculations with MCCARD. In detail, we conduct numerical experiments to analyze the neutronic design of the Korean HCML TBM and the DEMO fusion blanket and to improve their nuclear performance. The results of the numerical experiments performed in this project will be further utilized for design optimization of the Korean HCML TBM. In this project, Monte Carlo transport calculations evaluating the TBR (tritium breeding ratio) and EMF (energy multiplication factor) were conducted to analyze the nuclear performance of the Korean HCML TBM. The activation characteristics and shielding performance of the Korean HCML TBM were analyzed using ORIGEN and MCCARD. We proposed neutronic methodologies for analyzing the nuclear characteristics of the fusion blanket, which were applied to the blanket analysis of a DEMO fusion reactor. In the results, the TBR of the Korean HCML ITER TBM is 0.1352 and the EMF is 1.362. Taking into account the limitation on the Li amount in the ITER TBM, it is expected that the tritium self-sufficiency condition can be satisfied through a change of the Li quantity and enrichment. In the activation and shielding analysis, the activity drops to 1.5% of its initial value and the decay heat to 0.02% of its initial amount 10 years after plasma shutdown.
Uncertainty analysis in the simulation of an HPGe detector using the Monte Carlo Code MCNP5
Energy Technology Data Exchange (ETDEWEB)
Gallardo, Sergio; Pozuelo, Fausto; Querol, Andrea; Verdu, Gumersindo; Rodenas, Jose, E-mail: sergalbe@upv.es [Universitat Politecnica de Valencia, Valencia, (Spain). Instituto de Seguridad Industrial, Radiofisica y Medioambiental (ISIRYM); Ortiz, J. [Universitat Politecnica de Valencia, Valencia, (Spain). Servicio de Radiaciones. Lab. de Radiactividad Ambiental; Pereira, Claubia [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Departamento de Engenharia Nuclear
2013-07-01
A gamma spectrometer including an HPGe detector is commonly used for environmental radioactivity measurements. Many works have focused on the simulation of the HPGe detector using Monte Carlo codes such as MCNP5. However, the simulation of this kind of detector presents important difficulties due to the lack of information from manufacturers and the loss of intrinsic properties in aging detectors. Some parameters, such as the active volume or the Ge dead layer thickness, are often unknown and are estimated during simulations. In this work, a detailed model of an HPGe detector and a Petri dish containing a certified gamma source has been developed. The certified gamma source contains nuclides covering the energy range between 50 and 1800 keV. As a result of the simulation, the Pulse Height Distribution (PHD) is obtained and the efficiency curve can be calculated from net peak areas, taking into account the certified activity of the source. In order to avoid errors due to the net area calculation, the simulated PHD is treated using the GammaVision software. On the other hand, it is proposed to use the Noether-Wilks formula to perform an uncertainty analysis of the model with the main goal of determining the efficiency curve of this detector and its associated uncertainty. The uncertainty analysis has been focused on the dead layer thickness at different positions of the crystal. Results confirm the important role of the dead layer thickness in the low energy range of the efficiency curve. In the high energy range (from 300 to 1800 keV) the main contribution to the absolute uncertainty is due to variations in the active volume. (author)
Spray cooling simulation implementing time scale analysis and the Monte Carlo method
Kreitzer, Paul Joseph
Spray cooling research is advancing the field of heat transfer and heat rejection in high power electronics. Smaller and more capable electronics packages are producing higher amounts of waste heat, along with smaller external surface areas, and the use of active cooling is becoming a necessity. Spray cooling has shown extremely high levels of heat rejection, of up to 1000 W/cm{sup 2} using water. Simulations of spray cooling are becoming more realistic, but this comes at a price. A previous researcher has used CFD to successfully model a single 3D droplet impact into a liquid film using the level set method. However, the complicated multiphysics occurring during spray impingement and surface interactions increases computation time to more than 30 days. Parallel processing on a 32-processor system has reduced this time tremendously, but still requires more than a day. The present work uses experimental and computational results in addition to numerical correlations representing the physics occurring on a heated impingement surface. The current model represents the spray behavior of a Spraying Systems FullJet 1/8-g spray nozzle. Typical spray characteristics are indicated as follows: flow rate of 1.05 x 10{sup -5} m{sup 3}/s, normal droplet velocity of 12 m/s, droplet Sauter mean diameter of 48 {mu}m, and heat flux values ranging from approximately 50--100 W/cm{sup 2}. This produces non-dimensional numbers of: We 300--1350, Re 750--3500, Oh 0.01--0.025. Numerical and experimental correlations have been identified representing crater formation, splashing, film thickness, droplet size, and spatial flux distributions. A combination of these methods has resulted in a Monte Carlo spray impingement simulation model capable of simulating hundreds of thousands of droplet impingements or approximately one millisecond. A random sequence of droplet impingement locations and diameters is generated, with the proper radial spatial distribution and diameter distribution. Hence the impingement, lifetime
Monte-Carlo based Uncertainty Analysis For CO2 Laser Microchanneling Model
Prakash, Shashi; Kumar, Nitish; Kumar, Subrata
2016-09-01
CO2 laser microchanneling has emerged as a potential technique for the fabrication of microfluidic devices on PMMA (Poly-methyl-meth-acrylate). PMMA directly vaporizes when subjected to a high intensity focused CO2 laser beam. This process results in a clean cut and acceptable surface finish on microchannel walls. Overall, the CO2 laser microchanneling process is cost effective and easy to implement. While fabricating microchannels on PMMA using a CO2 laser, the maximum depth of the fabricated microchannel is the key feature. A few analytical models are available to predict the maximum depth of the microchannels and the cut channel profile on a PMMA substrate using a CO2 laser. These models depend upon the values of the thermophysical properties of PMMA and the laser beam parameters. There are a number of variants of transparent PMMA available in the market with different values of thermophysical properties. Therefore, for applying such analytical models, the values of these thermophysical properties are required to be known exactly. Although the values of laser beam parameters are readily available, extensive experiments are required to determine the values of the thermophysical properties of PMMA. The unavailability of exact values of these property parameters restricts proper control over the microchannel dimension for a given power and scanning speed of the laser beam. In order to have dimensional control over the maximum depth of fabricated microchannels, it is necessary to have an idea of the uncertainty associated with the predicted microchannel depth. In this research work, the uncertainty associated with the maximum depth dimension has been determined using the Monte Carlo method (MCM). The propagation of uncertainty with different power and scanning speed has been predicted. The relative impact of each thermophysical property has been determined using sensitivity analysis.
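The Monte Carlo propagation of thermophysical-property uncertainty can be sketched as follows. The simple energy-balance depth model and all property distributions below are illustrative assumptions of ours, not the paper's actual model or measured PMMA data.

```python
import random
import statistics

random.seed(5)

# Monte Carlo (GUM-style) uncertainty propagation sketch. Assumed model:
#   depth = P / (rho * w * v * (c_p * dT + L_v))
# with laser power P, scan speed v, fixed channel width w, density rho,
# specific heat c_p, temperature rise to vaporization dT, latent heat L_v.
# Each uncertain property is sampled from an assumed distribution.
def sample_depth(power, speed):
    rho = random.gauss(1180.0, 20.0)    # density, kg/m^3 (illustrative)
    cp = random.gauss(1450.0, 50.0)     # specific heat, J/(kg K)
    lv = random.gauss(1.0e6, 5.0e4)     # latent heat of vaporization, J/kg
    dt = random.gauss(340.0, 10.0)      # rise to vaporization temp, K
    width = 200e-6                      # channel width, m (held fixed)
    return power / (rho * width * speed * (cp * dt + lv))

depths = [sample_depth(power=3.0, speed=0.05) for _ in range(50000)]
mean_d = statistics.fmean(depths)       # mean predicted depth, m
u_d = statistics.stdev(depths)          # standard uncertainty of the depth
rel_u = u_d / mean_d                    # relative uncertainty
```

Sorting the sampled depths would likewise give a coverage interval directly, which is the usual advantage of MCM over linearized propagation when the model is nonlinear in its inputs.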
PROMSAR: A backward Monte Carlo spherical RTM for the analysis of DOAS remote sensing measurements
Palazzi, E.; Petritoli, A.; Giovanelli, G.; Kostadinov, I.; Bortoli, D.; Ravegnani, F.; Sackey, S. S.
A correct interpretation of diffuse solar radiation measurements made by Differential Optical Absorption Spectroscopy (DOAS) remote sensors requires the use of radiative transfer models of the atmosphere. The simplest models consider radiation scattering in the atmosphere as a single scattering process. More realistic atmospheric models are those which consider multiple scattering, and their application is useful and essential for the analysis of zenith and off-axis measurements regarding the lowest layers of the atmosphere, such as the boundary layer. These layers are characterized by the highest values of air density and quantities of particles and aerosols acting as scattering nuclei. A new atmospheric model, PROcessing of Multi-Scattered Atmospheric Radiation (PROMSAR), which includes multiple Rayleigh and Mie scattering, has recently been developed at ISAC-CNR. It is based on a backward Monte Carlo technique, which is very suitable for studying the various interactions taking place in a complex and non-homogeneous system like the terrestrial atmosphere. The PROMSAR code calculates the mean path of the radiation within each layer into which the atmosphere is sub-divided, taking into account the large variety of processes that solar radiation undergoes during propagation through the atmosphere. This quantity is then employed to work out the Air Mass Factor (AMF) of several trace gases, to simulate their slant column amounts in zenith and off-axis configurations, and to calculate the weighting functions from which information about the vertical distribution of each gas is obtained using inversion methods. Results from the model, simulations, and comparisons with actual slant column measurements are presented and discussed.
A Bayesian analysis of rare B decays with advanced Monte Carlo methods
Energy Technology Data Exchange (ETDEWEB)
Beaujean, Frederik
2012-11-12
Searching for new physics in rare B meson decays governed by b {yields} s transitions, we perform a model-independent global fit of the short-distance couplings C{sub 7}, C{sub 9}, and C{sub 10} of the {Delta}B=1 effective field theory. We assume the standard-model set of b {yields} s{gamma} and b {yields} sl{sup +}l{sup -} operators with real-valued C{sub i}. A total of 59 measurements by the experiments BaBar, Belle, CDF, CLEO, and LHCb of observables in B{yields}K{sup *}{gamma}, B{yields}K{sup (*)}l{sup +}l{sup -}, and B{sub s}{yields}{mu}{sup +}{mu}{sup -} decays are used in the fit. Our analysis is the first of its kind to harness the full power of the Bayesian approach to probability theory. All main sources of theory uncertainty explicitly enter the fit in the form of nuisance parameters. We make optimal use of the experimental information to simultaneously constrain the Wilson coefficients as well as hadronic form factors - the dominant theory uncertainty. Generating samples from the posterior probability distribution to compute marginal distributions and predict observables by uncertainty propagation is a formidable numerical challenge for two reasons. First, the posterior has multiple well separated maxima and degeneracies. Second, the computation of the theory predictions is very time consuming. A single posterior evaluation requires O(1 s), and a few million evaluations are needed. Population Monte Carlo (PMC) provides a solution to both issues; a mixture density is iteratively adapted to the posterior, and samples are drawn in a massively parallel way using importance sampling. The major shortcoming of PMC is the need for cogent knowledge of the posterior at the initial stage. In an effort towards a general black-box Monte Carlo sampling algorithm, we present a new method to extract the necessary information in a reliable and automatic manner from Markov chains with the help of hierarchical clustering. Exploiting the latest 2012 measurements, the fit
Directory of Open Access Journals (Sweden)
Abraão Freires Saraiva Júnior
2011-03-01
The use of mathematical and statistical methods can help managers deal with decision-making difficulties in the business environment. Some of these decisions are related to optimizing productive capacity in order to obtain greater economic gains for the company. Within this perspective, this study aims to establish metrics to support the economic decision of whether or not to accept orders in a company whose products have great variability in variable direct costs per unit, which generates accounting uncertainties. To achieve this objective, a five-step method is proposed, built from the integration of management accounting and operations research techniques, with emphasis on Monte Carlo simulation. The method is applied to a didactic example using real data obtained through field research carried out in a plastic products company that employs recycled material. Finally, it is concluded that Monte Carlo simulation is effective for treating the variability of variable direct costs per unit and that the proposed method is useful to support decision-making related to order acceptance.
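The Monte Carlo treatment of unit cost variability can be sketched as a contribution-margin simulation; all figures below are illustrative placeholders, not the field-research data from the study.

```python
import random
import statistics

random.seed(9)

# Order-acceptance sketch: simulate the order's contribution margin when
# the variable direct cost per unit is uncertain (triangular distribution),
# then report the expected margin and the probability that accepting the
# order is economically favorable. All figures are illustrative.
def simulate_margin(price, qty, cost_low, cost_mode, cost_high, n=50000):
    margins = []
    for _ in range(n):
        # random.triangular takes (low, high, mode)
        unit_cost = random.triangular(cost_low, cost_high, cost_mode)
        margins.append(qty * (price - unit_cost))
    return margins

margins = simulate_margin(price=12.0, qty=1000, cost_low=9.0,
                          cost_mode=11.0, cost_high=14.0)
expected_margin = statistics.fmean(margins)
p_profitable = sum(m > 0 for m in margins) / len(margins)
```

A decision metric of the kind the abstract describes could then be a threshold on `p_profitable`, so that an order is accepted only when the simulated probability of a positive margin is sufficiently high.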
Monte Carlo analysis of the accelerator-driven system at Kyoto University Research Reactor Institute
Energy Technology Data Exchange (ETDEWEB)
Kim, Won Kyeong; Lee, Deok Jung [Nuclear Engineering Division, Ulsan National Institute of Science and Technology, Ulsan (Korea, Republic of); Lee, Hyun Chul [VHTR Technology Development Division, Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Pyeon, Cheol Ho [Nuclear Engineering Science Division, Kyoto University Research Reactor Institute, Osaka (Japan); Shin, Ho Cheol [Core and Fuel Analysis Group, Korea Hydro and Nuclear Power Central Research Institute, Daejeon (Korea, Republic of)
2016-04-15
An accelerator-driven system consists of a subcritical reactor and a controllable external neutron source. The reactor in an accelerator-driven system can sustain fission reactions in a subcritical state using an external neutron source, which is an intrinsic safety feature of the system. The system can provide efficient transmutations of nuclear wastes such as minor actinides and long-lived fission products and generate electricity. Recently at Kyoto University Research Reactor Institute (KURRI; Kyoto, Japan), a series of reactor physics experiments was conducted with the Kyoto University Critical Assembly and a Cockcroft-Walton type accelerator, which generates the external neutron source by deuterium-tritium reactions. In this paper, neutronic analyses of the series of experiments have been re-evaluated using the latest Monte Carlo code and nuclear data libraries. This feasibility study is presented through the comparison of Monte Carlo simulation results with measurements.
Evaluation of CASMO-3 and HELIOS for Fuel Assembly Analysis from Monte Carlo Code
Energy Technology Data Exchange (ETDEWEB)
Shim, Hyung Jin; Song, Jae Seung; Lee, Chung Chan
2007-05-15
This report presents a study comparing deterministic lattice physics calculations with Monte Carlo calculations for LWR fuel pin and assembly problems. The study focuses on comparing results from the lattice physics codes CASMO-3 and HELIOS against those from the continuous-energy Monte Carlo code McCARD. The comparisons include k{sub inf}, isotopic number densities, and pin power distributions. The CASMO-3 and HELIOS calculations for the k{sub inf}'s of the LWR fuel pin problems show good agreement with McCARD, within 956 pcm and 658 pcm, respectively. For the assembly problems with gadolinia burnable poison rods, the largest differences between the k{sub inf}'s are 1463 pcm with CASMO-3 and 1141 pcm with HELIOS. The RMS errors for the pin power distributions of CASMO-3 and HELIOS are within 1.3% and 1.5%, respectively.
An analysis on the theory of pulse oximetry by Monte Carlo simulation
Fan, Shangchun; Cai, Rui; Xing, Weiwei; Liu, Changting; Chen, Guangfei; Wang, Junfeng
2008-10-01
The pulse oximeter is an electronic instrument that measures the oxygen saturation of arterial blood and the pulse rate by non-invasive techniques, enabling prompt recognition of hypoxemia. In a conventional transmittance-type pulse oximeter, the absorption of light by oxygenated and reduced hemoglobin is measured at two wavelengths, 660 nm and 940 nm. However, the accuracy and measuring range of the pulse oximeter cannot meet the requirements of clinical application. There are limitations in the theory of pulse oximetry, which is demonstrated here by the Monte Carlo method. The mean optical paths are calculated in the Monte Carlo simulation, and the results show that the mean paths are not the same at the two wavelengths.
Monte Carlo Analysis of the Accelerator-Driven System at Kyoto University Research Reactor Institute
Directory of Open Access Journals (Sweden)
Wonkyeong Kim
2016-04-01
An accelerator-driven system consists of a subcritical reactor and a controllable external neutron source. The reactor in an accelerator-driven system can sustain fission reactions in a subcritical state using an external neutron source, which is an intrinsic safety feature of the system. The system can provide efficient transmutations of nuclear wastes such as minor actinides and long-lived fission products and generate electricity. Recently at Kyoto University Research Reactor Institute (KURRI; Kyoto, Japan), a series of reactor physics experiments was conducted with the Kyoto University Critical Assembly and a Cockcroft–Walton type accelerator, which generates the external neutron source by deuterium–tritium reactions. In this paper, neutronic analyses of a series of experiments have been re-estimated by using the latest Monte Carlo code and nuclear data libraries. This feasibility study is presented through the comparison of Monte Carlo simulation results with measurements.
Institute of Scientific and Technical Information of China (English)
雷咏梅; 蒋英; 冯捷
2002-01-01
This paper presents a new approach to parallelizing 3D lattice Monte Carlo algorithms used in the numerical simulation of polymers on ZiQiang 2000, a cluster of symmetric multiprocessors (SMPs). The combined load for cell and energy calculations over a time step is balanced to form a single spatial decomposition. Basic aspects and strategies of running Monte Carlo calculations on parallel computers are studied, and the steps involved in porting the software to a parallel architecture based on ZiQiang 2000 running under Linux and MPI are described briefly. It is found that parallelization becomes more advantageous when either the lattice is very large or the model contains many cells and chains.
Monte Carlo analysis of Gunn oscillations in narrow and wide band-gap asymmetric nanodiodes
González, T.; Iñiguez-de-la Torre, I.; Pardo, D.; Mateos, J.; Song, A. M.
2009-11-01
By means of Monte Carlo simulations we show the feasibility of asymmetric nonlinear planar nanodiodes for the development of Gunn oscillations. For channel lengths about 1 μm, oscillation frequencies around 100 GHz are predicted in InGaAs diodes, being significantly higher, around 400 GHz, in the case of GaN structures. The DC to AC conversion efficiency is found to be higher than 1% for the fundamental and second harmonic frequencies in GaN diodes.
MONTE CARLO ANALYSIS FOR PREDICTION OF NOISE FROM A CONSTRUCTION SITE
Directory of Open Access Journals (Sweden)
Zaiton Haron
2009-06-01
The large number of operations involving noisy machinery associated with construction site activities results in considerable variation in the noise levels experienced at receiver locations. This paper suggests an approach to predicting the noise levels generated by a site using a Monte Carlo method. The approach enables the determination of the statistical uncertainties associated with noise level predictions and their temporal distributions, and could provide the basis for a generalised prediction technique and a simple noise management tool.
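The kind of Monte Carlo prediction the abstract describes can be sketched as follows: sample each plant item's contribution at the receiver from an assumed distribution, combine the levels on an energy basis, and read statistics off the ensemble. All source levels and spreads below are invented for illustration.

```python
import math
import random

rng = random.Random(0)

# Hypothetical plant items: (mean level at receiver in dB(A), standard deviation)
sources = [(78.0, 3.0), (74.0, 4.0), (70.0, 2.5)]

def combined_level(rng):
    # Sound pressure levels combine on an energy basis, not arithmetically
    energy = sum(10 ** (rng.gauss(mu, sd) / 10) for mu, sd in sources)
    return 10 * math.log10(energy)

samples = sorted(combined_level(rng) for _ in range(20_000))
mean = sum(samples) / len(samples)
p95 = samples[int(0.95 * len(samples))]
print(f"mean {mean:.1f} dB(A), 95th percentile {p95:.1f} dB(A)")
```

The percentile of the ensemble, rather than a single deterministic sum, is what gives the "statistical uncertainty" on the predicted receiver level.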
Monte Carlo analysis of a low power domino gate under parameter fluctuation
Institute of Scientific and Technical Information of China (English)
Wang Jinhui; Wu Wuchen; Gong Na; Hou Ligang; Peng Xiaohong; Gao Daming
2009-01-01
Using a multiple-parameter Monte Carlo method, the effectiveness of the dual threshold voltage technique (DTV) in low-power domino logic design is analyzed. Simulation results indicate that, under significant temperature and process fluctuations, DTV remains highly effective in reducing the total leakage and active power consumption of domino gates, at the cost of some speed loss. Regarding power and delay characteristics, domino gates of different structures with DTV show different robustness against temperature and process fluctuations.
A Markov chain Monte Carlo method family in incomplete data analysis
Directory of Open Access Journals (Sweden)
Vasić Vladimir V.
2003-01-01
A Markov chain Monte Carlo method family is a collection of techniques for pseudorandom draws from a probability distribution function. In recent years, these techniques have been the subject of intensive interest among statisticians. Roughly speaking, the essence of a Markov chain Monte Carlo method is generating one or more values of a random variable Z, which is usually multidimensional. Let p(Z) = f(Z) denote the density function of the random variable Z, which we will refer to as the target distribution. Instead of sampling directly from the distribution f, we generate a sequence [Z(1), Z(2), ..., Z(t), ...], in which each value depends on the previous one and whose stationary distribution is the target distribution. For a sufficiently large value of t, Z(t) is approximately a random draw from the distribution f. A Markov chain Monte Carlo method family is useful when direct sampling is difficult, but sampling of each successive value is not.
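A minimal sketch of the idea described above, generating a chain Z(1), Z(2), ... whose stationary distribution is the target f, is the Metropolis algorithm with a symmetric random-walk proposal. The Gaussian target and the step size here are illustrative choices, not taken from the article.

```python
import math
import random

def metropolis(f, z0, n, step, rng):
    """Generate a Markov chain Z(1), Z(2), ... whose stationary
    distribution is proportional to the target density f."""
    chain, z = [], z0
    fz = f(z)
    for _ in range(n):
        zp = z + rng.uniform(-step, step)   # symmetric random-walk proposal
        fp = f(zp)
        if rng.random() < fp / fz:          # Metropolis acceptance rule
            z, fz = zp, fp
        chain.append(z)                     # rejected moves repeat the state
    return chain

rng = random.Random(1)
target = lambda z: math.exp(-0.5 * z * z)   # unnormalized standard normal
chain = metropolis(target, 0.0, 50_000, 1.5, rng)
burned = chain[5_000:]                      # discard burn-in before t is "large"
mean = sum(burned) / len(burned)
var = sum((z - mean) ** 2 for z in burned) / len(burned)
print(f"mean {mean:.3f}, variance {var:.3f}")
```

Note that only ratios f(zp)/f(z) are needed, which is exactly why MCMC works when the target density is known only up to a normalizing constant.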
Monte Carlo-based multiphysics coupling analysis of x-ray pulsar telescope
Li, Liansheng; Deng, Loulou; Mei, Zhiwu; Zuo, Fuchang; Zhou, Hao
2015-10-01
An X-ray pulsar telescope (XPT) is a complex optical payload involving the optical, mechanical, electrical and thermal disciplines. Multiphysics coupling analysis (MCA) plays an important role in improving its in-orbit performance. However, conventional MCA methods encounter two serious problems when dealing with an XPT. One is that the energy and reflectivity information of the X-rays cannot be taken into consideration, which misrepresents the physics of the XPT. The other is that the coupling data cannot be transferred automatically among the different disciplines, leading to computational inefficiency and high design cost. Therefore, a new MCA method for XPTs is proposed, based on the Monte Carlo method and total-reflection theory. The main idea, procedures and operational steps of the proposed method are addressed in detail. First, the method takes both the energy and reflectivity information of the X-rays into consideration simultaneously; the thermal-structural coupling equation and the multiphysics coupling analysis model are formulated based on the finite element method, and the thermal-structural coupling analysis under different working conditions is carried out. Second, the mirror deformations are obtained using a construction geometry function, a polynomial function is adopted to fit the deformed mirror, and the fitting error is evaluated. Third, the focusing performance of the XPT is evaluated by the RMS of the dispersion spot. Finally, a Wolter-I XPT is taken as an example to verify the proposed MCA method. The simulation results show that the thermal-structural coupling deformation is larger than the others, and the law governing the effect of deformation on the focusing performance is obtained: the focusing performances under thermal-structural, thermal and structural deformations degrade by 30.01%, 14.35% and 7.85%, respectively, and the RMS values of the dispersion spot are 2.9143 mm, 2.2038 mm and 2.1311 mm. As a result, the validity of the proposed method is verified through
Energy Technology Data Exchange (ETDEWEB)
Cramer, S.N.
1984-01-01
The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.
Monte Carlo homogenized limit analysis model for randomly assembled blocks in-plane loaded
Milani, Gabriele; Lourenço, Paulo B.
2010-11-01
A simple rigid-plastic homogenization model for the limit analysis of masonry walls loaded in-plane and constituted by a random assemblage of blocks with variable dimensions is proposed. In the model, the blocks constituting a masonry wall are assumed infinitely resistant, with a Gaussian distribution of height and length, whereas the joints are reduced to interfaces with frictional behavior and limited tensile and compressive strength. Block by block, a representative element of volume (REV) is considered, constituted by a central block interconnected with its neighbors by rigid-plastic interfaces. The model is characterized by a few material parameters, is numerically inexpensive and is very stable. A sub-class of elementary deformation modes is chosen a priori in the REV, mimicking typical failures due to joint cracking and crushing. Masonry strength domains are obtained by equating the power dissipated in the heterogeneous model with the power dissipated by a fictitious homogeneous macroscopic plate. Thanks to the inexpensiveness of the proposed approach, Monte Carlo simulations can be repeated on the REV in order to obtain a stochastic estimate of in-plane masonry strength at different orientations of the bed joints with respect to the external loads, accounting for the statistical variability of the block dimensions. Two cases are discussed: the former consists of fully stochastic REV assemblages (obtained by considering a random variability of both block height and length), while the latter assumes the presence of a horizontal alignment along the bed joints, i.e. allowing block height variability only row by row. The case of deterministic block height (quasi-periodic texture) can be obtained as a subclass of this latter case. The masonry homogenized failure surfaces are finally implemented in an upper-bound FE limit analysis code for the analysis at collapse of entire walls loaded in-plane. Two cases of engineering practice, consisting of the prediction of the failure
Predictive uncertainty analysis of a saltwater intrusion model using null-space Monte Carlo
Herckenrath, Daan; Langevin, Christian D.; Doherty, John
2011-01-01
Because of the extensive computational burden and perhaps a lack of awareness of existing methods, rigorous uncertainty analyses are rarely conducted for variable-density flow and transport models. For this reason, a recently developed null-space Monte Carlo (NSMC) method for quantifying prediction uncertainty was tested for a synthetic saltwater intrusion model patterned after the Henry problem. Saltwater intrusion caused by a reduction in fresh groundwater discharge was simulated for 1000 randomly generated hydraulic conductivity distributions, representing a mildly heterogeneous aquifer. From these 1000 simulations, the hydraulic conductivity distribution giving rise to the most extreme case of saltwater intrusion was selected and was assumed to represent the "true" system. Head and salinity values from this true model were then extracted and used as observations for subsequent model calibration. Random noise was added to the observations to approximate realistic field conditions. The NSMC method was used to calculate 1000 calibration-constrained parameter fields. If the dimensionality of the solution space was set appropriately, the estimated uncertainty range from the NSMC analysis encompassed the truth. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. Reducing the dimensionality of the null-space for the processing of the random parameter sets did not result in any significant gains in efficiency and compromised the ability of the NSMC method to encompass the true prediction value. The addition of intrapilot point heterogeneity to the NSMC process was also tested. According to a variogram comparison, this provided the same scale of heterogeneity that was used to generate the truth. However, incorporation of intrapilot point variability did not make a noticeable difference to the uncertainty of the prediction. With this higher level of heterogeneity, however, the computational burden of
Yang, P.; Ng, T. L.; Yang, W.
2015-12-01
Effective water resources management depends on reliable estimation of the uncertainty of drought events. Confidence intervals (CIs) are commonly applied to quantify this uncertainty. A CI seeks the minimal length necessary to cover the true value of the estimated variable with the desired probability. In drought analysis, where two or more variables (e.g., duration and severity) are often used to describe a drought, copulas have been found suitable for representing the joint probability behavior of these variables. However, comprehensive assessment of the parameter uncertainties of drought copulas has been largely ignored, and the few studies that have recognized this issue have not explicitly compared the various methods of producing the best CIs. Thus, the objective of this study is to compare the CIs generated using two widely applied uncertainty estimation methods, bootstrapping and Markov chain Monte Carlo (MCMC). To achieve this objective, (1) the marginal distributions lognormal, Gamma, and Generalized Extreme Value and the copula functions Clayton, Frank, and Plackett are selected to construct joint probability functions of two drought-related variables; (2) the resulting joint functions are fitted to 200 sets of simulated realizations of drought events with known distribution and extreme parameters; and (3) from there, using bootstrapping and MCMC, CIs of the parameters are generated and compared. The effect of an informative prior on the CIs generated by MCMC is also evaluated. CIs are produced for different sample sizes (50, 100, and 200) of the simulated drought events used for fitting the joint probability functions. Preliminary results assuming lognormal marginal distributions and the Clayton copula function suggest that for cases with small or medium sample sizes (~50-100), MCMC is the superior method if an informative prior exists. Where an informative prior is unavailable, for small sample sizes (~50), both bootstrapping and MCMC
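The bootstrapping side of such a comparison can be sketched with a percentile bootstrap CI for a simple statistic. The sample mean here stands in for the copula parameter estimators of the study, and the data and sample size are illustrative.

```python
import random

rng = random.Random(7)
data = [rng.gauss(10.0, 2.0) for _ in range(50)]   # small sample, n = 50

def bootstrap_ci(data, stat, n_boot, alpha, rng):
    """Percentile bootstrap: resample with replacement, recompute the
    statistic, and read the CI off the empirical quantiles."""
    reps = sorted(
        stat([rng.choice(data) for _ in data]) for _ in range(n_boot)
    )
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

mean = lambda xs: sum(xs) / len(xs)
lo, hi = bootstrap_ci(data, mean, 5_000, 0.05, rng)
print(f"95% CI for the mean: ({lo:.2f}, {hi:.2f})")
```

In the study's setting, `stat` would instead refit the copula to the resampled drought events and return the fitted parameter, with the CI length then compared against the posterior interval from MCMC.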
Monte Carlo analysis of an ODE Model of the Sea Urchin Endomesoderm Network
Directory of Open Access Journals (Sweden)
Klipp Edda
2009-08-01
Background: Gene Regulatory Networks (GRNs) control the differentiation, specification and function of cells at the genomic level. The levels of interaction within large GRNs are of enormous depth and complexity. Details about many GRNs are emerging, but in most cases it is unknown to what extent they control a given process, i.e. their grade of completeness is uncertain. This uncertainty stems from limited experimental data, which is the main bottleneck for creating detailed dynamical models of cellular processes. Parameter estimation for each node is often infeasible for very large GRNs. We propose a method based on random parameter estimation through Monte Carlo simulations to measure the completeness grades of GRNs. Results: We developed a heuristic to assess the completeness of large GRNs, using ODE simulations under different conditions and randomly sampled parameter sets to detect parameter-invariant effects of perturbations. To test this heuristic, we constructed the first ODE model of the whole sea urchin endomesoderm GRN, one of the best-studied large GRNs. We find that nearly 48% of the parameter-invariant effects correspond with experimental data, which is 65% of the expected optimal agreement obtained from a submodel for which kinetic parameters were estimated and used for simulations. Randomized versions of the model reproduce only 23.5% of the experimental data. Conclusion: The method described in this paper enables an evaluation of the network topologies of GRNs without requiring any parameter values. The benefit of this method is exemplified in the first mathematical analysis of the complete Endomesoderm Network Model. The predictions we provide identify candidate nodes in the network that are likely to be erroneous or to miss unknown connections, and which may need additional experiments to improve the network topology. This mathematical model can serve as a scaffold for detailed and more realistic models. We propose that our method can
Quantum Monte Carlo simulation
Wang, Yazhen
2011-01-01
Contemporary scientific studies often rely on the understanding of complex quantum systems via computer simulation. This paper initiates the statistical study of quantum simulation and proposes a Monte Carlo method for estimating analytically intractable quantities. We derive the bias and variance for the proposed Monte Carlo quantum simulation estimator and establish the asymptotic theory for the estimator. The theory is used to design a computational scheme for minimizing the mean square er...
Monte Carlo transition probabilities
Lucy, L. B.
2001-01-01
Transition probabilities governing the interaction of energy packets and matter are derived that allow Monte Carlo NLTE transfer codes to be constructed without simplifying the treatment of line formation. These probabilities are such that the Monte Carlo calculation asymptotically recovers the local emissivity of a gas in statistical equilibrium. Numerical experiments with one-point statistical equilibrium problems for Fe II and Hydrogen confirm this asymptotic behaviour. In addition, the re...
Energy Technology Data Exchange (ETDEWEB)
Vega C, H. R. [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Cipres No. 10, Fracc. La Penuela, 98068 Zacatecas (Mexico); Mendez V, R. [Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas, Av. Complutense 40, 28040 Madrid (Spain); Guzman G, K. A., E-mail: fermineutron@yahoo.com [Universidad Politecnica de Madrid, Departamento de Ingenieria Nuclear, C. Jose Gutierrez Abascal 2, 28006 Madrid (Spain)
2014-10-15
The neutron field produced by the calibration sources in the Neutron Standards Laboratory of the Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas (CIEMAT) was characterized by means of Monte Carlo methods. The laboratory has two neutron calibration sources, {sup 241}AmBe and {sup 252}Cf, which are stored in a water pool and are placed on the calibration bench using remotely controlled systems. To characterize the neutron field, a three-dimensional model of the room was built that included the stainless steel bench, the irradiation table and the storage pool. The source model included the double steel encapsulation as cladding. With the purpose of determining the effect produced by the presence of the different components of the room, the neutron spectra, the total fluence and the ambient dose equivalent rate at 100 cm from the source were considered during the characterization. The walls, floor and ceiling of the room cause the largest modification of the spectra and of the integral values of the fluence and the ambient dose equivalent rate. (Author)
The applicability of certain Monte Carlo methods to the analysis of interacting polymers
Energy Technology Data Exchange (ETDEWEB)
Krapp, D.M. Jr. [Univ. of California, Berkeley, CA (United States)
1998-05-01
The authors consider polymers, modeled as self-avoiding walks with interactions on a hexagonal lattice, and examine the applicability of certain Monte Carlo methods for estimating their mean properties at equilibrium. Specifically, the authors use the pivoting algorithm of Madras and Sokal with Metropolis rejection to locate the phase transition, which is known to occur at {beta}{sub crit} {approx} 0.99, and to recalculate the known value of the critical exponent {nu} {approx} 0.58 of the system for {beta} = {beta}{sub crit}. Although the pivoting-Metropolis algorithm works well for short walks (N < 300), for larger N the Metropolis criterion combined with the self-avoidance constraint leads to an unacceptably small acceptance fraction. In addition, the algorithm becomes effectively non-ergodic, getting trapped in valleys whose centers are local energy minima in phase space, leading to convergence towards different values of {nu}. The authors use a variety of tools, e.g. entropy estimation and histograms, to improve the results for large N, but they are only of limited effectiveness. Their estimate of {beta}{sub crit} using smaller values of N is 1.01 {+-} 0.01, and the estimate for {nu} at this value of {beta} is 0.59 {+-} 0.005. They conclude that even a seemingly simple system and a Monte Carlo algorithm which satisfies, in principle, ergodicity and detailed balance conditions can in practice fail to sample phase space accurately and thus not allow accurate estimation of thermal averages. This should serve as a warning to people who use Monte Carlo methods in complicated polymer folding calculations. The structure of the phase space combined with the algorithm itself can lead to surprising behavior, and simply increasing the number of samples in the calculation does not necessarily lead to more accurate results.
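The flavor of such calculations can be conveyed by a toy rejection sampler for self-avoiding walks. This sketch uses a square rather than a hexagonal lattice and omits the interaction term entirely, so it illustrates only the self-avoidance constraint, not the pivot algorithm or the exponent of the paper.

```python
import random

def try_saw(n, rng):
    """Attempt to grow an n-step self-avoiding walk on a square lattice
    by simple rejection: abandon the walk whenever it intersects itself."""
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    x = y = 0
    visited = {(0, 0)}
    for _ in range(n):
        dx, dy = rng.choice(steps)
        x, y = x + dx, y + dy
        if (x, y) in visited:
            return None          # self-intersection: reject the whole walk
        visited.add((x, y))
    return x, y

rng = random.Random(3)
n, walks = 10, []
while len(walks) < 2_000:        # keep only the walks that survive rejection
    w = try_saw(n, rng)
    if w is not None:
        walks.append(w)
mean_r2 = sum(x * x + y * y for x, y in walks) / len(walks)
print(mean_r2)  # grows roughly as n^(2*nu), with nu = 3/4 for the pure 2-D SAW
```

The rapidly shrinking acceptance fraction of this naive sampler as n grows is precisely why pivot-type moves are used for long walks, and the paper's point is that even those can fail once interactions shape the energy landscape.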
Random vibration analysis of switching apparatus based on Monte Carlo method
Institute of Scientific and Technical Information of China (English)
ZHAI Guo-fu; CHEN Ying-hua; REN Wan-bin
2007-01-01
The performance in a vibration environment of switching apparatus containing mechanical contacts is an important element in judging the apparatus's reliability. A piecewise-linear, two-degrees-of-freedom mathematical model considering contact loss was built in this work, and the vibration performance of the model under random external Gaussian white noise excitation was investigated by using Monte Carlo simulation in Matlab/Simulink. The simulations showed that the spectral content and statistical characteristics of the contact force agree closely with reality. The random vibration characteristics of the contact system were solved using time-domain (numerical) simulation in this paper. The conclusions reached here are of great importance for the reliability design of switching apparatus.
Markov chain Monte Carlo methods for statistical analysis of RF photonic devices.
Piels, Molly; Zibar, Darko
2016-02-08
The microwave reflection coefficient is commonly used to characterize the impedance of high-speed optoelectronic devices. Error and uncertainty in equivalent circuit parameters measured using this data are systematically evaluated. The commonly used nonlinear least-squares method for estimating uncertainty is shown to give unsatisfactory and incorrect results due to the nonlinear relationship between the circuit parameters and the measured data. Markov chain Monte Carlo methods are shown to provide superior results, both for individual devices and for assessing within-die variation.
Analysis of skin tissues spatial fluorescence distribution by the Monte Carlo simulation
Churmakov, D Y; Piletsky, S A; Greenhalgh, D A
2003-01-01
A novel Monte Carlo technique for simulating the spatial fluorescence distribution within human skin is presented. The computational model of skin takes into account the spatial distribution of fluorophores that would arise from the structure of collagen fibres, in contrast to the epidermis and stratum corneum, where the distribution of fluorophores is assumed to be homogeneous. The results of the simulation suggest that the distribution of autofluorescence is significantly suppressed in the near-infrared spectral region, whereas the spatial distribution of fluorescence sources within a sensor layer embedded in the epidermis is localized at an effective depth.
Iotti, Rita C.; Rossi, Fausto
2013-07-01
The operation of state-of-the-art optoelectronic quantum devices may be significantly affected by the presence of a nonequilibrium quasiparticle population to which the carrier subsystem is unavoidably coupled. This situation is particularly evident in new-generation semiconductor-heterostructure-based quantum emitters, operating both in the mid-infrared as well as in the terahertz (THz) region of the electromagnetic spectrum. In this paper, we present a Monte Carlo-based global kinetic approach, suitable for the investigation of a combined carrier-phonon nonequilibrium dynamics in realistic devices, and discuss its application with a prototypical resonant-phonon THz emitting quantum cascade laser design.
Energy Technology Data Exchange (ETDEWEB)
Capizzo, M.C.; Persano Adorno, D.; Zarcone, M. [Dipartimento di Fisica e Tecnologie Relative, Viale delle Scienze, Ed. 18, 90128, Palermo (Italy)
2006-08-15
This paper reports the results of Monte Carlo simulations of electronic noise in a GaAs bulk driven by two mixed high-frequency large-amplitude periodic electric fields. Under these conditions, the system response shows some peculiarities in the noise performance, such as a resonant-like enhancement of the spectra near the two frequencies of the applied fields. The relations among the frequency response and the velocity fluctuations as a function of intensities and frequencies of the sub-terahertz mixed excitation fields have been investigated. (copyright 2006 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)
A Monte Carlo analysis of health risks from PCB-contaminated mineral oil transformer fires.
Eschenroeder, A Q; Faeder, E J
1988-06-01
The objective of this study is the estimation of health hazards due to the inhalation of combustion products from accidental mineral oil transformer fires. Calculations of production, dispersion, and subsequent human intake of polychlorinated dibenzofurans (PCDFs) provide us with exposure estimates. PCDFs are believed to be the principal toxic products of the pyrolysis of polychlorinated biphenyls (PCBs) sometimes found as contaminants in transformer mineral oil. Cancer burdens and birth defect hazard indices are estimated from population data and exposure statistics. Monte Carlo-derived variational factors emphasize the statistics of uncertainty in the estimates of risk parameters. Community health issues are addressed and risks are found to be insignificant.
Directory of Open Access Journals (Sweden)
Kohei Arai
2013-01-01
A Monte Carlo ray tracing (MCRT)-based sensitivity analysis of the geophysical parameters (the atmosphere and the ocean) on top-of-the-atmosphere (TOA) radiance in the visible to near-infrared wavelength regions is conducted. The results confirm that the influence of the atmosphere is greater than that of the ocean: scattering and absorption due to aerosol particles and molecules in the atmosphere are the major contributions, followed by water vapor and ozone, while scattering due to suspended solids is the dominant contribution among the ocean parameters.
Energy Technology Data Exchange (ETDEWEB)
McGraw, David [Desert Research Inst. (DRI), Reno, NV (United States); Hershey, Ronald L. [Desert Research Inst. (DRI), Reno, NV (United States)
2016-06-01
Methods were developed to quantify uncertainty and sensitivity for NETPATH inverse water-rock reaction models and to calculate dissolved inorganic carbon, carbon-14 groundwater travel times. The NETPATH models calculate upgradient groundwater mixing fractions that produce the downgradient target water chemistry along with amounts of mineral phases that are either precipitated or dissolved. Carbon-14 groundwater travel times are calculated based on the upgradient source-water fractions, carbonate mineral phase changes, and isotopic fractionation. Custom scripts and statistical code were developed for this study to facilitate modifying input parameters, running the NETPATH simulations, extracting relevant output, postprocessing the results, and producing graphs and summaries. The scripts read user-specified values for each constituent's coefficient of variation, distribution, sensitivity parameter, maximum dissolution or precipitation amounts, and number of Monte Carlo simulations. Monte Carlo methods for analysis of parametric uncertainty assign a distribution to each uncertain variable, sample from those distributions, and evaluate the ensemble output. The uncertainty in input affected the variability of outputs, namely source-water mixing, phase dissolution and precipitation amounts, and carbon-14 travel time. Although NETPATH may provide models that satisfy the constraints, it is up to the geochemist to determine whether the results are geochemically reasonable. Two example water-rock reaction models from previous geochemical reports were considered in this study. Sensitivity analysis was also conducted to evaluate the change in output caused by a small change in input, one constituent at a time. Results were standardized to allow for sensitivity comparisons across all inputs, which results in a representative value for each scenario. The approach yielded insight into the uncertainty in water-rock reactions and travel times. For example, there was little
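The parametric Monte Carlo step described above, assigning a distribution to each uncertain input, sampling from those distributions, and evaluating the ensemble output, can be sketched generically. The travel-time model, input means, and coefficients of variation below are invented stand-ins, not the report's values.

```python
import math
import random

rng = random.Random(11)

# Hypothetical model: travel time = distance * porosity / (K * gradient).
# Each uncertain input is defined by a mean and a coefficient of variation.
inputs = {
    "K":        (2.0e-5, 0.50),   # hydraulic conductivity, m/s
    "porosity": (0.25,   0.10),
    "gradient": (0.005,  0.20),
}
distance = 10_000.0               # m, treated as known

def sample_input(mean, cv, rng):
    # Lognormal sampling keeps physically positive quantities positive
    sigma2 = math.log(1 + cv * cv)
    mu = math.log(mean) - sigma2 / 2
    return math.exp(rng.gauss(mu, math.sqrt(sigma2)))

def travel_time_years(rng):
    K = sample_input(*inputs["K"], rng)
    n = sample_input(*inputs["porosity"], rng)
    i = sample_input(*inputs["gradient"], rng)
    seconds = distance * n / (K * i)
    return seconds / (365.25 * 24 * 3600)

times = sorted(travel_time_years(rng) for _ in range(10_000))
print(f"median {times[5000]:.0f} yr, "
      f"90% range ({times[500]:.0f}, {times[9500]:.0f}) yr")
```

In the actual workflow, the sampled inputs would be written into NETPATH input files and the ensemble of travel times read back from its output rather than computed from a closed-form expression.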
Sanattalab, Ehsan; SalmanOgli, Ahmad; Piskin, Erhan
2016-04-01
We investigated how tumor-targeted nanoparticles influence heat generation. We suppose that all nanoparticles are fully functionalized and can find the target using active targeting methods. Unlike commonly used methods, such as chemotherapy and radiotherapy, the treatment procedure proposed in this study is purely noninvasive, which is considered a significant merit. It is found that the localized heat generation due to targeted nanoparticles is significantly higher than in other areas. By engineering the optical properties of the nanoparticles, including the scattering and absorption coefficients and the asymmetry factor (cosine of the scattering angle), the heat generated in the tumor's area reaches a critical level that can burn the targeted tumor. The amount of heat generated by inserting smart agents, due to surface plasmon resonance, is remarkably high. The light-matter interactions and the trajectories of incident photons upon targeted tissues are simulated by Mie theory and the Monte Carlo method, respectively. The Monte Carlo method is a statistical technique by which photon trajectories can be tracked accurately through the simulation domain.
Energy Technology Data Exchange (ETDEWEB)
Brandt, Gerhard [University of Oxford (United Kingdom); Krauss, Frank [IPPP Durham (United Kingdom); Lacker, Heiko; Leyton, Michael; Mamach, Martin; Schulz, Holger; Weyh, Daniel [Humboldt University of Berlin (Germany)
2012-07-01
Monte Carlo (MC) event generators are widely employed in the analysis of experimental data, including at the LHC, in order to predict the features of observables and test analyses with them. These generators rely on phenomenological models containing various parameters which are free within certain ranges. Variations of these parameters relative to their defaults lead to uncertainties on the predictions of the event generators and, in turn, on the results of any experimental data analysis making use of the event generator. A generalized method for quantifying a certain class of these generator-based uncertainties is presented in this talk. For the SHERPA event generator, we study the effect of uncertainties in the choice of the merging and factorization scales on the analysis results. The quantification is done within an example ATLAS analysis measuring underlying-event (UE) properties in Z-boson production, limited to low transverse momenta (p{sub T}{sup Z}<3 GeV) of the Z-boson. The analysis extracts event-shape distributions from charged particles in the event that do not belong to the Z decay, for both generated Monte Carlo events and data, which are unfolded back to the generator level.
Riolino, I.; Braccioli, M.; Lucci, L.; Palestri, P.; Esseni, D.; Fiegna, C.; Selmi, L.
2007-11-01
In this paper, two Monte Carlo simulators implementing different models for the influence of carrier quantization on the electrostatics and transport are used to analyze sub-100 nm double-gate SOI devices. To this purpose, a new stable and efficient scheme to implement the contacts in the simulation of double-gate SOI devices is introduced first. Then, results in terms of drain current and microscopic quantities are compared, providing new insight into the limitations of a well-assessed semiclassical transport simulation approach relative to a more rigorous multi-subband model.
O'Hagan, Anthony; Stevenson, Matt; Madan, Jason
2007-10-01
Probabilistic sensitivity analysis (PSA) is required to account for uncertainty in cost-effectiveness calculations arising from health economic models. The simplest way to perform PSA in practice is by Monte Carlo methods, which involves running the model many times using randomly sampled values of the model inputs. However, this can be impractical when the economic model takes appreciable amounts of time to run. This situation arises, in particular, for patient-level simulation models (also known as micro-simulation or individual-level simulation models), where a single run of the model simulates the health care of many thousands of individual patients. The large number of patients required in each run to achieve accurate estimation of cost-effectiveness means that only a relatively small number of runs is possible. For this reason, it is often said that PSA is not practical for patient-level models. We develop a way to reduce the computational burden of Monte Carlo PSA for patient-level models, based on the algebra of analysis of variance. Methods are presented to estimate the mean and variance of the model output, with formulae for determining optimal sample sizes. The methods are simple to apply and will typically reduce the computational demand very substantially.
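A minimal sketch of the variance-decomposition idea follows, assuming a toy cost model: the between-run variance of per-run means is inflated by patient-level sampling noise, so the within-run variance divided by the number of patients is subtracted to isolate the input (parameter) uncertainty. The exact formulae and optimal-sample-size results are in the paper; only the basic ANOVA identity is shown here.

```python
import random
import statistics

def psa_mean_variance(run_model, n_runs=200, n_patients=500, seed=0):
    """Estimate the PSA mean and input-uncertainty variance of a
    patient-level model: between-run variance minus the patient-level
    sampling contribution (within-run variance / n_patients)."""
    rng = random.Random(seed)
    run_means, run_vars = [], []
    for _ in range(n_runs):
        costs = run_model(rng, n_patients)
        run_means.append(statistics.mean(costs))
        run_vars.append(statistics.variance(costs))
    between = statistics.variance(run_means)
    within = statistics.mean(run_vars)
    var_inputs = max(between - within / n_patients, 0.0)
    return statistics.mean(run_means), var_inputs

# Toy patient-level model: an uncertain mean cost (input uncertainty)
# plus patient-level noise (first-order sampling variability).
def toy_model(rng, n):
    mu = rng.gauss(1000.0, 50.0)
    return [rng.gauss(mu, 200.0) for _ in range(n)]

mean_cost, var_inputs = psa_mean_variance(toy_model)
```

With the toy numbers above, the recovered input variance should be close to the true 50 squared, i.e. the patient-level noise has been stripped out.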
Monte carlo simulation for soot dynamics
Zhou, Kun
2012-01-01
A new Monte Carlo method, termed Comb-like frame Monte Carlo, is developed to simulate soot dynamics. A detailed stochastic error analysis is provided. Comb-like frame Monte Carlo is coupled with the gas-phase solver Chemkin II to simulate soot formation in a 1-D premixed burner-stabilized flame. The simulated soot number density, volume fraction, and particle size distribution all agree well with measurements available in the literature. The origin of the bimodal particle size distribution is revealed with quantitative proof.
Comparative Monte Carlo analysis of InP- and GaN-based Gunn diodes
García, S.; Pérez, S.; Íñiguez-de-la-Torre, I.; Mateos, J.; González, T.
2014-01-01
In this work, we report on Monte Carlo simulations to study the capability to generate Gunn oscillations of diodes based on InP and GaN with around 1 μm active region length. We compare the power spectral density of current sequences in diodes with and without notch for different lengths and two doping profiles. It is found that InP structures provide 400 GHz current oscillations for the fundamental harmonic in structures without notch and around 140 GHz in notched diodes. On the other hand, GaN diodes can operate up to 300 GHz for the fundamental harmonic, and when the notch is effective, a larger number of harmonics, reaching the Terahertz range, with higher spectral purity than in InP diodes are generated. Therefore, GaN-based diodes offer a high power alternative for sub-millimeter wave Gunn oscillations.
Analysis of vibrational-translational energy transfer using the direct simulation Monte Carlo method
Boyd, Iain D.
1991-01-01
A new model is proposed for energy transfer between the vibrational and translational modes for use in the direct simulation Monte Carlo method (DSMC). The model modifies the Landau-Teller theory for a harmonic oscillator and the rate transition is related to an experimental correlation for the vibrational relaxation time. Assessment of the model is made with respect to three different computations: relaxation in a heat bath, a one-dimensional shock wave, and hypersonic flow over a two-dimensional wedge. These studies verify that the model achieves detailed balance, and excellent agreement with experimental data is obtained in the shock wave calculation. The wedge flow computation reveals that the usual phenomenological method for simulating vibrational nonequilibrium in the DSMC technique predicts much higher vibrational temperatures in the wake region.
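As a loose illustration of the heat-bath test (not the paper's DSMC model), a toy relaxation in which each particle re-samples its vibrational energy from the bath with a fixed per-step probability reproduces the exponential, Landau-Teller-style approach to equilibrium. The bath temperature, particle count, and exchange probability are arbitrary assumptions:

```python
import random

def heat_bath_relaxation(n_particles=5000, n_steps=200, seed=2):
    """Toy heat-bath relaxation: each step, every particle exchanges
    energy with the bath with fixed probability, drawing a new
    vibrational energy from the bath's Boltzmann (exponential)
    distribution; the ensemble mean relaxes exponentially."""
    rng = random.Random(seed)
    t_bath, p_exchange = 1000.0, 0.05   # bath mean energy, per-step exchange prob.
    ev = [0.0] * n_particles            # start vibrationally cold
    means = []
    for _ in range(n_steps):
        for i in range(n_particles):
            if rng.random() < p_exchange:
                ev[i] = rng.expovariate(1.0 / t_bath)
        means.append(sum(ev) / n_particles)
    return means

trace = heat_bath_relaxation()
```

Because new energies are drawn from the bath distribution itself, the scheme trivially satisfies detailed balance at equilibrium, which is the property the paper verifies for its physically detailed model.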
Improving PWR core simulations by Monte Carlo uncertainty analysis and Bayesian inference
Castro, Emilio; Buss, Oliver; Garcia-Herranz, Nuria; Hoefer, Axel; Porsch, Dieter
2016-01-01
A Monte Carlo-based Bayesian inference model is applied to the prediction of reactor operation parameters of a PWR nuclear power plant. In this non-perturbative framework, high-dimensional covariance information describing the uncertainty of microscopic nuclear data is combined with measured reactor operation data in order to provide statistically sound, well-founded uncertainty estimates of integral parameters, such as the boron letdown curve and the burnup-dependent reactor power distribution. The performance of this methodology is assessed in a blind test approach, where we use measurements of a given reactor cycle to improve the prediction of the subsequent cycle. The resulting improvement in prediction quality is substantial. In particular, the prediction uncertainty of the boron letdown curve, which is of utmost importance for the planning of the reactor cycle length, can be reduced by one order of magnitude by including the boron concentration measurement information of the previous...
Korostil, Igor A; Peters, Gareth W; Cornebise, Julien; Regan, David G
2013-05-20
A Bayesian statistical model and estimation methodology based on forward projection adaptive Markov chain Monte Carlo is developed in order to perform the calibration of a high-dimensional nonlinear system of ordinary differential equations representing an epidemic model for human papillomavirus types 6 and 11 (HPV-6, HPV-11). The model is compartmental and involves stratification by age, gender and sexual-activity group. Developing this model and a means to calibrate it efficiently is relevant because HPV is a common sexually transmitted infection with more than 100 types currently known. The two types studied in this paper, 6 and 11, cause about 90% of anogenital warts. We extend the development of a sexual mixing matrix on the basis of a formulation first suggested by Garnett and Anderson, frequently used to model sexually transmitted infections. In particular, we consider a stochastic mixing matrix framework that allows us to jointly estimate unknown attributes and parameters of the mixing matrix along with the parameters involved in the calibration of the HPV epidemic model. This matrix describes the sexual interactions between members of the population under study and relies on several quantities that are a priori unknown. The Bayesian model developed allows one to estimate jointly the HPV-6 and HPV-11 epidemic model parameters as well as unknown sexual mixing matrix parameters related to assortativity. Finally, we explore the ability of an extension to the class of adaptive Markov chain Monte Carlo algorithms to incorporate a forward projection strategy for the ordinary differential equation state trajectories. Efficient exploration of the Bayesian posterior distribution developed for the ordinary differential equation parameters provides a challenge for any Markov chain sampling methodology, hence the interest in adaptive Markov chain methods. We conclude with simulation studies on synthetic and recent actual data.
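A bare-bones adaptive random-walk Metropolis sampler conveys the core mechanism. The HPV model's posterior is replaced here by a standard normal, and the step-size tuning is a generic Robbins-Monro scheme targeting roughly 44% acceptance, not the authors' algorithm:

```python
import math
import random

def adaptive_metropolis(log_post, x0, n_iter=20000, seed=42):
    """Random-walk Metropolis with step-size adaptation during the
    first half (burn-in); samples are collected from the second half."""
    rng = random.Random(seed)
    x, step = x0, 1.0
    lp = log_post(x)
    samples = []
    for i in range(1, n_iter + 1):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        accepted = rng.random() < math.exp(min(0.0, lp_prop - lp))
        if accepted:
            x, lp = prop, lp_prop
        if i <= n_iter // 2:
            # Robbins-Monro tuning toward ~44% acceptance
            step *= math.exp(((1.0 if accepted else 0.0) - 0.44) / math.sqrt(i))
        else:
            samples.append(x)
    return samples

# Smoke test: sample a standard normal posterior starting far from the mode.
draws = adaptive_metropolis(lambda t: -0.5 * t * t, x0=5.0)
```

In the paper's setting, `log_post` would wrap a numerical ODE solve of the epidemic model, which is precisely why efficient adaptive proposals matter.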
Hrivnacova, I; Berejnov, V V; Brun, R; Carminati, F; Fassò, A; Futo, E; Gheata, A; Caballero, I G; Morsch, Andreas
2003-01-01
The concept of Virtual Monte Carlo (VMC) has been developed by the ALICE Software Project to allow different Monte Carlo simulation programs to run without changing the user code, such as the geometry definition, the detector response simulation or input and output formats. Recently, the VMC classes have been integrated into the ROOT framework, and the other relevant packages have been separated from the AliRoot framework and can be used individually by any other HEP project. The general concept of the VMC and its set of base classes provided in ROOT will be presented. Existing implementations for Geant3, Geant4 and FLUKA and simple examples of usage will be described.
Energy Technology Data Exchange (ETDEWEB)
Park, Jae Phil; Bahn, Chi Bum [Pusan National University, Busan (Korea, Republic of)
2015-10-15
Probabilistic Fracture Mechanics (PFM) analysis is generally used to consider the scatter and uncertainty of parameters in complex phenomena. Weld defects can be present in the weld regions of Pressurized Water Reactors (PWRs), which cannot be considered by typical fracture mechanics analysis. It is necessary to evaluate the effects of pre-existing cracks in welds on the integrity of the welds. In this paper, PFM analysis of pre-existing cracks in an Alloy 182 weld in a PWR primary water environment was carried out using a Monte Carlo simulation. It was shown that inspection decreases the gradient of the failure probability, and the failure probability caused by the pre-existing cracks stabilized after 15 years of operation time under these input conditions.
Ishisaki, Y; Fujimoto, R; Ozaki, M; Ebisawa, K; Takahashi, T; Ueda, Y; Ogasaka, Y; Ptak, A; Mukai, K; Hamaguchi, K; Hirayama, M; Kotani, T; Kubo, H; Shibata, R; Ebara, M; Furuzawa, A; Iizuka, R; Inoue, H; Mori, H; Okada, S; Yokoyama, Y; Matsumoto, H; Nakajima, H; Yamaguchi, H; Anabuki, N; Tawa, N; Nagai, M; Katsuda, S; Hayashida, K; Bamba, A; Miller, E D; Sato, K; Yamasaki, N Y
2006-01-01
We have developed a framework for the Monte-Carlo simulation of the X-Ray Telescopes (XRT) and the X-ray Imaging Spectrometers (XIS) onboard Suzaku, mainly for the scientific analysis of spatially and spectroscopically complex celestial sources. A photon-by-photon instrumental simulator is built on the ANL platform, which has been successfully used in ASCA data analysis. The simulator has a modular structure, in which the XRT simulation is based on a ray-tracing library, while the XIS simulation utilizes a spectral "Redistribution Matrix File" (RMF), generated separately by other tools. Instrumental characteristics and calibration results, e.g., XRT geometry, reflectivity, mutual alignments, thermal shield transmission, build-up of the contamination on the XIS optical blocking filters (OBF), are incorporated as completely as possible. Most of this information is available in the form of the FITS (Flexible Image Transport System) files in the standard calibration database (CALDB). This simulator can also be ut...
Samsudin, Mohd Dinie Muhaimin; Mat Don, Mashitah
2015-01-01
Oil palm trunk (OPT) sap was utilized for growth and bioethanol production by Saccharomyces cerevisiae with the addition of palm oil mill effluent (POME) as a nutrient supplier. A maximum yield (YP/S) of 0.464 g bioethanol/g glucose was attained in the OPT sap-POME-based media. However, OPT sap and POME are heterogeneous in properties, and fermentation performance might change if the process is repeated. The contribution of parametric uncertainty to bioethanol fermentation performance was then assessed using Monte Carlo simulation (stochastic variables) to determine probability distributions due to fluctuation and variation of the kinetic model parameters. Results showed that, based on 100,000 samples tested, the yield (YP/S) ranged from 0.423 to 0.501 g/g. Sensitivity analysis was also done to evaluate the impact of each kinetic parameter on the fermentation performance. It is found that bioethanol fermentation depends highly on the growth of the tested yeast.
Hueser, J. E.; Brock, F. J.; Melfi, L. T., Jr.; Bird, G. A.
1984-01-01
A new solution procedure has been developed to analyze the flowfield properties in the vicinity of the Inertial Upper Stage/Spacecraft during the first-stage (SRM1) burn. Continuum methods are used to compute the nozzle flow and the exhaust plume flowfield as far as the boundary where the breakdown of translational equilibrium leaves these methods invalid. The Direct Simulation Monte Carlo (DSMC) method is applied everywhere beyond this breakdown boundary. The flowfield distributions of density, velocity, temperature, relative abundance, surface flux density, and pressure are discussed for each species for two sets of boundary conditions: vacuum and freestream. The interaction of the exhaust plume and the freestream with the spacecraft and the two-stream direct interaction are discussed. The results show that the low-density, high-velocity, counter-flowing freestream substantially modifies the flowfield properties and the flux density incident on the spacecraft. A freestream bow shock is observed in the data, located forward of the high-density region of the exhaust plume, into which the freestream gas does not penetrate. The total flux density incident on the spacecraft, integrated over the SRM1 burn interval, is estimated to be of the order of 10 to the 22nd per sq m (about 1000 atomic layers).
Monte Carlo analysis for finite-temperature magnetism of Nd2Fe14B permanent magnet
Toga, Yuta; Matsumoto, Munehisa; Miyashita, Seiji; Akai, Hisazumi; Doi, Shotaro; Miyake, Takashi; Sakuma, Akimasa
2016-11-01
We investigate the effects of magnetic inhomogeneities and thermal fluctuations on the magnetic properties of a rare-earth intermetallic compound, Nd2Fe14B. The constrained Monte Carlo method is applied to a Nd2Fe14B bulk system to realize the experimentally observed spin reorientation and magnetic anisotropy constants K_m^A (m = 1, 2, 4) at finite temperatures. Subsequently, it is found that the temperature dependence of K_1^A deviates from the Callen-Callen law, K_1^A(T) ∝ M(T)^3, even above room temperature, T_R ≈ 300 K, when the Fe (Nd) anisotropy terms are removed to leave only the Nd (Fe) anisotropy terms. This is because the exchange couplings between Nd moments and Fe spins are much smaller than those between Fe spins. It is also found that the exponent n in the external magnetic field H_ext response of the barrier height F_B = F_B0(1 − H_ext/H_0)^n is less than 2 in the low-temperature region below T_R, whereas n approaches 2 when T > T_R, indicating the presence of Stoner-Wohlfarth-type magnetization rotation. This reflects the fact that the magnetic anisotropy is mainly governed by the K_1^A term in the T > T_R region.
Markov chain Monte Carlo based analysis of post-translationally modified VDAC gating kinetics.
Tewari, Shivendra G; Zhou, Yifan; Otto, Bradley J; Dash, Ranjan K; Kwok, Wai-Meng; Beard, Daniel A
2014-01-01
The voltage-dependent anion channel (VDAC) is the main conduit for permeation of solutes (including nucleotides and metabolites) of up to 5 kDa across the mitochondrial outer membrane (MOM). Recent studies suggest that VDAC activity is regulated via post-translational modifications (PTMs). Yet the nature and effect of these modifications is not understood. Herein, single channel currents of wild-type, nitrosated, and phosphorylated VDAC are analyzed using a generalized continuous-time Markov chain Monte Carlo (MCMC) method. This developed method describes three distinct conducting states (open, half-open, and closed) of VDAC activity. Lipid bilayer experiments are also performed to record single VDAC activity under un-phosphorylated and phosphorylated conditions, and are analyzed using the developed stochastic search method. Experimental data show significant alteration in VDAC gating kinetics and conductance as a result of PTMs. The effect of PTMs on VDAC kinetics is captured in the parameters associated with the identified Markov model. Stationary distributions of the Markov model suggest that nitrosation of VDAC not only decreased its conductance but also significantly locked VDAC in a closed state. On the other hand, stationary distributions of the model associated with un-phosphorylated and phosphorylated VDAC suggest a reversal in channel conformation from relatively closed state to an open state. Model analyses of the nitrosated data suggest that faster reaction of nitric oxide with Cys-127 thiol group might be responsible for the biphasic effect of nitric oxide on basal VDAC conductance.
Analysis of probabilistic short run marginal cost using Monte Carlo method
Energy Technology Data Exchange (ETDEWEB)
Gutierrez-Alcaraz, G.; Navarrete, N.; Tovar-Hernandez, J.H.; Fuerte-Esquivel, C.R. [Inst. Tecnologico de Morelia, Michoacan (Mexico). Dept. de Ing. Electrica y Electronica; Mota-Palomino, R. [Inst. Politecnico Nacional (Mexico). Escuela Superior de Ingenieria Mecanica y Electrica
1999-11-01
The structure of the Electricity Supply Industry is undergoing dramatic changes to provide new service options. The main aim of this restructuring is to allow generating units the freedom of selling electricity to anybody they wish at a price determined by market forces. Several methodologies have been proposed in order to quantify different costs associated with those new services offered by electrical utilities operating under a deregulated market. The new wave of pricing is heavily influenced by economic principles designed to price products to elastic market segments on the basis of marginal costs. Hence, spot pricing provides the economic structure for many of the new services. At the same time, the pricing is influenced by uncertainties associated with the electric system state variables which define its operating point. In this paper, nodal probabilistic short-run marginal costs are calculated, considering the load, the production cost and the availability of generators as random variables. The effect of the electrical network is evaluated taking into account linearized models. A thermal economic dispatch is used to simulate each operational condition generated by the Monte Carlo method on a small fictitious power system in order to assess the effect of the random variables on the energy trading. First, this is carried out by introducing each random variable one by one, and finally by considering the random interaction of all of them.
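The sampling scheme can be illustrated with a toy merit-order dispatch in which load, production costs and unit availability are all random. The unit data, distributions and outage rates below are invented for illustration, not taken from the paper's test system:

```python
import random
import statistics

def dispatch_marginal(load, units, rng):
    """Merit-order dispatch: returns the sampled cost of the marginal
    unit, or None if the available capacity cannot serve the load."""
    for cap, mean_cost, outage_rate in sorted(units, key=lambda u: u[1]):
        if rng.random() < outage_rate:          # unit forced out
            continue
        cost = rng.gauss(mean_cost, 0.05 * mean_cost)  # uncertain cost
        load -= cap
        if load <= 0:
            return cost
    return None

def mc_marginal_cost(n_sims=20000, seed=7):
    """Monte Carlo over random load, production cost and availability."""
    rng = random.Random(seed)
    # (capacity MW, mean cost $/MWh, forced-outage rate), illustrative
    units = [(100, 20.0, 0.05), (80, 35.0, 0.08), (60, 60.0, 0.10)]
    costs = []
    for _ in range(n_sims):
        c = dispatch_marginal(rng.gauss(150.0, 15.0), units, rng)
        if c is not None:
            costs.append(c)
    return statistics.mean(costs), statistics.stdev(costs)

mean_smc, sd_smc = mc_marginal_cost()
```

The resulting distribution of the marginal unit's cost is the probabilistic short-run marginal cost; the nodal version in the paper additionally layers the linearized network model on top of each dispatch.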
Institute of Scientific and Technical Information of China (English)
TIAN Bao-guo; SI Ji-tao; ZHAO Yan; WANG Hong-tao; HAO Ji-ming
2007-01-01
This paper deals with the procedure and methodology which can be used to select the optimal treatment and disposal technology of municipal solid waste (MSW), and to provide practical and effective technical support to policy-making, on the basis of a study on solid waste management status and development trends in China and abroad. Focusing on various treatment and disposal technologies and processes of MSW, this study established a Monte Carlo mathematical model of cost minimization for MSW handling subject to environmental constraints. A new method of element stream (such as C, H, O, N, S) analysis in combination with economic stream analysis of MSW was developed. By following the streams of different treatment processes consisting of various techniques from generation, separation, transfer, transport, treatment, recycling and disposal of the wastes, the element constitution as well as its economic distribution in terms of possibility functions was identified. Every technique step was evaluated economically. The Monte Carlo method was then used for model calibration. Sensitivity analysis was also carried out to identify the most sensitive factors. Model calibration indicated that landfill with power generation from landfill gas was economically the optimal technology at the present stage under the condition of more than 58% of C, H, O, N, S going to landfill. Whether or not to generate electricity was the most sensitive factor. If landfilling costs increase, MSW separation treatment is recommended: screening first, followed by partial incineration and partial composting, with residue landfilling. The possibility of selecting incineration as the optimal technology was affected by city scale. For big cities and metropolises with large MSW generation, the possibility of constructing large-scale incineration facilities increases, whereas for middle-sized and small cities, the effectiveness of incinerating waste decreases.
Energy Technology Data Exchange (ETDEWEB)
Espinosa-Paredes, Gilberto, E-mail: gepe@xanum.uam.m [Area de Ingenieria en Recursos Energeticos, Universidad Autonoma Metropolitana-Iztapalapa, Av. San Rafael Atlixco, 186, Col. Vicentina, Mexico D.F., 09340 (Mexico); Verma, Surendra P. [Centro de Investigacion en Energia, Universidad Nacional Autonoma de Mexico, Priv. Xochicalco s/no., Col Centro, Apartado Postal 34, Temixco 62580 (Mexico); Vazquez-Rodriguez, Alejandro [Area de Ingenieria en Recursos Energeticos, Universidad Autonoma Metropolitana-Iztapalapa, Av. San Rafael Atlixco, 186, Col. Vicentina, Mexico D.F., 09340 (Mexico); Nunez-Carrera, Alejandro [Comision Nacional de Seguridad Nuclear y Salvaguardias, Doctor Barragan 779, Col. Narvarte, Mexico D.F. 03020 (Mexico)
2010-05-15
Our aim was to evaluate the sensitivity and uncertainty of the mass flow rate in the core on the performance of a natural circulation boiling water reactor (NCBWR). This analysis was carried out through Monte Carlo simulations with sample sizes of up to 40,000; a size of 25,000 repetitions was considered valid for routine applications. A simplified boiling water reactor (SBWR) was used as an application example of the Monte Carlo method. The numerical code to simulate the SBWR performance considers a one-dimensional thermo-hydraulics model along with non-equilibrium thermodynamics, a non-homogeneous flow approximation, and one-dimensional fuel-rod heat transfer. The neutron processes were simulated with a point reactor kinetics model with six groups of delayed neutrons. The sensitivity was evaluated in terms of 99% confidence intervals of the mean to understand the range of mean values that may represent the entire statistical population of performance variables. The regression analysis with mass flow rate as the predictor variable showed statistically valid linear correlations for both neutron flux and fuel temperature, and a quadratic relationship for the void fraction. No statistically valid correlation was observed for the total heat flux as a function of the mass flow rate, although the heat flux at individual nodes was positively correlated with this variable. These correlations are useful for the study, analysis and design of any NCBWR. The uncertainties were propagated as follows: for a 10% change in the mass flow rate in the core, the responses for neutron power, total heat flux, average fuel temperature and average void fraction changed by 8.74%, 7.77%, 2.74% and 0.58%, respectively.
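The 99% confidence interval of the mean follows directly from the Monte Carlo samples; a minimal sketch with mock tallies (the normal approximation, z = 2.576, and the numbers are assumptions, not the reactor model's output):

```python
import math
import random
import statistics

def mean_ci99(samples):
    """99% confidence interval for the mean of Monte Carlo samples,
    using the normal approximation (z = 2.576)."""
    m = statistics.mean(samples)
    half = 2.576 * statistics.stdev(samples) / math.sqrt(len(samples))
    return m - half, m + half

# Mock Monte Carlo tallies of a performance variable (e.g., neutron power),
# 25,000 repetitions as in the abstract's routine sample size.
rng = random.Random(3)
tallies = [rng.gauss(100.0, 5.0) for _ in range(25000)]
lo, hi = mean_ci99(tallies)
```

The interval narrows as the square root of the number of repetitions, which is why 25,000 samples suffice for routine work while 40,000 adds little.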
Monte Carlo Analysis of the Lévy Stability and Multi-fractal Spectrum in e+e- Collisions
Institute of Scientific and Technical Information of China (English)
陈刚; 刘连寿
2002-01-01
The Lévy stability analysis is carried out for e+e- collisions at the Z0 mass using the Monte Carlo method. The Lévy index μ is found to be μ = 1.701 ± 0.043. The self-similar generalized dimensions D(q) and multifractal spectrum f(α) are presented. The Rényi dimension D(q) decreases with increasing q. The self-similar multifractal spectrum is a convex curve with a maximum at q = 0, α = 1.169 ± 0.011. The right-hand side of the spectrum, corresponding to negative values of q, is obtained through analytical continuation.
Nestler, Steffen
2013-02-01
We conducted a Monte Carlo study to investigate the performance of the polychoric instrumental variable estimator (PIV) in comparison to unweighted least squares (ULS) and diagonally weighted least squares (DWLS) in the estimation of a confirmatory factor analysis model with dichotomous indicators. The simulation involved 144 conditions (1,000 replications per condition) that were defined by a combination of (a) two types of latent factor models, (b) four sample sizes (100, 250, 500, 1,000), (c) three factor loadings (low, moderate, strong), (d) three levels of non-normality (normal, moderately, and extremely non-normal), and (e) whether the factor model was correctly specified or misspecified. The results showed that when the model was correctly specified, PIV produced estimates that were as accurate as ULS and DWLS. Furthermore, the simulation showed that PIV was more robust to structural misspecifications than ULS and DWLS.
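The skeleton of such a simulation study is compact. Here the confirmatory factor model is replaced by a deliberately simple placeholder (comparing mean and median estimators of a known truth), so only the replication logic, not the estimators, is meant to carry over:

```python
import math
import random
import statistics

def simulation_study(estimators, n_reps=1000, n_obs=250, seed=9):
    """Monte Carlo simulation study: for each replication, generate
    data under a known truth, record each estimator's error, and
    report (bias, RMSE) per estimator."""
    rng = random.Random(seed)
    truth = 0.5
    errors = {name: [] for name in estimators}
    for _ in range(n_reps):
        data = [rng.gauss(truth, 1.0) for _ in range(n_obs)]
        for name, est in estimators.items():
            errors[name].append(est(data) - truth)
    return {name: (statistics.mean(errs),
                   math.sqrt(statistics.fmean(e * e for e in errs)))
            for name, errs in errors.items()}

results = simulation_study({"mean": statistics.mean,
                            "median": statistics.median})
```

The study in the abstract crosses factors (sample size, loadings, non-normality, misspecification) into 144 such cells with 1,000 replications each; the loop above is one cell.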
Energy Technology Data Exchange (ETDEWEB)
Chatterjee, Samrat; Tipireddy, Ramakrishna; Oster, Matthew R.; Halappanavar, Mahantesh
2016-09-16
Securing cyber-systems on a continual basis against a multitude of adverse events is a challenging undertaking. Game-theoretic approaches, that model actions of strategic decision-makers, are increasingly being applied to address cybersecurity resource allocation challenges. Such game-based models account for multiple player actions and represent cyber attacker payoffs mostly as point utility estimates. Since a cyber-attacker’s payoff generation mechanism is largely unknown, appropriate representation and propagation of uncertainty is a critical task. In this paper we expand on prior work and focus on operationalizing the probabilistic uncertainty quantification framework, for a notional cyber system, through: 1) representation of uncertain attacker and system-related modeling variables as probability distributions and mathematical intervals, and 2) exploration of uncertainty propagation techniques including two-phase Monte Carlo sampling and probability bounds analysis.
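Two-phase Monte Carlo sampling can be sketched as a nested loop: the outer loop samples epistemic (knowledge) uncertainty and the inner loop samples aleatory (random) variability given each outer draw. The payoff distributions below are illustrative assumptions, not the paper's cyber-system model:

```python
import random
import statistics

def two_phase_mc(n_outer=200, n_inner=500, seed=11):
    """Two-phase Monte Carlo: outer loop samples an epistemically
    uncertain mean payoff (here uniform over an interval); inner loop
    samples aleatory variability given that mean. Returns one expected
    payoff per outer draw."""
    rng = random.Random(seed)
    expected = []
    for _ in range(n_outer):
        mean_payoff = rng.uniform(5.0, 15.0)          # epistemic draw
        inner = [rng.gauss(mean_payoff, 2.0) for _ in range(n_inner)]
        expected.append(statistics.fmean(inner))
    return expected

payoffs = two_phase_mc()
```

The spread of the outer-loop results separates what is not known (the interval of plausible expected payoffs) from mere randomness, which is the distinction the paper's uncertainty-propagation framework is after.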
Healey, S. P.; Patterson, P.; Garrard, C.
2014-12-01
Altered disturbance regimes are likely a primary mechanism by which a changing climate will affect storage of carbon in forested ecosystems. Accordingly, the National Forest System (NFS) has been mandated to assess the role of disturbance (harvests, fires, insects, etc.) on carbon storage in each of its planning units. We have developed a process which combines 1990-era maps of forest structure and composition with high-quality maps of subsequent disturbance type and magnitude to track the impact of disturbance on carbon storage. This process, called the Forest Carbon Management Framework (ForCaMF), uses the maps to apply empirically calibrated carbon dynamics built into a widely used management tool, the Forest Vegetation Simulator (FVS). While ForCaMF offers locally specific insights into the effect of historical or hypothetical disturbance trends on carbon storage, its dependence upon the interaction of several maps and a carbon model poses a complex challenge in terms of tracking uncertainty. Monte Carlo analysis is an attractive option for tracking the combined effects of error in several constituent inputs as they impact overall uncertainty. Monte Carlo methods iteratively simulate alternative values for each input and quantify how much outputs vary as a result. Variation of each input is controlled by a Probability Density Function (PDF). We introduce a technique called "PDF Weaving," which constructs PDFs that ensure that simulated uncertainty precisely aligns with uncertainty estimates that can be derived from inventory data. This hard link with inventory data (derived in this case from FIA - the US Forest Service Forest Inventory and Analysis program) both provides empirical calibration and establishes consistency with other types of assessments (e.g., habitat and water) for which NFS depends upon FIA data. Results from the NFS Northern Region will be used to illustrate PDF weaving and insights gained from ForCaMF about the role of disturbance in carbon
Monte Carlo and nonlinearities
Dauchet, Jérémi; Blanco, Stéphane; Caliot, Cyril; Charon, Julien; Coustet, Christophe; Hafi, Mouna El; Eymet, Vincent; Farges, Olivier; Forest, Vincent; Fournier, Richard; Galtier, Mathieu; Gautrais, Jacques; Khuong, Anaïs; Pelissier, Lionel; Piaud, Benjamin; Roger, Maxime; Terrée, Guillaume; Weitz, Sebastian
2016-01-01
The Monte Carlo method is widely used to numerically predict systems behaviour. However, its powerful incremental design assumes a strong premise which has severely limited application so far: the estimation process must combine linearly over dimensions. Here we show that this premise can be alleviated by projecting nonlinearities on a polynomial basis and increasing the configuration-space dimension. Considering phytoplankton growth in light-limited environments, radiative transfer in planetary atmospheres, electromagnetic scattering by particles and concentrated-solar-power-plant productions, we prove the real world usability of this advance on four test-cases that were so far regarded as impracticable by Monte Carlo approaches. We also illustrate an outstanding feature of our method when applied to sharp problems with interacting particles: handling rare events is now straightforward. Overall, our extension preserves the features that made the method popular: addressing nonlinearities does not compromise o...
Energy Technology Data Exchange (ETDEWEB)
Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-16
This is a PowerPoint presentation that serves as lecture material for the Parallel Computing summer school. It covers the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
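The two introductory techniques named in the outline, estimating π and inverse transform sampling, might be sketched as follows; the function names are our own, not from the lecture:

```python
import math
import random

def estimate_pi(n_samples, seed=0):
    """Hit-or-miss Monte Carlo: the fraction of uniform points in the
    unit square that land inside the quarter circle estimates pi/4."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n_samples

def sample_exponential(rate, u):
    """Inverse transform sampling: push a uniform u in (0, 1) through
    the inverse CDF of the exponential, F^{-1}(u) = -ln(1 - u) / rate."""
    return -math.log(1.0 - u) / rate

print(estimate_pi(100_000))   # close to 3.14159
```

The Law of Large Numbers guarantees the estimate converges, and the Central Limit Theorem gives its error a 1/sqrt(n) scaling, which is why quadrupling the sample count only halves the statistical error.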
Ren, Lixia; He, Li; Lu, Hongwei; Chen, Yizhong
2016-08-01
A new Monte Carlo-based interval transformation analysis (MCITA) is used in this study for multi-criteria decision analysis (MCDA) of naphthalene-contaminated groundwater management strategies. The analysis can be conducted when input data such as total cost, contaminant concentration and health risk are represented as intervals. Compared to traditional MCDA methods, MCITA-MCDA has the advantages of (1) dealing with the inexactness of input data represented as intervals, (2) reducing computational time through the introduction of a Monte Carlo sampling method, and (3) identifying the most desirable management strategies under data uncertainty. A real-world case study is employed to demonstrate the performance of this method. A set of inexact management alternatives is considered in each duration on the basis of four criteria. Results indicated that the most desirable management strategy was action 15 for the 5-year, action 8 for the 10-year, action 12 for the 15-year, and action 2 for the 20-year management horizon.
Energy Technology Data Exchange (ETDEWEB)
Gallardo, S.; Querol, A.; Ortiz, J.; Rodenas, J.; Verdu, G.
2014-07-01
In this paper, the use of the Monte Carlo code SWORD-GEANT is proposed to simulate a High Purity Germanium (HPGe) detector, specifically the ORTEC GMX40P4 with coaxial geometry.
Energy Technology Data Exchange (ETDEWEB)
Albright, N; Bergstrom, P M; Daly, T P; Descalle, M; Garrett, D; House, R K; Knapp, D K; May, S; Patterson, R W; Siantar, C L; Verhey, L; Walling, R S; Welczorek, D
1999-07-01
PEREGRINE is a 3D Monte Carlo dose calculation system designed to serve as a dose calculation engine for clinical radiation therapy treatment planning systems. Taking advantage of recent advances in low-cost computer hardware, modern multiprocessor architectures and optimized Monte Carlo transport algorithms, PEREGRINE performs mm-resolution Monte Carlo calculations in times that are reasonable for clinical use. PEREGRINE has been developed to simulate radiation therapy for several source types, including photons, electrons, neutrons and protons, for both teletherapy and brachytherapy. However, the work described in this paper is limited to linear accelerator-based megavoltage photon therapy. Here we assess the accuracy, reliability, and added value of 3D Monte Carlo transport for photon therapy treatment planning. Comparisons with clinical measurements in homogeneous and heterogeneous phantoms demonstrate PEREGRINE's accuracy. Studies with variable tissue composition demonstrate the importance of material assignment for the overall dose distribution. Detailed analysis of Monte Carlo results provides new information for radiation research by expanding the set of observables.
Monte Carlo Error Analysis Applied to Core Formation: The Single-stage Model Revived
Cottrell, E.; Walter, M. J.
2009-12-01
The last decade has witnessed an explosion of studies that scrutinize whether or not the siderophile element budget of the modern mantle can plausibly be explained by metal-silicate equilibration in a deep magma ocean during core formation. The single-stage equilibrium scenario is seductive because experiments that equilibrate metal and silicate can then serve as a proxy for the early Earth, and the physical and chemical conditions of core formation can be identified. Recently, models have become more complex as they try to accommodate the proliferation of element partitioning data sets, each of which sets its own limits on the pressure, temperature, and chemistry of equilibration. The ability of single-stage models to explain mantle chemistry has subsequently been challenged, resulting in the development of complex multi-stage core formation models. Here we show that the extent to which extant partitioning data are consistent with single-stage core formation depends heavily upon (1) the assumptions made when regressing experimental partitioning data, (2) the certainty with which regression coefficients are known, and (3) the certainty with which the core/mantle concentration ratios of the siderophile elements are known. We introduce a Monte Carlo algorithm coded in MATLAB that samples parameter space in pressure and oxygen fugacity for a given mantle composition (nbo/t) and liquidus, and returns the number of equilibrium single-stage liquidus “solutions” that are permissible, taking into account the uncertainty in regression parameters and the range of acceptable core/mantle ratios. Here we explore the consequences of regression parameter uncertainty and the impact of regression construction on model outcomes. We find that the form of the partition coefficient (Kd with enforced valence state, or D) and the handling of the temperature effect (based on 1-atm free energy data or high P-T experimental observations) critically affects model outcomes. We consider the most
Energy Technology Data Exchange (ETDEWEB)
Barbeiro, A.R.; Ureba, A.; Baeza, J.A.; Jimenez-Ortega, E.; Plaza, A. Leal [Universidad de Sevilla, Departamento de Fisiologia Medica y Biofisica, Seville (Spain); Linares, R. [Hospital Infanta Luisa, Servicio de Radiofisica, Seville (Spain); Mateos, J.C.; Velazquez, S. [Hospital Universitario Virgen del Rocio, Servicio de Radiofisica, Seville (Spain)
2015-06-15
Purpose: VMAT involves two main sources of uncertainty: one related to the dose calculation accuracy, and the other linked to the continuous delivery of a discrete calculation. The purpose of this work is to present QuAArC, an alternative VMAT QA system to control and potentially reduce these uncertainties. Methods: An automated MC simulation of log files, recorded during VMAT treatment plan delivery, was implemented in order to simulate the actual treatment parameters. The linac head models and the phase-space data of each Control Point (CP) were simulated using the EGSnrc/BEAMnrc MC code, and the corresponding dose calculation was carried out by means of BEAMDOSE, a DOSXYZnrc code modification. A cylindrical phantom was specifically designed to host films rolled up at different radial distances from the isocenter, for a 3D and continuous dosimetric verification. It also allows axial and/or coronal films and point measurements with several types of ion chambers at different locations. Specific software was developed in MATLAB in order to process and evaluate the dosimetric measurements, which incorporates the analysis of dose distributions, profiles, dose difference maps, and 2D/3D gamma index. It is also possible to obtain the experimental DVH reconstructed on the patient CT, by an optimization method to find the individual contribution corresponding to each CP on the film, taking into account the total measured dose, and the corresponding CP dose calculated by MC. Results: The QuAArC system showed high reproducibility of measurements, and consistency with the results obtained with the commercial system implemented in the verification of the evaluated treatment plans. Conclusion: A VMAT QA system based on MC simulation and high resolution dosimetry with film has been developed for treatment verification. It is shown to be useful for the study of the real VMAT capabilities, and also for linac commissioning and evaluation of other verification devices.
Lambert, Ronald J W; Mytilinaios, Ioannis; Maitland, Luke; Brown, Angus M
2012-08-01
This study describes a method to obtain parameter confidence intervals from the fitting of non-linear functions to experimental data, using the SOLVER and Analysis ToolPak Add-In of the Microsoft Excel spreadsheet. Previously we have shown that Excel can fit complex multiple functions to biological data, obtaining values equivalent to those returned by more specialized statistical or mathematical software. However, a disadvantage of using the Excel method was the inability to return confidence intervals for the computed parameters or the correlations between them. Using a simple Monte-Carlo procedure within the Excel spreadsheet (without recourse to programming), SOLVER can provide parameter estimates (up to 200 at a time) for multiple 'virtual' data sets, from which the required confidence intervals and correlation coefficients can be obtained. The general utility of the method is exemplified by applying it to the analysis of the growth of Listeria monocytogenes, the growth inhibition of Pseudomonas aeruginosa by chlorhexidine and the further analysis of the electrophysiological data from the compound action potential of the rodent optic nerve.
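The 'virtual' data set idea, refitting many noise-perturbed copies of the fitted curve and reading confidence limits off the spread of the refitted parameters, can be sketched outside Excel as well. The through-origin linear model, the noise model, and all names below are illustrative assumptions, not the paper's growth functions:

```python
import random
import statistics

def fit_slope(xs, ys):
    """Least-squares slope for a through-origin line y = b*x."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def monte_carlo_ci(xs, ys, n_virtual=200, seed=0):
    """Monte Carlo confidence interval for the slope: generate
    'virtual' data sets by adding noise (matched to the residual
    scatter) to the fitted curve, refit each one, and take the
    2.5th/97.5th percentiles of the refitted slopes."""
    rng = random.Random(seed)
    b = fit_slope(xs, ys)
    sigma = statistics.stdev([y - b * x for x, y in zip(xs, ys)])
    estimates = sorted(
        fit_slope(xs, [b * x + rng.gauss(0.0, sigma) for x in xs])
        for _ in range(n_virtual))
    return estimates[int(0.025 * n_virtual)], estimates[int(0.975 * n_virtual)]

# Synthetic data with true slope 2.0 and Gaussian noise
xs = list(range(1, 11))
rng = random.Random(42)
ys = [2.0 * x + rng.gauss(0.0, 0.5) for x in xs]
lo, hi = monte_carlo_ci(xs, ys)
```

For a multi-parameter fit the same loop would collect one tuple of estimates per virtual data set, yielding correlation coefficients between parameters as well as their individual intervals.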
Monte Carlo modelling of TRIGA research reactor
El Bakkari, B.; Nacir, B.; El Bardouni, T.; El Younoussi, C.; Merroun, O.; Htet, A.; Boulaich, Y.; Zoubair, M.; Boukhal, H.; Chakir, M.
2010-10-01
The Moroccan 2 MW TRIGA MARK II research reactor at Centre des Etudes Nucléaires de la Maâmora (CENM) achieved initial criticality on May 2, 2007. The reactor is designed to effectively implement the various fields of basic nuclear research, manpower training, and production of radioisotopes for their use in agriculture, industry, and medicine. This study deals with the neutronic analysis of the 2-MW TRIGA MARK II research reactor at CENM and validation of the results by comparisons with the experimental, operational, and available final safety analysis report (FSAR) values. The study was prepared in collaboration between the Laboratory of Radiation and Nuclear Systems (ERSN-LMR) of the Faculty of Sciences of Tetuan (Morocco) and CENM. The 3-D continuous energy Monte Carlo code MCNP (version 5) was used to develop a versatile and accurate full model of the TRIGA core. The model represents in detail all components of the core with literally no physical approximation. Continuous energy cross-section data from the most recent nuclear data evaluations (ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3) as well as S(α,β) thermal neutron scattering functions distributed with the MCNP code were used. The cross-section libraries were generated by using the NJOY99 system updated to its most recent patch file "up259". The consistency and accuracy of both the Monte Carlo simulation and neutron transport physics were established by benchmarking the TRIGA experiments. Core excess reactivity, total and integral control rod worths as well as power peaking factors were used in the validation process. Results of calculations are analysed and discussed.
Armaghani, Danial Jahed; Mahdiyar, Amir; Hasanipanah, Mahdi; Faradonbeh, Roohollah Shirani; Khandelwal, Manoj; Amnieh, Hassan Bakhshandeh
2016-09-01
Flyrock is considered as one of the main causes of human injury, fatalities, and structural damage among all undesirable environmental impacts of blasting. Therefore, it seems that the proper prediction/simulation of flyrock is essential, especially in order to determine the blast safety area. If proper control measures are taken, then the flyrock distance can be controlled, and, in return, the risk of damage can be reduced or eliminated. The first objective of this study was to develop a predictive model for flyrock estimation based on multiple regression (MR) analyses, and after that, using the developed MR model, the flyrock phenomenon was simulated by the Monte Carlo (MC) approach. In order to achieve the objectives of this study, 62 blasting operations were investigated in the Ulu Tiram quarry, Malaysia, and some controllable and uncontrollable factors were carefully recorded/calculated. The obtained results of MC modeling indicated that this approach is capable of simulating flyrock ranges with a good level of accuracy. The mean of simulated flyrock by MC was obtained as 236.3 m, while this value was achieved as 238.6 m for the measured one. Furthermore, a sensitivity analysis was also conducted to investigate the effects of model inputs on the output of the system. The analysis demonstrated that powder factor is the most influential parameter on flyrock among all model inputs. It is noticeable that the proposed MR and MC models should be utilized only in the studied area, and their direct use under other conditions is not recommended.
Messerly, Richard A; Rowley, Richard L; Knotts, Thomas A; Wilding, W Vincent
2015-09-14
A rigorous statistical analysis is presented for Gibbs ensemble Monte Carlo simulations. This analysis reduces the uncertainty in the critical point estimate when compared with traditional methods found in the literature. Two different improvements are recommended due to the following results. First, the traditional propagation of error approach for estimating the standard deviations used in regression improperly weighs the terms in the objective function due to the inherent interdependence of the vapor and liquid densities. For this reason, an error model is developed to predict the standard deviations. Second, and most importantly, a rigorous algorithm for nonlinear regression is compared to the traditional approach of linearizing the equations and propagating the error in the slope and the intercept. The traditional regression approach can yield nonphysical confidence intervals for the critical constants. By contrast, the rigorous algorithm restricts the confidence regions to values that are physically sensible. To demonstrate the effect of these conclusions, a case study is performed to enhance the reliability of molecular simulations to resolve the n-alkane family trend for the critical temperature and critical density.
Ng, C M
2013-10-01
The development of a population PK/PD model, an essential component for model-based drug development, is both time- and labor-intensive. Graphical-processing unit (GPU) computing technology has been proposed and used to accelerate many scientific computations. The objective of this study was to develop a hybrid GPU-CPU implementation of the parallelized Monte Carlo parametric expectation maximization (MCPEM) estimation algorithm for population PK data analysis. A hybrid GPU-CPU implementation of the MCPEM algorithm (MCPEMGPU) and an identical algorithm designed for a single CPU (MCPEMCPU) were developed using MATLAB in a single computer equipped with dual Xeon 6-Core E5690 CPUs and an NVIDIA Tesla C2070 GPU parallel computing card that contained 448 stream processors. Two different PK models with rich/sparse sampling design schemes were used to simulate population data in assessing the performance of MCPEMCPU and MCPEMGPU. Results were analyzed by comparing the parameter estimation and model computation times. The speedup factor was used to assess the relative benefit of parallelized MCPEMGPU over MCPEMCPU in shortening model computation time. The MCPEMGPU consistently achieved shorter computation time than the MCPEMCPU and can offer more than 48-fold speedup using a single GPU card. The novel hybrid GPU-CPU implementation of the parallelized MCPEM algorithm developed in this study holds great promise in serving as the core for the next generation of modeling software for population PK/PD analysis.
Xu, Wenhao; Yang, Jichu; Hu, Yinyu
2009-04-01
Configurational-bias Monte Carlo simulations in the isobaric-isothermal ensemble using the TraPPE-UA force field were performed to study the microscopic structures and molecular interactions of mixtures containing supercritical carbon dioxide (scCO(2)) and ethanol (EtOH). The binary vapor-liquid coexisting curves were calculated at 298.17, 333.2, and 353.2 K and are in excellent agreement with experimental results. For the first time, three important interactions, i.e., EtOH-EtOH hydrogen bonding, EtOH-CO(2) hydrogen bonding, and EtOH-CO(2) electron donor-acceptor (EDA) bonding, in the mixtures were fully analyzed and compared. The effects of EtOH mole fraction, temperature, and pressure on the three interactions were investigated and then explained by the competition of interactions between EtOH and CO(2) molecules. Analysis of the microscopic structures indicates a strong preference for the formation of EtOH-CO(2) hydrogen-bonded tetramers and pentamers at higher EtOH compositions. The distribution of aggregation sizes and types shows that a very large EtOH-EtOH hydrogen-bonded network exists in the mixtures, while only linear EtOH-CO(2) hydrogen-bonded and EDA-bonded dimers and trimers are present. Further analysis shows that the EtOH-CO(2) EDA complex is more stable than the hydrogen-bonded one.
Mahdavi, Naser; Shamsaei, Mojtaba; Shafaei, Mostafa; Rabiei, Ali
2013-10-01
The objective of this study was to design a system in order to analyze gold and other heavy elements in internal organs using in vivo x-ray fluorescence (XRF) analysis. The Monte Carlo N-Particle (MCNP) code was used to simulate phantoms and sources. A source of 99mTc was simulated in the kidney to excite the gold x-rays. Changes in the K XRF response due to variations in tissue thickness overlying the kidney at the measurement site were investigated. Different simulations having tissue thicknesses of 20, 30, 40, 50 and 60 mm were performed. Kα1 and Kα2 for all depths were measured. The linearity of the XRF system was also studied by increasing the gold concentration in the kidney phantom from 0 to 500 µg g-1 kidney tissue. The results show that gold concentrations between 3 and 10 µg g-1 kidney tissue can be detected for distances between the skin and the kidney surface of 20-60 mm. The study also made a comparison between the skin doses for the source outside and inside the phantom.
Monte Carlo analysis of a lateral IBIC experiment on a 4H-SiC Schottky diode
Energy Technology Data Exchange (ETDEWEB)
Olivero, P. [Experimental Physics Dept./NIS Excellence Centre, University of Torino, and INFN-Sez. di Torino via P. Giuria 1, 10125 Torino (Italy); Ruder Boskovic Institute, Bijenicka 54, P.O. Box 180, 10002 Zagreb (Croatia); Forneris, J.; Gamarra, P. [Experimental Physics Dept./NIS Excellence Centre, University of Torino, and INFN-Sez. di Torino via P. Giuria 1, 10125 Torino (Italy); Jaksic, M. [Ruder Boskovic Institute, Bijenicka 54, P.O. Box 180, 10002 Zagreb (Croatia); Lo Giudice, A.; Manfredotti, C. [Experimental Physics Dept./NIS Excellence Centre, University of Torino, and INFN-Sez. di Torino via P. Giuria 1, 10125 Torino (Italy); Pastuovic, Z.; Skukan, N. [Ruder Boskovic Institute, Bijenicka 54, P.O. Box 180, 10002 Zagreb (Croatia); Vittone, E., E-mail: ettore.vittone@unito.it [Experimental Physics Dept./NIS Excellence Centre, University of Torino, and INFN-Sez. di Torino via P. Giuria 1, 10125 Torino (Italy)
2011-10-15
The transport properties of a 4H-SiC Schottky diode have been investigated by the ion beam induced charge (IBIC) technique in lateral geometry through the analysis of the charge collection efficiency (CCE) profile at a fixed applied reverse bias voltage. The cross section of the sample orthogonal to the electrodes was irradiated by a rarefied 4 MeV proton microbeam and the charge pulses have been recorded as function of incident proton position with a spatial resolution of 2 μm. The CCE profile shows a broad plateau with CCE values close to 100% occurring at the depletion layer, whereas in the neutral region, the exponentially decreasing profile indicates the dominant role played by the diffusion transport mechanism. Mapping of charge pulses was accomplished by a novel computational approach, which consists in mapping the Gunn's weighting potential by solving the electrostatic problem by finite element method and hence evaluating the induced charge at the sensing electrode by a Monte Carlo method. The combination of these two computational methods enabled an exhaustive interpretation of the experimental profiles and allowed an accurate evaluation both of the electrical characteristics of the active region (e.g. electric field profiles) and of basic transport parameters (i.e. diffusion length and minority carrier lifetime).
Sweeney, Lisa M; Parker, Ann; Haber, Lynne T; Tran, C Lang; Kuempel, Eileen D
2013-06-01
A biomathematical model was previously developed to describe the long-term clearance and retention of particles in the lungs of coal miners. The model structure was evaluated and parameters were estimated in two data sets, one from the United States and one from the United Kingdom. The three-compartment model structure consists of deposition of inhaled particles in the alveolar region, competing processes of either clearance from the alveolar region or translocation to the lung interstitial region, and very slow, irreversible sequestration of interstitialized material in the lung-associated lymph nodes. Point estimates of model parameter values were estimated separately for the two data sets. In the current effort, Bayesian population analysis using Markov chain Monte Carlo simulation was used to recalibrate the model while improving assessments of parameter variability and uncertainty. When model parameters were calibrated simultaneously to the two data sets, agreement between the derived parameters for the two groups was very good, and the central tendency values were similar to those derived from the deterministic approach. These findings are relevant to the proposed update of the ICRP human respiratory tract model with revisions to the alveolar-interstitial region based on this long-term particle clearance and retention model.
Bonamente, Massimillano; Joy, Marshall K.; Carlstrom, John E.; Reese, Erik D.; LaRoque, Samuel J.
2004-01-01
X-ray and Sunyaev-Zel'dovich effect data can be combined to determine the distance to galaxy clusters. High-resolution X-ray data are now available from Chandra, which provides both spatial and spectral information, and Sunyaev-Zel'dovich effect data were obtained from the BIMA and Owens Valley Radio Observatory (OVRO) arrays. We introduce a Markov Chain Monte Carlo procedure for the joint analysis of X-ray and Sunyaev-Zel'dovich effect data. The advantages of this method are the high computational efficiency and the ability to measure simultaneously the probability distribution of all parameters of interest, such as the spatial and spectral properties of the cluster gas, and also for derived quantities such as the distance to the cluster. We demonstrate this technique by applying it to the Chandra X-ray data and the OVRO radio data for the galaxy cluster A611. Comparisons with traditional likelihood ratio methods reveal the robustness of the method. This method will be used in a follow-up paper to determine the distances to a large sample of galaxy clusters.
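A Markov chain Monte Carlo procedure of this kind is typically built on a Metropolis-style sampler, which draws correlated samples from a posterior distribution so that percentiles of the chain give the probability distribution of each parameter. The following is a minimal one-parameter sketch with a toy standard-normal target; the function names and the target are our own illustrations, not the authors' cluster model:

```python
import math
import random

def metropolis(log_post, x0, n_steps, step=0.5, seed=0):
    """Random-walk Metropolis sampler: propose x' = x + N(0, step) and
    accept with probability min(1, exp(log_post(x') - log_post(x)));
    on rejection the current state is repeated in the chain."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_steps):
        x_new = x + rng.gauss(0.0, step)
        lp_new = log_post(x_new)
        if lp_new >= lp or rng.random() < math.exp(lp_new - lp):
            x, lp = x_new, lp_new
        chain.append(x)
    return chain

# Toy target: a standard normal "posterior" (log density up to a constant)
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 20_000)
mean = sum(chain) / len(chain)
```

Because only the log-posterior ratio is needed, normalization constants cancel, which is what makes the method attractive for joint multi-parameter analyses like the X-ray plus Sunyaev-Zel'dovich fit described above.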
Monte Carlo simulation of neutron scattering instruments
Energy Technology Data Exchange (ETDEWEB)
Seeger, P.A.
1995-12-31
A library of Monte Carlo subroutines has been developed for the purpose of design of neutron scattering instruments. Using small-angle scattering as an example, the philosophy and structure of the library are described and the programs are used to compare instruments at continuous wave (CW) and long-pulse spallation source (LPSS) neutron facilities. The Monte Carlo results give a count-rate gain of a factor between 2 and 4 using time-of-flight analysis. This is comparable to scaling arguments based on the ratio of wavelength bandwidth to resolution width.
Lin, Chin; Chu, Chi-Ming; Su, Sui-Lung
2016-01-01
Conventional genome-wide association studies (GWAS) have been proven to be a successful strategy for identifying genetic variants associated with complex human traits. However, there is still a large heritability gap between GWAS and traditional family studies. The "missing heritability" has been suggested to be due to a lack of studies focused on epistasis, also called gene-gene interactions, because individual trials have often had insufficient sample size. Meta-analysis is a common method for increasing statistical power. However, sufficiently detailed information is difficult to obtain. A previous study employed a meta-regression-based method to detect epistasis, but it faced the challenge of inconsistent estimates. Here, we describe a Markov chain Monte Carlo-based method, called "Epistasis Test in Meta-Analysis" (ETMA), which uses genotype summary data to obtain consistent estimates of epistasis effects in meta-analysis. We defined a series of conditions to generate simulation data and tested the power and type I error rates of ETMA, individual data analysis and the conventional meta-regression-based method. ETMA not only successfully facilitated consistency of evidence but also yielded acceptable type I error and higher power than conventional meta-regression. We applied ETMA to three real meta-analysis data sets. We found significant gene-gene interactions in the renin-angiotensin system and the polycyclic aromatic hydrocarbon metabolism pathway, with strong supporting evidence. In addition, glutathione S-transferase (GST) mu 1 and theta 1 were confirmed to exert independent effects on cancer. We concluded that the application of ETMA to real meta-analysis data was successful. Finally, we developed an R package, etma, for the detection of epistasis in meta-analysis [etma is available via the Comprehensive R Archive Network (CRAN) at https://cran.r-project.org/web/packages/etma/index.html].
Dujko, S.; Ebert, U.; White, R.D.; Petrović, Z.L.
2010-01-01
A comprehensive investigation of electron transport in N$_{2}$-O$_{2}$ mixtures has been carried out using a multi-term theory for solving the Boltzmann equation and the Monte Carlo simulation technique, instead of the conventional two-term theory often employed in the plasma modeling community. We focus on the
Maleka, PP; Maucec, M
2005-01-01
The Monte Carlo method was used to simulate the pulse-height response function of a high-purity germanium (HPGe) detector for photon energies below 1 MeV. The calculations address the uncertainty estimation due to inadequate specifications of source positioning and to variations in the detector's physi
Meta-Analysis of Single-Case Data: A Monte Carlo Investigation of a Three Level Model
Owens, Corina M.
2011-01-01
Numerous ways to meta-analyze single-case data have been proposed in the literature; however, consensus on the most appropriate method has not been reached. One method that has been proposed involves multilevel modeling. This study used Monte Carlo methods to examine the appropriateness of Van den Noortgate and Onghena's (2008) raw data multilevel…
Monte Carlo methods for electromagnetics
Sadiku, Matthew NO
2009-01-01
Until now, novices had to painstakingly dig through the literature to discover how to use Monte Carlo techniques for solving electromagnetic problems. Written by one of the foremost researchers in the field, Monte Carlo Methods for Electromagnetics provides a solid understanding of these methods and their applications in electromagnetic computation. Including much of his own work, the author brings together essential information from several different publications. Using a simple, clear writing style, the author begins with a historical background and review of electromagnetic theory. After addressing probability and statistics, he introduces the finite difference method as well as the fixed and floating random walk Monte Carlo methods. The text then applies the Exodus method to Laplace's and Poisson's equations and presents Monte Carlo techniques for handling Neumann problems. It also deals with whole field computation using the Markov chain, applies Monte Carlo methods to time-varying diffusion problems, and ...
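The fixed random walk Monte Carlo method for Laplace's equation mentioned above can be sketched on a small grid: the potential at an interior point equals the expected boundary value reached by a random walk started there. The grid layout, boundary values, and function name below are illustrative assumptions, not taken from the book:

```python
import random

def laplace_walk(grid, i0, j0, n_walks=500, seed=0):
    """Fixed random walk Monte Carlo for Laplace's equation: from the
    start cell, step to a uniformly random neighbour until a boundary
    cell (one holding a number rather than None) is reached, and
    average the boundary potentials over many walks."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        i, j = i0, j0
        while grid[i][j] is None:
            di, dj = rng.choice(((1, 0), (-1, 0), (0, 1), (0, -1)))
            i, j = i + di, j + dj
        total += grid[i][j]
    return total / n_walks

# 5x5 grid: top edge held at 100 V, the other edges at 0 V
N = 5
grid = [[None] * N for _ in range(N)]
for k in range(N):
    grid[0][k] = 100.0       # top boundary
    grid[N - 1][k] = 0.0     # bottom boundary
for k in range(1, N - 1):
    grid[k][0] = grid[k][N - 1] = 0.0   # left and right boundaries

center = laplace_walk(grid, 2, 2)   # close to 25 V by symmetry
```

The floating random walk variant replaces unit steps with jumps to the largest circle fitting inside the domain, which reaches the boundary in far fewer steps; the Exodus method removes the statistical error entirely by propagating probabilities deterministically.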
Antanasijević, Davor; Pocajt, Viktor; Perić-Grujić, Aleksandra; Ristić, Mirjana
2014-11-01
This paper describes the training, validation, testing and uncertainty analysis of general regression neural network (GRNN) models for the forecasting of dissolved oxygen (DO) in the Danube River. The main objectives of this work were to determine the optimum data normalization and input selection techniques, the relative importance of uncertainty in the different input variables, and the uncertainty of the model results using the Monte Carlo Simulation (MCS) technique. Min-max, median, z-score, sigmoid and tanh were validated as normalization techniques, whilst the variance inflation factor, correlation analysis and a genetic algorithm were tested as input selection techniques. As inputs, the GRNN models used 19 water quality variables, measured in the river water each month at 17 different sites over a period of 9 years. The best results were obtained using min-max normalized data and input selection based on the correlation between DO and the dependent variables, which provided the most accurate GRNN model in combination with the smallest number of inputs: temperature, pH, HCO3-, SO42-, NO3-N, hardness, Na, Cl-, conductivity and alkalinity. The results show that the correlation coefficient between measured and predicted DO values is 0.85. The inputs with the greatest effect on the GRNN model (arranged in descending order) were T, pH, HCO3-, SO42- and NO3-N. Of all inputs, the variability of temperature had the greatest influence on the variability of the DO content in the river body, with the DO decreasing at a rate similar to the theoretical temperature-related DO decrease rate. The uncertainty analysis of the model results demonstrates that the GRNN can effectively forecast the DO content, since the distribution of the model results is very similar to the corresponding distribution of the real data.
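The Monte Carlo uncertainty analysis described above can be sketched generically (a hypothetical stand-in model, not the paper's GRNN; the coefficients and input distributions below are invented purely for illustration): sample the uncertain inputs, push each draw through the model, and summarize the resulting output distribution.

```python
import random, statistics

def toy_model(temp, ph):
    """Hypothetical stand-in for a trained DO model (NOT the paper's
    GRNN): DO falls with temperature and peaks near neutral pH."""
    return 14.6 - 0.39 * temp + 0.2 * (7.0 - abs(ph - 7.0))

rng = random.Random(0)
# Propagate assumed input (measurement) uncertainties through the model.
outputs = sorted(toy_model(rng.gauss(12.0, 2.0), rng.gauss(7.8, 0.3))
                 for _ in range(10000))
mean_do = statistics.fmean(outputs)
lo95, hi95 = outputs[250], outputs[9750]   # empirical ~95% interval
print(round(mean_do, 2), round(lo95, 2), round(hi95, 2))
```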
Efficiency and accuracy of Monte Carlo (importance) sampling
Waarts, P.H.
2003-01-01
Monte Carlo analysis is often regarded as the simplest and most accurate reliability method, and it is also the most transparent one. Its main problem is the trade-off between accuracy and efficiency: Monte Carlo becomes less efficient, or less accurate, when very low probabilities are to be computed
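The low-probability problem described above is exactly what importance sampling addresses. A minimal sketch (my own example, not from the paper): estimating the tail probability P(Z > 4) of a standard normal. Naive sampling with 20,000 draws would often see no hits at all, while a proposal shifted onto the threshold gives a usable estimate.

```python
import math, random

def normal_tail_is(t=4.0, n=20000, seed=2):
    """Importance-sampling estimate of P(Z > t), Z ~ N(0, 1): draw from
    a proposal N(t, 1) centered on the threshold and reweight each hit
    by the likelihood ratio phi(x)/phi(x - t) = exp(-t*x + t*t/2)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        x = rng.gauss(t, 1.0)
        if x > t:
            acc += math.exp(-t * x + t * t / 2.0)
    return acc / n

est = normal_tail_is()
print(est)  # true value is about 3.17e-5; naive sampling would often return 0
```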
Zhang, Junlong; Li, Yongping; Huang, Guohe; Chen, Xi; Bao, Anming
2016-07-01
Without a realistic assessment of parameter uncertainty, decision makers may encounter difficulties in accurately describing hydrologic processes and assessing relationships between model parameters and watershed characteristics. In this study, a Markov-Chain-Monte-Carlo-based multilevel-factorial-analysis (MCMC-MFA) method is developed, which can not only generate samples of parameters from a well-constructed Markov chain and assess parameter uncertainties with straightforward Bayesian inference, but also investigate the individual and interactive effects of multiple parameters on model output by measuring the specific variations of hydrological responses. A case study is conducted to address parameter uncertainties in the Kaidu watershed of northwest China. Effects of multiple parameters and their interactions are quantitatively investigated using the MCMC-MFA with a three-level factorial experiment (81 runs in total). A variance-based sensitivity analysis method is used to validate the results of the parameters' effects. Results disclose that (i) the soil conservation service runoff curve number for moisture condition II (CN2) and the fraction of snow volume corresponding to 50% snow cover (SNO50COV) are the most significant factors for hydrological responses, implying that infiltration-excess overland flow and snow water equivalent represent important water inputs to the hydrological system of the Kaidu watershed; (ii) saturated hydraulic conductivity (SOL_K) and the soil evaporation compensation factor (ESCO) have obvious effects on hydrological responses, implying that the processes of percolation and evaporation impact the hydrological processes in this watershed; (iii) the interactions of ESCO and SNO50COV as well as CN2 and SNO50COV have an obvious effect, implying that snow cover can impact the generation of runoff on the land surface and the extraction of soil evaporative demand in lower soil layers. These findings can help enhance the hydrological model
Cezar, Henrique M.; Rondina, Gustavo G.; Da Silva, Juarez L. F.
2017-02-01
A basic requirement for an atom-level understanding of nanoclusters is knowledge of their atomic structure. This understanding is incomplete if it does not take into account temperature effects, which play a crucial role in phase transitions and changes in the overall stability of the particles. Finite-size particles present intricate potential energy surfaces, and rigorous descriptions of temperature effects are best achieved by exploiting extended-ensemble algorithms, such as Parallel Tempering Monte Carlo (PTMC). In this study, we employed the PTMC algorithm, implemented from scratch, to sample configurations of LJ$_{n}$ (n = 38, 55, 98, 147) particles at a wide range of temperatures. The heat capacities and phase transitions obtained with our PTMC implementation are consistent with all the expected features for the LJ nanoclusters, e.g., solid to solid and solid to liquid. To identify the known phase transitions and assess the prevalence of various structural motifs available at different temperatures, we propose a combination of a Leader-like clustering algorithm based on a Euclidean metric with the PTMC sampling. This combined approach is further compared with the more computationally demanding bond order analysis, typically employed for this kind of problem. We show that the clustering technique yields the same results in most cases, with the advantage that it requires no previous knowledge of the parameters defining each geometry. Being simple to implement, we believe that this straightforward clustering approach is a valuable data analysis tool that can provide insights into the physics of finite-size particles with a few to thousands of atoms at a relatively low cost.
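A minimal parallel tempering sketch (my own toy example on a 1D double well, not the authors' LJ implementation) shows the core mechanics: independent Metropolis chains at a ladder of temperatures, plus occasional neighbor swaps accepted with probability min(1, exp((1/T_k - 1/T_{k+1})(U_k - U_{k+1}))). The temperatures, step size, and step count are arbitrary illustrative choices.

```python
import math, random

def parallel_tempering(steps=8000, seed=3):
    """Minimal parallel tempering on the double well U(x) = (x^2 - 1)^2.
    At the coldest temperature a lone Metropolis chain essentially never
    crosses the barrier at x = 0; swaps with hotter replicas let it."""
    rng = random.Random(seed)
    U = lambda x: (x * x - 1.0) ** 2
    temps = [0.05, 0.2, 0.8, 3.0]
    xs = [1.0] * len(temps)               # all replicas start in the right well
    cold = []
    for _ in range(steps):
        for k, T in enumerate(temps):     # one local Metropolis move per replica
            prop = xs[k] + rng.gauss(0.0, 0.5)
            if rng.random() < math.exp(min(0.0, -(U(prop) - U(xs[k])) / T)):
                xs[k] = prop
        k = rng.randrange(len(temps) - 1)  # attempt one neighbor swap
        d = (1.0 / temps[k] - 1.0 / temps[k + 1]) * (U(xs[k]) - U(xs[k + 1]))
        if rng.random() < math.exp(min(0.0, d)):
            xs[k], xs[k + 1] = xs[k + 1], xs[k]
        cold.append(xs[0])
    return cold

samples = parallel_tempering()
print(min(samples) < -0.5 < 0.5 < max(samples))  # cold chain visited both wells
```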
Directory of Open Access Journals (Sweden)
TEMITOPE RAPHAEL AYODELE
2016-04-01
Monte Carlo simulation using the Simple Random Sampling (SRS) technique is popularly known for its ability to handle complex uncertainty problems. However, to produce a reasonable result it requires a huge sample size, which makes it computationally expensive, time consuming and unfit for online power system applications. In this article, the performance of the Latin Hypercube Sampling (LHS) technique is explored and compared with SRS in terms of accuracy, robustness and speed for a small-signal stability application in a wind-generator-connected power system. The analysis is performed using probabilistic techniques via eigenvalue analysis on two standard networks (Single Machine Infinite Bus and the IEEE 16-machine, 68-bus test system). The accuracy of the two sampling techniques is determined by comparing their different sample sizes with the IDEAL (conventional) result. The robustness is determined from the variance reduction observed when the experiment is repeated 100 times with different sample sizes using the two sampling techniques in turn. The results show that sample sizes generated with LHS produce the same result as the IDEAL values starting from a sample size of 100, indicating that about 100 LHS samples are enough to produce reasonable results for practical purposes in small-signal stability applications. LHS also has the smallest variance when the experiment is repeated 100 times, which signifies its robustness over SRS. A sample size of 100 with LHS produces the same result as the conventional method with a sample size of 50,000; the reduced sample size gives LHS a computational speed advantage (about six times) over the conventional method.
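The LHS idea can be sketched in one dimension (an illustrative toy, not the paper's power system study): divide [0, 1) into n equal strata, draw one point per stratum, and shuffle. Compared with simple random sampling, the stratification sharply reduces the variance of a mean estimate for a smooth response; the quadratic response below is an arbitrary stand-in.

```python
import random, statistics

def lhs_uniform(n, rng):
    """Latin Hypercube Sample of n points in [0, 1): exactly one point
    in each stratum [i/n, (i+1)/n), returned in shuffled order."""
    pts = [(i + rng.random()) / n for i in range(n)]
    rng.shuffle(pts)
    return pts

def mean_estimate(sampler, f, n, rng):
    return statistics.fmean(f(u) for u in sampler(n, rng))

f = lambda u: u * u                          # toy response; exact mean is 1/3
rng = random.Random(0)
srs = lambda n, r: [r.random() for _ in range(n)]
srs_runs = [mean_estimate(srs, f, 100, rng) for _ in range(200)]
lhs_runs = [mean_estimate(lhs_uniform, f, 100, rng) for _ in range(200)]
# The stratified (LHS) estimates scatter far less around 1/3 than SRS.
print(statistics.pstdev(srs_runs) > 10 * statistics.pstdev(lhs_runs))
```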
Metropolis Methods for Quantum Monte Carlo Simulations
Ceperley, D. M.
2003-01-01
Since its first description fifty years ago, the Metropolis Monte Carlo method has been used in a variety of different ways for the simulation of continuum quantum many-body systems. This paper will consider some of the generalizations of the Metropolis algorithm employed in quantum Monte Carlo: variational Monte Carlo, dynamical methods for projector Monte Carlo (i.e., diffusion Monte Carlo with rejection), multilevel sampling in path-integral Monte Carlo, the sampling of permutations, ...
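The basic Metropolis building block underlying all of these variants can be sketched as follows (a generic random-walk sampler of my own, not code from the paper), here sampling a standard normal as a check:

```python
import math, random

def metropolis(logp, x0, steps, scale, rng):
    """Random-walk Metropolis: propose x' = x + N(0, scale) and
    accept with probability min(1, p(x')/p(x))."""
    x, lp = x0, logp(x0)
    chain = []
    for _ in range(steps):
        xp = x + rng.gauss(0.0, scale)
        lpp = logp(xp)
        if lpp >= lp or rng.random() < math.exp(lpp - lp):
            x, lp = xp, lpp
        chain.append(x)
    return chain

# Sample a standard normal (log-density -x^2/2) and check its moments.
rng = random.Random(4)
chain = metropolis(lambda x: -0.5 * x * x, 0.0, 50000, 1.0, rng)
mean = sum(chain) / len(chain)
second = sum(x * x for x in chain) / len(chain)
print(round(mean, 2), round(second, 2))
```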
The Dynamic Monte Carlo Method for Transient Analysis of Nuclear Reactors
Sjenitzer, B.L.
2013-01-01
In this thesis a new method for the analysis of power transients in a nuclear reactor is developed, which is more accurate than the present state-of-the-art methods. Transient analysis is an important tool when designing nuclear reactors, since it predicts the behaviour of a reactor during changing co
Monte Carlo analysis: error of extrapolated thermal conductivity from molecular dynamics simulations
Energy Technology Data Exchange (ETDEWEB)
Liu, Xiang-Yang [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Andersson, Anders David [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-11-07
In this short report, we give an analysis of the extrapolated thermal conductivity of UO2 from earlier molecular dynamics (MD) simulations [1]. Because almost all material properties, e.g. fission gas release, are functions of temperature, the fuel thermal conductivity is the most important parameter from a model sensitivity perspective [2]. Thus, it is useful to perform such an analysis.
Risk analysis of gravity dam instability using credibility theory Monte Carlo simulation model
Xin, Cao; Chongshi, Gu
2016-01-01
Risk analysis of gravity dam stability involves complicated uncertainty in many design parameters and measured data. The stability failure risk ratio, described jointly by probability and possibility, is deficient in characterizing the influence of fuzzy factors and in representing the likelihood of risk occurrence in practical engineering. In this article, credibility theory is applied to the stability failure risk analysis of gravity dams. The stability of a gravity dam is viewed as a hybrid event co...
DEFF Research Database (Denmark)
Salling, Kim Bang; Leleur, Steen
2007-01-01
This paper presents an appraisal study of three different airport proposals in Greenland by use of an adapted version of the Danish CBA-DK model. The assessment model is based on both a deterministic calculation using conventional cost-benefit analysis and a stochastic calculation...
Monte Carlo Analysis of Airport Throughput and Traffic Delays Using Self Separation Procedures
Consiglio, Maria C.; Sturdy, James L.
2006-01-01
This paper presents the results of three simulation studies of throughput and delay times of arrival and departure operations performed at non-towered, non-radar airports using self-separation procedures. The studies were conducted as part of the validation process of the Small Aircraft Transportation System Higher Volume Operations (SATS HVO) concept and include an analysis of the predicted airport capacity with different traffic conditions and system constraints under increasing levels of demand. Results show that SATS HVO procedures can dramatically increase capacity at non-towered, non-radar airports and that the concept offers the potential for increasing capacity of the overall air transportation system.
Composite analysis with Monte Carlo methods: an example with cosmic rays and clouds
Laken, Benjamin A
2013-01-01
The composite (superposed epoch) analysis technique has been frequently employed to examine a hypothesized link between solar activity and the Earth's atmosphere, often through an investigation of Forbush decrease (Fd) events (sudden high-magnitude decreases in the flux of cosmic rays impinging on the upper atmosphere, lasting up to several days). This technique is useful for isolating low-amplitude signals within data where background variability would otherwise obscure detection. The application of composite analyses to investigate the possible impacts of Fd events involves a statistical examination of time-dependent atmospheric responses to Fds, often from aerosol and/or cloud datasets. Despite the publication of numerous results within this field, clear conclusions have yet to be drawn and much ambiguity and disagreement still remain. In this paper, we argue that the conflicting findings of composite studies within this field relate to methodological differences in the manner in which the composites have been ...
Composite analysis with Monte Carlo methods: an example with cosmic rays and clouds
Directory of Open Access Journals (Sweden)
Laken B.A.
2013-09-01
The composite (superposed epoch) analysis technique has been frequently employed to examine a hypothesized link between solar activity and the Earth's atmosphere, often through an investigation of Forbush decrease (Fd) events (sudden high-magnitude decreases in the flux of cosmic rays impinging on the upper atmosphere, lasting up to several days). This technique is useful for isolating low-amplitude signals within data where background variability would otherwise obscure detection. The application of composite analyses to investigate the possible impacts of Fd events involves a statistical examination of time-dependent atmospheric responses to Fds, often from aerosol and/or cloud datasets. Despite the publication of numerous results within this field, clear conclusions have yet to be drawn and much ambiguity and disagreement still remain. In this paper, we argue that the conflicting findings of composite studies within this field relate to methodological differences in the manner in which the composites have been constructed and analyzed. Working from an example, we show how a composite may be objectively constructed to maximize signal detection, robustly identify statistical significance, and quantify the lower-limit uncertainty related to hypothesis testing. Additionally, we demonstrate how a seemingly significant false positive may be obtained from non-significant data by minor alterations to methodological approaches.
Acoustic effects analysis utilizing speckle pattern with fixed-particle Monte Carlo
Vakili, Ali; Hollmann, Joseph A.; Holt, R. Glynn; DiMarzio, Charles A.
2016-03-01
Optical imaging in a turbid medium is limited by the multiple scattering a photon undergoes while traveling through the medium; optical imaging therefore cannot provide high-resolution information deep in the medium. In soft tissue, acoustic waves, unlike light, can travel through the medium with negligible scattering, but they cannot provide contrast as medically relevant as light can. Hybrid solutions have been applied to combine the benefits of both imaging methods. A focused acoustic wave generates a force inside an acoustically absorbing medium known as the acoustic radiation force (ARF). The ARF induces particle displacement within the medium, and the amount of displacement is a function of the mechanical properties of the medium and the applied force. To monitor the displacement induced by the ARF, speckle pattern analysis can be used. The speckle pattern is the result of interfering optical waves with different phases. As light travels through the medium, it undergoes several scattering events, generating different scattering paths that depend on the locations of the particles; light waves that travel along these paths have different phases (different optical path lengths). The ARF displaces scatterers within the acoustic focal volume and so changes the optical path lengths. In addition, the temperature rise due to the conversion of absorbed acoustic energy to heat changes the index of refraction and therefore the optical path lengths of the scattering paths. The result is a change in the speckle pattern. Results suggest that the average change in the speckle pattern measures the displacement of particles and the temperature rise within the acoustic focal area, and hence can provide the mechanical and thermal properties of the medium.
SU-E-T-761: TOMOMC, A Monte Carlo-Based Planning VerificationTool for Helical Tomotherapy
Energy Technology Data Exchange (ETDEWEB)
Chibani, O; Ma, C [Fox Chase Cancer Center, Philadelphia, PA (United States)
2015-06-15
Purpose: To present a new Monte Carlo code (TOMOMC) to calculate 3D dose distributions for patients undergoing helical tomotherapy treatments. TOMOMC performs CT-based dose calculations using the actual dynamic variables of the machine (couch motion, gantry rotation, and MLC sequences). Methods: TOMOMC is based on the GEPTS (Gamma Electron and Positron Transport System) general-purpose Monte Carlo system (Chibani and Li, Med. Phys. 29, 2002, 835). First, beam models for the Hi-Art TomoTherapy machine were developed for the different beam widths (1, 2.5 and 5 cm). The beam model accounts for the exact geometry and composition of the different components of the linac head (target, primary collimator, jaws and MLCs). The beam models were benchmarked by comparing calculated PDDs and lateral/transversal dose profiles with ionization chamber measurements in water. See figures 1–3. The MLC model was tuned in such a way that the tongue-and-groove effect and inter-leaf and intra-leaf transmission are modeled correctly. See figure 4. Results: By simulating the exact patient anatomy and the actual treatment delivery conditions (couch motion, gantry rotation and MLC sinogram), TOMOMC is able to calculate the 3D patient dose distribution, which is in principle more accurate than that from the treatment planning system (TPS) since it relies on the Monte Carlo method (the gold standard). Dose-volume parameters based on the Monte Carlo dose distribution can also be compared to those produced by the TPS. The attached figures show isodose lines for a H&N patient calculated by TOMOMC (transverse and sagittal views). Analysis of the differences between TOMOMC and the TPS is ongoing work for different anatomic sites. Conclusion: A new Monte Carlo code (TOMOMC) was developed for tomotherapy patient-specific QA. The next step in this project is implementing GPU computing to speed up the Monte Carlo simulation and make Monte Carlo-based treatment verification a practical solution.
SCALE Continuous-Energy Monte Carlo Depletion with Parallel KENO in TRITON
Energy Technology Data Exchange (ETDEWEB)
Goluoglu, Sedat [ORNL; Bekar, Kursat B [ORNL; Wiarda, Dorothea [ORNL
2012-01-01
The TRITON sequence of the SCALE code system is a powerful and robust tool for performing multigroup (MG) reactor physics analysis using either the 2-D deterministic solver NEWT or the 3-D Monte Carlo transport code KENO. However, as with all MG codes, the accuracy of the results depends on the accuracy of the MG cross sections that are generated and/or used. While SCALE resonance self-shielding modules provide rigorous resonance self-shielding, they are based on 1-D models and therefore 2-D or 3-D effects such as heterogeneity of the lattice structures may render final MG cross sections inaccurate. Another potential drawback to MG Monte Carlo depletion is the need to perform resonance self-shielding calculations at each depletion step for each fuel segment that is being depleted. The CPU time and memory required for self-shielding calculations can often eclipse the resources needed for the Monte Carlo transport. This summary presents the results of the new continuous-energy (CE) calculation mode in TRITON. With the new capability, accurate reactor physics analyses can be performed for all types of systems using the SCALE Monte Carlo code KENO as the CE transport solver. In addition, transport calculations can be performed in parallel mode on multiple processors.
Monte Carlo integration on GPU
Kanzaki, J.
2010-01-01
We use a graphics processing unit (GPU) for fast computation of Monte Carlo integrations. Two widely used Monte Carlo integration programs, VEGAS and BASES, are parallelized on the GPU. Using $W^{+}$ plus multi-gluon production processes at the LHC, we test the integrated cross sections and execution times of the programs in FORTRAN and C on the CPU and of those on the GPU. The integrated results agree with each other within statistical errors. Programs on the GPU run about 50 times faster than those in C...
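Independent of the GPU aspect, the core of such programs is the plain Monte Carlo estimate of an integral together with its statistical error (a generic sketch of my own, not VEGAS or BASES):

```python
import math, random

def mc_integrate(f, a, b, n, rng):
    """Plain Monte Carlo estimate of the integral of f over [a, b],
    returned together with its one-sigma statistical error."""
    vals = [f(a + (b - a) * rng.random()) for _ in range(n)]
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / (n - 1)
    return (b - a) * mean, (b - a) * math.sqrt(var / n)

rng = random.Random(5)
est, err = mc_integrate(math.sin, 0.0, math.pi, 100000, rng)
print(round(est, 3), "+/-", round(err, 4))  # exact value of the integral is 2
```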
Multilevel sequential Monte Carlo samplers
Beskos, Alexandros
2016-08-29
In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level $h_L$. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels $\infty > h_0 > h_1 > \cdots > h_L$. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context, relative to exact sampling and Monte Carlo for the distribution at the finest level $h_L$. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.
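The telescoping identity at the heart of MLMC can be sketched with a toy hierarchy (my own example: truncated Taylor approximations of exp standing in for PDE solves at step sizes h_l; the sample counts are arbitrary). The key points are that each correction term uses coupled samples at the two adjacent levels, and that fewer samples are needed at the finer, more expensive levels.

```python
import math, random

def p_level(x, level):
    """Level-l 'solver': Taylor series of exp(x) truncated after
    2**level + 1 terms, standing in for a PDE solve at step size h_l."""
    s, t = 0.0, 1.0
    for k in range(1, 2 ** level + 2):
        s += t
        t *= x / k
    return s

def mlmc_estimate(max_level, n0, rng):
    """Telescoping MLMC estimate of E[exp(U)], U ~ Uniform(0, 1):
    E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}], where each correction
    uses coupled samples and the sample count halves per level."""
    total = 0.0
    for l in range(max_level + 1):
        n = max(n0 // 2 ** l, 10)
        acc = 0.0
        for _ in range(n):
            x = rng.random()                 # same x couples the two levels
            acc += p_level(x, l) - (p_level(x, l - 1) if l > 0 else 0.0)
        total += acc / n
    return total

rng = random.Random(6)
est = mlmc_estimate(4, 40000, rng)
print(round(est, 3))  # exact answer is e - 1, about 1.718
```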
Energy Technology Data Exchange (ETDEWEB)
Bankovic, A., E-mail: ana.bankovic@gmail.com [Institute of Physics, University of Belgrade, Pregrevica 118, 11080 Belgrade (Serbia); Dujko, S. [Institute of Physics, University of Belgrade, Pregrevica 118, 11080 Belgrade (Serbia); Centrum Wiskunde and Informatica (CWI), P.O. Box 94079, 1090 GB Amsterdam (Netherlands); ARC Centre for Antimatter-Matter Studies, School of Engineering and Physical Sciences, James Cook University, Townsville, QLD 4810 (Australia); White, R.D. [ARC Centre for Antimatter-Matter Studies, School of Engineering and Physical Sciences, James Cook University, Townsville, QLD 4810 (Australia); Buckman, S.J. [ARC Centre for Antimatter-Matter Studies, Australian National University, Canberra, ACT 0200 (Australia); Petrovic, Z.Lj. [Institute of Physics, University of Belgrade, Pregrevica 118, 11080 Belgrade (Serbia)
2012-05-15
This work reports on a new series of calculations of positron transport properties in molecular hydrogen under the influence of a spatially homogeneous electric field. Calculations are performed using a Monte Carlo simulation technique and a multi-term theory for solving the Boltzmann equation. Values and general trends of the mean energy, drift velocity and diffusion coefficients as a function of the reduced electric field E/n{sub 0} are reported here. Emphasis is placed on the explicit and implicit effects of positronium (Ps) formation on the drift velocity and diffusion coefficients. Two important phenomena arise: first, for certain regions of E/n{sub 0} the bulk and flux components of the drift velocity and longitudinal diffusion coefficient are markedly different, both qualitatively and quantitatively. Second, and contrary to previous experience in electron swarm physics, there is a negative differential conductivity (NDC) effect in the bulk drift velocity component, with no indication of any NDC for the flux component. In order to understand this atypical manifestation of the drift and diffusion of positrons in H{sub 2} under the influence of an electric field, the spatially dependent positron transport properties, such as the number of positrons, average energy and velocity, and the spatially resolved rate for Ps formation, are calculated using a Monte Carlo simulation technique. The spatial variation of the positron average energy and the extreme skewing of the spatial profile of the positron swarm are shown to play a central role in understanding the phenomena.
Equilibrium Statistics: Monte Carlo Methods
Kröger, Martin
Monte Carlo methods use random numbers, or ‘random’ sequences, to sample from a known shape of a distribution, or to extract a distribution by other means, and, in the context of this book, to (i) generate representative equilibrated samples prior to being subjected to external fields, or (ii) evaluate high-dimensional integrals. Recipes for both topics, and some more general methods, are summarized in this chapter. It is important to realize that Monte Carlo should be as artificial as possible to be efficient and elegant. Advanced Monte Carlo ‘moves’, required to optimize the speed of algorithms for a particular problem at hand, are outside the scope of this brief introduction. One particular modern example is the wavelet-accelerated MC sampling of polymer chains [406].
Monte Carlo simulation for the transport beamline
Energy Technology Data Exchange (ETDEWEB)
Romano, F.; Cuttone, G.; Jia, S. B.; Varisano, A. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania (Italy); Attili, A.; Marchetto, F.; Russo, G. [INFN, Sezione di Torino, Via P.Giuria, 1 10125 Torino (Italy); Cirrone, G. A. P.; Schillaci, F.; Scuderi, V. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and Institute of Physics Czech Academy of Science, ELI-Beamlines project, Na Slovance 2, Prague (Czech Republic); Carpinelli, M. [INFN Sezione di Cagliari, c/o Dipartimento di Fisica, Università di Cagliari, Cagliari (Italy); Tramontana, A. [INFN, Laboratori Nazionali del Sud, Via Santa Sofia 62, Catania, Italy and Università di Catania, Dipartimento di Fisica e Astronomia, Via S. Sofia 64, Catania (Italy)
2013-07-26
In the framework of the ELIMED project, Monte Carlo (MC) simulations are widely used to study the physical transport of charged particles generated by laser-target interactions and to preliminarily evaluate fluence and dose distributions. An energy selection system and the experimental setup for the TARANIS laser facility in Belfast (UK) have already been simulated with the GEANT4 (GEometry ANd Tracking) MC toolkit. Preliminary results are reported here. Future developments are planned to implement MC-based 3D treatment planning in order to optimize the number of shots and the dose delivery.
MODELING LEACHING OF VIRUSES BY THE MONTE CARLO METHOD
A predictive screening model was developed for fate and transport of viruses in the unsaturated zone. A database of input parameters allowed Monte Carlo analysis with the model. The resulting kernel densities of predicted attenuation during percolation indicated very ...
Monte Carlo Hamiltonian: Linear Potentials
Institute of Scientific and Technical Information of China (English)
LUO Xiang-Qian; LIU Jin-Jiang; HUANG Chun-Qing; JIANG Jun-Qin; Helmut KROGER
2002-01-01
We further study the validity of the Monte Carlo Hamiltonian method. The advantage of the method, in comparison with the standard Monte Carlo Lagrangian approach, is its capability to study the excited states. We consider two quantum mechanical models: a symmetric one, V(x) = |x|/2; and an asymmetric one, V(x) = ∞ for x < 0 and V(x) = x for x ≥ 0. The results for the spectrum, wave functions and thermodynamical observables are in agreement with the analytical or Runge-Kutta calculations.
Proton Upset Monte Carlo Simulation
O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.
2009-01-01
The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.
Buyukada, Musa
2017-02-01
The aim of the present study is to investigate the thermogravimetric behaviour of the co-combustion of hazelnut hull (HH) and coal blends using three approaches: (1) multi non-linear regression (MNLR) modeling based on Box-Behnken design (BBD), (2) optimization based on response surface methodology (RSM), and (3) probabilistic uncertainty analysis based on Monte Carlo simulation as a function of blend ratio, heating rate, and temperature. The response variable was predicted by the best-fit MNLR model with a predicted regression coefficient (R(2)pred) of 99.5%. A blend ratio of 90/10 (HH to coal, wt%), a temperature of 405°C, and a heating rate of 44°C min(-1) were determined as the RSM-optimized conditions, with a mass loss of 87.4%. Validation experiments with three replications were performed to justify the predicted mass-loss percentage, and a mass loss of 87.5%±0.2 was obtained under the RSM-optimized conditions. The probabilistic uncertainty analysis was performed using Monte Carlo simulations.
Institute of Scientific and Technical Information of China (English)
刘大旭
2015-01-01
Traditional carbon-emission price forecasting and rate-of-return analysis mostly rely on econometric methods, which place demanding requirements on both the authenticity and the quantity of the sample data. Because the Monte Carlo algorithm is well suited to small samples and offers high prediction accuracy, it is applied here to carbon price forecasting and return analysis. A mathematical model is built from the Wiener-process equation of motion, and the Monte Carlo algorithm is then used to simulate the evolution of the Wiener process, yielding a forecast of the carbon price. Jarque-Bera statistics and VAR tests show that, for small samples of carbon price data, the improved Monte Carlo algorithm achieves higher prediction accuracy and describes and characterizes carbon prices more effectively. It can support scientific decision-making in carbon futures trading and price setting, and provides a basis for quantitative description and prediction in other areas and directions of economics.
Monte Carlo Simulation for Particle Detectors
Pia, Maria Grazia
2012-01-01
Monte Carlo simulation is an essential component of experimental particle physics in all the phases of its life-cycle: the investigation of the physics reach of detector concepts, the design of facilities and detectors, the development and optimization of data reconstruction software, the data analysis for the production of physics results. This note briefly outlines some research topics related to Monte Carlo simulation, that are relevant to future experimental perspectives in particle physics. The focus is on physics aspects: conceptual progress beyond current particle transport schemes, the incorporation of materials science knowledge relevant to novel detection technologies, functionality to model radiation damage, the capability for multi-scale simulation, quantitative validation and uncertainty quantification to determine the predictive power of simulation. The R&D on simulation for future detectors would profit from cooperation within various components of the particle physics community, and synerg...
Gukelberger, Jan; Hafermann, Hartmut
2016-01-01
The dual-fermion approach provides a formally exact prescription for calculating properties of a correlated electron system in terms of a diagrammatic expansion around dynamical mean-field theory (DMFT). It can address the full range of interactions, the lowest order theory is asymptotically exact in both the weak- and strong-coupling limits, and the technique naturally incorporates long-range correlations beyond the reach of current cluster extensions to DMFT. Most practical implementations, however, neglect higher-order interaction vertices beyond two-particle scattering in the dual effective action and further truncate the diagrammatic expansion in the two-particle scattering vertex to a leading-order or ladder-type approximation. In this work we compute the dual-fermion expansion for the Hubbard model including all diagram topologies with two-particle interactions to high orders by means of a stochastic diagrammatic Monte Carlo algorithm. We use benchmarking against numerically exact Diagrammatic Determin...
He, Lijuan; Li, Weimin; Chen, Zhi; Chen, Yukai; Ren, Guangyi
2014-01-01
The 200-MeV electron linac of the National Synchrotron Radiation Laboratory (NSRL) located in Hefei is one of the earliest high-energy electron linear accelerators in China. The electrons are accelerated to 200 MeV by five acceleration tubes and are collimated by scrapers. The scraper aperture is smaller than the acceleration tube one, so some electrons hit the materials when passing through them. These lost electrons cause induced radioactivity mainly due to bremsstrahlung and photonuclear reaction. This paper describes a study of induced radioactivity for the NSRL Linac using FLUKA simulations and gamma-spectroscopy. The measurements showed that electrons were lost mainly at the scraper. So the induced radioactivity of the NSRL Linac is mainly produced here. The radionuclide types were simulated using the FLUKA Monte Carlo code and the results were compared against measurements made with a High Purity Germanium (HPGe) gamma spectrometer. The NSRL linac had been retired because of upgrading last year. The re...
Energy Technology Data Exchange (ETDEWEB)
Jang, Dong Gun [Dept. of Nuclear Medicine, Dongnam Institute of Radiological and Medical Sciences Cancer Center, Pusan (Korea, Republic of); Kang, SeSik; Kim, Jung Hoon; Kim, Chang Soo [Dept. of Radiological Science, College of Health Sciences, Catholic University, Pusan (Korea, Republic of)
2015-12-15
Workers in nuclear medicine perform various tasks such as production, distribution, preparation and injection of radioisotopes. These processes can expose workers' hands to high radiation doses. The purpose of this study was to investigate the shielding effect for γ-rays of 140 and 511 keV by using Monte Carlo simulation. For 140 keV γ-rays, shielding was effective regardless of lead thickness. For 511 keV γ-rays, however, shielding was effective only at thicknesses greater than 1.1 mm; below 1.1 mm, secondary scattered radiation made shielding ineffective and the exposure dose actually increased. Consequently, both the energy of the radionuclide and the thickness of the shielding material should be considered to reduce radiation exposure.
Monte Carlo Particle Lists: MCPL
Kittelmann, Thomas; Knudsen, Erik B; Willendrup, Peter; Cai, Xiao Xiao; Kanaki, Kalliopi
2016-01-01
A binary format with lists of particle state information, for interchanging particles between various Monte Carlo simulation applications, is presented. Portable C code for file manipulation is made available to the scientific community, along with converters and plugins for several popular simulation packages.
Monte-Carlo simulation-based statistical modeling
Chen, John
2017-01-01
This book brings together expert researchers engaged in Monte-Carlo simulation-based statistical modeling, offering them a forum to present and discuss recent issues in methodological development as well as public health applications. It is divided into three parts, with the first providing an overview of Monte-Carlo techniques, the second focusing on missing data Monte-Carlo methods, and the third addressing Bayesian and general statistical modeling using Monte-Carlo simulations. The data and computer programs used here will also be made publicly available, allowing readers to replicate the model development and data analysis presented in each chapter, and to readily apply them in their own research. Featuring highly topical content, the book has the potential to impact model development and data analyses across a wide spectrum of fields, and to spark further research in this direction.
Applications of Monte Carlo Methods in Calculus.
Gordon, Sheldon P.; Gordon, Florence S.
1990-01-01
Discusses the application of probabilistic ideas, especially Monte Carlo simulation, to calculus. Describes some applications using the Monte Carlo method: Riemann sums; maximizing and minimizing a function; mean value theorems; and testing conjectures. (YP)
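The Riemann-sum application mentioned above can be sketched as a short Monte Carlo integrator (an illustrative example, not from the article): average the integrand at uniform random points and scale by the interval length.

```python
import random

def mc_integrate(f, a, b, n, seed=0):
    """Monte Carlo estimate of the integral of f over [a, b]:
    average f at n uniform random points, then scale by (b - a)."""
    rng = random.Random(seed)
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n

# Example: the integral of x^2 over [0, 1] is exactly 1/3.
estimate = mc_integrate(lambda x: x * x, 0.0, 1.0, 100000)
```

The error of such an estimate shrinks like 1/sqrt(n), which is what makes it a useful classroom contrast with deterministic Riemann sums.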
SKIRT: the design of a suite of input models for Monte Carlo radiative transfer simulations
Baes, Maarten
2015-01-01
The Monte Carlo method is the most popular technique to perform radiative transfer simulations in a general 3D geometry. The algorithms behind and acceleration techniques for Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. By contrast, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can...
Energy Technology Data Exchange (ETDEWEB)
Bottaini, C. [Hercules Laboratory, University of Évora, Palacio do Vimioso, Largo Marquês de Marialva 8, 7000-809 Évora (Portugal); Mirão, J. [Hercules Laboratory, University of Évora, Palacio do Vimioso, Largo Marquês de Marialva 8, 7000-809 Évora (Portugal); Évora Geophysics Centre, Rua Romão Ramalho 59, 7000 Évora (Portugal); Figuereido, M. [Archaeologist — Monte da Capelinha, Apartado 54, 7005, São Miguel de Machede, Évora (Portugal); Candeias, A. [Hercules Laboratory, University of Évora, Palacio do Vimioso, Largo Marquês de Marialva 8, 7000-809 Évora (Portugal); Évora Chemistry Centre, Rua Romão Ramalho 59, 7000 Évora (Portugal); Brunetti, A. [Department of Political Science and Communication, University of Sassari, Via Piandanna 2, 07100 Sassari (Italy); Schiavon, N., E-mail: schiavon@uevora.pt [Hercules Laboratory, University of Évora, Palacio do Vimioso, Largo Marquês de Marialva 8, 7000-809 Évora (Portugal); Évora Geophysics Centre, Rua Romão Ramalho 59, 7000 Évora (Portugal)
2015-01-01
Energy dispersive X-ray fluorescence (EDXRF) is a well-known technique for non-destructive and in situ analysis of archaeological artifacts, in terms of both qualitative and quantitative elemental composition, because of its rapidity and non-destructiveness. In this study EDXRF and realistic Monte Carlo simulation using the X-ray Monte Carlo (XRMC) code package have been combined to characterize a Cu-based bowl from the Iron Age burial of Fareleira 3 (Southern Portugal). The artifact displays a multilayered structure made up of three distinct layers: a) alloy substrate; b) green oxidized corrosion patina; and c) brownish carbonate soil-derived crust. To assess the reliability of Monte Carlo simulation in reproducing the composition of the bulk metal of the object without resorting to potentially damaging removal of the patina and crust, portable EDXRF analysis was performed on cleaned and patina/crust-coated areas of the artifact. The patina has been characterized by micro X-ray diffractometry (μXRD) and Back-Scattered Scanning Electron Microscopy + Energy Dispersive Spectroscopy (BSEM + EDS). Results indicate that the EDXRF/Monte Carlo protocol is well suited when a two-layered model is considered, whereas in areas where the patina + crust surface coating is too thick, X-rays from the alloy substrate are not able to exit the sample. - Highlights: • EDXRF/Monte Carlo simulation is used to characterize an archaeological alloy. • EDXRF analysis was performed on cleaned and patina-coated areas of the artifact. • The EDXRF/Monte Carlo protocol is well suited when a two-layered model is considered. • When the patina is too thick, X-rays from the substrate are unable to exit the sample.
2015-01-01
We present a sophisticated likelihood reconstruction algorithm for shower-image analysis of imaging Cherenkov telescopes. The reconstruction algorithm is based on the comparison of the camera pixel amplitudes with the predictions from a Monte Carlo based model. Shower parameters are determined by a maximisation of a likelihood function. Maximisation of the likelihood as a function of shower fit parameters is performed using a numerical non-linear optimisation technique. A related reconstruction technique has already been developed by the CAT and the H.E.S.S. experiments, and provides a more precise direction and energy reconstruction of the photon induced shower compared to the second moment of the camera image analysis. Examples are shown of the performance of the analysis on simulated gamma-ray data from the VERITAS array.
Chen, Yanping; Chen, Yisha; Yan, Huangping; Wang, Xiaoling
2017-01-01
Early detection of knee osteoarthritis (KOA) is meaningful to delay or prevent the onset of osteoarthritis. In consideration of structural complexity of knee joint, position of light incidence and detector appears to be extremely important in optical inspection. In this paper, the propagation of 780-nm near infrared photons in three-dimensional knee joint model is simulated by Monte Carlo (MC) method. Six light incident locations are chosen in total to analyze the influence of incident and detecting location on the number of detected signal photons and signal to noise ratio (SNR). Firstly, a three-dimensional photon propagation model of knee joint is reconstructed based on CT images. Then, MC simulation is performed to study the propagation of photons in three-dimensional knee joint model. Photons which finally migrate out of knee joint surface are numerically analyzed. By analyzing the number of signal photons and SNR from the six given incident locations, the optimal incident and detecting location is defined. Finally, a series of phantom experiments are conducted to verify the simulation results. According to the simulation and phantom experiments results, the best incident location is near the right side of meniscus at the rear end of left knee joint and the detector is supposed to be set near patella, correspondingly.
A Monte Carlo Analysis of Weight Data from UF₆ Cylinder Feed and Withdrawal Stations
Energy Technology Data Exchange (ETDEWEB)
Garner, James R [ORNL; Whitaker, J Michael [ORNL
2015-01-01
As nuclear facilities handling uranium hexafluoride (UF₆) cylinders (e.g., UF₆ production, enrichment, and fuel fabrication) increase in number and throughput, more automated safeguards measures will likely be needed to enable the International Atomic Energy Agency (IAEA) to achieve its safeguards objectives in a fiscally constrained environment. Monitoring the process data from the load cells built into the cylinder feed and withdrawal (F/W) stations (i.e., cylinder weight data) can significantly increase the IAEA’s ability to efficiently achieve the fundamental safeguards task of confirming operations as declared (i.e., no undeclared activities). Researchers at Oak Ridge National Laboratory, Los Alamos National Laboratory, the Joint Research Centre (in Ispra, Italy), and the University of Glasgow are investigating how this weight data can be used for IAEA safeguards purposes while fully protecting the operator’s proprietary and sensitive information related to operations. A key question that must be resolved is: what frequency of recording data from the process F/W stations is necessary to achieve safeguards objectives? This paper summarizes Monte Carlo simulations of typical feed, product, and tails withdrawal cycles and evaluates longer sampling intervals to determine the expected errors caused by low-frequency sampling and their impact on material balance calculations.
Energy Technology Data Exchange (ETDEWEB)
Han, Gi Young; Seo, Bo Kyun [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of); Kim, Do Hyun; Shin, Chang Ho; Kim, Song Hyun [Dept. of Nuclear Engineering, Hanyang University, Seoul (Korea, Republic of); Sun, Gwang Min [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2016-06-15
In analyzing residual radiation, researchers generally use a two-step Monte Carlo (MC) simulation. The first step (MC1) simulates neutron transport, and the second step (MC2) transports the decay photons emitted from the activated materials. In this process, the stochastic uncertainty estimated by MC2 appears only as a final result, but it is underestimated because the stochastic error generated in MC1 cannot be directly included in MC2. Hence, estimating the true stochastic uncertainty requires quantifying the degree to which the stochastic error in MC1 propagates. The brute-force technique is a straightforward method to estimate the true uncertainty; however, it is a costly way to obtain reliable results. Another method, called the adjoint-based method, can reduce the computational time needed to evaluate the true uncertainty, but it has limitations. To address those limitations, we propose a new strategy to estimate uncertainty propagation without any additional calculations in two-step MC simulations. To verify the proposed method, we applied it to activation benchmark problems and compared the results with those of previous methods. The results show that the proposed method increases applicability and user-friendliness while preserving accuracy in quantifying uncertainty propagation. We expect that the proposed strategy will contribute to efficient and accurate two-step MC calculations.
Density matrix quantum Monte Carlo
Blunt, N S; Spencer, J S; Foulkes, W M C
2013-01-01
This paper describes a quantum Monte Carlo method capable of sampling the full density matrix of a many-particle system, thus granting access to arbitrary reduced density matrices and allowing expectation values of complicated non-local operators to be evaluated easily. The direct sampling of the density matrix also raises the possibility of calculating previously inaccessible entanglement measures. The algorithm closely resembles the recently introduced full configuration interaction quantum Monte Carlo method, but works all the way from infinite to zero temperature. We explain the theory underlying the method, describe the algorithm, and introduce an importance-sampling procedure to improve the stochastic efficiency. To demonstrate the potential of our approach, the energy and staggered magnetization of the isotropic antiferromagnetic Heisenberg model on small lattices and the concurrence of one-dimensional spin rings are compared to exact or well-established results. Finally, the nature of the sign problem...
Efficient kinetic Monte Carlo simulation
Schulze, Tim P.
2008-02-01
This paper concerns kinetic Monte Carlo (KMC) algorithms that have a single-event execution time independent of the system size. Two methods are presented—one that combines the use of inverted-list data structures with rejection Monte Carlo and a second that combines inverted lists with the Marsaglia-Norman-Cannon algorithm. The resulting algorithms apply to models with rates that are determined by the local environment but are otherwise arbitrary, time-dependent and spatially heterogeneous. While especially useful for crystal growth simulation, the algorithms are presented from the point of view that KMC is the numerical task of simulating a single realization of a Markov process, allowing application to a broad range of areas where heterogeneous random walks are the dominant simulation cost.
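A minimal sketch of the rejection idea described above (illustrative only; the flat site list, the uniform rate bound `r_max`, and the fixed rates are assumptions, not the paper's data structures): a site is drawn uniformly and accepted with probability rate/r_max, so each attempt costs O(1) regardless of system size.

```python
import random

def kmc_step(rates, r_max, t, rng):
    """One rejection-based kinetic Monte Carlo step.
    Draw sites uniformly and accept with probability rate/r_max;
    each attempt is O(1) in the number of sites."""
    n = len(rates)
    while True:
        # exponential waiting time between attempted events,
        # with total attempt rate n * r_max
        t += rng.expovariate(n * r_max)
        i = rng.randrange(n)
        if rng.random() < rates[i] / r_max:
            return i, t  # accepted: the event at site i fires at time t

rng = random.Random(1)
rates = [0.2, 1.0, 0.5, 0.8]
site, t = kmc_step(rates, r_max=1.0, t=0.0, rng=rng)
```

The paper's inverted-list machinery serves to keep `r_max` tight and updates cheap; this toy version only shows the accept/reject kernel.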
Adaptive Multilevel Monte Carlo Simulation
Hoel, H
2011-08-23
This work generalizes the multilevel forward Euler Monte Carlo method introduced by Giles (Michael Giles. Oper. Res. 56(3):607–617, 2008) for the approximation of expected values depending on the solution to an Itô stochastic differential equation. Giles proposed and analyzed a forward Euler multilevel Monte Carlo method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a standard, single-level forward Euler Monte Carlo method. This work introduces an adaptive hierarchy of non-uniform time discretizations, generated by an adaptive algorithm introduced in (Anna Dzougoutov, Raúl Tempone, et al. Adaptive Monte Carlo algorithms for stopped diffusion. In Multiscale Methods in Science and Engineering, volume 44 of Lect. Notes Comput. Sci. Eng., pages 59–88. Springer, Berlin, 2005; Kyoung-Sook Moon et al. Stoch. Anal. Appl. 23(3):511–558, 2005; Kyoung-Sook Moon et al. An adaptive algorithm for ordinary, stochastic and partial differential equations. In Recent Advances in Adaptive Computation, volume 383 of Contemp. Math., pages 325–343. Amer. Math. Soc., Providence, RI, 2005). This form of the adaptive algorithm generates stochastic, path-dependent time steps and is based on a posteriori error expansions first developed in (Anders Szepessy et al. Comm. Pure Appl. Math. 54(10):1169–1214, 2001). Our numerical results for a stopped diffusion problem exhibit savings in the computational cost to achieve an accuracy of O(TOL): from O(TOL⁻³) for a single-level version of the adaptive algorithm to O((TOL⁻¹ log(TOL))²).
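The multilevel idea can be sketched for a geometric Brownian motion with uniform time steps (illustrative parameters; this is the basic Giles construction, not the adaptive, path-dependent variant the abstract describes): coarse and fine Euler paths share the same Brownian increments, and higher levels only estimate small corrections.

```python
import math
import random

def euler_pair(s0, mu, sigma, T, nf, rng):
    """One coupled coarse/fine pair of Euler paths for dS = mu*S dt + sigma*S dW.
    The coarse path (nf/2 steps) reuses the fine path's Brownian increments."""
    dt_f = T / nf
    sf = sc = s0
    dw_c = 0.0
    for k in range(nf):
        dw = rng.gauss(0.0, math.sqrt(dt_f))
        sf += mu * sf * dt_f + sigma * sf * dw
        dw_c += dw
        if k % 2 == 1:  # one coarse step consumes two fine increments
            sc += mu * sc * (2 * dt_f) + sigma * sc * dw_c
            dw_c = 0.0
    return sf, sc

def mlmc_estimate(s0, mu, sigma, T, levels, samples, seed=3):
    """Multilevel Monte Carlo estimate of E[S_T]: level 0 samples the
    coarsest discretization; each higher level adds E[fine - coarse]."""
    rng = random.Random(seed)
    est = 0.0
    for level in range(levels):
        nf = 2 ** (level + 1)  # fine path at this level
        acc = 0.0
        for _ in range(samples[level]):
            sf, sc = euler_pair(s0, mu, sigma, T, nf, rng)
            acc += sf if level == 0 else sf - sc
        est += acc / samples[level]
    return est

approx = mlmc_estimate(1.0, 0.05, 0.2, 1.0, levels=4,
                       samples=[20000, 5000, 2000, 1000])
# the exact expectation is exp(mu * T)
```

Because the correction terms have small variance, far fewer samples are needed on the expensive fine levels, which is the source of the cost savings quoted in the abstract.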
TRIPOLI-4® Monte Carlo code ITER A-lite neutronic model validation
Energy Technology Data Exchange (ETDEWEB)
Jaboulay, Jean-Charles, E-mail: jean-charles.jaboulay@cea.fr [CEA, DEN, Saclay, DM2S, SERMA, F-91191 Gif-sur-Yvette (France); Cayla, Pierre-Yves; Fausser, Clement [MILLENNIUM, 16 Av du Québec Silic 628, F-91945 Villebon sur Yvette (France); Damian, Frederic; Lee, Yi-Kang; Puma, Antonella Li; Trama, Jean-Christophe [CEA, DEN, Saclay, DM2S, SERMA, F-91191 Gif-sur-Yvette (France)
2014-10-15
3D Monte Carlo transport codes are extensively used in neutronic analysis, especially in radiation protection and shielding analyses for fission and fusion reactors. TRIPOLI-4® is a Monte Carlo code developed by CEA. The aim of this paper is to show its capability to model a large-scale fusion reactor with a complex neutron source and geometry. A benchmark between MCNP5 and TRIPOLI-4® on the ITER A-lite model was carried out; neutron flux, nuclear heating in the blankets and tritium production rate in the European TBMs were evaluated and compared. The methodology to build the TRIPOLI-4® A-lite model is based on MCAM and the MCNP A-lite model. Simplified TBMs, from KIT, were integrated in the equatorial port. A good agreement between MCNP and TRIPOLI-4® is shown; discrepancies are mainly within the statistical error.
Energy Technology Data Exchange (ETDEWEB)
Hernandez A, P. L.; Medina C, D.; Rodriguez I, J. L.; Salas L, M. A.; Vega C, H. R., E-mail: pabloyae_2@hotmail.com [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Cipres No. 10, Fracc. La Penuela, 98068 Zacatecas, Zac. (Mexico)
2015-10-15
The problems associated with insecurity and terrorism have forced the design of systems for detecting nuclear materials, drugs and explosives that are installed on roads and at ports and airports. Organic materials are composed of C, H, O and N; explosive materials are manufactured similarly and can be distinguished by the concentrations of these elements. Their elemental composition, particularly the concentrations of hydrogen and oxygen, distinguishes them from other organic substances. When these materials are irradiated with neutrons, (n, γ) nuclear reactions are produced, and the emitted photons are prompt gamma rays whose energy is characteristic of each element and whose abundance allows estimating its concentration. The aim of this study was to design, using Monte Carlo methods, a system with a neutron source, a gamma-ray detector and a moderator able to distinguish the presence of RDX and urea. Paraffin, light water, polyethylene and graphite were considered as moderators; HPGe and NaI(Tl) detectors were considered. The design that showed the best performance used the light-water moderator and the HPGe detector, with a ²⁴¹AmBe source. For this design, the values of ambient dose equivalent around the system were calculated. (Author)
Monte Carlo Techniques for Nuclear Systems - Theory Lectures
Energy Technology Data Exchange (ETDEWEB)
Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Methods, Codes, and Applications Group; Univ. of New Mexico, Albuquerque, NM (United States). Nuclear Engineering Dept.
2016-11-29
These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations
Baräo, Fernando; Nakagawa, Masayuki; Távora, Luis; Vaz, Pedro
2001-01-01
This book focusses on the state of the art of Monte Carlo methods in radiation physics and particle transport simulation and applications, the latter involving in particular the use and development of electron-gamma, neutron-gamma and hadronic codes. Besides the basic theory and the methods employed, special attention is paid to algorithm development for modeling, and to the analysis of experiments and measurements in a variety of fields ranging from particle physics to medical physics.
Geometric Monte Carlo and Black Janus Geometries
Bak, Dongsu; Kim, Kyung Kiu; Min, Hyunsoo; Song, Jeong-Pil
2016-01-01
We describe an application of the Monte Carlo method to the Janus deformation of the black brane background. We present numerical results for three- and five-dimensional black Janus geometries with planar and spherical interfaces. In particular, we argue that the 5D geometry with a spherical interface has an application in understanding the finite-temperature bag-like QCD model via the AdS/CFT correspondence. The accuracy and convergence of the algorithm are evaluated with respect to the grid spacing. The systematic errors of the method are determined using an exact solution of 3D black Janus. This numerical approach for solving linear problems is unaffected by the initial guess of a trial solution and can handle an arbitrary geometry under various boundary conditions in the presence of source fields.
Monte Carlo approach to turbulence
Energy Technology Data Exchange (ETDEWEB)
Dueben, P.; Homeier, D.; Muenster, G. [Muenster Univ. (Germany). Inst. fuer Theoretische Physik; Jansen, K. [DESY, Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Mesterhazy, D. [Humboldt Univ., Berlin (Germany). Inst. fuer Physik
2009-11-15
The behavior of the one-dimensional random-force-driven Burgers equation is investigated in the path integral formalism on a discrete space-time lattice. We show that by means of Monte Carlo methods one may evaluate observables, such as structure functions, as ensemble averages over different field realizations. The regularization of shock solutions to the zero-viscosity limit (Hopf-equation) eventually leads to constraints on lattice parameters required for the stability of the simulations. Insight into the formation of localized structures (shocks) and their dynamics is obtained. (orig.)
High Fidelity Imaging Algorithm for the Undique Imaging Monte Carlo Simulator
Directory of Open Access Journals (Sweden)
Tremblay Grégoire
2016-01-01
The Undique imaging Monte Carlo simulator (Undique hereafter) was developed to reproduce the behavior of 3D imaging devices. This paper describes its high fidelity imaging algorithm.
Monte Carlo techniques in radiation therapy
Verhaegen, Frank
2013-01-01
Modern cancer treatment relies on Monte Carlo simulations to help radiotherapists and clinical physicists better understand and compute radiation dose from imaging devices as well as exploit four-dimensional imaging data. With Monte Carlo-based treatment planning tools now available from commercial vendors, a complete transition to Monte Carlo-based dose calculation methods in radiotherapy could likely take place in the next decade. Monte Carlo Techniques in Radiation Therapy explores the use of Monte Carlo methods for modeling various features of internal and external radiation sources, including light ion beams. The book-the first of its kind-addresses applications of the Monte Carlo particle transport simulation technique in radiation therapy, mainly focusing on external beam radiotherapy and brachytherapy. It presents the mathematical and technical aspects of the methods in particle transport simulations. The book also discusses the modeling of medical linacs and other irradiation devices; issues specific...
Mean field simulation for Monte Carlo integration
Del Moral, Pierre
2013-01-01
In the last three decades, there has been a dramatic increase in the use of interacting particle methods as a powerful tool in real-world applications of Monte Carlo simulation in computational physics, population biology, computer sciences, and statistical machine learning. Ideally suited to parallel and distributed computation, these advanced particle algorithms include nonlinear interacting jump diffusions; quantum, diffusion, and resampled Monte Carlo methods; Feynman-Kac particle models; genetic and evolutionary algorithms; sequential Monte Carlo methods; adaptive and interacting Marko
Approaching Chemical Accuracy with Quantum Monte Carlo
Petruzielo, Frank R.; Toulouse, Julien; Umrigar, C. J.
2012-01-01
A quantum Monte Carlo study of the atomization energies for the G2 set of molecules is presented. Basis size dependence of diffusion Monte Carlo atomization energies is studied with a single determinant Slater-Jastrow trial wavefunction formed from Hartree-Fock orbitals. With the largest basis set, the mean absolute deviation from experimental atomization energies for the G2 set is 3.0 kcal/mol. Optimizing the orbitals within variational Monte Carlo improves the agreem...
Benchmarking of Proton Transport in Super Monte Carlo Simulation Program
Wang, Yongfeng; Li, Gui; Song, Jing; Zheng, Huaqing; Sun, Guangyao; Hao, Lijuan; Wu, Yican
2014-06-01
The Monte Carlo (MC) method has been traditionally applied in nuclear design and analysis due to its capability of dealing with complicated geometries and multi-dimensional physics problems as well as obtaining accurate results. The Super Monte Carlo Simulation Program (SuperMC) is developed by the FDS Team in China for fusion, fission, and other nuclear applications. Simulations of radiation transport, isotope burn-up, material activation, radiation dose, and biological damage can be performed using SuperMC. Complicated geometries and the whole physical process of various types of particles over a broad energy scale can be well handled. Bi-directional automatic conversion between general CAD models and fully formed input files of SuperMC is supported by MCAM, a CAD/image-based automatic modeling program for neutronics and radiation transport simulation. Mixed visualization of dynamic 3D datasets and geometry models is supported by RVIS, a nuclear radiation virtual simulation and assessment system. Continuous-energy cross section data from the hybrid evaluated nuclear data library HENDL are utilized to support simulation. Neutronic fixed-source and criticality calculations for reactors of complex geometry and material distribution, based on the transport of neutrons and photons, were achieved in the former version of SuperMC. Recently, proton transport has also been integrated into SuperMC for energies up to 10 GeV. The physical processes considered for proton transport include electromagnetic and hadronic processes. The electromagnetic processes include ionization, multiple scattering, bremsstrahlung, and pair production. Public evaluated data from HENDL are used in some electromagnetic processes. In hadronic physics, the Bertini intra-nuclear cascade model with excitons, a preequilibrium model, a nucleus explosion model, a fission model, and an evaporation model are incorporated to treat the intermediate energy nuclear
1-D EQUILIBRIUM DISCRETE DIFFUSION MONTE CARLO
Energy Technology Data Exchange (ETDEWEB)
T. EVANS; ET AL
2000-08-01
We present a new hybrid Monte Carlo method for 1-D equilibrium diffusion problems in which the radiation field coexists with matter in local thermodynamic equilibrium. This method, the Equilibrium Discrete Diffusion Monte Carlo (EqDDMC) method, combines Monte Carlo particles with spatially discrete diffusion solutions. We verify the EqDDMC method with computational results from three slab problems. The EqDDMC method represents an incremental step toward applying this hybrid methodology to non-equilibrium diffusion, where it could be simultaneously coupled to Monte Carlo transport.
Monte Carlo Treatment Planning for Advanced Radiotherapy
DEFF Research Database (Denmark)
Cronholm, Rickard
and validation of a Monte Carlo model of a medical linear accelerator (i), converting a CT scan of a patient to a Monte Carlo compliant phantom (ii) and translating the treatment plan parameters (including beam energy, angles of incidence, collimator settings etc) to a Monte Carlo input file (iii). A protocol...... previous algorithms since it uses delineations of structures in order to include and/or exclude certain media in various anatomical regions. This method has the potential to reduce anatomically irrelevant media assignment. In house MATLAB scripts translating the treatment plan parameters to Monte Carlo...
Error in Monte Carlo, quasi-error in Quasi-Monte Carlo
Kleiss, R. H. P.; Lazopoulos, A.
2006-01-01
While the Quasi-Monte Carlo method of numerical integration achieves smaller integration error than standard Monte Carlo, its use in particle physics phenomenology has been hindered by the absence of a reliable way to estimate that error. The standard Monte Carlo error estimator relies on the assumption that the points are generated independently of each other and, therefore, fails to account for the error improvement advertised by the Quasi-Monte Carlo method. We advocate the construction o...
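A minimal sketch of the contrast described above, under illustrative assumptions (integrand x² on [0, 1], a base-2 van der Corput sequence standing in for a generic low-discrepancy sequence): the usual error estimator is derived from exactly the independence assumption that QMC points violate.

```python
import random

def van_der_corput(n, base=2):
    """First n points of the radical-inverse (van der Corput) sequence in [0, 1)."""
    seq = []
    for i in range(1, n + 1):
        x, f, k = 0.0, 1.0 / base, i
        while k > 0:
            x += f * (k % base)
            k //= base
            f /= base
        seq.append(x)
    return seq

def mc_estimate(points, f):
    """Sample mean and the standard, independence-based error estimate."""
    n = len(points)
    vals = [f(x) for x in points]
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / (n - 1)
    return mean, (var / n) ** 0.5

f = lambda x: x * x                      # exact integral over [0, 1] is 1/3
random.seed(0)
n = 4096
mc_mean, mc_err = mc_estimate([random.random() for _ in range(n)], f)
qmc_mean, _ = mc_estimate(van_der_corput(n), f)   # the "error" here is unjustified
```

For this smooth integrand the QMC estimate is typically far closer to 1/3 than the plain MC one, yet the independence-based error formula says nothing valid about it, which is the gap the abstract addresses.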
THE APPLICATION OF MONTE CARLO SIMULATION FOR A DECISION PROBLEM
Directory of Open Access Journals (Sweden)
Çiğdem ALABAŞ
2001-01-01
The ultimate goal of the standard decision tree approach is to calculate the expected value of a selected performance measure. In real-world situations, decision problems become very complex as the uncertainty factors increase. In such cases, decision analysis using the standard decision tree approach is not useful. One way of overcoming this difficulty is Monte Carlo simulation. In this study, a Monte Carlo simulation model is developed for a complex problem and statistical analysis is performed to make the best decision.
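As an illustration of the idea, a hedged sketch (the two decision options, the cost figures, and the demand distribution are invented for this example, not taken from the study): simulation yields not only the expected value a decision tree would give, but also risk measures such as the probability of a loss.

```python
import random

def simulate_profit(option, rng):
    """Profit for one scenario draw (illustrative numbers only)."""
    demand = rng.gauss(1000, 300)              # uncertain demand
    if option == "expand":
        return 50 * demand - 40000             # high fixed cost, high margin
    return 30 * demand - 10000                 # status quo: low cost, low margin

rng = random.Random(42)
n = 20000
results = {opt: [simulate_profit(opt, rng) for _ in range(n)]
           for opt in ("expand", "status_quo")}
for opt, vals in results.items():
    mean = sum(vals) / n
    downside = sum(v < 0 for v in vals) / n    # risk metric a plain tree cannot give
    print(opt, round(mean), round(downside, 3))
```

Here "status_quo" has the higher expected profit and the lower downside risk, so the simulated distribution, not just the expectation, supports the decision.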
Cosmological Markov Chain Monte Carlo simulation with Cmbeasy
Müller, C M
2004-01-01
We introduce a Markov Chain Monte Carlo simulation and data analysis package for the cosmological computation package Cmbeasy. We have taken special care in implementing an adaptive step algorithm for the Markov Chain Monte Carlo in order to improve convergence. Data analysis routines are provided which allow testing models of the Universe against up-to-date measurements of the Cosmic Microwave Background, Supernovae Ia and Large Scale Structure. The observational data are provided with the software for convenient usage. The package is publicly available as part of the Cmbeasy software at www.cmbeasy.org.
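A simplified sketch of an adaptive-step Metropolis sampler (a common heuristic; the actual Cmbeasy implementation may differ): the proposal width is tuned toward a target acceptance rate during burn-in and then frozen, so the retained half of the chain is a valid Markov chain.

```python
import random, math

def adaptive_metropolis(logpost, x0, n_steps, target_acc=0.35, seed=1):
    """Random-walk Metropolis whose step size is adapted toward a target
    acceptance rate during the first half of the run (burn-in), then frozen."""
    rng = random.Random(seed)
    x, step, chain, accepted = x0, 1.0, [], 0
    for i in range(n_steps):
        prop = x + rng.gauss(0.0, step)
        if math.log(rng.random() + 1e-300) < logpost(prop) - logpost(x):
            x, accepted = prop, accepted + 1
        chain.append(x)
        if i < n_steps // 2 and (i + 1) % 100 == 0:   # adapt only during burn-in
            acc = accepted / (i + 1)
            step *= 1.1 if acc > target_acc else 0.9
    return chain[n_steps // 2:]                        # discard burn-in

# Standard normal target: posterior mean should be near 0, variance near 1.
samples = adaptive_metropolis(lambda t: -0.5 * t * t, x0=3.0, n_steps=20000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

Tuning toward a moderate acceptance rate balances proposal boldness against rejection, which is the convergence improvement the abstract alludes to.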
Reyhancan, Iskender Atilla; Ebrahimi, Alborz; Çolak, Üner; Erduran, M. Nizamettin; Angin, Nergis
2017-01-01
A new Monte-Carlo Library Least Square (MCLLS) approach for treating non-linear radiation analysis problems in Neutron Inelastic-scattering and Thermal-capture Analysis (NISTA) was developed. 14 MeV neutrons were produced by a neutron generator via the 3H(2H,n)4He reaction. The prompt gamma ray spectra from bulk samples of seven different materials were measured by a Bismuth Germanate (BGO) gamma detection system. Polyethylene was used as a neutron moderator, along with iron and lead as neutron and gamma-ray shielding, respectively. The gamma detection system was equipped with a list-mode data acquisition system which streams spectroscopy data directly to the computer, event by event. The GEANT4 simulation toolkit was used for generating the single-element libraries of all the elements of interest. These libraries were then used in a Linear Library Least Square (LLLS) approach to fit an unknown experimental sample spectrum with the calculated elemental libraries. GEANT4 simulation results were also used for the selection of the neutron shielding material.
Pineda Rojas, Andrea L.; Venegas, Laura E.; Mazzeo, Nicolás A.
2016-09-01
A simple urban air quality model [MODelo de Dispersión Atmosférica Urbana - Generic Reaction Set (DAUMOD-GRS)] was recently developed. One-hour peak O3 concentrations in the Metropolitan Area of Buenos Aires (MABA) during the summer estimated with the DAUMOD-GRS model have shown values lower than 20 ppb (the regional background concentration) in the urban area and levels greater than 40 ppb in its surroundings. Due to the lack of measurements outside the MABA, these relatively high modelled ozone concentrations constitute the only estimate for the area. In this work, a methodology based on Monte Carlo analysis is implemented to evaluate the uncertainty in these modelled concentrations associated with possible errors in the model input data. Results show that the larger 1-h peak O3 levels in the MABA during the summer present larger uncertainties (up to 47 ppb). On the other hand, multiple linear regression analysis is applied at selected receptors in order to identify the variables explaining most of the obtained variance. Although their relative contributions vary spatially, the uncertainty of the regional background O3 concentration dominates at all the analysed receptors (34.4-97.6%), indicating that its estimate could be improved to enhance the ability of the model to simulate peak O3 concentrations in the MABA.
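A toy sketch of the two-step methodology (Monte Carlo over uncertain inputs, then variance apportionment). The linear "model" and both input distributions are invented stand-ins for DAUMOD-GRS and its inputs; with a linear model the r² shares coincide with the regression-based contributions the paper uses.

```python
import random

def peak_o3(background, emission_factor):
    """Toy stand-in for the dispersion model: peak O3 (ppb) as a linear
    function of two uncertain inputs (illustrative, not DAUMOD-GRS)."""
    return background + 0.5 * emission_factor

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cov(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

rng = random.Random(3)
bg, em, out = [], [], []
for _ in range(20000):                    # Monte Carlo over input errors
    bg.append(rng.gauss(20.0, 5.0))       # regional background O3 (ppb)
    em.append(rng.gauss(40.0, 4.0))       # emission-related input
    out.append(peak_o3(bg[-1], em[-1]))

# Variance share (r^2) explained by each independent input.
share_bg = cov(bg, out) ** 2 / (var(bg) * var(out))
share_em = cov(em, out) ** 2 / (var(em) * var(out))
```

With these made-up numbers the background term dominates the output variance, mirroring the paper's finding that the regional background uncertainty dominates at all receptors.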
Van Der Perk, Marcel; Burema, Jiske; Vandenhove, Hildegarde; Goor, François; Timofeyev, Sergei
2004-09-01
A Monte Carlo analysis of two sequential GIS-embedded submodels, which evaluate the economic feasibility of short rotation coppice (SRC) production and energy conversion in areas contaminated by Chernobyl-derived (137)Cs, was performed to allow for variability of environmental conditions that was not contained in the spatial model inputs. The results from this analysis were compared to the results from the deterministic model presented in part I of this paper. It was concluded that, although the variability in the model results due to within-gridcell variability of the model inputs was considerable, the prediction of the areas where SRC and energy conversion is potentially profitable was robust. If the additional variability in the model input that is not contained in the input maps is also taken into account, the SRC production and energy conversion appears to be potentially profitable at more locations for both the small scale and large scale production scenarios than the model predicted using the deterministic model.
Communication Network Reliability Analysis Using the Monte Carlo Method: Status and Prospects
Institute of Scientific and Technical Information of China (English)
王建秋
2011-01-01
Communication networks, power transmission networks, integrated circuit networks, and transport networks now pervade every aspect of social life, and their reliability bears directly on the national economy and people's livelihood, so research on their reliability is of great importance. Because of the complexity of communication networks, analyzing their system reliability is quite difficult; this paper therefore applies the Monte Carlo method to an in-depth study of communication network system reliability analysis.
Bonavita, M; Desidera, S; Gratton, R; Janson, M; Beuzit, J L; Kasper, M; Mordasini, C
2011-01-01
The high number of planet discoveries made in recent years provides a good sample for statistical analysis, leading to some clues on the distributions of planet parameters, such as masses and periods, at least in close proximity to the host star. We will likely need to wait for the extremely large telescopes (ELTs) to have an overall view of extrasolar planetary systems. In this context it would be useful to have a tool that can be used for the interpretation of present results, and also to predict the outcomes of future instruments. For this reason we built MESS: a Monte Carlo simulation code which uses either the results of the statistical analysis of the properties of discovered planets, or the results of planet formation theories, to build synthetic planet populations fully described in terms of frequency, orbital elements and physical properties. They can then be used to either test the consistency of their properties with the observed population of planets given different detectio...
Kusoglu Sarikaya, C.; Rafatov, I.; Kudryavtsev, A. A.
2016-06-01
The work deals with the Particle in Cell/Monte Carlo Collision (PIC/MCC) analysis of the problem of detection and identification of impurities in the nonlocal plasma of gas discharge using the Plasma Electron Spectroscopy (PLES) method. For this purpose, 1d3v PIC/MCC code for numerical simulation of glow discharge with nonlocal electron energy distribution function is developed. The elastic, excitation, and ionization collisions between electron-neutral pairs and isotropic scattering and charge exchange collisions between ion-neutral pairs and Penning ionizations are taken into account. Applicability of the numerical code is verified under the Radio-Frequency capacitively coupled discharge conditions. The efficiency of the code is increased by its parallelization using Open Message Passing Interface. As a demonstration of the PLES method, parallel PIC/MCC code is applied to the direct current glow discharge in helium doped with a small amount of argon. Numerical results are consistent with the theoretical analysis of formation of nonlocal EEDF and existing experimental data.
An introduction to Monte Carlo methods
Walter, J. -C.; Barkema, G. T.
2015-01-01
Monte Carlo simulations are methods for simulating statistical systems. The aim is to generate a representative ensemble of configurations to access thermodynamical quantities without the need to solve the system analytically or to perform an exact enumeration. The main principles of Monte Carlo sim
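The main principle can be shown in a few lines: Metropolis sampling generates a representative ensemble from the Boltzmann weight, so a thermodynamic average is obtained without solving the system analytically. The harmonic oscillator is chosen here because the exact answer, ⟨x²⟩ = T (with k_B = 1), is known.

```python
import random, math

def sample_boltzmann(energy, temperature, n_samples, step=0.5, seed=0):
    """Metropolis sampling of configurations x with weight exp(-E(x)/T):
    a representative ensemble rather than an exact enumeration."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n_samples):
        xp = x + rng.uniform(-step, step)
        # Accept with probability min(1, exp(-dE/T)).
        if rng.random() < math.exp(-(energy(xp) - energy(x)) / temperature):
            x = xp
        out.append(x)
    return out

# Harmonic oscillator E(x) = x^2 / 2; equipartition gives <x^2> = T exactly.
T = 0.7
xs = sample_boltzmann(lambda x: 0.5 * x * x, T, n_samples=200000)
x2 = sum(x * x for x in xs) / len(xs)   # should be close to 0.7
```

The estimate converges to the exact thermodynamic average as the ensemble grows, which is precisely the access to thermodynamical quantities the abstract describes.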
Research on GPU Acceleration for Monte Carlo Criticality Calculation
Xu, Qi; Yu, Ganglin; Wang, Kan
2014-06-01
The Monte Carlo neutron transport method can be naturally parallelized on multi-core architectures due to the independence of particle histories during the simulation. The GPU+CPU heterogeneous parallel mode has become an increasingly popular mode of parallelism in the field of scientific supercomputing. Thus, this work focuses on the GPU acceleration method for the Monte Carlo criticality simulation, as well as the computational efficiency that GPUs can bring. The "neutron transport step" is introduced to increase the GPU thread occupancy. In order to test the sensitivity to the MC code's complexity, a 1D one-group code and a 3D multi-group general purpose code are respectively transplanted to GPUs, and the acceleration effects are compared. The result of numerical experiments shows considerable acceleration effect of the "neutron transport step" strategy. However, the performance comparison between the 1D code and the 3D code indicates the poor scalability of MC codes on GPUs.
Challenges of Monte Carlo Transport
Energy Technology Data Exchange (ETDEWEB)
Long, Alex Roberts [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-10
These are slides from a presentation for Parallel Summer School at Los Alamos National Laboratory. Solving discretized partial differential equations (PDEs) of interest can require a large number of computations. We can identify concurrency to allow parallel solution of discrete PDEs. Simulated particle histories can be used to solve the Boltzmann transport equation. Particle histories are independent in neutral particle transport, making them amenable to parallel computation. Physical parameters and method type determine the data dependencies of particle histories. Data requirements shape parallel algorithms for Monte Carlo. Then, Parallel Computational Physics and Parallel Monte Carlo are discussed and, finally, the results are given. The mesh passing method greatly simplifies the IMC implementation and allows simple load-balancing. Using MPI windows and passive, one-sided RMA further simplifies the implementation by removing target synchronization. The author is very interested in implementations of PGAS that may allow further optimization for one-sided, read-only memory access (e.g. Open SHMEM). The MPICH_RMA_OVER_DMAPP option and library is required to make one-sided messaging scale on Trinitite; Moonlight scales poorly. Interconnect-specific libraries or functions are likely necessary to ensure performance. BRANSON has been used to directly compare the current standard method to a proposed method on idealized problems. The mesh passing algorithm performs well on problems that are designed to show the scalability of the particle passing method. BRANSON can now run load-imbalanced, dynamic problems. Potential avenues of improvement in the mesh passing algorithm will be implemented and explored. A suite of test problems that stress DD methods will elucidate a possible path forward for production codes.
Energy Technology Data Exchange (ETDEWEB)
Abanades, A., E-mail: abanades@etsii.upm.es [Grupo de Modelizacion de Sistemas Termoenergeticos, ETSII, Universidad Politecnica de Madrid, c/Ramiro de Maeztu, 7, 28040 Madrid (Spain); Alvarez-Velarde, F.; Gonzalez-Romero, E.M. [Centro de Investigaciones Medioambientales y Tecnologicas (CIEMAT), Avda. Complutense, 40, Ed. 17, 28040 Madrid (Spain); Ismailov, K. [Tokyo Institute of Technology, 2-12-1, O-okayama, Meguro-ku, Tokyo 152-8550 (Japan); Lafuente, A. [Grupo de Modelizacion de Sistemas Termoenergeticos, ETSII, Universidad Politecnica de Madrid, c/Ramiro de Maeztu, 7, 28040 Madrid (Spain); Nishihara, K. [Transmutation Section, J-PARC Center, JAEA, Tokai-mura, Ibaraki-ken 319-1195 (Japan); Saito, M. [Tokyo Institute of Technology, 2-12-1, O-okayama, Meguro-ku, Tokyo 152-8550 (Japan); Stanculescu, A. [International Atomic Energy Agency (IAEA), Vienna (Austria); Sugawara, T. [Transmutation Section, J-PARC Center, JAEA, Tokai-mura, Ibaraki-ken 319-1195 (Japan)
2013-01-15
Highlights: • TARC experiment benchmark capture rate results. • Utilization of updated databases, including ADSLib. • Self-shielding effect in reactor design for transmutation. • Effect of lead nuclear data. - Abstract: The design of Accelerator Driven Systems (ADS) requires the development of simulation tools that are able to describe in a realistic way their nuclear performance and transmutation rate capability. In this publication, we present an evaluation of state-of-the-art Monte Carlo design tools to assess their performance concerning transmutation of long-lived fission products. This work, performed under the umbrella of the International Atomic Energy Agency, analyses two important aspects for transmutation systems: moderation in lead and neutron captures of 99Tc, 127I and 129I. The analysis of the results shows how self-shielding effects due to the resonances at epithermal energies of these nuclides strongly affect their transmutation rate. The results suggest that some research effort should be undertaken to improve the quality of iodine nuclear data at epithermal and fast neutron energies to obtain a reliable transmutation estimation.
Foster, Winfred A., Jr.; Crowder, Winston; Steadman, Todd E.
2014-01-01
This paper presents the results of statistical analyses performed to predict the thrust imbalance between two solid rocket motor boosters to be used on the Space Launch System (SLS) vehicle. Two legacy internal ballistics codes developed for the Space Shuttle program were coupled with a Monte Carlo analysis code to determine a thrust imbalance envelope for the SLS vehicle based on the performance of 1000 motor pairs. Thirty three variables which could impact the performance of the motors during the ignition transient and thirty eight variables which could impact the performance of the motors during steady state operation of the motor were identified and treated as statistical variables for the analyses. The effects of motor to motor variation as well as variations between motors of a single pair were included in the analyses. The statistical variations of the variables were defined based on data provided by NASA's Marshall Space Flight Center for the upgraded five segment booster and from the Space Shuttle booster when appropriate. The results obtained for the statistical envelope are compared with the design specification thrust imbalance limits for the SLS launch vehicle.
Welberry, T R; Heerdegen, A P
2003-12-01
A recently developed method for fitting a Monte Carlo computer-simulation model to observed single-crystal diffuse X-ray scattering has been used to study the diffuse scattering in 4,4'-dimethoxybenzil, C16H14O4. A model involving only nine parameters, consisting of seven intermolecular force constants and two intramolecular torsional force constants, was refined to give an agreement factor, ωR = [Σω(ΔI)²/ΣωI²(obs)]^(1/2), of 18.1% for 118 918 data points in two sections of data. The model was purely thermal in nature. The analysis has shown that the most prominent features of the diffraction patterns, viz. diffuse streaks that occur normal to the [101] direction, are due to longitudinal displacement correlations along chains of molecules extending in this direction. These displacements are transmitted from molecule to molecule via contacts involving pairs of hydrogen bonds between adjacent methoxy groups. In contrast to an earlier study of benzil itself, it was not found to be possible to determine, with any degree of certainty, the torsional force constants for rotations about the single bonds in the molecule. It is supposed that this result may be due to the limited data available in the present study.
King, Martin D; Crowder, Martin J; Hand, David J; Harris, Neil G; Williams, Stephen R; Obrenovitch, Tihomir P; Gadian, David G
2003-06-01
Markov chain Monte Carlo simulation was used in a reanalysis of the longitudinal data obtained by Harris et al. (J Cereb Blood Flow Metab 20:28-36) in a study of the direct current (DC) potential and apparent diffusion coefficient (ADC) responses to focal ischemia. The main purpose was to provide a formal analysis of the temporal relationship between the ADC and DC responses, to explore the possible involvement of a common latent (driving) process. A Bayesian nonlinear hierarchical random coefficients model was adopted. DC and ADC transition parameter posterior probability distributions were generated using three parallel Markov chains created using the Metropolis algorithm. Particular attention was paid to the within-subject differences between the DC and ADC time course characteristics. The results show that the DC response is biphasic, whereas the ADC exhibits monophasic behavior, and that the two DC components are each distinguishable from the ADC response in their time dependencies. The DC and ADC changes are not, therefore, driven by a common latent process. This work demonstrates a general analytical approach to the multivariate, longitudinal data-processing problem that commonly arises in stroke and other biomedical research.
Analysis of Investment Risk Based on the Monte Carlo Method
Institute of Scientific and Technical Information of China (English)
王霞; 张本涛; 马庆
2011-01-01
Using net present value (NPV) as the economic evaluation index, this paper measures the risk of investment projects. The probability distributions of the influencing factors are determined, and a stochastic risk-evaluation model based on triangular distributions is established. The model is simulated with the Monte Carlo method and implemented in MATLAB, yielding the frequency histogram and cumulative frequency curve of the project NPV. Statistical analysis of the simulation results gives the expected NPV and the risk rate, providing a theoretical basis for evaluating the risk of investment projects.
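A hedged sketch of this workflow in Python rather than MATLAB (all cash-flow figures and triangular-distribution parameters are illustrative, not taken from the paper): each uncertain factor is drawn from a triangular distribution, NPV is computed per trial, and the risk rate is the fraction of trials with negative NPV.

```python
import random

def npv(cash_flows, rate):
    """Net present value of yearly cash flows; year 0 comes first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def simulate_npv(n_trials, seed=7):
    """Monte Carlo over triangular(low, high, mode) input distributions
    (note Python's argument order); the numbers are purely illustrative."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_trials):
        investment = -rng.triangular(900, 1100, 1000)   # year-0 outlay
        annual = rng.triangular(150, 400, 300)           # uncertain yearly inflow
        rate = rng.triangular(0.05, 0.12, 0.08)          # uncertain discount rate
        results.append(npv([investment] + [annual] * 5, rate))
    return results

vals = simulate_npv(50000)
mean_npv = sum(vals) / len(vals)
risk = sum(v < 0 for v in vals) / len(vals)   # probability the project loses money
```

A histogram of `vals` corresponds to the frequency distribution the paper produces; `mean_npv` and `risk` are its expected NPV and risk rate.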
Directory of Open Access Journals (Sweden)
Bin Zhou
It is difficult to evaluate and compare interventions for reducing exposure to air pollutants, including polycyclic aromatic hydrocarbons (PAHs), a widely found pollutant in both indoor and outdoor air. This study presents the first application of the Monte Carlo population exposure assessment model to quantify the effects of different intervention strategies on inhalation exposure to PAHs and the associated lung cancer risk. The method was applied to the population in Beijing, China, in the year 2006. Several intervention strategies were designed and studied, including atmospheric cleaning, smoking prohibition indoors, use of clean fuel for cooking, enhancing ventilation while cooking and use of indoor cleaners. Their performances were quantified by population attributable fraction (PAF) and potential impact fraction (PIF) of lung cancer risk, and the changes in indoor PAH concentrations and annual inhalation doses were also calculated and compared. The results showed that atmospheric cleaning and use of indoor cleaners were the two most effective interventions. The sensitivity analysis showed that several input parameters had major influence on the modeled PAH inhalation exposure and the rankings of different interventions. The ranking was reasonably robust for the remaining majority of parameters. The method itself can be extended to other pollutants and to different places. It enables the quantitative comparison of different intervention strategies and would benefit intervention design and relevant policy making.
Mudra, Regina M.; Nadler, Andreas; Keller, Emanuela; Niederer, Peter F.
2006-07-01
Near-infrared spectroscopy (NIRS) combined with indocyanine green (ICG) dilution is applied externally on the head to determine the cerebral hemodynamics of neurointensive care patients. We applied Monte Carlo simulation for the analysis of a number of problems associated with this method. First, the contamination of the optical density (OD) signal due to the extracerebral tissue was assessed. Second, the measured OD signal depends essentially on the relative blood content (with respect to its absorption) in the various transilluminated tissues. To take this into account, we weighted the calculated densities of the photon distribution under baseline conditions within the different tissues with the changes and aberration of the relative blood volumes that are typically observed under healthy and pathologic conditions. Third, in case of NIRS ICG dye dilution, an ICG bolus replaces part of the blood such that a transient change of absorption in the brain tissues occurs that can be recorded in the OD signal. Our results indicate that for an exchange fraction of Δ=30% of the relative blood volume within the intracerebral tissue, the OD signal is determined from 64 to 74% by the gray matter and between 8 to 16% by the white matter maximally for a distance of d=4.5 cm.
Fission Matrix Capability for MCNP Monte Carlo
Energy Technology Data Exchange (ETDEWEB)
Carney, Sean E. [Los Alamos National Laboratory; Brown, Forrest B. [Los Alamos National Laboratory; Kiedrowski, Brian C. [Los Alamos National Laboratory; Martin, William R. [Los Alamos National Laboratory
2012-09-05
In a Monte Carlo criticality calculation, before the tallying of quantities can begin, a converged fission source (the fundamental eigenvector of the fission kernel) is required. Tallies of interest may include powers, absorption rates, leakage rates, or the multiplication factor (the fundamental eigenvalue of the fission kernel, k_eff). Just as in the power iteration method of linear algebra, if the dominance ratio (the ratio of the first and zeroth eigenvalues) is high, many iterations of neutron history simulations are required to isolate the fundamental mode of the problem. Optically large systems have large dominance ratios, and systems containing poor neutron communication between regions are also slow to converge. The fission matrix method, implemented into MCNP [1], addresses these problems. When Monte Carlo random walk from a source is executed, the fission kernel is stochastically applied to the source. Random numbers are used for: distances to collision, reaction types, scattering physics, fission reactions, etc. This method is used because the fission kernel is a complex, 7-dimensional operator that is not explicitly known. Deterministic methods use approximations/discretization in energy, space, and direction to the kernel. Consequently, they are faster. Monte Carlo directly simulates the physics, which necessitates the use of random sampling. Because of this statistical noise, common convergence acceleration methods used in deterministic methods do not work. In the fission matrix method, we are using the random walk information not only to build the next-iteration fission source, but also a spatially-averaged fission kernel. Just like in deterministic methods, this involves approximation and discretization. The approximation is the tallying of the spatially-discretized fission kernel with an incorrect fission source. We address this by making the spatial mesh fine enough that this error is negligible. As a consequence of discretization we get a
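The deterministic half of the method can be sketched as follows: once a spatially discretized fission matrix has been tallied from the random walks, its fundamental eigenpair (k_eff and the fission source) follows from ordinary power iteration. The 2x2 matrix below is an invented toy, not an MCNP tally.

```python
def power_iteration(F, tol=1e-10, max_iter=10000):
    """Dominant eigenvalue (k_eff) and eigenvector (fission source) of a
    tallied fission matrix F, where F[i][j] ~ fission neutrons born in
    cell i per fission neutron started in cell j."""
    n = len(F)
    s = [1.0 / n] * n          # flat initial source, normalized to sum 1
    k = 1.0
    for _ in range(max_iter):
        new = [sum(F[i][j] * s[j] for j in range(n)) for i in range(n)]
        k_new = sum(new)       # with sum(s) == 1, total production is the eigenvalue
        s_new = [v / k_new for v in new]
        if max(abs(a - b) for a, b in zip(s_new, s)) < tol and abs(k_new - k) < tol:
            return k_new, s_new
        k, s = k_new, s_new
    return k, s

# Toy 2-cell fission matrix (illustrative numbers only).
F = [[0.9, 0.3],
     [0.3, 0.5]]
k_eff, source = power_iteration(F)
```

The convergence rate of this iteration is governed by the dominance ratio of F, which is exactly why the abstract notes that high dominance ratios make Monte Carlo source convergence slow.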
Novel Quantum Monte Carlo Approaches for Quantum Liquids
Rubenstein, Brenda M.
the eventual hope is to apply this algorithm to the exploration of yet unidentified high-pressure, low-temperature phases of hydrogen, I employ this algorithm to determine whether or not quantum hard spheres can form a low-temperature bcc solid if exchange is not taken into account. In the final chapter of this thesis, I use Path Integral Monte Carlo once again to explore whether glassy para-hydrogen exhibits superfluidity. Physicists have long searched for ways to coax hydrogen into becoming a superfluid. I present evidence that, while glassy hydrogen does not crystallize at the temperatures at which hydrogen might become a superfluid, it nevertheless does not exhibit superfluidity. This is because the average binding energy per p-H2 molecule poses a severe barrier to exchange regardless of whether the system is crystalline. All in all, this work extends the reach of Quantum Monte Carlo methods to new systems and brings the power of existing methods to bear on new problems. Portions of this work have been published in Rubenstein, PRE (2010) and Rubenstein, PRA (2012) [167;169]. Other papers not discussed here published during my Ph.D. include Rubenstein, BPJ (2008) and Rubenstein, PRL (2012) [166;168]. The work in Chapters 6 and 7 is currently unpublished. [166] Brenda M. Rubenstein, Ivan Coluzza, and Mark A. Miller. Controlling the folding and substrate-binding of proteins using polymer brushes. Physical Review Letters, 108(20):208104, May 2012. [167] Brenda M. Rubenstein, J.E. Gubernatis, and J.D. Doll. Comparative monte carlo efficiency by monte carlo analysis. Physical Review E, 82(3):036701, September 2010. [168] Brenda M. Rubenstein and Laura J. Kaufman. The role of extracellular matrix in glioma invasion: A cellular potts model approach. Biophysical Journal, 95(12):5661-- 5680, December 2008. [169] Brenda M. Rubenstein, Shiwei Zhang, and David R. Reichman. Finite-temperature auxiliary-field quantum monte carlo for bose-fermi mixtures. Physical Review A, 86
DEFF Research Database (Denmark)
Hobolth, Asger
2008-01-01
-dependent substitution models are analytically intractable and must be analyzed using either approximate or simulation-based methods. We describe statistical inference of neighbor-dependent models using a Markov chain Monte Carlo expectation maximization (MCMC-EM) algorithm. In the MCMC-EM algorithm, the high...
Energy Technology Data Exchange (ETDEWEB)
Olah, G.A.; Trewhella, J.
1995-11-01
Analysis of scattering data based on a Monte Carlo integration method was used to obtain a low resolution model of the 4Ca2+.troponin C.troponin I complex. This modeling method allows rapid testing of plausible structures where the best fit model can be ascertained by a comparison between model structure scattering profiles and measured scattering data. In the best fit model, troponin I appears as a spiral structure that wraps about 4Ca2+.troponin C, which adopts an extended dumbbell conformation similar to that observed in the crystal structures of troponin C. The Monte Carlo modeling method can be applied to other biological systems in which detailed structural information is lacking.
He, Li; Huang, Gordon; Lu, Hongwei; Wang, Shuo; Xu, Yi
2012-06-15
This paper presents a global uncertainty and sensitivity analysis (GUSA) framework based on global sensitivity analysis (GSA) and generalized likelihood uncertainty estimation (GLUE) methods. Quasi-Monte Carlo (QMC) is employed by GUSA to obtain realizations of uncertain parameters, which are then input to the simulation model for analysis. Compared to GLUE, GUSA can not only evaluate global sensitivity and uncertainty of modeling parameter sets, but also quantify the uncertainty in modeling prediction sets. Moreover, another advantage of GUSA lies in the alleviation of computational effort, since globally insensitive parameters can be identified and removed from the uncertain-parameter set. GUSA is applied to a practical petroleum-contaminated site in Canada to investigate free product migration and recovery processes under aquifer remediation operations. Results from global sensitivity analysis show that (1) initial free product thickness has the most significant impact on total recovery volume but least impact on residual free product thickness and recovery rate; (2) total recovery volume and recovery rate are sensitive to residual LNAPL phase saturations and soil porosity. Results from uncertainty predictions reveal that the residual thickness would remain high and almost unchanged after about half a year of the skimmer-well scheme; the rather high residual thickness (0.73-1.56 m 20 years later) indicates that natural attenuation would not be suitable for the remediation. The largest total recovery volume would be from water pumping, followed by vacuum pumping, and then skimming. The recovery rates of the three schemes would rapidly decrease after 2 years (less than 0.05 m³/day), thus short-term remediation is not suggested.
Lunar Regolith Albedos Using Monte Carlos
Wilson, T. L.; Andersen, V.; Pinsky, L. S.
2003-01-01
The analysis of planetary regoliths for their backscatter albedos produced by cosmic rays (CRs) is important for space exploration and its potential contributions to science investigations in fundamental physics and astrophysics. Albedos affect all such experiments and the personnel that operate them. Groups have analyzed the production rates of various particles and elemental species by planetary surfaces when bombarded with Galactic CR fluxes, both theoretically and by means of various transport codes, some of which have emphasized neutrons. Here we report on the preliminary results of our current Monte Carlo investigation into the production of charged particles, neutrons, and neutrinos by the lunar surface using FLUKA. In contrast to previous work, the effects of charm are now included.
Atomistic Monte Carlo simulation of lipid membranes
DEFF Research Database (Denmark)
Wüstner, Daniel; Sklenar, Heinz
2014-01-01
Biological membranes are complex assemblies of many different molecules, whose analysis demands a variety of experimental and computational approaches. In this article, we explain the challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction [...], as assessed by calculation of molecular energies and entropies. We also show the transition from a crystalline-like to a fluid DPPC bilayer by the CBC local-move MC method, as indicated by the electron density profile, head-group orientation, area per lipid, and whole-lipid displacements. We discuss the potential [...] of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol.
An Overview of the Monte Carlo Application ToolKit (MCATK)
Energy Technology Data Exchange (ETDEWEB)
Trahan, Travis John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-01-07
MCATK is a C++ component-based Monte Carlo neutron-gamma transport software library designed both to build specialized applications and to provide new functionality in existing general-purpose Monte Carlo codes such as MCNP; it was developed with Agile software engineering methodologies, motivated by a need to reduce costs. The characteristics of MCATK can be summarized as follows: MCATK physics – continuous-energy neutron-gamma transport with multi-temperature treatment, static eigenvalue (k and α) algorithms, a time-dependent algorithm, and fission chain algorithms; MCATK geometry – mesh geometries and solid-body geometries. MCATK provides verified, unit-tested Monte Carlo components, flexibility in Monte Carlo application development, and numerous tools such as geometry and cross-section plotters. Recent work has involved deterministic and Monte Carlo analysis of stochastic systems. Static and dynamic analyses are discussed, and the results of a dynamic test problem are given.
Directory of Open Access Journals (Sweden)
Yongshuai Jiang
Traditional permutation (TradPerm) tests are usually considered the gold standard for multiple testing corrections. However, they can be difficult to complete for meta-analyses of genetic association studies based on multiple single nucleotide polymorphism loci, as they depend on individual-level genotype and phenotype data to perform random shuffles, which are not easy to obtain. Most meta-analyses have therefore been performed using summary statistics from previously published studies. To carry out a permutation using only genotype counts without changing the size of the TradPerm P-value, we developed a Monte Carlo permutation (MCPerm) method. First, for each study included in the meta-analysis, we used a two-step hypergeometric distribution to generate a random number of genotypes in cases and controls. We then carried out a meta-analysis using these random genotype data. Finally, we obtained the corrected permutation P-value of the meta-analysis by repeating the entire process N times. We used five real datasets and five simulation datasets to evaluate the MCPerm method, and our results showed the following: (1) MCPerm requires only the summary statistics of the genotype, without the need for individual-level data; (2) genotype counts generated by our two-step hypergeometric distributions had the same distributions as genotype counts generated by shuffling; (3) MCPerm had almost exactly the same permutation P-values as TradPerm (r = 0.999; P < 2.2e-16); (4) the calculation speed of MCPerm is much faster than that of TradPerm. In summary, MCPerm appears to be a viable alternative to TradPerm, and we have developed it as a freely available R package at CRAN: http://cran.r-project.org/web/packages/MCPerm/index.html.
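The core resampling idea — redraw the case/control split of a fixed genotype count from a hypergeometric distribution, then recompute the test statistic many times — can be sketched as follows. This is an illustrative Python re-implementation, not the authors' MCPerm R package, and all counts are invented.

```python
import random

def hypergeom_draw(n_success, n_total, n_draws, rng):
    """Hypergeometric sample via explicit draws without replacement
    (adequate for the small counts of a 2x2 genotype table)."""
    pool = [1] * n_success + [0] * (n_total - n_success)
    return sum(rng.sample(pool, n_draws))

def permuted_case_count(n_case, n_ctrl, carriers, rng):
    """Reallocate a fixed total number of carriers between cases and
    controls while preserving the study margins (MCPerm-style)."""
    return hypergeom_draw(carriers, n_case + n_ctrl, n_case, rng)

# Invented summary counts: 50 cases, 50 controls, 30 carriers in total,
# of which 22 were observed among the cases.
rng = random.Random(0)
n_case, n_ctrl, carriers, observed = 50, 50, 30, 22
expected = carriers * n_case / (n_case + n_ctrl)
n_perm = 2000
extreme = sum(
    1 for _ in range(n_perm)
    if abs(permuted_case_count(n_case, n_ctrl, carriers, rng) - expected)
       >= abs(observed - expected)
)
p_value = (extreme + 1) / (n_perm + 1)   # Monte Carlo permutation P-value
```

Only the marginal counts are needed, which is exactly what makes the approach usable with published summary statistics.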
Lattice gauge theories and Monte Carlo simulations
Rebbi, Claudio
1983-01-01
This volume is the most up-to-date review of lattice gauge theories and Monte Carlo simulations. It consists of two parts. Part one is an introductory lecture on lattice gauge theories in general, Monte Carlo techniques, and the results to date. Part two consists of important original papers in the field. These selected reprints cover the following topics: lattice gauge theories, general formalism and expansion techniques, Monte Carlo simulations, phase structures, observables in pure gauge theories, systems with bosonic matter fields, and simulation of systems with fermions.
Fast quantum Monte Carlo on a GPU
Lutsyshyn, Y
2013-01-01
We present a scheme for the parallelization of quantum Monte Carlo on graphics processing units, focusing on bosonic systems and variational Monte Carlo. We use asynchronous execution schemes with shared-memory persistence and obtain excellent acceleration: compared with single-core execution, the GPU-accelerated code runs over 100× faster. The CUDA code is provided along with the package necessary to execute variational Monte Carlo for a system representing liquid helium-4. The program was benchmarked on several models of Nvidia GPU, including the Fermi GTX560 and M2090, and the latest Kepler-architecture K20 GPU. Kepler-specific optimization is discussed.
Monte Carlo approaches to light nuclei
Energy Technology Data Exchange (ETDEWEB)
Carlson, J.
1990-01-01
Significant progress has been made recently in the application of Monte Carlo methods to the study of light nuclei. We review new Green's function Monte Carlo results for the alpha particle, Variational Monte Carlo studies of {sup 16}O, and methods for low-energy scattering and transitions. Through these calculations, a coherent picture of the structure and electromagnetic properties of light nuclei has arisen. In particular, we examine the effect of the three-nucleon interaction and the importance of exchange currents in a variety of experimentally measured properties, including form factors and capture cross sections. 29 refs., 7 figs.
Directory of Open Access Journals (Sweden)
Steven M Carr
Phylogenomic analysis of highly resolved intraspecific phylogenies obtained from complete mitochondrial DNA genomes has had great success in clarifying relationships within and among human populations, but has found limited application in other wild species. Analytical challenges include assessment of random versus non-random phylogeographic distributions, and quantification of differences in tree topologies among populations. Harp Seals (Pagophilus groenlandicus Erxleben, 1777) have a biogeographic distribution based on four discrete trans-Atlantic breeding and whelping populations located on "fast ice" attached to land in the White Sea, Greenland Sea, the Labrador Ice Front, and the Southern Gulf of St Lawrence. This East-to-West distribution provides a set of a priori phylogeographic hypotheses. Outstanding biogeographic questions include the degree of genetic distinctiveness among these populations, in particular between the Greenland Sea and White Sea grounds. We obtained complete coding-region DNA sequences (15,825 bp) for 53 seals. Each seal has a unique mtDNA genome sequence; sequences differ by 6-107 substitutions. Six major clades/groups are detectable by parsimony, neighbor-joining, and Bayesian methods, all of which are found in breeding populations on either side of the Atlantic. The species coalescent is at 180 KYA; the most recent clade, which accounts for 66% of the diversity, reflects an expansion during the mid-Wisconsinan glaciation 40-60 KYA. FST is significant only between the White Sea and the Greenland Sea or Ice Front populations. Hierarchical AMOVA of 2-, 3-, or 4-island models identifies small but significant ΦSC among populations within groups, but not among groups. A novel Monte Carlo simulation indicates that the observed distribution of individuals within breeding populations over the phylogenetic tree requires significantly fewer dispersal events than random expectation, consistent with island or a priori East to West 2- or 3
Fransson, Martin Niclas; Barregard, Lars; Sallsten, Gerd; Akerstrom, Magnus; Johanson, Gunnar
2014-10-01
The health effects of low-level chronic exposure to cadmium are increasingly recognized. To improve the risk assessment, it is essential to know the relation between cadmium intake, body burden, and biomarker levels of cadmium. We combined a physiologically based toxicokinetic (PBTK) model for cadmium with a data set from healthy kidney donors to re-estimate the model parameters and to test the effects of gender and serum ferritin on systemic uptake. Cadmium levels in whole blood, blood plasma, kidney cortex, and urinary excretion from 82 men and women were used to calculate posterior distributions for model parameters using Markov-chain Monte Carlo analysis. For never- and ever-smokers combined, the daily systemic uptake was estimated at 0.0063 μg cadmium/kg body weight in men, with 35% higher uptake in women and a daily uptake of 1.2 μg for each pack-year per calendar year of smoking. The rate of urinary excretion from cadmium accumulated in the kidney was estimated at 0.000042 day⁻¹, corresponding to a half-life of 45 years in the kidneys. We have provided an improved model of cadmium kinetics. As the new parameter estimates derive from a single study with measurements in several compartments in each individual, they are likely to be more accurate than the previous ones, for which the data originated from unrelated data sets. The estimated urinary excretion of cadmium accumulated in the kidneys was much lower than previous estimates; neglecting this finding may result in a marked under-prediction of the true kidney burden.
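The Markov-chain Monte Carlo machinery behind such posterior estimates can be illustrated with a minimal Metropolis sampler. The one-parameter linear "uptake" model, the synthetic data, and every number below are invented stand-ins, not the PBTK model from the study.

```python
import math, random

def log_posterior(uptake, data, sigma=0.5):
    """Gaussian likelihood around a linear model, flat prior on (0, 1);
    all values here are illustrative assumptions."""
    if not 0.0 < uptake < 1.0:
        return float("-inf")
    return -sum((y - uptake * t) ** 2 for t, y in data) / (2.0 * sigma ** 2)

def metropolis(data, n_iter=20000, step=0.02, seed=1):
    """Random-walk Metropolis sampler for the single uptake parameter."""
    rng = random.Random(seed)
    x = 0.5
    lp = log_posterior(x, data)
    chain = []
    for _ in range(n_iter):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_posterior(prop, data)
        # Accept with probability min(1, exp(lp_prop - lp)).
        if math.log(rng.random() + 1e-300) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain.append(x)
    return chain

# Synthetic biomarker-vs-time data generated with a "true" uptake of 0.1.
data_rng = random.Random(0)
data = [(t, 0.1 * t + data_rng.gauss(0.0, 0.1)) for t in range(1, 11)]
chain = metropolis(data)
posterior_mean = sum(chain[5000:]) / len(chain[5000:])   # discard burn-in
```

The retained samples approximate the posterior distribution, so the posterior mean lands near the value used to generate the data.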
Ku, B.; Nam, M.
2012-12-01
Neutron logging has been widely used to estimate neutron porosity for evaluating formation properties in the oil industry. More recently, neutron logging has been highlighted for monitoring the behavior of CO2 injected into reservoirs for geological CO2 sequestration. For a better understanding of neutron log interpretation, the Monte Carlo N-Particle (MCNP) code is used to illustrate the response of a neutron tool. In order to obtain calibration curves for the neutron tool, neutron responses are simulated in water-filled limestone, sandstone, and dolomite formations of various porosities. Since the salinities (concentrations of NaCl) of the borehole fluid and formation water are important factors in estimating formation porosity, we first compute and analyze neutron responses for brine-filled formations with different porosities. Further, we consider changes in the brine saturation of a reservoir due to hydrocarbon production or geological CO2 sequestration and simulate the corresponding neutron logging data. As gas saturation decreases, the measured neutron porosity confirms gas effects on neutron logging, which is attributed to the fact that gas has a slightly lower hydrogen content than brine. Meanwhile, an increase in CO2 saturation due to CO2 injection reduces the measured neutron porosity, providing a basis for estimating the CO2 saturation, since the injected CO2 substitutes for the brine. Further analysis of this reduction yields a strategy for estimating CO2 saturation based on time-lapse neutron logging. This strategy can help in monitoring not only geological CO2 sequestration but also CO2 flooding for enhanced oil recovery. Acknowledgements: This work was supported by the Energy Efficiency & Resources program of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant funded by the Korea government Ministry of Knowledge Economy (No. 2012T100201588). Myung Jin Nam was partially supported by the National Research Foundation of Korea (NRF) grant funded by the Korea
11th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing
Nuyens, Dirk
2016-01-01
This book presents the refereed proceedings of the Eleventh International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing that was held at the University of Leuven (Belgium) in April 2014. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems, arising, in particular, in finance, statistics and computer graphics.
Monte Carlo simulations for plasma physics
Energy Technology Data Exchange (ETDEWEB)
Okamoto, M.; Murakami, S.; Nakajima, N.; Wang, W.X. [National Inst. for Fusion Science, Toki, Gifu (Japan)
2000-07-01
Plasma behaviour is very complicated and its analysis is generally difficult. However, when collisional processes play an important role in the plasma behaviour, the Monte Carlo method is often employed as a useful tool. For example, in neutral beam injection heating (NBI heating), electron or ion cyclotron heating, and alpha heating, Coulomb collisions slow down highly energetic particles and scatter them in pitch angle. These processes are often studied by the Monte Carlo technique, and good agreement with experimental results can be obtained. Recently, the Monte Carlo method has been developed further to study fast-particle transport associated with heating and the generation of the radial electric field. It has also been applied to investigating neoclassical transport in plasmas with steep density and temperature gradients, which is beyond the scope of conventional neoclassical theory. In this report, we briefly summarize the research done by the present authors utilizing the Monte Carlo method. (author)
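The pitch-angle-scattering process described above is commonly implemented as a Monte Carlo update of the pitch variable μ = v∥/v: a deterministic drag toward isotropy plus a random kick of matching variance. The sketch below uses that generic form with purely illustrative numbers; it is not the authors' code.

```python
import math, random

def pitch_angle_step(mu, nu_dt, rng):
    """One Monte Carlo pitch-angle scattering step: deterministic drag of
    mu = v_par/v toward isotropy plus a random kick whose variance matches
    Coulomb pitch-angle diffusion over the time step nu*dt."""
    sign = 1.0 if rng.random() < 0.5 else -1.0
    mu_new = mu * (1.0 - nu_dt) + sign * math.sqrt(max(0.0, (1.0 - mu * mu) * nu_dt))
    return max(-1.0, min(1.0, mu_new))

# A beam injected along the field (mu = 1) relaxes toward isotropy.
rng = random.Random(3)
particles = [1.0] * 2000
for _ in range(200):                       # 200 steps with nu*dt = 0.01
    particles = [pitch_angle_step(mu, 0.01, rng) for mu in particles]
mean_mu = sum(particles) / len(particles)  # decays roughly like exp(-nu*t)
```

Since the kick has zero mean, the ensemble average of μ decays exponentially at rate ν while the distribution spreads toward isotropy, which is the behaviour the collision operator is meant to reproduce.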
Improved Monte Carlo Renormalization Group Method
Gupta, R.; Wilson, K. G.; Umrigar, C.
1985-01-01
An extensive program to analyze critical systems using an Improved Monte Carlo Renormalization Group Method (IMCRG) being undertaken at LANL and Cornell is described. Here we first briefly review the method and then list some of the topics being investigated.
Simulation and the Monte Carlo method
Rubinstein, Reuven Y
2016-01-01
Simulation and the Monte Carlo Method, Third Edition reflects the latest developments in the field and presents a fully updated and comprehensive account of the major topics that have emerged in Monte Carlo simulation since the publication of the classic First Edition more than a quarter of a century ago. While maintaining its accessible and intuitive approach, this revised edition features a wealth of up-to-date information that facilitates a deeper understanding of problem solving across a wide array of subject areas, such as engineering, statistics, computer science, mathematics, and the physical and life sciences. The book begins with a modernized introduction that addresses the basic concepts of probability, Markov processes, and convex optimization. Subsequent chapters discuss the dramatic changes that have occurred in the field of the Monte Carlo method, with coverage of many modern topics including: Markov chain Monte Carlo, variance reduction techniques such as the transform likelihood ratio...
Smart detectors for Monte Carlo radiative transfer
Baes, Maarten
2008-01-01
Many optimization techniques have been invented to reduce the noise that is inherent in Monte Carlo radiative transfer simulations. As the typical detectors used in Monte Carlo simulations do not take into account all the information contained in the impacting photon packages, there is still room to optimize this detection process and the corresponding estimate of the surface brightness distributions. We want to investigate how all the information contained in the distribution of impacting photon packages can be optimally used to decrease the noise in the surface brightness distributions and hence to increase the efficiency of Monte Carlo radiative transfer simulations. We demonstrate that the estimate of the surface brightness distribution in a Monte Carlo radiative transfer simulation is similar to the estimate of the density distribution in an SPH simulation. Based on this similarity, a recipe is constructed for smart detectors that take full advantage of the exact location of the impact of the photon pack...
Monte Carlo methods for particle transport
Haghighat, Alireza
2015-01-01
The Monte Carlo method has become the de facto standard in radiation transport. Although powerful, if not understood and used appropriately, the method can give misleading results. Monte Carlo Methods for Particle Transport teaches appropriate use of the Monte Carlo method, explaining the method's fundamental concepts as well as its limitations. Concise yet comprehensive, this well-organized text: * Introduces the particle importance equation and its use for variance reduction * Describes general and particle-transport-specific variance reduction techniques * Presents particle transport eigenvalue issues and methodologies to address these issues * Explores advanced formulations based on the author's research activities * Discusses parallel processing concepts and factors affecting parallel performance Featuring illustrative examples, mathematical derivations, computer algorithms, and homework problems, Monte Carlo Methods for Particle Transport provides nuclear engineers and scientists with a practical guide ...
Quantum Monte Carlo Calculations of Light Nuclei
Pieper, Steven C
2007-01-01
During the last 15 years, there has been much progress in defining the nuclear Hamiltonian and applying quantum Monte Carlo methods to the calculation of light nuclei. I describe both aspects of this work and some recent results.
2014-01-01
M.Phil. (Energy Studies) This dissertation details the radiation shielding analysis and optimization performed to design a shield for the mineral-PET (Positron Emission Tomography) facility. PET is a nuclear imaging technique currently used in diagnostic medicine. The technique is based on the detection of 511 keV coincident and co-linear photons produced from the annihilation of a positron (produced by a positron emitter) and a nearby electron. The technique is now being considered for th...
Monte Carlo Hamiltonian:Inverse Potential
Institute of Scientific and Technical Information of China (English)
LUO Xiang-Qian; CHENG Xiao-Ni; Helmut KRÖGER
2004-01-01
The Monte Carlo Hamiltonian method developed recently makes it possible to investigate the ground state and low-lying excited states of a quantum system, using a Monte Carlo (MC) algorithm with importance sampling. However, the conventional MC algorithm has some difficulties when applied to inverse potentials. We propose to use an effective potential and an extrapolation method to solve the problem. We present examples from the hydrogen system.
The Feynman Path Goes Monte Carlo
Sauer, Tilman
2001-01-01
Path-integral Monte Carlo (PIMC) simulations have become an important tool for the investigation of the statistical mechanics of quantum systems. I discuss some of the history of applying the Monte Carlo method to non-relativistic quantum systems in path-integral representation. The feasibility of the method was well established in principle by the early eighties, and a number of algorithmic improvements have been introduced in the last two decades.
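A minimal example of the technique surveyed here — primitive-discretization path-integral Monte Carlo for a single harmonic oscillator (ħ = m = ω = 1) — fits in a few dozen lines. This is a textbook-style sketch, not an algorithm from the paper; all parameters are illustrative.

```python
import math, random

def pimc_harmonic(beta=10.0, n_beads=32, sweeps=10000, step=0.6, seed=11):
    """Primitive-discretization path-integral Monte Carlo estimate of
    <x^2> for a 1-D harmonic oscillator with hbar = m = omega = 1."""
    tau = beta / n_beads
    rng = random.Random(seed)
    x = [0.0] * n_beads                   # ring polymer of beads

    def local_action(k, xk):
        """Terms of the discretized Euclidean action involving bead k."""
        left, right = x[k - 1], x[(k + 1) % n_beads]
        kinetic = ((xk - left) ** 2 + (right - xk) ** 2) / (2.0 * tau)
        return kinetic + tau * 0.5 * xk * xk

    total, count = 0.0, 0
    for sweep in range(sweeps):
        for k in range(n_beads):          # single-bead Metropolis moves
            new = x[k] + rng.uniform(-step, step)
            d_action = local_action(k, new) - local_action(k, x[k])
            if math.log(rng.random() + 1e-300) < -d_action:
                x[k] = new
        if sweep >= sweeps // 4:          # discard burn-in
            total += sum(xi * xi for xi in x) / n_beads
            count += 1
    return total / count

x2 = pimc_harmonic()   # continuum value is 0.5 * coth(beta/2), ~0.5 at beta=10
```

The estimate carries both statistical error and an O(τ²) discretization bias from the primitive action, the two error sources that much of the algorithmic work mentioned above aims to reduce.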
Self-consistent kinetic lattice Monte Carlo
Energy Technology Data Exchange (ETDEWEB)
Horsfield, A.; Dunham, S.; Fujitani, Hideaki
1999-07-01
The authors present a brief description of a formalism for modeling point defect diffusion in crystalline systems using a Monte Carlo technique. The main approximations required to construct a practical scheme are briefly discussed, with special emphasis on the proper treatment of charged dopants and defects. This is followed by tight binding calculations of the diffusion barrier heights for charged vacancies. Finally, an application of the kinetic lattice Monte Carlo method to vacancy diffusion is presented.
Monte Carlo Algorithms for Linear Problems
DIMOV, Ivan
2000-01-01
MSC Subject Classification: 65C05, 65U05. Monte Carlo methods are a powerful tool in many fields of mathematics, physics and engineering. It is known that these methods give statistical estimates for a functional of the solution by performing random sampling of a certain random variable whose mathematical expectation is the desired functional. Monte Carlo methods are methods for solving problems using random variables. In the book [16] edited by Yu. A. Shreider one can find the followin...
More about Zener drag studies with Monte Carlo simulations
Di Prinzio, Carlos L.; Druetta, Esteban; Nasello, Olga Beatriz
2013-03-01
Grain growth (GG) processes in the presence of second-phase and stationary particles have been widely studied but the results found are inconsistent. We present new GG simulations in two- and three-dimensional (2D and 3D) polycrystalline samples with second phase stationary particles, using the Monte Carlo technique. Simulations using values of particle concentration greater than 15% and particle radii different from 1 or 3 are performed, thus covering a range of particle radii and concentrations not previously studied. It is shown that only the results for 3D samples follow Zener's law.
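Grain-growth simulations of this kind are typically built on a Potts model. Below is a minimal 2-D sketch with immobile second-phase sites and zero-temperature dynamics; it illustrates the technique in general and is not the authors' simulation code.

```python
import random

def potts_sweep(grid, rng, particle=-1):
    """One Monte Carlo sweep of a 2-D Potts grain-growth model with
    periodic boundaries. Sites labelled `particle` are immobile
    second-phase particles; an ordinary site adopts a random neighbour's
    orientation whenever that does not raise the local boundary energy."""
    n = len(grid)
    for _ in range(n * n):
        i, j = rng.randrange(n), rng.randrange(n)
        if grid[i][j] == particle:
            continue                      # particles never move
        nbrs = [grid[(i + di) % n][(j + dj) % n]
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        cand = rng.choice(nbrs)
        if cand == particle or cand == grid[i][j]:
            continue                      # particle states are never copied
        old_e = sum(1 for s in nbrs if s != grid[i][j])
        new_e = sum(1 for s in nbrs if s != cand)
        if new_e <= old_e:                # zero-temperature Metropolis
            grid[i][j] = cand

rng = random.Random(4)
n, q = 32, 128                            # lattice size, orientation states
grid = [[rng.randrange(q) for _ in range(n)] for _ in range(n)]
for i in range(n):                        # ~5% immobile second-phase sites
    for j in range(n):
        if rng.random() < 0.05:
            grid[i][j] = -1
n_particles = sum(row.count(-1) for row in grid)
grains_before = len({s for row in grid for s in row if s != -1})
for _ in range(100):
    potts_sweep(grid, rng)
grains_after = len({s for row in grid for s in row if s != -1})
```

Coarsening shows up as a shrinking number of distinct orientations, while the pinned particle sites limit the final grain size — the Zener drag effect the study investigates.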
Sunil, C
2016-04-01
The neutron ambient dose equivalent outside the radiation shield of a proton therapy cyclotron vault is estimated using the unshielded dose equivalent rates and the attenuation lengths obtained from the literature and by simulations carried out with the FLUKA Monte Carlo radiation transport code. The source terms derived from the literature and that obtained from the FLUKA calculations differ by a factor of 2-3, while the attenuation lengths obtained from the literature differ by 20-40%. The instantaneous dose equivalent rates outside the shield differ by a few orders of magnitude, not only in comparison with the Monte Carlo simulation results, but also with the results obtained by line of sight attenuation calculations with the different parameters obtained from the literature. The attenuation of neutrons caused by the presence of bulk iron, such as magnet yokes is expected to reduce the dose equivalent by as much as a couple of orders of magnitude outside the shield walls.
Energy Technology Data Exchange (ETDEWEB)
Rodenas, Jose [Departamento de Ingenieria Quimica y Nuclear, Universidad Politecnica de Valencia, Apartado 22012, E-46071 Valencia (Spain)], E-mail: jrodenas@iqn.upv.es; Gallardo, Sergio; Ballester, Silvia; Primault, Virginie [Departamento de Ingenieria Quimica y Nuclear, Universidad Politecnica de Valencia, Apartado 22012, E-46071 Valencia (Spain); Ortiz, Josefina [Laboratorio de Radiactividad Ambiental, Universidad Politecnica de Valencia, Apartado 22012, E-46071 Valencia (Spain)
2007-10-15
A gamma spectrometer including an HP Ge detector is commonly used for environmental radioactivity measurements. The efficiency of the detector should be calibrated for each geometry considered. Simulation of the calibration procedure with a validated computer program is an important auxiliary tool for environmental radioactivity laboratories. The MCNP code based on the Monte Carlo method has been applied to simulate the detection process in order to obtain spectrum peaks and determine the efficiency curve for each modelled geometry. The source used for measurements was a calibration mixed radionuclide gamma reference solution, covering a wide energy range (50-2000 keV). Two measurement geometries - Marinelli beaker and Petri boxes - as well as different materials - water, charcoal, sand - containing the source have been considered. Results obtained from the Monte Carlo model have been compared with experimental measurements in the laboratory in order to validate the model.
Error in Monte Carlo, quasi-error in Quasi-Monte Carlo
Kleiss, R H
2006-01-01
While the quasi-Monte Carlo method of numerical integration achieves smaller integration error than standard Monte Carlo, its use in particle physics phenomenology has been hindered by the absence of a reliable way to estimate that error. The standard Monte Carlo error estimator relies on the assumption that the points are generated independently of each other and therefore fails to account for the error improvement advertised by the quasi-Monte Carlo method. We advocate the construction of an estimator of stochastic nature, based on the ensemble of point sets with a particular discrepancy value. We investigate the consequences of this choice and give some first empirical results on the suggested estimators.
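The contrast the abstract draws can be seen in a few lines: the standard Monte Carlo error estimator assumes independent points, while a van der Corput (quasi-random) point set violates that assumption even as it integrates more accurately. A small sketch:

```python
import math, random

def mc_integrate(f, n, rng):
    """Plain Monte Carlo on [0,1] with the standard error estimator
    sqrt(sample variance / n), valid only for independent points."""
    ys = [f(rng.random()) for _ in range(n)]
    mean = sum(ys) / n
    var = sum((y - mean) ** 2 for y in ys) / (n - 1)
    return mean, math.sqrt(var / n)

def van_der_corput(i, base=2):
    """i-th element of the base-b van der Corput low-discrepancy sequence."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def qmc_integrate(f, n):
    """Quasi-Monte Carlo estimate; the points are deterministic and
    correlated, so the standard MC error formula does not apply."""
    return sum(f(van_der_corput(i)) for i in range(1, n + 1)) / n

f = math.exp                  # integral of e^x over [0,1] is e - 1
mc_est, mc_err = mc_integrate(f, 4096, random.Random(7))
qmc_est = qmc_integrate(f, 4096)
```

For smooth integrands the QMC estimate is typically far closer to the exact value than the MC standard error would suggest, which is exactly the gap a discrepancy-based error estimator is meant to close.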
Analysis of dpa rates in the HFIR reactor vessel using a hybrid Monte Carlo/deterministic method
Energy Technology Data Exchange (ETDEWEB)
Blakeman, Edward [Retired]
2016-01-01
The Oak Ridge High Flux Isotope Reactor (HFIR), which began full-power operation in 1966, provides one of the highest steady-state neutron flux levels of any research reactor in the world. An ongoing vessel integrity analysis program to assess radiation-induced embrittlement of the HFIR reactor vessel requires the calculation of neutron and gamma displacements per atom (dpa), particularly at locations near the beam tube nozzles, where radiation streaming effects are most pronounced. In this study we apply the Forward-Weighted Consistent Adjoint Driven Importance Sampling (FW-CADIS) technique in the ADVANTG code to develop variance reduction parameters for use in the MCNP radiation transport code. We initially evaluated dpa rates for dosimetry capsule locations, regions in the vicinity of the HB-2 beamline, and the vessel beltline region. We then extended the study to provide dpa rate maps using three-dimensional cylindrical mesh tallies that extend from approximately 12 in. below to approximately 12 in. above the axial extent of the core. The mesh tally structures contain over 15,000 mesh cells, providing a detailed spatial map of neutron and photon dpa rates at all locations of interest. Relative errors in the mesh tally cells are typically less than 1%.
Analysis of dpa Rates in the HFIR Reactor Vessel using a Hybrid Monte Carlo/Deterministic Method*
Directory of Open Access Journals (Sweden)
Risner J.M.
2016-01-01
The Oak Ridge High Flux Isotope Reactor (HFIR), which began full-power operation in 1966, provides one of the highest steady-state neutron flux levels of any research reactor in the world. An ongoing vessel integrity analysis program to assess radiation-induced embrittlement of the HFIR reactor vessel requires the calculation of neutron and gamma displacements per atom (dpa), particularly at locations near the beam tube nozzles, where radiation streaming effects are most pronounced. In this study we apply the Forward-Weighted Consistent Adjoint Driven Importance Sampling (FW-CADIS) technique in the ADVANTG code to develop variance reduction parameters for use in the MCNP radiation transport code. We initially evaluated dpa rates for dosimetry capsule locations, regions in the vicinity of the HB-2 beamline, and the vessel beltline region. We then extended the study to provide dpa rate maps using three-dimensional cylindrical mesh tallies that extend from approximately 12 in. below to approximately 12 in. above the height of the core. The mesh tally structures contain over 15,000 mesh cells, providing a detailed spatial map of neutron and photon dpa rates at all locations of interest. Relative errors in the mesh tally cells are typically less than 1%.
The Physical Models and Statistical Procedures Used in the RACER Monte Carlo Code
Energy Technology Data Exchange (ETDEWEB)
Sutton, T.M.; Brown, F.B.; Bischoff, F.G.; MacMillan, D.B.; Ellis, C.L.; Ward, J.T.; Ballinger, C.T.; Kelly, D.J.; Schindler, L.
1999-07-01
This report describes the MCV (Monte Carlo - Vectorized) Monte Carlo neutron transport code [Brown, 1982, 1983; Brown and Mendelson, 1984a]. MCV is a module in the RACER system of codes that is used for Monte Carlo reactor physics analysis. The MCV module contains all of the neutron transport and statistical analysis functions of the system, while other modules perform various input-related functions such as geometry description, material assignment, output edit specification, etc. MCV is very closely related to the 05R neutron Monte Carlo code [Irving et al., 1965] developed at Oak Ridge National Laboratory. 05R evolved into the 05RR module of the STEMB system, which was the forerunner of the RACER system. Much of the overall logic and physics treatment of 05RR has been retained and, indeed, the original verification of MCV was achieved through comparison with STEMB results. MCV has been designed to be very computationally efficient [Brown, 1981; Brown and Martin, 1984b; Brown, 1986]. It was originally programmed to make use of vector-computing architectures such as those of the CDC Cyber-205 and Cray X-MP, and was the first full-scale production Monte Carlo code to effectively utilize vector-processing capabilities. Subsequently, MCV was modified to utilize both distributed-memory [Sutton and Brown, 1994] and shared-memory parallelism. The code has been compiled and run on platforms ranging from 32-bit UNIX workstations to clusters of 64-bit vector-parallel supercomputers. The computational efficiency of the code allows the analyst to perform calculations using many more neutron histories than is practical with most other Monte Carlo codes, thereby yielding results with smaller statistical uncertainties. MCV also utilizes variance reduction techniques such as survival biasing, splitting, and rouletting to permit additional reduction in uncertainties. While a general-purpose neutron Monte Carlo code, MCV is optimized for reactor physics calculations. It has the
Monte Carlo method with heuristic adjustment for irregularly shaped food product volume measurement.
Siswantoro, Joko; Prabuwono, Anton Satria; Abdullah, Azizi; Idrus, Bahari
2014-01-01
Volume measurement plays an important role in the production and processing of food products. Various methods have been proposed to measure the volume of food products with irregular shapes based on 3D reconstruction. However, 3D reconstruction comes at a high computational cost, and some of the volume measurement methods based on it have low accuracy. An alternative is the Monte Carlo method, which measures volume using random points: it only requires information on whether each random point falls inside or outside the object, and does not require a 3D reconstruction. This paper proposes volume measurement using a computer vision system for irregularly shaped food products, without 3D reconstruction, based on the Monte Carlo method with heuristic adjustment. Five images of a food product were captured using five cameras and processed to produce binary images. Monte Carlo integration with heuristic adjustment was performed to measure the volume based on the information extracted from the binary images. The experimental results show that the proposed method provides high accuracy and precision compared to the water displacement method. In addition, the proposed method is more accurate and faster than the space carving method.
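The inside/outside idea is easy to state in code: sample uniform points in a bounding box and scale the hit fraction by the box volume. The sketch below validates it on a sphere of known volume rather than on camera images, so the shape and all numbers are illustrative only.

```python
import random

def mc_volume(inside, bounds, n, rng):
    """Estimate a solid's volume from inside/outside tests alone: the
    fraction of uniform random points in a bounding box that land inside
    the object, scaled by the box volume."""
    (x0, x1), (y0, y1), (z0, z1) = bounds
    box = (x1 - x0) * (y1 - y0) * (z1 - z0)
    hits = sum(
        1 for _ in range(n)
        if inside(rng.uniform(x0, x1), rng.uniform(y0, y1), rng.uniform(z0, z1))
    )
    return box * hits / n

# Validate on a shape of known volume: the unit sphere, V = 4*pi/3 ~ 4.18879.
def in_sphere(x, y, z):
    return x * x + y * y + z * z <= 1.0

volume = mc_volume(in_sphere, ((-1, 1), (-1, 1), (-1, 1)), 200_000, random.Random(1))
```

In the paper's setting the `inside` test is answered by the binary camera images instead of an analytic formula, which is why no 3D reconstruction is needed.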
X-ray imaging plate performance investigation based on a Monte Carlo simulation tool
Energy Technology Data Exchange (ETDEWEB)
Yao, M., E-mail: philippe.duvauchelle@insa-lyon.fr [Laboratoire Vibration Acoustique (LVA), INSA de Lyon, 25 Avenue Jean Capelle, 69621 Villeurbanne Cedex (France); Duvauchelle, Ph.; Kaftandjian, V. [Laboratoire Vibration Acoustique (LVA), INSA de Lyon, 25 Avenue Jean Capelle, 69621 Villeurbanne Cedex (France); Peterzol-Parmentier, A. [AREVA NDE-Solutions, 4 Rue Thomas Dumorey, 71100 Chalon-sur-Saône (France); Schumm, A. [EDF R&D SINETICS, 1 Avenue du Général de Gaulle, 92141 Clamart Cedex (France)]
2015-01-01
Computed radiography (CR) based on imaging plate (IP) technology represents a potential replacement technique for traditional film-based industrial radiography. For investigating the IP performance, especially at high energies, a Monte Carlo simulation tool based on PENELOPE has been developed. This tool tracks direct and secondary radiations separately and monitors the behavior of different particles. The simulation output provides the 3D distribution of deposited energy in the IP and an evaluation of radiation spectrum propagation, allowing us to visualize the behavior of different particles and the influence of different elements. A detailed analysis of the spectral and spatial responses of the IP at different energies up to the MeV range has been performed. - Highlights: • A Monte Carlo tool for imaging plate (IP) performance investigation is presented. • The tool outputs 3D maps of energy deposition in the IP due to different signals. • The tool also provides the transmitted spectra along the radiation propagation. • An industrial imaging case is simulated with the presented tool. • A detailed analysis of the spectral and spatial responses of the IP is presented.
Monte Carlo Volcano Seismic Moment Tensors
Waite, G. P.; Brill, K. A.; Lanza, F.
2015-12-01
Inverse modeling of volcano seismic sources can provide insight into the geometry and dynamics of volcanic conduits. But given the logistical challenges of working on an active volcano, seismic networks are typically deficient in spatial and temporal coverage; this potentially leads to large errors in source models. In addition, uncertainties in the centroid location and moment-tensor components, including volumetric components, are difficult to constrain from the linear inversion results, which leads to a poor understanding of the model space. In this study, we employ a nonlinear inversion using a Monte Carlo scheme with the objective of defining robustly resolved elements of model space. The model space is randomized by centroid location and moment tensor eigenvectors. Point sources densely sample the summit area and moment tensors are constrained to a randomly chosen geometry within the inversion; Green's functions for the random moment tensors are all calculated from modeled single forces, making the nonlinear inversion computationally reasonable. We apply this method to very-long-period (VLP) seismic events that accompany minor eruptions at Fuego volcano, Guatemala. The library of single force Green's functions is computed with a 3D finite-difference modeling algorithm through a homogeneous velocity-density model that includes topography, for a 3D grid of nodes, spaced 40 m apart, within the summit region. The homogeneous velocity and density model is justified by the long wavelengths of the VLP data. The nonlinear inversion reveals well resolved model features and informs the interpretation through a better understanding of the possible models. This approach can also be used to evaluate possible station geometries in order to optimize networks prior to deployment.
Energy Technology Data Exchange (ETDEWEB)
Lima, Gabriel A.C. [Universidade Estadual de Campinas (UNICAMP), SP (Brazil). Inst. de Geociencias. Lab. de Analises Geoeconomicas de Recursos Minerais]. E-mail: gabriel@ige.unicamp.br; Vidal, Alexandre C.; Suslick, Saul B. [Universidade Estadual de Campinas (UNICAMP), SP (Brazil) Inst. de Geociencias. Dept. de Geologia e Recursos Naturais]. E-mails: vidal@ige.unicamp.br; suslick@ige.unicamp.br
2006-04-15
In many papers dealing with the estimation of oil reserves, engineers usually assume that well porosity can be modeled as a Gaussian distribution; that is, under this assumption the expected value of porosity can be estimated from the average porosity values from well log and petrophysical data. However, other distributions can be used to model local porosity when a Gaussian distribution cannot fit the sample data. In this paper, using actual porosity data of the 3-NA-002-RJS well from the Campos Basin, it is shown that for a selected interval the logistic distribution fits the data better than other distributions, and its expected value should be used to estimate the well porosities of the entire population. In such cases, as numerical analysis shows, using the arithmetic mean instead of the expected value may give rise to errors. The data show that using an average as the porosity estimator will overestimate the P90 and underestimate the P10 estimates. (author)
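The comparison the authors make can be illustrated on synthetic data. The sketch below is hypothetical and uses simple moment-based fits rather than the authors' procedure or their well-log data: it draws a logistic-distributed "porosity-like" sample and shows that a logistic model attains a higher log-likelihood than a moment-matched Gaussian.

```python
import math
import random

def loglik_logistic(xs, loc, s):
    """Total log-likelihood under a logistic(loc, s) distribution."""
    total = 0.0
    for x in xs:
        z = (x - loc) / s
        total += -z - 2.0 * math.log1p(math.exp(-z)) - math.log(s)
    return total

def loglik_normal(xs, m, sd):
    """Total log-likelihood under a normal(m, sd) distribution."""
    c = math.log(sd * math.sqrt(2.0 * math.pi))
    return sum(-0.5 * ((x - m) / sd) ** 2 - c for x in xs)

# hypothetical porosity-like data drawn from a logistic distribution
# via the inverse CDF: x = loc + s * log(u / (1 - u))
rng = random.Random(42)
loc_true, s_true = 0.20, 0.03
data = [loc_true + s_true * math.log(u / (1.0 - u))
        for u in (rng.random() for _ in range(20_000))]

# moment-based fits: normal uses (mean, std); a logistic with the same
# variance has scale = std * sqrt(3) / pi
m = sum(data) / len(data)
sd = (sum((x - m) ** 2 for x in data) / len(data)) ** 0.5
ll_log = loglik_logistic(data, m, sd * math.sqrt(3.0) / math.pi)
ll_norm = loglik_normal(data, m, sd)
```

On data with logistic (heavier-than-Gaussian) tails, the logistic likelihood wins, which is the kind of goodness-of-fit comparison that motivates choosing it over the Gaussian for an interval.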
Knupp, L S; Veloso, C M; Marcondes, M I; Silveira, T S; Silva, A L; Souza, N O; Knupp, S N R; Cannas, A
2016-03-01
The aim of this study was to analyze the economic viability of producing dairy goat kids fed liquid diets as alternatives to goat milk and slaughtered at two different ages. Forty-eight male newborn Saanen and Alpine kids were selected and allocated to four groups using a completely randomized factorial design: goat milk (GM), cow milk (CM), commercial milk replacer (CMR) and fermented cow colostrum (FC). Each group was then divided into two groups: slaughter at 60 and 90 days of age. The animals received Tifton hay and concentrate ad libitum. The values of total costs of liquid and solid feed plus labor, income and average gross margin were calculated. The data were then analyzed using Monte Carlo techniques with the @Risk 5.5 software, with 1000 iterations of the variables studied through the model. The kids fed GM and CMR generated negative profitability values when slaughtered at 60 days (US$ -16.4 and US$ -2.17, respectively) and also at 90 days (US$ -30.8 and US$ -0.18, respectively). The risk analysis showed that there is a 98% probability that profitability would be negative when GM is used. In contrast, CM and FC presented low risk when the kids were slaughtered at 60 days (8.5% and 21.2%, respectively) and an even lower risk when animals were slaughtered at 90 days (5.2% and 3.8%, respectively). The kids fed CM and slaughtered at 90 days presented the highest average gross income (US$ 67.88) and also average gross margin (US$ 18.43/animal). For the 60-day rearing regime to be economically viable, the CMR cost should not exceed 11.47% of the animal selling price. This implies that the replacer cannot cost more than US$ 0.39 and 0.43/kg for the 60- and 90-day feeding regimes, respectively. The sensitivity analysis showed that the variables with the greatest impact on the final model's results were animal selling price, liquid diet cost, final weight at slaughter and labor. In conclusion, the production of male dairy goat kids can be economically
Energy Technology Data Exchange (ETDEWEB)
Burkatzki, Mark Thomas
2008-07-01
The author presents scalar-relativistic energy-consistent Hartree-Fock pseudopotentials for the main-group and 3d-transition-metal elements. The pseudopotentials do not exhibit a singularity at the nucleus and are therefore suitable for quantum Monte Carlo (QMC) calculations. The author demonstrates their transferability through extensive benchmark calculations of atomic excitation spectra as well as molecular properties. In particular, the author computes the vibrational frequencies and binding energies of 26 first- and second-row diatomic molecules using post-Hartree-Fock methods, finding excellent agreement with the corresponding all-electron values. The author shows that the presented pseudopotentials are more accurate than other existing pseudopotentials constructed specifically for QMC. The localization error and the efficiency in QMC are discussed. The author also presents QMC calculations for selected atomic and diatomic 3d-transition-metal systems. Finally, valence basis sets of different sizes (VnZ with n=D,T,Q,5 for 1st and 2nd row; with n=D,T for 3rd to 5th row; with n=D,T,Q for the 3d transition metals) optimized for the pseudopotentials are presented. (orig.)
Ficaro, Edward Patrick
The 252Cf-source-driven noise analysis (CSDNA) requires the measurement of the cross power spectral density (CPSD) G23(ω) between a pair of neutron detectors (subscripts 2 and 3) located in or near the fissile assembly, and the CPSDs G12(ω) and G13(ω) between the neutron detectors and an ionization chamber (subscript 1) containing 252Cf, also located in or near the fissile assembly. The key advantage of this method is that the subcriticality of the assembly can be obtained from the ratio of spectral densities, G12*(ω)G13(ω) / [G11(ω)G23(ω)], using a point kinetic model formulation that is independent of the detectors' properties and of a reference measurement. The multigroup Monte Carlo code KENO-NR was developed to eliminate the dependence of the measurement on the point kinetic formulation. This code utilizes time-dependent, analog neutron tracking to simulate the experimental method, in addition to the underlying nuclear physics, as closely as possible. From a direct comparison of simulated and measured data, the calculational model and cross sections are validated for the calculation, and KENO-NR can then be rerun to provide a distributed-source k_eff calculation. Depending on the fissile assembly, a few hours to a couple of days of computation time are needed for a typical simulation executed on a desktop workstation. In this work, KENO-NR demonstrated the ability to accurately estimate the measured ratio of spectral densities from experiments using capture detectors performed on uranium metal cylinders, a cylindrical tank filled with aqueous uranyl nitrate, and arrays of safe storage bottles filled with uranyl nitrate. Good agreement was also seen between simulated and measured values of the prompt neutron decay constant from the fitted CPSDs. Poor agreement was seen between simulated and measured results using composite 6Li-glass-plastic scintillators at large subcriticalities for the tank of
Salem, Ahmed Hamed; Zhanel, George G; Ibrahim, Safaa A; Noreddin, Ayman M
2014-06-01
The aim of the present study was to compare the potential of ceftobiprole, dalbavancin, daptomycin, tigecycline, linezolid and vancomycin to achieve their requisite pharmacokinetic/pharmacodynamic (PK/PD) targets against methicillin-resistant Staphylococcus aureus isolates collected from intensive care unit (ICU) settings. Monte Carlo simulations were carried out to simulate the PK/PD indices of the investigated antimicrobials. The probability of target attainment (PTA) was estimated at minimum inhibitory concentration values ranging from 0.03 to 32 μg/mL to define the PK/PD susceptibility breakpoints. The cumulative fraction of response (CFR) was computed using minimum inhibitory concentration data from the Canadian National Intensive Care Unit study. Analysis of the simulation results suggested breakpoints of 4 μg/mL for ceftobiprole (500 mg/2 h t.i.d.), 0.25 μg/mL for dalbavancin (1000 mg), 0.12 μg/mL for daptomycin (4 mg/kg q.d. and 6 mg/kg q.d.) and tigecycline (50 mg b.i.d.), and 2 μg/mL for linezolid (600 mg b.i.d.) and vancomycin (1 g b.i.d. and 1.5 g b.i.d.). The estimated CFRs were 100, 100, 70.6, 88.8, 96.5, 82.4, 89.4, and 98.3% for ceftobiprole, dalbavancin, daptomycin (4 mg/kg/day), daptomycin (6 mg/kg/day), linezolid, tigecycline, vancomycin (1 g b.i.d.) and vancomycin (1.5 g b.i.d.), respectively. In conclusion, ceftobiprole and dalbavancin have the highest probability of achieving their requisite PK/PD targets against methicillin-resistant Staphylococcus aureus isolated from ICU settings. The susceptibility predictions suggested a reduction of the vancomycin breakpoint to 1 μg/mL.
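A Monte Carlo PTA calculation of this general kind can be sketched as follows. All numbers here (the lognormal exposure parameters, the AUC/MIC >= 400 target, the 90% PTA cut-off) are illustrative assumptions, not the study's actual PK models or results:

```python
import random

def pta(mics, target=400.0, n_patients=10_000, mu=6.6, sigma=0.35, seed=7):
    """Probability of target attainment (PTA) per MIC via Monte Carlo.

    Hypothetical model: a patient's daily drug exposure (AUC, mg*h/L) is
    lognormal with parameters (mu, sigma), and the PK/PD target is
    AUC/MIC >= target (a vancomycin-style index).
    """
    rng = random.Random(seed)
    aucs = [rng.lognormvariate(mu, sigma) for _ in range(n_patients)]
    return {mic: sum(a / mic >= target for a in aucs) / n_patients
            for mic in mics}

result = pta([0.5, 1, 2, 4])
# PK/PD breakpoint: highest MIC at which PTA still reaches 90%
bp = max((m for m, p in result.items() if p >= 0.9), default=None)
```

The CFR would then be a weighted average of these per-MIC PTAs over an observed MIC distribution of clinical isolates.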
Approaching Chemical Accuracy with Quantum Monte Carlo
Petruzielo, F R; Umrigar, C J
2012-01-01
A quantum Monte Carlo study of the atomization energies for the G2 set of molecules is presented. Basis size dependence of diffusion Monte Carlo atomization energies is studied with a single determinant Slater-Jastrow trial wavefunction formed from Hartree-Fock orbitals. With the largest basis set, the mean absolute deviation from experimental atomization energies for the G2 set is 3.0 kcal/mol. Optimizing the orbitals within variational Monte Carlo improves the agreement between diffusion Monte Carlo and experiment, reducing the mean absolute deviation to 2.1 kcal/mol. Moving beyond a single determinant Slater-Jastrow trial wavefunction, diffusion Monte Carlo with a small complete active space Slater-Jastrow trial wavefunction results in near chemical accuracy. In this case, the mean absolute deviation from experimental atomization energies is 1.2 kcal/mol. It is shown from calculations on systems containing phosphorus that the accuracy can be further improved by employing a larger active space.
MCHITS: Monte Carlo based Method for Hyperlink Induced Topic Search on Networks
Directory of Open Access Journals (Sweden)
Zhaoyan Jin
2013-10-01
Hyperlink-Induced Topic Search (HITS) is one of the most authoritative and widely used personalized ranking algorithms on networks. The HITS algorithm ranks nodes by power iteration and has a high computational complexity. This paper models the HITS algorithm with the Monte Carlo method and proposes Monte Carlo based algorithms for the HITS computation. Theoretical analysis and experiments show that Monte Carlo based approximate computation of the HITS ranking substantially reduces the required computing resources while maintaining high accuracy, and performs significantly better than related approaches.
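For reference, the deterministic power-iteration HITS that MCHITS approximates can be written compactly. This is a generic sketch of the standard algorithm, not the paper's Monte Carlo variant:

```python
def hits(edges, iters=50):
    """Exact HITS by power iteration (the baseline MCHITS approximates).

    edges: list of (src, dst) hyperlinks.
    Returns (hub, authority) score dicts, each L2-normalized.
    """
    nodes = {n for e in edges for n in e}
    hub = {n: 1.0 for n in nodes}
    auth = {n: 1.0 for n in nodes}
    for _ in range(iters):
        # authority update: sum of hub scores of in-neighbours
        auth = {n: 0.0 for n in nodes}
        for s, d in edges:
            auth[d] += hub[s]
        norm = sum(v * v for v in auth.values()) ** 0.5 or 1.0
        auth = {n: v / norm for n, v in auth.items()}
        # hub update: sum of authority scores of out-neighbours
        hub = {n: 0.0 for n in nodes}
        for s, d in edges:
            hub[s] += auth[d]
        norm = sum(v * v for v in hub.values()) ** 0.5 or 1.0
        hub = {n: v / norm for n, v in hub.items()}
    return hub, auth

# toy web graph: pages 1 and 2 both link to 3; 3 links to 4
hub, auth = hits([(1, 3), (2, 3), (3, 4)])
```

Each iteration touches every edge twice, which is the per-step cost a sampling-based approximation tries to avoid on large graphs.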
Monte Carlo Production Management at CMS
Boudoul, G.; Pol, A; Srimanobhas, P; Vlimant, J R; Franzoni, Giovanni
2015-01-01
The analysis of the LHC data at the Compact Muon Solenoid (CMS) experiment requires the production of a large number of simulated events. During Run 1 of the LHC (2010-2012), CMS produced over 12 billion simulated events, organized in approximately sixty different campaigns, each emulating specific detector conditions and LHC running conditions (pile-up). In order to aggregate the information needed for the configuration and prioritization of the event production, to assure the book-keeping of all the processing requests placed by the physics analysis groups, and to interface with the CMS production infrastructure, the web-based service Monte Carlo Management (McM) was developed and put into production in 2012. McM is based on recent server infrastructure technology (CherryPy + Java) and relies on a CouchDB database back-end. This contribution covers the one and a half years of operational experience managing samples of simulated events for CMS, the evolution of its functionalities and the extension of its capabilities...
Quantum Monte Carlo with Variable Spins
Melton, Cody A; Mitas, Lubos
2016-01-01
We investigate the inclusion of variable spins in electronic structure quantum Monte Carlo, with a focus on diffusion Monte Carlo with Hamiltonians that include spin-orbit interactions. Following our previous introduction of fixed-phase spin-orbit diffusion Monte Carlo (FPSODMC), we thoroughly discuss the details of the method and elaborate upon its technicalities. We present a proof of an upper-bound property for complex nonlocal operators, which allows for the implementation of T-moves to ensure the variational property. We discuss the time-step biases associated with our particular choice of spin representation. Applications of the method are also presented for atomic and molecular systems. We calculate the binding energies and geometry of the PbH and Sn2 molecules, as well as the electron affinities of the 6p-row elements, in close agreement with experiments.
Quantum speedup of Monte Carlo methods.
Montanaro, Ashley
2015-09-08
Monte Carlo methods use random sampling to estimate numerical quantities which are hard to compute deterministically. One important example is the use in statistical physics of rapidly mixing Markov chains to approximately compute partition functions. In this work, we describe a quantum algorithm which can accelerate Monte Carlo methods in a very general setting. The algorithm estimates the expected output value of an arbitrary randomized or quantum subroutine with bounded variance, achieving a near-quadratic speedup over the best possible classical algorithm. Combining the algorithm with the use of quantum walks gives a quantum speedup of the fastest known classical algorithms with rigorous performance bounds for computing partition functions, which use multiple-stage Markov chain Monte Carlo techniques. The quantum algorithm can also be used to estimate the total variation distance between probability distributions efficiently.
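The classical baseline the quantum algorithm is compared against is plain sample averaging, whose additive error shrinks as sigma/sqrt(n), so accuracy eps costs on the order of sigma^2/eps^2 samples; the quantum algorithm improves this to roughly 1/eps. A toy illustration of the classical baseline (the Bernoulli subroutine is hypothetical):

```python
import random

def mc_mean(subroutine, n, seed=5):
    """Classical Monte Carlo mean estimation.

    Averaging n runs of a randomized subroutine gives additive error of
    order sigma/sqrt(n) -- the 1/eps^2 sample-count scaling that the
    quantum algorithm accelerates near-quadratically.
    """
    rng = random.Random(seed)
    return sum(subroutine(rng) for _ in range(n)) / n

# toy randomized subroutine with bounded variance: a Bernoulli(0.3) output
est_small = mc_mean(lambda r: r.random() < 0.3, 100)
est_large = mc_mean(lambda r: r.random() < 0.3, 100_000)
```

Going from 100 to 100,000 samples improves the expected accuracy by a factor of about sqrt(1000) ~ 32, at 1000x the cost.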
Adiabatic optimization versus diffusion Monte Carlo methods
Jarret, Michael; Jordan, Stephen P.; Lackey, Brad
2016-10-01
Most experimental and theoretical studies of adiabatic optimization use stoquastic Hamiltonians, whose ground states are expressible using only real nonnegative amplitudes. This raises a question as to whether classical Monte Carlo methods can simulate stoquastic adiabatic algorithms with polynomial overhead. Here we analyze diffusion Monte Carlo algorithms. We argue that, based on differences between L1 and L2 normalized states, these algorithms suffer from certain obstructions preventing them from efficiently simulating stoquastic adiabatic evolution in generality. In practice, however, we obtain good performance by introducing a method that we call Substochastic Monte Carlo. In fact, our simulations are good classical optimization algorithms in their own right, competitive with the best previously known heuristic solvers for MAX-k-SAT at k = 2, 3, 4.
Random Numbers and Monte Carlo Methods
Scherer, Philipp O. J.
Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. Monte Carlo methods, which sample the integration volume at randomly chosen points, are very useful for the calculation of thermodynamic averages. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with a given probability distribution, which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by preferentially sampling the important configurations. Finally, the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
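The Metropolis algorithm summarized above can be sketched in a few lines. This is a generic textbook version (symmetric uniform proposals, standard normal target), not code from the book:

```python
import math
import random

def metropolis(logp, x0=0.0, step=1.0, n=50_000, burn=1_000, seed=3):
    """Metropolis sampling with a symmetric uniform proposal.

    logp: log of the (unnormalized) target density.
    Returns n samples after discarding a burn-in period.
    """
    rng = random.Random(seed)
    x, out = x0, []
    lp = logp(x)
    for i in range(n + burn):
        y = x + rng.uniform(-step, step)      # symmetric proposal
        lq = logp(y)
        # accept with probability min(1, p(y)/p(x))
        if math.log(rng.random()) < lq - lp:
            x, lp = y, lq
        if i >= burn:
            out.append(x)
    return out

# sample a standard normal: log p(x) = -x^2/2 up to a constant
samples = metropolis(lambda x: -0.5 * x * x)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

Because only the ratio p(y)/p(x) enters the acceptance step, the normalizing constant of the target is never needed, which is the key to its use in statistical physics.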
CosmoPMC: Cosmology Population Monte Carlo
Kilbinger, Martin; Cappe, Olivier; Cardoso, Jean-Francois; Fort, Gersende; Prunet, Simon; Robert, Christian P; Wraith, Darren
2011-01-01
We present the public release of the Bayesian sampling algorithm for cosmology, CosmoPMC (Cosmology Population Monte Carlo). CosmoPMC explores the parameter space of various cosmological probes, and also provides a robust estimate of the Bayesian evidence. CosmoPMC is based on an adaptive importance sampling method called Population Monte Carlo (PMC). Various cosmology likelihood modules are implemented, and new modules can be added easily. The importance-sampling algorithm is written in C and fully parallelised using the Message Passing Interface (MPI). Owing to very little overhead, the wall-clock time required for sampling decreases approximately linearly with the number of CPUs. The CosmoPMC package contains post-processing and plotting programs, and in addition a Markov chain Monte Carlo (MCMC) algorithm. The sampling engine is implemented in the library pmclib and can be used independently. The software is available for download at http://www.cosmopmc.info.
Shell model the Monte Carlo way
Energy Technology Data Exchange (ETDEWEB)
Ormand, W.E.
1995-03-01
The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined.
Monte Carlo strategies in scientific computing
Liu, Jun S
2008-01-01
This paperback edition is a reprint of the 2001 Springer edition. This book provides a self-contained and up-to-date treatment of the Monte Carlo method and develops a common framework under which various Monte Carlo techniques can be "standardized" and compared. Given the interdisciplinary nature of the topics and a moderate prerequisite for the reader, this book should be of interest to a broad audience of quantitative researchers such as computational biologists, computer scientists, econometricians, engineers, probabilists, and statisticians. It can also be used as the textbook for a graduate-level course on Monte Carlo methods. Many problems discussed in the later chapters can be potential thesis topics for master's or PhD students in statistics or computer science departments. Jun Liu is Professor of Statistics at Harvard University, with a courtesy Professor appointment at the Harvard Biostatistics Department. Professor Liu was the recipient of the 2002 COPSS Presidents' Award, the most prestigious one for sta...
Studying the information content of TMDs using Monte Carlo generators
Energy Technology Data Exchange (ETDEWEB)
Avakian, H. [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); Matevosyan, H. [The Univ. of Adelaide, Adelaide (Australia); Pasquini, B. [Univ. of Pavia, Pavia (Italy); Schweitzer, P. [Univ. of Connecticut, Storrs, CT (United States)
2015-02-05
Theoretical advances in studies of the nucleon structure have been spurred by recent measurements of spin and/or azimuthal asymmetries worldwide. One of the main challenges still remaining is the extraction of the parton distribution functions, generalized to describe transverse momentum and spatial distributions of partons from these observables with no or minimal model dependence. In this topical review we present the latest developments in the field with emphasis on requirements for Monte Carlo event generators, indispensable for studies of the complex 3D nucleon structure, and discuss examples of possible applications.
Institute of Scientific and Technical Information of China (English)
范学明; 陶辛未
2011-01-01
The Monte Carlo method is the representative statistical method in stochastic analysis. It places no restriction on the variability of the random quantities, is insensitive to the dimension of the problem, and its solutions can be regarded as relatively accurate, so it has received wide attention. Sample generation is the basis of the Monte Carlo method. The main sampling methods currently in use are traditional (simple random) sampling and Latin-Hypercube sampling. This paper introduces the basic principles of these two sampling methods and analyzes the effectiveness of the samples they generate. On this basis, an algorithm for further improving sampling effectiveness is proposed. The algorithm is simple and efficient, and ultimately improves the computational efficiency of the Monte Carlo method. Its effectiveness is verified by theoretical analysis and numerical examples.
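The two sampling schemes discussed can be contrasted with a minimal Latin-Hypercube sampler. This generic sketch (not the authors' improved algorithm) shows the defining stratification property that plain random sampling lacks:

```python
import random

def latin_hypercube(n, dims, seed=0):
    """Latin-Hypercube sample of n points in [0, 1)^dims.

    Each axis is split into n equal strata; every stratum of every axis
    contains exactly one point, unlike plain random sampling, which can
    leave whole strata empty.
    """
    rng = random.Random(seed)
    cols = []
    for _ in range(dims):
        perm = list(range(n))
        rng.shuffle(perm)                    # random stratum order per axis
        cols.append([(k + rng.random()) / n for k in perm])
    return list(zip(*cols))                  # n points of dimension dims

pts = latin_hypercube(10, 2)
```

The per-axis stratification typically reduces the variance of Monte Carlo estimates for a fixed sample budget, which is the sampling-effectiveness gain the abstract refers to.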
On adaptive resampling strategies for sequential Monte Carlo methods
Del Moral, Pierre; Doucet, Arnaud; Jasra, Ajay
2012-01-01
Sequential Monte Carlo (SMC) methods are a class of techniques to sample approximately from any sequence of probability distributions using a combination of importance sampling and resampling steps. This paper is concerned with the convergence analysis of a class of SMC methods where the times at which resampling occurs are computed online using criteria such as the effective sample size. This is a popular approach amongst practitioners but there are very few convergence results available for...
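The effective-sample-size criterion mentioned above can be sketched as follows. This is a generic illustration of adaptive multinomial resampling, not the paper's convergence analysis:

```python
import random

def ess(weights):
    """Effective sample size of normalized importance weights: 1 / sum(w^2)."""
    return 1.0 / sum(w * w for w in weights)

def maybe_resample(particles, weights, rng, threshold=0.5):
    """Multinomial resampling triggered only when ESS drops below
    threshold * N -- the adaptive criterion used in practice."""
    n = len(particles)
    if ess(weights) < threshold * n:
        particles = rng.choices(particles, weights=weights, k=n)
        weights = [1.0 / n] * n              # reset to uniform after resampling
    return particles, weights

rng = random.Random(1)
parts = list(range(4))
# degenerate weights: one particle carries almost all the mass
w = [0.97, 0.01, 0.01, 0.01]
parts2, w2 = maybe_resample(parts, w, rng)
```

With uniform weights ESS equals N and no resampling occurs; with the degenerate weights above ESS is close to 1, so resampling fires and the weights reset to uniform.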
Probabilistic fire simulator - Monte Carlo simulation tool for fire scenarios
Energy Technology Data Exchange (ETDEWEB)
Hostikka, S.; Keski-Rahkonen, O. [VTT Building and Transport, Espoo (Finland)
2002-11-01
A risk analysis tool is developed for computing the distributions of fire model output variables. The tool, called Probabilistic Fire Simulator, combines Monte Carlo simulation with the CFAST two-zone fire model. In this work, it is used to calculate the failure probability of redundant cables and fire detector activation times in a cable tunnel fire. The sensitivity of the output variables to the input variables is quantified in terms of rank-order correlations. (orig.)
Monte Carlo simulations of organic photovoltaics.
Groves, Chris; Greenham, Neil C
2014-01-01
Monte Carlo simulations are a valuable tool to model the generation, separation, and collection of charges in organic photovoltaics where charges move by hopping in a complex nanostructure and Coulomb interactions between charge carriers are important. We review the Monte Carlo techniques that have been applied to this problem, and describe the results of simulations of the various recombination processes that limit device performance. We show how these processes are influenced by the local physical and energetic structure of the material, providing information that is useful for design of efficient photovoltaic systems.
Monte Carlo dose distributions for radiosurgery
Energy Technology Data Exchange (ETDEWEB)
Perucha, M.; Leal, A.; Rincon, M.; Carrasco, E. [Sevilla Univ. (Spain). Dept. Fisiologia Medica y Biofisica; Sanchez-Doblado, F. [Sevilla Univ. (Spain). Dept. Fisiologia Medica y Biofisica]|[Hospital Univ. Virgen Macarena, Sevilla (Spain). Servicio de Oncologia Radioterapica; Nunez, L. [Clinica Puerta de Hierro, Madrid (Spain). Servicio de Radiofisica; Arrans, R.; Sanchez-Calzado, J.A.; Errazquin, L. [Hospital Univ. Virgen Macarena, Sevilla (Spain). Servicio de Oncologia Radioterapica; Sanchez-Nieto, B. [Royal Marsden NHS Trust (United Kingdom). Joint Dept. of Physics]|[Inst. of Cancer Research, Sutton, Surrey (United Kingdom)
2001-07-01
The precision of radiosurgery treatment planning systems is limited by the approximations of their algorithms and by their dosimetric input data. This is especially important in small fields. However, the Monte Carlo method is an accurate alternative, as it considers every aspect of particle transport. In this work an acoustic neurinoma is studied by comparing the dose distributions from a planning system and from Monte Carlo. Relative shifts have been measured, and dose-volume histograms have been calculated for the target and adjacent organs at risk. (orig.)
The Rational Hybrid Monte Carlo Algorithm
Clark, M A
2006-01-01
The past few years have seen considerable progress in algorithmic development for the generation of gauge fields including the effects of dynamical fermions. The Rational Hybrid Monte Carlo (RHMC) algorithm, where Hybrid Monte Carlo is performed using a rational approximation in place of the usual inverse quark matrix kernel, is one of these developments. This algorithm has been found to be extremely beneficial in many areas of lattice QCD (chiral fermions, finite temperature, Wilson fermions, etc.). We review the algorithm and some of these benefits, and we compare it against other recent algorithmic developments. We conclude with an update of the Berlin wall plot comparing the costs of all popular fermion formulations.
The Rational Hybrid Monte Carlo algorithm
Clark, Michael
2006-12-01
The past few years have seen considerable progress in algorithmic development for the generation of gauge fields including the effects of dynamical fermions. The Rational Hybrid Monte Carlo (RHMC) algorithm, where Hybrid Monte Carlo is performed using a rational approximation in place of the usual inverse quark matrix kernel, is one of these developments. This algorithm has been found to be extremely beneficial in many areas of lattice QCD (chiral fermions, finite temperature, Wilson fermions, etc.). We review the algorithm and some of these benefits, and we compare it against other recent algorithmic developments. We conclude with an update of the Berlin wall plot comparing the costs of all popular fermion formulations.
Monte Carlo Hamiltonian: Linear Potentials
Institute of Scientific and Technical Information of China (English)
LUO Xiang-Qian; Helmut KROEGER; et al.
2002-01-01
We further study the validity of the Monte Carlo Hamiltonian method. The advantage of the method, in comparison with the standard Monte Carlo Lagrangian approach, is its capability to study the excited states. We consider two quantum mechanical models: a symmetric one, V(x) = |x|/2, and an asymmetric one, V(x) = ∞ for x < 0 and V(x) = x/2 for x ≥ 0. The results for the spectrum, wave functions and thermodynamical observables are in agreement with the analytical or Runge-Kutta calculations.
Parallel Markov chain Monte Carlo simulations.
Ren, Ruichao; Orkoulas, G
2007-06-07
With strict detailed balance, parallel Monte Carlo simulation through domain decomposition cannot be validated with conventional Markov chain theory, which describes an intrinsically serial stochastic process. In this work, the parallel version of Markov chain theory and its role in accelerating Monte Carlo simulations via cluster computing is explored. It is shown that sequential updating is the key to improving efficiency in parallel simulations through domain decomposition. A parallel scheme is proposed to reduce interprocessor communication or synchronization, which slows down parallel simulation with increasing number of processors. Parallel simulation results for the two-dimensional lattice gas model show substantial reduction of simulation time for systems of moderate and large size.
Monte Carlo simulations of Protein Adsorption
Sharma, Sumit; Kumar, Sanat K.; Belfort, Georges
2008-03-01
Amyloidogenic diseases, such as Alzheimer's, are caused by adsorption and aggregation of partially unfolded proteins. Adsorption of proteins is a concern in the design of biomedical devices, such as dialysis membranes. Protein adsorption is often accompanied by conformational rearrangements in protein molecules. Such conformational rearrangements are thought to affect many properties of adsorbed protein molecules, such as their adhesion strength to the surface, biological activity, and aggregation tendency. It has been experimentally shown that many naturally occurring proteins, upon adsorption to hydrophobic surfaces, undergo a helix-to-sheet or random coil secondary structural rearrangement. However, to better understand the equilibrium structural complexities of this phenomenon, we have performed Monte Carlo (MC) simulations of adsorption of a four-helix bundle, modeled as a lattice protein, and studied the adsorption behavior and equilibrium protein conformations at different temperatures and degrees of surface hydrophobicity. To study the free energy and entropic effects on adsorption, canonical ensemble MC simulations have been combined with the Weighted Histogram Analysis Method (WHAM). Conformational transitions of proteins on surfaces will be discussed as a function of surface hydrophobicity and compared to analogous bulk transitions.
Diffusion Monte Carlo in internal coordinates.
Petit, Andrew S; McCoy, Anne B
2013-08-15
An internal coordinate extension of diffusion Monte Carlo (DMC) is described as a first step toward a generalized reduced-dimensional DMC approach. The method places no constraints on the choice of internal coordinates other than the requirement that they all be independent. Using H(3)(+) and its isotopologues as model systems, the methodology is shown to be capable of successfully describing the ground state properties of molecules that undergo large amplitude, zero-point vibrational motions. Combining the approach developed here with the fixed-node approximation allows vibrationally excited states to be treated. Analysis of the ground state probability distribution is shown to provide important insights into the set of internal coordinates that are less strongly coupled and therefore more suitable for use as the nodal coordinates for the fixed-node DMC calculations. In particular, the curvilinear normal mode coordinates are found to provide reasonable nodal surfaces for the fundamentals of H(2)D(+) and D(2)H(+) despite both molecules being highly fluxional.
Rare event simulation using Monte Carlo methods
Rubino, Gerardo
2009-01-01
In a probabilistic model, a rare event is an event with a very small probability of occurrence. Forecasting rare events is a formidable task but is important in many areas: for instance, a catastrophic failure in a transport system or a nuclear power plant, or the failure of an information processing system in a bank or in the communication network of a group of banks, leading to financial losses. Being able to evaluate the probability of rare events is therefore a critical issue. Monte Carlo methods, i.e., simulation of the corresponding models, are used to analyze rare events. This book sets out to present the mathematical tools available for the efficient simulation of rare events. Importance sampling and splitting are presented along with an exposition of how to apply these tools to a variety of fields, ranging from performance and dependability evaluation of complex systems, typically in computer science or telecommunications, to chemical reaction analysis in biology or particle transport in physics. ...
Finding Planet Nine: a Monte Carlo approach
Marcos, C de la Fuente
2016-01-01
Planet Nine is a hypothetical planet located well beyond Pluto that has been proposed in an attempt to explain the observed clustering in physical space of the perihelia of six extreme trans-Neptunian objects or ETNOs. The predicted approximate values of its orbital elements include a semimajor axis of 700 au, an eccentricity of 0.6, an inclination of 30 degrees, and an argument of perihelion of 150 degrees. Searching for this putative planet is already under way. Here, we use a Monte Carlo approach to create a synthetic population of Planet Nine orbits and study their visibility statistically in terms of various parameters, focusing on the aphelion configuration. Our analysis shows that, if Planet Nine exists and is at aphelion, it might be found projected against one of four specific areas in the sky. Each area is linked to a particular value of the longitude of the ascending node, and two of them are compatible with an apsidal antialignment scenario. In addition, and after studying the current statistic...
Atomistic Monte Carlo Simulation of Lipid Membranes
Directory of Open Access Journals (Sweden)
Daniel Wüstner
2014-01-01
Biological membranes are complex assemblies of many different molecules, whose analysis demands a variety of experimental and computational approaches. In this article, we explain the challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction to the various move sets implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate, for a concrete example, how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches. We use our recently devised chain breakage/closure (CBC) local move set in the bond-/torsion-angle space with the constant-bond-length approximation (CBLA) for the phospholipid dipalmitoylphosphatidylcholine (DPPC). We demonstrate rapid conformational equilibration for a single DPPC molecule, as assessed by calculation of molecular energies and entropies. We also show the transition from a crystalline-like to a fluid DPPC bilayer by the CBC local-move MC method, as indicated by the electron density profile, head-group orientation, area per lipid, and whole-lipid displacements. We discuss the potential of local-move MC methods, in combination with molecular dynamics simulations, for example for studying multi-component lipid membranes containing cholesterol.
Heinrich, Josué Miguel; Niizawa, Ignacio; Botta, Fausto Adrián; Trombert, Alejandro Raúl; Irazoqui, Horacio Antonio
2012-01-01
In a previous study, we developed a methodology to assess the intrinsic optical properties governing the radiation field in algae suspensions. With these properties at our disposal, a Monte Carlo simulation program is developed and used in this study as a predictive, autonomous program applied to the simulation of experiments that reproduce the illumination conditions commonly found in large-scale production of microalgae, especially in open ponds such as raceway ponds. The simulation module is validated by comparing the results of experimental measurements made on artificially illuminated algal suspensions with those predicted by the Monte Carlo program. The experiment deals with a situation that resembles that of an open pond or a raceway pond, except that, for convenience, the experimental arrangement appears as if those reactors were turned upside down. It serves the purpose of assessing to what extent scattering phenomena are important for predicting the spatial distribution of the radiant energy density. The simulation module developed can be applied to compute the local energy density inside photobioreactors with the goal of optimizing their design and operating conditions.
Subtle Monte Carlo Updates in Dense Molecular Systems
DEFF Research Database (Denmark)
Bottaro, Sandro; Boomsma, Wouter; Johansson, Kristoffer E.;
2012-01-01
Although Markov chain Monte Carlo (MC) simulation is a potentially powerful approach for exploring conformational space, it has been unable to compete with molecular dynamics (MD) in the analysis of high-density structural states, such as the native state of globular proteins. Here, we introduce a kinetic algorithm, CRISP, that greatly enhances the sampling efficiency in all-atom MC simulations of dense systems. The algorithm is based on an exact analytical solution to the classic chain-closure problem, making it possible to express the interdependencies among degrees of freedom in the molecule as correlations in a multivariate Gaussian distribution. We demonstrate that our method reproduces structural variation in proteins with greater efficiency than current state-of-the-art Monte Carlo methods and has real-time simulation performance on par with molecular dynamics simulations. The presented results...
Monte Carlo Simulations of Neutron Oil well Logging Tools
Azcurra, M
2002-01-01
Monte Carlo simulations of simple neutron oil-well logging tools in typical geological formations are presented. The simulated tools consist of both a 14 MeV pulsed neutron source with time-gated gamma-ray detectors and a continuous Am-Be neutron source with continuous gamma-ray detectors. The geological formation consists of pure limestone with 15% absolute porosity over a wide range of oil saturation. Particle transport was performed with the Monte Carlo N-Particle Transport Code System, MCNP-4B. Several gamma-ray spectra were obtained at the detector position, allowing composition analysis of the formation. In particular, the C/O ratio was analyzed as an indicator of oil saturation. Further calculations are proposed to simulate actual detector responses in order to help understand the relation between the detector response and the formation composition
Fast sequential Monte Carlo methods for counting and optimization
Rubinstein, Reuven Y; Vaisman, Radislav
2013-01-01
A comprehensive account of the theory and application of Monte Carlo methods. Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the
Acceleration of the Monte Carlo EM Algorithm
Institute of Scientific and Technical Information of China (English)
罗季
2008-01-01
The EM algorithm is a widely used data-augmentation algorithm for computing posterior mode estimates, but a closed-form expression for the integral in its E-step is sometimes difficult or even impossible to obtain, which limits its applicability. The Monte Carlo EM (MCEM) algorithm resolves this problem by evaluating the E-step integral with Monte Carlo simulation, which greatly broadens the method's applicability. However, both EM and MCEM converge only linearly, at a rate governed by the fraction of missing information, so when the proportion of missing data is high, convergence becomes very slow. The Newton-Raphson algorithm, by contrast, converges quadratically in a neighborhood of the posterior mode. This paper proposes an accelerated Monte Carlo EM algorithm that combines MCEM with Newton-Raphson: the E-step is still realized by Monte Carlo simulation, and the algorithm is proved to converge quadratically near the posterior mode. It thus retains the advantages of MCEM while improving its convergence rate. Numerical examples comparing the accelerated algorithm with the EM and MCEM algorithms further demonstrate its superiority.
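The plain MCEM iteration (without the paper's Newton-Raphson acceleration) can be illustrated on a toy problem; everything below is an invented example, not one from the paper. We estimate the mean theta of N(theta, 1) data when values above c = 1.5 are right-censored: the E-step integral E[z | z > c; theta] is replaced by Monte Carlo draws from the truncated normal, and the M-step is a simple average of the completed data.

```python
import numpy as np

rng = np.random.default_rng(2)
true_theta, c = 1.0, 1.5
data = rng.normal(true_theta, 1.0, size=500)
obs = data[data <= c]                  # fully observed values
n_cens = np.sum(data > c)              # number of right-censored values

theta = 0.0                            # starting value
for _ in range(50):                    # MCEM iterations
    # E-step by simulation: draw from N(theta, 1) truncated to (c, inf)
    # via simple rejection sampling, ~200 Monte Carlo draws per censored value
    imputed = np.empty(0)
    while imputed.size < n_cens * 200:
        z = rng.normal(theta, 1.0, size=10_000)
        imputed = np.concatenate([imputed, z[z > c]])
    e_cens = imputed.mean()            # MC estimate of E[z | z > c; theta]
    # M-step: completed-data MLE, an average over observed and imputed parts
    theta = (obs.sum() + n_cens * e_cens) / data.size

print(f"MCEM estimate: {theta:.3f} (true mean {true_theta})")
```

The linear convergence the abstract describes shows up here as a geometric approach of theta to its fixed point; the accelerated algorithm would replace the fixed-point M-step with a Newton-Raphson update near the mode.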
Monte Carlo methods in AB initio quantum chemistry quantum Monte Carlo for molecules
Lester, William A; Reynolds, PJ
1994-01-01
This book presents the basic theory and application of the Monte Carlo method to the electronic structure of atoms and molecules. It assumes no previous knowledge of the subject, only a knowledge of molecular quantum mechanics at the first-year graduate level. A working knowledge of traditional ab initio quantum chemistry is helpful, but not essential.Some distinguishing features of this book are: Clear exposition of the basic theory at a level to facilitate independent study. Discussion of the various versions of the theory: diffusion Monte Carlo, Green's function Monte Carlo, and release n
On the use of stochastic approximation Monte Carlo for Monte Carlo integration
Liang, Faming
2009-03-01
The stochastic approximation Monte Carlo (SAMC) algorithm has recently been proposed as a dynamic optimization algorithm in the literature. In this paper, we show in theory that the samples generated by SAMC can be used for Monte Carlo integration via a dynamically weighted estimator by calling some results from the literature of nonhomogeneous Markov chains. Our numerical results indicate that SAMC can yield significant savings over conventional Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, for the problems for which the energy landscape is rugged. © 2008 Elsevier B.V. All rights reserved.
Use of Monte Carlo Methods in brachytherapy; Uso del metodo de Monte Carlo en braquiterapia
Energy Technology Data Exchange (ETDEWEB)
Granero Cabanero, D.
2015-07-01
The Monte Carlo method has become a fundamental tool for brachytherapy dosimetry, mainly because of the difficulties associated with experimental dosimetry. In brachytherapy, the main handicap of experimental dosimetry is the high dose gradient near the sources, where small uncertainties in the positioning of the detectors lead to large uncertainties in the dose. This presentation reviews mainly the procedure for calculating dose distributions around a source using the Monte Carlo method, showing the difficulties inherent in these calculations. In addition, we briefly review other applications of the Monte Carlo method in brachytherapy dosimetry, such as its use in advanced calculation algorithms, shielding calculations, or obtaining dose distributions around applicators. (Author)
Energy Technology Data Exchange (ETDEWEB)
Arreola V, G. [IPN, Escuela Superior de Fisica y Matematicas, Posgrado en Ciencias Fisicomatematicas, area en Ingenieria Nuclear, Unidad Profesional Adolfo Lopez Mateos, Edificio 9, Col. San Pedro Zacatenco, 07730 Mexico D. F. (Mexico); Vazquez R, R.; Guzman A, J. R., E-mail: energia.arreola.uam@gmail.com [Universidad Autonoma Metropolitana, Unidad Iztapalapa, Area de Ingenieria en Recursos Energeticos, Av. San Rafael Atlixco 186, Col. Vicentina, 09340 Mexico D. F. (Mexico)
2012-10-15
In this work, a comparative analysis of results for neutron dispersion in a nonmultiplying semi-infinite medium is presented. One boundary of this medium is located at the origin of coordinates, where there is also a neutron source in beam form, i.e., μ0 = 1. The neutron dispersion is studied with the statistical Monte Carlo method and with one-dimensional, one-energy-group transport theory. Transport theory gives a semi-analytic solution for this problem, while the statistical solution for the flux was obtained with the MCNPX code. Dispersion in light water and in heavy water was studied. A first notable result is that both methods locate the maximum of the neutron distribution at less than two transport mean free paths for heavy water, while for light water it lies at less than ten transport mean free paths; the difference between the two methods is larger in the light-water case. A second notable result is that the two distributions behave similarly at small numbers of mean free paths, while at large distances the transport-theory solution tends to an asymptotic value and the statistical solution tends to zero. The existence of a low-energy neutron current flowing back toward the source is demonstrated, opposite in direction to the high-energy neutron current coming from the source itself. (Author)
Monte Carlo Simulation as a Research Management Tool
Energy Technology Data Exchange (ETDEWEB)
Douglas, L. J.
1986-06-01
Monte Carlo simulation provides a research manager with a performance monitoring tool to supplement the standard schedule- and resource-based tools such as the Program Evaluation and Review Technique (PERT) and Critical Path Method (CPM). The value of the Monte Carlo simulation in a research environment is that it 1) provides a method for ranking competing processes, 2) couples technical improvements to the process economics, and 3) provides a mechanism to determine the value of research dollars. In this paper the Monte Carlo simulation approach is developed and applied to the evaluation of three competing processes for converting lignocellulosic biomass to ethanol. The technique is shown to be useful for ranking the processes and illustrating the importance of the timeframe of the analysis on the decision process. The results show that acid hydrolysis processes have higher potential for near-term application (2-5 years), while the enzymatic hydrolysis approach has an equal chance to be competitive in the long term (beyond 10 years).
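The ranking idea in the abstract can be sketched with a toy cost model; every distribution and number below is invented for illustration and is not taken from the paper. Uncertain inputs for two competing (hypothetical) processes are propagated through a simple unit-cost model, and the processes are ranked by the probability that one undercuts the other.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# process A: mature, modest yield, well-characterized costs (all made up)
yield_a = rng.normal(0.70, 0.03, n)       # conversion yield, fraction
cost_a = rng.normal(100.0, 5.0, n)        # cost per batch, arbitrary units
unit_cost_a = cost_a / yield_a

# process B: promising but uncertain, hence the wider distributions
yield_b = rng.normal(0.80, 0.10, n)
cost_b = rng.normal(95.0, 15.0, n)
unit_cost_b = cost_b / yield_b

p_b_wins = np.mean(unit_cost_b < unit_cost_a)
print(f"P(process B cheaper than A) = {p_b_wins:.2f}")
```

Rerunning with distributions tightened by projected research results is how such a model couples technical improvements to the process economics, as the abstract describes.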
Geometric Templates for Improved Tracking Performance in Monte Carlo Codes
Nease, Brian R.; Millman, David L.; Griesheimer, David P.; Gill, Daniel F.
2014-06-01
One of the most fundamental parts of a Monte Carlo code is its geometry kernel. This kernel not only affects particle tracking (i.e., run-time performance), but also shapes how users will input models and collect results for later analyses. A new framework based on geometric templates is proposed that optimizes performance (in terms of tracking speed and memory usage) and simplifies user input for large scale models. While some aspects of this approach currently exist in different Monte Carlo codes, the optimization aspect has not been investigated or applied. If Monte Carlo codes are to be realistically used for full core analysis and design, this type of optimization will be necessary. This paper describes the new approach and the implementation of two template types in MC21: a repeated ellipse template and a box template. Several different models are tested to highlight the performance gains that can be achieved using these templates. Though the exact gains are naturally problem dependent, results show that runtime and memory usage can be significantly reduced when using templates, even as problems reach realistic model sizes.
Noninvasive optical measurement of bone marrow lesions: a Monte Carlo study on visible human dataset
Su, Yu; Li, Ting
2016-03-01
Bone marrow is both the main hematopoietic organ and an important immune organ. Bone marrow lesions (BMLs) may cause a series of severe complications and even myeloma. The traditional diagnosis of BMLs relies mostly on bone marrow biopsy/puncture, and sometimes on MRI, X-ray, etc., which are either invasive and dangerous, or ionizing and costly. A diagnostic technology that is noninvasive, safe, low cost, and capable of real-time continuous detection is needed. Here we report our preliminary exploration of the feasibility of using near-infrared spectroscopy (NIRS) in the clinical diagnosis of BMLs through a Monte Carlo simulation study. We simulated and visualized light propagation in the bone marrow quantitatively with a Monte Carlo simulation software for 3D voxelized media and the Visible Chinese Human dataset, which faithfully represents human anatomy. The results indicate that bone marrow has significant effects on light propagation. Based on a sequence of simulations and data analysis, the optimal source-detector separation was narrowed down to 2.8-3.2 cm, at which separation the spatial sensitivity distribution of NIRS covers most of the bone marrow region with a high signal-to-noise ratio. The placement of the sources and detectors was optimized as well. This study investigated light transport in the spine in the context of BML detection and demonstrated, in theory, the feasibility of noninvasive NIRS detection of BMLs. An optimized probe design for the coming NIRS-based BML detector is also provided.
Variance Reduction Techniques in Monte Carlo Methods
Kleijnen, Jack P.C.; Ridder, A.A.N.; Rubinstein, R.Y.
2010-01-01
Monte Carlo methods are simulation algorithms to estimate a numerical quantity in a statistical model of a real system. These algorithms are executed by computer programs. Variance reduction techniques (VRT) are needed, even though computer speed has been increasing dramatically, ever since the intr
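One classic variance reduction technique can be illustrated in a few lines (the integrand is made up for the example): antithetic variates. To estimate E[e^U] for U ~ Uniform(0,1) (true value e - 1), each draw u is paired with its mirror 1 - u; because e^u is monotone, the pair is negatively correlated and the averaged estimator has far lower variance than the crude one.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50_000

u = rng.random(n)
plain = np.exp(u)                                  # crude estimator samples
anti = 0.5 * (np.exp(u) + np.exp(1.0 - u))         # antithetic pairs

print(f"crude:      {plain.mean():.5f}  (sample var {plain.var():.5f})")
print(f"antithetic: {anti.mean():.5f}  (sample var {anti.var():.5f})")
```

Both estimators are unbiased; the antithetic one simply spends the same number of random draws for a much smaller variance, which is exactly the currency VRTs trade in.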
Monte Carlo methods beyond detailed balance
Schram, Raoul D.; Barkema, Gerard T.
2015-01-01
Monte Carlo algorithms are nearly always based on the concept of detailed balance and ergodicity. In this paper we focus on algorithms that do not satisfy detailed balance. We introduce a general method for designing non-detailed balance algorithms, starting from a conventional algorithm satisfying
A comparison of Monte Carlo generators
Golan, Tomasz
2014-01-01
A comparison of the GENIE, NEUT, NUANCE, and NuWro Monte Carlo neutrino event generators is presented using a set of four observables: proton multiplicity, total visible energy, most energetic proton momentum, and the $\pi^+$ two-dimensional energy vs. cosine distribution.
Scalable Domain Decomposed Monte Carlo Particle Transport
Energy Technology Data Exchange (ETDEWEB)
O'Brien, Matthew Joseph [Univ. of California, Davis, CA (United States)
2013-12-05
In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.
Monte Carlo Simulation of Counting Experiments.
Ogden, Philip M.
A computer program to perform a Monte Carlo simulation of counting experiments was written. The program was based on a mathematical derivation which started with counts in a time interval. The time interval was subdivided to form a binomial distribution with no two counts in the same subinterval. Then the number of subintervals was extended to…
Chen, Hsing-Ta; Cohen, Guy; Reichman, David R.
2017-02-01
In this second paper of a two part series, we present extensive benchmark results for two different inchworm Monte Carlo expansions for the spin-boson model. Our results are compared to previously developed numerically exact approaches for this problem. A detailed discussion of convergence and error propagation is presented. Our results and analysis allow for an understanding of the benefits and drawbacks of inchworm Monte Carlo compared to other approaches for exact real-time non-adiabatic quantum dynamics.
Chen, Hsing-Ta; Reichman, David R
2016-01-01
In this second paper of a two part series, we present extensive benchmark results for two different inchworm Monte Carlo expansions for the spin-boson model. Our results are compared to previously developed numerically exact approaches for this problem. A detailed discussion of convergence and error propagation is presented. Our results and analysis allow for an understanding of the benefits and drawbacks of inchworm Monte Carlo compared to other approaches for exact real-time non-adiabatic quantum dynamics.
Evaluation of atomic electron binding energies for Monte Carlo particle transport
Pia, Maria Grazia; Batic, Matej; Begalli, Marcia; Kim, Chan Hyeong; Quintieri, Lina; Saracco, Paolo
2011-01-01
A survey of atomic binding energies used by general purpose Monte Carlo systems is reported. Various compilations of these parameters have been evaluated; their accuracy is estimated with respect to experimental data. Their effects on physics quantities relevant to Monte Carlo particle transport are highlighted: X-ray fluorescence emission, electron and proton ionization cross sections, and Doppler broadening in Compton scattering. The effects due to different binding energies are quantified with respect to experimental data. The results of the analysis provide quantitative ground for the selection of binding energies to optimize the accuracy of Monte Carlo simulation in experimental use cases. Recommendations on software design dealing with these parameters and on the improvement of data libraries for Monte Carlo simulation are discussed.
Composite sequential Monte Carlo test for post-market vaccine safety surveillance.
Silva, Ivair R
2016-04-30
Group sequential hypothesis testing is now widely used to analyze prospective data. If Monte Carlo simulation is used to construct the signaling threshold, the challenge is how to manage the type I error probability for each one of the multiple tests without losing control on the overall significance level. This paper introduces a valid method for a true management of the alpha spending at each one of a sequence of Monte Carlo tests. The method also enables the use of a sequential simulation strategy for each Monte Carlo test, which is useful for saving computational execution time. Thus, the proposed procedure allows for sequential Monte Carlo test in sequential analysis, and this is the reason that it is called 'composite sequential' test. An upper bound for the potential power losses from the proposed method is deduced. The composite sequential design is illustrated through an application for post-market vaccine safety surveillance data.
2D vs 3D gamma analysis: Establishment of comparable clinical action limits
Directory of Open Access Journals (Sweden)
Kiley B Pulliam
2014-03-01
Purpose: As clinics begin to use 3D metrics for intensity-modulated radiation therapy (IMRT) quality assurance, these metrics will often produce results different from those produced by their 2D counterparts. 3D and 2D gamma analyses would be expected to produce different values because of the different search space available. We compared the results of 2D and 3D gamma analysis (where both datasets were generated the same way) for clinical treatment plans. Methods: 50 IMRT plans were selected from our database and recalculated using Monte Carlo. Treatment planning system-calculated ("evaluated") and Monte Carlo-recalculated ("reference") dose distributions were compared using 2D and 3D gamma analysis. This analysis was performed using a variety of dose-difference (5%, 3%, 2%, and 1%) and distance-to-agreement (5, 3, 2, and 1 mm) acceptance criteria, low-dose thresholds (5%, 10%, and 15% of the prescription dose), and data grid sizes (1.0, 1.5, and 3.0 mm). Each comparison was evaluated to determine the average 2D and 3D gamma and the percentage of pixels passing gamma. Results: Average gamma and percentage of passing pixels for each acceptance criterion demonstrated better agreement for 3D than for 2D analysis for every plan comparison. The average difference in the percentage of passing pixels between the 2D and 3D analyses with no low-dose threshold ranged from 0.9% to 2.1%. Similarly, using a low-dose threshold resulted in differences ranging from 0.8% to 1.5%. No appreciable differences in gamma with changes in the data density (constant difference: 0.8% for 2D vs. 3D) were observed. Conclusion: We found that 3D gamma analysis resulted in up to 2.9% more pixels passing than 2D analysis. Factors such as inherent dosimeter differences may be an important additional consideration beyond the extra dimension of available data evaluated in this study.
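The gamma comparison described above can be sketched in one dimension. This is an illustrative brute-force implementation, not a clinical tool (those interpolate on 2D/3D grids), and the Gaussian profiles stand in for real dose distributions: for each reference point, gamma is the minimum combined dose-difference / distance-to-agreement metric over all evaluated points, and gamma <= 1 means the point passes.

```python
import numpy as np

def gamma_1d(x, dose_ref, dose_eval, dta_mm=3.0, dd_frac=0.03):
    """1D gamma index of dose_eval against dose_ref on grid x (mm)."""
    d_max = dose_ref.max()
    gammas = np.empty(dose_ref.size)
    for i, (xi, dr) in enumerate(zip(x, dose_ref)):
        dist2 = ((x - xi) / dta_mm) ** 2                 # distance term
        diff2 = ((dose_eval - dr) / (dd_frac * d_max)) ** 2  # dose term
        gammas[i] = np.sqrt(np.min(dist2 + diff2))
    return gammas

x = np.linspace(0.0, 100.0, 201)                   # positions in mm
ref = np.exp(-((x - 50.0) / 20.0) ** 2)            # toy reference profile
ev = 1.01 * np.exp(-((x - 50.5) / 20.0) ** 2)      # shifted, rescaled copy
g = gamma_1d(x, ref, ev)
print(f"pass rate (gamma <= 1): {np.mean(g <= 1.0):.1%}")
```

The extra search dimension discussed in the abstract enters through `dist2`: in 3D the minimum runs over a volume rather than a line, so gamma can only decrease, which is why 3D pass rates are higher.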
Quasimodes instability analysis of uncertain asymmetric rotor system based on 3D solid element model
Zuo, Yanfei; Wang, Jianjun; Ma, Weimeng
2017-03-01
Uncertainties are considered in the equation of motion of an asymmetric rotor system. Based on Hill's determinant method, quasimode stability analysis with uncertain parameters is used to obtain stochastic boundaries of the unstable regions. First, a 3D finite element rotor model was built in the rotating frame with four parameterized coefficients, which are treated as random parameters representing the uncertainties existing in the rotor system. Then the influence of the uncertain coefficients on the distribution of the unstable-region boundaries is analyzed. The results show that the uncertain parameters influence the size, boundary, and number of the unstable regions in various ways. Finally, statistics of the minimum and maximum spin speeds of the unstable regions were obtained by Monte Carlo simulation. The method is suitable for real engineering rotor systems, because rotors of arbitrary configuration can be modeled with 3D finite elements.
Yang, Ye; Soyemi, Olusola O.; Landry, Michelle R.; Soller, Babs R.
2005-01-01
The influence of fat thickness on the diffuse reflectance spectra of muscle in the near infrared (NIR) region is studied by Monte Carlo simulations of a two-layer structure and with phantom experiments. A polynomial relationship was established between the fat thickness and the detected diffuse reflectance. The influence of a range of optical coefficients (absorption and reduced scattering) for fat and muscle over the known range of human physiological values was also investigated. Subject-to-subject variation in the fat optical coefficients and thickness can be ignored if the fat thickness is less than 5 mm. A method was proposed to correct for the influence of fat thickness. © 2005 Optical Society of America.
Kim, J.; Park, J.; Kim, J.; Kim, D. W.; Yun, S.; Lim, C. H.; Kim, H. K.
2016-11-01
For the purpose of designing an x-ray detector system for cargo container inspection, we have investigated the energy-absorption signal and noise in CdWO4 detectors for megavoltage x-ray photons. We describe the signal and noise measures, such as quantum efficiency, average energy absorption, Swank noise factor, and detective quantum efficiency (DQE), in terms of energy moments of absorbed energy distributions (AEDs) in a detector. The AED is determined by using a Monte Carlo simulation. The results show that the signal-related measures increase with detector thickness. However, the improvement of Swank noise factor with increasing thickness is weak, and this energy-absorption noise characteristic dominates the DQE performance. The energy-absorption noise mainly limits the signal-to-noise performance of CdWO4 detectors operated at megavoltage x-ray beam.
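The moments formulation mentioned above can be shown directly: the Swank factor is I = M1^2 / (M0 M2), where Mk is the k-th moment of the absorbed energy distribution (AED). The toy AED below, a mix of full-absorption events and uniformly distributed partial-absorption events, is an invented stand-in for a Monte Carlo-tallied CdWO4 spectrum, purely to make the moment arithmetic concrete.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000

# toy AED: 60% of interacting photons deposit the full energy (1.0, arb.
# units); the rest deposit a uniform fraction in (0, 0.5), mimicking
# escape of scattered photons
full = rng.random(n) < 0.6
e_abs = np.where(full, 1.0, 0.5 * rng.random(n))

# moments of the AED, normalized per interacting photon (so M0 = 1)
m0 = 1.0
m1 = np.mean(e_abs)
m2 = np.mean(e_abs**2)
swank = m1**2 / (m0 * m2)
print(f"Swank factor = {swank:.3f} (1.0 would be noise-free absorption)")
```

In the paper's notation, the zero-frequency DQE is the quantum efficiency multiplied by this factor, which is why a weakly improving Swank factor caps the DQE even as the average energy absorption grows with detector thickness.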
Energy Technology Data Exchange (ETDEWEB)
Hardin, M; Elson, H; Lamba, M [University of Cincinnati, Cincinnati, OH (United States); Wolf, E [Precision Radiotherapy, West Chester, OH (United States); Warnick, R [UC Health Physicians, West Chester, OH (United States)
2014-06-01
Purpose: To quantify the clinically observed dose enhancement adjacent to cranial titanium fixation plates during post-operative radiotherapy. Methods: Irradiation of a titanium burr hole cover was simulated using the Monte Carlo code MCNPX for a 6 MV photon spectrum to investigate backscatter dose enhancement due to increased production of secondary electrons within the titanium plate. The simulated plate was placed 3 mm deep in a water phantom, and dose deposition was tallied for 0.2 mm thick cells adjacent to the entrance and exit sides of the plate. These results were compared to a simulation excluding the presence of the titanium to calculate relative dose enhancement on the entrance and exit sides of the plate. To verify the simulated results, two titanium burr hole covers (Synthes, Inc. and Biomet, Inc.) were irradiated with 6 MV photons in a solid water phantom containing GafChromic MD-55 film. The phantom was irradiated on a Varian 21EX linear accelerator at multiple gantry angles (0–180 degrees) to analyze the angular dependence of the backscattered radiation. Relative dose enhancement was quantified using computer software. Results: Monte Carlo simulations indicate a relative difference of 26.4% and 7.1% on the entrance and exit sides of the plate, respectively. Film dosimetry results using a similar geometry indicate a relative difference of 13% and -10% on the entrance and exit sides of the plate, respectively. Relative dose enhancement on the entrance side of the plate decreased with increasing gantry angle from 0 to 180 degrees. Conclusion: Film and simulation results demonstrate an increase in dose to structures immediately adjacent to cranial titanium fixation plates. Increased beam obliquity has been shown to alleviate the dose enhancement to some extent. These results are consistent with clinically observed effects.
Monte Carlo radiation transport in external beam radiotherapy
Çeçen, Yiğit
2013-01-01
The use of Monte Carlo in radiation transport is an effective way to predict absorbed dose distributions. Monte Carlo modeling has contributed to a better understanding of photon and electron transport by radiotherapy physicists. The aim of this review is to introduce Monte Carlo as a powerful radiation transport tool. In this review, photon and electron transport algorithms for Monte Carlo techniques are investigated and a clinical linear accelerator model is studied for external beam radiot...
Energy Technology Data Exchange (ETDEWEB)
Liang, Jingang; Wang, Kan; Qiu, Yishu [Dept. of Engineering Physics, LiuQing Building, Tsinghua University, Beijing (China); Chai, Xiao Ming; Qiang, Sheng Long [Science and Technology on Reactor System Design Technology Laboratory, Nuclear Power Institute of China, Chengdu (China)
2016-06-15
Because of prohibitive data storage requirements in large-scale simulations, the memory problem is an obstacle for Monte Carlo (MC) codes in accomplishing pin-wise three-dimensional (3D) full-core calculations, particularly for whole-core depletion analyses. Various kinds of data are evaluated and quantitative total memory requirements are analyzed based on the Reactor Monte Carlo (RMC) code, showing that tally data, material data, and isotope densities in depletion are three major parts of memory storage. The domain decomposition method is investigated as a means of saving memory, by dividing spatial geometry into domains that are simulated separately by parallel processors. For the validity of particle tracking during transport simulations, particles need to be communicated between domains. In consideration of efficiency, an asynchronous particle communication algorithm is designed and implemented. Furthermore, we couple the domain decomposition method with the MC burnup process, under a strategy of utilizing a consistent domain partition in both the transport and depletion modules. A numerical test of 3D full-core burnup calculations is carried out, indicating that the RMC code, with the domain decomposition method, is capable of pin-wise full-core burnup calculations with millions of depletion regions.
Can we do better than Hybrid Monte Carlo in lattice QCD?
Energy Technology Data Exchange (ETDEWEB)
Berbenni-Bitsch, M.E. [Kaiserslautern Univ. (Germany). Fachbereich Physik; Gottlob, A.P. [Kaiserslautern Univ. (Germany). Fachbereich Physik; Meyer, S. [Kaiserslautern Univ. (Germany). Fachbereich Physik; Puetz, M. [Kaiserslautern Univ. (Germany). Fachbereich Physik
1996-02-01
The Hybrid Monte Carlo algorithm for the simulation of QCD with dynamical staggered fermions is compared with the Kramers equation algorithm. We find substantially different autocorrelation times for local and nonlocal observables. The calculations have been performed on the parallel computer CRAY T3D. (orig.)
Ground bounce tracking for landmine detection using a sequential Monte Carlo method
Tang, Li; Torrione, Peter A.; Eldeniz, Cihat; Collins, Leslie M.
2007-04-01
A Sequential Monte Carlo (SMC) method is proposed to locate the ground bounce (GB) positions in 3D data collected by ground penetrating radar (GPR) system. The algorithm is verified utilizing real data and improved landmine detection performance is achieved compared with three other GB trackers.
Energy Technology Data Exchange (ETDEWEB)
Guan, Fada; Peeler, Christopher; Taleei, Reza; Randeniya, Sharmalee; Ge, Shuaiping; Mirkovic, Dragan; Mohan, Radhe; Titt, Uwe, E-mail: UTitt@mdanderson.org [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 (United States); Bronk, Lawrence [Department of Experimental Radiation Oncology, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 (United States); Geng, Changran [Department of Nuclear Science and Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China and Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States); Grosshans, David [Department of Experimental Radiation Oncology, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 and Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, 1515 Holcombe Boulevard, Houston, Texas 77030 (United States)
2015-11-15
Purpose: The motivation of this study was to find and eliminate the cause of errors in dose-averaged linear energy transfer (LET) calculations from therapeutic protons in small targets, such as biological cell layers, calculated using the GEANT 4 Monte Carlo code. Furthermore, the purpose was also to provide a recommendation to select an appropriate LET quantity from GEANT 4 simulations to correlate with biological effectiveness of therapeutic protons. Methods: The authors developed a particle tracking step based strategy to calculate the average LET quantities (track-averaged LET, LET{sub t} and dose-averaged LET, LET{sub d}) using GEANT 4 for different tracking step size limits. A step size limit refers to the maximally allowable tracking step length. The authors investigated how the tracking step size limit influenced the calculated LET{sub t} and LET{sub d} of protons with six different step limits ranging from 1 to 500 μm in a water phantom irradiated by a 79.7-MeV clinical proton beam. In addition, the authors analyzed the detailed stochastic energy deposition information including fluence spectra and dose spectra of the energy-deposition-per-step of protons. As a reference, the authors also calculated the averaged LET and analyzed the LET spectra combining the Monte Carlo method and the deterministic method. Relative biological effectiveness (RBE) calculations were performed to illustrate the impact of different LET calculation methods on the RBE-weighted dose. Results: Simulation results showed that the step limit effect was small for LET{sub t} but significant for LET{sub d}. This resulted from differences in the energy-deposition-per-step between the fluence spectra and dose spectra at different depths in the phantom. Using the Monte Carlo particle tracking method in GEANT 4 can result in incorrect LET{sub d} calculation results in the dose plateau region for small step limits. The erroneous LET{sub d} results can be attributed to the algorithm to
Validation of the Monte Carlo code MCNP-DSP
Energy Technology Data Exchange (ETDEWEB)
Valentine, T.E.; Mihalczo, J.T. [Oak Ridge National Lab., TN (United States)
1996-09-12
Several calculations were performed to validate MCNP-DSP, which is a Monte Carlo code that calculates all the time and frequency analysis parameters associated with the {sup 252}Cf-source-driven time and frequency analysis method. The frequency analysis parameters are obtained in two ways: directly by Fourier transforming the detector responses and indirectly by taking the Fourier transform of the autocorrelation and cross-correlation functions. The direct and indirect Fourier processing methods were shown to produce the same frequency spectra and convergence, thus verifying the way to obtain the frequency analysis parameters from the time sequences of detector pulses. (Author).
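The equivalence the abstract verifies, that Fourier-transforming the detector signals directly or transforming their correlation functions yields the same spectra, is the (circular) Wiener-Khinchin relation. A minimal pure-Python sketch of that identity, unrelated to the MCNP-DSP implementation itself:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform of a (real or complex) sequence."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def circular_autocorr(x):
    """Circular autocorrelation r[m] = sum_k x[k] * x[(k+m) mod n]."""
    n = len(x)
    return [sum(x[k] * x[(k + m) % n] for k in range(n)) for m in range(n)]

# Direct route: power spectrum |X[j]|^2 from the signal itself.
# Indirect route: DFT of the autocorrelation function.
# For a real signal the two agree term by term.
signal = [0.5, -1.2, 2.0, 0.3, -0.7, 1.1]
direct = [abs(v) ** 2 for v in dft(signal)]
indirect = dft(circular_autocorr(signal))
```

For real data both routes produce the same spectrum up to floating-point rounding, which is the consistency check (applied to time sequences of detector pulses) described in the abstract.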
An enhanced Monte Carlo outlier detection method.
Zhang, Liangxiao; Li, Peiwu; Mao, Jin; Ma, Fei; Ding, Xiaoxia; Zhang, Qi
2015-09-30
Outlier detection is crucial in building a highly predictive model. In this study, we proposed an enhanced Monte Carlo outlier detection method that establishes cross-prediction models based on determinate normal samples and analyzes the distribution of prediction errors individually for dubious samples. One simulated and three real datasets were used to illustrate and validate the performance of our method, and the results indicated that it outperformed standard Monte Carlo outlier detection in outlier diagnosis. After these outliers were removed, the root mean square error of prediction for validation by Kovats retention indices decreased from 3.195 to 1.655, and the average cross-validation prediction error decreased from 2.0341 to 1.2780. This method helps establish a good model by eliminating outliers. © 2015 Wiley Periodicals, Inc.
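The core Monte Carlo outlier idea can be sketched in a simplified form: repeatedly resample a training subset, fit a model, and accumulate each left-out sample's prediction error; samples whose errors stay large across many random splits are flagged. This toy uses a 1-D least-squares fit rather than the authors' cross-prediction models, and all names are illustrative:

```python
import random

def mc_outlier_scores(x, y, n_rounds=200, train_frac=0.7, seed=5):
    """Monte Carlo outlier diagnosis sketch: repeatedly fit a 1-D
    least-squares line on random training subsets and accumulate each
    left-out sample's absolute prediction error. Samples with a high
    mean error across rounds are dubious."""
    rng = random.Random(seed)
    n = len(x)
    err_sum = [0.0] * n
    err_cnt = [0] * n
    for _ in range(n_rounds):
        idx = list(range(n))
        rng.shuffle(idx)
        cut = int(train_frac * n)
        train, test = idx[:cut], idx[cut:]
        # Ordinary least squares on the training subset: y = a + b*x.
        mx = sum(x[i] for i in train) / len(train)
        my = sum(y[i] for i in train) / len(train)
        sxx = sum((x[i] - mx) ** 2 for i in train)
        b = sum((x[i] - mx) * (y[i] - my) for i in train) / sxx
        a = my - b * mx
        for i in test:
            err_sum[i] += abs(y[i] - (a + b * x[i]))
            err_cnt[i] += 1
    return [s / c if c else 0.0 for s, c in zip(err_sum, err_cnt)]
```

On clean linear data with one corrupted point, that point's score dominates the list, which is the diagnostic signal the method exploits.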
Multilevel Monte Carlo Approaches for Numerical Homogenization
Efendiev, Yalchin R.
2015-10-01
In this article, we study the application of multilevel Monte Carlo (MLMC) approaches to numerical random homogenization. Our objective is to compute the expectation of some functionals of the homogenized coefficients, or of the homogenized solutions. This is accomplished within MLMC by considering different sizes of representative volumes (RVEs). Many inexpensive computations with the smallest RVE size are combined with fewer expensive computations performed on larger RVEs. Likewise, when it comes to homogenized solutions, different levels of coarse-grid meshes are used to solve the homogenized equation. We show that, by carefully selecting the number of realizations at each level, we can achieve a speed-up in the computations in comparison to a standard Monte Carlo method. Numerical results are presented for both one-dimensional and two-dimensional test-cases that illustrate the efficiency of the approach.
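The telescoping structure behind MLMC can be illustrated with a toy sketch (not from the article; the midpoint-quadrature "levels" and sample counts are illustrative): many cheap samples of the coarsest approximation carry the bulk of the estimate, while progressively fewer samples of fine-minus-coarse corrections remove the bias.

```python
import math
import random

def approx(x, level):
    """Toy level-l approximation: midpoint quadrature of f(t) = exp(x*t)
    on [0, 1] with 2**level cells. Finer levels are more accurate but
    cost more, mimicking larger RVEs / finer meshes."""
    n = 2 ** level
    return sum(math.exp(x * (i + 0.5) / n) for i in range(n)) / n

def mlmc(levels, samples_per_level, seed=0):
    """Multilevel Monte Carlo estimate of E[approx(X, L)] for X ~ U(0,1)
    via the telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]."""
    rng = random.Random(seed)
    est = 0.0
    for level, n_l in zip(range(levels + 1), samples_per_level):
        acc = 0.0
        for _ in range(n_l):
            x = rng.random()
            fine = approx(x, level)
            coarse = approx(x, level - 1) if level > 0 else 0.0
            acc += fine - coarse  # correction term at this level
        est += acc / n_l
    return est

# Decreasing sample counts per level: corrections have small variance,
# so few samples suffice at the expensive fine levels.
estimate = mlmc(2, [4000, 1000, 250])
```

The exact answer here is the integral of (e^x - 1)/x over [0, 1], about 1.3179; the multilevel estimate lands close to it at a fraction of the cost of running all 4000 samples at the finest level.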
Monte Carlo study of real time dynamics
Alexandru, Andrei; Bedaque, Paulo F; Vartak, Sohan; Warrington, Neill C
2016-01-01
Monte Carlo studies involving real time dynamics are severely restricted by the sign problem that emerges from the highly oscillatory phase of the path integral. In this letter, we present a new method to compute real time quantities on the lattice using the Schwinger-Keldysh formalism via Monte Carlo simulations. The key idea is to deform the path integration domain to a complex manifold where the phase oscillations are mild and the sign problem is manageable. We use the previously introduced "contraction algorithm" to create a Markov chain on this alternative manifold. We substantiate our approach by analyzing the quantum mechanical anharmonic oscillator. Our results are in agreement with the exact ones obtained by diagonalization of the Hamiltonian. The method we introduce is generic and in principle applicable to quantum field theory, albeit very slow. We discuss some possible improvements that should speed up the algorithm.
Hybrid Monte Carlo with Chaotic Mixing
Kadakia, Nirag
2016-01-01
We propose a hybrid Monte Carlo (HMC) technique applicable to high-dimensional multivariate normal distributions that effectively samples along chaotic trajectories. The method is predicated on the freedom of choice of the HMC momentum distribution, and due to its mixing properties, exhibits sample-to-sample autocorrelations that decay far faster than those in the traditional hybrid Monte Carlo algorithm. We test the methods on distributions of varying correlation structure, finding that the proposed technique produces superior covariance estimates, is less reliant on step-size tuning, and can even function with sparse or no momentum re-sampling. The method presented here is promising for more general distributions, such as those that arise in Bayesian learning of artificial neural networks and in the state and parameter estimation of dynamical systems.
Composite biasing in Monte Carlo radiative transfer
Baes, Maarten; Lunttila, Tuomas; Bianchi, Simone; Camps, Peter; Juvela, Mika; Kuiper, Rolf
2016-01-01
Biasing or importance sampling is a powerful technique in Monte Carlo radiative transfer, and can be applied in different forms to increase the accuracy and efficiency of simulations. One of the drawbacks of the use of biasing is the potential introduction of large weight factors. We discuss a general strategy, composite biasing, to suppress the appearance of large weight factors. We use this composite biasing approach for two different problems faced by current state-of-the-art Monte Carlo radiative transfer codes: the generation of photon packages from multiple components, and the penetration of radiation through high optical depth barriers. In both cases, the implementation of the relevant algorithms is trivial and does not interfere with any other optimisation techniques. Through simple test models, we demonstrate the general applicability, accuracy and efficiency of the composite biasing approach. In particular, for the penetration of high optical depths, the gain in efficiency is spectacular for the spe...
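The weight-bounding idea can be shown in one dimension (a generic illustration, not the radiative-transfer implementation): sampling from a mixture of the physical density and a biased density caps every importance weight at 1/(1-λ), so no large weight factors can appear.

```python
import math
import random

def estimate_tail(n, lam=0.5, seed=2):
    """Composite-biasing estimate of P(X > 4) for X ~ Exp(1).
    Draw from the mixture q = (1-lam)*p + lam*b, where p = Exp(1) is the
    physical density and b = Exp(0.25) a biased density favouring the
    tail. Weights w = p/q are bounded by 1/(1-lam)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        if rng.random() < lam:
            x = rng.expovariate(0.25)   # biased component: deep tail reachable
        else:
            x = rng.expovariate(1.0)    # physical component
        p = math.exp(-x)
        b = 0.25 * math.exp(-0.25 * x)
        w = p / ((1 - lam) * p + lam * b)  # bounded: w <= 1/(1-lam) = 2
        if x > 4.0:
            acc += w
    return acc / n
```

Plain sampling from Exp(1) rarely reaches the tail, while biasing alone can produce huge weights; the mixture keeps every weight at most 2 while still hitting the tail often, and the estimate converges to exp(-4) ≈ 0.0183.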
Directory of Open Access Journals (Sweden)
Juan A Martínez-Velasco
2008-06-01
An accurate calculation of lightning overvoltages is an important issue for the analysis and design of overhead transmission lines. The different parts of a transmission line that are involved in lightning calculations must be represented taking into account the frequency ranges of the transients associated with lightning. In addition, the procedures used in these calculations must be developed considering the random nature of lightning phenomena. Several simulation tools have been used to estimate the lightning performance of transmission lines. The most popular approaches are those based on a time-domain simulation technique, for which adequate procedures and transmission line models have to be developed. This paper presents a summary of the computational efforts made by the authors for the development and implementation in an EMTP-like tool of a Monte Carlo procedure, as well as the models of some transmission line components, aimed at analyzing the lightning performance of transmission lines. An actual test line is used to illustrate the scope of this procedure and the type of studies that can be performed.
Handbook of Markov chain Monte Carlo
Brooks, Steve
2011-01-01
""Handbook of Markov Chain Monte Carlo"" brings together the major advances that have occurred in recent years while incorporating enough introductory material for new users of MCMC. Along with thorough coverage of the theoretical foundations and algorithmic and computational methodology, this comprehensive handbook includes substantial realistic case studies from a variety of disciplines. These case studies demonstrate the application of MCMC methods and serve as a series of templates for the construction, implementation, and choice of MCMC methodology.
Accelerated Monte Carlo by Embedded Cluster Dynamics
Brower, R. C.; Gross, N. A.; Moriarty, K. J. M.
1991-07-01
We present an overview of the new methods for embedding Ising spins in continuous fields to achieve accelerated cluster Monte Carlo algorithms. The methods of Brower and Tamayo and Wolff are summarized and variations are suggested for the O( N) models based on multiple embedded Z2 spin components and/or correlated projections. Topological features are discussed for the XY model and numerical simulations presented for d=2, d=3 and mean field theory lattices.
Inhomogeneous Monte Carlo simulations of dermoscopic spectroscopy
Gareau, Daniel S.; Li, Ting; Jacques, Steven; Krueger, James
2012-03-01
Clinical skin-lesion diagnosis uses dermoscopy: 10X epiluminescence microscopy. Skin appearance ranges from black to white with shades of blue, red, gray and orange. Color is an important diagnostic criterion for diseases including melanoma. Melanin and blood content and distribution impact the diffuse spectral remittance (300-1000 nm). Skin layers (immersion medium, stratum corneum, spinous epidermis, basal epidermis and dermis) as well as laterally asymmetric features (e.g. melanocytic invasion) were modeled in an inhomogeneous Monte Carlo model.
Proton therapy Monte Carlo SRNA-VOX code
Directory of Open Access Journals (Sweden)
Ilić Radovan D.
2012-01-01
The most powerful feature of the Monte Carlo method is the possibility of simulating all individual particle interactions in three dimensions and performing numerical experiments with a preset error. These facts were the motivation behind the development of a general-purpose Monte Carlo SRNA program for proton transport simulation in technical systems described by standard geometrical forms (plane, sphere, cone, cylinder, cube). Some of the possible applications of the SRNA program are: (a) a general code for proton transport modeling, (b) design of accelerator-driven systems, (c) simulation of proton scattering and degrading shapes and composition, (d) research on proton detectors; and (e) radiation protection at accelerator installations. This wide range of possible applications of the program demands the development of various versions of SRNA-VOX codes for proton transport modeling in voxelized geometries and has, finally, resulted in the ISTAR package for the calculation of deposited energy distribution in patients on the basis of CT data in radiotherapy. All of the said codes are capable of using 3-D proton sources with an arbitrary energy spectrum in an interval of 100 keV to 250 MeV.
A 3DHZETRN Code in a Spherical Uniform Sphere with Monte Carlo Verification
Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.
2014-01-01
The computationally efficient HZETRN code has been used in recent trade studies for lunar and Martian exploration and is currently being used in the engineering development of the next generation of space vehicles, habitats, and extra vehicular activity equipment. A new version (3DHZETRN) capable of transporting High charge (Z) and Energy (HZE) and light ions (including neutrons) under space-like boundary conditions with enhanced neutron and light ion propagation is under development. In the present report, new algorithms for light ion and neutron propagation with well-defined convergence criteria in 3D objects is developed and tested against Monte Carlo simulations to verify the solution methodology. The code will be available through the software system, OLTARIS, for shield design and validation and provides a basis for personal computer software capable of space shield analysis and optimization.
Magnetic properties of double perovskite Sr2RuHoO6: Monte Carlo Simulation
Nid-bahami, A.; El Kenz, A.; Benyoussef, A.; Bahmad, L.; Hamedoun, M.; El Moussaoui, H.
2016-11-01
In this paper, we have studied the double perovskite complex Sr2RuHoO6 (SRHO) using the Mean-Field Approximation (MFA) and Monte Carlo Simulation (MCS). Firstly, we establish the ground-state phase diagrams as functions of the exchange couplings and the crystal fields; the magnetic properties are then studied. The results obtained by MFA are compared with those obtained using MCS. Secondly, we present finite-size analysis results for the magnetization and the susceptibility as a function of reduced temperature. Finally, we obtain the critical reduced temperature and the critical exponents ν = 0.602 ± 0.011, γ = 1.179 ± 0.022 and β = 0.296 ± 0.018, values close to those of the 3D Ising model (ν = 0.632, γ = 1.23 and β = 0.325).
An introduction to Monte Carlo methods
Walter, J.-C.; Barkema, G. T.
2015-01-01
Monte Carlo simulations are methods for simulating statistical systems. The aim is to generate a representative ensemble of configurations to access thermodynamical quantities without the need to solve the system analytically or to perform an exact enumeration. The main principles of Monte Carlo simulations are ergodicity and detailed balance. The Ising model is a lattice spin system with nearest-neighbor interactions that is appropriate to illustrate different examples of Monte Carlo simulations. It displays a second-order phase transition between disordered (high temperature) and ordered (low temperature) phases, leading to different strategies of simulation. The Metropolis algorithm and the Glauber dynamics are efficient at high temperature. Close to the critical temperature, where the spins display long-range correlations, cluster algorithms are more efficient. We introduce the rejection-free (or continuous-time) algorithm and describe in detail an interesting alternative representation of the Ising model using graphs instead of spins, the so-called worm algorithm. We conclude with an important discussion of dynamical effects such as thermalization and correlation time.
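The Metropolis algorithm for the Ising model described above fits in a few lines; this is the textbook single-spin-flip version (lattice size, temperature, and sweep count are illustrative):

```python
import math
import random

def ising_metropolis(L=16, T=5.0, sweeps=400, seed=3):
    """Metropolis sampling of the 2D Ising model (J = 1, k_B = 1) on an
    L x L periodic lattice. Single-spin flips mix well at high temperature.
    Returns the mean |magnetization| per spin after burn-in."""
    rng = random.Random(seed)
    spin = [[1] * L for _ in range(L)]
    mags = []
    for sweep in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (spin[(i + 1) % L][j] + spin[(i - 1) % L][j]
                  + spin[i][(j + 1) % L] + spin[i][(j - 1) % L])
            dE = 2.0 * spin[i][j] * nb        # energy change of flipping (i, j)
            # Metropolis rule: always accept downhill, else with exp(-dE/T).
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                spin[i][j] = -spin[i][j]
        if sweep >= sweeps // 2:              # discard first half as burn-in
            m = sum(sum(row) for row in spin) / (L * L)
            mags.append(abs(m))
    return sum(mags) / len(mags)
```

Well above the critical temperature (T_c ≈ 2.269) the system is disordered, so the mean |magnetization| per spin stays small; near T_c this algorithm slows down dramatically, which is exactly where the cluster and worm algorithms mentioned above take over.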
Energy Technology Data Exchange (ETDEWEB)
Wang, Ping, E-mail: pingwang@xidian.edu.cn [State Key Laboratory of Integrated Service Networks, School of Telecommunications Engineering, Xidian University, Xi’an 710071 (China); School of Physics and Optoelectronic Engineering, Xidian University, Xi’an 710071 (China); Hu, Linlin; Shan, Xuefei [State Key Laboratory of Integrated Service Networks, School of Telecommunications Engineering, Xidian University, Xi’an 710071 (China); Yang, Yintang [Key Laboratory of the Ministry of Education for Wide Band-Gap Semiconductor Materials and Devices, School of Microelectronics, Xidian University, Xi’an 710071 (China); Song, Jiuxu; Guo, Lixin [School of Physics and Optoelectronic Engineering, Xidian University, Xi’an 710071 (China); Zhang, Zhiyong [School of Information Science and Technology, Northwest University, Xi’an 710127 (China)
2015-01-15
Transient characteristics of wurtzite Zn{sub 1−x}Mg{sub x}O are investigated using a three-valley Ensemble Monte Carlo model verified by the agreement between the simulated low-field mobility and the reported experimental result. The electronic structures are obtained by first principles calculations with density functional theory. The results show that the peak electron drift velocities of Zn{sub 1−x}Mg{sub x}O (x = 11.1%, 16.7%, 19.4%, 25%) at 3000 kV/cm are 3.735 × 10{sup 7}, 2.133 × 10{sup 7}, 1.889 × 10{sup 7}, 1.295 × 10{sup 7} cm/s, respectively. With the increase of Mg concentration, a higher electric field is required for the onset of velocity overshoot. When the applied field exceeds 2000 kV/cm and 2500 kV/cm, velocity undershoot is observed in Zn{sub 0.889}Mg{sub 0.111}O and Zn{sub 0.833}Mg{sub 0.167}O respectively, while it is not observed for Zn{sub 0.806}Mg{sub 0.194}O and Zn{sub 0.75}Mg{sub 0.25}O even at 3000 kV/cm, which is especially important for high frequency devices.
Russell, Travis; Edwards, Brian; Khomami, Bamin
2012-02-01
Experimental SANS research displays a significant concentration dependence of the Flory-Huggins (χ) interaction parameter in isotopic polymer blends. At the extremes of the deuterated polymer concentration (φD 0.8), χ is shown to exhibit a greater than fourfold increase over its value at φD = 0.5. However, despite numerous attempts to theoretically describe the nature of this phenomenon, consensus is still lacking regarding the mechanisms at work in this system. This study uses free-space, spatially discretized Monte Carlo simulations to investigate the χ composition dependence of PE-dPE blends. Initial simulations are run on simple Lennard-Jones fluids to demonstrate the capability of the simulation method to track local concentration and energy across the discretized space, as well as to investigate the concentration dependence of the radial distribution function, g(r), and structure factor, S(k). Subsequently, MC simulations are performed on the PE-dPE system with varying φD. Both local and average system energies are tracked in addition to g(r) and S(k). The Flory-Huggins interaction parameter is then calculated using the Random Phase Approximation.
Molinelli, S.; Mairani, A.; Mirandola, A.; Vilches Freixas, G.; Tessonnier, T.; Giordanengo, S.; Parodi, K.; Ciocca, M.; Orecchia, R.
2013-06-01
During one year of clinical activity at the Italian National Center for Oncological Hadron Therapy 31 patients were treated with actively scanned proton beams. Results of patient-specific quality assurance procedures are presented here which assess the accuracy of a three-dimensional dose verification technique with the simultaneous use of multiple small-volume ionization chambers. To investigate critical cases of major deviations between treatment planning system (TPS) calculated and measured data points, a Monte Carlo (MC) simulation tool was implemented for plan verification in water. Starting from MC results, the impact of dose calculation, dose delivery and measurement set-up uncertainties on plan verification results was analyzed. All resulting patient-specific quality checks were within the acceptance threshold, which was set at 5% for both mean deviation between measured and calculated doses and standard deviation. The mean deviation between TPS dose calculation and measurement was less than ±3% in 86% of the cases. When all three sources of uncertainty were accounted for, simulated data sets showed a high level of agreement, with mean and maximum absolute deviation lower than 2.5% and 5%, respectively.
Energy Technology Data Exchange (ETDEWEB)
Pignol, J.-P. [Toronto-Sunnybrook Regional Cancer Centre, Radiotherapy Dept., Toronto, Ontario (Canada); Slabbert, J. [National Accelerator Centre, Faure (South Africa)
2001-02-01
Fast neutrons (FN) have a higher radio-biological effectiveness (RBE) compared with photons, however the mechanism of this increase remains a controversial issue. RBE variations are seen among various FN facilities and at the same facility when different tissue depths or thicknesses of hardening filters are used. These variations lead to uncertainties in dose reporting as well as in the comparisons of clinical results. Besides radiobiology and microdosimetry, another powerful method for the characterization of FN beams is the calculation of total proton and heavy ion kerma spectra. FLUKA and MCNP Monte Carlo code were used to simulate these kerma spectra following a set of microdosimetry measurements performed at the National Accelerator Centre. The calculated spectra confirmed major classical statements: RBE increase is linked to both slow energy protons and alpha particles yielded by (n,{alpha}) reactions on carbon and oxygen nuclei. The slow energy protons are produced by neutrons having an energy between 10 keV and 10 MeV, while the alpha particles are produced by neutrons having an energy between 10 keV and 15 MeV. Looking at the heavy ion kerma from <15 MeV and the proton kerma from neutrons <10 MeV, it is possible to anticipate y* and RBE trends. (author)
Nishizawa, Manami; Nishizawa, Kazuhisa
2002-12-01
To study the mechanisms for local evolutionary changes in DNA sequences involving slippage-type insertions and deletions, an alignment approach is explored that can consider the posterior probabilities of alignment models. Various patterns of insertion and deletion that can link the ancestor and descendant sequences are proposed and evaluated by simulation and compared by the Markov chain Monte Carlo (MCMC) method. Analyses of pseudogenes reveal that the introduction of the parameters that control the probability of slippage-type events markedly augments the probability of the observed sequence evolution, arguing that a cryptic involvement of slippage occurrences is manifested as insertions and deletions of short nucleotide segments. Strikingly, approximately 80% of insertions in human pseudogenes and approximately 50% of insertions in murids pseudogenes are likely to be caused by the slippage-mediated process, as represented by BC in ABCD --> ABCBCD. We suggest that, in both human and murids, even very short repetitive motifs, such as CAGCAG, CACACA, and CCCC, have approximately 10- to 15-fold susceptibility to insertions and deletions, compared to nonrepetitive sequences. Our protocol, namely, indel-MCMC, thus seems to be a reasonable approach for statistical analyses of the early phase of microsatellite evolution.
Probabilistic Assessments of the Plate Using Monte Carlo Simulation
Energy Technology Data Exchange (ETDEWEB)
Ismail, A E [Department of Mechanical Engineering, Faculty of Mechanical and Manufacturing Engineering, Universiti Tun Hussein Onn Malaysia, Batu Pahat, 86400 Johor (Malaysia); Ariffin, A K; Abdullah, S; Ghazali, M J, E-mail: kamal@eng.ukm.my, E-mail: shahrum@eng.ukm.my, E-mail: maryam@eng.ukm.my, E-mail: emran@uthm.edu.my [Department of Mechanical and Materials Engineering, Faculty of Engineering and Built Environment, Universiti Kebangsaan Malaysia, Universiti Kebangsaan Malaysia, 43600 Bangi, Selangor (Malaysia)
2011-02-15
This paper presents the probabilistic analysis of a plate with a hole using several multiaxial high-cycle fatigue criteria (MHFC). The Dang Van, Sines, and Crossland criteria were used, and the von Mises criterion was also considered for comparison purposes. A parametric finite element model of the plate was developed, several important random variable parameters were selected, and Latin Hypercube Sampling Monte Carlo Simulation (LHS-MCS) was used as the probabilistic analysis tool. It was found that different structural reliability and sensitivity factors were obtained using different failure criteria. According to the results, multiaxial fatigue criteria are the most significant criteria and need to be considered in assessing structural behavior, especially under complex loadings.
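A generic sketch of the LHS-MCS sampling scheme (not the authors' finite-element workflow; the limit-state function below is a placeholder): each random variable's range is stratified into n equal intervals with one draw per interval, and the columns are shuffled independently so every marginal is evenly covered.

```python
import random

def latin_hypercube(n_samples, bounds, seed=4):
    """Latin Hypercube Sampling: stratify each variable's range into
    n_samples equal intervals, draw one point per interval, then shuffle
    each column so the pairing across dimensions is random."""
    rng = random.Random(seed)
    cols = []
    for lo, hi in bounds:
        pts = [lo + (hi - lo) * (k + rng.random()) / n_samples
               for k in range(n_samples)]
        rng.shuffle(pts)
        cols.append(pts)
    return [tuple(col[i] for col in cols) for i in range(n_samples)]

def failure_probability(samples, limit_state):
    """Crude Monte Carlo failure estimate: fraction with g(x) < 0,
    where g is a limit-state function (e.g. capacity minus demand)."""
    fails = sum(1 for x in samples if limit_state(x) < 0)
    return fails / len(samples)

# Hypothetical limit state on two uniform variables: fail when x0 + x1 < 0.5.
samples = latin_hypercube(1000, [(0.0, 1.0), (0.0, 1.0)])
pf = failure_probability(samples, lambda x: x[0] + x[1] - 0.5)
```

Compared with plain random sampling, the stratification guarantees that no interval of any input variable is left unsampled, which typically reduces the variance of reliability estimates at the same sample count.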
Energy Technology Data Exchange (ETDEWEB)
Farah, Jad
2011-10-06
To optimize the monitoring of female workers using in vivo spectrometry measurements, it is necessary to correct the typical calibration coefficients obtained with the Livermore male physical phantom. To do so, numerical calibrations based on Monte Carlo simulations combined with anthropomorphic 3D phantoms were performed. Such computational calibrations require, on the one hand, the development of representative female phantoms of different sizes and morphologies and, on the other hand, rapid and reliable Monte Carlo calculations. A library of female torso models was hence developed by fitting the weight of internal organs and breasts according to body height and to relevant plastic surgery recommendations. This library was next used to perform a numerical calibration of the AREVA NC La Hague in vivo counting installation. Moreover, the morphology-induced variations of counting efficiency with energy were put into equations, and recommendations were given to correct the typical calibration coefficients for any monitored female worker as a function of body height and breast size. Meanwhile, variance reduction techniques and geometry simplification operations were considered to accelerate the simulations. Furthermore, to determine the activity mapping in the case of complex contaminations, a method that combines Monte Carlo simulations with in vivo measurements was developed. This method consists of performing several spectrometry measurements with different detector positions. Next, the contribution of each contaminated organ to the count is assessed from Monte Carlo calculations. The in vivo measurements performed at LEDI, CIEMAT and KIT have demonstrated the effectiveness of the method and highlighted the valuable contribution of Monte Carlo simulations for a more detailed analysis of spectrometry measurements. Thus, a more precise estimate of the activity distribution is given in the case of an internal contamination. (author)
Coherent Scattering Imaging Monte Carlo Simulation
Hassan, Laila Abdulgalil Rafik
Conventional mammography has poor contrast between healthy and cancerous tissues due to the small difference in attenuation properties. Coherent scatter potentially provides more information because interference of coherently scattered radiation depends on the average intermolecular spacing, and can be used to characterize tissue types. However, typical coherent scatter analysis techniques are not compatible with rapid low dose screening techniques. Coherent scatter slot scan imaging is a novel imaging technique which provides new information with higher contrast. In this work a simulation of coherent scatter was performed for slot scan imaging to assess its performance and provide system optimization. In coherent scatter imaging, the coherent scatter is exploited using a conventional slot scan mammography system with anti-scatter grids tilted at the characteristic angle of cancerous tissues. A Monte Carlo simulation was used to simulate the coherent scatter imaging. System optimization was performed across several parameters, including source voltage, tilt angle, grid distances, grid ratio, and shielding geometry. The contrast increased as the grid tilt angle increased beyond the characteristic angle for the modeled carcinoma. A grid tilt angle of 16 degrees yielded the highest contrast and signal to noise ratio (SNR). Also, contrast increased as the source voltage increased. Increasing grid ratio improved contrast at the expense of decreasing SNR. A grid ratio of 10:1 was sufficient to give good contrast without reducing the intensity to the noise level. The optimal source to sample distance was determined to be such that the source should be located at the focal distance of the grid. A carcinoma lump of 0.5x0.5x0.5 cm3 in size was detectable, which is reasonable considering the high noise due to the use of a relatively small number of incident photons for computational reasons. Further study is needed to examine the effect of breast density and breast thickness.
Guideline of Monte Carlo calculation. Neutron/gamma ray transport simulation by Monte Carlo method
2002-01-01
This report condenses basic theories and advanced applications of neutron/gamma-ray transport calculations in many fields of nuclear energy research. Chapters 1 through 5 treat the historical progress of Monte Carlo methods, general issues of variance reduction techniques, and the cross-section libraries used in continuous-energy Monte Carlo codes. Chapter 6 discusses the following topics: fusion benchmark experiments, the design of ITER, experiment analyses of a fast critical assembly, core analyses of the JMTR, simulation of a pulsed neutron experiment, core analyses of the HTTR, duct streaming calculations, bulk shielding calculations, and neutron/gamma-ray transport calculations of the Hiroshima atomic bomb. Chapters 8 and 9 treat function enhancements of the MCNP and MVP codes, and parallel processing of Monte Carlo calculations, respectively. Important references are attached at the end of this report.
Monte Carlo Simulation for the MAGIC-II System
Carmona, E; Moralejo, A; Vitale, V; Sobczynska, D; Haffke, M; Bigongiari, C; Otte, N; Cabras, G; De Maria, M; De Sabata, F
2007-01-01
Within the year 2007, MAGIC will be upgraded to a two telescope system at La Palma. Its main goal is to improve the sensitivity in the stereoscopic/coincident operational mode. At the same time it will lower the analysis threshold of the currently running single MAGIC telescope. Results from the Monte Carlo simulations of this system will be discussed. A comparison of the two telescope system with the performance of one single telescope will be shown in terms of sensitivity, angular resolution and energy resolution.
Validation of Phonon Physics in the CDMS Detector Monte Carlo
McCarthy, K A; Anderson, A J; Brandt, D; Brink, P L; Cabrera, B; Cherry, M; Silva, E Do Couto E; Cushman, P; Doughty, T; Figueroa-Feliciano, E; Kim, P; Mirabolfathi, N; Novak, L; Partridge, R; Pyle, M; Reisetter, A; Resch, R; Sadoulet, B; Serfass, B; Sundqvist, K M; Tomada, A
2011-01-01
The SuperCDMS collaboration is a dark matter search effort aimed at detecting the scattering of WIMP dark matter from nuclei in cryogenic germanium targets. The CDMS Detector Monte Carlo (CDMS-DMC) is a simulation tool aimed at achieving a deeper understanding of the performance of the SuperCDMS detectors and aiding the dark matter search analysis. We present results from validation of the phonon physics described in the CDMS-DMC and outline work towards utilizing it in future WIMP search analyses.
On adaptive resampling strategies for sequential Monte Carlo methods
Del Moral, Pierre; Jasra, Ajay; 10.3150/10-BEJ335
2012-01-01
Sequential Monte Carlo (SMC) methods are a class of techniques to sample approximately from any sequence of probability distributions using a combination of importance sampling and resampling steps. This paper is concerned with the convergence analysis of a class of SMC methods where the times at which resampling occurs are computed online using criteria such as the effective sample size. This is a popular approach amongst practitioners but there are very few convergence results available for these methods. By combining semigroup techniques with an original coupling argument, we obtain functional central limit theorems and uniform exponential concentration estimates for these algorithms.
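The adaptive-resampling scheme analyzed in the abstract above (resample only when the effective sample size falls below a threshold) can be sketched in a few lines. This is a minimal illustration: the particle representation, the log-weight update, and the 0.5·N threshold are generic assumptions, not details taken from the paper.

```python
import numpy as np

def effective_sample_size(log_w):
    """ESS computed from (unnormalized) log-weights."""
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)

def smc_step(particles, log_w, log_target_ratio, rng, ess_frac=0.5):
    """One SMC step: reweight, then resample only if ESS < ess_frac * N.

    `particles` is a NumPy array; `log_target_ratio` returns the log of the
    incremental importance weight for each particle.
    """
    n = len(particles)
    log_w = log_w + log_target_ratio(particles)      # importance reweighting
    if effective_sample_size(log_w) < ess_frac * n:  # online criterion
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        idx = rng.choice(n, size=n, p=w)             # multinomial resampling
        particles, log_w = particles[idx], np.zeros(n)
    return particles, log_w
```

After a resampling event the weights are reset to uniform (zero in log space), which is exactly the event whose random timing complicates the convergence analysis treated in the paper.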
AVATAR -- Automatic variance reduction in Monte Carlo calculations
Energy Technology Data Exchange (ETDEWEB)
Van Riper, K.A.; Urbatsch, T.J.; Soran, P.D. [and others]
1997-05-01
AVATAR™ (Automatic Variance And Time of Analysis Reduction), accessed through the graphical user interface application Justine™, is a superset of MCNP™ that automatically invokes THREEDANT™ for a three-dimensional deterministic adjoint calculation on a mesh independent of the Monte Carlo geometry, calculates weight windows, and runs MCNP. Computational efficiency increases by a factor of 2 to 5 for a three-detector oil-well logging tool model. Human efficiency increases dramatically, since AVATAR eliminates the need for deep intuition and hours of tedious handwork.
Computed radiography simulation using the Monte Carlo code MCNPX
Energy Technology Data Exchange (ETDEWEB)
Correa, S.C.A. [Programa de Engenharia Nuclear/COPPE, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil); Centro Universitario Estadual da Zona Oeste (CCMAT)/UEZO, Av. Manuel Caldeira de Alvarenga, 1203, Campo Grande, 23070-200, Rio de Janeiro, RJ (Brazil); Souza, E.M. [Programa de Engenharia Nuclear/COPPE, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil); Silva, A.X., E-mail: ademir@con.ufrj.b [PEN/COPPE-DNC/Poli CT, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil); Cassiano, D.H. [Instituto de Radioprotecao e Dosimetria/CNEN Av. Salvador Allende, s/n, Recreio, 22780-160, Rio de Janeiro, RJ (Brazil); Lopes, R.T. [Programa de Engenharia Nuclear/COPPE, Universidade Federal do Rio de Janeiro, Ilha do Fundao, Caixa Postal 68509, 21945-970, Rio de Janeiro, RJ (Brazil)
2010-09-15
Simulating X-ray images has been of great interest in recent years, as it enables analysis of how X-ray images are affected by relevant operating parameters. In this paper, a procedure for simulating computed radiographic images using the Monte Carlo code MCNPX is proposed. The sensitivity curve of the BaFBr image plate detector, as well as the characteristic noise of a 16-bit computed radiography system, were considered during the methodology's development. The results obtained confirm that the proposed procedure for simulating computed radiographic images is satisfactory, as it yields results comparable with experimental data.
Monte Carlo based radial shield design of typical PWR reactor
Energy Technology Data Exchange (ETDEWEB)
Gul, Anas; Khan, Rustam; Qureshi, M. Ayub; Azeem, Muhammad Waqar; Raza, S.A. [Pakistan Institute of Engineering and Applied Sciences, Islamabad (Pakistan). Dept. of Nuclear Engineering; Stummer, Thomas [Technische Univ. Wien (Austria). Atominst.
2016-11-15
Neutron and gamma flux and dose equivalent rate distributions are analysed in the radial shields of a typical PWR-type reactor using the Monte Carlo radiation transport code MCNP5. The ENDF/B-VI continuous-energy cross-section library was employed for the criticality and shielding analysis. The computed results are in good agreement with the reference results (the maximum difference is less than 56%). This implies that MCNP5 is a good tool for accurate prediction of neutron and gamma fluxes and dose rates in the radial shield around the core of PWR-type reactors.
Finding organic vapors - a Monte Carlo approach
Vuollekoski, Henri; Boy, Michael; Kerminen, Veli-Matti; Kulmala, Markku
2010-05-01
drawbacks in accuracy, the inability to find diurnal variation and the lack of size resolution. Here, we aim to shed some light onto the problem by applying an ad hoc Monte Carlo algorithm to a well established aerosol dynamical model, the University of Helsinki Multicomponent Aerosol model (UHMA). By performing a side-by-side comparison with measurement data within the algorithm, this approach has the significant advantage of decreasing the amount of manual labor. But more importantly, by basing the comparison on particle number size distribution data - a quantity that can be quite reliably measured - the accuracy of the results is good.
Analysis and Monte Carlo Simulation of Risk of Mortgage-Backed Security
Institute of Scientific and Technical Information of China (English)
张薇; 李然; 韩佳鸣; 李纯青
2015-01-01
Considering the influence on mortgage-backed securities of the randomness of interest-rate volatility and of the relatively active prepayment behavior existing in China, this paper studies the cash-flow changes of mortgage-backed securities using Monte Carlo simulation. Through analysis of the factors influencing mortgage-backed security prices (the interest rate, the prepayment rate, and the default rate), a CIR stochastic interest-rate model and an ARIMA prepayment-rate model suited to China's conditions are established. The study shows that Monte Carlo simulation can generate random interest-rate paths, and that the future cash flow of the mortgage-backed securities first increases and then decreases.
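A random interest-rate path under the CIR model of the kind used above can be generated with a simple Euler scheme. This is a sketch under illustrative assumptions: the full-truncation discretization and the parameter values in the test are generic choices, not those estimated in the paper.

```python
import numpy as np

def simulate_cir_path(r0, kappa, theta, sigma, dt, n_steps, rng):
    """Euler (full-truncation) simulation of the CIR short-rate model:
       dr = kappa * (theta - r) dt + sigma * sqrt(r) dW."""
    r = np.empty(n_steps + 1)
    r[0] = r0
    for t in range(n_steps):
        rp = max(r[t], 0.0)                # truncate so sqrt stays real
        dw = rng.normal(0.0, np.sqrt(dt))  # Brownian increment
        r[t + 1] = r[t] + kappa * (theta - rp) * dt + sigma * np.sqrt(rp) * dw
    return r
```

Discounting simulated security cash flows along many such paths is what produces the Monte Carlo price distribution discussed in the abstract.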
Xu, Gaoqi; Zhu, Liqin; Ge, Tingyue; Liao, Shasha; Li, Na; Qi, Fang
2016-06-01
The objective of this study was to investigate the cumulative fraction of response of various voriconazole dosing regimens against six Candida and six Aspergillus spp. in immunocompromised children, immunocompromised adolescents, and adults. Using pharmacokinetic parameters and pharmacodynamic data, 5000-subject Monte Carlo simulations (MCSs) were conducted to evaluate the ability of simulated dosing strategies in terms of fAUC/MIC targets of voriconazole. According to the results of the MCSs, current voriconazole dosage regimens were all effective for children, adolescents and adults against Candida albicans, Candida parapsilosis and Candida orthopsilosis. For adults, dosing regimens of 4 mg/kg intravenously every 12 h (q12h) and 300 mg orally q12h were sufficient to treat fungal infections by six Candida spp. (C. albicans, C. parapsilosis, Candida tropicalis, Candida glabrata, Candida krusei and C. orthopsilosis) and five Aspergillus spp. (Aspergillus fumigatus, Aspergillus flavus, Aspergillus terreus, Aspergillus niger and Aspergillus nidulans). However, higher doses should be recommended for children and adolescents in order to achieve better clinical efficacy against A. fumigatus and A. nidulans. The current voriconazole dosage regimens were all ineffective against A. niger for children and adolescents, and none of the dosage regimens was optimal against Aspergillus versicolor. This is the first study to evaluate clinical therapy of various voriconazole dosing regimens against Candida and Aspergillus spp. infections in children, adolescents and adults using MCS. The pharmacokinetic/pharmacodynamic-based dosing strategy provides a theoretical rationale for identifying optimal voriconazole dosage regimens in children, adolescents and adults in order to maximise clinical response and minimise the probability of exposure-related toxicity.
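The fAUC/MIC-based Monte Carlo calculation above combines a probability of target attainment (PTA) at each MIC with the MIC frequency distribution to give the cumulative fraction of response (CFR). The sketch below shows the arithmetic only: the log-normal exposure distribution, the MIC frequencies, and the target value of 25 are made-up illustrations, not the study's data.

```python
import numpy as np

def cumulative_fraction_of_response(fauc_samples, mic_values, mic_freqs, target=25.0):
    """CFR: weight the probability of target attainment (fAUC/MIC >= target)
    at each MIC by that MIC's frequency in the pathogen population."""
    pta = [(fauc_samples / mic >= target).mean() for mic in mic_values]
    return float(np.dot(pta, mic_freqs))

rng = np.random.default_rng(42)
# hypothetical log-normal fAUC distribution for one simulated dosing regimen
fauc = rng.lognormal(mean=np.log(40.0), sigma=0.4, size=5000)
# hypothetical MIC distribution (frequencies sum to 1)
mics = np.array([0.06, 0.125, 0.25, 0.5, 1.0, 2.0])
freqs = np.array([0.30, 0.25, 0.20, 0.15, 0.07, 0.03])
cfr = cumulative_fraction_of_response(fauc, mics, freqs)
```

A regimen is typically considered adequate empirical therapy when the CFR exceeds some threshold (often 90%); comparing CFRs across simulated regimens is how the dosing recommendations above are derived.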
Kong, Linghan; Wang, Weizong; Murphy, Anthony B.; Xia, Guangqing
2017-04-01
Microdischarges are an important type of plasma discharge possessing several unique characteristics, such as a stable glow discharge, high plasma density and intense excimer radiation, leading to several potential applications. The intense and controllable gas heating within the extremely small dimensions of microdischarges has been exploited in micro-thruster technologies by incorporating a micro-nozzle to generate thrust. This kind of micro-thruster has significantly improved specific-impulse performance compared to conventional cold gas thrusters, and can meet the requirements arising from the emerging development and application of micro-spacecraft. In this paper, we perform a self-consistent 2D particle-in-cell simulation, with a Monte Carlo collision model, of a microdischarge operating in a prototype micro-plasma thruster with a hollow cylinder geometry and a divergent micro-nozzle. The model takes into account thermionic electron emission including the Schottky effect, secondary electron emission due to cathode bombardment by the plasma ions, several different collision processes, and a non-uniform argon background gas density in the cathode–anode gap. Results in the high-pressure (several hundred Torr), high-current (mA) operating regime showing the behavior of the plasma density, potential distribution, and energy flux towards the hollow cathode and anode are presented and discussed. In addition, simulations showing the effect of different argon gas pressures, cathode material work function and discharge voltage on the operation of the microdischarge thruster are presented. Our calculated properties are compared with experimental data under similar conditions, and both qualitative and quantitative agreement is reached.
Monte Carlo simulations of ABC stacked kagome lattice films
Yerzhakov, H. V.; Plumer, M. L.; Whitehead, J. P.
2016-05-01
Properties of films of geometrically frustrated ABC stacked antiferromagnetic kagome layers are examined using Metropolis Monte Carlo simulations. The impact of having an easy-axis anisotropy on the surface layers and cubic anisotropy in the interior layers is explored. The spin structure at the surface is shown to be different from that of the bulk 3D fcc system, where surface axial anisotropy tends to align spins along the surface [1 1 1] normal axis. This alignment then propagates only weakly to the interior layers through exchange coupling. Results are shown for the specific heat, magnetization and sub-lattice order parameters for both surface and interior spins in three and six layer films as a function of increasing axial surface anisotropy. Relevance to the exchange bias phenomenon in IrMn3 films is discussed.
Zhou, Chenggang; Landau, D. P.; Schulthess, Thomas C.
2006-01-01
By considering the appropriate finite-size effect, we explain the connection between Monte Carlo simulations of the two-dimensional anisotropic Heisenberg antiferromagnet in a field and the early renormalization-group calculation for the bicritical point in $2+\epsilon$ dimensions. We find that the long-length-scale physics of the Monte Carlo simulations is indeed captured by the anisotropic nonlinear $\sigma$ model. Our Monte Carlo data and analysis confirm that the bicritical point in two dime...
Status of Monte-Carlo Event Generators
Energy Technology Data Exchange (ETDEWEB)
Hoeche, Stefan; /SLAC
2011-08-11
Recent progress on general-purpose Monte-Carlo event generators is reviewed with emphasis on the simulation of hard QCD processes and subsequent parton cascades. Describing full final states of high-energy particle collisions in contemporary experiments is an intricate task. Hundreds of particles are typically produced, and the reactions involve both large and small momentum transfer. The high-dimensional phase space makes an exact solution of the problem impossible. Instead, one typically resorts to regarding events as factorized into different steps, ordered descending in the mass scales or invariant momentum transfers which are involved. In this picture, a hard interaction, described through fixed-order perturbation theory, is followed by multiple Bremsstrahlung emissions off the initial- and final-state partons and, finally, by the hadronization process, which binds QCD partons into color-neutral hadrons. Each of these steps can be treated independently, which is the basic concept inherent to general-purpose event generators. Their development is nowadays often focused on an improved description of radiative corrections to hard processes through perturbative QCD. In this context, the concept of jets is introduced, which allows one to relate sprays of hadronic particles in detectors to the partons in perturbation theory. In this talk, we briefly review recent progress on perturbative QCD in event generation. The main focus lies on the general-purpose Monte-Carlo programs HERWIG, PYTHIA and SHERPA, which will be the workhorses for LHC phenomenology. A detailed description of the physics models included in these generators can be found in [8]. We also discuss matrix-element generators, which provide the parton-level input for general-purpose Monte Carlo.
Mosaic crystal algorithm for Monte Carlo simulations
Seeger, P A
2002-01-01
An algorithm is presented for calculating reflectivity, absorption, and scattering of mosaic crystals in Monte Carlo simulations of neutron instruments. The algorithm uses multi-step transport through the crystal with an exact solution of the Darwin equations at each step. It relies on the kinematical model for Bragg reflection (with parameters adjusted to reproduce experimental data). For computation of thermal effects (the Debye-Waller factor and coherent inelastic scattering), an expansion of the Debye integral as a rapidly converging series of exponential terms is also presented. Any crystal geometry and plane orientation may be treated. The algorithm has been incorporated into the neutron instrument simulation package NISP. (orig.)
A note on simultaneous Monte Carlo tests
DEFF Research Database (Denmark)
Hahn, Ute
In this short note, Monte Carlo tests of goodness of fit for data of the form X(t), t ∈ I are considered, which reject the null hypothesis if X(t) leaves an acceptance region bounded by an upper and lower curve for some t in I. A construction of the acceptance region is proposed that complies with a given target level of rejection and yields exact p-values. The construction is based on pointwise quantiles, estimated from simulated realizations of X(t) under the null hypothesis.
A Monte Carlo algorithm for degenerate plasmas
Energy Technology Data Exchange (ETDEWEB)
Turrell, A.E., E-mail: a.turrell09@imperial.ac.uk; Sherlock, M.; Rose, S.J.
2013-09-15
A procedure for performing Monte Carlo calculations of plasmas with an arbitrary level of degeneracy is outlined. It has possible applications in inertial confinement fusion and astrophysics. Degenerate particles are initialised according to the Fermi–Dirac distribution function, and scattering is via a Pauli blocked binary collision approximation. The algorithm is tested against degenerate electron–ion equilibration, and the degenerate resistivity transport coefficient from unmagnetised first order transport theory. The code is applied to the cold fuel shell and alpha particle equilibration problem of inertial confinement fusion.
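Initialising particle energies according to the Fermi–Dirac distribution, as in the procedure above, can be done by rejection sampling against the density of states weight g(E) ∝ √E/(exp((E−μ)/T)+1). This is a sketch in normalized units; the chemical potential, temperature, and envelope construction are illustrative assumptions, not the paper's algorithm details.

```python
import numpy as np

def fd_weight(e, mu, temp):
    """Unnormalized Fermi-Dirac energy density: sqrt(E) / (exp((E-mu)/T) + 1)."""
    x = np.minimum((e - mu) / temp, 700.0)  # clip exponent to avoid overflow
    return np.sqrt(e) / (np.exp(x) + 1.0)

def sample_fermi_dirac_energies(n, mu, temp, rng, e_max_factor=20.0):
    """Rejection-sample n particle energies from the Fermi-Dirac weight."""
    e_max = e_max_factor * max(mu, temp)
    # approximate envelope height from a fine grid (adequate for a sketch)
    grid = np.linspace(1e-6, e_max, 2000)
    g_max = fd_weight(grid, mu, temp).max()
    out = np.empty(n)
    filled = 0
    while filled < n:
        e = rng.uniform(0.0, e_max, size=n)      # uniform proposals
        u = rng.uniform(0.0, g_max, size=n)
        acc = e[u <= fd_weight(e, mu, temp)]     # accept under the curve
        take = min(n - filled, len(acc))
        out[filled:filled + take] = acc[:take]
        filled += take
    return out
```

For a strongly degenerate population (T ≪ μ) the sampled mean energy approaches (3/5)μ, the zero-temperature Fermi gas result, which is a convenient sanity check on the sampler.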
Archimedes, the Free Monte Carlo simulator
Sellier, Jean Michel D
2012-01-01
Archimedes is the GNU package for Monte Carlo simulations of electron transport in semiconductor devices. The first release appeared in 2004, and since then it has been improved with many new features such as quantum corrections, magnetic fields, new materials, a GUI, etc. This document represents the first attempt at a complete manual. Many of the physics models implemented are described, and a detailed description is presented to enable the user to write his/her own input deck. Please feel free to contact the author if you want to contribute to the project.
Cluster hybrid Monte Carlo simulation algorithms
Plascak, J. A.; Ferrenberg, Alan M.; Landau, D. P.
2002-06-01
We show that addition of Metropolis single spin flips to the Wolff cluster-flipping Monte Carlo procedure leads to a dramatic increase in performance for the spin-1/2 Ising model. We also show that adding Wolff cluster flipping to the Metropolis or heat bath algorithms in systems where just cluster flipping is not immediately obvious (such as the spin-3/2 Ising model) can substantially reduce the statistical errors of the simulations. A further advantage of these methods is that systematic errors introduced by the use of imperfect random-number generation may be largely healed by hybridizing single spin flips with cluster flipping.
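A minimal sketch of such a hybrid update for the spin-1/2 Ising model on a periodic square lattice, combining one Wolff cluster flip with one Metropolis sweep; the lattice size, temperature, and update ordering below are arbitrary illustrative choices.

```python
import numpy as np

def metropolis_sweep(spins, beta, rng):
    """One sweep of single-spin-flip Metropolis updates (periodic 2D Ising)."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nb                  # energy cost of flipping
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1

def wolff_cluster_flip(spins, beta, rng):
    """One Wolff flip: grow a cluster with bond probability 1 - exp(-2*beta)."""
    L = spins.shape[0]
    p_add = 1.0 - np.exp(-2.0 * beta)
    seed = tuple(rng.integers(L, size=2))
    s0 = spins[seed]
    cluster, stack = {seed}, [seed]
    while stack:
        i, j = stack.pop()
        for ni, nj in (((i + 1) % L, j), ((i - 1) % L, j),
                       (i, (j + 1) % L), (i, (j - 1) % L)):
            if (ni, nj) not in cluster and spins[ni, nj] == s0 and rng.random() < p_add:
                cluster.add((ni, nj))
                stack.append((ni, nj))
    for i, j in cluster:                             # flip the whole cluster
        spins[i, j] *= -1

def hybrid_step(spins, beta, rng):
    """Hybrid update: one Wolff cluster flip followed by one Metropolis sweep."""
    wolff_cluster_flip(spins, beta, rng)
    metropolis_sweep(spins, beta, rng)
```

Both update types satisfy detailed balance individually, so their composition samples the same Boltzmann distribution; interleaving them is the mechanism by which, as the abstract notes, imperfections of one update type can be "healed" by the other.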
Introduction to Cluster Monte Carlo Algorithms
Luijten, E.
This chapter provides an introduction to cluster Monte Carlo algorithms for classical statistical-mechanical systems. A brief review of the conventional Metropolis algorithm is given, followed by a detailed discussion of the lattice cluster algorithm developed by Swendsen and Wang and the single-cluster variant introduced by Wolff. For continuum systems, the geometric cluster algorithm of Dress and Krauth is described. It is shown how their geometric approach can be generalized to incorporate particle interactions beyond hardcore repulsions, thus forging a connection between the lattice and continuum approaches. Several illustrative examples are discussed.
Energy Technology Data Exchange (ETDEWEB)
Marcus, Ryan C. [Los Alamos National Laboratory
2012-07-24
This presentation covers: (1) exascale computing - the different technologies and how to get there; (2) the high-performance proof-of-concept MCMini - features and results; and (3) the OpenCL toolkit Oatmeal (OpenCL Automatic Memory Allocation Library) - purpose and features. Despite driver issues, OpenCL appears to be a good, hardware-agnostic tool. MCMini demonstrates the feasibility of GPGPU-based Monte Carlo methods - it shows excellent scaling for HPC applications and algorithmic equivalence. Oatmeal provides a flexible framework to aid in the development of scientific OpenCL codes.
State-of-the-art Monte Carlo 1988
Energy Technology Data Exchange (ETDEWEB)
Soran, P.D.
1988-06-28
Particle transport calculations in highly dimensional and physically complex geometries, such as detector calibration, radiation shielding, space reactors, and oil-well logging, generally require Monte Carlo transport techniques. Monte Carlo particle transport can be performed on a variety of computers ranging from APOLLOs to VAXs. Some of the hardware and software developments, which now permit Monte Carlo methods to be routinely used, are reviewed in this paper. The development of inexpensive, large, fast computer memory, coupled with fast central processing units, permits Monte Carlo calculations to be performed on workstations, minicomputers, and supercomputers. The Monte Carlo renaissance is further aided by innovations in computer architecture and software development. Advances in vectorization and parallelization architecture have resulted in the development of new algorithms which have greatly reduced processing times. Finally, the renewed interest in Monte Carlo has spawned new variance reduction techniques which are being implemented in large computer codes. 45 refs.
Directory of Open Access Journals (Sweden)
José Luiz Ferreira Martins
2011-09-01
The aim of this article is to analyze the feasibility of using the Monte Carlo method to estimate productivity in the welding of carbon-steel industrial piping based on small samples. The study was carried out through the analysis of a reference sample containing productivity data for 160 joints welded by the shielded metal arc welding (SMAW) process at REDUC (Duque de Caxias refinery), using the ControlTub 5.3 software. From these data, samples of 10, 15 and 20 elements were drawn at random, and Monte Carlo simulations were executed. Comparing the results of the 160-element sample with the data generated by simulation shows that good results can be obtained using the Monte Carlo method to estimate welding productivity. In the Brazilian construction industry, by contrast, the average productivity value is normally used as a productivity indicator; it is based on historical data from other projects, collected and evaluated only after project completion, which is a limitation. This article presents a tool for real-time evaluation of execution, allowing adjustments to estimates and monitoring of productivity during the project. Likewise, in bidding, budgeting and schedule estimation, this technique permits the adoption of estimates other than the commonly used average productivity; as an alternative, three criteria are suggested: optimistic, average and pessimistic productivity.
Vervaeke, Michael; Lahti, Markku; Karpinnen, Mikko; Debaes, Christof; Volckaerts, Bart; Karioja, Pentti; Thienpont, Hugo
2006-04-01
In this paper we give an overview of the fabrication- and assembly-induced performance degradation of an intra-multi-chip-module free-space optical interconnect, integrating micro-lenses and a deflection prism above a dense opto-electronic chip. The proposed component is used to demonstrate the capabilities of an accurate micro-optical rapid prototyping technique, namely Deep Proton Writing (DPW). To evaluate the accuracy of DPW and to assess whether our assembly scheme will provide a reasonable process yield, we built a simulation framework combining mechanical Monte Carlo analysis with optical simulations. Both the technological requirements to ensure a high process yield and the specifications of our in-house DPW technology are discussed. We first conduct a sensitivity analysis and subsequently simulate the effect of combined errors using a Monte Carlo simulation. We are able to investigate the effect of improved technology accuracy on the fabrication and assembly yield by scaling the standard deviation of the errors proportionally to each sensitivity interval. We estimate that 40% of the systems fabricated with DPW will show an optical transmission efficiency above -4.32 dB, which is 3 dB below the theoretically obtainable value. We also discuss our efforts to implement an opto-mechanical Monte Carlo simulator. It enables us to address specific issues not directly related to the micro-optical or DPW components, such as the influence of glueing layers and structures that allow for self-alignment, by combining mechanical tolerancing algorithms with optical simulation software. In particular, we determined that DPW provides ample accuracy to meet the requirements for a high manufacturing yield. Finally, we briefly highlight the basic layout of a completed demonstrator. The adhesive bonding of opto-electronic devices in their package is subject to further improvement to enhance the tilt accuracy of the devices with
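The mechanical Monte Carlo tolerancing described above amounts to drawing random alignment errors, mapping each assembled system to an optical penalty, and counting the fraction above a transmission threshold. The sketch below uses a hypothetical quadratic loss model and made-up numbers as a stand-in for the real ray-traced transmission calculation.

```python
import numpy as np

def estimate_yield(n_trials, sigmas, loss_per_um2, threshold_db, rng):
    """Toy tolerancing Monte Carlo: draw zero-mean Gaussian errors for each
    assembled system (one column per tolerance parameter), map them to an
    insertion-loss penalty, and report the yield below the loss threshold."""
    errors = rng.normal(0.0, sigmas, size=(n_trials, len(sigmas)))
    # hypothetical quadratic model: each squared error adds dB of loss
    loss_db = np.sum(loss_per_um2 * errors ** 2, axis=1)
    return float(np.mean(loss_db <= threshold_db))
```

Rerunning the estimate with the standard deviations scaled down mimics the technology-accuracy-enhancement study in the abstract: a tighter process directly raises the predicted yield.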
Discrete diffusion Monte Carlo for frequency-dependent radiative transfer
Energy Technology Data Exchange (ETDEWEB)
Densmore, Jeffrey D [Los Alamos National Laboratory]; Thompson, Kelly G [Los Alamos National Laboratory]; Urbatsch, Todd J [Los Alamos National Laboratory]
2010-11-17
Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations. In this paper, we develop an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold. Above this threshold we employ standard Monte Carlo. With a frequency-dependent test problem, we confirm the increased efficiency of our new DDMC technique.
Alternative Monte Carlo Approach for General Global Illumination
Institute of Scientific and Technical Information of China (English)
徐庆; 李朋; 徐源; 孙济洲
2004-01-01
An alternative Monte Carlo strategy for the computation of the global illumination problem is presented. The proposed approach provides a new and optimal way of solving Monte Carlo global illumination based on the zero-variance importance sampling procedure. A new importance-driven Monte Carlo global illumination algorithm was developed and implemented in the framework of the new computing scheme. Results obtained by rendering test scenes show that this new framework and the newly derived algorithm are effective and promising.
A Simple Monte Carlo Method for Locating the Three-dimensional Critical Slip Surface of a Slope
Institute of Scientific and Technical Information of China (English)
XIE Mowen
2004-01-01
Based on the plane-strain assumption, various optimization and random search methods have been developed for locating critical slip surfaces in slope-stability analysis, but none of these methods is applicable to the 3D case. In this paper, a simple Monte Carlo random simulation method is proposed to identify the 3D critical slip surface. Assuming the initial slip to be the lower part of a slip ellipsoid, the 3D critical slip surface is located by minimizing the 3D safety factor. A column-based 3D slope-stability analysis model is used to calculate this factor. In this study, some practical cases with known minimum safety factors and critical slip surfaces in 2D analysis are extended to 3D slope problems to locate the critical slip surfaces. Compared with the 2D result, the resulting 3D critical slip surface shows no apparent difference in cross section, but the associated 3D safety factor is definitely higher.
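The Monte Carlo search described above reduces to sampling candidate slip-ellipsoid parameters at random and keeping the minimum factor of safety. In this sketch, the `safety_factor` callback and its parameter bounds are placeholders for the column-based 3D stability calculation, which is not reproduced here.

```python
import numpy as np

def monte_carlo_critical_surface(safety_factor, bounds, n_trials, rng):
    """Random search for the critical slip surface: sample candidate ellipsoid
    parameters uniformly within their bounds and keep the minimum safety factor.

    `bounds` is a sequence of (low, high) pairs, one per parameter
    (e.g. ellipsoid centre coordinates and semi-axes)."""
    lo, hi = np.asarray(bounds, dtype=float).T
    best_fs, best_params = np.inf, None
    for _ in range(n_trials):
        params = rng.uniform(lo, hi)       # one candidate slip ellipsoid
        fs = safety_factor(params)
        if fs < best_fs:                   # keep the most critical surface
            best_fs, best_params = fs, params
    return best_fs, best_params
```

For a real slope, `safety_factor` would evaluate the column-based 3D model for the ellipsoid defined by `params`; the toy quadratic used in the test merely checks that the search converges toward a known minimum.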
Energy Technology Data Exchange (ETDEWEB)
Pacilio, M.; Lanconelli, N.; Lo Meo, S.; Betti, M.; Montani, L.; Torres Aroche, L. A.; Coca Perez, M. A. [Department of Medical Physics, Azienda Ospedaliera S. Camillo Forlanini, Piazza Forlanini 1, Rome 00151 (Italy); Department of Physics, Alma Mater Studiorum University of Bologna, Viale Berti-Pichat 6/2, Bologna 40127 (Italy); Department of Medical Physics, Azienda Ospedaliera S. Camillo Forlanini, Piazza Forlanini 1, Rome 00151 (Italy); Department of Medical Physics, Azienda Ospedaliera Sant' Andrea, Via di Grotarossa 1035, Rome 00189 (Italy); Department of Medical Physics, Center for Clinical Researches, Calle 34 North 4501, Havana 11300 (Cuba)
2009-05-15
Several updated Monte Carlo (MC) codes are available to perform calculations of voxel S values for radionuclide targeted therapy. The aim of this work is to analyze the differences in the calculations obtained by different MC codes and their impact on absorbed dose evaluations performed by voxel dosimetry. Voxel S values for monoenergetic sources (electrons and photons) and different radionuclides (90Y, 131I, and 188Re) were calculated. Simulations were performed in soft tissue. Three general-purpose MC codes were employed for simulating radiation transport: MCNP4C, EGSnrc, and GEANT4. The data published by the MIRD Committee in Pamphlet No. 17, obtained with the EGS4 MC code, were also included in the comparisons. The impact of the differences (in terms of voxel S values) among the MC codes was also studied by convolution calculations of the absorbed dose in a volume of interest. For a uniform activity distribution of a given radionuclide, dose calculations were performed on spherical and elliptical volumes, varying the mass from 1 to 500 g. For simulations with monochromatic sources, differences in self-irradiation voxel S values were mostly confined within 10% for both photons and electrons, but for electron energies below 500 keV the voxel S values for the first-neighbor voxels showed large differences (up to 130% with respect to EGSnrc) among the updated MC codes. For radionuclide simulations, noticeable differences arose in the voxel S values, especially in the bremsstrahlung tails, or when a high contribution from electrons with energy below 500 keV is involved. In particular, for 90Y the updated codes showed a remarkable divergence in the bremsstrahlung region (up to about 90% in terms of voxel S values) with respect to the EGS4 code. Further, variations of up to about 30% were observed for small source-target voxel distances when low-energy electrons cover an important part of the emission spectrum of the radionuclide
Buffa, F M; Nahum, A E
2000-10-01
The aim of this work is to investigate the influence of the statistical fluctuations of Monte Carlo (MC) dose distributions on the dose volume histograms (DVHs) and radiobiological models, in particular the Poisson model for tumour control probability (tcp). The MC matrix is characterized by a mean dose in each scoring voxel, d, and a statistical error on the mean dose, sigma(d); whilst the quantities d and sigma(d) depend on many statistical and physical parameters, here we consider only their dependence on the phantom voxel size and the number of histories from the radiation source. Dose distributions from high-energy photon beams have been analysed. It has been found that the DVH broadens when increasing the statistical noise of the dose distribution, and the tcp calculation systematically underestimates the real tumour control value, defined here as the value of tumour control when the statistical error of the dose distribution tends to zero. When increasing the number of energy deposition events, either by increasing the voxel dimensions or increasing the number of histories from the source, the DVH broadening decreases and tcp converges to the 'correct' value. It is shown that the underestimation of the tcp due to the noise in the dose distribution depends on the degree of heterogeneity of the radiobiological parameters over the population; in particular this error decreases with increasing the biological heterogeneity, whereas it becomes significant in the hypothesis of a radiosensitivity assay for single patients, or for subgroups of patients. It has been found, for example, that when the voxel dimension is changed from a cube with sides of 0.5 cm to a cube with sides of 0.25 cm (with a fixed number of histories of 10(8) from the source), the systematic error in the tcp calculation is about 75% in the homogeneous hypothesis, and it decreases to a minimum value of about 15% in a case of high radiobiological heterogeneity. The possibility of using the error
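The noise-induced underestimation of the Poisson tcp described above can be sketched numerically: adding zero-mean statistical noise to a uniform dose distribution systematically lowers the computed tcp, because the surviving-fraction term is a convex function of dose. The radiosensitivity, clonogen number, and noise level below are hypothetical choices, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(0)

def tcp_poisson(dose, alpha=0.3, clonogens_per_voxel=1e4):
    """Poisson tcp: exp(-sum over voxels of N_i * SF_i), SF = exp(-alpha*d)."""
    surviving = clonogens_per_voxel * np.exp(-alpha * dose)
    return float(np.exp(-np.sum(surviving)))

d_true = np.full(1000, 60.0)        # uniform 60 Gy over 1000 voxels
tcp_true = tcp_poisson(d_true)      # noise-free ("real") tumour control value

# Same mean dose, but with MC statistical noise (sigma = 2 Gy per voxel):
tcps_noisy = [tcp_poisson(d_true + rng.normal(0.0, 2.0, d_true.size))
              for _ in range(50)]
tcp_noisy = float(np.mean(tcps_noisy))
# Noise broadens the effective DVH and systematically lowers the tcp.
```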
Use of Monte Carlo methods in environmental risk assessments at the INEL: Applications and issues
Energy Technology Data Exchange (ETDEWEB)
Harris, G.; Van Horn, R.
1996-06-01
The EPA is increasingly considering the use of probabilistic risk assessment techniques as an alternative or refinement of the current point estimate of risk. This report provides an overview of the probabilistic technique called Monte Carlo Analysis. Advantages and disadvantages of implementing a Monte Carlo analysis over a point estimate analysis for environmental risk assessment are discussed. The general methodology is provided along with an example of its implementation. A phased approach to risk analysis that allows iterative refinement of the risk estimates is recommended for use at the INEL.
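A toy contrast between a compounded-conservatism point estimate and a Monte Carlo risk distribution, in the spirit of the methodology discussed above. The exposure model and every parameter distribution are invented for illustration only.

```python
import random

random.seed(42)

# Hypothetical exposure model: risk = concentration * intake_rate * toxicity.
# The point estimate stacks conservative upper bounds on every parameter.
point_risk = 10.0 * 2.0 * 0.05          # all parameters at their maxima

# Monte Carlo: sample each parameter from its assumed distribution instead.
n = 100_000
risks = []
for _ in range(n):
    conc = random.uniform(1.0, 10.0)             # mg/L
    intake = random.triangular(0.5, 2.0, 1.0)    # L/day, mode 1.0
    tox = random.uniform(0.01, 0.05)             # risk per (mg/day)
    risks.append(conc * intake * tox)

risks.sort()
p95 = risks[int(0.95 * n)]   # 95th percentile of the risk distribution
```

The 95th percentile of the sampled risk lies well below the point estimate, illustrating why a probabilistic analysis can refine a stack of worst-case assumptions.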
Multiple Monte Carlo Testing with Applications in Spatial Point Processes
DEFF Research Database (Denmark)
Mrkvička, Tomáš; Myllymäki, Mari; Hahn, Ute
The rank envelope test (Myllymäki et al., Global envelope tests for spatial processes, arXiv:1307.0239 [stat.ME]) is proposed as a solution to the multiple testing problem for Monte Carlo tests. Three different situations are recognized: 1) a few univariate Monte Carlo tests, 2) a Monte Carlo test with a function as the test statistic, 3) several Monte Carlo tests with functions as test statistics. The rank test has correct (global) type I error in each case and it is accompanied with a p-value and with a graphical interpretation which shows which subtest or which distances of the used test function...
Discrete range clustering using Monte Carlo methods
Chatterji, G. B.; Sridhar, B.
1993-01-01
For automatic obstacle avoidance guidance during rotorcraft low altitude flight, a reliable model of the nearby environment is needed. Such a model may be constructed by applying surface fitting techniques to the dense range map obtained by active sensing using radars. However, for covertness, passive sensing techniques using electro-optic sensors are desirable. As opposed to the dense range map obtained via active sensing, passive sensing algorithms produce reliable range at sparse locations, and therefore, surface fitting techniques to fill the gaps in the range measurement are not directly applicable. Both for automatic guidance and as a display for aiding the pilot, these discrete ranges need to be grouped into sets which correspond to objects in the nearby environment. The focus of this paper is on using Monte Carlo methods for clustering range points into meaningful groups. One of the aims of the paper is to explore whether simulated annealing methods offer significant advantage over the basic Monte Carlo method for this class of problems. We compare three different approaches and present application results of these algorithms to a laboratory image sequence and a helicopter flight sequence.
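A minimal sketch of Monte Carlo clustering with simulated annealing, in the spirit of the approach described above: single-point label moves are accepted by the Metropolis rule under a geometric cooling schedule, minimizing within-cluster scatter. The sample points, cluster count, and schedule parameters are illustrative choices, not the paper's.

```python
import math
import random

random.seed(1)

def cluster_cost(points, labels, k):
    """Sum of squared distances of points to their cluster centroids."""
    cost = 0.0
    for c in range(k):
        members = [p for p, l in zip(points, labels) if l == c]
        if not members:
            continue
        cx = sum(m[0] for m in members) / len(members)
        cy = sum(m[1] for m in members) / len(members)
        cost += sum((m[0] - cx) ** 2 + (m[1] - cy) ** 2 for m in members)
    return cost

def anneal_cluster(points, k=2, steps=20000, t0=1.0, cooling=0.9995):
    labels = [random.randrange(k) for _ in points]
    cost = cluster_cost(points, labels, k)
    t = t0
    for _ in range(steps):
        i = random.randrange(len(points))
        old = labels[i]
        labels[i] = random.randrange(k)           # propose a label change
        new_cost = cluster_cost(points, labels, k)
        # Metropolis: accept uphill moves with probability exp(-dE/T)
        if new_cost > cost and random.random() >= math.exp((cost - new_cost) / t):
            labels[i] = old                       # reject
        else:
            cost = new_cost                       # accept
        t *= cooling                              # geometric cooling schedule
    return labels, cost

# Two well-separated "objects" seen as sparse range points:
pts = [(0.1, 0.2), (0.3, 0.1), (0.2, 0.4), (5.1, 5.0), (5.3, 4.8), (4.9, 5.2)]
labels, cost = anneal_cluster(pts)
```

At high temperature the chain explores freely; as the temperature drops it behaves greedily, so for well-separated groups it settles into the two-cluster optimum.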
Information Geometry and Sequential Monte Carlo
Sim, Aaron; Stumpf, Michael P H
2012-01-01
This paper explores the application of methods from information geometry to the sequential Monte Carlo (SMC) sampler. In particular the Riemannian manifold Metropolis-adjusted Langevin algorithm (mMALA) is adapted for the transition kernels in SMC. Similar to its function in Markov chain Monte Carlo methods, the mMALA is a fully adaptable kernel which allows for efficient sampling of high-dimensional and highly correlated parameter spaces. We set up the theoretical framework for its use in SMC with a focus on the application to the problem of sequential Bayesian inference for dynamical systems as modelled by sets of ordinary differential equations. In addition, we argue that defining the sequence of distributions on geodesics optimises the effective sample sizes in the SMC run. We illustrate the application of the methodology by inferring the parameters of simulated Lotka-Volterra and Fitzhugh-Nagumo models. In particular we demonstrate that compared to employing a standard adaptive random walk kernel, the SM...
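A minimal sketch of the SMC reweight-resample-move cycle the paper builds on, tempering from a wide prior to a narrow target. For brevity it uses a plain random-walk Metropolis kernel, the baseline the paper improves upon with mMALA; the toy one-dimensional prior and target are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_prior(x):        # N(0, 3^2), up to a constant
    return -0.5 * (x / 3.0) ** 2

def log_target(x):       # N(2, 0.5^2), up to a constant
    return -0.5 * ((x - 2.0) / 0.5) ** 2

n, betas = 2000, np.linspace(0.0, 1.0, 21)
x = rng.normal(0.0, 3.0, n)          # particles drawn from the prior
logw = np.zeros(n)

for b0, b1 in zip(betas[:-1], betas[1:]):
    # reweight: incremental importance weight for the tempered bridge
    logw += (b1 - b0) * (log_target(x) - log_prior(x))
    w = np.exp(logw - logw.max()); w /= w.sum()
    # resample (multinomial) and reset weights
    x = x[rng.choice(n, n, p=w)]
    logw[:] = 0.0
    # move: a few random-walk Metropolis steps targeting pi_{b1}
    for _ in range(5):
        prop = x + rng.normal(0.0, 0.5, n)
        logr = (b1 * log_target(prop) + (1 - b1) * log_prior(prop)
                - b1 * log_target(x) - (1 - b1) * log_prior(x))
        accept = np.log(rng.random(n)) < logr
        x = np.where(accept, prop, x)

est_mean = float(x.mean())   # should approach the target mean, 2.0
est_std = float(x.std())     # should approach the target sd, 0.5
```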
Quantum Monte Carlo Calculations of Neutron Matter
Carlson, J; Ravenhall, D G
2003-01-01
Uniform neutron matter is approximated by a cubic box containing a finite number of neutrons, with periodic boundary conditions. We report variational and Green's function Monte Carlo calculations of the ground state of fourteen neutrons in a periodic box using the Argonne $\\vep $ two-nucleon interaction at densities up to one and half times the nuclear matter density. The effects of the finite box size are estimated using variational wave functions together with cluster expansion and chain summation techniques. They are small at subnuclear densities. We discuss the expansion of the energy of low-density neutron gas in powers of its Fermi momentum. This expansion is strongly modified by the large nn scattering length, and does not begin with the Fermi-gas kinetic energy as assumed in both Skyrme and relativistic mean field theories. The leading term of neutron gas energy is ~ half the Fermi-gas kinetic energy. The quantum Monte Carlo results are also used to calibrate the accuracy of variational calculations ...
THE MCNPX MONTE CARLO RADIATION TRANSPORT CODE
Energy Technology Data Exchange (ETDEWEB)
WATERS, LAURIE S. [Los Alamos National Laboratory; MCKINNEY, GREGG W. [Los Alamos National Laboratory; DURKEE, JOE W. [Los Alamos National Laboratory; FENSIN, MICHAEL L. [Los Alamos National Laboratory; JAMES, MICHAEL R. [Los Alamos National Laboratory; JOHNS, RUSSELL C. [Los Alamos National Laboratory; PELOWITZ, DENISE B. [Los Alamos National Laboratory
2007-01-10
MCNPX (Monte Carlo N-Particle eXtended) is a general-purpose Monte Carlo radiation transport code with three-dimensional geometry and continuous-energy transport of 34 particles and light ions. It contains flexible source and tally options, interactive graphics, and support for both sequential and multi-processing computer platforms. MCNPX is based on MCNP4B, and has been upgraded to most MCNP5 capabilities. MCNP is a highly stable code tracking neutrons, photons and electrons, and using evaluated nuclear data libraries for low-energy interaction probabilities. MCNPX has extended this base to a comprehensive set of particles and light ions, with heavy ion transport in development. Models have been included to calculate interaction probabilities when libraries are not available. Recent additions focus on the time evolution of residual nuclei decay, allowing calculation of transmutation and delayed particle emission. MCNPX is now a code of great dynamic range, and the excellent neutronics capabilities allow new opportunities to simulate devices of interest to experimental particle physics; particularly calorimetry. This paper describes the capabilities of the current MCNPX version 2.6.C, and also discusses ongoing code development.
Chemical application of diffusion quantum Monte Carlo
Reynolds, P. J.; Lester, W. A., Jr.
1983-10-01
The diffusion quantum Monte Carlo (QMC) method gives a stochastic solution to the Schroedinger equation. As an example the singlet-triplet splitting of the energy of the methylene molecule CH2 is given. The QMC algorithm was implemented on the CYBER 205, first as a direct transcription of the algorithm running on our VAX 11/780, and second by explicitly writing vector code for all loops longer than a crossover length C. The speed of the codes relative to one another as a function of C, and relative to the VAX is discussed. Since CH2 has only eight electrons, most of the loops in this application are fairly short. The longest inner loops run over the set of atomic basis functions. The CPU time dependence obtained versus the number of basis functions is discussed and compared with that obtained from traditional quantum chemistry codes and that obtained from traditional computer architectures. Finally, preliminary work on restructuring the algorithm to compute the separate Monte Carlo realizations in parallel is discussed.
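A toy diffusion QMC sketch for a one-dimensional harmonic oscillator illustrates the stochastic solution of the Schroedinger equation mentioned above (this is a pedagogical stand-in, not the paper's vectorized CH2 calculation): walkers diffuse freely, branch according to the local potential, and the growth estimator converges to the ground-state energy, 0.5 in these units.

```python
import numpy as np

rng = np.random.default_rng(0)

def dmc_harmonic(n_walkers=4000, n_steps=500, dt=0.01):
    """Crude diffusion MC for V(x) = x^2/2; exact ground-state energy is 0.5."""
    x = rng.normal(0.0, 1.0, n_walkers)
    e_ref, trace = 0.5, []
    for step in range(n_steps):
        x = x + rng.normal(0.0, np.sqrt(dt), x.size)    # free diffusion
        v = 0.5 * x * x
        w = np.exp(-dt * (v - e_ref))                   # branching weights
        e_growth = e_ref - np.log(w.mean()) / dt        # growth estimator
        # stochastic reconfiguration: resample walkers by weight
        x = x[rng.choice(x.size, n_walkers, p=w / w.sum())]
        e_ref = e_growth                                # population control
        if step >= n_steps // 2:                        # discard equilibration
            trace.append(e_growth)
    return float(np.mean(trace))

e0 = dmc_harmonic()
```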
Quantum Monte Carlo Endstation for Petascale Computing
Energy Technology Data Exchange (ETDEWEB)
Lubos Mitas
2011-01-26
The NCSU research group has focused on accomplishing the key goals of this initiative: establishing a new generation of quantum Monte Carlo (QMC) computational tools as part of the Endstation petaflop initiative, for use at the DOE ORNL computational facilities and by the computational electronic structure community at large; carrying out high-accuracy quantum Monte Carlo demonstration projects applying these tools to forefront electronic structure problems in molecular and solid systems; and expanding, explaining, and enhancing the impact of these advanced computational approaches. In particular, we developed the quantum Monte Carlo code QWalk (www.qwalk.org), which was significantly expanded and optimized using funds from this support and has become an actively used tool in the petascale regime by ORNL researchers and beyond. These developments were built upon efforts undertaken by the PI's group and collaborators over the last decade. The code was optimized and tested extensively on a number of parallel architectures, including the petaflop ORNL Jaguar machine. We developed and redesigned a number of code modules, such as the evaluation of wave functions and orbitals, the calculation of pfaffians, and the introduction of backflow coordinates, together with the overall organization of the code and the distribution of random walkers over multicore architectures. We addressed several bottlenecks such as load balancing, and verified the efficiency and accuracy of the calculations with the other groups of the Endstation team. The QWalk package contains about 50,000 lines of high-quality object-oriented C++ and also includes interfaces to data files from other conventional electronic structure codes such as Gamess, Gaussian, Crystal, and others. This grant supported the PI for one month during summers, a full-time postdoc, and partially three graduate students over the grant duration; it has resulted in 13
Monte Carlo simulation for simultaneous particle coagulation and deposition
Institute of Scientific and Technical Information of China (English)
ZHAO; Haibo; ZHENG; Chuguang
2006-01-01
The process of dynamic evolution in dispersed systems due to simultaneous particle coagulation and deposition is described mathematically by the general dynamic equation (GDE). The Monte Carlo (MC) method is an important approach to the numerical solution of the GDE. However, the constant-volume MC method faces a trade-off between low computational cost and high precision owing to the fluctuation of the number of simulation particles, while the constant-number MC method can hardly be applied to engineering applications or general quantitative analysis because of the continual contraction or expansion of the computational domain. In addition, both MC methods depend closely on the "subsystem" hypothesis, which constrains their extensibility and scope of application. A new multi-Monte Carlo (MMC) method is proposed to treat the GDE for simultaneous particle coagulation and deposition. The MMC method introduces the concept of the "weighted fictitious particle" and is based on the "time-driven" technique. Furthermore, the MMC method maintains both the computational domain and the total number of fictitious particles, which gives the simulation latent extensibility with respect to boundary conditions, the spatial evolution of the particle size distribution, and even particle dynamics. For two special cases in which analytical solutions exist, the MMC results agree well with the analytical solutions, which shows that the MMC method achieves high and stable computational precision at low cost thanks to the constant, limited number of fictitious particles. Lastly, the sources of numerical error and the relative error of the MMC method are analyzed.
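For contrast with the constant-number and weighted-particle ideas above, the sketch below is the plain event-driven MC baseline for pure coagulation with a constant kernel, where the simulated particle count shrinks as particles merge, the very contraction problem the MMC method is designed to avoid. The kernel value and initial population are illustrative; the analytic Smoluchowski solution N(t) = N0 / (1 + 0.5*K*N0*t) provides the check.

```python
import random

random.seed(0)

def coagulate(n0=1000, kernel=1e-3, t_end=2.0):
    """Gillespie-style MC for pure coagulation with a constant kernel."""
    n, t = n0, 0.0
    while n > 1:
        rate = 0.5 * kernel * n * (n - 1)   # total coagulation rate
        t += random.expovariate(rate)       # exponential time to next event
        if t > t_end:
            break
        n -= 1                               # two particles merge into one
    return n

runs = [coagulate() for _ in range(20)]
n_mc = sum(runs) / len(runs)
n_exact = 1000 / (1 + 0.5 * 1e-3 * 1000 * 2.0)   # Smoluchowski: 500.0
```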
Utilizing Monte Carlo Simulations to Optimize Institutional Empiric Antipseudomonal Therapy
Directory of Open Access Journals (Sweden)
Sarah J. Tennant
2015-12-01
Full Text Available Pseudomonas aeruginosa is a common pathogen implicated in nosocomial infections with increasing resistance to a limited arsenal of antibiotics. Monte Carlo simulation provides antimicrobial stewardship teams with an additional tool to guide empiric therapy. We modeled empiric therapies with antipseudomonal β-lactam antibiotic regimens to determine which were most likely to achieve probability of target attainment (PTA) of ≥90%. Microbiological data for P. aeruginosa were reviewed for 2012. Antibiotics modeled for intermittent and prolonged infusion were aztreonam, cefepime, meropenem, and piperacillin/tazobactam. Using minimum inhibitory concentrations (MICs) from institution-specific isolates, and pharmacokinetic and pharmacodynamic parameters from previously published studies, a 10,000-subject Monte Carlo simulation was performed for each regimen to determine PTA. MICs from 272 isolates were included in this analysis. No intermittent infusion regimens achieved PTA ≥90%. Prolonged infusions of cefepime 2000 mg Q8 h, meropenem 1000 mg Q8 h, and meropenem 2000 mg Q8 h demonstrated PTA of 93%, 92%, and 100%, respectively. Prolonged infusions of piperacillin/tazobactam 4.5 g Q6 h and aztreonam 2 g Q8 h failed to achieve PTA ≥90% but demonstrated PTA of 81% and 73%, respectively. Standard doses of β-lactam antibiotics as intermittent infusion did not achieve 90% PTA against P. aeruginosa isolated at our institution; however, some prolonged infusions were able to achieve these targets.
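The PTA computation described above can be sketched as follows: sample pharmacokinetic parameters for each simulated subject, compute the fraction of the dosing interval with drug concentration above the MIC (%fT>MIC) at steady state, and count the fraction of subjects reaching the pharmacodynamic target. All parameters, distributions, the single fixed MIC, and the 100%-free-drug simplification below are invented for illustration and are not the institutional data or published PK models of the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def ft_above_mic(cl, v, dose=2000.0, tau=8.0, t_inf=0.5, mic=8.0):
    """Fraction of the dosing interval above MIC (one-compartment model,
    steady state via superposition of past doses, 100% free drug assumed)."""
    ke = cl / v
    t = np.linspace(0.0, tau, 481)
    conc = np.zeros_like(t)
    rate = dose / t_inf
    for k in range(20):                       # superpose 20 previous doses
        ts = t + k * tau                      # time since dose k
        during = np.minimum(ts, t_inf)        # infusion build-up phase
        conc += (rate / cl) * (1 - np.exp(-ke * during)) \
                * np.exp(-ke * np.maximum(ts - t_inf, 0.0))
    return float(np.mean(conc > mic))

def pta(t_inf, n=2000, target=0.6):
    cl = rng.lognormal(np.log(8.0), 0.3, n)   # clearance (L/h), assumed
    v = rng.lognormal(np.log(18.0), 0.25, n)  # volume (L), assumed
    hits = sum(ft_above_mic(c, vv, t_inf=t_inf) >= target
               for c, vv in zip(cl, v))
    return hits / n

pta_short = pta(0.5)   # 30-min intermittent infusion
pta_long = pta(4.0)    # 4-h prolonged infusion
```

For the same total dose, the prolonged infusion keeps concentrations above the MIC longer in every simulated subject, so its PTA is higher, mirroring the study's qualitative finding.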
Subtle Monte Carlo Updates in Dense Molecular Systems.
Bottaro, Sandro; Boomsma, Wouter; E Johansson, Kristoffer; Andreetta, Christian; Hamelryck, Thomas; Ferkinghoff-Borg, Jesper
2012-02-14
Although Markov chain Monte Carlo (MC) simulation is a potentially powerful approach for exploring conformational space, it has been unable to compete with molecular dynamics (MD) in the analysis of high density structural states, such as the native state of globular proteins. Here, we introduce a kinetic algorithm, CRISP, that greatly enhances the sampling efficiency in all-atom MC simulations of dense systems. The algorithm is based on an exact analytical solution to the classic chain-closure problem, making it possible to express the interdependencies among degrees of freedom in the molecule as correlations in a multivariate Gaussian distribution. We demonstrate that our method reproduces structural variation in proteins with greater efficiency than current state-of-the-art Monte Carlo methods and has real-time simulation performance on par with molecular dynamics simulations. The presented results suggest our method as a valuable tool in the study of molecules in atomic detail, offering a potential alternative to molecular dynamics for probing long time-scale conformational transitions.
Longitudinal functional principal component modelling via Stochastic Approximation Monte Carlo
Martinez, Josue G.
2010-06-01
The authors consider the analysis of hierarchical longitudinal functional data based upon a functional principal components approach. In contrast to standard frequentist approaches to selecting the number of principal components, the authors do model averaging using a Bayesian formulation. A relatively straightforward reversible jump Markov Chain Monte Carlo formulation has poor mixing properties and in simulated data often becomes trapped at the wrong number of principal components. In order to overcome this, the authors show how to apply Stochastic Approximation Monte Carlo (SAMC) to this problem, a method that has the potential to explore the entire space and does not become trapped in local extrema. The combination of reversible jump methods and SAMC in hierarchical longitudinal functional data is simplified by a polar coordinate representation of the principal components. The approach is easy to implement and does well in simulated data in determining the distribution of the number of principal components, and in terms of its frequentist estimation properties. Empirical applications are also presented.
Penalized Splines for Smooth Representation of High-dimensional Monte Carlo Datasets
Whitehorn, Nathan; Lafebre, Sven
2013-01-01
Detector response to a high-energy physics process is often estimated by Monte Carlo simulation. For purposes of data analysis, the results of this simulation are typically stored in large multi-dimensional histograms, which can quickly become both too large to easily store and manipulate and numerically problematic due to unfilled bins or interpolation artifacts. We describe here an application of the penalized spline technique to efficiently compute B-spline representations of such tables and discuss aspects of the resulting B-spline fits that simplify many common tasks in handling tabulated Monte Carlo data in high-energy physics analysis, in particular their use in maximum-likelihood fitting.
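A discrete stand-in for the penalized-spline idea above is the Whittaker smoother, which fits a curve by penalizing squared second differences, the same roughness-penalty principle P-splines use. The noisy "MC histogram" and the penalty weight below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy "Monte Carlo histogram" of a smooth underlying detector response:
x = np.linspace(0.0, 1.0, 100)
truth = np.sin(2 * np.pi * x) + 2.0
y = truth + rng.normal(0.0, 0.1, x.size)     # statistical MC noise

# Whittaker smoother: minimize |y - z|^2 + lam * |D2 z|^2, where D2 is the
# second-difference operator (the discrete roughness penalty of P-splines).
n = x.size
D2 = np.diff(np.eye(n), 2, axis=0)           # (n-2) x n second differences
lam = 50.0                                   # penalty weight (tuning choice)
z = np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)

rms_raw = float(np.sqrt(np.mean((y - truth) ** 2)))
rms_fit = float(np.sqrt(np.mean((z - truth) ** 2)))
# The smoothed representation is closer to the underlying response than
# the raw noisy table, analogous to the B-spline fits described above.
```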
Data decomposition of Monte Carlo particle transport simulations via tally servers
Energy Technology Data Exchange (ETDEWEB)
Romano, Paul K., E-mail: paul.k.romano@gmail.com [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 77 Massachusetts Ave., Cambridge, MA 02139 (United States); Siegel, Andrew R., E-mail: siegala@mcs.anl.gov [Argonne National Laboratory, Theory and Computing Sciences, 9700 S Cass Ave., Argonne, IL 60439 (United States); Forget, Benoit, E-mail: bforget@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 77 Massachusetts Ave., Cambridge, MA 02139 (United States); Smith, Kord, E-mail: kord@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science and Engineering, 77 Massachusetts Ave., Cambridge, MA 02139 (United States)
2013-11-01
An algorithm for decomposing large tally data in Monte Carlo particle transport simulations is developed, analyzed, and implemented in a continuous-energy Monte Carlo code, OpenMC. The algorithm is based on a non-overlapping decomposition of compute nodes into tracking processors and tally servers. The former are used to simulate the movement of particles through the domain while the latter continuously receive and update tally data. A performance model for this approach is developed, suggesting that, for a range of parameters relevant to LWR analysis, the tally server algorithm should perform with minimal overhead on contemporary supercomputers. An implementation of the algorithm in OpenMC is then tested on the Intrepid and Titan supercomputers, supporting the key predictions of the model over a wide range of parameters. We thus conclude that the tally server algorithm is a successful approach to circumventing classical on-node memory constraints en route to unprecedentedly detailed Monte Carlo reactor simulations.
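A back-of-envelope sketch of the tracking-processor/tally-server split described above. This is an invented saturation model, not the paper's performance model: tracking stalls only when the aggregate tally-event stream exceeds the servers' aggregate bandwidth, so adding servers drives the overhead to zero. All throughput numbers are hypothetical.

```python
def overhead(p, e, b, s, bw):
    """Fractional slowdown of p tracking processors, each scoring e tally
    events/s of b bytes, served by s tally servers of bw bytes/s each."""
    produced = p * e * b          # bytes/s generated by all trackers
    capacity = s * bw             # bytes/s the servers can absorb
    return max(0.0, produced / capacity - 1.0)

# e.g. 10,000 trackers, 5e4 events/s, 32 bytes/event, 1 GB/s per server:
ohs = {s: overhead(10_000, 5e4, 32, s, 1e9) for s in (8, 16, 32)}
```

With these numbers the trackers generate 16 GB/s of tally traffic, so 8 servers saturate (100% overhead) while 16 or more absorb the stream with no stall.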
MCNP-REN a Monte Carlo tool for neutron detector design
Abhold, M E
2002-01-01
The development of neutron detectors makes extensive use of the predictions of detector response through the use of Monte Carlo techniques in conjunction with the point reactor model. Unfortunately, the point reactor model fails to accurately predict detector response in common applications. For this reason, the general Monte Carlo code developed at Los Alamos National Laboratory, Monte Carlo N-Particle (MCNP), was modified to simulate the pulse streams that would be generated by a neutron detector and normally analyzed by a shift register. This modified code, MCNP-Random Exponentially Distributed Neutron Source (MCNP-REN), along with the Time Analysis Program, predicts neutron detector response without using the point reactor model, making it unnecessary for the user to decide whether or not the assumptions of the point model are met for their application. MCNP-REN is capable of simulating standard neutron coincidence counting as well as neutron multiplicity counting. Measurements of mixed oxide fresh fuel w...
3D face modeling, analysis and recognition
Daoudi, Mohamed; Veltkamp, Remco
2013-01-01
3D Face Modeling, Analysis and Recognition presents methodologies for analyzing shapes of facial surfaces, develops computational tools for analyzing 3D face data, and illustrates them using state-of-the-art applications. The methodologies chosen are based on efficient representations, metrics, comparisons, and classifications of features that are especially relevant in the context of 3D measurements of human faces. These frameworks have a long-term utility in face analysis, taking into account the anticipated improvements in data collection, data storage, processing speeds, and applications
Monte Carlo Simulation Analysis for UAV Ground Target Positioning Accuracy
Institute of Scientific and Technical Information of China (English)
Li, Dajian; Qi, Min
2011-01-01
In order to provide a design basis for the ground target positioning system of a reconnaissance UAV, the mathematical model and error model of target positioning using an electro-optical payload are derived. The error elements that influence target positioning accuracy are introduced, and the positioning accuracy is simulated using Monte Carlo analysis. The main error elements are identified, which guides a more reasonable allocation of the system accuracy budget. Testing results show that a positioning system designed on the basis of this accuracy analysis fully meets the overall system accuracy requirement while avoiding unnecessary component accuracy, thereby saving system cost. The application also proves the validity and practicality of Monte Carlo simulation analysis for target positioning systems and can provide a reliable basis for positioning system design.
Analysis of leak time of cylindrical vessels based on the Monte Carlo method
Institute of Scientific and Technical Information of China (English)
Yu, Fang; Jiang, Juncheng; Zhang, Mingguang; Sun, Dongliang
2011-01-01
This paper presents a theoretical formula for the leak duration of cylindrical vessels based on the Monte Carlo method. Leak duration is one of the key factors in quantitative risk assessment, yet current deterministic estimates, influenced by subjective choices of input values, often lead consequence analysis to predict the wrong direct cause while disregarding the authentic one. It is therefore important to derive the theoretical leak-duration formula for cylindrical vessels in detail from leak source models, and to identify the key influencing factors on the basis of existing studies. The Monte Carlo method is introduced to account for the uncertainty of the input parameters: the distribution of each parameter is analyzed and a simulation flow chart of the methodology is developed so as to obtain a more realistic result. A case simulation yields both the probability density and the cumulative probability of the leak duration under given leakage scenarios, revealing the distribution of the actual leak time and its associated probability. From these probability results, the value at the 95% confidence level is chosen as the credible maximum leak time. Further comparative analysis proves that the Monte Carlo method works well for the quantitative estimation of leak time and gives a more practicable and credible result for the assessment. The present research thus has clear theoretical value and engineering significance for emergency response decision-making, and it can provide reference data for quantitative risk assessments.
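A sketch of the Monte Carlo leak-duration estimate described above, using Torricelli draining of a vertical cylindrical vessel (A_tank dh/dt = -Cd A_hole sqrt(2gh), giving t = (A_tank / (Cd A_hole)) sqrt(2 h0 / g)) as the source model. All parameter distributions below are invented for illustration; the 95th percentile plays the role of the credible maximum leak time.

```python
import math
import random

random.seed(0)

G = 9.81  # m/s^2

def drain_time(d_tank, d_hole, h0, cd):
    """Torricelli draining time (s) of a vertical cylindrical vessel."""
    a_tank = math.pi * d_tank ** 2 / 4
    a_hole = math.pi * d_hole ** 2 / 4
    return (a_tank / (cd * a_hole)) * math.sqrt(2 * h0 / G)

n = 50_000
times = []
for _ in range(n):
    cd = random.uniform(0.6, 0.7)                   # discharge coefficient
    d_hole = random.triangular(0.01, 0.05, 0.025)   # hole diameter (m)
    h0 = random.uniform(3.0, 5.0)                   # initial liquid level (m)
    times.append(drain_time(4.0, d_hole, h0, cd))   # 4 m diameter vessel

times.sort()
t95 = times[int(0.95 * n)]   # credible maximum leak time at the 95% level
```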
Commensurabilities between ETNOs: a Monte Carlo survey
Marcos, C de la Fuente
2016-01-01
Many asteroids in the main and trans-Neptunian belts are trapped in mean motion resonances with Jupiter and Neptune, respectively. As a side effect, they experience accidental commensurabilities among themselves. These commensurabilities define characteristic patterns that can be used to trace the source of the observed resonant behaviour. Here, we explore systematically the existence of commensurabilities between the known ETNOs using their heliocentric and barycentric semimajor axes, their uncertainties, and Monte Carlo techniques. We find that the commensurability patterns present in the known ETNO population resemble those found in the main and trans-Neptunian belts. Although based on small number statistics, such patterns can only be properly explained if most, if not all, of the known ETNOs are subjected to the resonant gravitational perturbations of yet undetected trans-Plutonian planets. We show explicitly that some of the statistically significant commensurabilities are compatible with the Planet Nin...
Monte Carlo exploration of warped Higgsless models
Energy Technology Data Exchange (ETDEWEB)
Hewett, JoAnne L.; Lillie, Benjamin; Rizzo, Thomas Gerard [Stanford Linear Accelerator Center, 2575 Sand Hill Rd., Menlo Park, CA, 94025 (United States)]. E-mail: rizzo@slac.stanford.edu
2004-10-01
We have performed a detailed Monte Carlo exploration of the parameter space for a warped Higgsless model of electroweak symmetry breaking in 5 dimensions. This model is based on the SU(2){sub L} x SU(2){sub R} x U(1){sub B-L} gauge group in an AdS{sub 5} bulk with arbitrary gauge kinetic terms on both the Planck and TeV branes. Constraints arising from precision electroweak measurements and collider data are found to be relatively easy to satisfy. We show, however, that the additional requirement of perturbative unitarity up to the cut-off, {approx_equal} 10 TeV, in W{sub L}{sup +}W{sub L}{sup -} elastic scattering in the absence of dangerous tachyons eliminates all models. If successful models of this class exist, they must be highly fine-tuned. (author)
Monte Carlo Exploration of Warped Higgsless Models
Hewett, J L; Rizzo, T G
2004-01-01
We have performed a detailed Monte Carlo exploration of the parameter space for a warped Higgsless model of electroweak symmetry breaking in 5 dimensions. This model is based on the $SU(2)_L\\times SU(2)_R\\times U(1)_{B-L}$ gauge group in an AdS$_5$ bulk with arbitrary gauge kinetic terms on both the Planck and TeV branes. Constraints arising from precision electroweak measurements and collider data are found to be relatively easy to satisfy. We show, however, that the additional requirement of perturbative unitarity up to the cut-off, $\\simeq 10$ TeV, in $W_L^+W_L^-$ elastic scattering in the absence of dangerous tachyons eliminates all models. If successful models of this class exist, they must be highly fine-tuned.
Experimental Monte Carlo Quantum Process Certification
Steffen, L; Fedorov, A; Baur, M; Wallraff, A
2012-01-01
Experimental implementations of quantum information processing have now reached a level of sophistication where quantum process tomography is impractical. The number of experimental settings as well as the computational cost of the data post-processing now translates to days of effort to characterize even experiments with as few as 8 qubits. Recently a more practical approach to determine the fidelity of an experimental quantum process has been proposed, where the experimental data is compared directly to an ideal process using Monte Carlo sampling. Here we present an experimental implementation of this scheme in a circuit quantum electrodynamics setup to determine the fidelity of two qubit gates, such as the cphase and the cnot gate, and three qubit gates, such as the Toffoli gate and two sequential cphase gates.
Variable length trajectory compressible hybrid Monte Carlo
Nishimura, Akihiko
2016-01-01
Hybrid Monte Carlo (HMC) generates samples from a prescribed probability distribution in a configuration space by simulating Hamiltonian dynamics, followed by the Metropolis (-Hastings) acceptance/rejection step. Compressible HMC (CHMC) generalizes HMC to a situation in which the dynamics is reversible but not necessarily Hamiltonian. This article presents a framework to further extend the algorithm. Within the existing framework, each trajectory of the dynamics must be integrated for the same amount of (random) time to generate a valid Metropolis proposal. Our generalized acceptance/rejection mechanism allows a more deliberate choice of the integration time for each trajectory. The proposed algorithm in particular enables an effective application of variable step size integrators to HMC-type sampling algorithms based on reversible dynamics. The potential of our framework is further demonstrated by another extension of HMC which reduces the wasted computations due to unstable numerical approximations and corr...
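For reference, a plain fixed-length HMC sampler (leapfrog integration followed by a Metropolis accept/reject on the energy change) for a one-dimensional standard normal target, the baseline that the compressible and variable-trajectory extensions above generalize. The step size and trajectory length are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def hmc_standard_normal(n_samples=5000, eps=0.2, n_leap=10):
    """Plain HMC for a 1D standard normal: U(q) = q^2/2, K(p) = p^2/2."""
    q, out = 0.0, []
    for _ in range(n_samples):
        p = rng.normal()                       # fresh momentum each iteration
        q_new, p_new = q, p
        # leapfrog integration of Hamiltonian dynamics (dU/dq = q)
        p_new -= 0.5 * eps * q_new             # initial half momentum step
        for _ in range(n_leap - 1):
            q_new += eps * p_new
            p_new -= eps * q_new
        q_new += eps * p_new
        p_new -= 0.5 * eps * q_new             # final half momentum step
        # Metropolis acceptance on the change in total energy H = U + K
        h_old = 0.5 * (q * q + p * p)
        h_new = 0.5 * (q_new * q_new + p_new * p_new)
        if np.log(rng.random()) < h_old - h_new:
            q = q_new
        out.append(q)
    return np.array(out)

samples = hmc_standard_normal()
```

Because the leapfrog integrator nearly conserves the Hamiltonian, most proposals are accepted, and the samples reproduce the target's mean and standard deviation.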
On nonlinear Markov chain Monte Carlo
Andrieu, Christophe; Doucet, Arnaud; Del Moral, Pierre; 10.3150/10-BEJ307
2011-01-01
Let $\\mathscr{P}(E)$ be the space of probability measures on a measurable space $(E,\\mathcal{E})$. In this paper we introduce a class of nonlinear Markov chain Monte Carlo (MCMC) methods for simulating from a probability measure $\\pi\\in\\mathscr{P}(E)$. Nonlinear Markov kernels (see [Feynman--Kac Formulae: Genealogical and Interacting Particle Systems with Applications (2004) Springer]) $K:\\mathscr{P}(E)\\times E\\rightarrow\\mathscr{P}(E)$ can be constructed to, in some sense, improve over MCMC methods. However, such nonlinear kernels cannot be simulated exactly, so approximations of the nonlinear kernels are constructed using auxiliary or potentially self-interacting chains. Several nonlinear kernels are presented and it is demonstrated that, under some conditions, the associated approximations exhibit a strong law of large numbers; our proof technique is via the Poisson equation and Foster--Lyapunov conditions. We investigate the performance of our approximations with some simulations.
Monte Carlo Implementation of Polarized Hadronization
Matevosyan, Hrayr H; Thomas, Anthony W
2016-01-01
We study polarized quark hadronization in a Monte Carlo (MC) framework based on the recent extension of the quark-jet framework, where a self-consistent treatment of the quark polarization transfer in a sequential hadronization picture has been presented. Here, we first adopt this approach for MC simulations of the hadronization process with a finite number of produced hadrons, expressing the relevant probabilities in terms of the eight leading-twist quark-to-quark transverse momentum dependent (TMD) splitting functions (SFs) for the elementary $q \to q'+h$ transition. We present explicit expressions for the unpolarized and Collins fragmentation functions (FFs) of unpolarized hadrons emitted at rank two. Further, we demonstrate that all the current spectator-type model calculations of the leading-twist quark-to-quark TMD SFs violate the positivity constraints, and propose a quark-model based ansatz for these input functions that circumvents the problem. We validate our MC framework by explicitly proving the absence o...
Gas discharges modeling by Monte Carlo technique
Directory of Open Access Journals (Sweden)
Savić Marija
2010-01-01
The basic assumption of the Townsend theory - that ions produce secondary electrons - is valid only in a very narrow range of the reduced electric field E/N. In accordance with the revised Townsend theory suggested by Phelps and Petrović, secondary electrons are produced in collisions of ions, fast neutrals, metastable atoms or photons with the cathode, or in gas-phase ionizations by fast neutrals. In this paper we develop a Monte Carlo code that can be used to calculate secondary electron yields for the different types of particles. The obtained results are in good agreement with the analytical results of Phelps and Petrović [Plasma Sources Sci. Technol. 8 (1999) R1].
Morse Monte Carlo Radiation Transport Code System
Energy Technology Data Exchange (ETDEWEB)
Emmett, M.B.
1975-02-01
The report contains descriptions of the MORSE and PICTURE codes, input descriptions, sample problems, derivations of the physical equations and explanations of the various error messages. The MORSE code is a multipurpose neutron and gamma-ray transport Monte Carlo code. Time dependence for both shielding and criticality problems is provided. General three-dimensional geometry may be used with an albedo option available at any material surface. The PICTURE code provides aid in preparing correct input data for the combinatorial geometry package CG. It provides a printed view of arbitrary two-dimensional slices through the geometry. By inspecting these pictures one may determine whether the geometry specified by the input cards is indeed the desired geometry. 23 refs. (WRF)
Variational Monte Carlo study of pentaquark states
Energy Technology Data Exchange (ETDEWEB)
Mark W. Paris
2005-07-01
Accurate numerical solution of the five-body Schrodinger equation is effected via variational Monte Carlo. The spectrum is assumed to exhibit a narrow resonance with strangeness S=+1. A fully antisymmetrized and pair-correlated five-quark wave function is obtained for the assumed non-relativistic Hamiltonian which has spin, isospin, and color dependent pair interactions and many-body confining terms which are fixed by the non-exotic spectra. Gauge field dynamics are modeled via flux tube exchange factors. The energy determined for the ground states with J=1/2 and negative (positive) parity is 2.22 GeV (2.50 GeV). A lower energy negative parity state is consistent with recent lattice results. The short-range structure of the state is analyzed via its diquark content.
Monte Carlo simulation of neutron scattering instruments
Energy Technology Data Exchange (ETDEWEB)
Seeger, P.A.; Daemen, L.L.; Hjelm, R.P. Jr.
1998-12-01
A code package consisting of the Monte Carlo Library MCLIB, the executing code MC_RUN, the web application MC_Web, and various ancillary codes is proposed as an open standard for simulation of neutron scattering instruments. The architecture of the package includes structures to define surfaces, regions, and optical elements contained in regions. A particle is defined by its vector position and velocity, its time of flight, its mass and charge, and a polarization vector. The MC_RUN code handles neutron transport and bookkeeping, while the action on the neutron within any region is computed using algorithms that may be deterministic, probabilistic, or a combination. Complete versatility is possible because the existing library may be supplemented by any procedures a user is able to code. Some examples are shown.
Accurate barrier heights using diffusion Monte Carlo
Krongchon, Kittithat; Wagner, Lucas K
2016-01-01
Fixed node diffusion Monte Carlo (DMC) has been performed on a test set of forward and reverse barrier heights for 19 non-hydrogen-transfer reactions, and the nodal error has been assessed. The DMC results are robust to changes in the nodal surface, as assessed by using different mean-field techniques to generate single determinant wave functions. Using these single determinant nodal surfaces, DMC results in errors of 1.5(5) kcal/mol on barrier heights. Using the large data set of DMC energies, we attempted to find good descriptors of the fixed node error. It does not correlate with a number of descriptors including change in density, but does correlate with the gap between the highest occupied and lowest unoccupied orbital energies in the mean-field calculation.
Modeling neutron guides using Monte Carlo simulations
Wang, D Q; Crow, M L; Wang, X L; Lee, W T; Hubbard, C R
2002-01-01
Four neutron guide geometries, straight, converging, diverging and curved, were characterized using Monte Carlo ray-tracing simulations. The main areas of interest are the transmission of the guides at various neutron energies and the intrinsic time-of-flight (TOF) peak broadening. Use of a delta-function time pulse from a uniform Lambert neutron source allows one to quantitatively simulate the effect of guide geometry on the TOF peak broadening. With a converging guide, the intensity and the beam divergence increase while the TOF peak width decreases compared with that of a straight guide. By contrast, use of a diverging guide decreases the intensity and the beam divergence, and broadens the width (in TOF) of the transmitted neutron pulse.
Reporting Monte Carlo Studies in Structural Equation Modeling
Boomsma, Anne
2013-01-01
In structural equation modeling, Monte Carlo simulations have been used increasingly over the last two decades, as an inventory from the journal Structural Equation Modeling illustrates. Reaching out to a broad audience, this article provides guidelines for reporting Monte Carlo studies in that field.
Quantum Monte Carlo Simulations : Algorithms, Limitations and Applications
Raedt, H. De
1992-01-01
A survey is given of Quantum Monte Carlo methods currently used to simulate quantum lattice models. The formalisms employed to construct the simulation algorithms are sketched. The origin of fundamental (minus sign) problems which limit the applicability of the Quantum Monte Carlo approach is shown
Quantum Monte Carlo using a Stochastic Poisson Solver
Energy Technology Data Exchange (ETDEWEB)
Das, D; Martin, R M; Kalos, M H
2005-05-06
Quantum Monte Carlo (QMC) is an extremely powerful method to treat many-body systems. Usually, quantum Monte Carlo has been applied in cases where the interaction potential has a simple analytic form, like the 1/r Coulomb potential. However, in a complicated environment such as a semiconductor heterostructure, the evaluation of the interaction itself becomes a non-trivial problem. Obtaining the potential from any grid-based finite-difference method, for every walker and every step, is infeasible. We demonstrate an alternative approach of solving the Poisson equation by a classical Monte Carlo within the overall quantum Monte Carlo scheme. We have developed a modified "Walk On Spheres" algorithm using Green's function techniques, which can efficiently account for the interaction energy of walker configurations typical of quantum Monte Carlo algorithms. This stochastically obtained potential can be easily incorporated within popular quantum Monte Carlo techniques like variational Monte Carlo (VMC) or diffusion Monte Carlo (DMC). We demonstrate the validity of this method by studying a simple problem, the polarization of a helium atom in the electric field of an infinite capacitor.
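A minimal "Walk on Spheres" sketch for the Laplace equation (the charge-free special case of the Poisson problem) illustrates the idea: each walk jumps to a uniform point on the largest circle inscribed in the domain, repeating until it comes within a tolerance of the boundary, where the Dirichlet data is read off. The unit-disk domain and boundary data below are hypothetical examples; the paper's Green's-function machinery for walker interaction energies is not reproduced here.

```python
import numpy as np

def walk_on_spheres(x0, dist_to_boundary, boundary_value, eps=1e-3,
                    n_walks=20000, seed=0):
    """Estimate the harmonic function u(x0) with Dirichlet data
    `boundary_value`, averaging over many Walk-on-Spheres walks."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_walks):
        x = np.array(x0, dtype=float)
        d = dist_to_boundary(x)
        while d > eps:
            # Jump to a uniform point on the largest inscribed circle.
            theta = rng.uniform(0.0, 2.0 * np.pi)
            x += d * np.array([np.cos(theta), np.sin(theta)])
            d = dist_to_boundary(x)
        total += boundary_value(x)  # read Dirichlet data near the boundary
    return total / n_walks

# Unit disk, boundary data g(x, y) = x; the exact harmonic extension is u = x,
# so u(0.3, 0.2) = 0.3.
dist = lambda x: 1.0 - np.hypot(x[0], x[1])
g = lambda x: x[0]
u = walk_on_spheres([0.3, 0.2], dist, g)
print(u)  # close to 0.3
```

The appeal for QMC, as the abstract notes, is that each potential evaluation costs only a short random walk rather than a full grid solve.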
The Monte Carlo Method. Popular Lectures in Mathematics.
Sobol', I. M.
The Monte Carlo Method is a method of approximately solving mathematical and physical problems by the simulation of random quantities. The principal goal of this booklet is to suggest to specialists in all areas that they will encounter problems which can be solved by the Monte Carlo Method. Part I of the booklet discusses the simulation of random…
Forest canopy BRDF simulation using Monte Carlo method
Huang, J.; Wu, B.; Zeng, Y.; Tian, Y.
2006-01-01
Monte Carlo method is a random statistic method, which has been widely used to simulate the Bidirectional Reflectance Distribution Function (BRDF) of vegetation canopy in the field of visible remote sensing. The random process between photons and forest canopy was designed using Monte Carlo method.
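The photon-canopy random process can be illustrated with a deliberately simplified 1-D analogue, sketched below: photons take exponentially distributed free paths through a scattering slab, are absorbed or isotropically rescattered at each interaction, and are tallied as reflected or transmitted. The optical depth and single-scattering albedo are arbitrary example values, not canopy parameters from the paper.

```python
import math
import random

def slab_reflectance(tau, albedo, n_photons=50000, seed=1):
    """1-D toy photon Monte Carlo: isotropic scattering in a slab of
    optical depth `tau`; returns the reflected and transmitted fractions."""
    rng = random.Random(seed)
    reflected = transmitted = 0
    for _ in range(n_photons):
        depth, mu = 0.0, 1.0  # optical-depth position, direction cosine
        while True:
            # Exponential free path to the next interaction event.
            depth += mu * -math.log(1.0 - rng.random())
            if depth < 0.0:
                reflected += 1
                break
            if depth > tau:
                transmitted += 1
                break
            if rng.random() > albedo:  # absorbed at this event
                break
            mu = rng.uniform(-1.0, 1.0)  # isotropic rescattering
    return reflected / n_photons, transmitted / n_photons

r, t = slab_reflectance(tau=1.0, albedo=0.9)
print(r, t)
```

A full BRDF simulation adds 3-D geometry, leaf angle distributions, and directional tallies, but the event loop above is the core of the method.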
QWalk: A Quantum Monte Carlo Program for Electronic Structure
Wagner, Lucas K; Mitas, Lubos
2007-01-01
We describe QWalk, a new computational package capable of performing Quantum Monte Carlo electronic structure calculations for molecules and solids with many electrons. We describe the structure of the program and its implementation of Quantum Monte Carlo methods. It is open-source, licensed under the GPL, and available at the web site http://www.qwalk.org
Recent Developments in Quantum Monte Carlo: Methods and Applications
Aspuru-Guzik, Alan; Austin, Brian; Domin, Dominik; Galek, Peter T. A.; Handy, Nicholas; Prasad, Rajendra; Salomon-Ferrer, Romelia; Umezawa, Naoto; Lester, William A.
2007-12-01
The quantum Monte Carlo method in the diffusion Monte Carlo form has become recognized for its capability of describing the electronic structure of atomic, molecular and condensed matter systems to high accuracy. This talk will briefly outline the method with emphasis on recent developments connected with trial function construction, linear scaling, and applications to selected systems.
Sensitivity of Monte Carlo simulations to input distributions
Energy Technology Data Exchange (ETDEWEB)
RamoRao, B. S.; Srikanta Mishra, S.; McNeish, J.; Andrews, R. W.
2001-07-01
The sensitivity of the results of a Monte Carlo simulation to the shapes and moments of the probability distributions of the input variables is studied. An economical computational scheme is presented as an alternative to the replicate Monte Carlo simulations and is explained with an illustrative example. (Author) 4 refs.
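The abstract does not spell out the economical scheme; one common approach of this kind, shown here purely as an assumed illustration, is likelihood-ratio reweighting: a single Monte Carlo run under a base input distribution is reweighted to estimate the output mean under an alternative input distribution, avoiding replicate simulations.

```python
import numpy as np

def reweighted_mean(f, base_sample, log_p_base, log_p_alt):
    """Estimate E_alt[f(X)] from one base-distribution sample using
    self-normalized likelihood-ratio (importance) weights."""
    w = np.exp(log_p_alt(base_sample) - log_p_base(base_sample))
    w /= w.sum()  # self-normalization cancels density constants
    return np.sum(w * f(base_sample))

rng = np.random.default_rng(0)
x = rng.standard_normal(200000)          # base input: N(0, 1)
f = lambda x: x**2                        # model output of interest
logN = lambda x, m: -0.5 * (x - m)**2     # normal log-density up to a constant
base_est = f(x).mean()                    # E[X^2] under N(0,1) = 1
alt_est = reweighted_mean(f, x,
                          lambda x: logN(x, 0.0),
                          lambda x: logN(x, 0.5))  # E[X^2] under N(0.5,1) = 1.25
print(base_est, alt_est)
```

The same sample thus serves every candidate input distribution, at the cost of increased variance when the alternative differs strongly from the base.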
CERN Summer Student Report 2016 Monte Carlo Data Base Improvement
Caciulescu, Alexandru Razvan
2016-01-01
During my Summer Student project I worked on improving the Monte Carlo Data Base and MonALISA services for the ALICE Collaboration. The project included learning the infrastructure for tracking and monitoring of the Monte Carlo productions as well as developing a new RESTful API for seamless integration with the JIRA issue tracking framework.
Practical schemes for accurate forces in quantum Monte Carlo
Moroni, S.; Saccani, S.; Filippi, C.
2014-01-01
While the computation of interatomic forces has become a well-established practice within variational Monte Carlo (VMC), the use of the more accurate Fixed-Node Diffusion Monte Carlo (DMC) method is still largely limited to the computation of total energies on structures obtained at a lower level of
Accelerated GPU based SPECT Monte Carlo simulations.
Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris
2016-06-07
Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99mTc, 111In and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator, respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between the GATE and GGEMS platforms derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Besides, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational efficiency.
Quantum Monte Carlo methods algorithms for lattice models
Gubernatis, James; Werner, Philipp
2016-01-01
Featuring detailed explanations of the major algorithms used in quantum Monte Carlo simulations, this is the first textbook of its kind to provide a pedagogical overview of the field and its applications. The book provides a comprehensive introduction to the Monte Carlo method, its use, and its foundations, and examines algorithms for the simulation of quantum many-body lattice problems at finite and zero temperature. These algorithms include continuous-time loop and cluster algorithms for quantum spins, determinant methods for simulating fermions, power methods for computing ground and excited states, and the variational Monte Carlo method. Also discussed are continuous-time algorithms for quantum impurity models and their use within dynamical mean-field theory, along with algorithms for analytically continuing imaginary-time quantum Monte Carlo data. The parallelization of Monte Carlo simulations is also addressed. This is an essential resource for graduate students, teachers, and researchers interested in ...
Institute of Scientific and Technical Information of China (English)
Hu Chuan; Yao Jianwei
2012-01-01
The fault trees of the CRH2 EMU system and its subsystems (running gear, traction drive, braking, high-voltage apparatus, auxiliary power supply and network control) were established. On that basis, the Monte Carlo method and MATLAB software were applied to simulate and analyze the reliability of the EMU. The results indicate that Monte Carlo simulation based on fault tree analysis can rapidly and accurately calculate the reliability of the whole EMU; that if the fault probabilities of the EMU's basic components follow exponential distributions, the fault probability of the whole EMU system does as well; and that the three most important subsystems of the EMU are, in order, the air supply subsystem, the grounding protection switch and high-voltage equipment box subsystem, and the traction drive subsystem.
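The exponential-distribution result above is easy to check with a toy version of such a simulation. The sketch below assumes a purely series fault tree (the system fails as soon as any basic component fails) with made-up failure rates; the actual CRH2 fault trees are far more detailed.

```python
import math
import random

def system_failure_time(rates, rng):
    """Series fault tree: the system fails when ANY component fails,
    so the system lifetime is the minimum of exponential lifetimes."""
    return min(rng.expovariate(lam) for lam in rates)

def reliability(rates, t, n=100000, seed=0):
    """Monte Carlo estimate of P(system survives past time t)."""
    rng = random.Random(seed)
    survivals = sum(system_failure_time(rates, rng) > t for _ in range(n))
    return survivals / n

rates = [1e-4, 5e-5, 2e-4]  # hypothetical component failure rates (per hour)
t = 1000.0
mc = reliability(rates, t)
# A series system of exponential components is itself exponential,
# with rate equal to the sum of the component rates.
exact = math.exp(-sum(rates) * t)
print(mc, exact)
```

For a series structure the Monte Carlo estimate should match the closed-form exponential survival probability, which is the pattern the paper reports for the full EMU system.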
Monte Carlo simulations for design of the KFUPM PGNAA facility
Naqvi, A A; Maslehuddin, M; Kidwai, S
2003-01-01
Monte Carlo simulations were carried out to design a 2.8 MeV neutron-based prompt gamma ray neutron activation analysis (PGNAA) setup for elemental analysis of cement samples. The elemental analysis was carried out using prompt gamma rays produced through capture of thermal neutrons in sample nuclei. The basic design of the PGNAA setup consists of a cylindrical cement sample enclosed in a cylindrical high-density polyethylene moderator placed between a neutron source and a gamma ray detector. In these simulations the predominant geometrical parameters of the PGNAA setup were optimized, including moderator size, sample size and shielding of the detector. Using the results of the simulations, an experimental PGNAA setup was then fabricated at the 350 kV Accelerator Laboratory of this University. The design calculations were checked experimentally through thermal neutron flux measurements inside the PGNAA moderator. A test prompt gamma ray spectrum of the PGNAA setup was also acquired from a Portland cement samp...
Energy Technology Data Exchange (ETDEWEB)
Esquivel E, J.; Ramirez S, J. R.; Palacios H, J. C., E-mail: jaime.esquivel@fi.uaemex.mx [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico)
2011-11-15
The present work predicts uranium prices using a neural network. Predicting the financial indexes of an energy resource allows budgetary measures to be established, as well as the cost of the resource over the medium term. Uranium is one of the main energy-generating fuels and, as such, its price figures prominently in financial analyses, so predictive methods are used to outline its expected financial behaviour over a given period. In this study, two methodologies are used for the prediction of the uranium price: the Monte Carlo method and neural networks. These methods allow the monthly cost indexes to be predicted for a two-year period, starting from the second bimester of 2011. The predictions use uranium prices recorded since 2005. (Author)
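The abstract does not specify the stochastic model behind the Monte Carlo forecast; a common assumption for such price projections, used here purely as an illustration, is geometric Brownian motion simulated on a monthly grid. The initial price, drift, and volatility below are hypothetical, not fitted to the 2005 uranium data the paper uses.

```python
import numpy as np

def monte_carlo_price_paths(p0, mu, sigma, months, n_paths, seed=0):
    """Simulate monthly price paths under geometric Brownian motion:
    log-returns are i.i.d. normal with drift (mu - sigma^2/2) * dt."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / 12.0
    shocks = rng.standard_normal((n_paths, months))
    log_steps = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * shocks
    return p0 * np.exp(np.cumsum(log_steps, axis=1))

# 24 monthly steps (a two-year horizon, as in the paper's setup),
# with hypothetical drift and volatility parameters.
paths = monte_carlo_price_paths(p0=60.0, mu=0.05, sigma=0.3,
                                months=24, n_paths=10000)
print(paths[:, -1].mean())  # mean forecast at month 24
```

Averaging over paths gives the monthly index forecast; the path quantiles additionally give an uncertainty band, which a point-forecast neural network does not provide on its own.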
Automated analysis of 3D echocardiography
Stralen, Marijn van
2009-01-01
In this thesis we aim at automating the analysis of 3D echocardiography, mainly targeting the functional analysis of the left ventricle. Manual analysis of these data is cumbersome, time-consuming and is associated with inter-observer and inter-institutional variability. Methods for reconstruction o