Computer Monte Carlo simulation in quantitative resource estimation
Root, D.H.; Menzie, W.D.; Scott, W.A.
1992-01-01
The method of making quantitative assessments of mineral resources sufficiently detailed for economic analysis is outlined in three steps. The steps are (1) determination of types of deposits that may be present in an area, (2) estimation of the numbers of deposits of the permissible deposit types, and (3) combination by Monte Carlo simulation of the estimated numbers of deposits with the historical grades and tonnages of these deposits to produce a probability distribution of the quantities of contained metal. Two examples of the estimation of the number of deposits (step 2) are given. The first example is for mercury deposits in southwestern Alaska and the second is for lode tin deposits in the Seward Peninsula. The flow of the Monte Carlo simulation program is presented with particular attention to the dependencies between grades and tonnages of deposits and between grades of different metals in the same deposit. © 1992 Oxford University Press.
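Step (3) of the assessment, combining an estimated number of deposits with historical grade and tonnage distributions, can be sketched as a short Monte Carlo loop. All counts, medians, spreads, and the grade-tonnage correlation below are illustrative assumptions, not the study's fitted values:

```python
import numpy as np

rng = np.random.default_rng(42)
N_SIMS = 10_000

# Hypothetical elicited probabilities for the number of undiscovered deposits
deposit_counts = np.array([0, 1, 2, 3, 4])
count_probs = np.array([0.3, 0.3, 0.2, 0.15, 0.05])

# Hypothetical joint lognormal tonnage/grade model with correlation rho
mu = np.array([np.log(1e6), np.log(0.005)])   # median tonnage (t), median grade (fraction)
sigma = np.array([1.0, 0.5])
rho = -0.3                                    # larger deposits tend to be lower grade
cov = np.array([[sigma[0]**2, rho * sigma[0] * sigma[1]],
                [rho * sigma[0] * sigma[1], sigma[1]**2]])

totals = np.empty(N_SIMS)
for i in range(N_SIMS):
    n = rng.choice(deposit_counts, p=count_probs)
    if n == 0:
        totals[i] = 0.0
        continue
    logs = rng.multivariate_normal(mu, cov, size=n)
    tonnage, grade = np.exp(logs[:, 0]), np.exp(logs[:, 1])
    totals[i] = np.sum(tonnage * grade)       # contained metal, tonnes

print(np.percentile(totals, [10, 50, 90]))    # probability distribution of contained metal
```

The output percentiles summarize the distribution of contained metal; the mass at zero reflects the elicited probability that no deposit is present.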
Estimation of flow accumulation uncertainty by Monte Carlo stochastic simulations
Directory of Open Access Journals (Sweden)
Višnjevac Nenad; Cvijetinović Željko; Bajat Branislav; Radić Boris; Ristić Ratko; Milčanović Vukašin
2013-01-01
Full Text Available Very often, outputs provided by GIS functions and analyses are assumed to be exact results. However, they are affected by a certain uncertainty which may influence the decisions based on those results. It is difficult, and often practically impossible, to calculate that uncertainty using classical mathematical models because of the very complex algorithms used in GIS analyses. In this paper we discuss an alternative method, i.e. the use of stochastic Monte Carlo simulations to estimate the uncertainty of flow accumulation. The case study area was the broader area of the Municipality of Čačak, where Monte Carlo stochastic simulations were applied in order to create one hundred possible realizations of flow accumulation. A statistical analysis was performed on these realizations, and the "most likely" flow accumulation map, together with its confidence bounds (standard deviation), was created. Further, this paper describes the most important phases in the process of estimating uncertainty, such as variogram modelling and choosing the right number of simulations. Finally, it makes suggestions on how to effectively use and discuss the results and their practical significance.
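A minimal version of the workflow described above, using white-noise DEM perturbations instead of variogram-modelled correlated fields and a toy D8 routing rule, might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def d8_flow_accumulation(dem):
    """Toy D8 flow accumulation: each cell drains to its steepest downhill neighbour."""
    rows, cols = dem.shape
    order = np.argsort(dem, axis=None)[::-1]   # process cells from highest to lowest
    acc = np.ones_like(dem)                    # every cell contributes itself
    for idx in order:
        r, c = divmod(int(idx), cols)
        best, target = 0.0, None
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr, dc) != (0, 0) and 0 <= rr < rows and 0 <= cc < cols:
                    drop = dem[r, c] - dem[rr, cc]
                    if drop > best:
                        best, target = drop, (rr, cc)
        if target is not None:
            acc[target] += acc[r, c]
    return acc

# Hypothetical tilted-plane DEM; real inputs would be an actual terrain model
base_dem = np.add.outer(np.linspace(10, 0, 20), np.linspace(5, 0, 20))
sims = np.stack([d8_flow_accumulation(base_dem + rng.normal(0, 0.1, base_dem.shape))
                 for _ in range(100)])
mean_acc, std_acc = sims.mean(axis=0), sims.std(axis=0)   # "most likely" map and its uncertainty
```

`mean_acc` plays the role of the "most likely" flow accumulation and `std_acc` its per-cell confidence bound; the paper's approach additionally conditions the noise on a fitted variogram.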
Monte Carlo simulation for the estimation of iron in human whole ...
Indian Academy of Sciences (India)
2017-02-10
Monte Carlo N-particle (MCNP) code has been used to simulate the transport of gamma photons of different energies (22, 31, 59.5 and 81 keV) to estimate the iron content in solutions. In this study, MCNP simulation results are compared with experimental and XCOM theoretical data.
Blom, H.A.P.; Krystul, J.; Bakker, G.J.
2006-01-01
We study the problem of estimating small reachability probabilities for large scale stochastic hybrid processes through Sequential Monte Carlo (SMC) simulation. Recently, [Cerou et al., 2002, 2005] developed an SMC approach for diffusion processes, and referred to the resulting SMC algorithm as an
Optical skin assessment based on spectral reflectance estimation and Monte Carlo simulation
Bauer, Jacob R.; Hardeberg, Jon Y.; Verdaasdonk, Rudolf
2017-02-01
Optical non-contact measurements in general, and chromophore concentration estimation in particular, have been identified as useful tools for skin assessment. Spectral estimation using a low-cost handheld device has not been studied adequately as a basis for skin assessment. On the one hand, spectral measurements, which require bulky, expensive and complex devices, and on the other hand, direct channel approaches, which operate with simple optical devices, have been considered and applied for skin assessment. In this study, we analyse the capabilities of spectral estimation for skin assessment in the form of chromophore concentration estimation using a prototypical low-cost optical non-contact device. A spectral estimation workflow is implemented and combined with pre-simulated Monte Carlo spectra, so that spectra estimated from conventional image sensors can be used for chromophore concentration estimation and to obtain health metrics. To evaluate the proposed approach, we performed a series of occlusion experiments and examined the capabilities of the proposed process. Additionally, the method has been applied to more general skin assessment tasks. The proposed process provides a more general representation in the form of a spectral image cube, which can be used for more advanced analysis, and the comparisons show good agreement with expectations and conventional skin assessment methods. Utilising spectral estimation in conjunction with Monte Carlo simulation could lead to low-cost, easy-to-use, handheld and multifunctional optical skin assessment, with the possibility to improve skin assessment and the diagnosis of diseases.
Directory of Open Access Journals (Sweden)
Jae Phil Park; Chi Bum Bahn
2016-06-01
Full Text Available The typical experimental procedure for testing stress corrosion cracking initiation involves an interval-censored reliability test. Based on these test results, the parameters of a Weibull distribution, which is a widely accepted crack initiation model, can be estimated using maximum likelihood estimation or median rank regression. However, it is difficult to determine the appropriate number of test specimens and censoring intervals required to obtain sufficiently accurate Weibull estimators. In this study, we compare maximum likelihood estimation and median rank regression using a Monte Carlo simulation to examine the effects of the total number of specimens, test duration, censoring interval, and shape parameters of the true Weibull distribution on the estimator uncertainty. Finally, we provide the quantitative uncertainties of both Weibull estimators, compare them with the true Weibull parameters, and suggest proper experimental conditions for developing a probabilistic crack initiation model through crack initiation tests.
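The estimator comparison above can be sketched in a simplified form. For brevity this sketch uses complete (uncensored) samples rather than the interval-censored data of the study, and all sample sizes and true parameters are illustrative:

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(1)
SHAPE_TRUE, SCALE_TRUE, N, REPS = 2.0, 100.0, 30, 200

mle, mrr = [], []
for _ in range(REPS):
    x = SCALE_TRUE * rng.weibull(SHAPE_TRUE, N)     # complete (uncensored) lifetimes
    # Maximum likelihood estimation with the location parameter fixed at zero
    c_hat, _, _ = weibull_min.fit(x, floc=0)
    mle.append(c_hat)
    # Median rank regression: fit ln(-ln(1 - F)) = c ln(x) - c ln(scale)
    xs = np.sort(x)
    F = (np.arange(1, N + 1) - 0.3) / (N + 0.4)     # Benard's median rank approximation
    slope, _ = np.polyfit(np.log(xs), np.log(-np.log(1 - F)), 1)
    mrr.append(slope)

print(np.mean(mle), np.mean(mrr))   # both near the true shape 2.0, with different biases
```

Repeating this over different sample sizes and censoring schemes is exactly the kind of experiment the study uses to quantify estimator uncertainty.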
Energy Technology Data Exchange (ETDEWEB)
Burke, Timothy P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kiedrowski, Brian C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Martin, William R. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-11-19
Kernel Density Estimators (KDEs) are a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations. Kernel density estimators are an alternative to histogram tallies for obtaining global solutions. With KDEs, a single event, either a collision or a particle track, can contribute to the score at multiple tally points, with the uncertainty at those points being independent of the desired resolution of the solution. Thus, KDEs show potential for obtaining estimates of a global solution with reduced variance when compared to a histogram. Previously, KDEs have been applied to neutronics for one-group reactor physics problems and fixed-source shielding applications; however, little work has been done on obtaining reaction rates using KDEs. This paper introduces a new form of the mean-free-path (MFP) KDE that is capable of handling general geometries. Furthermore, extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies into the solution. An ad hoc correction for these inaccuracies is introduced that produces errors smaller than 4% at material interfaces.
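The variance-versus-resolution argument can be illustrated with a toy 1-D tally (this is a generic Gaussian KDE on a stand-in exponential source, not the MFP KDE of the paper):

```python
import numpy as np

rng = np.random.default_rng(7)
samples = rng.exponential(scale=1.0, size=5000)   # stand-in for collision sites along a ray

# Histogram tally: variance grows as the bins shrink, coupling resolution to noise
hist_vals, edges = np.histogram(samples, bins=50, range=(0, 5), density=True)

# Kernel density tally: every sampled event contributes to every evaluation point,
# so the uncertainty at a point does not depend on a bin width
def gaussian_kde(points, data, h):
    u = (points[:, None] - data[None, :]) / h
    return np.mean(np.exp(-0.5 * u**2), axis=1) / (h * np.sqrt(2 * np.pi))

grid = np.linspace(0.5, 4.5, 50)
kde_vals = gaussian_kde(grid, samples, h=0.2)
true_vals = np.exp(-grid)                         # exact density of the toy source
```

The KDE can be evaluated at arbitrary points from the same event list, which is the property that makes kernel tallies attractive for global Monte Carlo solutions.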
Energy Technology Data Exchange (ETDEWEB)
Lee, Choonsik; Kim, Kwang Pyo; Long, Daniel; Fisher, Ryan; Tien, Chris; Simon, Steven L.; Bouville, Andre; Bolch, Wesley E. [Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institute of Health, Bethesda, Maryland 20852 (United States); Department of Nuclear Engineering, Kyung Hee University, Yongin 446-701 (Korea, Republic of); Department of Nuclear and Radiological Engineering, University of Florida, Gainesville, Florida 32611 (United States); Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institute of Health, Bethesda, Maryland 20852 (United States); Department of Nuclear and Radiological Engineering, University of Florida, Gainesville, Florida 32611 (United States)
2011-03-15
Purpose: To develop a computed tomography (CT) organ dose estimation method designed to readily provide organ doses in a reference adult male and female for different scan ranges, and to investigate the degree to which existing commercial programs can reasonably match organ doses defined in these more anatomically realistic adult hybrid phantoms. Methods: The x-ray fan beam in the SOMATOM Sensation 16 multidetector CT scanner was simulated within the Monte Carlo radiation transport code MCNPX2.6. The simulated CT scanner model was validated through comparison with experimentally measured lateral free-in-air dose profiles and computed tomography dose index (CTDI) values. The reference adult male and female hybrid phantoms were coupled with the established CT scanner model following arm removal to simulate clinical head and other body region scans. A set of organ dose matrices were calculated for a series of consecutive axial scans ranging from the top of the head to the bottom of the phantoms with a beam thickness of 10 mm and the tube potentials of 80, 100, and 120 kVp. The organ doses for head, chest, and abdomen/pelvis examinations were calculated based on the organ dose matrices and compared to those obtained from two commercial programs, CT-EXPO and CTDOSIMETRY. Organ dose calculations were repeated for an adult stylized phantom by using the same simulation method used for the adult hybrid phantom. Results: Comparisons of both lateral free-in-air dose profiles and CTDI values through experimental measurement with the Monte Carlo simulations showed good agreement to within 9%. Organ doses for head, chest, and abdomen/pelvis scans reported in the commercial programs exceeded those from the Monte Carlo calculations in both the hybrid and stylized phantoms in this study, sometimes by orders of magnitude. Conclusions: The organ dose estimation method and dose matrices established in this study readily provide organ doses for a reference adult male and female for different
DTI quality control assessment via error estimation from Monte Carlo simulations
Farzinfar, Mahshid; Li, Yin; Verde, Audrey R.; Oguz, Ipek; Gerig, Guido; Styner, Martin A.
2013-03-01
Diffusion Tensor Imaging (DTI) is currently the state of the art method for characterizing the microscopic tissue structure of white matter in normal or diseased brain in vivo. DTI is estimated from a series of Diffusion Weighted Imaging (DWI) volumes. DWIs suffer from a number of artifacts which mandate stringent Quality Control (QC) schemes to eliminate lower quality images for optimal tensor estimation. Conventionally, QC procedures exclude artifact-affected DWIs from subsequent computations leading to a cleaned, reduced set of DWIs, called DWI-QC. Often, a rejection threshold is heuristically/empirically chosen above which the entire DWI-QC data is rendered unacceptable and thus no DTI is computed. In this work, we have devised a more sophisticated, Monte-Carlo (MC) simulation based method for the assessment of resulting tensor properties. This allows for a consistent, error-based threshold definition in order to reject/accept the DWI-QC data. Specifically, we propose the estimation of two error metrics related to directional distribution bias of Fractional Anisotropy (FA) and the Principal Direction (PD). The bias is modeled from the DWI-QC gradient information and a Rician noise model incorporating the loss of signal due to the DWI exclusions. Our simulations further show that the estimated bias can be substantially different with respect to magnitude and directional distribution depending on the degree of spatial clustering of the excluded DWIs. Thus, determination of diffusion properties with minimal error requires an evenly distributed sampling of the gradient directions before and after QC.
Directory of Open Access Journals (Sweden)
Sugihara Robert T
2008-04-01
Full Text Available Abstract Background Plotless density estimators are those that are based on distance measures rather than counts per unit area (quadrats or plots) to estimate the density of some usually stationary event, e.g. burrow openings, damage to plant stems, etc. These estimators typically use distance measures between events and from random points to events to derive an estimate of density. The error and bias of these estimators for the various spatial patterns found in nature have previously been examined using simulated populations only. In this study we investigated eight plotless density estimators to determine which were robust across a wide range of data sets from fully mapped field sites. These covered a wide range of situations, including animal damage to rice and corn, nest locations, active rodent burrows and distributions of plants. Monte Carlo simulations were applied to sample the data sets, and in all cases the error of the estimate (measured as relative root mean square error) was reduced with increasing sample size. The method of calculation and ease of use in the field were also used to judge the usefulness of each estimator. Estimators were evaluated in their original published forms, although the variable area transect (VAT) and ordered distance methods have been the subjects of optimization studies. Results An estimator that was a compound of three basic distance estimators was found to be robust across all spatial patterns for sample sizes of 25 or greater. The same field methodology can be used either with the basic distance formula or with the formula used with the Kendall-Moran estimator, in which case a reduction in error may be gained for sample sizes less than 25; however, there is no improvement for larger sample sizes. The variable area transect (VAT) method performed moderately well, is easy to use in the field, and its calculations are easy to undertake. Conclusion Plotless density estimators can provide an estimate of density in situations where it
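The Monte Carlo sampling scheme used to evaluate such estimators can be sketched with one basic point-to-nearest-event distance estimator on a simulated homogeneous Poisson pattern (the true density, plot size, and sample size below are illustrative, and field patterns are generally not Poisson):

```python
import numpy as np

rng = np.random.default_rng(4)
TRUE_DENSITY, SIDE = 50.0, 10.0   # events per unit area on a SIDE x SIDE plot

# Simulated "fully mapped site": a homogeneous Poisson point pattern
events = rng.uniform(0, SIDE, size=(rng.poisson(TRUE_DENSITY * SIDE**2), 2))

def point_distance_estimate(n_points):
    """Basic point-to-nearest-event distance estimator of density,
    lambda_hat = (n - 1) / (pi * sum r_i^2), unbiased under a Poisson pattern."""
    pts = rng.uniform(0, SIDE, size=(n_points, 2))
    d = np.min(np.linalg.norm(pts[:, None, :] - events[None, :, :], axis=2), axis=1)
    return (n_points - 1) / (np.pi * np.sum(d**2))

# Monte Carlo resampling of the mapped site, as in the study's evaluation protocol
estimates = np.array([point_distance_estimate(30) for _ in range(200)])
print(estimates.mean(), estimates.std())
```

Repeating this for clustered or regular patterns, and for each candidate estimator, yields the error and bias comparisons reported in the Results.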
Estimation of crosstalk in LED fNIRS by photon propagation Monte Carlo simulation
Iwano, Takayuki; Umeyama, Shinji
2015-12-01
fNIRS (functional near-infrared spectroscopy) can measure brain activity non-invasively and has advantages such as low cost and portability. While conventional fNIRS has used laser light, LED-based fNIRS is recently becoming common. Using LEDs, fNIRS equipment can be more inexpensive and more portable. LED light, however, has a wider illumination spectrum than laser light, which may change the crosstalk between the calculated concentration changes of oxygenated and deoxygenated hemoglobin. The crosstalk is caused by differences in light path length in the head tissues depending on the wavelengths used. We conducted Monte Carlo simulations of photon propagation in the tissue layers of the head (scalp, skull, CSF, gray matter, and white matter) to estimate the light path length in each layer. Based on the estimated path lengths, the crosstalk in fNIRS using LED light was calculated. Our results showed that LED light increases the crosstalk more than laser light does when certain combinations of wavelengths are adopted. Even in such cases, the crosstalk increase caused by LED light can be effectively suppressed by replacing the extinction coefficients used in the hemoglobin calculation with their weighted averages over the illumination spectrum.
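The crosstalk calculation itself reduces to inverting the modified Beer-Lambert system with mismatched coefficients. The sketch below uses invented extinction coefficients and path lengths (not tabulated hemoglobin data): attenuation changes are generated with LED-effective, spectrum-weighted coefficients and then inverted with narrow-line laser coefficients, so any recovered deoxy-hemoglobin change is pure crosstalk:

```python
import numpy as np

# Hypothetical molar extinction coefficients [eps_HbO2, eps_Hb] (mM^-1 cm^-1)
eps_laser = np.array([[1.5, 3.8],    # band 1: narrow laser line
                      [2.8, 1.8]])   # band 2
# Effective coefficients for a broad LED: spectrum-weighted averages (assumed values)
eps_led = np.array([[1.6, 3.5],
                    [2.7, 2.0]])

path = np.array([6.0, 5.5])          # partial path lengths (cm), as from photon-propagation MC

true_dc = np.array([0.01, 0.0])      # true change: d[HbO2] = 0.01 mM, d[Hb] = 0
d_od = (eps_led * path[:, None]) @ true_dc            # attenuation measured under LED light
recovered = np.linalg.solve(eps_laser * path[:, None], d_od)  # inverted with laser coefficients

crosstalk = recovered[1] / true_dc[0]   # spurious d[Hb] per unit of true d[HbO2]
print(recovered, crosstalk)
```

With these assumed numbers the spurious deoxy-hemoglobin signal is a few percent of the true oxy-hemoglobin change; replacing `eps_laser` with the spectrum-weighted `eps_led` in the inversion removes the crosstalk entirely, which is the suppression strategy the abstract describes.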
Hong-Ghi Min
2011-01-01
Using Monte Carlo simulation of the portfolio-balance model of exchange rates, we report finite-sample properties of the GMM estimator for testing over-identifying restrictions in the simultaneous equations model. The F-form of Sargan's statistic performs better than its chi-squared form, while Hansen's GMM statistic has the smallest bias.
Lee, Taewoong; Lee, Hyounggun; Kim, Younghak; Lee, Wonho
2017-07-01
The performance of a Compton imager using a single three-dimensional position-sensitive LYSO scintillator detector was estimated using a Monte Carlo simulation. The Compton imager consisted of a single LYSO scintillator with a pixelized structure. The size of the scintillator and each pixel were 1.3 × 1.3 × 1.3 cm3 and 0.3 × 0.3 × 0.3 cm3, respectively. The order of γ-ray interactions was determined based on the deposited energies in each detector. After the determination of the interaction sequence, various types of reconstruction algorithms such as simple back-projection, filtered back-projection, and list-mode maximum-likelihood expectation maximization (LM-MLEM) were applied and compared with each other in terms of their angular resolution and signal-to-noise ratio (SNR) for several γ-ray energies. The LM-MLEM reconstruction algorithm exhibited the best performance for Compton imaging in maintaining high angular resolution and SNR. The two sources of 137Cs (662 keV) could be distinguishable if they were more than 17° apart. The reconstructed Compton images showed the precise position and distribution of various radiation isotopes, which demonstrated the feasibility of the monitoring of nuclear materials in homeland security and radioactive waste management applications.
IVF cycle cost estimation using Activity Based Costing and Monte Carlo simulation.
Cassettari, Lucia; Mosca, Marco; Mosca, Roberto; Rolando, Fabio; Costa, Mauro; Pisaturo, Valerio
2016-03-01
The authors present a new stochastic methodological approach for determining the actual costs of a healthcare process. The paper specifically shows the application of the methodology to determining the cost of an assisted reproductive technology (ART) treatment in Italy. The motivation for this research is that a deterministic approach is inadequate for an accurate estimate of the cost of this particular treatment: the durations of the different activities involved are not fixed and are described by frequency distributions. Hence the need to determine, in addition to the mean value of the cost, the interval within which it varies at a known confidence level. The cost obtained for each type of cycle investigated (in vitro fertilization and embryo transfer, with or without intracytoplasmic sperm injection) shows tolerance intervals around the mean value sufficiently narrow to make the data statistically robust, and therefore usable as a reference for benchmarking against other countries. Methodologically, the approach is rigorous: Activity Based Costing is used to determine the cost of the individual activities of the process, and Monte Carlo simulation, with control of experimental error, to construct the tolerance intervals on the final result.
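The combination of Activity Based Costing with Monte Carlo simulation can be sketched as follows; the activities, hourly rates, and duration distributions are invented placeholders, not the paper's ART process data:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 20_000

# Hypothetical activities of one treatment cycle: (hourly cost in EUR, duration draw in hours)
activities = {
    "consultation": (120.0, lambda: rng.triangular(0.5, 0.75, 1.5)),
    "lab_work":     (200.0, lambda: rng.lognormal(np.log(2.0), 0.3)),
    "procedure":    (350.0, lambda: rng.triangular(0.5, 1.0, 2.0)),
}
fixed_costs = 800.0   # consumables, drugs, etc. (assumed)

# Monte Carlo over activity durations: each run prices one simulated cycle
totals = np.array([fixed_costs + sum(rate * draw() for rate, draw in activities.values())
                   for _ in range(N)])

mean = totals.mean()
lo, hi = np.percentile(totals, [2.5, 97.5])   # interval around the mean cost
print(f"mean {mean:.0f} EUR, 95% interval [{lo:.0f}, {hi:.0f}]")
```

The interval width is what the deterministic costing misses; in the paper it is a tolerance interval with controlled experimental error rather than simple percentiles.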
On the predictivity of pore-scale simulations: estimating uncertainties with multilevel Monte Carlo
Icardi, Matteo
2016-02-08
A fast method with tunable accuracy is proposed to estimate errors and uncertainties in pore-scale and Digital Rock Physics (DRP) problems. The overall predictivity of these studies can be, in fact, hindered by many factors including sample heterogeneity, computational and imaging limitations, model inadequacy and not perfectly known physical parameters. The typical objective of pore-scale studies is the estimation of macroscopic effective parameters such as permeability, effective diffusivity and hydrodynamic dispersion. However, these are often non-deterministic quantities (i.e., results obtained for specific pore-scale sample and setup are not totally reproducible by another “equivalent” sample and setup). The stochastic nature can arise due to the multi-scale heterogeneity, the computational and experimental limitations in considering large samples, and the complexity of the physical models. These approximations, in fact, introduce an error that, being dependent on a large number of complex factors, can be modeled as random. We propose a general simulation tool, based on multilevel Monte Carlo, that can drastically reduce the computational cost needed for computing accurate statistics of effective parameters and other quantities of interest, under any of these random errors. This is, to our knowledge, the first attempt to include Uncertainty Quantification (UQ) in pore-scale physics and simulation. The method can also provide estimates of the discretization error and it is tested on three-dimensional transport problems in heterogeneous materials, where the sampling procedure is done by generation algorithms able to reproduce realistic consolidated and unconsolidated random sphere and ellipsoid packings and arrangements. A totally automatic workflow is developed in an open-source code [2015. https://bitbucket.org/micardi/porescalemc.] that includes rigid body physics and random packing algorithms, unstructured mesh discretization, finite volume solvers, extrapolation and post-processing techniques.
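The multilevel Monte Carlo idea behind the tool can be sketched with a surrogate solver. Here a pore-scale solve at refinement level l is mimicked by a toy function with a deterministic discretization error that halves per level (an assumed error model, not a real flow solver):

```python
import numpy as np

rng = np.random.default_rng(11)

def solver(x, level):
    """Surrogate for a pore-scale solve at mesh level `level`: exact value x**2
    plus a discretization error that halves with each refinement (assumed model)."""
    return x**2 + 2.0 ** (-level)

def mlmc_estimate(max_level, n0=4096):
    """Multilevel Monte Carlo via the telescoping sum
    E[P_L] = E[P_0] + sum_{l=1..L} E[P_l - P_{l-1}], with coupled samples per level."""
    total = 0.0
    for level in range(max_level + 1):
        n = max(n0 >> level, 16)            # fewer samples on the expensive fine levels
        x = rng.standard_normal(n)          # the SAME inputs drive coarse and fine solves
        if level == 0:
            total += np.mean(solver(x, 0))
        else:
            # Because coarse and fine solves share x, the correction has tiny variance
            # (exactly zero here, since the toy error model is deterministic)
            total += np.mean(solver(x, level) - solver(x, level - 1))
    return total

est = mlmc_estimate(max_level=5)
print(est)   # roughly E[X**2] + 2**-5, i.e. about 1.03
```

The coupling of coarse and fine samples is what lets most of the work happen on cheap coarse levels while the fine levels only correct a small, low-variance difference.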
Estimating the probability of recontamination via the air using Monte Carlo simulations.
den Aantrekker, Esther D; Beumer, Rijkelt R; van Gerwen, Suzanne J C; Zwietering, Marcel H; van Schothorst, Mick; Boom, Remko M
2003-10-15
Recontamination of food products can cause foodborne illnesses or spoilage of foods. It is therefore useful to quantify this recontamination so that it can be incorporated in microbiological risk assessments (MRA). This paper describes a first attempt to quantify one of the recontamination routes: via the air. Data on the number of airborne microorganisms were collected from literature and industries. The settling velocities of different microorganisms were calculated for different products by combining the data on aerial concentrations with sedimentation counts assuming that settling is under the influence of gravity only. Air movement is not explicitly considered in this study. Statistical analyses were performed to clarify the effect of different products and seasons on the number of airborne microorganisms and the settling velocity. For both bacteria and moulds, three significantly different product categories with regard to the level of airborne organisms were identified. The statistical distribution in these categories was described by a lognormal distribution. The settling velocity did not depend on the product, the season of sampling or the type of microorganism, and had a geometrical mean value of 2.7 mm/s. The statistical distribution of the settling velocity was described by a lognormal distribution as well. The probability of recontamination via the air was estimated by the product of the number of bacteria in the air, the settling velocity, and the exposed area and time of the product. For three example products, the contamination level as a result of airborne recontamination was estimated using Monte Carlo simulations. What-if scenarios were used to exemplify determination of design criteria to control a specified contamination level.
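The deposition model above (airborne concentration × settling velocity × exposed area × exposure time, with lognormal inputs) can be sketched directly; the 2.7 mm/s geometric mean is from the abstract, but the concentration parameters, spreads, area, and time are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
N = 50_000

# Lognormal airborne concentration (CFU/m^3); parameters illustrative, not the paper's fits
conc = rng.lognormal(mean=np.log(50.0), sigma=1.0, size=N)
# Settling velocity (m/s): geometric mean 2.7 mm/s as reported, spread assumed
v_settle = rng.lognormal(mean=np.log(2.7e-3), sigma=0.5, size=N)

area = 0.05       # exposed product area, m^2 (assumed)
t_exposed = 600   # exposure time, s (assumed)

deposited = conc * v_settle * area * t_exposed   # CFU deposited on the product
print(np.percentile(deposited, [50, 95]))
```

What-if scenarios amount to rerunning this with modified inputs (e.g. shorter exposure or filtered air) and reading off the percentile that must stay below a design criterion.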
Energy Technology Data Exchange (ETDEWEB)
Lee, Choonsik; Kim, Kwang Pyo; Long, Daniel J.; Bolch, Wesley E. [Division of Cancer Epidemiology and Genetics, National Cancer Institute, National Institute of Health, Bethesda, Maryland 20852 (United States); Department of Nuclear Engineering, Kyung Hee University, Gyeonggi-do, 446906 (Korea, Republic of); J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, Florida 32611 (United States)
2012-04-15
Purpose: To establish an organ dose database for pediatric and adolescent reference individuals undergoing computed tomography (CT) examinations by using Monte Carlo simulation. The data will permit rapid estimates of organ and effective doses for patients of different age, gender, examination type, and CT scanner model. Methods: The Monte Carlo simulation model of a Siemens Sensation 16 CT scanner previously published was employed as a base CT scanner model. A set of absorbed doses for 33 organs/tissues normalized to the product of 100 mAs and CTDIvol (mGy per 100 mAs·mGy) was established by coupling the CT scanner model with age-dependent reference pediatric hybrid phantoms. A series of single axial scans from the top of the head to the feet of the phantoms was performed at a slice thickness of 10 mm, and at tube potentials of 80, 100, and 120 kVp. Using the established CTDIvol- and 100 mAs-normalized dose matrix, organ doses for different pediatric phantoms undergoing head, chest, abdomen-pelvis, and chest-abdomen-pelvis (CAP) scans with the Siemens Sensation 16 scanner were estimated and analyzed. The results were then compared with the values obtained from three independent published methods: CT-Expo software, organ dose for abdominal CT scans derived empirically from patient abdominal circumference, and effective dose per dose-length product (DLP). Results: Organ and effective doses were calculated and normalized to 100 mAs and CTDIvol for different CT examinations. At the same technical settings, doses to organs that were entirely included in the CT beam coverage were higher by 40 to 80% for newborn phantoms compared to those of 15-year phantoms. An increase of tube potential from 80 to 120 kVp resulted in a 2.5-2.9-fold greater brain dose for head scans. The results from this study were compared with three different published studies and/or techniques. First, organ doses were compared to those given by CT-Expo which revealed dose
Channon, H A; Hamilton, A J; D'Souza, D N; Dunshea, F R
2016-06-01
Monte Carlo simulation was investigated as a potential methodology to estimate sensory tenderness, flavour and juiciness scores of pork following the implementation of key pathway interventions known to influence eating quality. Correction factors were established using mean data from published studies investigating key production, processing and cooking parameters. Probability distributions of correction factors were developed for single pathway parameters only, due to lack of interaction data. Except for moisture infusion, ageing period, aitchbone hanging and cooking pork to an internal temperature of >74°C, only small shifts in the mean of the probability distributions of correction factors were observed for the majority of pathway parameters investigated in this study. Output distributions of sensory scores, generated from Monte Carlo simulations of input distributions of correction factors and for individual pigs, indicated that this methodology may be useful in estimating both the shift and variability in pork eating traits when different pathway interventions are applied. Copyright © 2016 Elsevier Ltd. All rights reserved.
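The correction-factor simulation can be sketched as follows; the baseline score and every correction distribution below are invented placeholders, since the paper's factors come from its literature meta-data:

```python
import numpy as np

rng = np.random.default_rng(9)
N = 10_000

base_tenderness = rng.normal(55.0, 8.0, N)   # assumed baseline sensory score (0-100 scale)

# Correction-factor distributions for single pathway interventions (illustrative values);
# interactions are ignored, as in the paper, for lack of interaction data
corrections = {
    "ageing_7d":          rng.normal(6.0, 2.0, N),
    "moisture_infusion":  rng.normal(8.0, 3.0, N),
    "endpoint_below_74C": rng.normal(3.0, 1.5, N),
}

adjusted = base_tenderness + sum(corrections.values())
print(adjusted.mean(), adjusted.std())   # shift and spread of the predicted sensory score
```

The output distribution captures both the shift in the mean score and the variability across pigs that the abstract highlights.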
Monte Carlo simulation for the estimation of iron in human whole ...
Indian Academy of Sciences (India)
The simulation shows that the obtained results are in good agreement with experimental data, and better than the theoretical XCOM values. The study indicates that MCNP simulation is an excellent tool to estimate the iron concentration in blood samples. The MCNP code can also be utilized to estimate other trace ...
Oxygen transport properties estimation by classical trajectory-direct simulation Monte Carlo
Bruno, Domenico; Frezzotti, Aldo; Ghiroldi, Gian Pietro
2015-05-01
Coupling direct simulation Monte Carlo (DSMC) simulations with classical trajectory calculations is a powerful tool to improve predictive capabilities of computational dilute gas dynamics. The considerable increase in computational effort outlined in early applications of the method can be compensated by running simulations on massively parallel computers. In particular, Graphics Processing Unit acceleration has been found quite effective in reducing computing time of classical trajectory (CT)-DSMC simulations. The aim of the present work is to study dilute molecular oxygen flows by modeling binary collisions, in the rigid rotor approximation, through an accurate Potential Energy Surface (PES), obtained from molecular beam scattering. The PES accuracy is assessed by calculating molecular oxygen transport properties by different equilibrium and non-equilibrium CT-DSMC based simulations that provide close values of the transport properties. Comparisons with available experimental data are presented and discussed in the temperature range 300-900 K, where vibrational degrees of freedom are expected to play a limited (but not always negligible) role.
O'Hagan, Anthony; Stevenson, Matt; Madan, Jason
2007-10-01
Probabilistic sensitivity analysis (PSA) is required to account for uncertainty in cost-effectiveness calculations arising from health economic models. The simplest way to perform PSA in practice is by Monte Carlo methods, which involves running the model many times using randomly sampled values of the model inputs. However, this can be impractical when the economic model takes appreciable amounts of time to run. This situation arises, in particular, for patient-level simulation models (also known as micro-simulation or individual-level simulation models), where a single run of the model simulates the health care of many thousands of individual patients. The large number of patients required in each run to achieve accurate estimation of cost-effectiveness means that only a relatively small number of runs is possible. For this reason, it is often said that PSA is not practical for patient-level models. We develop a way to reduce the computational burden of Monte Carlo PSA for patient-level models, based on the algebra of analysis of variance. Methods are presented to estimate the mean and variance of the model output, with formulae for determining optimal sample sizes. The methods are simple to apply and will typically reduce the computational demand very substantially.
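The two-level structure described above lends itself to a compact sketch. The following is a hypothetical illustration, not the authors' formulae or model: an outer loop samples the uncertain model inputs, an inner loop simulates individual patients, and an ANOVA-style correction separates input uncertainty from residual patient-level noise. The `patient_cost` model and all numbers are invented for illustration.

```python
import random

def patient_cost(theta, rng):
    # hypothetical patient-level model: individual cost varies around a
    # parameter-dependent mean (both numbers are illustrative)
    return rng.gauss(theta, 5.0)

def psa(n_outer=200, n_inner=100, seed=1):
    """Two-level Monte Carlo PSA: the outer loop samples model inputs,
    the inner loop simulates individual patients."""
    rng = random.Random(seed)
    run_means, within_vars = [], []
    for _ in range(n_outer):
        theta = rng.gauss(100.0, 10.0)            # sampled model input
        costs = [patient_cost(theta, rng) for _ in range(n_inner)]
        m = sum(costs) / n_inner
        run_means.append(m)
        within_vars.append(sum((c - m) ** 2 for c in costs) / (n_inner - 1))
    grand = sum(run_means) / n_outer
    between = sum((m - grand) ** 2 for m in run_means) / (n_outer - 1)
    within = sum(within_vars) / n_outer
    # ANOVA correction: the variance of the run means overstates input
    # uncertainty by the residual patient-level term within / n_inner
    var_inputs = between - within / n_inner
    return grand, var_inputs, within
```

With these made-up distributions the corrected between-run variance recovers the true input variance (100) even though each run mean is itself noisy.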
Directory of Open Access Journals (Sweden)
Sinha A
2016-12-01
Background: Most preclinical studies are carried out on mice. For internal dose assessment of a mouse, specific absorbed fraction (SAF) values play an important role. In most studies, SAF values are estimated using older standard human organ compositions and values for limited source target pairs. Objective: SAF values for monoenergetic photons of energies 15, 50, 100, 500, 1000 and 4000 keV were evaluated for the Digimouse voxel phantom incorporated in the Monte Carlo code FLUKA. The organ sources considered in this study were lungs, skeleton, heart, bladder, testis, stomach, spleen, pancreas, liver, kidney, adrenal, eye and brain. The considered target organs were lungs, skeleton, heart, bladder, testis, stomach, spleen, pancreas, liver, kidney, adrenal and brain. Eye was considered as a target organ only for eye as a source organ. Organ compositions and densities were adopted from International Commission on Radiological Protection (ICRP) publication 110. Results: Evaluated organ masses and SAF values are presented in tabular form. It is observed that SAF values decrease with increasing source-to-target distance. The SAF value for self-irradiation decreases with increasing photon energy. The SAF values are also found to be dependent on the mass of the target in such a way that higher values are obtained for lower masses. The effect of composition is highest in the case of target organ lungs, where mass and estimated SAF values are found to have larger differences. Conclusion: These SAF values are very important for absorbed dose calculation for various organs of a mouse.
Kim, Sangroh; Yoshizumi, Terry T; Toncheva, Greta; Frush, Donald P; Yin, Fang-Fang
2010-03-01
The purpose of this study was to establish a dose estimation tool with Monte Carlo (MC) simulations. A 5-y-old paediatric anthropomorphic phantom was computed tomography (CT) scanned to create a voxelised phantom and used as an input for the abdominal cone-beam CT in a BEAMnrc/EGSnrc MC system. An X-ray tube model of the Varian On-Board Imager® was built in the MC system. To validate the model, the absorbed doses at each organ location for standard-dose and low-dose modes were measured in the physical phantom with MOSFET detectors; effective doses were also calculated. In the results, the MC simulations were comparable to the MOSFET measurements. This voxelised phantom approach could produce a more accurate dose estimation than the stylised phantom method. This model can be easily applied to multi-detector CT dosimetry.
Monte Carlo simulation of model spin systems
Indian Academy of Sciences (India)
three-dimensional Ising models and Heisenberg models are dealt with in some detail. Recent applications of the Monte Carlo method to spin glass systems and to estimate renormalisation group critical exponents are reviewed. Keywords: Monte Carlo simulation; critical phenomena; Ising models; Heisenberg models ...
DEFF Research Database (Denmark)
Jensen, Jørgen Juncher
2015-01-01
For non-linear systems the estimation of fatigue damage under stochastic loadings can be rather time-consuming. Usually Monte Carlo simulation (MCS) is applied, but the coefficient-of-variation (COV) can be large if only a small set of simulations can be done due to otherwise excessive CPU time...
Nagamine, Shuji; Fujibuchi, Toshioh; Umezu, Yoshiyuki; Himuro, Kazuhiko; Awamoto, Shinichi; Tsutsui, Yuji; Nakamura, Yasuhiko
2017-03-01
In this study, we estimated the ambient dose equivalent rate (hereafter "dose rate") in the fluoro-2-deoxy-D-glucose (FDG) administration room in our hospital using Monte Carlo simulations, and examined the appropriate medical-personnel locations and a shielding method to reduce the dose rate during FDG injection using a lead glass shield. The line source was assumed to be the FDG feed tube and the patient a cube source. The dose rate distribution was calculated with a composite source that combines the line and cube sources. The dose rate distribution was also calculated when a lead glass shield was placed in the rear section of the lead-acrylic shield. The dose rate behind the automatic administration device decreased by 87 % with respect to that behind the lead-acrylic shield. Upon positioning a 2.8-cm-thick lead glass shield, the dose rate behind the lead-acrylic shield decreased by 67 %.
A MONTE-CARLO METHOD FOR ESTIMATING THE CORRELATION EXPONENT
MIKOSCH, T; WANG, QA
We propose a Monte Carlo method for estimating the correlation exponent of a stationary ergodic sequence. The estimator can be considered as a bootstrap version of the classical Hill estimator. A simulation study shows that the method yields reasonable estimates.
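As a loose illustration of the idea (not the authors' bootstrap estimator, and on independent planar points rather than a stationary ergodic sequence), a Hill-type estimator applied to the smallest inter-point distances should recover a correlation exponent of about 2 for uniform points in the unit square:

```python
import math
import random

def correlation_exponent(points, k):
    """Hill-type estimate of alpha in P(distance < r) ~ c * r**alpha,
    built from the k smallest inter-point distances."""
    d = sorted(math.dist(p, q)
               for i, p in enumerate(points)
               for q in points[i + 1:])
    # log-spacings of the smallest order statistics relative to d[k]
    return k / sum(math.log(d[k] / d[i]) for i in range(k))

# iid uniform points in the unit square have correlation exponent 2
rng = random.Random(0)
pts = [(rng.random(), rng.random()) for _ in range(400)]
alpha = correlation_exponent(pts, k=400)
```

The pairwise distances are dependent, so the estimate is noisier than the iid Hill theory suggests, but it concentrates near 2.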
Lee, Whanhee; Kim, Ho; Hwang, Sunghee; Zanobetti, Antonella; Schwartz, Joel D; Chung, Yeonseung
2017-09-07
Rich literature has reported that there exists a nonlinear association between temperature and mortality. One important feature in the temperature-mortality association is the minimum mortality temperature (MMT). The commonly used approach for estimating the MMT is to determine the MMT as the temperature at which mortality is minimized in the estimated temperature-mortality association curve. Also, an approximate bootstrap approach was proposed to calculate the standard errors and the confidence interval for the MMT. However, the statistical properties of these methods were not fully studied. Our research assessed the statistical properties of the previously proposed methods in various types of the temperature-mortality association. We also suggested an alternative approach to provide a point and an interval estimates for the MMT, which improve upon the previous approach if some prior knowledge is available on the MMT. We compare the previous and alternative methods through a simulation study and an application. In addition, as the MMT is often used as a reference temperature to calculate the cold- and heat-related relative risk (RR), we examined how the uncertainty in the MMT affects the estimation of the RRs. The previously proposed method of estimating the MMT as a point (indicated as Argmin2) may increase bias or mean squared error in some types of temperature-mortality association. The approximate bootstrap method to calculate the confidence interval (indicated as Empirical1) performs properly achieving near 95% coverage but the length can be unnecessarily extremely large in some types of the association. We showed that an alternative approach (indicated as Empirical2), which can be applied if some prior knowledge is available on the MMT, works better reducing the bias and the mean squared error in point estimation and achieving near 95% coverage while shortening the length of the interval estimates. The Monte Carlo simulation-based approach to estimate the
Directory of Open Access Journals (Sweden)
Md Nabiul Islam Khan
In the Point-Centred Quarter Method (PCQM), the mean distance of the first nearest plants in each quadrant of a number of random sample points is converted to plant density. It is a quick method for plant density estimation. In recent publications the estimator equations of simple PCQM (PCQM1) and higher order ones (PCQM2 and PCQM3, which use the distance of the second and third nearest plants, respectively) show discrepancy. This study attempts to review PCQM estimators in order to find the most accurate equation form. We tested the accuracy of different PCQM equations using Monte Carlo simulations in simulated (having 'random', 'aggregated' and 'regular' spatial patterns) plant populations and empirical ones. PCQM requires at least 50 sample points to ensure a desired level of accuracy. PCQM with a corrected estimator is more accurate than with a previously published estimator. The published PCQM versions (PCQM1, PCQM2 and PCQM3) show significant differences in accuracy of density estimation, i.e. the higher order PCQM provides higher accuracy. However, the corrected PCQM versions show no significant differences among them as tested in various spatial patterns except in plant assemblages with a strong repulsion (plant competition). If N is the number of sample points and R is distance, the corrected estimator of PCQM1 is 4(4N - 1)/(π ∑ R²) but not 12N/(π ∑ R²), of PCQM2 is 4(8N - 1)/(π ∑ R²) but not 28N/(π ∑ R²) and of PCQM3 is 4(12N - 1)/(π ∑ R²) but not 44N/(π ∑ R²) as published. If the spatial pattern of a plant association is random, PCQM1 with a corrected equation estimator and over 50 sample points would be sufficient to provide accurate density estimation. PCQM using just the nearest tree in each quadrant is therefore sufficient, which facilitates sampling of trees, particularly in areas with just a few hundred trees per hectare. PCQM3 provides the best density estimations for all types of plant assemblages including the repulsion process
Khan, Md Nabiul Islam; Hijbeek, Renske; Berger, Uta; Koedam, Nico; Grueters, Uwe; Islam, S M Zahirul; Hasan, Md Asadul; Dahdouh-Guebas, Farid
2016-01-01
In the Point-Centred Quarter Method (PCQM), the mean distance of the first nearest plants in each quadrant of a number of random sample points is converted to plant density. It is a quick method for plant density estimation. In recent publications the estimator equations of simple PCQM (PCQM1) and higher order ones (PCQM2 and PCQM3, which use the distance of the second and third nearest plants, respectively) show discrepancy. This study attempts to review PCQM estimators in order to find the most accurate equation form. We tested the accuracy of different PCQM equations using Monte Carlo simulations in simulated (having 'random', 'aggregated' and 'regular' spatial patterns) plant populations and empirical ones. PCQM requires at least 50 sample points to ensure a desired level of accuracy. PCQM with a corrected estimator is more accurate than with a previously published estimator. The published PCQM versions (PCQM1, PCQM2 and PCQM3) show significant differences in accuracy of density estimation, i.e. the higher order PCQM provides higher accuracy. However, the corrected PCQM versions show no significant differences among them as tested in various spatial patterns except in plant assemblages with a strong repulsion (plant competition). If N is the number of sample points and R is distance, the corrected estimator of PCQM1 is 4(4N - 1)/(π ∑ R²) but not 12N/(π ∑ R²), of PCQM2 is 4(8N - 1)/(π ∑ R²) but not 28N/(π ∑ R²) and of PCQM3 is 4(12N - 1)/(π ∑ R²) but not 44N/(π ∑ R²) as published. If the spatial pattern of a plant association is random, PCQM1 with a corrected equation estimator and over 50 sample points would be sufficient to provide accurate density estimation. PCQM using just the nearest tree in each quadrant is therefore sufficient, which facilitates sampling of trees, particularly in areas with just a few hundred trees per hectare. PCQM3 provides the best density estimations for all types of plant assemblages including the repulsion process
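The corrected PCQM1 estimator can be checked directly by Monte Carlo. The sketch below (plant coordinates, plot size, and density are invented for illustration) scatters plants at random and applies 4(4N - 1)/(π ∑ R²) to 60 sample points; the estimate should land near the true density of 1 plant per unit area:

```python
import math
import random

def pcqm1_density(plants, sample_points):
    """Corrected PCQM1 estimator: density = 4(4N - 1) / (pi * sum R^2),
    where R is the distance to the nearest plant in each quadrant."""
    r2 = []
    for (x, y) in sample_points:
        nearest = [None] * 4                   # one slot per quadrant
        for (px, py) in plants:
            q = (0 if px >= x else 1) + (0 if py >= y else 2)
            d2 = (px - x) ** 2 + (py - y) ** 2
            if nearest[q] is None or d2 < nearest[q]:
                nearest[q] = d2
        r2.extend(nearest)                     # squared quadrant distances
    n = len(sample_points)
    return 4 * (4 * n - 1) / (math.pi * sum(r2))

# simulated 'random' pattern: ~1 plant per unit area on a 100 x 100 plot;
# sample points stay away from the border to avoid edge effects
rng = random.Random(42)
plants = [(rng.uniform(0, 100), rng.uniform(0, 100)) for _ in range(10000)]
samples = [(rng.uniform(5, 95), rng.uniform(5, 95)) for _ in range(60)]
density = pcqm1_density(plants, samples)
```

With N = 60 sample points (240 quadrant distances) the relative standard error of the corrected estimator is roughly 1/√(4N), about 6%.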
CSIR Research Space (South Africa)
Bidgood, Peter M
2017-01-01
... to be an effective alternative. The long simulation times of DSMC, when applied using conventional sequential codes, have been addressed by re-formulating the code to run on a multiple CPU-GPU platform. Simulation times spanning minutes have replaced those spanning...
Directory of Open Access Journals (Sweden)
S. Russo
2015-09-01
The aim of the study is to estimate the pension costs incurred for patients with musculoskeletal disorders (MDs) and specifically with rheumatoid arthritis (RA) and ankylosing spondylitis (AS) in Italy between 2009 and 2012. We analyzed the database of the Italian National Social Security Institute (Istituto Nazionale Previdenza Sociale, i.e. INPS) to estimate the total costs of three types of social security benefits granted to patients with MDs, RA and AS: disability benefits (for people with reduced working ability), disability pensions (for people who cannot qualify as workers) and incapacity pensions (for people without working ability). We developed a probabilistic model with a Monte Carlo simulation to estimate the total costs for each type of benefit associated with MDs, RA and AS. We also estimated the productivity loss resulting from RA in 2013. From 2009 to 2012 about 393 thousand treatments were paid for a total of approximately €2.7 billion. The annual number of treatments was on average 98 thousand and cost in total €674 million per year. In particular, the total pension burden was about €99 million for RA and €26 million for AS. The productivity loss for RA in 2013 was equal to €707,425,191 due to 9,174,221 working days lost. Our study is the first to estimate the burden of social security pensions for MDs based on data of both approved claims and benefits paid by the national security system. From 2009 to 2012, in Italy, the highest indirect costs were associated with disability pensions (54% of the total indirect cost), followed by disability benefits (44.1% of cost) and incapacity pensions (1.8% of cost). In conclusion, MDs are chronic and highly debilitating diseases with a strong female predominance and very significant economic and social costs that are set to increase due to the aging of the population.
Energy Technology Data Exchange (ETDEWEB)
Kanjilal, Oindrila, E-mail: oindrila@civil.iisc.ernet.in; Manohar, C.S., E-mail: manohar@civil.iisc.ernet.in
2017-07-15
The study considers the problem of simulation based time variant reliability analysis of nonlinear randomly excited dynamical systems. Attention is focused on importance sampling strategies based on the application of Girsanov's transformation method. Controls which minimize the distance function, as in the first order reliability method (FORM), are shown to minimize a bound on the sampling variance of the estimator for the probability of failure. Two schemes based on the application of calculus of variations for selecting control signals are proposed: the first obtains the control force as the solution of a two-point nonlinear boundary value problem, and, the second explores the application of the Volterra series in characterizing the controls. The relative merits of these schemes, vis-à-vis the method based on ideas from the FORM, are discussed. Illustrative examples, involving archetypal single degree of freedom (dof) nonlinear oscillators, and a multi-degree of freedom nonlinear dynamical system, are presented. The credentials of the proposed procedures are established by comparing the solutions with pertinent results from direct Monte Carlo simulations. - Highlights: • The distance minimizing control forces minimize a bound on the sampling variance. • Establishing Girsanov controls via solution of a two-point boundary value problem. • Girsanov controls via Volterra's series representation for the transfer functions.
Kim, Sangroh; Yoshizumi, Terry; Toncheva, Greta; Yoo, Sua; Yin, Fang-Fang; Frush, Donald
2010-05-01
To address the lack of accurate dose estimation method in cone beam computed tomography (CBCT), we performed point dose metal oxide semiconductor field-effect transistor (MOSFET) measurements and Monte Carlo (MC) simulations. A Varian On-Board Imager (OBI) was employed to measure point doses in the polymethyl methacrylate (PMMA) CT phantoms with MOSFETs for standard and low dose modes. A MC model of the OBI x-ray tube was developed using BEAMnrc/EGSnrc MC system and validated by the half value layer, x-ray spectrum and lateral and depth dose profiles. We compared the weighted computed tomography dose index (CTDIw) between MOSFET measurements and MC simulations. The CTDIw was found to be 8.39 cGy for the head scan and 4.58 cGy for the body scan from the MOSFET measurements in standard dose mode, and 1.89 cGy for the head and 1.11 cGy for the body in low dose mode, respectively. The CTDIw from MC compared well to the MOSFET measurements within 5% differences. In conclusion, a MC model for Varian CBCT has been established and this approach may be easily extended from the CBCT geometry to multi-detector CT geometry.
Proton Upset Monte Carlo Simulation
O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.
2009-01-01
The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.
Dawn, Sandipan; Bakshi, A K; Sathian, Deepa; Selvam, T Palani
2017-06-15
Neutron scatter contributions as a function of distance along the transverse axis of a 241Am-Be source were estimated by three different methods: shadow cone, semi-empirical and Monte Carlo. The Monte Carlo-based FLUKA code was used to simulate the existing room used for the calibration of the CR-39 detector as well as an LB6411 dose rate meter for selected distances from the 241Am-Be source. The modified 241Am-Be spectra at different irradiation geometries, such as different source-detector distances, behind the shadow cone, and at the surface of the water phantom, were also evaluated using Monte Carlo calculations. Neutron scatter contributions estimated using the three different methods compare reasonably well. It is proposed to use the scattering correction factors estimated through Monte Carlo simulation and the other methods for the calibration of the CR-39 detector and dose rate meter at 0.75 and 1 m distance from the source.
Hafeez Allan Agboola
2014-01-01
Pyrolysis of hydrocarbons and catalytic reforming of naphtha are important processes in petroleum refineries and petrochemical industries as they lead to production of light olefins, high octane gasoline, aromatics and so on. Thus, it is important to investigate their chemical kinetics in order to establish rate expressions or models for their reactions. In this research work, Monte-Carlo Simulation was applied to estimate kinetic parameters of two complex reactions: pyrolysis of n-Eicosane a...
Monte Carlo Simulation of an American Option
Directory of Open Access Journals (Sweden)
Gikiri Thuo
2007-04-01
We implement gradient estimation techniques for sensitivity analysis of option pricing which can be efficiently employed in Monte Carlo simulation. Using these techniques we can simultaneously obtain an estimate of the option value together with the estimates of sensitivities of the option value to various parameters of the model. After deriving the gradient estimates we incorporate them in an iterative stochastic approximation algorithm for pricing an option with early exercise features. We illustrate the procedure using an example of an American call option with a single dividend that is analytically tractable. In particular we incorporate estimates for the gradient with respect to the early exercise threshold level.
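The building block of such a scheme, estimating an option value and a pathwise gradient from the same simulation run, can be sketched for the simpler European call under Black-Scholes dynamics (this is not the paper's American-option algorithm; all parameters are illustrative):

```python
import math
import random

def call_value_and_delta(s0, k, r, sigma, t, n, seed=7):
    """One Monte Carlo run returns both the discounted option value and a
    pathwise estimate of its sensitivity (delta) to the spot price."""
    rng = random.Random(seed)
    disc = math.exp(-r * t)
    pay = delta = 0.0
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        st = s0 * math.exp((r - 0.5 * sigma ** 2) * t
                           + sigma * math.sqrt(t) * z)
        if st > k:
            pay += st - k
            delta += st / s0   # pathwise derivative dS_T/dS_0 = S_T / S_0
    return disc * pay / n, disc * delta / n

# illustrative at-the-money call: S0 = K = 100, r = 5%, sigma = 20%, T = 1
value, delta = call_value_and_delta(100.0, 100.0, 0.05, 0.2, 1.0, 200000)
```

Both estimates come from the same paths, so the gradient is obtained at essentially no extra simulation cost; for these inputs the Black-Scholes value is about 10.45 and the delta about 0.637.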
Monte Carlo Solutions for Blind Phase Noise Estimation
Directory of Open Access Journals (Sweden)
Çırpan Hakan
2009-01-01
This paper investigates the use of Monte Carlo sampling methods for phase noise estimation on additive white Gaussian noise (AWGN) channels. The main contributions of the paper are (i) the development of a Monte Carlo framework for phase noise estimation, with special attention to sequential importance sampling and Rao-Blackwellization, (ii) the interpretation of existing Monte Carlo solutions within this generic framework, and (iii) the derivation of a novel phase noise estimator. Contrary to the ad hoc phase noise estimators that have been proposed in the past, the estimators considered in this paper are derived from solid probabilistic and performance-determining arguments. Computer simulations demonstrate that, on one hand, the Monte Carlo phase noise estimators outperform the existing estimators and, on the other hand, our newly proposed solution exhibits a lower complexity than the existing Monte Carlo solutions.
Metropolis Methods for Quantum Monte Carlo Simulations
Ceperley, D.M.
2003-01-01
Since its first description fifty years ago, the Metropolis Monte Carlo method has been used in a variety of different ways for the simulation of continuum quantum many-body systems. This paper will consider some of the generalizations of the Metropolis algorithm employed in quantum Monte Carlo: variational Monte Carlo, dynamical methods for projector Monte Carlo (i.e. diffusion Monte Carlo with rejection), multilevel sampling in path integral Monte Carlo, the sampling of permutations, ...
Vrugt, J. A.
2011-04-01
Formal and informal Bayesian approaches are increasingly being used to treat forcing, model structural, parameter and calibration data uncertainty, and summarize hydrologic prediction uncertainty. This requires posterior sampling methods that approximate the (evolving) posterior distribution. We recently introduced the DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm, an adaptive Markov Chain Monte Carlo (MCMC) method that is especially designed to solve complex, high-dimensional and multimodal posterior probability density functions. The method runs multiple chains in parallel, and maintains detailed balance and ergodicity. Here, I present the latest algorithmic developments, and introduce a discrete sampling variant of DREAM that samples the parameter space at fixed points. The development of this new code, DREAM(D), has been inspired by the existing class of integer optimization problems, and emerging class of experimental design problems. Such non-continuous parameter estimation problems are of considerable theoretical and practical interest. The theory developed herein is applicable to DREAM(ZS) (Vrugt et al., 2011) and MT-DREAM(ZS) (Laloy and Vrugt, 2011) as well. Two case studies, involving a sudoku puzzle and a rainfall-runoff model calibration problem, are used to illustrate DREAM(D).
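For contrast with the multi-chain adaptive machinery of DREAM, the single-chain random-walk Metropolis baseline it improves upon can be sketched in a few lines (the standard-normal target is an assumed toy example, not a hydrologic posterior):

```python
import math
import random

def metropolis(log_target, x0, steps, step_size, seed=13):
    """Single-chain random-walk Metropolis: the baseline that adaptive
    multi-chain samplers such as DREAM are designed to improve upon."""
    rng = random.Random(seed)
    x, lp = x0, log_target(x0)
    chain = []
    for _ in range(steps):
        y = x + rng.gauss(0.0, step_size)
        lpy = log_target(y)
        # accept with probability min(1, target(y)/target(x)); the
        # symmetric Gaussian proposal preserves detailed balance
        if math.log(rng.random()) < lpy - lp:
            x, lp = y, lpy
        chain.append(x)
    return chain

# toy example: sample a standard normal target, discard burn-in
draws = metropolis(lambda x: -0.5 * x * x, 0.0, 20000, 1.0)[2000:]
```

A fixed proposal scale works for this unimodal toy target; multimodal, high-dimensional posteriors are exactly where such a single chain stalls and DREAM's parallel, adaptively tuned chains pay off.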
Directory of Open Access Journals (Sweden)
Miguel Arias Albornoz
2008-09-01
In this work, the Monte Carlo simulation method (MC) is applied to estimate the number of expected voltage dips in the nodes of an electric network. The estimations obtained through MC are compared with the results of another method of calculation, known as the Failure Position Method (MPF). The results show both the convergence of the MC algorithm to the long-term values of the MPF method and the complete distribution of frequencies for different events, which represents valuable information to support decision-making on the use of equipment sensitive to this type of perturbation.
Mean field simulation for Monte Carlo integration
Del Moral, Pierre
2013-01-01
In the last three decades, there has been a dramatic increase in the use of interacting particle methods as a powerful tool in real-world applications of Monte Carlo simulation in computational physics, population biology, computer sciences, and statistical machine learning. Ideally suited to parallel and distributed computation, these advanced particle algorithms include nonlinear interacting jump diffusions; quantum, diffusion, and resampled Monte Carlo methods; Feynman-Kac particle models; genetic and evolutionary algorithms; sequential Monte Carlo methods; adaptive and interacting Markov ...
Failure Probability Estimation of Wind Turbines by Enhanced Monte Carlo
DEFF Research Database (Denmark)
Sichani, Mahdi Teimouri; Nielsen, Søren R.K.; Naess, Arvid
2012-01-01
This paper discusses the estimation of the failure probability of wind turbines required by codes of practice for designing them. The Standard Monte Carlo (SMC) simulations may be used for this reason conceptually as an alternative to the popular Peaks-Over-Threshold (POT) method. However, estimation of very low failure probabilities with SMC simulations leads to unacceptably high computational costs. In this study, an Enhanced Monte Carlo (EMC) method is proposed that overcomes this obstacle. The method has advantages over both POT and SMC in terms of its low computational cost and accuracy. ... is controlled by the pitch controller. This provides a fair framework for comparison of the behavior and failure event of the wind turbine with emphasis on the effect of the pitch controller. The Enhanced Monte Carlo method is then applied to the model and the failure probabilities of the model are estimated.
Monte Carlo simulation for soot dynamics
Zhou, Kun
2012-01-01
A new Monte Carlo method termed Comb-like frame Monte Carlo is developed to simulate the soot dynamics. Detailed stochastic error analysis is provided. Comb-like frame Monte Carlo is coupled with the gas phase solver Chemkin II to simulate soot formation in a 1-D premixed burner stabilized flame. The simulated soot number density, volume fraction, and particle size distribution all agree well with the measurement available in literature. The origin of the bimodal distribution of particle size distribution is revealed with quantitative proof.
Path integral Monte Carlo simulations of silicates
Rickwardt, Chr.; Nielaba, P.; Müser, M. H.; Binder, K.
2000-01-01
We investigate the thermal expansion of crystalline SiO2 in the β-cristobalite and the β-quartz structures with path integral Monte Carlo (PIMC) techniques. This simulation method allows low-temperature quantum effects to be treated properly. At temperatures below the Debye temperature, thermal properties obtained with PIMC agree better with experimental results than those obtained with classical Monte Carlo methods.
Monte Carlo Simulation of Phase Transitions
村井, 信行; N., MURAI; 中京大学教養部
1983-01-01
In the Monte Carlo simulation of phase transitions, a simple heat bath method is applied to the classical Heisenberg model in two dimensions. It reproduces the correlation length predicted by the Monte Carlo renormalization group and also computed in the non-linear σ model
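A minimal sketch of the heat bath update, applied here to the simpler 2D Ising model rather than the classical Heisenberg model of the paper, draws each spin from its exact conditional distribution given its neighbours (lattice size, temperatures, and sweep counts are illustrative):

```python
import math
import random

def heat_bath_ising(l=16, beta=1.0, sweeps=200, seed=3):
    """Heat-bath Monte Carlo for the 2D Ising model with periodic
    boundaries; returns the magnetization per spin."""
    rng = random.Random(seed)
    s = [[1] * l for _ in range(l)]            # start fully magnetized
    for _ in range(sweeps):
        for i in range(l):
            for j in range(l):
                # local field from the four nearest neighbours
                h = (s[(i + 1) % l][j] + s[(i - 1) % l][j]
                     + s[i][(j + 1) % l] + s[i][(j - 1) % l])
                # heat-bath rule: resample the spin from its exact
                # conditional distribution, P(up) = 1/(1 + exp(-2*beta*h))
                p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * h))
                s[i][j] = 1 if rng.random() < p_up else -1
    return sum(sum(row) for row in s) / l ** 2
```

Well below the critical temperature (beta = 1.0) the magnetization stays near 1; well above it (beta = 0.1) it fluctuates around 0, the qualitative signature of the phase transition.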
A study on the shielding element using Monte Carlo simulation
Energy Technology Data Exchange (ETDEWEB)
Kim, Ki Jeong [Dept. of Radiology, Konkuk University Medical Center, Seoul (Korea, Republic of); Shim, Jae Goo [Dept. of Radiologic Technology, Daegu Health College, Daegu (Korea, Republic of)
2017-06-15
In this research, we simulated the shielding ability of individual elements using Monte Carlo simulation, with the aim of developing a medical radiation shielding sheet that can replace the lead currently in use. Twenty-one elements were selected, chiefly metals with large atomic numbers known to have high shielding performance, taking into account the weight reduction, processability, and activity considerations that have recently driven improved composite shielding materials. The Monte Carlo method was used as the simulation tool. Simulating the shielding performance of each element showed the highest shielding ratios for tungsten and gold, at 98.82% and 98.44%, respectively.
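The underlying calculation can be illustrated with a narrow-beam toy model (the attenuation coefficient below is hypothetical, not a property of any of the paper's 21 elements): a photon counts as shielded if its sampled exponential free path ends inside the slab, so the Monte Carlo shielding ratio converges to 1 - exp(-μt):

```python
import math
import random

def shielding_ratio(mu, thickness_cm, n=100000, seed=5):
    """Fraction of photons stopped by a slab, sampling exponential free
    paths (narrow-beam model: any interacting photon counts as removed)."""
    rng = random.Random(seed)
    stopped = sum(1 for _ in range(n)
                  if -math.log(1.0 - rng.random()) / mu <= thickness_cm)
    return stopped / n

# hypothetical linear attenuation coefficient 2.0 cm^-1, 1 cm slab
ratio = shielding_ratio(2.0, 1.0)
```

A full transport code such as the one used in the study also tracks scattered photons and secondary particles; this sketch only reproduces the exponential attenuation law that drives the element-to-element ranking.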
K-Antithetic Variates in Monte Carlo Simulation | Nasroallah | Afrika ...
African Journals Online (AJOL)
Abstract. Standard Monte Carlo simulation needs prohibitive time to achieve reasonable estimations for intractable integrals (i.e. multidimensional integrals and/or integrals with complex integrand forms). Several statistical techniques, called variance reduction methods, are used to reduce the simulation time. In this note ...
Monte Carlo Simulation Program from the World Petroleum Assessment 2000, DDS-60 (Emc2.xls).
U.S. Geological Survey, Department of the Interior — Monte Carlo programs described in chapter MC, Monte Carlo Simulation Method. Emc2.xls was the program used to calculate the estimates of undiscovered resources for...
Dai, Yunyun
2013-01-01
Mixtures of item response theory (IRT) models have been proposed as a technique to explore response patterns in test data related to cognitive strategies, instructional sensitivity, and differential item functioning (DIF). Estimation proves challenging due to difficulties in identification and questions of effect size needed to recover underlying…
Utilising Monte Carlo Simulation for the Valuation of Mining Concessions
Directory of Open Access Journals (Sweden)
Rosli Said
2005-12-01
Full Text Available Valuation involves the analyses of various input data to produce an estimated value. Since each input is itself often an estimate, there is an element of uncertainty in the input. This leads to uncertainty in the resultant output value. It is argued that a valuation must also convey information on the uncertainty, so as to be more meaningful and informative to the user. The Monte Carlo simulation technique can generate the information on uncertainty and is therefore potentially useful to valuation. This paper reports on the investigation that has been conducted to apply Monte Carlo simulation technique in mineral valuation, more specifically, in the valuation of a quarry concession.
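The core idea, propagating uncertain inputs to a distribution of output values, can be sketched as follows (all figures and distributions are hypothetical, purely for illustration; the paper's quarry case uses its own inputs):

```python
import random

random.seed(1)

def simulate_concession_values(n=20000):
    """Monte Carlo valuation: each uncertain input is drawn from a
    distribution instead of being fixed at a single point estimate.
    All figures below are hypothetical, for illustration only."""
    values = []
    for _ in range(n):
        reserves = random.triangular(0.8e6, 1.2e6, 1.0e6)  # tonnes (low, high, mode)
        price = random.gauss(12.0, 1.5)                     # $ per tonne
        cost = random.gauss(7.0, 1.0)                       # $ per tonne
        values.append(reserves * (price - cost))
    values.sort()
    # The spread of the output distribution conveys the valuation uncertainty.
    return {"p10": values[n // 10], "median": values[n // 2],
            "p90": values[9 * n // 10]}

result = simulate_concession_values()
```

Reporting the percentile band rather than a single number is exactly how the simulation "conveys information on the uncertainty" to the user of the valuation.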
Adaptive Multilevel Monte Carlo Simulation
Hoel, H
2011-08-23
This work generalizes a multilevel forward Euler Monte Carlo method introduced by Michael B. Giles (Oper. Res. 56(3):607–617, 2008) for the approximation of expected values depending on the solution to an Itô stochastic differential equation. That work proposed and analyzed a forward Euler multilevel Monte Carlo method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a standard, single level, forward Euler Monte Carlo method. This work introduces an adaptive hierarchy of non-uniform time discretizations, generated by an adaptive algorithm introduced in (Anna Dzougoutov et al. Adaptive Monte Carlo algorithms for stopped diffusion. In Multiscale Methods in Science and Engineering, volume 44 of Lect. Notes Comput. Sci. Eng., pages 59–88. Springer, Berlin, 2005; Kyoung-Sook Moon et al. Stoch. Anal. Appl. 23(3):511–558, 2005; Kyoung-Sook Moon et al. An adaptive algorithm for ordinary, stochastic and partial differential equations. In Recent Advances in Adaptive Computation, volume 383 of Contemp. Math., pages 325–343. Amer. Math. Soc., Providence, RI, 2005). This form of the adaptive algorithm generates stochastic, path-dependent time steps and is based on a posteriori error expansions first developed in (Anders Szepessy et al. Comm. Pure Appl. Math. 54(10):1169–1214, 2001). Our numerical results for a stopped diffusion problem exhibit savings in the computational cost to achieve an accuracy of O(TOL), from O(TOL^-3) using a single level version of the adaptive algorithm to O((TOL^-1 log(TOL))^2).
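The uniform-timestep multilevel idea that this work builds on can be sketched for geometric Brownian motion (a minimal illustration with arbitrary parameters; the adaptive, path-dependent steps of the paper are not reproduced here):

```python
import math
import random

random.seed(0)

# Multilevel Monte Carlo for E[X_T] with dX = mu*X dt + sigma*X dW,
# discretized by forward Euler. The telescoping sum
#   E[X^L] = E[X^0] + sum_l E[X^l - X^(l-1)]
# lets most samples be spent on the cheap coarse levels.
mu, sigma, X0, T = 0.05, 0.2, 1.0, 1.0

def level_term(level, n_samples):
    """Mean of the level-l correction over coupled coarse/fine Euler paths."""
    n_fine = 2 ** level
    dt = T / n_fine
    total = 0.0
    for _ in range(n_samples):
        xf = xc = X0
        dw_coarse = 0.0
        for step in range(n_fine):
            dw = random.gauss(0.0, math.sqrt(dt))
            xf += mu * xf * dt + sigma * xf * dw
            dw_coarse += dw
            if level > 0 and step % 2 == 1:
                # Coarse path advances with the summed Brownian increments,
                # which couples it tightly to the fine path (control variate).
                xc += mu * xc * (2 * dt) + sigma * xc * dw_coarse
                dw_coarse = 0.0
        total += xf - xc if level > 0 else xf
    return total / n_samples

# Many samples on coarse levels, few on fine ones.
estimate = sum(level_term(l, n) for l, n in [(0, 20000), (1, 5000), (2, 2000), (3, 1000)])
```

Because the coupled corrections have small variance, the fine levels need far fewer samples than a single-level estimator of the same accuracy; for this model the exact answer is E[X_T] = X0·exp(mu·T).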
Energy Technology Data Exchange (ETDEWEB)
Yakoumakis, Emmanuel; Kostopoulou, Helen; Dimitriadis, Anastastios; Georgiou, Evaggelos [University of Athens, Medical Physics Department, Medical School, Athens (Greece); Makri, Triantafilia [' Agia Sofia' Hospital, Medical Physics Unit, Athens (Greece); Tsalafoutas, Ioannis [Anticancer-Oncology Hospital of Athens ' Agios Savvas' , Medical Physics Department, Athens (Greece)
2013-03-15
Children diagnosed with congenital heart disease often undergo cardiac catheterization for their treatment, which involves the use of ionizing radiation and therefore a risk of radiation-induced cancer. The purpose of this study was to calculate the effective (E) and equivalent organ doses (H_T) in those children and estimate the risk of exposure-induced death. Fifty-three children were divided into three groups: atrial septal defect (ASD), ventricular septal defect (VSD) and patent ductus arteriosus (PDA). In all procedures, the exposure conditions and the dose-area product meter readings were recorded for each individual acquisition. Monte Carlo simulations were run using the PCXMC 2.0 code and mathematical phantoms simulating a child's anatomy. The H_T values for all irradiated organs and the resulting E and risk of exposure-induced death values were calculated. The average dose-area product values were, respectively, 40 ± 12 Gy·cm² for the ASD, 17.5 ± 0.7 Gy·cm² for the VSD and 9.5 ± 1 Gy·cm² for the PDA group. The average E values were 40 ± 12, 22 ± 2.5 and 17 ± 3.6 mSv for the ASD, VSD and PDA groups, respectively. The respective estimated risk of exposure-induced death values per procedure were 0.109, 0.106 and 0.067%. Cardiac catheterizations in children involve a considerable risk of radiation-induced cancer that has to be further reduced. (orig.)
Simulation and the Monte Carlo method
Rubinstein, Reuven Y
2016-01-01
Simulation and the Monte Carlo Method, Third Edition reflects the latest developments in the field and presents a fully updated and comprehensive account of the major topics that have emerged in Monte Carlo simulation since the publication of the classic First Edition more than a quarter of a century ago. While maintaining its accessible and intuitive approach, this revised edition features a wealth of up-to-date information that facilitates a deeper understanding of problem solving across a wide array of subject areas, such as engineering, statistics, computer science, mathematics, and the physical and life sciences. The book begins with a modernized introduction that addresses the basic concepts of probability, Markov processes, and convex optimization. Subsequent chapters discuss the dramatic changes that have occurred in the field of the Monte Carlo method, with coverage of many modern topics including: Markov Chain Monte Carlo, variance reduction techniques such as the transform likelihood ratio...
Legchenko, Anatoly; Comte, Jean-Christophe; Ofterdinger, Ulrich; Vouillamoz, Jean-Michel; Lawson, Fabrice Messan Amen; Walsh, John
2017-09-01
We propose a simple and robust approach for investigating uncertainty in the results of inversion in geophysics. We apply this approach to inversion of Surface Nuclear Magnetic Resonance (SNMR) data, also known as Magnetic Resonance Sounding (MRS). The solution of this inverse problem is known to be non-unique. We invert MRS data using the well-known Tikhonov regularization method, which provides an optimal solution as a trade-off between stability and accuracy. Then, we perturb this model by random values and compute the fitting error for the perturbed models. The magnitude of these perturbations is limited by the uncertainty estimated with the singular value decomposition (SVD), taking into account experimental errors. We use 10^6 perturbed models and show that the large majority of these models, which all have water contents within the variations given by the SVD estimate, do not fit the data with an acceptable accuracy. Thus, we may limit the solution space to only the equivalent inverse models that fit the data with an accuracy close to that of the initial inverse model. For representing inversion results, we use three equivalent solutions instead of only one: the "best" solution given by the regularization or another inversion technique, and the extreme variations of this solution corresponding to the equivalent models with the minimum and the maximum volume of water. For demonstrating our approach, we use synthetic data sets and experimental data acquired in the framework of an investigation of a hard rock aquifer in Ireland (County Donegal).
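The perturb-and-filter idea can be sketched on a toy linear inverse problem (the forward model, perturbation bounds, and tolerance below are invented for illustration; the paper works with MRS inversion and SVD-derived bounds):

```python
import random

random.seed(2)

# Toy linear forward model d = G*m for a 2-parameter model m.
G = [[1.0, 0.5], [0.5, 2.0], [1.0, 1.0]]
m_best = [1.0, 2.0]          # stands in for the regularized inverse solution

def forward(m):
    return [sum(row[j] * m[j] for j in range(2)) for row in G]

def misfit(m, d_obs):
    return sum((p - o) ** 2 for p, o in zip(forward(m), d_obs)) ** 0.5

d_obs = forward(m_best)
tolerance = 0.1              # acceptable data misfit (plays the role of the noise level)

# Perturb the best model randomly and keep only "equivalent" models that
# still fit the data within the tolerance.
equivalent = []
for _ in range(10000):
    m = [m_best[0] + random.uniform(-0.5, 0.5),
         m_best[1] + random.uniform(-0.5, 0.5)]
    if misfit(m, d_obs) <= tolerance:
        equivalent.append(m)

# The extreme equivalent models bracket the solution, e.g. for parameter 1:
lo = min(m[0] for m in equivalent)
hi = max(m[0] for m in equivalent)
```

Most perturbed models are rejected, so the accepted set, and in particular its extremes, gives a data-driven bracket on the solution, analogous to the paper's minimum- and maximum-water-volume equivalent models.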
Dynamic bounds coupled with Monte Carlo simulations
Rajabali Nejad, Mohammadreza; Meester, L.E.; van Gelder, P.H.A.J.M.; Vrijling, J.K.
2011-01-01
For the reliability analysis of engineering structures a variety of methods is known, of which Monte Carlo (MC) simulation is widely considered to be among the most robust and most generally applicable. To reduce simulation cost of the MC method, variance reduction methods are applied. This paper
INTELLECTUAL CAPITAL VALUATION USING MONTE CARLO SIMULATION
Tarnóczi Tibor; Tóth Réka; Fenyves Veronika
2010-01-01
We present a simulation model in this paper to determine the value of intellectual capital. Within the simulation model we used Baruch Lev's intellectual capital valuation model, building the Baruch Lev model into a two-dimensional Monte Carlo simulation model. We determined the intellectual capital for selected stock exchange companies. The calculations are presented for one selected company.
Conical Reflection in Direct Simulation Monte Carlo
Sampson, Andrew; Payne, Adam; Somers, William; Spencer, Ross
2006-10-01
Fenix is a particle-in-cell simulation using a Direct Simulation Monte Carlo method, aimed at improving the accuracy of Inductively Coupled Plasma Mass Spectrometry (ICP-MS). It currently focuses on the first expansion region of the ICP-MS through a supersonic nozzle in cylindrical symmetry. Due to increased complexity in Fenix, it has become necessary to solve the general conical surface reflection problem. The previous method, the new solution, and results from the enhanced simulation will be presented.
Testing Dependent Correlations with Nonoverlapping Variables: A Monte Carlo Simulation
Silver, N. Clayton; Hittner, James B.; May, Kim
2004-01-01
The authors conducted a Monte Carlo simulation of 4 test statistics for comparing dependent correlations with no variables in common. Empirical Type I error rates and power estimates were determined for K. Pearson and L. N. G. Filon's (1898) z, O. J. Dunn and V. A. Clark's (1969) z, J. H. Steiger's (1980) original modification of Dunn and Clark's…
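The empirical Type I error methodology can be sketched with a simpler one-sample test (a hypothetical setup, not the four correlation statistics studied in the paper): simulate many datasets under the null hypothesis and count how often the test rejects.

```python
import random
import statistics

random.seed(8)

def t_stat(sample, mu0=0.0):
    """One-sample t statistic against the null mean mu0."""
    n = len(sample)
    return (statistics.mean(sample) - mu0) / (statistics.stdev(sample) / n ** 0.5)

# Empirical Type I error: fraction of null datasets (true mean = mu0) where
# |t| exceeds the nominal critical value (1.96, a large-sample approximation).
n_trials, n_obs = 20000, 50
rejections = sum(
    abs(t_stat([random.gauss(0.0, 1.0) for _ in range(n_obs)])) > 1.96
    for _ in range(n_trials))
type1 = rejections / n_trials   # should land near the nominal 0.05
```

A well-calibrated test keeps this empirical rate near the nominal level; power estimates are obtained the same way with data generated under an alternative hypothesis.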
Monte Carlo simulation by computer for life-cycle costing
Gralow, F. H.; Larson, W. J.
1969-01-01
Prediction of behavior and support requirements during the entire life cycle of a system enables accurate cost estimates through Monte Carlo simulation by computer. The approach reduces the ultimate cost to the procuring agency because it takes into consideration the costs of initial procurement, operation, and maintenance.
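The life-cycle costing idea can be sketched by summing the three cost components over many simulated life cycles (all distributions and figures below are hypothetical, for illustration only):

```python
import random
import statistics

random.seed(3)

def life_cycle_cost(years=15):
    """One simulated life cycle; all cost figures are hypothetical."""
    procurement = random.gauss(100.0, 10.0)                          # initial buy
    operation = sum(random.gauss(8.0, 2.0) for _ in range(years))    # yearly ops
    maintenance = sum(random.expovariate(1 / 3.0) for _ in range(years))  # repairs
    return procurement + operation + maintenance

costs = [life_cycle_cost() for _ in range(20000)]
mean_cost = statistics.mean(costs)
p95_cost = sorted(costs)[int(0.95 * len(costs))]   # budget covering 95% of outcomes
```

The procuring agency can then budget against the upper percentiles of the total-cost distribution rather than a single point estimate.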
Monte Carlo Simulation for Particle Detectors
Pia, Maria Grazia
2012-01-01
Monte Carlo simulation is an essential component of experimental particle physics in all the phases of its life-cycle: the investigation of the physics reach of detector concepts, the design of facilities and detectors, the development and optimization of data reconstruction software, the data analysis for the production of physics results. This note briefly outlines some research topics related to Monte Carlo simulation, that are relevant to future experimental perspectives in particle physics. The focus is on physics aspects: conceptual progress beyond current particle transport schemes, the incorporation of materials science knowledge relevant to novel detection technologies, functionality to model radiation damage, the capability for multi-scale simulation, quantitative validation and uncertainty quantification to determine the predictive power of simulation. The R&D on simulation for future detectors would profit from cooperation within various components of the particle physics community, and synerg...
Monte Carlo Simulation in Statistical Physics An Introduction
Binder, Kurt
2010-01-01
Monte Carlo Simulation in Statistical Physics deals with the computer simulation of many-body systems in condensed-matter physics and related fields of physics and chemistry, and beyond (traffic flows, stock market fluctuations, etc.). Using random numbers generated by a computer, probability distributions are calculated, allowing the estimation of the thermodynamic properties of various systems. This book describes the theoretical background to several variants of these Monte Carlo methods and gives a systematic presentation from which newcomers can learn to perform such simulations and to analyze their results. The fifth edition covers classical as well as quantum Monte Carlo methods. Furthermore, a new chapter on the sampling of free-energy landscapes has been added. To help students in their work, a special web server has been installed to host programs and discussion groups (http://wwwcp.tphys.uni-heidelberg.de). Prof. Binder was awarded the Berni J. Alder CECAM Award for Computational Physics 2001 as well ...
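A classic example of the kind of simulation the book treats is Metropolis sampling of the 2D Ising model (a minimal sketch; the lattice size and temperature are arbitrary choices):

```python
import math
import random

random.seed(4)

# Metropolis Monte Carlo for the 2D Ising model on an L x L lattice with
# periodic boundaries; beta = 1/kT is above the critical value (~0.4407),
# so the system stays in the ordered (magnetized) phase.
L_size, beta = 8, 0.6
spins = [[1] * L_size for _ in range(L_size)]

def neighbor_sum(i, j):
    return (spins[(i + 1) % L_size][j] + spins[(i - 1) % L_size][j]
            + spins[i][(j + 1) % L_size] + spins[i][(j - 1) % L_size])

def sweep():
    for _ in range(L_size * L_size):
        i, j = random.randrange(L_size), random.randrange(L_size)
        dE = 2 * spins[i][j] * neighbor_sum(i, j)   # energy cost of a flip
        # Accept with the Metropolis probability min(1, exp(-beta * dE)).
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            spins[i][j] = -spins[i][j]

for _ in range(200):
    sweep()
magnetization = abs(sum(sum(row) for row in spins)) / L_size ** 2
```

Averaging observables such as the magnetization over many sweeps is how the thermodynamic properties mentioned in the abstract are estimated from the sampled probability distribution.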
Monte Carlo simulation code modernization
CERN. Geneva
2015-01-01
The continual development of sophisticated transport simulation algorithms allows increasingly accurate description of the effect of the passage of particles through matter. This modelling capability finds applications in a large spectrum of fields from medicine to astrophysics, and of course HEP. These new capabilities however come at the cost of a greater computational intensity of the new models, which has the effect of increasing the demands on computing resources. This is particularly true for HEP, where the demand for more simulation is driven by the need for both more accuracy and more precision, i.e. better models and more events. Usually HEP has relied on the "Moore's law" evolution, but for almost ten years the increase in clock speed has withered and computing capacity now comes in the form of hardware architectures of many-core or accelerated processors. To harness these opportunities we need to adapt our code to concurrent programming models, taking advantage of both SIMD and SIMT architectures. Th...
Monte Carlo simulations applied to conjunctival lymphoma radiotherapy treatment
Energy Technology Data Exchange (ETDEWEB)
Brualla, Lorenzo; Sauerwein, Wolfgang [Universitaetsklinikum Essen (Germany). NCTeam, Strahlenklinik; Palanco-Zamora, Ricardo [Karolinska University Hospital, Stockholm (Sweden); Steuhl, Klaus-Peter [Universitaetsklinikum Essen (Germany). Klinik fuer Erkrankungen des vorderen Augenabschnittes; Bornfeld, Norbert [Universitaetsklinikum Essen (Germany). Klinik fuer Erkrankungen des hinteren Augenabschnittes
2011-08-15
Small radiation fields are increasingly applied in clinical routine. In particular, they are necessary for the treatment of eye tumors. However, available treatment planning systems do not calculate the absorbed dose with the desired accuracy in the presence of small fields. Absorbed dose estimations obtained with Monte Carlo methods have the required accuracy for clinical applications, but the exceedingly long computation times associated with them hinder their routine use. In this article, a code for automatic Monte Carlo simulation of linacs and an application in the treatment of conjunctival lymphoma are presented. Simulations of clinical linear accelerators were performed with the general-purpose radiation transport Monte Carlo code penelope. Accelerator geometry files, in electron mode, were generated with the program AutolinaC. The Monte Carlo simulation of an annular electron 6 MeV field used for the treatment of the conjunctival lymphoma yielded absorbed dose results statistically compatible with experimental measurements. In this simulation, 2% standard statistical uncertainty was reached in the same time employed by a hybrid Monte Carlo commercial code (eMC); however, eMC showed discrepancies of up to 7% on the absorbed dose with respect to experimental data. Results obtained with the analytic algorithm Pencil Beam Convolution differed from experimental data by 10% for this case. Owing to the systematic application of variance-reduction techniques, it is possible to accurately estimate the absorbed dose in patient images, using Monte Carlo methods, in times within clinical routine requirements. The program AutolinaC allows systematic use of these variance-reduction techniques within the code penelope. (orig.)
Methods for Monte Carlo simulations of biomacromolecules.
Vitalis, Andreas; Pappu, Rohit V
2009-01-01
The state-of-the-art for Monte Carlo (MC) simulations of biomacromolecules is reviewed. Available methodologies for sampling conformational equilibria and associations of biomacromolecules in the canonical ensemble, given a continuum description of the solvent environment, are reviewed. Detailed sections are provided dealing with the choice of degrees of freedom, the efficiencies of MC algorithms and algorithmic peculiarities, as well as the optimization of simple movesets. The issue of introducing correlations into elementary MC moves, and the applicability of such methods to simulations of biomacromolecules, is discussed. A brief discussion of multicanonical methods and an overview of recent simulation work highlighting the potential of MC methods are also provided. It is argued that MC simulations, while underutilized by the biomacromolecular simulation community, hold promise for simulations of complex systems and phenomena that span multiple length scales, especially when used in conjunction with implicit solvation models or other coarse-graining strategies.
Mosaic crystal algorithm for Monte Carlo simulations
Seeger, P A
2002-01-01
An algorithm is presented for calculating reflectivity, absorption, and scattering of mosaic crystals in Monte Carlo simulations of neutron instruments. The algorithm uses multi-step transport through the crystal with an exact solution of the Darwin equations at each step. It relies on the kinematical model for Bragg reflection (with parameters adjusted to reproduce experimental data). For computation of thermal effects (the Debye-Waller factor and coherent inelastic scattering), an expansion of the Debye integral as a rapidly converging series of exponential terms is also presented. Any crystal geometry and plane orientation may be treated. The algorithm has been incorporated into the neutron instrument simulation package NISP. (orig.)
Stochastic simulation and Monte-Carlo methods; Simulation stochastique et methodes de Monte-Carlo
Energy Technology Data Exchange (ETDEWEB)
Graham, C. [Centre National de la Recherche Scientifique (CNRS), 91 - Gif-sur-Yvette (France); Ecole Polytechnique, 91 - Palaiseau (France); Talay, D. [Institut National de Recherche en Informatique et en Automatique (INRIA), 78 - Le Chesnay (France); Ecole Polytechnique, 91 - Palaiseau (France)
2011-07-01
This book presents some numerical probabilistic simulation methods together with their convergence speeds. It combines mathematical precision and numerical development, each proposed method belonging to a precise theoretical context developed in a rigorous and self-sufficient manner. After some recalls of the law of large numbers and the basics of probabilistic simulation, the authors introduce martingales and their main properties. Then, they develop a chapter on non-asymptotic estimation of Monte-Carlo method errors. This chapter recalls the central limit theorem and quantifies its convergence speed. It introduces the Log-Sobolev and concentration inequalities, the study of which has greatly developed during recent years. This chapter ends with some variance reduction techniques. In order to demonstrate in a rigorous way the simulation results for stochastic processes, the authors introduce the basic notions of probability and of stochastic calculus, in particular the essentials of Ito calculus, adapted to each numerical method proposed. They successively study the construction and important properties of the Poisson process, of jump and deterministic Markov processes (linked to transport equations), and of the solutions of stochastic differential equations. Numerical methods are then developed and the convergence speed results of the algorithms are rigorously demonstrated. In passing, the authors describe the basics of the probabilistic interpretation of parabolic partial differential equations. Non-trivial applications to real applied problems are also developed. (J.S.)
Morton, S E; Chiew, Y S; Pretty, C; Moltchanova, E; Scarrott, C; Redmond, D; Shaw, G M; Chase, J G
2017-02-01
Randomised controlled trials have sought to improve mechanical ventilation treatment. However, few trials to date have shown clinical significance. It is hypothesised that, aside from effective treatment, the outcome metrics and sample sizes of the trial also affect the significance, and thus impact trial design. In this study, a Monte-Carlo simulation method was developed and used to investigate several outcome metrics of ventilation treatment, including 1) length of mechanical ventilation (LoMV); 2) Ventilator Free Days (VFD); and 3) LoMV-28, a combination of the other metrics. As these metrics have highly skewed distributions, it also investigated the impact of imposing clinically relevant exclusion criteria on study power to enable better design for significance. Data from invasively ventilated patients from a single intensive care unit were used in this analysis to demonstrate the method. Use of LoMV as an outcome metric required 160 patients/arm to reach 80% power with a clinically expected intervention difference of 25% LoMV if clinically relevant exclusion criteria were applied to the cohort, but 400 patients/arm if they were not. However, only 130 patients/arm would be required for the same statistical significance at the same intervention difference if VFD was used. A Monte-Carlo simulation approach using local cohort data combined with objective patient selection criteria can yield better design of ventilation studies to desired power and significance, with fewer patients per arm than traditional trial design methods, which in turn reduces patient risk. Outcome metrics, such as VFD, should be used when a difference in mortality is also expected between the two cohorts. Finally, the non-parametric approach taken is readily generalisable to a range of trial types where outcome data are similarly skewed. Copyright © 2016. Published by Elsevier Inc.
Accelerated GPU based SPECT Monte Carlo simulations.
Garcia, Marie-Paule; Bert, Julien; Benoit, Didier; Bardiès, Manuel; Visvikis, Dimitris
2016-06-07
Monte Carlo (MC) modelling is widely used in the field of single photon emission computed tomography (SPECT) as it is a reliable technique to simulate very high quality scans. This technique provides very accurate modelling of the radiation transport and particle interactions in a heterogeneous medium. Various MC codes exist for nuclear medicine imaging simulations. Recently, new strategies exploiting the computing capabilities of graphical processing units (GPU) have been proposed. This work aims at evaluating the accuracy of such GPU implementation strategies in comparison to standard MC codes in the context of SPECT imaging. GATE was considered the reference MC toolkit and used to evaluate the performance of newly developed GPU Geant4-based Monte Carlo simulation (GGEMS) modules for SPECT imaging. Radioisotopes with different photon energies were used with these various CPU and GPU Geant4-based MC codes in order to assess the best strategy for each configuration. Three different isotopes were considered: 99mTc, 111In and 131I, using a low energy high resolution (LEHR) collimator, a medium energy general purpose (MEGP) collimator and a high energy general purpose (HEGP) collimator respectively. Point source, uniform source, cylindrical phantom and anthropomorphic phantom acquisitions were simulated using a model of the GE Infinia II 3/8" gamma camera. Both simulation platforms yielded a similar system sensitivity and image statistical quality for the various combinations. The overall acceleration factor between the GATE and GGEMS platforms derived from the same cylindrical phantom acquisition was between 18 and 27 for the different radioisotopes. Besides, a full MC simulation using an anthropomorphic phantom showed the full potential of the GGEMS platform, with a resulting acceleration factor up to 71. The good agreement with reference codes and the acceleration factors obtained support the use of GPU implementation strategies for improving computational
Global Monte Carlo Simulation with High Order Polynomial Expansions
Energy Technology Data Exchange (ETDEWEB)
William R. Martin; James Paul Holloway; Kaushik Banerjee; Jesse Cheatham; Jeremy Conlin
2007-12-13
The functional expansion technique (FET) was recently developed for Monte Carlo simulation. The basic idea of the FET is to expand a Monte Carlo tally in terms of a high order expansion, the coefficients of which can be estimated via the usual random walk process in a conventional Monte Carlo code. If the expansion basis is chosen carefully, the lowest order coefficient is simply the conventional histogram tally, corresponding to a flat mode. This research project studied the applicability of using the FET to estimate the fission source, from which fission sites can be sampled for the next generation. The idea is that individual fission sites contribute to expansion modes that may span the geometry being considered, possibly increasing the communication across a loosely coupled system and thereby improving convergence over the conventional fission bank approach used in most production Monte Carlo codes. The project examined a number of basis functions, including global Legendre polynomials as well as “local” piecewise polynomials such as finite element hat functions and higher order versions. The global FET showed an improvement in convergence over the conventional fission bank approach. The local FET methods showed some advantages versus global polynomials in handling geometries with discontinuous material properties. The conventional finite element hat functions had the disadvantage that the expansion coefficients could not be estimated directly but had to be obtained by solving a linear system whose matrix elements were estimated. An alternative fission matrix-based response matrix algorithm was formulated. Studies were made of two alternative applications of the FET, one based on the kernel density estimator and one based on Arnoldi’s method of minimized iterations. Preliminary results for both methods indicate improvements in fission source convergence. These developments indicate that the FET has promise for speeding up Monte Carlo fission source
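The basic FET idea, tallying expansion coefficients during the random walk instead of histogram bins, can be sketched in one dimension (the sampled density and expansion order below are illustrative, not taken from the report):

```python
import random

random.seed(5)

def legendre(n, x):
    """Legendre polynomial P_n(x) via the Bonnet recurrence."""
    p_prev, p = 1.0, x
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

# "Random walk" stand-in: sample event sites with density f(x) = (x + 1)/2
# on [-1, 1] (the max of two uniforms, rescaled). A transport code would
# accumulate these contributions during particle tracking.
samples = [2.0 * max(random.random(), random.random()) - 1.0 for _ in range(100000)]

# FET tally: a_n = (2n + 1)/2 * E[P_n(x)], so that f(x) ~ sum_n a_n P_n(x).
# The n = 0 coefficient reproduces the flat (histogram-like) mode.
order = 4
coeffs = [(2 * n + 1) / 2.0 * sum(legendre(n, x) for x in samples) / len(samples)
          for n in range(order + 1)]

def f_hat(x):
    """Continuous functional reconstruction replacing a binned tally."""
    return sum(a * legendre(n, x) for n, a in enumerate(coeffs))
```

For this linear density the exact expansion is f = (1/2)P_0 + (1/2)P_1, so the tallied coefficients beyond n = 1 should vanish to within statistics; sampling fission sites from such a reconstructed source is the mechanism the report studies for improving fission source convergence.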
Monte Carlo simulations of medical imaging modalities
Energy Technology Data Exchange (ETDEWEB)
Estes, G.P. [Los Alamos National Lab., NM (United States)
1998-09-01
Because continuous-energy Monte Carlo radiation transport calculations can be nearly exact simulations of physical reality (within data limitations, geometric approximations, transport algorithms, etc.), it follows that one should be able to closely approximate the results of many experiments from first-principles computations. This line of reasoning has led to various MCNP studies that involve simulations of medical imaging modalities and other visualization methods such as radiography, Anger camera, computerized tomography (CT) scans, and SABRINA particle track visualization. It is the intent of this paper to summarize some of these imaging simulations in the hope of stimulating further work, especially as computer power increases. Improved interpretation and prediction of medical images should ultimately lead to enhanced medical treatments. It is also reasonable to assume that such computations could be used to design new or more effective imaging instruments.
Atomistic Monte Carlo simulation of lipid membranes
DEFF Research Database (Denmark)
Wüstner, Daniel; Sklenar, Heinz
2014-01-01
Biological membranes are complex assemblies of many different molecules, the analysis of which demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction into the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate for a concrete example how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and bilayer patches … of local-move MC methods in combination with molecular dynamics simulations, for example, for studying multi-component lipid membranes containing cholesterol.
Monte-Carlo Simulation Balancing in Practice
Huang, Shih-Chieh; Coulom, Rémi; Lin, Shun-Shii
Simulation balancing is a new technique to tune parameters of a playout policy for a Monte-Carlo game-playing program. So far, this algorithm had only been tested in a very artificial setting: it was limited to 5×5 and 6×6 Go, and required a stronger external program that served as a supervisor. In this paper, the effectiveness of simulation balancing is demonstrated in a more realistic setting. A state-of-the-art program, Erica, learned an improved playout policy on the 9×9 board, without requiring any external expert to provide position evaluations. The evaluations were collected by letting the program analyze positions by itself. The previous version of Erica learned pattern weights with the minorization-maximization algorithm. Thanks to simulation balancing, its playing strength was improved from a winning rate of 69% to 78% against Fuego 0.4.
Archimedes, the Free Monte Carlo simulator
Sellier, Jean Michel D
2012-01-01
Archimedes is the GNU package for Monte Carlo simulations of electron transport in semiconductor devices. The first release appeared in 2004 and since then it has been improved with many new features like quantum corrections, magnetic fields, new materials, GUI, etc. This document represents the first attempt to have a complete manual. Many of the Physics models implemented are described and a detailed description is presented to make the user able to write his/her own input deck. Please, feel free to contact the author if you want to contribute to the project.
Monte Carlo simulations of dense quantum plasmas
Energy Technology Data Exchange (ETDEWEB)
Filinov, V S [Institute for High Energy Density, Izhorskay 13/19, Moscow 125412 (Russian Federation); Bonitz, M [Universitaet Kiel, Leibnizstrasse 15, 24098 Kiel (Germany); Fortov, V E [Institute for High Energy Density, Izhorskay 13/19, Moscow 125412 (Russian Federation); Ebeling, W [Humbold Universitaet Berlin, Invalidenstrasse 110, D-10115 Berlin (Germany); Fehske, H [Universitaet Greifswald, Domstrasse 10a, D-17487, Greifswald (Germany); Kremp, D [Universitaet Rostock, Universitaetsplatz 3, D-18051 Rostock (Germany); Kraeft, W D [Universitaet Greifswald, Domstrasse 10a, D-17487, Greifswald (Germany); Bezkrovniy, V [Universitaet Rostock, Universitaetsplatz 3, D-18051 Rostock (Germany); Levashov, P [Institute for High Energy Density, Izhorskay 13/19, Moscow 125412 (Russian Federation)
2006-04-28
Thermodynamic properties of equilibrium strongly coupled quantum plasmas are investigated by direct path integral Monte Carlo (DPIMC) simulations within a wide region of density, temperature and positive-to-negative particle mass ratio. Pair distribution functions (PDF), the equation of state (EOS), internal energy and Hugoniot are compared with available theoretical and experimental results. Possibilities of a phase transition in hydrogen and electron-hole plasmas from a neutral particle system to a metallic-like state and crystal-like structures, including an antiferromagnetic hole structure in semiconductors at low temperatures, are discussed.
Non-analog Monte Carlo estimators for radiation momentum deposition
Energy Technology Data Exchange (ETDEWEB)
Densmore, Jeffery D [Los Alamos National Laboratory; Hykes, Joshua M [Los Alamos National Laboratory
2008-01-01
The standard method for calculating radiation momentum deposition in Monte Carlo simulations is the analog estimator, which tallies the change in a particle's momentum at each interaction with the matter. Unfortunately, the analog estimator can suffer from large amounts of statistical error. In this paper, we present three new non-analog techniques for estimating momentum deposition. Specifically, we use absorption, collision, and track-length estimators to evaluate a simple integral expression for momentum deposition that does not contain terms that can cause large amounts of statistical error in the analog scheme. We compare our new non-analog estimators to the analog estimator with a set of test problems that encompass a wide range of material properties and both isotropic and anisotropic scattering. In nearly all cases, the new non-analog estimators outperform the analog estimator. The track-length estimator consistently yields the highest performance gains, improving upon the analog-estimator figure of merit by factors of up to two orders of magnitude.
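The contrast between analog and expected-value (track-length style) scoring can be sketched on a one-flight toy problem (an invented purely absorbing slab; the paper treats full transport with scattering and three distinct non-analog estimators):

```python
import math
import random
import statistics

random.seed(6)

# Particles enter a purely absorbing slab [0, L] at x = 0 with unit momentum.
sigma_a, L = 0.3, 1.0
exact = 1.0 - math.exp(-sigma_a * L)      # expected deposited fraction

def analog_score():
    """Analog: tally 1 only if the sampled absorption site lies in the slab.
    The score is a 0/1 random variable, so it carries variance p(1 - p)."""
    return 1.0 if random.expovariate(sigma_a) < L else 0.0

def track_length_score():
    """Non-analog: tally the expected deposition along the traversed segment.
    For this single deterministic flight the estimator has zero variance."""
    return 1.0 - math.exp(-sigma_a * L)

n = 50000
analog_scores = [analog_score() for _ in range(n)]
mean_analog = statistics.mean(analog_scores)
var_analog = statistics.variance(analog_scores)
```

Both estimators are unbiased for the same deposition, but the expected-value score removes the 0/1 sampling noise, which is the mechanism behind the figure-of-merit gains the paper reports.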
Diagnosing Undersampling in Monte Carlo Eigenvalue and Flux Tally Estimates
Energy Technology Data Exchange (ETDEWEB)
Perfetti, Christopher M [ORNL; Rearden, Bradley T [ORNL
2015-01-01
This study explored the impact of undersampling on the accuracy of tally estimates in Monte Carlo (MC) calculations. Steady-state MC simulations were performed for models of several critical systems with varying degrees of spatial and isotopic complexity, and the impact of undersampling on eigenvalue and fuel pin flux/fission estimates was examined. This study observed biases in MC eigenvalue estimates as large as several percent and biases in fuel pin flux/fission tally estimates that exceeded tens, and in some cases hundreds, of percent. This study also investigated five statistical metrics for predicting the occurrence of undersampling biases in MC simulations. Three of the metrics (the Heidelberger-Welch RHW, the Geweke Z-Score, and the Gelman-Rubin diagnostics) are commonly used for diagnosing the convergence of Markov chains, and two of the methods (the Contributing Particles per Generation and Tally Entropy) are new convergence metrics developed in the course of this study. These metrics were implemented in the KENO MC code within the SCALE code system and were evaluated for their reliability at predicting the onset and magnitude of undersampling biases in MC eigenvalue and flux tally estimates in two of the critical models. Of the five methods investigated, the Heidelberger-Welch RHW, the Gelman-Rubin diagnostics, and Tally Entropy produced test metrics that correlated strongly to the size of the observed undersampling biases, indicating their potential to effectively predict the size and prevalence of undersampling biases in MC simulations.
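The Geweke Z-score named among the diagnostics above can be sketched in a few lines. This is a simplified version that assumes nearly independent tallies (the full diagnostic uses spectral variance estimates); the chain names and the transient shape are invented for illustration.

```python
import numpy as np

def geweke_z(chain, first=0.1, last=0.5):
    """Compare the mean of the early part of a tally sequence with the mean
    of the late part; |z| >> 2 suggests the sequence still remembers its
    initial state (an undersampling warning sign)."""
    a = chain[: int(first * len(chain))]
    b = chain[-int(last * len(chain)):]
    return (a.mean() - b.mean()) / np.sqrt(
        a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))

rng = np.random.default_rng(0)
n = 5000
stationary = rng.normal(1.0, 0.05, n)                      # well-converged tallies
transient = stationary + 0.5 * np.exp(-np.arange(n) / 200)  # biased early cycles
print(geweke_z(stationary), geweke_z(transient))
```

On the stationary chain the score behaves like a standard normal draw, while the decaying start of the second chain pushes it far outside the plausible range.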
Modeling neutron guides using Monte Carlo simulations
Wang, D Q; Crow, M L; Wang, X L; Lee, W T; Hubbard, C R
2002-01-01
Four neutron guide geometries, straight, converging, diverging and curved, were characterized using Monte Carlo ray-tracing simulations. The main areas of interest are the transmission of the guides at various neutron energies and the intrinsic time-of-flight (TOF) peak broadening. Use of a delta-function time pulse from a uniform Lambert neutron source allows one to quantitatively simulate the effect of guide geometry on the TOF peak broadening. With a converging guide, the intensity and the beam divergence increase while the TOF peak width decreases compared with that of a straight guide. By contrast, use of a diverging guide decreases the intensity and the beam divergence, and broadens the width (in TOF) of the transmitted neutron pulse.
Application of MINERVA Monte Carlo simulations to targeted radionuclide therapy.
Descalle, Marie-Anne; Hartmann Siantar, Christine L; Dauffy, Lucile; Nigg, David W; Wemple, Charles A; Yuan, Aina; DeNardo, Gerald L
2003-02-01
Recent clinical results have demonstrated the promise of targeted radionuclide therapy for advanced cancer. As the success of this emerging form of radiation therapy grows, accurate treatment planning and radiation dose simulations are likely to become increasingly important. To address this need, we have initiated the development of a new, Monte Carlo transport-based treatment planning system for molecular targeted radiation therapy as part of the MINERVA system. The goal of the MINERVA dose calculation system is to provide 3-D Monte Carlo simulation-based dosimetry for radiation therapy, focusing on experimental and emerging applications. For molecular targeted radionuclide therapy applications, MINERVA calculates patient-specific radiation dose estimates using computed tomography to describe the patient anatomy, combined with a user-defined 3-D radiation source. This paper describes the validation of the 3-D Monte Carlo transport methods to be used in MINERVA for molecular targeted radionuclide dosimetry. It reports comparisons of MINERVA dose simulations with published absorbed fraction data for distributed, monoenergetic photon and electron sources, and for radioisotope photon emission. MINERVA simulations are generally within 2% of EGS4 results and 10% of MCNP results, but differ by up to 40% from the recommendations given in MIRD Pamphlets 3 and 8 for identical medium composition and density. For several representative source and target organs in the abdomen and thorax, specific absorbed fractions calculated with the MINERVA system are generally within 5% of those published in the revised MIRD Pamphlet 5 for 100 keV photons. However, results differ by up to 23% for the adrenal glands, the smallest of our target organs. Finally, we show examples of Monte Carlo simulations in a patient-like geometry for a source of uniform activity located in the kidney.
Parallel Monte Carlo simulation of aerosol dynamics
Zhou, K.
2014-01-01
A highly efficient Monte Carlo (MC) algorithm is developed for the numerical simulation of aerosol dynamics, that is, nucleation, surface growth, and coagulation. Nucleation and surface growth are handled with deterministic means, while coagulation is simulated with a stochastic method (the Marcus-Lushnikov stochastic process). Operator splitting techniques are used to synthesize the deterministic and stochastic parts in the algorithm. The algorithm is parallelized using the Message Passing Interface (MPI). The parallel computing efficiency is investigated through numerical examples. Nearly 60% parallel efficiency is achieved for the largest test case, with 3.7 million MC particles running on 93 parallel computing nodes. The algorithm is verified by simulating various test cases and comparing the simulation results with available analytical and/or other numerical solutions. Generally, it is found that only a small number (hundreds or thousands) of MC particles is necessary to accurately predict the aerosol particle number density, volume fraction, and so forth, that is, the low-order moments of the Particle Size Distribution (PSD) function. Accurately predicting the high-order moments of the PSD requires a dramatic increase in the number of MC particles. © 2014 Kun Zhou et al.
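The Marcus-Lushnikov coagulation step can be sketched with a Gillespie-style loop for a constant kernel (a toy setup; the paper's algorithm also handles nucleation, surface growth, operator splitting and MPI parallelism, none of which is reproduced here).

```python
import numpy as np

# Constant-kernel Marcus-Lushnikov process: any pair of particles coagulates
# at rate K, so the total rate is K * n * (n - 1) / 2 for n particles.
rng = np.random.default_rng(2)
K = 1.0
masses = list(np.ones(200))      # 200 monomers; total mass is conserved
t = 0.0
while len(masses) > 1:
    n = len(masses)
    rate = K * n * (n - 1) / 2.0        # total coagulation rate
    t += rng.exponential(1.0 / rate)    # Gillespie waiting time
    i, j = rng.choice(n, size=2, replace=False)
    masses[i] += masses[j]              # merge the chosen pair
    masses.pop(j)
print(len(masses), sum(masses), t)      # ends with one particle of mass 200
```

Mass conservation under merging is the invariant worth checking in any such coagulation code.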
Pattern Recognition for a Flight Dynamics Monte Carlo Simulation
Restrepo, Carolina; Hurtado, John E.
2011-01-01
The design, analysis, and verification and validation of a spacecraft relies heavily on Monte Carlo simulations. Modern computational techniques are able to generate large amounts of Monte Carlo data but flight dynamics engineers lack the time and resources to analyze it all. The growing amounts of data combined with the diminished available time of engineers motivates the need to automate the analysis process. Pattern recognition algorithms are an innovative way of analyzing flight dynamics data efficiently. They can search large data sets for specific patterns and highlight critical variables so analysts can focus their analysis efforts. This work combines a few tractable pattern recognition algorithms with basic flight dynamics concepts to build a practical analysis tool for Monte Carlo simulations. Current results show that this tool can quickly and automatically identify individual design parameters, and most importantly, specific combinations of parameters that should be avoided in order to prevent specific system failures. The current version uses a kernel density estimation algorithm and a sequential feature selection algorithm combined with a k-nearest neighbor classifier to find and rank important design parameters. This provides an increased level of confidence in the analysis and saves a significant amount of time.
Quantum Monte Carlo Simulations : Algorithms, Limitations and Applications
Raedt, H. De
1992-01-01
A survey is given of Quantum Monte Carlo methods currently used to simulate quantum lattice models. The formalisms employed to construct the simulation algorithms are sketched. The origin of fundamental (minus sign) problems which limit the applicability of the Quantum Monte Carlo approach is shown
Rare event simulation using Monte Carlo methods
Rubino, Gerardo
2009-01-01
In a probabilistic model, a rare event is an event with a very small probability of occurrence. The forecasting of rare events is a formidable task but is important in many areas: for instance, a catastrophic failure in a transport system or in a nuclear power plant, or the failure of an information processing system in a bank, or in the communication network of a group of banks, leading to financial losses. Being able to evaluate the probability of rare events is therefore a critical issue. Monte Carlo methods, i.e. the simulation of the corresponding models, are used to analyze rare events. This book sets out to present the mathematical tools available for the efficient simulation of rare events. Importance sampling and splitting are presented along with an exposition of how to apply these tools to a variety of fields, ranging from performance and dependability evaluation of complex systems, typically in computer science or in telecommunications, to chemical reaction analysis in biology or particle transport in physics. ...
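Importance sampling, one of the two tools named above, can be sketched for a textbook rare event (a Gaussian tail probability, chosen for illustration; it is not an example taken from the book).

```python
import math
import numpy as np

# Estimate the rare tail probability P(X > 4) for X ~ N(0,1) by sampling
# from a shifted proposal N(4,1) and reweighting by the likelihood ratio.
rng = np.random.default_rng(3)
n, shift = 200_000, 4.0
x = rng.normal(shift, 1.0, n)
# likelihood ratio phi(x) / phi(x - shift) = exp(-shift*x + shift^2 / 2)
w = np.exp(-shift * x + shift**2 / 2.0)
est = np.mean((x > 4.0) * w)
exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))   # about 3.17e-5
print(est, exact)
```

Crude Monte Carlo would see roughly six hits in 200,000 draws; the shifted proposal places most samples in the region of interest, so the weighted estimate is accurate with the same budget.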
Energy Technology Data Exchange (ETDEWEB)
Richet, Y
2006-12-15
Criticality Monte Carlo calculations aim at estimating the effective multiplication factor (k-effective) of a fissile system through iterations simulating neutron propagation (forming a Markov chain). Arbitrary initialization of the neutron population can deeply bias the k-effective estimate, defined as the mean of the k-effective values computed at each iteration. A simplified model of this cycle k-effective sequence is built, based on characteristics of industrial criticality Monte Carlo calculations. Statistical tests, inspired by Brownian bridge properties, are designed to assess the stationarity of the cycle k-effective sequence. The detected initial transient is then suppressed in order to improve the estimation of the system k-effective. The different versions of this methodology are detailed and compared, first on a set of numerical tests fitted to criticality Monte Carlo calculations, and second on real criticality calculations. Eventually, the best methodologies observed in these tests are selected and allow industrial Monte Carlo criticality calculations to be improved. (author)
Optical coherence tomography: Monte Carlo simulation and improvement by optical amplification
DEFF Research Database (Denmark)
Tycho, Andreas
2002-01-01
distribution of the light from the sample and the reference beam. To adequately estimate the intensity distributions, a novel method of modeling a focused Gaussian beam using Monte Carlo simulation is developed. This method is then combined with the derived expression for the OCT signal into a new Monte Carlo......An advanced novel Monte Carlo simulation model of the detection process of an optical coherence tomography (OCT) system is presented. For the first time it is shown analytically that the applicability of the incoherent Monte Carlo approach to model the heterodyne detection process of an OCT system...... flexibility of Monte Carlo simulations, this new model is demonstrated to be excellent as a numerical phantom, i.e., as a substitute for otherwise difficult experiments. Finally, a new model of the signal-to-noise ratio (SNR) of an OCT system with optical amplification of the light reflected from the sample...
Monte Carlo simulations for heavy ion dosimetry
Energy Technology Data Exchange (ETDEWEB)
Geithner, O.
2006-07-26
Water-to-air stopping power ratio (s{sub w,air}) calculations for the ionization chamber dosimetry of clinically relevant ion beams with initial energies from 50 to 450 MeV/u have been performed using the Monte Carlo technique. To simulate the transport of a particle in water the computer code SHIELD-HIT v2 was used, which is a substantially modified version of its predecessor SHIELD-HIT v1. The code was partially rewritten, replacing formerly used single precision variables with double precision variables. The lowest particle transport specific energy was decreased from 1 MeV/u down to 10 keV/u by modifying the Bethe-Bloch formula, thus widening its range for medical dosimetry applications. Optional MSTAR and ICRU-73 stopping power data were included. The fragmentation model was verified using all available experimental data and some parameters were adjusted. The present code version shows excellent agreement with experimental data. In addition to the calculations of the stopping power ratios, s{sub w,air}, the influence of fragments and I-values on s{sub w,air} for carbon ion beams was investigated. The value of s{sub w,air} deviates by as much as 2.3% at the Bragg peak from the constant value of 1.130 recommended by TRS-398 for an energy of 50 MeV/u. (orig.)
Uncertainty of NURBS surface fit by Monte Carlo simulations
Koch, Karl-Rudolf
2009-12-01
A free-form surface expressed by NURBS (nonuniform rational B-splines) is fitted to the measured coordinates of points by the lofting method. The unknown control points of the free-form surface are therefore not estimated simultaneously but determined by cross-sectional curve fits. This uses much less computer time than the simultaneous estimation and gives identical results. The free-form surface should be determined with an uncertainty which does not considerably surpass the uncertainty of the measurements. This is investigated here for the example of a free-form surface for a pothole in a road determined from laserscanner measurements. The uncertainties are expressed by standard deviations and confidence intervals. They are computed using Monte Carlo simulations for the positioning of a point by the measured coordinates and for the fitting of a free-form surface. The resulting uncertainties agree. In addition, the uncertainties of quantities characterizing the shape and the slope of the surface are determined by Monte Carlo simulations. It turns out that the uncertainties resulting from the measurements and from the free-form surface fit are approximately identical.
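The Monte Carlo propagation of measurement uncertainty through a fit can be sketched in one dimension (a hypothetical straight-line fit standing in for the NURBS surface fit): perturb the measurements with their noise, refit, and take the spread of the fitted parameters as their uncertainty.

```python
import numpy as np

# Monte Carlo uncertainty of a least-squares fit: repeat the fit on
# noise-perturbed data and compare the spread of the fitted slope with
# the analytic propagation Cov = sigma^2 (A^T A)^{-1}.
rng = np.random.default_rng(4)
x = np.linspace(0.0, 1.0, 20)
y_true = 2.0 + 3.0 * x
sigma = 0.05                                  # measurement standard deviation
A = np.vstack([np.ones_like(x), x]).T

slopes = []
for _ in range(2000):
    y = y_true + rng.normal(0.0, sigma, x.size)   # simulated measurement
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    slopes.append(coef[1])
mc_std = np.std(slopes)

ana_std = sigma * np.sqrt(np.linalg.inv(A.T @ A)[1, 1])
print(mc_std, ana_std)    # the two uncertainty estimates agree
```

The agreement between the simulated and analytic standard deviations mirrors the abstract's finding that the measurement and surface-fit uncertainties coincide.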
MONTE CARLO SIMULATION OF CHARGED PARTICLE IN AN ELECTRONEGATIVE PLASMA
Directory of Open Access Journals (Sweden)
L SETTAOUTI
2003-12-01
Full Text Available Interest in radio frequency (rf) discharges has grown tremendously in recent years due to their importance in microelectronic technologies. Especially interesting are the properties of discharges in electronegative gases, which are most frequently used for technological applications. Monte Carlo simulation has become increasingly important as a simulation tool, particularly in the area of plasma physics. In this work, we present some detailed properties of rf plasmas obtained with a Monte Carlo simulation code, in SF6
Modern analysis of ion channeling data by Monte Carlo simulations
Energy Technology Data Exchange (ETDEWEB)
Nowicki, Lech [Andrzej SoItan Institute for Nuclear Studies, ul. Hoza 69, 00-681 Warsaw (Poland)]. E-mail: lech.nowicki@fuw.edu.pl; Turos, Andrzej [Institute of Electronic Materials Technology, Wolczynska 133, 01-919 Warsaw (Poland); Ratajczak, Renata [Andrzej SoItan Institute for Nuclear Studies, ul. Hoza 69, 00-681 Warsaw (Poland); Stonert, Anna [Andrzej SoItan Institute for Nuclear Studies, ul. Hoza 69, 00-681 Warsaw (Poland); Garrido, Frederico [Centre de Spectrometrie Nucleaire et Spectrometrie de Masse, CNRS-IN2P3-Universite Paris-Sud, 91405 Orsay (France)
2005-10-15
The basic scheme of Monte Carlo simulation of ion channeling spectra is reformulated in terms of statistical sampling. The McChasy simulation code is described and two examples of the code's applications are presented: calculation of the projectile flux in a uranium dioxide crystal and defect analysis for an ion-implanted InGaAsP/InP superlattice. Virtues and pitfalls of defect analysis using Monte Carlo simulations are discussed.
Monte Carlo simulation in SPECT: a comparison of two approaches
Sled, John G.; Celler, Anna; Barney, J. Scott; Ivanovic, Marija
1994-05-01
Monte Carlo methods play an important role in medical imaging research. Direct analog Monte Carlo simulations can be very accurate but require considerable computational resources. Variance reduction techniques may offer a solution to this problem. In this paper we present a comparison of expected values of standard quantities of interest for SPECT using these two simulation methods. The effect of variance reduction on the statistical characteristics of the simulated data is also investigated.
CORPORATE VALUATION USING TWO-DIMENSIONAL MONTE CARLO SIMULATION
Directory of Open Access Journals (Sweden)
Toth Reka
2010-12-01
Full Text Available In this paper, we have presented a corporate valuation model. The model combines several valuation methods in order to obtain more accurate results. To determine the corporate asset value we have used the Gordon-like two-stage asset valuation model based on the calculation of the free cash flow to the firm. We have used the free cash flow to the firm to determine the corporate market value, which was calculated with the Black-Scholes option pricing model within the framework of the two-dimensional Monte Carlo simulation method. The combined model and the use of the two-dimensional simulation provide a better opportunity for corporate value estimation.
Closed-shell variational quantum Monte Carlo simulation for the ...
African Journals Online (AJOL)
Closed-shell variational quantum Monte Carlo simulation for the electric dipole moment calculation of hydrazine molecule using casino-code. ... Nigeria Journal of Pure and Applied Physics ... The variational quantum Monte Carlo (VQMC) technique used in this work employed the restricted Hartree-Fock (RHF) scheme.
The application of Bayesian interpolation in Monte Carlo simulations
Rajabali Nejad, Mohammadreza; van Gelder, P.H.A.J.M.; van Erp, N.; Martorell, Sebastian; Soares, C. Guedes; Barnett, Julie
2009-01-01
To reduce the cost of Monte Carlo (MC) simulations for time-consuming processes (like Finite Elements), a Bayesian interpolation method is coupled with the Monte Carlo technique. It is, therefore, possible to reduce the number of realizations in MC by interpolation. Besides, there is a possibility
Forest canopy BRDF simulation using Monte Carlo method
Huang, J.; Wu, B.; Zeng, Y.; Tian, Y.
2006-01-01
The Monte Carlo method is a random statistical method which has been widely used to simulate the Bidirectional Reflectance Distribution Function (BRDF) of vegetation canopies in the field of visible remote sensing. The random process between photons and the forest canopy was designed using the Monte Carlo method.
Crop canopy BRDF simulation and analysis using Monte Carlo method
Huang, J.; Wu, B.; Tian, Y.; Zeng, Y.
2006-01-01
The authors design the random process between photons and the crop canopy. A Monte Carlo model has been developed to simulate the Bi-directional Reflectance Distribution Function (BRDF) of a crop canopy. Comparing the Monte Carlo model to the MCRM model, this paper analyzes the variations of different LAD and
Monte Carlo simulations for plasma physics
Energy Technology Data Exchange (ETDEWEB)
Okamoto, M.; Murakami, S.; Nakajima, N.; Wang, W.X. [National Inst. for Fusion Science, Toki, Gifu (Japan)
2000-07-01
Plasma behaviours are very complicated and the analyses are generally difficult. However, when collisional processes play an important role in the plasma behaviour, the Monte Carlo method is often employed as a useful tool. For example, in neutral particle injection heating (NBI heating), electron or ion cyclotron heating, and alpha heating, Coulomb collisions slow down highly energetic particles and pitch-angle scatter them. These processes are often studied by the Monte Carlo technique, and good agreement can be obtained with experimental results. Recently, the Monte Carlo method has been developed to study fast-particle transport associated with heating and the generation of the radial electric field. Further, it is applied to investigating neoclassical transport in plasmas with steep gradients of density and temperature, which is beyond the conventional neoclassical theory. In this report, we briefly summarize the research done by the present authors utilizing the Monte Carlo method. (author)
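Monte Carlo pitch-angle scattering of the kind described above can be sketched with a Boozer-Kuo-Petravic-style operator (an assumed form used here for illustration, not necessarily the operator used by the authors): each step applies a deterministic drag on the pitch plus a random kick.

```python
import numpy as np

# Pitch-angle scattering with collision frequency nu: starting from a beam
# (pitch lambda = 1), repeated scattering should isotropize the distribution,
# giving mean(lambda) -> 0 and var(lambda) -> 1/3 (uniform on [-1, 1]).
rng = np.random.default_rng(5)
nu, dt, nsteps = 1.0, 0.01, 2000
lam = np.ones(5000)                     # all particles start with pitch = 1
for _ in range(nsteps):
    xi = rng.choice([-1.0, 1.0], lam.size)
    lam = lam * (1.0 - nu * dt) + xi * np.sqrt((1.0 - lam**2) * nu * dt)
    lam = np.clip(lam, -1.0, 1.0)       # guard against discretization overshoot
print(lam.mean(), lam.var())            # isotropized: ~0 and ~1/3
```

The first moment decays roughly as exp(-nu*t), which is the slowing-down and scattering behaviour the abstract attributes to Coulomb collisions.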
Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood e...
Hybrid Multilevel Monte Carlo Simulation of Stochastic Reaction Networks
Moraes, Alvaro
2015-01-07
Stochastic reaction networks (SRNs) are a class of continuous-time Markov chains intended to describe, from a kinetic point of view, the time evolution of chemical systems in which molecules of different chemical species undergo a finite set of reaction channels. This talk is based on articles [4, 5, 6], where we are interested in the following problem: given an SRN, X, defined through its set of reaction channels and its initial state, x0, estimate E(g(X(T))); that is, the expected value of a scalar observable, g, of the process, X, at a fixed time, T. This problem leads us to define a series of Monte Carlo estimators, M, that with high probability produce values close to the quantity of interest, E(g(X(T))). More specifically, given a user-selected tolerance, TOL, and a small confidence level, η, find an estimator, M, based on approximate sampled paths of X, such that P(|E(g(X(T))) − M| ≤ TOL) ≥ 1 − η; even more, we want to achieve this objective with near-optimal computational work. We first introduce a hybrid path-simulation scheme based on the well-known stochastic simulation algorithm (SSA) [3] and the tau-leap method [2]. Then, we introduce a Multilevel Monte Carlo strategy that allows us to achieve a computational complexity of order O(TOL^-2); this is the same computational complexity as in an exact method, but with a smaller constant. We provide numerical examples to show our results.
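The tau-leap method referenced above ([2]) can be sketched for the simplest SRN, a single decay channel (a toy model; the talk's hybrid SSA/tau-leap scheme and the multilevel machinery are not reproduced here).

```python
import numpy as np

# Tau-leap for the decay channel X -> 0 with propensity c*X: over each leap
# of length tau, fire a Poisson number of reactions instead of simulating
# every event. E[X(T)] should approach x0 * exp(-c*T) for small tau.
rng = np.random.default_rng(6)
c, tau, T, x0 = 1.0, 0.01, 1.0, 1000
finals = []
for _ in range(500):
    x = x0
    for _ in range(int(round(T / tau))):
        k = rng.poisson(c * x * tau)    # number of decays in this leap
        x = max(x - k, 0)               # tau-leap can overshoot; clamp at 0
    finals.append(x)
print(np.mean(finals), x0 * np.exp(-c * T))
```

The leap introduces a small, tau-dependent bias; the multilevel strategy in the talk combines leaps of different sizes to control exactly this bias at reduced cost.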
Monte Carlo molecular simulation of phase-coexistence for oil production and processing
Li, Jun
2011-01-01
The Gibbs-NVT ensemble Monte Carlo method is used to simulate the liquid-vapor coexistence diagram, and the simulation results for methane agree well with the experimental data over a wide range of temperatures. For systems with two components, the Gibbs-NPT ensemble Monte Carlo method is employed in the simulation, with each component modeled as a Lennard-Jones fluid and the mole fraction of each component in each phase determined. As the results of Monte Carlo simulations usually contain large statistical errors, the blocking method is used to estimate the variance of the simulation results. Additionally, in order to improve the simulation efficiency, the step sizes of the different trial moves are adjusted automatically so that their acceptance probabilities approach the preset values.
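The blocking method mentioned above can be sketched on a synthetic correlated series (an AR(1) process standing in for correlated Monte Carlo output): the naive standard error ignores correlation and is too small, while averaging over blocks longer than the correlation time restores an honest error bar.

```python
import numpy as np

# Blocking-method sketch on an AR(1) series with strong positive correlation.
rng = np.random.default_rng(7)
n, phi = 100_000, 0.9
e = rng.normal(0.0, 1.0, n)
x = np.empty(n)
x[0] = e[0]
for i in range(1, n):
    x[i] = phi * x[i - 1] + e[i]        # correlated "simulation output"

naive_se = x.std(ddof=1) / np.sqrt(n)   # pretends the samples are independent
block = 1000                             # block length >> correlation time (~10)
means = x[: n - n % block].reshape(-1, block).mean(axis=1)
blocked_se = means.std(ddof=1) / np.sqrt(means.size)
print(naive_se, blocked_se)              # blocked_se is several times larger
```

For this series the true inflation factor is sqrt((1+phi)/(1-phi)), roughly 4.4, which the block averages recover.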
Jiang, Xue; Na, Jin; Lu, Wenxi; Zhang, Yu
2017-11-01
Simulation-optimization techniques are effective in identifying an optimal remediation strategy. Simulation models with uncertainty, primarily in the form of parameter uncertainty with different degrees of correlation, influence the reliability of the optimal remediation strategy. In this study, a coupled Monte Carlo simulation and Copula theory approach is proposed for uncertainty analysis of a simulation model when parameters are correlated. Using the self-adaptive weight particle swarm optimization Kriging method, a surrogate model was constructed to replace the simulation model and reduce the computational burden and time consumption resulting from repeated and multiple Monte Carlo simulations. The Akaike information criterion (AIC) and the Bayesian information criterion (BIC) were employed to identify whether the t Copula function or the Gaussian Copula function is the optimal Copula function to match the dependence structure of the parameters. The results show that both the AIC and BIC values of the t Copula function are less than those of the Gaussian Copula function, indicating that the t Copula function is the optimal function for matching the dependence structure of the parameters. The outputs of the simulation model when parameter correlation was considered and when it was ignored were compared. The results show that the amplitude of the fluctuation interval when parameter correlation was considered is less than the corresponding amplitude when it was ignored. Moreover, it was demonstrated that considering the correlation among parameters is essential for uncertainty analysis of a simulation model, and the results of uncertainty analysis should be incorporated into the remediation strategy optimization process.
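Sampling correlated parameters through a Gaussian Copula, the simpler of the two candidates above, can be sketched as follows (uniform marginals for illustration; any marginal distribution can be substituted via its inverse CDF).

```python
import numpy as np
from math import erf, sqrt

# Gaussian-copula sketch: draw correlated normals, then map each margin
# through the standard normal CDF to obtain correlated uniform parameters.
rng = np.random.default_rng(8)
rho, n = 0.8, 50_000
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
u = 0.5 * (1.0 + np.vectorize(erf)(z / sqrt(2.0)))   # standard normal CDF
print(np.corrcoef(u[:, 0], u[:, 1])[0, 1])           # close to, but below, rho
```

Feeding such correlated parameter draws into the simulation model (or its surrogate) is what distinguishes the correlated Monte Carlo analysis above from one with independently sampled parameters.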
Monte Carlo Approach for Reliability Estimations in Generalizability Studies.
Dimitrov, Dimiter M.
A Monte Carlo approach is proposed, using the Statistical Analysis System (SAS) programming language, for estimating reliability coefficients in generalizability theory studies. Test scores are generated by a probabilistic model that considers the probability for a person with a given ability score to answer an item with a given difficulty…
Monte Carlo simulation in statistical physics an introduction
Binder, Kurt
1992-01-01
The Monte Carlo method is a computer simulation method which uses random numbers to simulate statistical fluctuations. The method is used to model complex systems with many degrees of freedom. Probability distributions for these systems are generated numerically, and the method then yields numerically exact information on the models. Such simulations may be used to see how well a model system approximates a real one, or to see how valid the assumptions are in an analytical theory. A short and systematic theoretical introduction to the method forms the first part of this book. The second part is a practical guide with plenty of examples and exercises for the student. Problems treated by simple sampling (random and self-avoiding walks, percolation clusters, etc.) are included, along with such topics as finite-size effects and guidelines for the analysis of Monte Carlo simulations. The two parts together provide an excellent introduction to the theory and practice of Monte Carlo simulations.
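Simple sampling of random walks, the first problem class listed above, can be sketched in a few lines (square-lattice walks, for which the mean squared end-to-end distance is exactly the number of steps).

```python
import numpy as np

# Simple sampling of N-step random walks on the square lattice:
# the mean squared end-to-end distance <R^2> equals N exactly.
rng = np.random.default_rng(9)
steps = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])
N, walks = 50, 20_000
choices = rng.integers(0, 4, size=(walks, N))   # random step directions
ends = steps[choices].sum(axis=1)               # end-to-end displacement vectors
r2 = (ends**2).sum(axis=1).mean()
print(r2, N)                                    # r2 is close to N
```

Self-avoiding walks, also mentioned in the abstract, cannot be generated this directly: simple sampling discards attrition-heavy walks, which is exactly the kind of issue the book's analysis guidelines address.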
Energy Technology Data Exchange (ETDEWEB)
Yakoumakis, E N; Gialousis, G I; Papadopoulou, Despina; Makri, Triantafillia; Pappouli, Zografia; Yakoumakis, Nikolaos; Papagiannis, Panayotis; Georgiou, Evangelos [Medical Physics Department, University of Athens, 75 Mikras Asias Street, Athens 11527 (Greece)], E-mail: gialousis@med.uoa.gr
2009-06-15
Entrance surface radiation doses were measured with thermoluminescent dosimeters for 98 children who were referred to a cardiology department for the diagnosis or treatment of a congenital heart disease. Additionally, all the radiographic parameters were recorded and Monte Carlo simulations were performed to estimate entrance surface dose to effective dose conversion factors, in order to further calculate the effective dose for each child. For diagnostic catheterisations the values ranged from 0.16 to 14.44 mSv, with an average of 3.71 mSv, and for therapeutic catheterisations the values ranged from 0.38 to 25.01 mSv, with an average of 5 mSv. Effective doses were estimated for diagnostic procedures and for interventional procedures performed for the treatment of five different heart diseases: (a) atrial septal defect (ASD), (b) ventricular septal defect (VSD), (c) patent ductus arteriosus (PDA), (d) aorta coarctation and (e) pulmonary stenosis. The high levels of radiation exposure are, however, balanced by the advantages of cardiac catheterisation, such as the avoidance of surgical closure and the need for shorter or even no hospitalisation.
ESTIMATING RESIDUAL HEDGING RISK WITH LEAST-SQUARES MONTE CARLO
STEFAN ANKIRCHNER; CHRISTIAN PIGORSCH; NIKOLAUS SCHWEIZER
2014-01-01
Frequently, dynamic hedging strategies minimizing risk exposure are not given in closed form but need to be approximated numerically. This makes it difficult to estimate residual hedging risk, also called basis risk, when only imperfect hedging instruments are at hand. We propose an easy-to-implement and computationally efficient least-squares Monte Carlo algorithm to estimate residual hedging risk. The algorithm approximates the variance-minimal hedging strategy within general diffusion mod...
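The regression step at the heart of a least-squares Monte Carlo algorithm can be sketched as follows (a toy conditional-expectation problem, not the paper's hedging setup): simulated payoffs are regressed on basis functions of the simulated state to approximate a conditional expectation that has no closed form.

```python
import numpy as np

# Least-squares Monte Carlo sketch: recover E[y | x] = x^2 by regressing
# noisy simulated payoffs on a polynomial basis of the state variable.
rng = np.random.default_rng(10)
n = 50_000
x = rng.normal(0.0, 1.0, n)              # simulated state at the hedge date
y = x**2 + rng.normal(0.0, 0.5, n)       # simulated payoff; E[y | x] = x^2
coef = np.polyfit(x, y, deg=2)           # fit a*x^2 + b*x + c
print(coef)                              # approximately [1, 0, 0]
```

In a hedging application, the fitted conditional expectation defines the approximate strategy, and the regression residuals quantify the remaining (basis) risk.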
Energy Technology Data Exchange (ETDEWEB)
Bonzoumet, S.P.J.; Braz, D.; Lopes, R.T. [Coordenacao dos Programas de Pos-Graduacao de Engenharia (LIN/COPPE/UFRJ), RJ (Brazil). Programa de Engenharia Nuclear. Lab. de Instrumentacao Nuclear; Anjos, M.J. [Coordenacao dos Programas de Pos-Graduacao de Engenharia (LIN/COPPE/UFRJ), RJ (Brazil). Programa de Engenharia Nuclear. Lab. de Instrumentacao Nuclear; Universidade do Estado do Rio de Janeiro (UERJ), RJ (Brazil). Inst. de Fisica; Padilha, Lucas, E-mail: sielso@lin.ufrj.br [Universidade Federal do Rio de Janeiro (HUCFF/UFRJ), RJ (Brazil). Hospital Universitario Clementino Fraga Filho
2005-07-01
In this work we used the EGS4 code in a simulated study of the dose percentage in intraoral examinations over the energy range from 10 to 140 keV. The simulation was carried out on a model consisting of different geometries (cheek, tooth and mouth cavity) under normal incidence of the X-ray beam over the surface of the various simulated materials. It was observed that for energies smaller than 30 keV most of the energy is deposited on the cheek. At 30 keV there is a point of maximum radiation absorption in the tooth (approximately 60% of the energy of the incident radiation is deposited in the tooth) relative to the other simulated materials. This means that at this energy there is better contrast in the radiographic image of the tooth and a smaller dose on the cheek. At 40 keV the energy deposited in the tooth is roughly equal to the energy that is transmitted (to the radiographic film or buccal cavity), causing a degradation of the radiographic image and/or a higher dose in the oral cavity. For energies above 40 keV, the amount of energy transmitted (to the oral cavity and/or radiographic film) is higher than the energy deposited in the other materials, i.e. it only contributes to increasing the dose in the regions close to the oral cavity and to degrading the radiographic image. These results can provide important information for radiological procedures applied in dentistry, where image quality is a relevant factor for dental evaluation, as well as for reducing the dose in the oral cavity.
Numerical integration of detector response functions via Monte Carlo simulations
Kelly, K. J.; O'Donnell, J. M.; Gomez, J. A.; Taddeucci, T. N.; Devlin, M.; Haight, R. C.; White, M. C.; Mosby, S. M.; Neudecker, D.; Buckner, M. Q.; Wu, C. Y.; Lee, H. Y.
2017-09-01
Calculations of detector response functions are complicated because they include the intricacies of signal creation from the detector itself as well as a complex interplay between the detector, the particle-emitting target, and the entire experimental environment. As such, these functions are typically only accessible through time-consuming Monte Carlo simulations. Furthermore, the output of thousands of Monte Carlo simulations can be necessary in order to extract a physics result from a single experiment. Here we describe a method to obtain a full description of the detector response function using Monte Carlo simulations. We also show that a response function calculated in this way can be used to create Monte Carlo simulation output spectra a factor of ∼1000 faster than running a new Monte Carlo simulation. A detailed discussion of the proper treatment of uncertainties when using this and other similar methods is provided as well. This method is demonstrated and tested using simulated data from the Chi-Nu experiment, which measures prompt fission neutron spectra at the Los Alamos Neutron Science Center.
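Folding a source spectrum through a precomputed response matrix, which is the mechanism behind the speed-up described above, can be sketched as follows (a toy Gaussian response invented for illustration, not the Chi-Nu detector response).

```python
import numpy as np

# Response-matrix sketch: once Monte Carlo runs have tabulated R[j, i], the
# probability that a particle emitted in energy bin i is recorded in
# pulse-height bin j, any source spectrum folds via a matrix-vector product
# instead of a fresh simulation.
nbins = 64
centers = np.arange(nbins, dtype=float)
# Toy response: Gaussian smearing whose width grows with incident energy
R = np.exp(-0.5 * ((centers[:, None] - centers[None, :])
                   / (1.0 + 0.05 * centers[None, :]))**2)
R /= R.sum(axis=0)                 # each incident bin redistributes fully

source = np.zeros(nbins)
source[40] = 1000.0                # monoenergetic test spectrum
measured = R @ source              # "simulated" detector spectrum, instantly
print(measured.sum())              # counts are conserved: 1000.0
```

The matrix is built once from expensive simulations; after that, folding any of the thousands of candidate spectra costs a single matrix multiply.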
Andrew D. Richardson; David Y. Hollinger
2005-01-01
Whether the goal is to fill gaps in the flux record, or to extract physiological parameters from eddy covariance data, researchers are frequently interested in fitting simple models of ecosystem physiology to measured data. Presently, there is no consensus on the best models to use, or the ideal optimization criteria. We demonstrate that, given our estimates of the...
Monte-Carlo simulation-based statistical modeling
Chen, John
2017-01-01
This book brings together expert researchers engaged in Monte-Carlo simulation-based statistical modeling, offering them a forum to present and discuss recent issues in methodological development as well as public health applications. It is divided into three parts, with the first providing an overview of Monte-Carlo techniques, the second focusing on missing data Monte-Carlo methods, and the third addressing Bayesian and general statistical modeling using Monte-Carlo simulations. The data and computer programs used here will also be made publicly available, allowing readers to replicate the model development and data analysis presented in each chapter, and to readily apply them in their own research. Featuring highly topical content, the book has the potential to impact model development and data analyses across a wide spectrum of fields, and to spark further research in this direction.
Composite system reliability evaluation using sequential Monte Carlo simulation
Jonnavithula, Annapoorani
Monte Carlo simulation methods can be effectively used to assess the adequacy of composite power system networks. The sequential simulation approach is the most fundamental technique available and can be used to provide a wide range of indices. It can also be used to provide estimates which can serve as benchmarks against which other approximate techniques can be compared. The focus of this research work is on the reliability evaluation of composite generation and transmission systems with special reference to frequency- and duration-related indices and estimated power interruption costs at each load bus. One of the main objectives is to use the sequential simulation method to create a comprehensive technique for composite system adequacy evaluation. This thesis recognizes the need for an accurate representation of the load model at the load buses, which depends on the mix of customer sectors at each bus. Chronological hourly load curves are developed in this thesis, recognizing the individual load profiles of the customers at each load bus. Reliability worth considerations are playing an ever-increasing role in power system planning and operation. Different methods for bus outage cost evaluation are proposed in this thesis. It may not be computationally feasible to use the sequential simulation method with time-varying loads at each bus in large electric power system networks. Time-varying load data may also not be available at each bus. This research work uses the sequential methodology as a fundamental technique to calibrate other non-sequential methods such as the state sampling and state transition sampling techniques. Variance reduction techniques that improve the efficiency of the sequential simulation procedure are investigated as a part of this research work. Pertinent features that influence reliability worth assessment are also incorporated. All the proposed methods in this thesis are illustrated by application to two reliability test systems. In addition
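The chronological sampling that underlies this kind of study can be sketched as follows; the two-unit system, rates, and constant load are illustrative stand-ins, not data from the thesis:

```python
import random

random.seed(1)

# Hypothetical two-unit system: (capacity MW, failure rate /h, repair rate /h)
units = [(50.0, 0.001, 0.05), (60.0, 0.002, 0.04)]
load = 70.0            # constant load; a chronological load curve could be used
horizon = 1_000_000.0  # simulated hours

# Chronological (sequential) simulation of unit up/down cycles.
t, down_time = 0.0, 0.0
up = [True, True]
nxt = [random.expovariate(u[1]) for u in units]  # next transition times

while t < horizon:
    i = min(range(len(units)), key=lambda k: nxt[k])
    span = min(nxt[i], horizon) - t
    capacity = sum(u[0] for u, s in zip(units, up) if s)
    if capacity < load:
        down_time += span          # loss-of-load accumulates duration
    t = nxt[i]
    up[i] = not up[i]              # unit fails or is repaired
    rate = units[i][2] if not up[i] else units[i][1]
    nxt[i] = t + random.expovariate(rate)

lolp = down_time / horizon  # loss-of-load probability estimate
```

Because the simulation walks through time, outage frequency and duration fall out of the same event record, which is exactly what non-sequential state sampling cannot provide directly.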
Stabilization effect of fission source in coupled Monte Carlo simulations
Energy Technology Data Exchange (ETDEWEB)
Olsen, Borge; Dufek, Jan [Div. of Nuclear Reactor Technology, KTH Royal Institute of Technology, AlbaNova University Center, Stockholm (Sweden)
2017-08-15
A fission source can act as a stabilization element in coupled Monte Carlo simulations. We have observed this while studying numerical instabilities in nonlinear steady-state simulations performed by a Monte Carlo criticality solver that is coupled to a xenon feedback solver via fixed-point iteration. While fixed-point iteration is known to be numerically unstable for some problems, resulting in large spatial oscillations of the neutron flux distribution, we show that it is possible to stabilize it by reducing the number of Monte Carlo criticality cycles simulated within each iteration step. While global convergence is ensured, development of any possible numerical instability is prevented by not allowing the fission source to converge fully within a single iteration step, which is achieved by setting a small number of criticality cycles per iteration step. Moreover, under these conditions, the fission source may converge even faster than in criticality calculations with no feedback, as we demonstrate in our numerical test simulations.
Duan, Lian; Makita, Shuichi; Yamanari, Masahiro; Lim, Yiheng; Yasuno, Yoshiaki
2011-08-01
A Monte-Carlo-based phase retardation estimator is developed to correct the systematic error in phase retardation measurement by polarization sensitive optical coherence tomography (PS-OCT). Recent research has revealed that the phase retardation measured by PS-OCT has a distribution that is neither symmetric nor centered at the true value. Hence, a standard mean estimator gives us erroneous estimations of phase retardation, and it degrades the performance of PS-OCT for quantitative assessment. In this paper, the noise property in phase retardation is investigated in detail by Monte-Carlo simulation and experiments. A distribution transform function is designed to eliminate the systematic error by using the result of the Monte-Carlo simulation. This distribution transformation is followed by a mean estimator. This process provides a significantly better estimation of phase retardation than a standard mean estimator. This method is validated both by numerical simulations and experiments. The application of this method to in vitro and in vivo biological samples is also demonstrated.
Shielding evaluation of neutron generator hall by Monte Carlo simulations
Energy Technology Data Exchange (ETDEWEB)
Pujala, U.; Selvakumaran, T.S.; Baskaran, R.; Venkatraman, B. [Radiological Safety Division, Indira Gandhi Center for Atomic Research, Kalpakkam (India); Thilagam, L.; Mohapatra, D.K., E-mail: swathythila2@yahoo.com [Safety Research Institute, Atomic Energy Regulatory Board, Kalpakkam (India)
2017-04-01
A shielded hall was constructed for accommodating a D-D, D-T or D-Be based pulsed neutron generator (NG) with a 4π yield of 10{sup 9} n/s. The neutron shield design of the facility was optimized using the NCRP-51 methodology such that the total dose rates outside the hall areas are well below the regulatory limit for the full occupancy criterion (1 μSv/h). However, the total dose rates at the roof top, the cooling room trench exit and the labyrinth exit were found to be above this limit for the optimized design. Hence, additional neutron shielding arrangements were proposed for the cooling room trench and labyrinth exits. The roof top was made inaccessible. The present study is an attempt to evaluate the neutron and associated capture gamma transport through the bulk shields for the complete geometry and materials of the NG hall using the Monte Carlo (MC) codes MCNP and FLUKA. The neutron source terms of the D-D, D-T and D-Be reactions are considered in the simulations. The effect of the proposed additional shielding has been demonstrated through simulations carried out with the additional shielding in place for the D-Be neutron source term. The results of the MC simulations using the two codes are found to be consistent with each other for neutron dose rate estimates. However, deviations of up to 28% are noted between the two codes at a few locations for capture gamma dose rate estimates. Overall, the dose rates estimated by the MC simulations including the additional shields show that all the locations surrounding the hall satisfy the full occupancy criterion for all three types of sources. Additionally, the dose rates due to direct transmission of primary neutrons estimated by FLUKA are compared with the values calculated using the formula given in NCRP-51, and the two deviate from each other by up to 50%. The details of the MC simulations and the NCRP-51 methodology for the estimation of the primary neutron dose rate, along with the results, are presented in this paper. (author)
Direct Monte Carlo simulation of nanoscale mixed gas bearings
Directory of Open Access Journals (Sweden)
Kyaw Sett Myo
2015-06-01
Full Text Available Sealed hard drives filled with helium gas mixtures have recently been proposed as an alternative to current hard drives, for achieving higher reliability and smaller position error. It is therefore important to understand the effects of different helium gas mixtures on the slider bearing characteristics in the head–disk interface. In this article, helium/air and helium/argon gas mixtures are applied as the working fluids and their effects on the bearing characteristics are studied using the direct simulation Monte Carlo method. From the direct simulation Monte Carlo simulations, physical properties of these gas mixtures such as mean free path and dynamic viscosity are obtained and compared with those from theoretical models; the two are found to be comparable. Using these gas mixture properties, the bearing pressure distributions are calculated for different helium fractions with conventional molecular gas lubrication models. The results reveal that the molecular gas lubrication predictions agree relatively well with the direct simulation Monte Carlo simulations, especially for pure air, helium, or argon. For gas mixtures, the bearing pressures predicted by the molecular gas lubrication model are slightly larger than those from the direct simulation Monte Carlo simulations.
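As a point of reference for the gas properties mentioned above, the hard-sphere mean free path of a pure gas follows directly from kinetic theory; the molecular diameters below are approximate assumed values, and a real mixture calculation needs mixture collision rates rather than this single-species formula:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def mean_free_path(d, temp, press):
    """Hard-sphere mean free path: lambda = k_B*T / (sqrt(2)*pi*d^2*p)."""
    return K_B * temp / (math.sqrt(2) * math.pi * d**2 * press)

# Approximate hard-sphere diameters (assumed illustrative values), m.
d_he, d_air = 2.18e-10, 3.70e-10
temp, press = 300.0, 101325.0  # K, Pa

lam_he = mean_free_path(d_he, temp, press)
lam_air = mean_free_path(d_air, temp, press)

# Helium's smaller diameter gives a much longer mean free path, hence a
# larger Knudsen number in a nanometre-scale head-disk gap.
knudsen_he = lam_he / 10e-9  # Kn for an assumed 10 nm spacing
```

The large Knudsen number this produces is why continuum lubrication theory needs rarefaction corrections, and why DSMC is the natural benchmark at this scale.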
Monte Carlo simulation of gas-flow using MCNP
Energy Technology Data Exchange (ETDEWEB)
Matthes, W.K. [21027 Ispra, Via Francia 146 (Italy)]. E-mail: wilhelm_matthes@hotmail.com
2005-09-15
The simulation of the flow of rarefied gases by Monte Carlo has long been established and goes by the name DSMC (Direct Simulation Monte Carlo). The theory, applications and references are well documented in monographs on this subject, e.g., Bird [Bird, G.A., 1998. Molecular Gas Dynamics and the Direct Simulation of Gas Flows, Clarendon Press, Oxford] and Cercignani [Cercignani, C., 2000. Rarefied Gas Dynamics, Cambridge University Press, Cambridge]. However, as most applications are restricted to two-dimensional flows, we want to demonstrate that the MCNP code (see [Briesmeier, J.F., 1986. MCNP-A General Monte Carlo Code for Neutron and Photon Transport, Version 3A, Los Alamos National Laboratory]), after a few modifications, provides a very flexible tool to investigate the flow (and reactions) of multicomponent gas mixtures in complicated three-dimensional structures.
Fixed forced detection for fast SPECT Monte-Carlo simulation.
Cajgfinger, Thomas; Rit, Simon; Letang, Jean Michel; Halty, Adrien; Sarrut, David
2017-11-29
Monte-Carlo simulations of SPECT images are notoriously slow to converge due to the large ratio between the number of photons emitted and the number detected through the collimator. This work proposes a method to accelerate the simulations based on fixed forced detection (FFD) combined with an analytical response of the detector. FFD is based on a Monte-Carlo simulation but forces the detection of a photon in each detector pixel, weighted by the probability of emission (or scattering) and transmission to that pixel. The method was evaluated with numerical phantoms and on patient images. The differences from analog Monte-Carlo simulations were lower than the statistical uncertainty. The overall computing time gain can reach up to 5 orders of magnitude. Source code and examples are available in the Gate V8.0 release. © 2017 Institute of Physics and Engineering in Medicine.
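The forced-detection weighting can be illustrated with a toy 1-D geometry; the attenuation coefficient, pixel layout, and the 2-D solid-angle approximation below are all assumptions for illustration, not GATE's actual implementation:

```python
import math

# Toy 1-D geometry: a point source in an attenuating medium, a row of
# detector pixels 100 mm away (all values assumed for illustration).
mu = 0.015                                  # attenuation coefficient, 1/mm
source = (0.0, 0.0)
pixel_size = 8.0                            # mm
pixels = [(x, 100.0) for x in range(-32, 33, 8)]

def forced_detection_weight(src, pix):
    """Weight of the photon forced into this pixel: emission probability
    toward the pixel (isotropic source, 2-D small-angle analogue) times
    the transmission along the straight path."""
    dist = math.hypot(pix[0] - src[0], pix[1] - src[1])
    solid_angle_frac = pixel_size / (2.0 * math.pi * dist)
    transmission = math.exp(-mu * dist)
    return solid_angle_frac * transmission

weights = [forced_detection_weight(source, p) for p in pixels]
# Every pixel receives a deterministic (if tiny) score per emitted photon,
# instead of waiting for the rare analog photon to reach the detector.
```

Because each emission contributes to every pixel, the variance per history drops dramatically, which is the source of the quoted speed-up.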
Diagrammatic Monte Carlo simulations of staggered fermions at finite coupling
Vairinhos, Helvio
2016-01-01
Diagrammatic Monte Carlo has been a very fruitful tool for taming, and in some cases even solving, the sign problem in several lattice models. We have recently proposed a diagrammatic model for simulating lattice gauge theories with staggered fermions at arbitrary coupling, which extends earlier successful efforts to simulate lattice QCD at finite baryon density in the strong-coupling regime. Here we present the first numerical simulations of our model, using worm algorithms.
Monte Carlo simulations of electron photoemission from cesium antimonide
Gupta, Pranav; Cultrera, Luca; Bazarov, Ivan
2017-06-01
We report on the results from semi-classical Monte Carlo simulations of electron photoemission (photoelectric emission) from cesium antimonide (Cs3Sb) and compare them with experimental results at 90 K and room temperature, with an emphasis on near-threshold photoemission properties. Interfacial effects, impurities, and electron-phonon coupling are central features of our Monte Carlo model. We use these simulations to predict photoemission properties at the ultracold cryogenic temperature of 20 K and to identify critical material parameters that need to be properly measured experimentally for reproducing the electron photoemission properties of Cs3Sb and other materials more accurately.
Monte Carlo simulation of bremsstrahlung produced at SPring-8
Energy Technology Data Exchange (ETDEWEB)
Asano, Yoshihiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1999-03-01
The beam lines of the SPring-8 storage ring have been in operation since 1997. The energy range relevant to safety analysis extends from synchrotron radiation of a few keV up to 8 GeV electrons, photons and photoneutrons, some of which have directional distributions. Simulations that include empirical data are needed in some cases, namely: (1) gas bremsstrahlung produced by the interaction between stored electrons and residual gases, (2) the behavior of high energy photons produced by inverse Compton scattering, and (3) neutrons produced by photonuclear reactions. The leakage flux caused by ground-shine of synchrotron radiation is also estimated by simulation. Usually a beam line is set up on the extrapolation of a straight section of the storage ring; in this case, gas bremsstrahlung from the storage ring is not negligible at the beam line. The Monte Carlo code EGS4 for electromagnetic cascade interactions is used for the estimation, and the accuracy of the result is discussed together with the validity of the assumptions, including the radial distribution. SPring-8 has a beam line in which high energy photons are produced by laser-electron interaction; in this case a photon has an energy of about 3.5 GeV, and local shielding for this gamma radiation is one of the key issues in the beam line design. The EGS4 code is used to simulate an effective shielding structure. The EGS4 code is also used to obtain the track length distribution of gas bremsstrahlung photons impinging on a lead target; the safety analysis is made with the MCNP4b code. The wiggler and/or undulator installed in the storage ring produce a complicated radiation spectrum. Computer codes (STAC8, ITS3.0, EGS4) are used to analyze the photon transport; in this case the attenuation is very large, and time-consuming calculations are needed. (Y. Tanaka)
Optimizing Muscle Parameters in Musculoskeletal Modeling Using Monte Carlo Simulations
Hanson, Andrea; Reed, Erik; Cavanagh, Peter
2011-01-01
Astronauts assigned to long-duration missions experience bone and muscle atrophy in the lower limbs. The use of musculoskeletal simulation software has become a useful tool for modeling joint and muscle forces during human activity in reduced gravity as access to direct experimentation is limited. Knowledge of muscle and joint loads can better inform the design of exercise protocols and exercise countermeasure equipment. In this study, the LifeModeler(TM) (San Clemente, CA) biomechanics simulation software was used to model a squat exercise. The initial model using default parameters yielded physiologically reasonable hip-joint forces. However, no activation was predicted in some large muscles such as rectus femoris, which have been shown to be active in 1-g performance of the activity. Parametric testing was conducted using Monte Carlo methods and combinatorial reduction to find a muscle parameter set that more closely matched physiologically observed activation patterns during the squat exercise. Peak hip joint force using the default parameters was 2.96 times body weight (BW) and increased to 3.21 BW in an optimized, feature-selected test case. The rectus femoris was predicted to peak at 60.1% activation following muscle recruitment optimization, compared to 19.2% activation with default parameters. These results indicate the critical role that muscle parameters play in joint force estimation and the need for exploration of the solution space to achieve physiologically realistic muscle activation.
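The Monte Carlo stage of such a parametric search can be sketched as follows; the surrogate activation model, parameter ranges, and target value are hypothetical placeholders for the musculoskeletal solver and the observed activation data used in the study:

```python
import random

random.seed(42)

# Hypothetical forward model: predicted peak activation of a muscle as a
# simple surrogate function of two parameters (max isometric force scale,
# tendon slack length scale).  A real study would call the musculoskeletal
# solver here.
def predicted_activation(f_scale, l_scale):
    return 0.6 * f_scale * (1.2 - abs(l_scale - 1.0))

target = 0.60  # physiologically observed peak activation (assumed)

best, best_err = None, float("inf")
for _ in range(10_000):  # Monte Carlo sampling of the parameter space
    f = random.uniform(0.5, 1.5)
    l = random.uniform(0.8, 1.2)
    err = abs(predicted_activation(f, l) - target)
    if err < best_err:
        best, best_err = (f, l), err
```

Random sampling scales gracefully as parameters are added, which is why Monte Carlo exploration (with combinatorial reduction) is preferred over a full grid here.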
Davidson, Valerie J; Ryks, Joanne
2003-10-01
The objective of food safety risk assessment is to quantify levels of risk for consumers as well as to design improved processing, distribution, and preparation systems that reduce exposure to acceptable limits. Monte Carlo simulation tools have been used to deal with the inherent variability in food systems, but these tools require substantial data for estimates of probability distributions. The objective of this study was to evaluate the use of fuzzy values to represent uncertainty. Fuzzy mathematics and Monte Carlo simulations were compared to analyze the propagation of uncertainty through a number of sequential calculations in two different applications: estimation of biological impacts and economic cost in a general framework and survival of Campylobacter jejuni in a sequence of five poultry processing operations. Estimates of the proportion of a population requiring hospitalization were comparable, but using fuzzy values and interval arithmetic resulted in more conservative estimates of mortality and cost, in terms of the intervals of possible values and mean values, compared to Monte Carlo calculations. In the second application, the two approaches predicted the same reduction in mean concentration (-4 log CFU/ml of rinse), but the limits of the final concentration distribution were wider for the fuzzy estimate (-3.3 to 5.6 log CFU/ml of rinse) compared to the probability estimate (-2.2 to 4.3 log CFU/ml of rinse). Interval arithmetic with fuzzy values considered all possible combinations in calculations and maximum membership grade for each possible result. Consequently, fuzzy results fully included distributions estimated by Monte Carlo simulations but extended to broader limits. When limited data defines probability distributions for all inputs, fuzzy mathematics is a more conservative approach for risk assessment than Monte Carlo simulations.
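The contrast between the two propagation styles can be reproduced in miniature; the step ranges and initial concentration below are illustrative, not the paper's data:

```python
import random

random.seed(7)

# Five sequential processing steps, each with an uncertain log reduction
# (illustrative ranges): (min, max) log CFU/ml.
steps = [(0.2, 1.4), (0.5, 1.5), (0.3, 1.1), (0.1, 0.9), (0.4, 1.2)]
c0 = 4.0  # initial concentration, log CFU/ml

# Interval (worst/best case) propagation, as in interval arithmetic with
# fuzzy values: combine the extremes of every step.
c_high = c0 - sum(lo for lo, hi in steps)   # least total reduction
c_low = c0 - sum(hi for lo, hi in steps)    # greatest total reduction

# Monte Carlo propagation: sample each step, so extreme combinations of all
# five steps at once are rare and the resulting distribution is narrower.
finals = [c0 - sum(random.uniform(lo, hi) for lo, hi in steps)
          for _ in range(100_000)]
```

The interval bounds always contain the Monte Carlo range, which is the paper's point: the fuzzy/interval result is the more conservative of the two.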
Play It Again: Teaching Statistics with Monte Carlo Simulation
Sigal, Matthew J.; Chalmers, R. Philip
2016-01-01
Monte Carlo simulations (MCSs) provide important information about statistical phenomena that would be impossible to assess otherwise. This article introduces MCS methods and their applications to research and statistical pedagogy using a novel software package for the R Project for Statistical Computing constructed to lessen the often steep…
APS undulator and wiggler sources: Monte-Carlo simulation
Energy Technology Data Exchange (ETDEWEB)
Xu, S.L.; Lai, B.; Viccaro, P.J.
1992-02-01
Standard insertion devices will be provided to each sector by the Advanced Photon Source. It is important to define the radiation characteristics of these general purpose devices. In this document, results of Monte-Carlo simulations are presented. These results, based on the SHADOW program, include the APS Undulator A (UA), Wiggler A (WA), and Wiggler B (WB).
Monte Carlo simulations of adsorption-induced segregation
DEFF Research Database (Denmark)
Christoffersen, Ebbe; Stoltze, Per; Nørskov, Jens Kehlet
2002-01-01
Through the use of Monte Carlo simulations we study the effect of adsorption-induced segregation. From the bulk composition, degree of dispersion and the partial pressure of the gas phase species we calculate the surface composition of bimetallic alloys. We show that both segregation and adsorption...
Two Dimensional Potential Mapping–Monte Carlo Simulation
Indian Academy of Sciences (India)
Classroom, Resonance – Journal of Science Education, Volume 10, Issue 7, July 2005, pp. 73-84.
Monte Carlo simulation of quantum statistical lattice models
Raedt, Hans De; Lagendijk, Ad
1985-01-01
In this article we review recent developments in computational methods for quantum statistical lattice problems. We begin by giving the necessary mathematical basis, the generalized Trotter formula, and discuss the computational tools, exact summations and Monte Carlo simulation, that will be used
Directory of Open Access Journals (Sweden)
José Luiz Ferreira Martins
2011-09-01
From these data, random samples of 10, 15 and 20 elements, respectively, were taken, and simulations were performed by the Monte Carlo method. Comparing the results of the 160-element sample with the data generated by simulation shows that good results can be obtained by using the Monte Carlo method to estimate the productivity of industrial welding. On the other hand, in the Brazilian construction industry the average productivity is normally used as the productivity indicator; it is based on historical data from other projects, collected and measured only after project completion, which is a limitation. This article presents a tool for evaluating execution in real time, enabling adjustments to estimates and the monitoring of productivity during the project. Similarly, in bidding, budgeting and schedule estimation, the use of this tool would enable the adoption of estimates other than the commonly used average productivity; as alternatives, three criteria are suggested: optimistic, average and pessimistic productivity.
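The resampling idea can be sketched as follows; the sample values and the three percentile-based criteria are illustrative assumptions, not the article's measured data:

```python
import random
import statistics

random.seed(3)

# Hypothetical measured welding productivities from a small site sample
# (e.g. kg of deposited weld metal per man-hour); illustrative numbers only.
sample = [1.8, 2.1, 2.4, 1.9, 2.6, 2.2, 2.0, 2.5, 1.7, 2.3]

# Monte Carlo: resample the small sample many times and record the mean
# productivity of each synthetic project.
means = [statistics.fmean(random.choices(sample, k=len(sample)))
         for _ in range(20_000)]
means.sort()

pessimistic = means[int(0.10 * len(means))]  # 10th percentile
average = statistics.fmean(means)
optimistic = means[int(0.90 * len(means))]   # 90th percentile
```

Re-running this as new measurements arrive during the project is what makes real-time adjustment of the estimate possible.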
Baräo, Fernando; Nakagawa, Masayuki; Távora, Luis; Vaz, Pedro
2001-01-01
This book focuses on the state of the art of Monte Carlo methods in radiation physics and particle transport simulation and applications, the latter involving in particular the use and development of electron-gamma, neutron-gamma and hadronic codes. Besides the basic theory and the methods employed, special attention is paid to algorithm development for modeling, and to the analysis of experiments and measurements in a variety of fields ranging from particle physics to medical physics.
Monte Carlo simulations of the SLOWPOKE-2 reactor
Energy Technology Data Exchange (ETDEWEB)
Tan, A.; Buijs, A., E-mail: tana22@mcmaster.ca, E-mail: buijsa@mcmaster.ca [McMaster University, Hamilton, ON (Canada)
2015-07-01
The goal of this project is to study the transient behaviour of the SLOWPOKE-2 reactor using Monte-Carlo simulations. By validating the Monte-Carlo methods in G4-STORK with experimental measurements, we hope to extend our understanding of reactor transients as well as further develop our methods to model the transients of next-generation reactor designs. A SLOWPOKE-2 reactor such as the one at RMC is modelled using simulation tools from GEANT4 and data taken from the open literature. Simulations in G4-STORK find a neutron flux of order 10{sup 12} n cm{sup -2} s{sup -1} and a control rod worth of (4.9 ± 2.0) mk, compared to the experimentally measured worth of 5.45 mk. (author)
Multiparameter estimation along quantum trajectories with sequential Monte Carlo methods
Ralph, Jason F.; Maskell, Simon; Jacobs, Kurt
2017-11-01
This paper proposes an efficient method for the simultaneous estimation of the state of a quantum system and the classical parameters that govern its evolution. This hybrid approach benefits from efficient numerical methods for the integration of stochastic master equations for the quantum system, and efficient parameter estimation methods from classical signal processing. The classical techniques use sequential Monte Carlo (SMC) methods, which aim to optimize the selection of points within the parameter space, conditioned by the measurement data obtained. We illustrate these methods using a specific example, an SMC sampler applied to a nonlinear system, the Duffing oscillator, where the evolution of the quantum state of the oscillator and three Hamiltonian parameters are estimated simultaneously.
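A stripped-down version of SMC parameter estimation, here for a single constant parameter observed in Gaussian noise rather than a quantum trajectory, runs like this (all values illustrative):

```python
import math
import random

random.seed(5)

theta_true = 1.3   # "unknown" parameter generating the data
sigma = 0.5        # measurement noise standard deviation (assumed known)
obs = [theta_true + random.gauss(0.0, sigma) for _ in range(200)]

# A cloud of weighted samples ("particles") over the parameter is reweighted
# by each measurement's likelihood, then resampled toward likely regions.
n = 2000
particles = [random.uniform(0.0, 3.0) for _ in range(n)]

for y in obs:
    weights = [math.exp(-0.5 * ((y - p) / sigma) ** 2) for p in particles]
    total = sum(weights)
    probs = [w / total for w in weights]
    resampled = random.choices(particles, weights=probs, k=n)
    # Small jitter keeps the parameter cloud from collapsing to a point.
    particles = [p + random.gauss(0.0, 0.01) for p in resampled]

estimate = sum(particles) / n  # posterior mean of the parameter
```

In the hybrid scheme of the paper, each particle would additionally carry a quantum state integrated under its candidate parameters, with the likelihood supplied by the measurement record.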
Monte Carlo simulation of a prototype photodetector used in radiotherapy
Kausch, C; Albers, D; Schmidt, R; Schreiber, B
2000-01-01
The imaging performance of prototype electronic portal imaging devices (EPIDs) has been investigated. Monte Carlo simulations have been applied to calculate the modulation transfer function (MTF(f)), the noise power spectrum (NPS(f)) and the detective quantum efficiency (DQE(f)) for different new types of EPIDs, which consist of a detector combination of metal or polyethylene (PE), a phosphor layer of Gd{sub 2}O{sub 2}S, and a flat array of photodiodes. The simulated results agree well with measurements. Based on the simulated results, possible optimizations of these devices are discussed.
Monte Carlo simulation of the Neutrino-4 experiment
Energy Technology Data Exchange (ETDEWEB)
Serebrov, A. P., E-mail: serebrov@pnpi.spb.ru; Fomin, A. K.; Onegin, M. S.; Ivochkin, V. G.; Matrosov, L. N. [National Research Center Kurchatov Institute, Petersburg Nuclear Physics Institute (Russian Federation)
2015-12-15
Monte Carlo simulation of the two-section reactor antineutrino detector of the Neutrino-4 experiment is carried out. The scintillation-type detector is based on the inverse beta-decay reaction. The antineutrino is recorded by two successive signals from the positron and the neutron. The simulation of the detector sections and the active shielding is performed. As a result of the simulation, the distributions of photomultiplier signals from the positron and the neutron are obtained. The efficiency of the detector depending on the signal recording thresholds is calculated.
Stabilization effect of fission source in coupled Monte Carlo simulations
Directory of Open Access Journals (Sweden)
Börge Olsen
2017-08-01
Full Text Available A fission source can act as a stabilization element in coupled Monte Carlo simulations. We have observed this while studying numerical instabilities in nonlinear steady-state simulations performed by a Monte Carlo criticality solver that is coupled to a xenon feedback solver via fixed-point iteration. While fixed-point iteration is known to be numerically unstable for some problems, resulting in large spatial oscillations of the neutron flux distribution, we show that it is possible to stabilize it by reducing the number of Monte Carlo criticality cycles simulated within each iteration step. While global convergence is ensured, development of any possible numerical instability is prevented by not allowing the fission source to converge fully within a single iteration step, which is achieved by setting a small number of criticality cycles per iteration step. Moreover, under these conditions, the fission source may converge even faster than in criticality calculations with no feedback, as we demonstrate in our numerical test simulations.
Stock Price Simulation Using Bootstrap and Monte Carlo
Directory of Open Access Journals (Sweden)
Pažický Martin
2017-06-01
Full Text Available In this paper, an attempt is made to assess and compare bootstrap and Monte Carlo experiments for stock price simulation. Since the future evolution of a stock price is extremely important for investors, we attempt to find the best method for determining the future stock price of the BNP Paribas bank. The aim of the paper is to define the value of European and Asian options on BNP Paribas stock at the maturity date. Four different methods are employed for the simulation: the first is a bootstrap experiment with a homoscedastic error term, the second a block bootstrap experiment with a heteroscedastic error term, the third a Monte Carlo simulation with a heteroscedastic error term, and the last a Monte Carlo simulation with a homoscedastic error term. In the last method it is necessary to model the volatility using an econometric GARCH model. The main purpose of the paper is to compare the mentioned methods and select the most reliable. A further aim of this paper is to examine the difference between the classical European option and the exotic Asian option on the basis of the experimental results.
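A common textbook stand-in for a constant-volatility Monte Carlo scheme is geometric Brownian motion; the sketch below prices both option types on simulated paths with purely illustrative parameters, not estimates for BNP Paribas:

```python
import math
import random

random.seed(11)

# Geometric Brownian motion with constant (homoscedastic) volatility; a
# GARCH variant would update sigma at every step instead.
s0, mu, sigma = 50.0, 0.05, 0.25     # illustrative parameters
steps, dt, n_paths = 252, 1.0 / 252, 5000
strike = 50.0

euro_payoffs, asian_payoffs = [], []
for _ in range(n_paths):
    s, path_sum = s0, 0.0
    for _ in range(steps):
        z = random.gauss(0.0, 1.0)
        s *= math.exp((mu - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * z)
        path_sum += s
    euro_payoffs.append(max(s - strike, 0.0))                  # terminal price
    asian_payoffs.append(max(path_sum / steps - strike, 0.0))  # path average

disc = math.exp(-mu)  # one-year discount factor
euro_value = disc * sum(euro_payoffs) / n_paths
asian_value = disc * sum(asian_payoffs) / n_paths
```

Averaging along the path damps the terminal-price dispersion, so the at-the-money Asian call comes out cheaper than the European call on the same simulated stock.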
Multi-pass Monte Carlo simulation method in nuclear transmutations.
Mateescu, Liviu; Kadambi, N Prasad; Ravindra, Nuggehalli M
2016-12-01
Monte Carlo methods, in their direct brute-force simulation incarnation, give realistic results if the involved probabilities, be they geometrical or otherwise, remain constant for the duration of the simulation. However, there are physical setups where the evolution of the simulation represents a modification of the simulated system itself. Chief among such evolving simulated systems are activation/transmutation setups. That is, the simulation starts with a given set of probabilities, determined by the geometry of the system, its components and the microscopic interaction cross-sections; however, the relative weights of the components of the system change along with the steps of the simulation. A natural measure would be to adjust the probabilities after every step of the simulation. On the other hand, the physical system typically has a number of components of the order of Avogadro's number, usually 10{sup 25} or 10{sup 26} members. A simulation step changes the characteristics of just a few of these members; a probability will therefore shift by a quantity of the order of 1/10{sup 25}. Such a change cannot be accounted for within a simulation, because the simulation would then need at least 10{sup 28} steps in order to have any significance. This is not feasible, of course. For our computing devices, a simulation of one million steps is comfortable, but a further order of magnitude becomes too big a stretch for the computing resources. We propose here a method of dealing with the changing probabilities that leads to increased precision. This method is intended as a fast approximating approach, and also as a simple introduction (for the benefit of students) to the highly branched subject of Monte Carlo simulations vis-à-vis nuclear reactors. Copyright © 2016 Elsevier Ltd. All rights reserved.
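The multi-pass idea, updating the interaction probabilities between batches of histories rather than after every history, can be sketched on a toy two-isotope system (all cross-sections and conversion factors illustrative):

```python
import random

random.seed(2)

# Toy two-isotope target: parent A transmutes to B under irradiation.
# Interaction probabilities depend on the current composition, so they are
# re-derived between passes rather than after every single history.
sigma_a, sigma_b = 5.0, 1.0   # relative capture cross-sections (assumed)
frac_a = 1.0                  # initial atom fraction of A
transmuted = 0.0

n_passes, histories_per_pass = 20, 10_000
per_history = 1e-5            # atom fraction converted per absorbing history

for _ in range(n_passes):
    # probability that a history is absorbed in A, for the *current* mixture
    p_a = sigma_a * frac_a / (sigma_a * frac_a + sigma_b * (1.0 - frac_a))
    absorbed_in_a = sum(1 for _ in range(histories_per_pass)
                        if random.random() < p_a)
    delta = absorbed_in_a * per_history
    frac_a -= delta            # composition update between passes
    transmuted += delta
```

Each pass runs with fixed probabilities, which keeps the simulation fast, while the between-pass updates track the depletion of the parent isotope.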
Energy Technology Data Exchange (ETDEWEB)
Morillon, B.
1996-12-31
With most of the traditional and contemporary techniques, it is still impossible to solve the transport equation if one takes into account a fully detailed geometry and studies precisely the interactions between particles and matter. Only the Monte Carlo method offers such a possibility. However, with significant attenuation, analog simulation remains inefficient: it becomes necessary to use biasing techniques, for which the solution of the adjoint transport equation is essential. The Monte Carlo code Tripoli has been using such techniques successfully for a long time with different approximate adjoint solutions; these methods require the user to find suitable parameters. If these parameters are not optimal or nearly optimal, the biased simulations may yield small figures of merit. This paper presents a description of the most important biasing techniques of the Monte Carlo code Tripoli; we then show how to calculate the importance function for general geometries in multigroup cases. We present a completely automatic biasing technique in which the parameters of the biased simulation are deduced from the solution of the adjoint transport equation calculated by collision probabilities. In this study we estimate the importance function through the collision probabilities method and evaluate its possibilities by means of a Monte Carlo calculation. We compare different biased simulations with the importance function calculated by collision probabilities for one-group and multigroup problems. We have run simulations with the new biasing method for one-group transport problems with isotropic shocks and for multigroup problems with anisotropic shocks. The results show that for one-group, homogeneous-geometry transport problems the method is nearly optimal without the splitting and Russian roulette techniques, but for multigroup, heterogeneous X-Y geometry problems the figures of merit are higher if splitting and Russian roulette are added.
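The flavor of such biasing can be shown on the simplest deep-penetration problem, transmission through a purely absorbing slab; the stretched sampling density and roulette threshold below are arbitrary choices for illustration, not Tripoli's automatically derived parameters:

```python
import math
import random

random.seed(9)

mu, depth = 1.0, 10.0                 # absorber: 10 mean free paths thick
analytic = math.exp(-mu * depth)      # transmission ~ 4.54e-5: rare in analog MC

def biased_transmission(n, mu_biased=0.1, w_min=1e-3):
    """Estimate deep-penetration transmission with importance sampling
    (stretched path lengths) plus Russian roulette on low-weight histories."""
    score = 0.0
    for _ in range(n):
        s = random.expovariate(mu_biased)          # biased free-flight length
        # likelihood ratio: true pdf / biased pdf at the sampled length
        w = (mu * math.exp(-mu * s)) / (mu_biased * math.exp(-mu_biased * s))
        if w < w_min:                              # Russian roulette
            if random.random() < w / w_min:
                w = w_min                          # survivor weight raised
            else:
                continue                           # history killed, unbiased
        if s > depth:
            score += w
    return score / n

est = biased_transmission(200_000)
```

An analog run of the same size would see only a handful of transmissions; the stretched sampling makes deep penetrations common and the weights keep the estimator unbiased.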
Application of Monte Carlo simulations to improve basketball shooting strategy
Min, Byeong June
2016-10-01
The underlying physics of basketball shooting seems to be a straightforward example of Newtonian mechanics that can easily be traced by using numerical methods. However, a human basketball player does not make use of all the possible basketball trajectories. Instead, a basketball player will build up a database of successful shots and select the trajectory that has the greatest tolerance to the small variations of the real world. We simulate the basketball player's shooting training as a Monte Carlo sequence to build optimal shooting strategies, such as the launch speed and angle of the basketball, and whether to take a direct shot or a bank shot, as a function of the player's court position and height. The phase-space volume Ω that belongs to the successful launch velocities generated by Monte Carlo simulations is then used as the criterion to optimize a shooting strategy that incorporates not only mechanical, but also human, factors.
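The sampling loop the abstract describes (draw launch parameters, test the trajectory, count the successes) can be sketched for a simplified two-dimensional projectile model. The hoop distance, release height, and tolerance below are illustrative assumptions, not values from the paper:

```python
import math
import random

def shot_succeeds(v, theta, d=4.6, h=0.85, tol=0.20):
    """Return True if a projectile launched at speed v (m/s) and angle
    theta (rad) passes within tol (m) of a target d metres away and
    h metres above the release point (illustrative hoop geometry)."""
    g = 9.81
    vx, vy = v * math.cos(theta), v * math.sin(theta)
    if vx <= 0:
        return False
    t = d / vx                      # time to reach the hoop plane
    y = vy * t - 0.5 * g * t * t    # height at that time
    return abs(y - h) < tol

def success_volume(n=100_000, seed=1):
    """Monte Carlo estimate of the fraction of (v, theta) launch space
    that yields a successful shot -- a proxy for the volume Omega."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        v = rng.uniform(5.0, 12.0)      # launch speed (m/s)
        theta = rng.uniform(0.1, 1.4)   # launch angle (rad)
        hits += shot_succeeds(v, theta)
    return hits / n

omega_fraction = success_volume(n=20_000)
```

Maximizing this fraction over court position, shot type, and player height is then an optimization over the same sampled quantity.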
Motor simulation via coupled internal models using sequential Monte Carlo
Dindo, H.; Zambuto, D.; Pezzulo, G.
2011-01-01
We describe a generative Bayesian model for action understanding in which inverse-forward internal model pairs are considered 'hypotheses' of plausible action goals that are explored in parallel via an approximate inference mechanism based on sequential Monte Carlo methods. The reenactment of internal model pairs can be considered a form of motor simulation, which supports both perceptual prediction and action understanding at the goal level. However, this procedure is generally considered to...
Cassandra: An open source Monte Carlo package for molecular simulation.
Shah, Jindal K; Marin-Rimoldi, Eliseo; Mullen, Ryan Gotchy; Keene, Brian P; Khan, Sandip; Paluch, Andrew S; Rai, Neeraj; Romanielo, Lucienne L; Rosch, Thomas W; Yoo, Brian; Maginn, Edward J
2017-07-15
Cassandra is an open source atomistic Monte Carlo software package that is effective in simulating the thermodynamic properties of fluids and solids. The different features and algorithms used in Cassandra are described, along with implementation details and the theoretical underpinnings of the various methods used. Benchmark and example calculations are shown, and information on how users can obtain the package and contribute to it is provided. © 2017 Wiley Periodicals, Inc.
Monte Carlo simulation of PET images for injection dose optimization
Czech Academy of Sciences Publication Activity Database
Boldyš, Jiří; Dvořák, Jiří; Skopalová, M.; Bělohlávek, O.
2013-01-01
Roč. 29, č. 9 (2013), s. 988-999 ISSN 2040-7939 R&D Projects: GA MŠk 1M0572 Institutional support: RVO:67985556 Keywords : positron emission tomography * Monte Carlo simulation * biological system modeling * image quality Subject RIV: FD - Oncology ; Hematology Impact factor: 1.542, year: 2013 http://library.utia.cas.cz/separaty/2013/ZOI/boldys-0397175.pdf
Monte Carlo reliability simulation of coal shearer machine
Hoseinie, Hadi; Khalokakaie, Reza; Ataei, Mohammad A.; Ghodrati, Behzad; Kumar, Uday
2013-01-01
In this paper, the Kamat-Riley (K-R) event-based Monte Carlo simulation method was used for reliability analysis of a longwall shearer machine. The shearer machine consists of six subsystems (water, haulage, electrical, hydraulic, cutting arms, and cable systems) in a series network configuration. A shearer in the Tabas coal mine was selected as a case study, and all of its failure data were collected and used for reliability analysis of the subsystems. Assuming negligible time to repair, a flowchart ...
Asteroid mass estimation using Markov-chain Monte Carlo
Siltala, Lauri; Granvik, Mikael
2017-11-01
Estimates for asteroid masses are based on their gravitational perturbations on the orbits of other objects such as Mars, spacecraft, or other asteroids and/or their satellites. In the case of asteroid-asteroid perturbations, this leads to an inverse problem in at least 13 dimensions where the aim is to derive the mass of the perturbing asteroid(s) and six orbital elements for both the perturbing asteroid(s) and the test asteroid(s) based on astrometric observations. We have developed and implemented three different mass estimation algorithms utilizing asteroid-asteroid perturbations: the very rough 'marching' approximation, in which the asteroids' orbital elements are not fitted, thereby reducing the problem to a one-dimensional estimation of the mass, an implementation of the Nelder-Mead simplex method, and most significantly, a Markov-chain Monte Carlo (MCMC) approach. We describe each of these algorithms with particular focus on the MCMC algorithm, and present example results using both synthetic and real data. Our results agree with the published mass estimates, but suggest that the published uncertainties may be misleading as a consequence of using linearized mass-estimation methods. Finally, we discuss remaining challenges with the algorithms as well as future plans.
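A minimal random-walk Metropolis sampler illustrates the MCMC step at the core of such a mass-estimation scheme. The one-dimensional Gaussian "posterior" below stands in for the real 13-dimensional inverse problem and is purely illustrative:

```python
import math
import random

def metropolis(log_post, x0, n_samples, step=0.5, seed=0):
    """Minimal 1-D random-walk Metropolis sampler."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_samples):
        xp = x + rng.gauss(0.0, step)       # propose a nearby state
        lpp = log_post(xp)
        if math.log(rng.random()) < lpp - lp:  # accept with prob min(1, ratio)
            x, lp = xp, lpp
        chain.append(x)
    return chain

# Hypothetical target: a "mass" whose posterior is Gaussian with
# mean 3.0 and standard deviation 0.5 (illustrative only).
log_post = lambda m: -0.5 * ((m - 3.0) / 0.5) ** 2
chain = metropolis(log_post, x0=0.0, n_samples=20_000)
burned = chain[5_000:]                      # discard burn-in
mean = sum(burned) / len(burned)
```

Unlike linearized estimators, the posterior sample itself carries the uncertainty, which is why the abstract's comparison with published (linearized) uncertainties is informative.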
Spatial distribution sampling and Monte Carlo simulation of radioactive isotopes
Krainer, Alexander Michael
2015-01-01
This work focuses on the implementation of a program for random sampling of uniformly spatially distributed isotopes for Monte Carlo particle simulations, specifically FLUKA. With FLUKA it is possible to calculate the radionuclide production in high-energy fields. The decay of these nuclides, and therefore the resulting radiation field, can however only be simulated in the same geometry. This work provides the tool to simulate the decay of the produced nuclides in other geometries, so that the radiation field from an irradiated object can be simulated in arbitrary environments. The sampling of isotope mixtures was tested by simulating a 50/50 mixture of $Cs^{137}$ and $Co^{60}$. These isotopes are both well known and therefore provide a first reliable benchmark. The sampling of uniformly distributed coordinates was tested using the histogram test for various spatial distributions. The advantages and disadvantages of the program compared to standard methods are demonstrated in the real life ca...
Radiation doses in cone-beam breast computed tomography: A Monte Carlo simulation study
Energy Technology Data Exchange (ETDEWEB)
Yi Ying; Lai, Chao-Jen; Han Tao; Zhong Yuncheng; Shen Youtao; Liu Xinming; Ge Shuaiping; You Zhicheng; Wang Tianpeng; Shaw, Chris C. [Department of Imaging Physics, University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States)
2011-02-15
Purpose: In this article, we describe a method to estimate the spatial dose variation, average dose and mean glandular dose (MGD) for a real breast using Monte Carlo simulation based on cone beam breast computed tomography (CBBCT) images. We present and discuss the dose estimation results for 19 mastectomy breast specimens, 4 homogeneous breast models, 6 ellipsoidal phantoms, and 6 cylindrical phantoms. Methods: To validate the Monte Carlo method for dose estimation in CBBCT, we compared the Monte Carlo dose estimates with the thermoluminescent dosimeter measurements at various radial positions in two polycarbonate cylinders (11- and 15-cm in diameter). Cone-beam computed tomography (CBCT) images of 19 mastectomy breast specimens, obtained with a bench-top experimental scanner, were segmented and used to construct 19 structured breast models. Monte Carlo simulation of CBBCT with these models was performed and used to estimate the point doses, average doses, and mean glandular doses for unit open air exposure at the iso-center. Mass based glandularity values were computed and used to investigate their effects on the average doses as well as the mean glandular doses. Average doses for 4 homogeneous breast models were estimated and compared to those of the corresponding structured breast models to investigate the effect of tissue structures. Average doses for ellipsoidal and cylindrical digital phantoms of identical diameter and height were also estimated for various glandularity values and compared with those for the structured breast models. Results: The absorbed dose maps for structured breast models show that doses in the glandular tissue were higher than those in the nearby adipose tissue. Estimated average doses for the homogeneous breast models were almost identical to those for the structured breast models (p=1). Normalized average doses estimated for the ellipsoidal phantoms were similar to those for the structured breast models (root mean square (rms
Monte Carlo simulation of charge mediated magnetoelectricity in multiferroic bilayers
Energy Technology Data Exchange (ETDEWEB)
Ortiz-Álvarez, H.H. [Universidad de Caldas, Manizales (Colombia); Universidad Nacional de Colombia Sede Manizales, Manizales, Caldas (Colombia); Bedoya-Hincapié, C.M. [Universidad Nacional de Colombia Sede Manizales, Manizales, Caldas (Colombia); Universidad Santo Tomás, Bogotá (Colombia); Restrepo-Parra, E., E-mail: erestrepopa@unal.edu.co [Universidad Nacional de Colombia Sede Manizales, Manizales, Caldas (Colombia)
2014-12-01
Simulations of a bilayer ferroelectric/ferromagnetic multiferroic system were carried out, based on the Monte Carlo method and Metropolis dynamics. A generic model was implemented with a Janssen-like Hamiltonian, taking into account magnetoelectric interactions due to charge accumulation at the interface. Two different magnetic exchange constants were considered for the accumulation and depletion states, and several screening lengths were included. The simulations exhibit considerable magnetoelectric effects not only at low temperature, but also at temperatures near the transition point of the ferromagnetic layer. The results match experimental observations for this kind of structure and mechanism.
Geant4 based Monte Carlo simulation for verifying the modified sum-peak method.
Aso, Tsukasa; Ogata, Yoshimune; Makino, Ryuta
2017-09-14
The modified sum-peak method can practically estimate radioactivity using solely the peak and sum-peak count rates. In order to efficiently verify the method under various experimental conditions, a Geant4 based Monte Carlo simulation of a high-purity germanium detector system was applied. The energy spectra in the detector were simulated for a 60Co point source at various source-to-detector distances. The calculated radioactivity shows good agreement with the number of decays in the simulation. Copyright © 2017 Elsevier Ltd. All rights reserved.
Monte Carlo Simulation of Extreme Traffic Loading on Short and Medium Span Bridges
ENRIGHT, Bernard; O'Brien, Eugene J.
2012-01-01
The accurate estimation of site-specific lifetime extreme traffic load effects is an important element in the cost-effective assessment of bridges. A common approach is to use statistical distributions derived from weigh-in-motion measurements as the basis for Monte Carlo simulation of traffic loading. However, results are highly sensitive to the assumptions made, not just with regard to vehicle weights but also to axle configurations and gaps between vehicles. This paper presents a comprehen...
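The block-maximum idea behind lifetime-extreme estimation can be sketched as follows. The Gaussian daily load-effect model and all parameter values are illustrative stand-ins for distributions actually fitted to weigh-in-motion data (which would also cover axle configurations and inter-vehicle gaps):

```python
import random

def lifetime_max_load(daily_mean, daily_sd, days, n_sims, seed=3):
    """Crude Monte Carlo of a lifetime extreme load effect: the maximum
    of `days` daily load effects, repeated n_sims times. The Gaussian
    daily model is an illustrative assumption, not a fitted WIM model."""
    rng = random.Random(seed)
    maxima = []
    for _ in range(n_sims):
        maxima.append(max(rng.gauss(daily_mean, daily_sd)
                          for _ in range(days)))
    return maxima

# One "year" of daily maxima per simulated lifetime, 200 lifetimes.
maxima = lifetime_max_load(daily_mean=100.0, daily_sd=10.0,
                           days=365, n_sims=200)
typical_extreme = sum(maxima) / len(maxima)
```

The sensitivity the abstract warns about shows up here directly: changing the assumed daily distribution shifts the whole distribution of simulated extremes.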
PHOTOS Monte Carlo for precision simulation of QED in decays
Was, Z; Nanava, G
2007-01-01
Because of the properties of QED, the bremsstrahlung corrections to decays of particles or resonances can be calculated, with good precision, separately from other effects. Thanks to the widespread use of event records, such calculations can be embodied in a separate module of Monte Carlo simulation chains, as used in the high-energy experiments of today. The PHOTOS Monte Carlo program has been used for this purpose for nearly 20 years. In this talk we review the main ideas and constraints which shaped the present version of the program and enabled its widespread use. We concentrate especially on the conflicting requirements originating from the properties of QED matrix elements on one side and the evolving standards of event records on the other. These issues, quite common in other modular software applications, become more and more difficult to handle as precision requirements rise.
Fast Monte Carlo-assisted simulation of cloudy Earth backgrounds
Adler-Golden, Steven; Richtsmeier, Steven C.; Berk, Alexander; Duff, James W.
2012-11-01
A calculation method has been developed for rapidly synthesizing radiometrically accurate ultraviolet through long-wavelength infrared spectral imagery of the Earth for arbitrary locations and cloud fields. The method combines cloud-free surface reflectance imagery with cloud radiance images calculated from a first-principles 3-D radiation transport model. The MCScene Monte Carlo code [1-4] is used to build a cloud image library; a data fusion method is incorporated to speed convergence. The surface and cloud images are combined with an upper atmospheric description with the aid of solar and thermal radiation transport equations that account for atmospheric inhomogeneity. The method enables a wide variety of sensor and sun locations, cloud fields, and surfaces to be combined on-the-fly, and provides hyperspectral wavelength resolution with minimal computational effort. The simulations agree very well with much more time-consuming direct Monte Carlo calculations of the same scene.
Application to radiation damage simulation calculation of Monte Carlo method
Energy Technology Data Exchange (ETDEWEB)
Aruga, Takeo [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
2001-01-01
Recent progress in Monte Carlo calculations for radiation damage simulation of structural materials to be used in fast breeder reactors or thermonuclear fusion reactors under energetic neutron or charged-particle bombardment is reviewed. Specifically, the usefulness of employing Monte Carlo methods in molecular dynamics calculations is stressed, as a way to understand changes in mechanical properties such as dimensional change, strength, creep, fatigue, corrosion, and crack growth of materials under irradiation on the basis of atomic collision processes. The structure and spatial distribution of point defects in iron, gold, or copper, as demonstrative examples, are calculated as a function of primary knock-on atom (PKA) energy at several hundred picoseconds after the PKA is set in motion. The results are compared with those obtained by the method developed by Norgett, Robinson and Torrens, and the usefulness is discussed. (S. Ohno)
Accelerated Monte Carlo simulations with restricted Boltzmann machines
Huang, Li; Wang, Lei
2017-01-01
Despite their exceptional flexibility and popularity, Monte Carlo methods often suffer from slow mixing times for challenging statistical physics problems. We present a general strategy to overcome this difficulty by adopting ideas and techniques from the machine learning community. We fit the unnormalized probability of the physical model to a feed-forward neural network and reinterpret the architecture as a restricted Boltzmann machine. Then, exploiting its feature detection ability, we utilize the restricted Boltzmann machine to propose efficient Monte Carlo updates to speed up the simulation of the original physical system. We implement these ideas for the Falicov-Kimball model and demonstrate an improved acceptance ratio and autocorrelation time near the phase transition point.
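The proposal-correction mechanism that keeps such learned-proposal schemes exact can be illustrated with a toy Metropolis-Hastings independence sampler: a deliberately imperfect surrogate q proposes global moves, and the acceptance ratio corrects for its mismatch with the exact target p. The Gaussians below are illustrative stand-ins for the physical model and the trained restricted Boltzmann machine:

```python
import math
import random

def mh_with_surrogate(log_p, log_q, sample_q, n, seed=0):
    """Metropolis-Hastings with independence proposals drawn from an
    approximate (surrogate) model q. The acceptance ratio corrects for
    the mismatch between p and q, so the chain still samples p exactly."""
    rng = random.Random(seed)
    x = sample_q(rng)
    chain = []
    for _ in range(n):
        xp = sample_q(rng)  # global move proposed by the surrogate
        log_ratio = (log_p(xp) - log_p(x)) + (log_q(x) - log_q(xp))
        if math.log(rng.random()) < log_ratio:
            x = xp
        chain.append(x)
    return chain

# Target p: standard normal. Surrogate q: a slightly wrong normal
# (mean 0.3, sd 1.2), standing in for a trained RBM (illustrative).
log_p = lambda x: -0.5 * x * x
log_q = lambda x: -0.5 * ((x - 0.3) / 1.2) ** 2
sample_q = lambda rng: rng.gauss(0.3, 1.2)
chain = mh_with_surrogate(log_p, log_q, sample_q, n=30_000)
mean = sum(chain) / len(chain)
```

The better the surrogate matches the target, the higher the acceptance rate and the shorter the autocorrelation time, which is the speed-up the paper reports near the phase transition.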
The MCLIB library: Monte Carlo simulation of neutron scattering instruments
Energy Technology Data Exchange (ETDEWEB)
Seeger, P.A.
1995-09-01
Monte Carlo is a method to integrate over a large number of variables. Random numbers are used to select a value for each variable, and the integrand is evaluated. The process is repeated a large number of times and the resulting values are averaged. For a neutron transport problem, a neutron is first selected from the source distribution and projected through the instrument, using either deterministic or probabilistic algorithms to describe its interactions whenever it hits something; if it reaches the detector, it is tallied in a histogram representing where and when it was detected. This is intended to simulate the process of running an actual experiment (but it is much slower). This report describes the philosophy and structure of MCLIB, a Fortran library of Monte Carlo subroutines which has been developed for the design of neutron scattering instruments. A pair of programs (LQDGEOM and MC_RUN) which use the library are shown as an example.
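The integration recipe in the abstract's first sentences (sample each variable, evaluate the integrand, average) can be written down directly. The two-variable integrand below is an arbitrary illustration, unrelated to MCLIB itself:

```python
import math
import random

def mc_integrate(f, bounds, n=100_000, seed=42):
    """Monte Carlo integration: draw each variable uniformly within its
    bounds, evaluate the integrand, and average (times the volume)."""
    rng = random.Random(seed)
    volume = 1.0
    for lo, hi in bounds:
        volume *= (hi - lo)
    total = 0.0
    for _ in range(n):
        point = [rng.uniform(lo, hi) for lo, hi in bounds]
        total += f(*point)
    return volume * total / n

# Example: integrate sin(x)*y over [0, pi] x [0, 1]; exact value is 1.0.
estimate = mc_integrate(lambda x, y: math.sin(x) * y,
                        [(0.0, math.pi), (0.0, 1.0)])
```

The statistical error shrinks as 1/sqrt(n) regardless of the number of variables, which is why the method scales to the many degrees of freedom of an instrument simulation.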
Monte Carlo Molecular Simulation with Isobaric-Isothermal and Gibbs-NPT Ensembles
Du, Shouhong
2012-05-01
This thesis presents Monte Carlo methods for simulating the phase behavior of Lennard-Jones fluids. The isobaric-isothermal (NPT) ensemble and the Gibbs-NPT ensemble are introduced in detail. The NPT ensemble is employed to determine the phase diagram of a pure component. The reduced simulation results are verified by comparison with the equation of state of Johnson et al., and results with L-J parameters for methane agree well with experimental measurements. We adopt the blocking method for variance estimation and error analysis of the simulation results. The relationship between variance and the number of Monte Carlo cycles, error propagation, and random number generator performance are also investigated. We review the Gibbs-NPT ensemble employed for the phase equilibrium of a binary mixture. Phase equilibrium is achieved by performing three types of trial move: particle displacement, volume rearrangement, and particle transfer. The simulation models and details are introduced. The simulation results of phase coexistence for methane and ethane are reported with comparison to experimental data; good agreement is found over a wide range of pressures. The contribution of this thesis lies in the study of error analysis with respect to the number of Monte Carlo cycles and the number of particles.
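The blocking method mentioned for variance estimation groups a correlated series into blocks and takes the standard error of the block means. A sketch on a synthetic correlated series (an AR(1) process standing in for a Monte Carlo energy trace, not actual simulation output):

```python
import random
import statistics

def blocking_error(data, block_size):
    """Estimate the standard error of the mean of a correlated series by
    averaging over non-overlapping blocks (Flyvbjerg-Petersen blocking).
    Correlations shorter than the block length no longer bias the result."""
    n_blocks = len(data) // block_size
    blocks = [
        sum(data[i * block_size:(i + 1) * block_size]) / block_size
        for i in range(n_blocks)
    ]
    return statistics.stdev(blocks) / n_blocks ** 0.5

# Correlated toy series: AR(1) with coefficient 0.9 (illustrative).
rng = random.Random(7)
x, series = 0.0, []
for _ in range(50_000):
    x = 0.9 * x + rng.gauss(0.0, 1.0)
    series.append(x)

naive = statistics.stdev(series) / len(series) ** 0.5  # ignores correlation
blocked = blocking_error(series, block_size=500)
```

The naive estimate treats the samples as independent and so understates the error; blocking recovers a realistic error bar, which is the point of using it for Monte Carlo cycle data.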
Monte Carlo Simulation of Emission Tomography and other Medical Imaging Techniques
Harrison, Robert L.
2010-01-01
As an introduction to Monte Carlo simulation of emission tomography, this paper reviews the history and principles of Monte Carlo simulation, then applies these principles to emission tomography using the public domain simulation package SimSET (a Simulation System for Emission Tomography) as an example. Finally, the paper discusses how the methods are modified for X-ray computed tomography and radiotherapy simulations.
Predicting the orientation of protein G B1 on hydrophobic surfaces using Monte Carlo simulations.
Harrison, Elisa T; Weidner, Tobias; Castner, David G; Interlandi, Gianluca
2016-12-06
A Monte Carlo algorithm was developed to predict the most likely orientations of protein G B1, an immunoglobulin G (IgG) antibody-binding domain of protein G, adsorbed onto a hydrophobic surface. At each Monte Carlo step, the protein was rotated and translated as a rigid body. The assumption about rigidity was supported by quartz crystal microbalance with dissipation monitoring experiments, which indicated that protein G B1 adsorbed on a polystyrene surface with its native structure conserved and showed that its IgG antibody-binding activity was retained. The Monte Carlo simulations predicted that protein G B1 is likely adsorbed onto a hydrophobic surface in two different orientations, characterized as two mutually exclusive sets of amino acids contacting the surface. This was consistent with sum frequency generation (SFG) vibrational spectroscopy results. In fact, theoretical SFG spectra calculated from an equal combination of the two predicted orientations exhibited reasonable agreement with measured spectra of protein G B1 on polystyrene surfaces. Also, in explicit solvent molecular dynamics simulations, protein G B1 maintained its predicted orientation in three out of four runs. This work shows that using a Monte Carlo approach can provide an accurate estimate of a protein orientation on a hydrophobic surface, which complements experimental surface analysis techniques and provides an initial system to study the interaction between a protein and a surface in molecular dynamics simulations.
Monte-Carlo simulations in a gas centrifuge; Simulations Monte-Carlo dans une centrifugeuse a gaz
Energy Technology Data Exchange (ETDEWEB)
Roblin, Ph.; Doneddu, F. [CEA Saclay, Dept. des Procedes d' Enrichissement (DCC/DPE/SPCP), 91 - Gif-sur-Yvette (France)
2000-07-01
This paper is associated with the centrifugation process for isotope separation, using the principle of a cylinder rotating at high speed in a vacuum casing. As in the most widely used configuration, the gas containing the isotope mixture is introduced by a fixed axial feed pipe and expands in the cylinder. It is subjected to high centrifugal acceleration, undergoes rigid body rotation and stratifies radially according to a barometric-type pressure law. By pressure diffusion, the heavier isotopes migrate to the cylinder wall and the lighter to the center. A temperature gradient on the wall and the presence of a scoop in the fluid produce a vertical countercurrent which transforms the radial separation effect into an axial effect. One scoop extracts the gas depleted in light isotopes, called W, and another is used to recover the gas enriched in light isotopes, called P. Practically all the gas is governed by the Navier-Stokes equations in 2D axial symmetry. Due to the strong pressure stratification, continuum fluid equations are not valid in the whole cylinder, with or without linearization of the model. Consequently, an internal boundary separates the continuum domain from a rarefied domain in which the feed gas expands. The radial position of this cut-off approaches the cylinder wall with increasing rotation speed. In the rarefied domain, the Boltzmann equation is solved, and a well-suited numerical method is the Monte-Carlo method. A complete simulation of feed gas expansion and interaction with the rotating gas, presented here with the DSMC (Direct Simulation Monte-Carlo) code, provides realistic boundary conditions for fluid flow calculations. The reference centrifuge is a hypothetical machine enabling the scientific community to compare results obtained for the optimization of separation performance. Its radius a is 6 cm, and its peripheral speed is 600 m/s. The selected gas, containing the isotopes, is UF{sub 6}. The gas pressure p(a) at the cylinder
DEFF Research Database (Denmark)
Blasone, Roberta-Serena; Vrugt, Jasper A.; Madsen, Henrik
2008-01-01
In the last few decades hydrologists have made tremendous progress in using dynamic simulation models for the analysis and understanding of hydrologic systems. However, predictions with these models are often deterministic and as such they focus on the most probable forecast, without an explicit... within the context of Monte Carlo (MC) analysis coupled with Bayesian estimation and propagation of uncertainty. Because of its flexibility, ease of implementation and its suitability for parallel implementation on distributed computer systems, the GLUE method has been used in a wide variety... that require significant computational time to run and produce the desired output. In this paper we improve the computational efficiency of GLUE by sampling the prior parameter space using an adaptive Markov Chain Monte Carlo scheme (the Shuffled Complex Evolution Metropolis (SCEM-UA) algorithm). Moreover, we...
GATE Monte Carlo simulation in a cloud computing environment
Rowedder, Blake Austin
The GEANT4-based GATE is a unique and powerful Monte Carlo (MC) platform, which provides a single code library allowing the simulation of specific medical physics applications, e.g. PET, SPECT, CT, radiotherapy, and hadron therapy. However, this rigorous yet flexible platform is used only sparingly in the clinic due to its lengthy calculation time. By accessing the powerful computational resources of a cloud computing environment, GATE's runtime can be significantly reduced to clinically feasible levels without the sizable investment in a local high-performance cluster. This study investigated reliable and efficient execution of GATE MC simulations using a commercial cloud computing service. Amazon's Elastic Compute Cloud was used to launch several nodes equipped with GATE. Job data was initially broken up on the local computer, then uploaded to the worker nodes in the cloud. The results were automatically downloaded and aggregated on the local computer for display and analysis. Five simulations were repeated for every cluster size between 1 and 20 nodes. Ultimately, increasing cluster size resulted in a decrease in calculation time that could be expressed with an inverse power model. Comparing the benchmark results to the published values and error margins indicated that the simulation results were not affected by the cluster size, and thus that the integrity of a calculation is preserved in a cloud computing environment. The runtime of a 53-minute simulation was decreased to 3.11 minutes when run on a 20-node cluster. This ability to improve the speed of simulation suggests that fast MC simulations are viable for imaging and radiotherapy applications. With high-performance computing continuing to fall in price and grow in accessibility, implementing Monte Carlo techniques with cloud computing for clinical applications will continue to become more attractive.
Optimal Run Strategies in Monte Carlo Iterated Fission Source Simulations
Energy Technology Data Exchange (ETDEWEB)
Romano, Paul K. [Argonne National Laboratory, Mathematics and Computer Science Division, 9700 South Cass Avenue, Lemont, Illinois 60439; Lund, Amanda L. [Argonne National Laboratory, Mathematics and Computer Science Division, 9700 South Cass Avenue, Lemont, Illinois 60439; Siegel, Andrew R. [Argonne National Laboratory, Mathematics and Computer Science Division, 9700 South Cass Avenue, Lemont, Illinois 60439
2017-06-19
The method of successive generations used in Monte Carlo simulations of nuclear reactor models is known to suffer from intergenerational correlation between the spatial locations of fission sites. One consequence of the spatial correlation is that the convergence rate of the variance of the mean for a tally becomes worse than O(1/N). In this work, we consider how the true variance can be minimized given a total amount of work available as a function of the number of source particles per generation, the number of active/discarded generations, and the number of independent simulations. We demonstrate through both analysis and simulation that under certain conditions the solution time for highly correlated reactor problems may be significantly reduced either by running an ensemble of multiple independent simulations or simply by increasing the generation size to the extent that it is practical. However, if too many simulations or too large a generation size is used, the large fraction of source particles discarded can result in an increase in variance. We also show that there is a strong incentive to reduce the number of generations discarded through some source convergence acceleration technique. Furthermore, we discuss the efficient execution of large simulations on a parallel computer; we argue that several practical considerations favor using an ensemble of independent simulations over a single simulation with very large generation size.
Monte Carlo simulation of a clearance box monitor used for nuclear power plant decommissioning.
Bochud, François O; Laedermann, Jean-Pascal; Bailat, Claude J; Schuler, Christoph
2009-05-01
When decommissioning a nuclear facility it is important to be able to estimate activity levels of potentially radioactive samples and compare with clearance values defined by regulatory authorities. This paper presents a method of calibrating a clearance box monitor based on practical experimental measurements and Monte Carlo simulations. Adjusting the simulation for experimental data obtained using a simple point source permits the computation of absolute calibration factors for more complex geometries with an accuracy of a bit more than 20%. The uncertainty of the calibration factor can be improved to about 10% when the simulation is used relatively, in direct comparison with a measurement performed in the same geometry but with another nuclide. The simulation can also be used to validate the experimental calibration procedure when the sample is supposed to be homogeneous but the calibration factor is derived from a plate phantom. For more realistic geometries, like a small gravel dumpster, Monte Carlo simulation shows that the calibration factor obtained with a larger homogeneous phantom is correct within about 20%, if sample density is taken as the influencing parameter. Finally, simulation can be used to estimate the effect of a contamination hotspot. The research supporting this paper shows that activity could be largely underestimated in the event of a centrally-located hotspot and overestimated for a peripherally-located hotspot if the sample is assumed to be homogeneously contaminated. This demonstrates the usefulness of being able to complement experimental methods with Monte Carlo simulations in order to estimate calibration factors that cannot be directly measured because of a lack of available material or specific geometries.
Monte Carlo Simulation for Statistical Decay of Compound Nucleus
Directory of Open Access Journals (Sweden)
Chadwick M.B.
2012-02-01
We perform Monte Carlo simulations for neutron and γ-ray emissions from a compound nucleus based on the Hauser-Feshbach statistical theory. This Monte Carlo Hauser-Feshbach (MCHF) method gives us correlated information between emitted particles and γ-rays, and will be a powerful tool in many applications, as nuclear reactions can be probed in a more microscopic way. We have been developing the MCHF code CGM, which solves the Hauser-Feshbach theory with the Monte Carlo method. The code includes all the standard models used in a standard Hauser-Feshbach code, namely the particle transmission generator, the level density module, the interface to the discrete level database, and so on. CGM can emit multiple neutrons, as long as the excitation energy of the compound nucleus is larger than the neutron separation energy. The γ-ray competition is always included at each compound decay stage, and angular momentum and parity are conserved. Some calculations for the fission fragment 140Xe are shown as examples of the MCHF method, and the correlation between neutrons and γ-rays is discussed.
Monte Carlo Simulation Tool Installation and Operation Guide
Energy Technology Data Exchange (ETDEWEB)
Aguayo Navarrete, Estanislao; Ankney, Austin S.; Berguson, Timothy J.; Kouzes, Richard T.; Orrell, John L.; Troy, Meredith D.; Wiseman, Clinton G.
2013-09-02
This document provides information on software and procedures for Monte Carlo simulations based on the Geant4 toolkit, the ROOT data analysis software and the CRY cosmic ray library. These tools have been chosen for their application to shield design and activation studies as part of the simulation task for the Majorana Collaboration. This document includes instructions for installation, operation and modification of the simulation code in a high cyber-security computing environment, such as the Pacific Northwest National Laboratory network. It is intended as a living document and will be periodically updated. It is a starting point for information collection by an experimenter, and is not the definitive source. Users should consult with one of the authors for guidance on how to find the most current information for their needs.
Monte Carlo simulations for design of the KFUPM PGNAA facility
Naqvi, A A; Maslehuddin, M; Kidwai, S
2003-01-01
Monte Carlo simulations were carried out to design a 2.8 MeV neutron-based prompt gamma ray neutron activation analysis (PGNAA) setup for elemental analysis of cement samples. The elemental analysis was carried out using prompt gamma rays produced through capture of thermal neutrons in sample nuclei. The basic design of the PGNAA setup consists of a cylindrical cement sample enclosed in a cylindrical high-density polyethylene moderator placed between a neutron source and a gamma ray detector. In these simulations the predominant geometrical parameters of the PGNAA setup were optimized, including moderator size, sample size and shielding of the detector. Using the results of the simulations, an experimental PGNAA setup was then fabricated at the 350 kV Accelerator Laboratory of this University. The design calculations were checked experimentally through thermal neutron flux measurements inside the PGNAA moderator. A test prompt gamma ray spectrum of the PGNAA setup was also acquired from a Portland cement samp...
Monte Carlo simulations and dosimetric studies of an irradiation facility
Belchior, A.; Botelho, M. L.; Vaz, P.
2007-09-01
Ionizing radiation is increasingly utilized for industrial applications. Additionally, radiation technology offers a variety of advantages in areas such as sterilization and food preservation. For these applications, dosimetric tests are of crucial importance in order to assess the dose distribution throughout the sample being irradiated. The use of Monte Carlo methods and computational tools in support of the assessment of dose distributions in irradiation facilities can prove to be economically effective, representing savings in the utilization of dosemeters, among other benefits. One of the purposes of this study is the development of a Monte Carlo simulation, using a state-of-the-art computational tool, MCNPX, in order to determine the dose distribution inside a cobalt-60 irradiation facility. This irradiation facility is currently in operation at the ITN campus and will feature an automation and robotics component, which will allow its remote utilization by an external user under the REEQ/996/BIO/2005 project. The detailed geometrical description of the irradiation facility has been implemented in MCNPX, which features an accurate and full simulation of the electron-photon processes involved. The simulation results were validated by chemical dosimetry methods, namely a Fricke solution. The Fricke dosimeter is a standard dosimeter and is widely used in radiation processing for calibration purposes.
Proceedings of the first symposium on Monte Carlo simulation
Energy Technology Data Exchange (ETDEWEB)
NONE
2001-01-01
The first symposium on Monte Carlo simulation was held at the Mitsubishi Research Institute, Otemachi, Tokyo, on the 10th and 11th of September, 1998. The symposium was organized by the Nuclear Code Research Committee at the Japan Atomic Energy Research Institute. In the sessions, 21 papers were presented orally on code development, parallel calculation, reactor physics, burn-up, criticality, shielding safety, dose evaluation, nuclear fusion reactors, thermonuclear fusion plasma, nuclear transmutation, electromagnetic cascades, and fuel cycle facilities. The presented papers are compiled in these proceedings, and all 21 are indexed individually. (J.P.N.)
Monte Carlo simulation of particle-induced bit upsets
Directory of Open Access Journals (Sweden)
Wrobel Frédéric
2017-01-01
Full Text Available We investigate the issue of radiation-induced failures in electronic devices by developing a Monte Carlo tool called MC-Oracle. It is able to transport particles in a device, to calculate the energy deposited in the sensitive region of the device, and to calculate the transient current induced by the primary particle and the secondary particles produced during nuclear reactions. We compare our simulation results with experiments on SRAMs irradiated with neutrons, protons, and ions. The agreement is very good and shows that it is possible to predict the soft error rate (SER) for a given device in a given environment.
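The rate-prediction idea in the abstract above can be reduced to a toy Monte Carlo sketch: sample a charge deposit per incident particle and count deposits exceeding a critical charge. The exponential deposit distribution, the 5 fC mean, and the 15 fC critical charge are illustrative assumptions, not MC-Oracle's nuclear-reaction transport physics.

```python
import random

def soft_error_rate(n_events, q_crit, mean_deposit=5.0, seed=0):
    """Toy Monte Carlo: each incident particle deposits an exponentially
    distributed charge (fC) in the sensitive volume; a deposit above the
    critical charge q_crit upsets the bit.  Distribution and numbers are
    illustrative stand-ins for a real transport calculation."""
    rng = random.Random(seed)
    upsets = sum(1 for _ in range(n_events)
                 if rng.expovariate(1.0 / mean_deposit) > q_crit)
    return upsets / n_events

# With an exponential deposit, the exact upset probability is exp(-q_crit/mean).
rate = soft_error_rate(100000, q_crit=15.0)
```

Multiplying such a per-particle upset probability by the particle flux of a given environment yields an SER estimate in upsets per unit time.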
A fitter use of Monte Carlo simulations in regression models
Directory of Open Access Journals (Sweden)
Alessandro Ferrarini
2011-12-01
Full Text Available In this article, I focus on the use of Monte Carlo simulations (MCS) within regression models, an application that is very frequent in biology, ecology, and economics. I am interested in highlighting a typical fault in this application of MCS: the inner correlations among independent variables are not taken into account when generating random numbers that fit their distributions. By means of an illustrative example, I provide proof that this misuse of MCS in regression models produces misleading results. Furthermore, I also provide a solution to this problem.
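The fault described above can be made concrete with a minimal sketch: generating two predictors independently versus with their correlation preserved, via the bivariate Cholesky construction. The correlation value and sample sizes are arbitrary illustrations, not taken from the paper.

```python
import random, math

def sample_predictors(n, rho, seed=0):
    """Draw n pairs of standard normal predictors with correlation rho,
    via the Cholesky construction x2 = rho*x1 + sqrt(1 - rho^2)*e."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        x1 = rng.gauss(0.0, 1.0)
        x2 = rho * x1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        pairs.append((x1, x2))
    return pairs

def empirical_corr(pairs):
    """Pearson correlation of a list of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    sx = math.sqrt(sum((x - mx) ** 2 for x, _ in pairs))
    sy = math.sqrt(sum((y - my) ** 2 for _, y in pairs))
    return sxy / (sx * sy)

# Sampling each marginal independently (rho=0) leaves the marginals intact
# but destroys the joint structure the regression model was fitted on.
correlated = sample_predictors(20000, rho=0.8)
independent = sample_predictors(20000, rho=0.0, seed=1)
```

Feeding the `independent` draws through a fitted regression would propagate an input distribution the model never saw, which is exactly the misuse the article warns against.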
New electron multiple scattering distributions for Monte Carlo transport simulation
Energy Technology Data Exchange (ETDEWEB)
Chibani, Omar (Haut Commissariat a la Recherche (C.R.S.), 2 Boulevard Franz Fanon, Alger B.P. 1017, Alger-Gare (Algeria)); Patau, Jean Paul (Laboratoire de Biophysique et Biomathematiques, Faculte des Sciences Pharmaceutiques, Universite Paul Sabatier, 35 Chemin des Maraichers, 31062 Toulouse cedex (France))
1994-10-01
New forms of electron (positron) multiple scattering distributions are proposed. The first is intended for use within the conditions of validity of the Moliere theory. The second applies when the electron path is so short that only a few elastic collisions occur. These distributions are adjustable formulas; the introduction of some parameters allows the correct value of the first moment to be imposed. Only positive and analytic functions were used in constructing the present expressions, which makes sampling procedures easier. Systematic tests are presented, and some Monte Carlo simulations are carried out as benchmarks. ((orig.))
Monte Carlo simulation of AB-copolymers with saturating bonds
DEFF Research Database (Denmark)
Chertovich, A.C.; Ivanov, V.A.; Khokhlov, A.R.
2003-01-01
Structural transitions in a single AB-copolymer chain where saturating bonds can be formed between A- and B-units are studied by means of Monte Carlo computer simulations using the bond fluctuation model. Three transitions are found, coil-globule, coil-hairpin and globule-hairpin, depending on the nature of a particular AB-sequence: statistical random sequence, diblock sequence and 'random-complementary' sequence (one-half of such an AB-sequence is random with Bernoulli statistics while the other half is complementary to the first one). The properties of random-complementary sequences are closer...
Analysis of fatigue fractographic data of a rod end housing using a Monte Carlo simulation
Shimokawa, Toshiyuki; Kakuta, Yoshiaki
1994-02-01
This paper presents a new method using a Monte Carlo simulation to estimate the life distribution of fatigue crack propagation on the basis of crack length versus striation spacing data. The simulation is based on the distributions of the two parameter estimates of a regression line and the correlation between those estimates. One cycle of the Monte Carlo scheme generates a set of parameter estimates that yields one fatigue crack propagation life. The analyzed data were obtained by scanning electron microscope (SEM) observation of a fatigue fracture surface of the rod end housing of a hydraulic actuator, which was used for a main landing gear in a transport aircraft. A conventional regression analysis provides a set of two deterministic parameter estimates, a life estimate of fatigue crack propagation, and the statistical properties of striation spacing. Stochastic-process models of crack growth and practical probabilistic methods, including the proposed method, are used to estimate the life distributions of fatigue crack propagation on the basis of the results of the regression analysis. The obtained results are discussed and compared. The proposed method approximates the fatigue life of the rod end housing as the B-allowable life when the initial crack length is assumed to be 0 mm.
Directory of Open Access Journals (Sweden)
Eric Dumonteil
2017-09-01
Full Text Available The Monte Carlo criticality simulation of decoupled systems, as for instance in large reactor cores, has been a challenging issue for a long time. In particular, due to limited computer time resources, the number of neutrons simulated per generation is still many orders of magnitude below realistic statistics, even during the start-up phases of reactors. This limited number of neutrons triggers a strong clustering effect of the neutron population that affects Monte Carlo tallies. Below a certain threshold, not only is the variance affected but also the estimation of the eigenvectors. In this paper we build a time-dependent diffusion equation that takes into account both spatial correlations and population control (a fixed number of neutrons along generations). We show that its solution obeys a traveling-wave dynamic, and we discuss the mechanism that explains this biasing of local tallies whenever leakage boundary conditions are applied to the system.
Asteroid mass estimation with Markov-chain Monte Carlo
Siltala, Lauri; Granvik, Mikael
2017-10-01
Estimates for asteroid masses are based on their gravitational perturbations on the orbits of other objects such as Mars, spacecraft, or other asteroids and/or their satellites. In the case of asteroid-asteroid perturbations, this leads to a 13-dimensional inverse problem at minimum where the aim is to derive the mass of the perturbing asteroid and six orbital elements for both the perturbing asteroid and the test asteroid by fitting their trajectories to their observed positions. The fitting has typically been carried out with linearized methods such as the least-squares method. These methods need to make certain assumptions regarding the shape of the probability distributions of the model parameters. This is problematic as these assumptions have not been validated. We have developed a new Markov-chain Monte Carlo method for mass estimation which does not require an assumption regarding the shape of the parameter distribution. Recently, we have implemented several upgrades to our MCMC method including improved schemes for handling observational errors and outlier data alongside the option to consider multiple perturbers and/or test asteroids simultaneously. These upgrades promise significantly improved results: based on two separate results for (19) Fortuna with different test asteroids we previously hypothesized that simultaneous use of both test asteroids would lead to an improved result similar to the average literature value for (19) Fortuna with substantially reduced uncertainties. Our upgraded algorithm indeed finds a result essentially equal to the literature value for this asteroid, confirming our previous hypothesis. Here we show these new results for (19) Fortuna and other example cases, and compare our results to previous estimates. Finally, we discuss our plans to improve our algorithm further, particularly in connection with Gaia.
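The full 13-dimensional orbit-fitting problem is beyond a sketch, but the core idea of MCMC mass estimation, sampling a posterior without assuming its shape, is captured by a one-parameter random-walk Metropolis sampler on a toy Gaussian model. The data, flat positive prior, and step size below are illustrative assumptions, not the authors' implementation.

```python
import random, math

def log_post(m, data, sigma=1.0):
    """Toy log-posterior: Gaussian likelihood around an unknown parameter m
    (a stand-in for a perturber mass), with a flat prior on m > 0."""
    if m <= 0:
        return -math.inf
    return -sum((d - m) ** 2 for d in data) / (2.0 * sigma ** 2)

def metropolis(data, n_steps=20000, step=0.1, seed=0):
    """Random-walk Metropolis: no assumption about the posterior's shape."""
    rng = random.Random(seed)
    m = 1.0
    lp = log_post(m, data)
    chain = []
    for _ in range(n_steps):
        prop = m + rng.gauss(0.0, step)
        lp_prop = log_post(prop, data)
        # accept with probability min(1, posterior ratio)
        if math.log(rng.random()) < lp_prop - lp:
            m, lp = prop, lp_prop
        chain.append(m)
    return chain[n_steps // 2:]  # discard the first half as burn-in

rng = random.Random(1)
observations = [2.0 + rng.gauss(0.0, 1.0) for _ in range(50)]
posterior_samples = metropolis(observations)
```

The retained chain approximates the posterior of m directly, so quantiles and credible intervals come from the samples themselves rather than from a linearized Gaussian assumption.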
Three-dimensional hypersonic rarefied flow calculations using direct simulation Monte Carlo method
Celenligil, M. Cevdet; Moss, James N.
1993-01-01
A summary of three-dimensional simulations of hypersonic rarefied flows is presented, in an effort to understand the highly nonequilibrium flows about space vehicles entering the Earth's atmosphere and to realistically estimate the aerothermal loads. Calculations are performed using the direct simulation Monte Carlo method with a five-species reacting gas model, which accounts for rotational and vibrational internal energies. Results are obtained for the external flows about various bodies in the transitional flow regime. For the cases considered, convective heating, flowfield structure, and overall aerodynamic coefficients are presented, and comparisons are made with the available experimental data. The agreement between the calculated and measured results is very good.
The Monte Carlo simulation of the Borexino detector
Agostini, M.; Altenmüller, K.; Appel, S.; Atroshchenko, V.; Bagdasarian, Z.; Basilico, D.; Bellini, G.; Benziger, J.; Bick, D.; Bonfini, G.; Borodikhina, L.; Bravo, D.; Caccianiga, B.; Calaprice, F.; Caminata, A.; Canepa, M.; Caprioli, S.; Carlini, M.; Cavalcante, P.; Chepurnov, A.; Choi, K.; D'Angelo, D.; Davini, S.; Derbin, A.; Ding, X. F.; Di Noto, L.; Drachnev, I.; Fomenko, K.; Formozov, A.; Franco, D.; Froborg, F.; Gabriele, F.; Galbiati, C.; Ghiano, C.; Giammarchi, M.; Goeger-Neff, M.; Goretti, A.; Gromov, M.; Hagner, C.; Houdy, T.; Hungerford, E.; Ianni, Aldo; Ianni, Andrea; Jany, A.; Jeschke, D.; Kobychev, V.; Korablev, D.; Korga, G.; Kryn, D.; Laubenstein, M.; Litvinovich, E.; Lombardi, F.; Lombardi, P.; Ludhova, L.; Lukyanchenko, G.; Machulin, I.; Magnozzi, M.; Manuzio, G.; Marcocci, S.; Martyn, J.; Meroni, E.; Meyer, M.; Miramonti, L.; Misiaszek, M.; Muratova, V.; Neumair, B.; Oberauer, L.; Opitz, B.; Ortica, F.; Pallavicini, M.; Papp, L.; Pocar, A.; Ranucci, G.; Razeto, A.; Re, A.; Romani, A.; Roncin, R.; Rossi, N.; Schönert, S.; Semenov, D.; Shakina, P.; Skorokhvatov, M.; Smirnov, O.; Sotnikov, A.; Stokes, L. F. F.; Suvorov, Y.; Tartaglia, R.; Testera, G.; Thurn, J.; Toropova, M.; Unzhakov, E.; Vishneva, A.; Vogelaar, R. B.; von Feilitzsch, F.; Wang, H.; Weinz, S.; Wojcik, M.; Wurm, M.; Yokley, Z.; Zaimidoroga, O.; Zavatarelli, S.; Zuber, K.; Zuzel, G.
2018-01-01
We describe the Monte Carlo (MC) simulation of the Borexino detector and the agreement of its output with data. The Borexino MC "ab initio" simulates the energy loss of particles in all detector components and generates the resulting scintillation photons and their propagation within the liquid scintillator volume. The simulation accounts for absorption, reemission, and scattering of the optical photons and tracks them until they either are absorbed or reach the photocathode of one of the photomultiplier tubes. Photon detection is followed by a comprehensive simulation of the readout electronics response. The MC is tuned using data collected with radioactive calibration sources deployed inside and around the scintillator volume. The simulation reproduces the energy response of the detector, its uniformity within the fiducial scintillator volume relevant to neutrino physics, and the time distribution of detected photons to better than 1% between 100 keV and several MeV. The techniques developed to simulate the Borexino detector and their level of refinement are of possible interest to the neutrino community, especially for current and future large-volume liquid scintillator experiments such as Kamland-Zen, SNO+, and Juno.
The proton therapy nozzles at Samsung Medical Center: A Monte Carlo simulation study using TOPAS
Chung, Kwangzoo; Kim, Jinsung; Kim, Dae-Hyun; Ahn, Sunghwan; Han, Youngyih
2015-07-01
To expedite the commissioning process of the proton therapy system at Samsung Medical Center (SMC), we have developed a Monte Carlo simulation model of the proton therapy nozzles by using TOol for PArticle Simulation (TOPAS). At the SMC proton therapy center, we have two gantry rooms with different types of nozzles: a multi-purpose nozzle and a dedicated scanning nozzle. Each nozzle has been modeled in detail following the geometry information provided by the manufacturer, Sumitomo Heavy Industries, Ltd. For this purpose, the novel features of TOPAS, such as the time feature and the ridge filter class, have been used, and the appropriate physics models for proton nozzle simulation have been defined. Dosimetric properties, such as the percent depth dose curve, spread-out Bragg peak (SOBP), and beam spot size, have been simulated and verified against measured beam data. Beyond the Monte Carlo nozzle modeling, we have developed an interface between TOPAS and the treatment planning system (TPS), RayStation. An exported radiotherapy (RT) plan from the TPS is interpreted by the interface and translated into TOPAS input text. The developed Monte Carlo nozzle model can be used to estimate non-beam performance, such as the neutron background of the nozzles. Furthermore, the nozzle model can be used to study the mechanical optimization of the nozzle design.
Hidayat, Iki; Sutopo; Pratama, Heru Berian
2017-12-01
The Kerinci geothermal field is a single-phase liquid reservoir system in the Kerinci District, in the western part of Jambi Province. In this field, there are geothermal prospects identified by a heat-source upflow inside a National Park area. The Kerinci field was planned by Pertamina Geothermal Energy for a 1×55 MWe development. To characterize the reservoir, a numerical simulation of the Kerinci field was developed using the TOUGH2 software with information from the conceptual model. The pressure and temperature profiles of well KRC-B1 were validated against simulation data to reach the natural-state condition, with a good match. Based on the natural-state simulation, the resource of the Kerinci geothermal field was estimated using Monte Carlo simulation, with P10-P50-P90 results of 49.4 MW, 64.3 MW, and 82.4 MW, respectively. This paper is the first study in which the resource assessment of the Kerinci geothermal field has been estimated successfully using numerical simulation coupled with Monte Carlo simulation.
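A Monte Carlo resource assessment of the kind reported above can be sketched with a toy volumetric model: sample uncertain inputs, form a capacity for each draw, and read P10/P50/P90 off the sorted samples. The triangular input ranges below are purely illustrative and are not Kerinci field parameters.

```python
import random

def simulate_capacity(n=20000, seed=7):
    """Toy volumetric Monte Carlo: capacity (MW) = area * thickness *
    power density, each input drawn from a triangular distribution.
    All ranges are illustrative, not field data."""
    rng = random.Random(seed)
    caps = []
    for _ in range(n):
        area = rng.triangular(5.0, 15.0, 10.0)       # km^2
        thickness = rng.triangular(1.0, 2.5, 1.5)    # km
        density = rng.triangular(2.0, 6.0, 4.0)      # MW per km^3
        caps.append(area * thickness * density)
    return sorted(caps)

def percentile(sorted_xs, p):
    """p-th percentile of an ascending-sorted sample (nearest-rank style)."""
    i = min(len(sorted_xs) - 1, int(p / 100.0 * len(sorted_xs)))
    return sorted_xs[i]

caps = simulate_capacity()
p10, p50, p90 = (percentile(caps, p) for p in (10, 50, 90))
```

Here, as in the abstract, P10 denotes the 10th percentile of the capacity distribution (the low estimate) and P90 the 90th (the high estimate).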
Monte Carlo simulations of the radiation environment for the CMS Experiment
AUTHOR|(CDS)2068566; Bayshev, I.; Bergstrom, I.; Cooijmans, T.; Dabrowski, A.; Glöggler, L.; Guthoff, M.; Kurochkin, I.; Vincke, H.; Tajeda, S.
2016-01-01
Monte Carlo radiation transport codes are used by the CMS Beam Radiation Instrumentation and Luminosity (BRIL) project to estimate the radiation levels due to proton-proton collisions and machine-induced background. Results are used by the CMS collaboration for various applications: comparison with detector hit rates, pile-up studies, predictions of radiation damage based on various models (Dose, NIEL, DPA), shielding design, and estimations of the residual dose environment. The simulation parameters and the maintenance of the input files are summarised, and key results are presented. Furthermore, an overview of additional programs developed by the BRIL project to meet the specific needs of the CMS community is given.
Diffraction enhanced breast imaging through Monte Carlo simulations
Energy Technology Data Exchange (ETDEWEB)
Cunha, D.M. [Departamento de Fisica e Matematica, FFCLRP, 14040-901 Universidade de Sao Paulo, Ribeirao Preto, SP (Brazil); Instituto de Fisica, Universidade Federal de Uberlandia, 38400-902, Uberlandia, MG (Brazil); Tomal, A. [Departamento de Fisica e Matematica, FFCLRP, 14040-901 Universidade de Sao Paulo, Ribeirao Preto, SP (Brazil); Poletti, M.E., E-mail: poletti@ffclrp.usp.br [Departamento de Fisica e Matematica, FFCLRP, 14040-901 Universidade de Sao Paulo, Ribeirao Preto, SP (Brazil)
2011-10-01
In this work, the potential use of diffraction effects from elastic scattering for breast imaging was studied through Monte Carlo (MC) simulations. The geometrical model of the compressed breast consisted of a semi-infinite layer, composed of a mixture of adipose and glandular tissue, with five spherical objects within it simulating different tissue compositions. A pencil beam scanned the breast surface, impinging normally on it. Two receptors were placed under the breast: the first detected primary photons, while the other detected scattered photons. Two images of the breast were thus obtained, a primary image and a scatter image. Results showed that the scatter image provided greater contrast than the primary image, with the possibility of enhancing the contribution of a specific breast tissue to image formation. Nevertheless, scatter images also show considerably higher noise. The results obtained indicate that elastic scattering has great potential to aid in the enhancement of the mammographic image.
Monte Carlo simulations of nanoscale focused neon ion beam sputtering.
Timilsina, Rajendra; Rack, Philip D
2013-12-13
A Monte Carlo simulation is developed to model the physical sputtering of aluminum and tungsten emulating nanoscale focused helium and neon ion beam etching from the gas field ion microscope. Neon beams with different beam energies (0.5-30 keV) and a constant beam diameter (Gaussian with full-width-at-half-maximum of 1 nm) were simulated to elucidate the nanostructure evolution during the physical sputtering of nanoscale high aspect ratio features. The aspect ratio and sputter yield vary with the ion species and beam energy for a constant beam diameter and are related to the distribution of the nuclear energy loss. Neon ions have a larger sputter yield than the helium ions due to their larger mass and consequently larger nuclear energy loss relative to helium. Quantitative information such as the sputtering yields, the energy-dependent aspect ratios and resolution-limiting effects are discussed.
Treatment planning in radiosurgery: parallel Monte Carlo simulation software
Energy Technology Data Exchange (ETDEWEB)
Scielzo, G. [Galliera Hospitals, Genova (Italy). Dept. of Hospital Physics; Grillo Ruggieri, F. [Galliera Hospitals, Genova (Italy) Dept. for Radiation Therapy; Modesti, M.; Felici, R. [Electronic Data System, Rome (Italy); Surridge, M. [University of South Hampton (United Kingdom). Parallel Apllication Centre
1995-12-01
The main objective of this research was to evaluate the possibility of direct Monte Carlo simulation for accurate dosimetry with short computation times. We made use of a graphics workstation, a linear accelerator, and water, PMMA, and anthropomorphic phantoms for validation purposes; ionometric, film, and thermoluminescent techniques for dosimetry; and a treatment planning system for comparison. Benchmarking results suggest that short computing times can be obtained with the parallel version of EGS4 that was developed. Parallelism was obtained by assigning simulation incident photons to separate processors, which required the development of a parallel random number generator. Validation consisted of phantom irradiation and comparison of predicted and measured values, with good agreement in PDD and dose profiles. Experiments on anthropomorphic phantoms (with inhomogeneities) were carried out, and these values are being compared with results obtained with the conventional treatment planning system.
Markov chain Monte Carlo simulation for Bayesian Hidden Markov Models
Chan, Lay Guat; Ibrahim, Adriana Irawati Nur Binti
2016-10-01
A hidden Markov model (HMM) is a mixture model which has a Markov chain with finite states as its mixing distribution. HMMs have been applied to a variety of fields, such as speech and face recognition. The main purpose of this study is to investigate the Bayesian approach to HMMs. Using this approach, we can simulate from the parameters' posterior distribution using Markov chain Monte Carlo (MCMC) sampling methods. HMMs seem to be useful, but there are some limitations. Therefore, by using the Mixture of Dirichlet Processes Hidden Markov Model (MDPHMM) based on Yau et al. (2011), we hope to overcome these limitations. We conduct a simulation study using MCMC methods to investigate the performance of this model.
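The mixing Markov chain underlying an HMM can be illustrated with the standard forward algorithm for a discrete two-state model. This is a textbook computation, not the MDPHMM of the paper, and the transition, emission, and initial probabilities below are arbitrary examples.

```python
def hmm_forward(obs, trans, emit, init):
    """Forward algorithm: total probability of a discrete observation
    sequence under an HMM whose hidden states form the mixing Markov chain."""
    n_states = len(init)
    # alpha[s] = P(obs so far, current hidden state = s)
    alpha = [init[s] * emit[s][obs[0]] for s in range(n_states)]
    for o in obs[1:]:
        alpha = [emit[s][o] * sum(alpha[r] * trans[r][s] for r in range(n_states))
                 for s in range(n_states)]
    return sum(alpha)

# Arbitrary 2-state example with two observable symbols (0 and 1).
trans = [[0.9, 0.1], [0.2, 0.8]]
emit = [[0.8, 0.2], [0.3, 0.7]]   # P(symbol | state)
init = [0.5, 0.5]
likelihood = hmm_forward([0, 0, 1, 0], trans, emit, init)
```

In the Bayesian setting of the abstract, this likelihood is what an MCMC sampler would evaluate (or marginalize over hidden states) while exploring the posterior of `trans` and `emit`.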
Jennings, E.; Madigan, M.
2017-04-01
Given the complexity of modern cosmological parameter inference where we are faced with non-Gaussian data and noise, correlated systematics and multi-probe correlated datasets, the Approximate Bayesian Computation (ABC) method is a promising alternative to traditional Markov Chain Monte Carlo approaches in the case where the Likelihood is intractable or unknown. The ABC method is called "Likelihood free" as it avoids explicit evaluation of the Likelihood by using a forward model simulation of the data which can include systematics. We introduce astroABC, an open source ABC Sequential Monte Carlo (SMC) sampler for parameter estimation. A key challenge in astrophysics is the efficient use of large multi-probe datasets to constrain high dimensional, possibly correlated parameter spaces. With this in mind, astroABC allows for massive parallelization using MPI, a framework that handles spawning of processes across multiple nodes. A key new feature of astroABC is the ability to create MPI groups with different communicators, one for the sampler and several others for the forward model simulation, which speeds up sampling time considerably. For smaller jobs the Python multiprocessing option is also available. Other key features of this new sampler include: a Sequential Monte Carlo sampler; a method for iteratively adapting tolerance levels; local covariance estimate using scikit-learn's KDTree; modules for specifying optimal covariance matrix for a component-wise or multivariate normal perturbation kernel and a weighted covariance metric; restart files output frequently so an interrupted sampling run can be resumed at any iteration; output and restart files are backed up at every iteration; user defined distance metric and simulation methods; a module for specifying heterogeneous parameter priors including non-standard prior PDFs; a module for specifying a constant, linear, log or exponential tolerance level; well-documented examples and sample scripts. This code is hosted
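The likelihood-free idea behind astroABC can be shown in its simplest form, ABC rejection rather than the paper's SMC sampler: draw parameters from the prior, run the forward-model simulator, and keep draws whose simulated summary statistic lands close to the observed one. The Gaussian toy model, prior range, and tolerance below are illustrative assumptions.

```python
import random

def forward_model(theta, n, rng):
    """Simulator standing in for a model with an intractable likelihood:
    here, Gaussian data with unknown mean theta."""
    return [rng.gauss(theta, 1.0) for _ in range(n)]

def abc_rejection(observed, n_draws=20000, tol=0.1, seed=3):
    """Keep prior draws whose simulated summary statistic (the mean)
    falls within `tol` of the observed one; no likelihood is evaluated."""
    rng = random.Random(seed)
    obs_mean = sum(observed) / len(observed)
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(-5.0, 5.0)            # flat prior
        sim = forward_model(theta, len(observed), rng)
        if abs(sum(sim) / len(sim) - obs_mean) < tol:
            accepted.append(theta)
    return accepted

rng = random.Random(1)
observed = [rng.gauss(2.0, 1.0) for _ in range(100)]
posterior = abc_rejection(observed)
```

An SMC sampler such as astroABC improves on this by shrinking the tolerance over a sequence of weighted particle populations instead of rejecting most prior draws outright.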
Radiation doses in volume-of-interest breast computed tomography--A Monte Carlo simulation study.
Lai, Chao-Jen; Zhong, Yuncheng; Yi, Ying; Wang, Tianpeng; Shaw, Chris C
2015-06-01
Cone beam breast computed tomography (breast CT) with true three-dimensional, nearly isotropic spatial resolution has been developed and investigated over the past decade to overcome the problem of lesions overlapping with breast anatomical structures on two-dimensional mammographic images. However, the ability of breast CT to detect small objects, such as tissue structure edges and small calcifications, is limited. To resolve this problem, the authors proposed and developed a volume-of-interest (VOI) breast CT technique to image a small VOI using a higher radiation dose to improve that region's visibility. In this study, the authors performed Monte Carlo simulations to estimate average breast dose and average glandular dose (AGD) for the VOI breast CT technique. Electron-Gamma-Shower system code-based Monte Carlo codes were used to simulate breast CT. The Monte Carlo codes were validated using physical measurements of air kerma ratios and point doses in phantoms with an ion chamber and optically stimulated luminescence dosimeters. The validated full-cone x-ray source was then collimated to simulate half cone beam x-rays to image digital pendant-geometry, hemi-ellipsoidal, homogeneous breast phantoms and to estimate breast doses with full field scans. Hemi-ellipsoidal homogeneous phantoms 13 cm in diameter and 10 cm long were used to simulate median breasts. Breast compositions of 25% and 50% volumetric glandular fractions (VGFs) were used to investigate the influence on breast dose. The simulated half cone beam x-rays were then collimated to a narrow x-ray beam with a 2.5 × 2.5 cm² field of view at the isocenter plane to perform VOI field scans. The Monte Carlo results for the full field scans and the VOI field scans were then used to estimate the AGD for the VOI breast CT technique. The ratios of air kerma ratios and dose measurement results from the Monte Carlo simulation to those from the physical measurements were 0.97 ± 0.03 and 1.10
Spatial distribution of reflected gamma rays by Monte Carlo simulation
Energy Technology Data Exchange (ETDEWEB)
Jehouani, A. [LPTN, Departement de Physique, Faculte des Sciences Semlalia, B.P. 2390, 40000 Marrakech (Morocco)], E-mail: jehouani@ucam.ac.ma; Merzouki, A. [LPTN, Departement de Physique, Faculte des Sciences Semlalia, B.P. 2390, 40000 Marrakech (Morocco); Remote Sensing and Geomatics of the Environment Laboratory, Ottawa-Carleton Geoscience Centre, Marion Hall, 140 Louis Pasteur, Ottawa, ON, KIN 6N5 (Canada); Boutadghart, F.; Ghassoun, J. [LPTN, Departement de Physique, Faculte des Sciences Semlalia, B.P. 2390, 40000 Marrakech (Morocco)
2007-10-15
In nuclear facilities, the reflection of gamma rays off walls and metals constitutes a source of radiation of unknown origin. These reflected gamma rays must be estimated and characterized. This study concerns gamma rays reflected from metal slabs. We evaluated the spatial distribution of the reflected gamma-ray spectra by using the Monte Carlo method. An appropriate estimator for the double differential albedo is used to determine the energy spectra and the angular distribution of gamma rays reflected by slabs of iron and aluminium. We took into account the principal interactions of gamma rays with matter: photoelectric absorption, coherent (Rayleigh) scattering, incoherent (Compton) scattering, and pair creation. The Klein-Nishina differential cross section was used to select the direction and energy of scattered photons after each Compton scattering. The obtained spectra show peaks at 0.511 MeV for higher source energies. The results are in good agreement with those obtained with the TRIPOLI code [J.C. Nimal et al., TRIPOLI02: Programme de Monte Carlo Polycinétique à Trois Dimensions, CEA Rapport, Commissariat a l'Energie Atomique.].
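The Compton kinematics underlying the Klein-Nishina sampling step can be written compactly: a photon of energy E scattering through angle theta emerges with E' = E / (1 + (E/m_e c^2)(1 - cos theta)). The sketch below encodes only this standard kinematic relation, not the authors' albedo estimator; the 0.511 MeV peak in their spectra comes from annihilation photons following pair creation.

```python
import math

MEC2 = 0.511  # electron rest energy in MeV

def compton_scattered_energy(e_mev, theta):
    """Energy (MeV) of a photon after Compton scattering through angle
    theta (radians): E' = E / (1 + (E/MEC2) * (1 - cos theta))."""
    return e_mev / (1.0 + (e_mev / MEC2) * (1.0 - math.cos(theta)))

# Forward scattering (theta = 0) leaves the energy unchanged; backscatter
# (theta = pi) approaches MEC2/2 = 0.2555 MeV for high incident energies.
```

In a full Monte Carlo, the scattering angle itself would be sampled from the Klein-Nishina differential cross section (e.g. by rejection sampling) before applying this relation.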
DYNAMIC PARAMETERS ESTIMATION OF INTERFEROMETRIC SIGNALS BASED ON SEQUENTIAL MONTE CARLO METHOD
Directory of Open Access Journals (Sweden)
M. A. Volynsky
2014-05-01
Full Text Available The paper deals with the sequential Monte Carlo method applied to the problem of interferometric signal parameter estimation. The method is based on the statistical approximation of the posterior probability density of the parameters. A detailed description of the algorithm is given. The possibility of using the minimum residual between prediction and observation as a criterion for selecting the ensemble elements generated at each step of the algorithm is shown. An analysis of the influence of input parameters on the performance of the algorithm has been conducted. It was found that the standard deviation of the amplitude estimation error for typical signals is about 10% of the maximum amplitude value. The phase estimation error was shown to have a normal distribution. In particular, the influence of the number of selected parameter vectors on the evaluation results is examined. On the basis of simulation results for the considered class of signals, it is recommended to select 30% of the generated vectors. Increasing the number of generated vectors beyond 150 does not significantly improve the quality of the obtained estimates. The sequential Monte Carlo method is recommended for use in dynamic processing of interferometric signals in cases where high immunity to non-linear changes of signal parameters and to the influence of random noise is required.
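A bootstrap particle filter is one standard realization of the sequential Monte Carlo approach described above. The sketch below tracks a slowly drifting amplitude through noisy observations; the random-walk dynamics, noise levels, and multinomial resampling are illustrative choices, not the authors' algorithm or signal model.

```python
import random, math

def particle_filter(observations, n_particles=500, seed=0):
    """Bootstrap particle filter for a toy model: a drifting amplitude a_k
    observed through y_k = a_k + noise."""
    rng = random.Random(seed)
    particles = [rng.uniform(0.0, 2.0) for _ in range(n_particles)]
    estimates = []
    for y in observations:
        # predict: random-walk dynamics for the amplitude
        particles = [p + rng.gauss(0.0, 0.05) for p in particles]
        # weight each particle by the likelihood of the observation
        weights = [math.exp(-(y - p) ** 2 / (2.0 * 0.2 ** 2)) for p in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # resample to concentrate particles in high-likelihood regions
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates

rng = random.Random(2)
truth = [1.0 + 0.3 * math.sin(0.1 * k) for k in range(100)]
obs = [t + rng.gauss(0.0, 0.2) for t in truth]
est = particle_filter(obs)
```

Selecting a fixed fraction of the best-matching vectors by minimum residual, as the paper recommends, would replace the weighted multinomial resampling step here.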
Evaluation of effective dose with chest digital tomosynthesis system using Monte Carlo simulation
Kim, Dohyeon; Jo, Byungdu; Lee, Youngjin; Park, Su-Jin; Lee, Dong-Hoon; Kim, Hee-Joung
2015-03-01
The chest digital tomosynthesis (CDT) system has recently been introduced and studied. This system offers the potential to be a substantial improvement over conventional chest radiography for lung nodule detection and reduces the radiation dose by using limited angles. The PC-based Monte Carlo program (PCXMC) simulation toolkit (STUK, Helsinki, Finland) is widely used to evaluate radiation dose in CDT systems. However, this toolkit has two significant limitations: PCXMC cannot describe a model for every individual patient and does not describe an accurate X-ray beam spectrum, whereas the Geant4 Application for Tomographic Emission (GATE) simulation can describe phantoms of various sizes for individual patients and a proper X-ray spectrum. However, few studies have been conducted to evaluate effective dose in CDT systems with the GATE Monte Carlo simulation toolkit. The purpose of this study was to evaluate effective dose in a virtual infant chest phantom for the posterior-anterior (PA) view in a CDT system using GATE simulation. We obtained the effective dose at different tube angles by applying the dose actor function in GATE, which is commonly used for medical radiation dosimetry. The results indicated that GATE simulation was useful for estimating the distribution of absorbed dose. Consequently, we obtained an acceptable distribution of effective dose at each projection. These results indicate that GATE simulation can be an alternative method of calculating effective dose in CDT applications.
Monte Carlo simulation of gamma ray tomography for image reconstruction
Energy Technology Data Exchange (ETDEWEB)
Guedes, Karlos A.N.; Moura, Alex; Dantas, Carlos; Melo, Silvio; Lima, Emerson, E-mail: karlosguedes@hotmail.com [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil); Meric, Ilker [University of Bergen (Norway)
2015-07-01
Monte Carlo simulations of an object of known density and shape were validated against gamma ray tomography in static experiments. An aluminum half-moon piece placed inside a steel pipe was the MC simulation test object, which was also measured by means of gamma ray transmission. The wall effect of the steel pipe due to the irradiation geometry of a single source-detector-pair tomography was evaluated by comparison with theoretical data. The MCNPX code requires a defined geometry for each photon trajectory, which practically prevents its use for tomography reconstruction simulation. The solution was found by writing a program in the Delphi language to automate the creation of input files. Simulations of tomography data by the automated MCNPX code were carried out and validated by experimental data. Working in this sequence, the produced data needed a databank for storage. The experimental setup used a Cesium-137 isotopic radioactive source (7.4 × 10⁹ Bq) and a NaI(Tl) scintillation detector with a (51 × 51) × 10⁻³ m crystal coupled to a multichannel analyzer, with a stainless steel tube of 0.154 m internal diameter and 0.014 m wall thickness. The results show that the MCNPX simulation code, adapted to automated input files, is useful for generating a matrix of data M(θ,t) of a computerized gamma ray tomography for any object of known density and regular shape. Experimental validation used the RMSE of gamma ray paths and of attenuation coefficient data. (author)
Monte Carlo simulation of light fluence calculation during pleural PDT
Meo, Julia L.; Zhu, Timothy
2013-03-01
A thorough understanding of light distribution in the target tissue is necessary for accurate light dosimetry in PDT. Solving the problem of light dose depends, in part, on the geometry of the tissue to be treated. When considering PDT in the thoracic cavity for treatment of malignant, localized tumors such as those observed in malignant pleural mesothelioma (MPM), changes in light dose caused by the cavity geometry should be accounted for in order to improve treatment efficacy. Cavity-like geometries demonstrate what is known as the "integrating sphere effect", where multiple light scattering off the cavity walls induces an overall increase in light dose in the cavity. We present a Monte Carlo simulation of light fluence based on spherical and elliptical cavity geometries with various dimensions. The tissue optical properties, as well as the non-scattering medium (air or water), are varied. We have also introduced small absorption inside the cavity to simulate the effect of blood absorption. We expand the MC simulation to track photons both within the cavity and in the surrounding cavity walls. Simulations are run for a variety of cavity optical properties determined using spectroscopic methods. We concluded from the MC simulation that the light fluence inside the cavity is inversely proportional to the surface area.
The impact of Monte Carlo simulation: a scientometric analysis of scholarly literature
Pia, Maria Grazia; Bell, Zane W; Dressendorfer, Paul V
2010-01-01
A scientometric analysis of Monte Carlo simulation and Monte Carlo codes has been performed over a set of representative scholarly journals related to radiation physics. The results of this study are reported and discussed. They document and quantitatively appraise the role of Monte Carlo methods and codes in scientific research and engineering applications.
Learning About Ares I from Monte Carlo Simulation
Hanson, John M.; Hall, Charlie E.
2008-01-01
This paper addresses Monte Carlo simulation analyses that are being conducted to understand the behavior of the Ares I launch vehicle, and to assist with its design. After describing the simulation and modeling of Ares I, the paper addresses the process used to determine what simulations are necessary, and the parameters that are varied in order to understand how the Ares I vehicle will behave in flight. Outputs of these simulations furnish a significant group of design customers with data needed for the development of Ares I and of the Orion spacecraft that will ride atop Ares I. After listing the customers, examples of many of the outputs are described. Products discussed in this paper include those that support structural loads analysis, aerothermal analysis, flight control design, failure/abort analysis, determination of flight performance reserve, examination of orbit insertion accuracy, determination of the Upper Stage impact footprint, analysis of stage separation, analysis of launch probability, analysis of first stage recovery, thrust vector control and reaction control system design, liftoff drift analysis, communications analysis, umbilical release, acoustics, and design of jettison systems.
Abbas, Ismail; Rovira, Joan; Casanovas, Josep
2007-05-01
The patient recruitment process of clinical trials is an essential element which needs to be designed properly. In this paper we describe different simulation models under continuous and discrete time assumptions for the design of recruitment in clinical trials. The results of hypothetical examples of clinical trial recruitment are presented. The recruitment time is calculated and the number of recruited patients is quantified for a given time and probability of recruitment. The expected delay and the effective recruitment durations are estimated using both continuous and discrete time modeling. The proposed type of Monte Carlo simulation Markov models will enable optimization of the recruitment process and the estimation and calibration of its parameters to aid the proposed clinical trials. A continuous time simulation may minimize the duration of the recruitment and, consequently, the total duration of the trial.
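A minimal continuous-time sketch of the kind of recruitment simulation described above, assuming pooled Poisson arrivals across centres; the number of centres, the per-centre rate and the target sample size are illustrative assumptions, not values taken from the paper.

```python
import random

random.seed(0)

def recruitment_time(n_target=100, n_centres=10, rate_per_centre=0.05):
    # Continuous-time model: superposing Poisson arrivals from all centres
    # gives one pooled Poisson process; the time to the n-th recruit is a
    # sum of exponential inter-arrival gaps.
    total_rate = n_centres * rate_per_centre  # patients per day
    t = 0.0
    for _ in range(n_target):
        t += random.expovariate(total_rate)
    return t

def simulate(n_rep=2000, **kw):
    # Monte Carlo over trial replications: mean and 95th-percentile duration
    times = sorted(recruitment_time(**kw) for _ in range(n_rep))
    mean = sum(times) / n_rep
    p95 = times[int(0.95 * n_rep)]
    return mean, p95

mean_t, p95_t = simulate()
```

With these assumed rates the expected duration is 100 / (10 × 0.05) = 200 days; the percentile spread is the kind of quantity that supports calibrating a recruitment plan.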
Energy Technology Data Exchange (ETDEWEB)
Jennings, E. [Fermilab; Madigan, M. [Trinity Coll., Dublin
2017-01-18
Given the complexity of modern cosmological parameter inference, where we are faced with non-Gaussian data and noise, correlated systematics and multi-probe correlated data sets, the Approximate Bayesian Computation (ABC) method is a promising alternative to traditional Markov Chain Monte Carlo approaches in the case where the likelihood is intractable or unknown. The ABC method is called "likelihood free" as it avoids explicit evaluation of the likelihood by using a forward model simulation of the data which can include systematics. We introduce astroABC, an open source ABC Sequential Monte Carlo (SMC) sampler for parameter estimation. A key challenge in astrophysics is the efficient use of large multi-probe datasets to constrain high dimensional, possibly correlated parameter spaces. With this in mind astroABC allows for massive parallelization using MPI, a framework that handles spawning of jobs across multiple nodes. A key new feature of astroABC is the ability to create MPI groups with different communicators, one for the sampler and several others for the forward model simulation, which speeds up sampling time considerably. For smaller jobs the Python multiprocessing option is also available. Other key features include: a Sequential Monte Carlo sampler; a method for iteratively adapting tolerance levels; local covariance estimation using scikit-learn's KDTree; modules for specifying an optimal covariance matrix for a component-wise or multivariate normal perturbation kernel; output and restart files backed up every iteration; user-defined metric and simulation methods; a module for specifying heterogeneous parameter priors, including non-standard prior PDFs; a module for specifying a constant, linear, log or exponential tolerance level; and well-documented examples and sample scripts. This code is hosted online at https://github.com/EliseJ/astroABC
Alpha Eigenvalue Estimation from Dynamic Monte Carlo Calculation for Subcritical Systems
Energy Technology Data Exchange (ETDEWEB)
Shaukat, Nadeem; Shim, Hyung Jin; Jang, Sang Hoon [Seoul National University, Seoul (Korea, Republic of)
2016-05-15
The dynamic Monte Carlo (DMC) method has been used in the TART code for α eigenvalue calculations, where a unique method measures α in time-stepwise Monte Carlo simulations: for off-critical systems, the neutron population is allowed to change exponentially over a period of time and is then uniformly combed back to the population present at the beginning of the time boundary. In this study, the conventional dynamic Monte Carlo method has been implemented in McCARD. Because the neutron population changes exponentially at the end of each time boundary for off-critical systems, a conventional time cut-off population-control strategy, using the conventional combing method, is included in the DMC module implemented in McCARD. Instead of considering cycles, the simulation is divided into time intervals. At the end of each time interval, population control is applied to the banked neutrons: randomly selected neutrons are discarded until the size of the population matches the initial number of neutron histories at the beginning of the time simulation. The prompt neutron decay constant α is estimated from the DMC algorithm for subcritical systems. The effectiveness of the results is examined for two-group infinite homogeneous problems with varying k-values. From comparisons with the analytical solutions, it is observed that the results agree well for each k-value.
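The population-combing idea can be illustrated in isolation. The sketch below is a generic systematic comb that returns a bank of exactly `n_keep` equal-weight particles while preserving total weight; it is a textbook variant for illustration only, not McCARD's implementation, and the discard-only scheme described above differs in detail.

```python
import random

random.seed(2)

def comb(particles, weights, n_keep):
    """Systematic comb: sweep a set of evenly spaced 'teeth' across the
    cumulative weight and keep the particle under each tooth.  Returns
    exactly n_keep particles, each carrying weight total/n_keep, so the
    total weight of the bank is preserved."""
    total = sum(weights)
    spacing = total / n_keep
    tooth = random.uniform(0.0, spacing)  # random offset avoids bias
    kept, cum = [], 0.0
    for p, w in zip(particles, weights):
        cum += w
        while tooth < cum and len(kept) < n_keep:
            kept.append(p)   # a heavy particle may be split (kept twice)
            tooth += spacing
    return kept, spacing

# Toy bank: a grown neutron population combed back down to 400 histories.
pop = list(range(1000))
wts = [1.0] * 1000
kept, w_new = comb(pop, wts, 400)
```

After the comb, 400 histories each carry weight 2.5, so the combined weight of the bank still equals the 1000 units present before combing.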
Monte Carlo simulation to analyze the performance of CPV modules
Herrero, Rebeca; Antón, Ignacio; Sala, Gabriel; De Nardis, Davide; Araki, Kenji; Yamaguchi, Masafumi
2017-09-01
A model that evaluates the performance of high concentrator photovoltaics (HCPV) modules by generating current-voltage curves has been applied together with a Monte Carlo approach to obtain a distribution of modules with a given set of characteristics (e.g., receivers' electrical properties and misalignments within the elementary units of modules) corresponding to a manufacturing scenario. In this paper, the performance of CPV systems (tracker and inverter) containing the set of simulated modules is evaluated for different system characteristics: inverter configuration, sorting of modules and bending of the tracker frame. Thus, the study of HCPV technology with regard to its angular constraints is fully covered by analyzing all the possible elements affecting the generated electrical power.
Monte Carlo simulations of solid-state photoswitches
Energy Technology Data Exchange (ETDEWEB)
Rambo, P.W.; Denavit, J.
1995-09-01
Large increases in conductivity induced in GaAs and other semiconductors by photoionization allow fast switching by laser light, with applications to pulse-power technology and microwave generation. Experiments have shown that under high-field conditions (10 to 50 kV/cm), conductivity may occur either in the linear mode, where it is proportional to the absorbed light; in the "lock-on" mode, where it persists after termination of the laser pulse; or in the avalanche mode, where multiple carriers are generated. We have assembled a self-consistent Monte Carlo code to study these phenomena and in particular to model hot electron effects, which are expected to be important at high field strengths. This project has also brought our expertise acquired in advanced particle simulation of plasmas to bear on the modeling of semiconductor devices, which has broad industrial applications.
Monte Carlo simulations of nematic and chiral nematic shells.
Wand, Charlie R; Bates, Martin A
2015-01-01
We present a systematic Monte Carlo simulation study of thin nematic and cholesteric shells with planar anchoring using an off-lattice model. The results obtained using the simple model correspond with previously published results for lattice-based systems, with the number, type, and position of defects observed dependent on the shell thickness with four half-strength defects in a tetrahedral arrangement found in very thin shells and a pair of defects in a bipolar (boojum) configuration observed in thicker shells. A third intermediate defect configuration is occasionally observed for intermediate thickness shells, which is stabilized in noncentrosymmetric shells of nonuniform thickness. Chiral nematic (cholesteric) shells are investigated by including a chiral term in the potential. Decreasing the pitch of the chiral nematic leads to a twisted bipolar (chiral boojum) configuration with the director twist increasing from the inner to the outer surface.
A Comparison of Experimental EPMA Data and Monte Carlo Simulations
Carpenter, P. K.
2004-01-01
Monte Carlo (MC) modeling shows excellent prospects for simulating electron scattering and x-ray emission from complex geometries, and can be compared to experimental measurements using electron-probe microanalysis (EPMA) and φ(ρz) correction algorithms. Experimental EPMA measurements made on NIST SRM 481 (AgAu) and 482 (CuAu) alloys, at a range of accelerating potentials and instrument take-off angles, represent a formal microanalysis data set that has been used to develop φ(ρz) correction algorithms. The accuracy of MC calculations obtained using the NIST, WinCasino, WinXray, and Penelope MC packages will be evaluated relative to these experimental data. Additional information is contained in the extended abstract.
Monte Carlo modelling of Schottky diode for rectenna simulation
Bernuchon, E.; Aniel, F.; Zerounian, N.; Grimault-Jacquin, A. S.
2017-09-01
Before designing a detector circuit, the electrical parameters extraction of the Schottky diode is a critical step. This article is based on a Monte-Carlo (MC) solver of the Boltzmann Transport Equation (BTE) including different transport mechanisms at the metal-semiconductor contact such as image force effect or tunneling. The weight of tunneling and thermionic current is quantified according to different degrees of tunneling modelling. The I-V characteristic highlights the dependence of the ideality factor and the current saturation with bias. Harmonic Balance (HB) simulation on a rectifier circuit within Advanced Design System (ADS) software shows that considering non-linear ideality factor and saturation current for the electrical model of the Schottky diode does not seem essential. Indeed, bias independent values extracted in forward regime on I-V curve are sufficient. However, the non-linear series resistance extracted from a small signal analysis (SSA) strongly influences the conversion efficiency at low input powers.
Exploring a Parasite-Host Model with Monte Carlo Simulations
Breecher, Nyles; Dong, Jiajia
2011-03-01
We explore parasite-host interactions, a less investigated subset of the well-established predator-prey model. In particular, it is not well known how the numerous parameters of the system affect its characteristics. Parasite-host systems rely on spatial interaction, as a parasite must make physical contact with the host to reproduce. Using a Monte Carlo simulation programmed in C++, we study how the speed and type of movement of the host affect the spatial and temporal distribution of the parasites. By drawing on mean-field theory, we find the exact solution for the parasite distribution with a stationary host at the center and analyze the distributions for a moving host. The findings of the study reveal the rich behavior of a non-equilibrium system and bring insights to pest control and, on a larger scale, the spread of epidemics.
MONTE CARLO SIMULATION OF MULTIFOCAL STOCHASTIC SCANNING SYSTEM
Directory of Open Access Journals (Sweden)
LIXIN LIU
2014-01-01
Full Text Available Multifocal multiphoton microscopy (MMM has greatly improved the utilization of excitation light and imaging speed due to parallel multiphoton excitation of the samples and simultaneous detection of the signals, which allows it to perform three-dimensional fast fluorescence imaging. Stochastic scanning can provide continuous, uniform and high-speed excitation of the sample, which makes it a suitable scanning scheme for MMM. In this paper, the graphical programming language — LabVIEW is used to achieve stochastic scanning of the two-dimensional galvo scanners by using white noise signals to control the x and y mirrors independently. Moreover, the stochastic scanning process is simulated by using Monte Carlo method. Our results show that MMM can avoid oversampling or subsampling in the scanning area and meet the requirements of uniform sampling by stochastically scanning the individual units of the N × N foci array. Therefore, continuous and uniform scanning in the whole field of view is implemented.
A Monte Carlo tool to simulate breast cancer screening programmes
Forastero, C.; Zamora, L. I.; Guirado, D.; Lallena, A. M.
2010-09-01
A Monte Carlo tool which permits the simulation of screening mammography programmes is developed. Various statistical distributions describing the different parameters involved in the problem are used: the characteristics of the population under study, a tumour growth model and a model for tumour detection based on parameters such as sensitivity and specificity, which depend on the woman's age. We reproduce the results of different actual programmes. The model enables us to find the configuration (the age of the women who attend the screening trials and the screening frequency) which produces maximum benefit with minimum risk. In addition, the model has permitted us to validate some of the assumed hypotheses, such as the probability distribution of tumour detection as a function of tumour size, the frequency of the histological types and the transition probability between different histological types.
Monte Carlo simulations and benchmark studies at CERN's accelerator chain
AUTHOR|(CDS)2083190; Brugger, Markus
2016-01-01
Mixed particle and energy radiation fields present at the Large Hadron Collider (LHC) and its accelerator chain are responsible for failures of electronic devices located in the vicinity of the accelerator beam lines. These radiation effects on electronics and, more generally, the overall radiation damage issues have a direct impact on component and system lifetimes, as well as on maintenance requirements and radiation exposure of personnel who have to intervene and fix existing faults. The radiation environments and respective radiation damage issues along CERN's accelerator chain were studied in the framework of the CERN Radiation to Electronics (R2E) project and are hereby presented. The important interplay between Monte Carlo simulations and radiation monitoring is also highlighted.
A Monte Carlo simulation technique for low-altitude, wind-shear turbulence
Bowles, Roland L.; Laituri, Tony R.; Trevino, George
1990-01-01
A case is made for including anisotropy in a Monte Carlo flight simulation scheme of low-altitude wind-shear turbulence by means of power spectral density. This study attempts to eliminate all flight simulation-induced deficiencies in the basic turbulence model. A full-scale low-altitude wind-shear turbulence simulation scheme is proposed with particular emphasis on low cost and practicality for near-ground flight. The power spectral density statistic is used to highlight the need for realistic estimates of energy transfer associated with low-altitude wind-shear turbulence. The simulation of a particular anisotropic turbulence model is shown to be a relatively simple extension from that of traditional isotropic (Dryden) turbulence.
Monte Carlo simulation of photon way in clinical laser therapy
Ionita, Iulian; Voitcu, Gabriel
2011-07-01
The multiple scattering of light can increase the efficiency of laser therapy of inflammatory diseases by enlarging the treated area. Light absorption is essential for treatment, while scattering dominates. Multiple scattering effects must be introduced using the Monte Carlo method for modeling light transport in tissue and, finally, for calculating the optical parameters. Diffuse reflectance measurements were made on highly concentrated live leukocyte suspensions under conditions similar to in-vivo measurements. The results were compared with the values determined by MC calculations, and the latter were adjusted to match the measured values of diffuse reflectance. The principal idea of MC simulations applied to absorption and scattering phenomena is to follow the optical path of a photon through the turbid medium. The concentrated live cell suspension is a compromise between the homogeneous layer of the MC model and the light-cell interaction of in-vivo experiments. In this way, MC simulation allows us to compute the absorption coefficient. The values of the optical parameters, derived from simulation by best fitting the measured reflectance, were used to determine the effective cross section. Thus we can compute the absorbed radiation dose at the cellular level.
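The photon-following idea described here can be sketched as a weighted random walk. The sketch below assumes a semi-infinite medium, isotropic scattering and illustrative absorption/scattering coefficients; it is a generic MC photon-transport skeleton, not the authors' code.

```python
import math, random

random.seed(3)

MU_A, MU_S = 0.1, 10.0        # absorption / scattering coefficients (1/mm), assumed
MU_T = MU_A + MU_S
ALBEDO = MU_S / MU_T          # single-scattering albedo

def diffuse_reflectance(n_photons=5000):
    refl = 0.0
    for _ in range(n_photons):
        z, uz, w = 0.0, 1.0, 1.0   # depth, direction cosine, photon weight
        while w > 1e-3:            # terminate photons of negligible weight
            # sample free path length from the exponential attenuation law
            step = -math.log(1.0 - random.random()) / MU_T
            z += uz * step
            if z < 0.0:            # photon re-crossed the surface: escapes
                refl += w
                break
            w *= ALBEDO            # implicit capture: absorb (1 - albedo) * w
            uz = random.uniform(-1.0, 1.0)   # isotropic scattering direction
    return refl / n_photons

R = diffuse_reflectance()
```

Tallying the escaping weight gives the diffuse reflectance; comparing such a tally against the measured reflectance while varying MU_A and MU_S is the fitting loop the abstract describes.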
Scalable Metropolis Monte Carlo for simulation of hard shapes
Anderson, Joshua A.; Eric Irrgang, M.; Glotzer, Sharon C.
2016-07-01
We design and implement a scalable hard particle Monte Carlo simulation toolkit (HPMC), and release it open source as part of HOOMD-blue. HPMC runs in parallel on many CPUs and many GPUs using domain decomposition. We employ BVH trees instead of cell lists on the CPU for fast performance, especially with large particle size disparity, and optimize inner loops with SIMD vector intrinsics on the CPU. Our GPU kernel proposes many trial moves in parallel on a checkerboard and uses a block-level queue to redistribute work among threads and avoid divergence. HPMC supports a wide variety of shape classes, including spheres/disks, unions of spheres, convex polygons, convex spheropolygons, concave polygons, ellipsoids/ellipses, convex polyhedra, convex spheropolyhedra, spheres cut by planes, and concave polyhedra. NVT and NPT ensembles can be run in 2D or 3D triclinic boxes. Additional integration schemes permit Frenkel-Ladd free energy computations and implicit depletant simulations. In a benchmark system of a fluid of 4096 pentagons, HPMC performs 10 million sweeps in 10 min on 96 CPU cores on XSEDE Comet. The same simulation would take 7.6 h in serial. HPMC also scales to large system sizes, and the same benchmark with 16.8 million particles runs in 1.4 h on 2048 GPUs on OLCF Titan.
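The hard-particle Metropolis rule that HPMC parallelizes is simple at its core: propose a trial move and accept it if and only if it creates no overlap. A serial sketch for hard disks in a periodic box follows; all parameters are illustrative and unrelated to HPMC's internals.

```python
import math, random

random.seed(4)

N, L, R = 16, 10.0, 0.5   # disks, box edge, disk radius (dilute system)

def overlaps(pos, i, trial):
    # minimum-image overlap test of the trial position against all other disks
    for j, (xj, yj) in enumerate(pos):
        if j == i:
            continue
        dx = (trial[0] - xj + L / 2) % L - L / 2
        dy = (trial[1] - yj + L / 2) % L - L / 2
        if dx * dx + dy * dy < (2 * R) ** 2:
            return True
    return False

def init_lattice():
    # non-overlapping square-lattice start
    side = math.ceil(math.sqrt(N))
    return [((i % side + 0.5) * L / side, (i // side + 0.5) * L / side)
            for i in range(N)]

def sweep(pos, delta=0.3):
    acc = 0
    for i in range(N):
        x, y = pos[i]
        trial = ((x + random.uniform(-delta, delta)) % L,
                 (y + random.uniform(-delta, delta)) % L)
        # hard-core Metropolis: accept iff the move creates no overlap
        if not overlaps(pos, i, trial):
            pos[i] = trial
            acc += 1
    return acc

pos = init_lattice()
accepted = sum(sweep(pos) for _ in range(100))
rate = accepted / (100 * N)
```

Replacing the O(N) overlap scan with cell lists or BVH trees, and batching trial moves on a checkerboard, are exactly the optimizations the abstract describes.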
Mukumoto, Nobutaka; Tsujii, Katsutomo; Saito, Susumu; Yasunaga, Masayoshi; Takegawa, Hideki; Yamamoto, Tokihiro; Numasaki, Hodaka; Teshima, Teruki
2009-10-01
To develop an infrastructure for the integrated Monte Carlo verification system (MCVS) to verify the accuracy of conventional dose calculations, which often fail to accurately predict dose distributions, mainly due to inhomogeneities in the patient's anatomy, for example, in lung and bone. The MCVS consists of the graphical user interface (GUI) based on a computational environment for radiotherapy research (CERR) with MATLAB language. The MCVS GUI acts as an interface between the MCVS and a commercial treatment planning system to import the treatment plan, create MC input files, and analyze MC output dose files. The MCVS consists of the EGSnrc MC codes, which include EGSnrc/BEAMnrc to simulate the treatment head and EGSnrc/DOSXYZnrc to calculate the dose distributions in the patient/phantom. In order to improve computation time without approximations, an in-house cluster system was constructed. The phase-space data of a 6-MV photon beam from a Varian Clinac unit was developed and used to establish several benchmarks under homogeneous conditions. The MC results agreed with the ionization chamber measurements to within 1%. The MCVS GUI could import and display the radiotherapy treatment plan created by the MC method and various treatment planning systems, such as RTOG and DICOM-RT formats. Dose distributions could be analyzed by using dose profiles and dose volume histograms and compared on the same platform. With the cluster system, calculation time was improved in line with the increase in the number of central processing units (CPUs) at a computation efficiency of more than 98%. Development of the MCVS was successful for performing MC simulations and analyzing dose distributions.
A Monte Carlo Study of Marginal Maximum Likelihood Parameter Estimates for the Graded Model.
Ankenmann, Robert D.; Stone, Clement A.
Effects of test length, sample size, and assumed ability distribution were investigated in a multiple replication Monte Carlo study under the 1-parameter (1P) and 2-parameter (2P) logistic graded model with five score levels. Accuracy and variability of item parameter and ability estimates were examined. Monte Carlo methods were used to evaluate…
Fish, Laurel J.; Halcoussis, Dennis; Phillips, G. Michael
2017-01-01
The Monte Carlo method and related multiple imputation methods are traditionally used in math, physics and science to estimate and analyze data and are now becoming standard tools in analyzing business and financial problems. However, few sources explain the application of the Monte Carlo method for individuals and business professionals who are…
Equation of state of metallic hydrogen from coupled electron-ion Monte Carlo simulations.
Morales, Miguel A; Pierleoni, Carlo; Ceperley, D M
2010-02-01
We present a study of hydrogen at pressures higher than molecular dissociation using the coupled electron-ion Monte Carlo method. These calculations use the accurate reptation quantum Monte Carlo method to estimate the electronic energy and pressure while performing a Monte Carlo simulation of the protons. In addition to presenting simulation results for the equation of state over a large region of the phase diagram, we report the free energy obtained by thermodynamic integration. We find very good agreement with density-functional theory based molecular-dynamics calculations for pressures beyond 600 GPa and densities above ρ = 1.4 g/cm³, both for thermodynamic and structural properties. This agreement provides strong support for the different approximations employed in the density-functional treatment of the system, specifically the approximate exchange-correlation potential and the use of pseudopotentials for the range of densities considered. We find disagreement with chemical models, which suggests that a reinvestigation of planetary models, previously constructed using the Saumon-Chabrier-Van Horn equations of state, might be needed.
Energy Technology Data Exchange (ETDEWEB)
Matsumiya, T. [Nippon Steel Corporation, Tokyo (Japan)
1996-08-20
The Monte Carlo method was used to simulate an equilibrium diagram and the formation of microstructure during transformation and recrystallization. In simulating the Cu-A equilibrium diagram, the calculation was performed by stacking 24 face-centered cubic unit cells, each containing four lattice points, in each of the three directions, i.e. using a simulation cell consisting of a total of 24³ × 4 lattice points. Although this method has the potential to discover the existence of an unknown phase as a result of the calculation, problems remain in the handling of lattice relaxation and in the simulation of phase diagrams spanning phases with different crystal structures. In the simulation of transformation and recrystallization, discussions were given on the correspondence of 1 MCS to physical time when the lattice point size is increased, and on the handling of nucleation. As a result, it was estimated that in three-dimensional grain growth the average grain size is proportional to the 1/3 power of the number of MCS, and the real time corresponding to 1 MCS is proportional to the third power of the lattice point size. 11 refs., 8 figs., 2 tabs.
Testing Lorentz Invariance Emergence in the Ising Model using Monte Carlo simulations
Dias Astros, Maria Isabel
2017-01-01
In the context of Lorentz invariance as an emergent phenomenon at low energy scales in the study of quantum gravity, a system composed of two interacting 3D Ising models (one with an anisotropy in one direction) was proposed. Two Monte Carlo simulations were run: one for the 2D Ising model and one for the target model. In both cases the observables (energy, magnetization, heat capacity and magnetic susceptibility) were computed for different lattice sizes, and a Binder cumulant was introduced in order to estimate the critical temperature of the systems. Moreover, the correlation function was calculated for the 2D Ising model.
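The 2D reference simulation described above can be sketched with a standard Metropolis update. Lattice size, temperature and sweep count below are illustrative assumptions; a Binder-cumulant analysis would additionally accumulate m² and m⁴ over many such runs at several lattice sizes.

```python
import math, random

random.seed(5)

L_SIZE = 16
BETA = 0.6   # inverse temperature below T_c (beta_c ~ 0.4407), so the lattice orders

def metropolis_ising(n_sweeps=400):
    # ordered start: all spins up
    spins = [[1] * L_SIZE for _ in range(L_SIZE)]
    for _ in range(n_sweeps):
        for _ in range(L_SIZE * L_SIZE):
            i, j = random.randrange(L_SIZE), random.randrange(L_SIZE)
            # energy change of flipping spin (i, j) with periodic neighbours
            nb = (spins[(i + 1) % L_SIZE][j] + spins[(i - 1) % L_SIZE][j]
                  + spins[i][(j + 1) % L_SIZE] + spins[i][(j - 1) % L_SIZE])
            dE = 2.0 * spins[i][j] * nb
            # Metropolis acceptance rule
            if dE <= 0 or random.random() < math.exp(-BETA * dE):
                spins[i][j] *= -1
    # magnetization per spin (absolute value)
    return abs(sum(sum(row) for row in spins)) / (L_SIZE * L_SIZE)

m = metropolis_ising()
```

Sweeping BETA across the transition and computing the Binder cumulant U = 1 - ⟨m⁴⟩/(3⟨m²⟩²) for several lattice sizes locates the critical temperature at the crossing point of the curves.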
Monte Carlo Simulation and Experimental Characterization of a Dual Head Gamma Camera
Rodrigues, S; Abreu, M C; Santos, N; Rato-Mendes, P; Peralta, L
2007-01-01
The GEANT4 Monte Carlo simulation and experimental characterization of the Siemens E.Cam Dual Head gamma camera hosted in the Particular Hospital of Algarve have been done. Imaging tests of thyroid and other phantoms have been made "in situ" and compared with the results obtained with the Monte Carlo simulation.
CONDENSED MONTE-CARLO SIMULATIONS FOR THE DESCRIPTION OF LIGHT TRANSPORT
GRAAFF, R; KOELINK, MH; DEMUL, FFM; ZIJLSTRA, WG; DASSEL, ACM; AARNOUDSE, JG
1993-01-01
A novel method, condensed Monte Carlo simulation, is presented that applies the results of a single Monte Carlo simulation for a given albedo μs/(μa + μs) to obtain results for other albedos; μs and μa are the scattering and absorption coefficients, respectively. The method
Monte Carlo Simulation for LINAC Standoff Interrogation of Nuclear Material
Energy Technology Data Exchange (ETDEWEB)
Clarke, Shaun D [ORNL; Flaska, Marek [ORNL; Miller, Thomas Martin [ORNL; Protopopescu, Vladimir A [ORNL; Pozzi, Sara A [ORNL
2007-06-01
The development of new techniques for the interrogation of shielded nuclear materials relies on the use of Monte Carlo codes to accurately simulate the entire system, including the interrogation source, the fissile target and the detection environment. The objective of this modeling effort is to develop analysis tools and methods, based on a relevant scenario, which may be applied to the design of future systems for active interrogation at a standoff. For the specific scenario considered here, the analysis focuses on providing the information needed to determine the type and optimum position of the detectors. This report describes the results of simulations for a detection system employing gamma rays to interrogate fissile and nonfissile targets. The simulations were performed using specialized versions of the codes MCNPX and MCNP-PoliMi. Both the prompt neutron and gamma-ray fluxes and the delayed neutron flux have been mapped in three dimensions. The time dependence of the prompt neutrons in the system has also been characterized. For this particular scenario, the flux maps generated with the Monte Carlo model indicate that the detectors should be placed approximately 50 cm behind the exit of the accelerator, 40 cm away from the vehicle, and 150 cm above the ground. This position minimizes the number of neutrons coming from the accelerator structure and also receives the maximum flux of prompt neutrons coming from the source. The lead shielding around the accelerator minimizes the gamma-ray background from the accelerator in this area. The number of delayed neutrons emitted from the target is approximately seven orders of magnitude less than the number of prompt neutrons emitted from the system. Therefore, in order to detect the delayed neutrons, the detectors should be active only after all prompt neutrons have scattered out of the system. Preliminary results have shown this time to be greater than 5 μs after the accelerator pulse. This type of system is illustrative of a
Optimizing the HLT Buffer Strategy with Monte Carlo Simulations
AUTHOR|(CDS)2266763
2017-01-01
This project aims to optimize the strategy of utilizing the disk buffer for the High Level Trigger (HLT) of the LHCb experiment with the help of Monte-Carlo simulations. A method is developed, which simulates the Event Filter Farm (EFF) -- a computing cluster for the High Level Trigger -- as a compound of nodes with different performance properties. In this way, the behavior of the computing farm can be analyzed at a deeper level than before. It is demonstrated that the current operating strategy might be improved when data taking is reaching a mid-year scheduled stop or the year-end technical stop. The processing time of the buffered data can be lowered by distributing the detector data according to the processing power of the nodes instead of the relative disk size as long as the occupancy level of the buffer is low enough. Moreover, this ensures that data taken and stored on the buffer at the same time is processed by different nodes nearly simultaneously, which reduces load on the infrastructure.
Monte Carlo simulation of dense polymer melts using event chain algorithms
Kampmann, Tobias A.; Boltz, Horst-Holger; Kierfeld, Jan
2015-07-01
We propose an efficient Monte Carlo algorithm for the off-lattice simulation of dense hard sphere polymer melts using cluster moves, called event chains, which allow for a rejection-free treatment of the excluded volume. Event chains also allow for an efficient preparation of initial configurations in polymer melts. We parallelize the event chain Monte Carlo algorithm to further increase simulation speeds and suggest additional local topology-changing moves ("swap" moves) to accelerate equilibration. By comparison with other Monte Carlo and molecular dynamics simulations, we verify that the event chain algorithm reproduces the correct equilibrium behavior of polymer chains in the melt. By comparing intrapolymer diffusion time scales, we show that event chain Monte Carlo algorithms can achieve simulation speeds comparable to optimized molecular dynamics simulations. The event chain Monte Carlo algorithm exhibits Rouse dynamics on short time scales. In the absence of swap moves, we find reptation dynamics on intermediate time scales for long chains.
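The rejection-free event-chain idea described above can be sketched for the simplest possible case, hard rods on a one-dimensional periodic ring rather than the polymer melts of the paper; the function names and parameters below are illustrative. A chosen rod moves until it contacts its neighbour, and the leftover displacement is transferred to the neighbour instead of being rejected:

```python
import random

def event_chain_sweep(x, rod_len, box, chain_len, rng):
    """One event-chain move for hard rods on a periodic ring of size box.

    x holds the rods' left-end positions in ring order. The chosen rod moves
    right until it contacts its neighbour; the leftover displacement is then
    transferred to the neighbour (rejection-free), until chain_len is used up.
    """
    n = len(x)
    i = rng.randrange(n)
    remaining = chain_len
    while remaining > 0.0:
        j = (i + 1) % n
        gap = (x[j] - x[i] - rod_len) % box   # free space ahead of rod i
        if gap > box - 1e-9:                  # guard against round-off at contact
            gap = 0.0
        step = min(remaining, gap)
        x[i] = (x[i] + step) % box
        remaining -= step
        i = j                                 # contact: pass the chain on
    return x
```

By construction no move ever creates an overlap, which is the invariant that makes the method rejection-free.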
The Monte Carlo Simulation Method for System Reliability and Risk Analysis
Zio, Enrico
2013-01-01
Monte Carlo simulation is one of the best tools for performing realistic analysis of complex systems, as it allows most of the limiting assumptions on system behavior to be relaxed. The Monte Carlo Simulation Method for System Reliability and Risk Analysis comprehensively illustrates the Monte Carlo simulation method and its application to reliability and system engineering. Readers are given a sound understanding of the fundamentals of Monte Carlo sampling and simulation and its application to realistic system modeling. While many of the topics rely on a high-level understanding of calculus, probability and statistics, simple academic examples are provided in support of the explanation of the theoretical foundations to facilitate comprehension of the subject matter. Case studies are introduced to demonstrate the practical value of the most advanced techniques. This detailed approach makes The Monte Carlo Simulation Method for System Reliability and Risk Analysis a key reference for senior undergra...
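The basic idea of Monte Carlo reliability estimation can be sketched for a hypothetical system of two parallel component pairs in series, with assumed independent failure probabilities (the structure and numbers are illustrative, not taken from the book):

```python
import random

def simulate_system(n_trials, q, rng):
    """Crude Monte Carlo estimate of the failure probability of a toy system:
    two parallel component pairs in series, i.e. the system works iff at least
    one of (A, B) works AND at least one of (C, D) works. q is the assumed
    independent failure probability of each component."""
    failures = 0
    for _ in range(n_trials):
        a, b, c, d = (rng.random() < q for _ in range(4))  # True = failed
        if (a and b) or (c and d):  # a pair fails only if both members fail
            failures += 1
    return failures / n_trials
```

For this toy structure the exact answer, 2q² − q⁴, is available for checking the estimate; the value of the method lies in systems too complex for such closed forms.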
High-speed evaluation of track-structure Monte Carlo electron transport simulations
Pasciak, A. S.; Ford, J. R.
2008-10-01
There are many instances where Monte Carlo simulation using the track-structure method for electron transport is necessary for the accurate analytical computation and estimation of dose and other tally data. Because of the large electron interaction cross-sections and highly anisotropic scattering behavior, the track-structure method requires an enormous amount of computation time. For microdosimetry, radiation biology and other applications involving small site and tally sizes, low electron energies or high-Z/low-Z material interfaces where the track-structure method is preferred, a computational device called a field-programmable gate array (FPGA) is capable of executing track-structure Monte Carlo electron-transport simulations as fast as or faster than a standard computer can complete an identical simulation using the condensed history (CH) technique. In this paper, data from FPGA-based track-structure electron-transport computations are presented for five test cases, from simple slab-style geometries to radiation biology applications involving electrons incident on endosteal bone surface cells. For the most complex test case presented, an FPGA is capable of evaluating track-structure electron-transport problems more than 500 times faster than a standard computer can perform the same track-structure simulation and with comparable accuracy.
Chang, C.; Ko, J. W.
2017-01-01
If soundly conducted, risk assessment could yield considerable savings for project investors. Monte Carlo Simulation (MCS) has been widely embraced by risk management guides as an instrumental tool for this purpose. This research aims to develop a new method to improve the rigor of MCS by establishing the link between parameter estimation and assessment of individual risk sources. The method is validated by virtue of its predictive power for the likelihood of a project being successful in sec...
Energy Technology Data Exchange (ETDEWEB)
Yoon, Dokun; Suh, Tae Suk [Catholic Univ. of Korea, Seoul (Korea, Republic of); Hong, Key Jo [Stanford Univ., Stanford (United States)
2014-05-15
The resulting neutron captures in {sup 10}B are used for radiation therapy. The occurrence point of the characteristic 478 keV prompt gamma rays agrees with the neutron capture point. If these prompt gamma rays are detected by external instruments such as a gamma camera or single photon emission computed tomography (SPECT), the therapy region can be monitored during treatment using images. A feasibility study and analysis of a reconstructed image using many projections (128) were conducted. The optimization of the detection system and a detailed neutron generator simulation were beyond the scope of this study. The possibility of extracting a 3D BNCT-SPECT image was confirmed using the Monte Carlo simulation and the OSEM algorithm. The quality of the prompt gamma ray SPECT image obtained from BNCT was evaluated quantitatively using three different boron uptake regions and was shown to depend on their locations and sizes. The prospects for obtaining an actual BNCT-SPECT image were also estimated from the quality of the simulated image and the simulation conditions. When multiple tumor regions must be treated using the BNCT method, a reasonable model for determining how many useful images can be obtained from SPECT can be provided to BNCT facilities based on the preceding imaging research. However, because the scope of this research was limited to checking the feasibility of 3D BNCT-SPECT image reconstruction using multiple projections, along with an evaluation of the image, some simulation conditions were taken from previous studies. In the future, a simulation will be conducted that includes optimized conditions for an actual BNCT facility, along with an imaging process for motion correction in BNCT. Although an excessively long simulation time was required to obtain enough events for image reconstruction, the feasibility of acquiring a 3D BNCT-SPECT image using multiple projections was confirmed using a Monte Carlo simulation, and a quantitative image
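OSEM is an ordered-subsets acceleration of the MLEM algorithm; a single-subset MLEM update can be sketched on a tiny illustrative system matrix (not the BNCT-SPECT geometry of the study above):

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Single-subset MLEM (the limit of OSEM with one subset):
    x <- x * [A^T (y / A x)] / [A^T 1].  A is the system matrix mapping the
    image x to expected projection counts y; all entries stay non-negative."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])   # sensitivity normalization
    for _ in range(n_iter):
        proj = A @ x                   # forward projection
        x *= (A.T @ (y / proj)) / sens # multiplicative EM update
    return x
```

With noiseless, consistent data the iteration converges to the true image; in practice the number of iterations (or subsets) is a regularization choice.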
Monte Carlo simulation for slip rate sensitivity analysis in Cimandiri fault area
Energy Technology Data Exchange (ETDEWEB)
Pratama, Cecep, E-mail: great.pratama@gmail.com [Graduate Program of Earth Science, Faculty of Earth Science and Technology, ITB, JalanGanesa no. 10, Bandung 40132 (Indonesia); Meilano, Irwan [Geodesy Research Division, Faculty of Earth Science and Technology, ITB, JalanGanesa no. 10, Bandung 40132 (Indonesia); Nugraha, Andri Dian [Global Geophysical Group, Faculty of Mining and Petroleum Engineering, ITB, JalanGanesa no. 10, Bandung 40132 (Indonesia)
2015-04-24
Slip rate is used to estimate the earthquake recurrence relationship, which has the greatest influence on hazard level. We examine the contribution of slip rate to Peak Ground Acceleration (PGA) in probabilistic seismic hazard maps (10% probability of exceedance in 50 years, or a 500-year return period). Hazard curves of PGA have been investigated for Sukabumi using PSHA (Probabilistic Seismic Hazard Analysis). We observe that the greatest influence on the hazard estimate is the crustal fault. A Monte Carlo approach has been developed to assess the sensitivity, and the properties of the Monte Carlo simulations have been assessed. The uncertainty and coefficient of variation of the slip rate for the Cimandiri Fault area have been calculated. We observe that the seismic hazard estimate is sensitive to the fault slip rate, with a seismic hazard uncertainty of about 0.25 g. For a specific site, we found the seismic hazard estimate for Sukabumi to be between 0.4904 and 0.8465 g, with uncertainty between 0.0847 and 0.2389 g and COV between 17.7% and 29.8%.
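The general shape of such a Monte Carlo sensitivity study, sampling an uncertain slip rate and propagating it to a hazard measure, can be sketched with an assumed, purely illustrative hazard relation (not the PSHA model used in the study above; the constant k and the square-root form are hypothetical):

```python
import random
import statistics

def hazard_cov(mean_slip, sigma_slip, n=20000, seed=0):
    """Monte Carlo sensitivity sketch: sample an uncertain fault slip rate
    (mm/yr) and propagate it through an assumed, purely illustrative hazard
    relation PGA = k * sqrt(slip_rate). Returns (mean, std, COV) of PGA."""
    rng = random.Random(seed)
    k = 0.25  # hypothetical scaling constant, g per sqrt(mm/yr)
    pga = []
    for _ in range(n):
        s = max(rng.gauss(mean_slip, sigma_slip), 0.0)  # no negative slip rates
        pga.append(k * s ** 0.5)
    mean = statistics.fmean(pga)
    std = statistics.stdev(pga)
    return mean, std, std / mean
```

The coefficient of variation of the output is the quantity of interest: it summarizes how strongly the slip-rate uncertainty propagates into the hazard estimate.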
Li, Junli; Li, Chunyan; Qiu, Rui; Yan, Congchong; Xie, Wenzhang; Wu, Zhen; Zeng, Zhi; Tung, Chuanjong
2015-09-01
The method of Monte Carlo simulation is a powerful tool for investigating the details of radiation biological damage at the molecular level. In this paper, a Monte Carlo code called NASIC (Nanodosimetry Monte Carlo Simulation Code) was developed. It includes a physical module, a pre-chemical module, a chemical module, a geometric module and a DNA damage module. The physical module can simulate the physical tracks of low-energy electrons in liquid water event by event. More than one set of inelastic cross sections was calculated by applying the dielectric function method of Emfietzoglou's optical-data treatments, with different optical data sets and dispersion models. In the pre-chemical module, the ionised and excited water molecules undergo dissociation processes. In the chemical module, the produced radiolytic chemical species diffuse and react. In the geometric module, an atomic model of 46 chromatin fibres in a spherical nucleus of a human lymphocyte was established. In the DNA damage module, the direct damage induced by the energy depositions of the electrons and the indirect damage induced by the radiolytic chemical species were calculated. The parameters were adjusted so that the simulation results agreed with the experimental results. In this paper, the influence of the inelastic cross sections and the vibrational excitation reaction on the parameters and the DNA strand break yields was studied. Further work on NASIC is underway.
Monte Carlo estimation of the electric field in stellarators
Bauer, F.; Betancourt, O.; Garabedian, P.; Ng, K. C.
1986-01-01
The BETA computer codes have been developed to study ideal magnetohydrodynamic equilibrium and stability of stellarators and to calculate neoclassical transport for electrons as well as ions by the Monte Carlo method. In this paper a numerical procedure is presented to select resonant terms in the electric potential so that the distribution functions and confinement times of the ions and electrons become indistinguishable. PMID:16593767
Kadoura, Ahmad
2011-06-06
Lennard-Jones (L-J) and Buckingham exponential-6 (exp-6) potential models were used to produce isotherms for methane at temperatures below and above the critical temperature. A molecular simulation approach, specifically Monte Carlo simulation, was employed to create these isotherms in both the canonical and Gibbs ensembles. Experiments in the canonical ensemble with each model were conducted to estimate pressures over a range of temperatures above the critical temperature of methane. The results were collected and compared to experimental data from the literature; both models showed excellent agreement with the experimental data. In parallel, experiments below the critical temperature were run in the Gibbs ensemble using the L-J model only. Comparison with experimental results gave a good fit with small deviations. The work was further developed by adding statistical studies in order to achieve a better understanding and interpretation of the quantities estimated by the simulation. Methane phase diagrams were successfully reproduced by an efficient molecular simulation technique with different potential models. This relatively simple demonstration shows how powerful molecular simulation methods can be; hence further applications to more complicated systems are being considered. Prediction of the phase behavior of elemental sulfur in sour natural gases has been an interesting and challenging field in the oil and gas industry. Determination of elemental sulfur solubility conditions helps avoid the problems caused by its dissolution during gas production and transportation. For this purpose, further enhancement of the methods used is to be considered in order to successfully simulate elemental sulfur phase behavior in sour natural gas mixtures.
A Monte Carlo simulation technique to determine the optimal portfolio
Directory of Open Access Journals (Sweden)
Hassan Ghodrati
2014-03-01
Full Text Available During the past few years, there have been several studies of portfolio management. One of the primary concerns in any stock market is to detect the risk associated with various assets. One of the recognized methods for measuring, forecasting, and managing the existing risk is Value at Risk (VaR), which has drawn much attention from financial institutions in recent years. VaR is a method for recognizing and evaluating risk which uses standard statistical techniques, and the method has increasingly been used in other fields as well. The present study measured the value at risk of 26 companies from the chemical industry on the Tehran Stock Exchange over the period 2009-2011 using the Monte Carlo simulation technique at the 95% confidence level. The variable used in the present study was the daily return resulting from daily changes in stock prices. Moreover, the optimal investment weight for each of the selected stocks was determined using a hybrid model called the Markowitz and Winker model. The results showed that, at the 95% confidence level, the maximum loss on the following day would not exceed 1,259,432 Rials.
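A minimal Monte Carlo VaR calculation of the kind described, simulating daily returns and reading off the loss quantile, might look like this; the normal return model and parameter values are illustrative, not the Tehran Stock Exchange data:

```python
import random

def monte_carlo_var(mu, sigma, n_sims=100000, confidence=0.95, seed=42):
    """One-day Value at Risk by Monte Carlo: simulate daily returns as
    normal(mu, sigma) draws and report the loss at the (1 - confidence)
    quantile as a positive fraction of portfolio value."""
    rng = random.Random(seed)
    returns = sorted(rng.gauss(mu, sigma) for _ in range(n_sims))
    cutoff = returns[int((1 - confidence) * n_sims)]
    return -cutoff
```

For mu = 0 and sigma = 2% the estimate should land near the analytic normal VaR of 1.645 × sigma ≈ 3.3%; the simulation approach earns its keep once the return model is no longer normal.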
Monte Carlo simulations of ionization potential depression in dense plasmas
Energy Technology Data Exchange (ETDEWEB)
Stransky, M., E-mail: stransky@fzu.cz [Department of Radiation and Chemical Physics, Institute of Physics ASCR, Na Slovance 2, 182 21 Prague 8 (Czech Republic)
2016-01-15
A particle-particle grand canonical Monte Carlo model with Coulomb pair potential interaction was used to simulate the modification of ionization potentials by electrostatic microfields. The Barnes-Hut tree algorithm [J. Barnes and P. Hut, Nature 324, 446 (1986)] was used to speed up calculations of the electric potential. Atomic levels were approximated as independent of the microfields, as was assumed in the original paper by Ecker and Kröll [Phys. Fluids 6, 62 (1963)]; however, the available levels were limited by the corresponding mean inter-particle distance. The code was tested on hydrogen and dense aluminum plasmas. The amount of depression was up to 50% higher in the Debye-Hückel regime for hydrogen plasmas; in the high-density limit, reasonable agreement was found with the Ecker-Kröll model for hydrogen plasmas and with the Stewart-Pyatt model [J. Stewart and K. Pyatt, Jr., Astrophys. J. 144, 1203 (1966)] for aluminum plasmas. Our 3D code is an improvement over the spherically symmetric simplifications of the Ecker-Kröll and Stewart-Pyatt models and is also not limited to high atomic numbers, as is the underlying Thomas-Fermi model used in the Stewart-Pyatt model.
Monte Carlo simulation of AB-copolymers with saturating bonds
Chertovich, A V; Khokhlov, A R; Bohr, J
2003-01-01
Structural transitions in a single AB-copolymer chain, where saturating bonds can be formed between A- and B-units, are studied by means of Monte Carlo computer simulations using the bond fluctuation model. Three transitions are found, coil-globule, coil-hairpin and globule-hairpin, depending on the nature of the particular AB-sequence: a statistical random sequence, a diblock sequence or a 'random-complementary' sequence (one half of such an AB-sequence is random with Bernoulli statistics while the other half is complementary to the first one). The properties of random-complementary sequences are closer to those of diblock sequences than to the properties of random sequences. The model (although quite rough) is expected to represent some basic features of real RNA molecules, i.e. the formation of the secondary structure of RNA due to hydrogen bonding of corresponding bases and stacking interactions of the base pairs in helixes. We introduce the notion of RNA-like copolymers and discuss in what sense the sequences studie...
Phase transitions in chiral magnets from Monte Carlo simulations
Belemuk, A. M.; Stishov, S. M.
2017-06-01
Motivated by the unusual temperature dependence of the specific heat in MnSi, comprising a combination of a sharp first-order feature accompanied by a broad hump, we study the extended Heisenberg model with competing exchange J and anisotropic Dzyaloshinskii-Moriya D interactions in a broad range of ratio D /J . Utilizing classical Monte Carlo simulations we find an evolution of the temperature dependence of the specific heat and magnetic susceptibility with variation of D /J . Combined with an analysis of the Bragg intensity patterns, we clearly demonstrate that the observed puzzling hump in the specific heat of MnSi originates from smearing out of the virtual ferromagnetic second-order phase transition by helical fluctuations which manifest themselves in the transient multiple spiral state. These fluctuations finally condense into the helical ordered phase via a first-order phase transition, as is indicated by the specific heat peak. Thus the model demonstrates a crossover from a second-order to a first-order transition with increasing D /J . Upon further increasing D /J another crossover from a first-order to a second-order transition takes place in the system. Moreover, the results of the calculations clearly indicate that these competing interactions are the primary factors responsible for the appearance of first-order phase transitions in helical magnets with the Dzyaloshinskii-Moriya interaction.
Titration of hydrophobic polyelectrolytes using Monte Carlo simulations
Ulrich, Serge; Laguecir, Abohachem; Stoll, Serge
2005-03-01
The conformation and titration curves of weak (or annealed) hydrophobic polyelectrolytes have been examined using Monte Carlo simulations with screened Coulomb potentials in the grand canonical ensemble. The influence of the ionic concentration, pH, and the presence of hydrophobic interactions has been systematically investigated. A large number of conformations, such as extended, pearl-necklace, cigar-shaped, and collapsed structures, resulting from the subtle balance of short-range hydrophobic attractive interactions and long-range electrostatic repulsive interactions between the monomers, have been observed. Titration curves were calculated by adjusting the pH-pK0 values (pK0 represents the intrinsic dissociation constant of an isolated monomer) and then calculating the ionization degree α of the polyelectrolyte. Important transitions related to cascades of conformational changes were observed in the titration curves, mainly at low ionic concentration and in the presence of strong hydrophobic interactions. We demonstrate that hydrophobic interactions play an important role in the acid-base properties of a polyelectrolyte by promoting the formation of compact conformations and hence decreasing the polyelectrolyte's degree of ionization for a given pH-pK0 value.
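The titration move itself, Metropolis sampling of protonation states at fixed pH-pK0, can be sketched for the idealized case of non-interacting sites, where the simulation should recover the Henderson-Hasselbalch ionization degree. This deliberately omits the electrostatic and hydrophobic interactions that are central to the study above:

```python
import math
import random

def titrate(n_sites, ph, pk0, n_sweeps=2000, seed=0):
    """Metropolis titration of independent weak-acid sites (no electrostatics):
    ionizing one site costs dE = ln(10) * (pK0 - pH) in units of kT.
    Returns the mean ionization degree alpha, which for non-interacting sites
    should approach the Henderson-Hasselbalch value 1 / (1 + 10**(pk0 - ph))."""
    rng = random.Random(seed)
    ionized = [False] * n_sites
    beta_de = math.log(10.0) * (pk0 - ph)  # free-energy cost of ionizing a site
    total, samples = 0.0, 0
    for sweep in range(n_sweeps):
        for _ in range(n_sites):
            i = rng.randrange(n_sites)
            de = -beta_de if ionized[i] else beta_de
            if de <= 0 or rng.random() < math.exp(-de):
                ionized[i] = not ionized[i]
        if sweep >= n_sweeps // 2:  # discard the first half as burn-in
            total += sum(ionized) / n_sites
            samples += 1
    return total / samples
```

In the full problem the flip energy also contains the electrostatic interaction with all other charged monomers, which is what shifts the titration curve away from the ideal one.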
A subset multicanonical Monte Carlo method for simulating rare failure events
Chen, Xinjuan; Li, Jinglai
2017-09-01
Estimating failure probabilities of engineering systems is an important problem in many engineering fields. In this work we consider such problems where the failure probability is extremely small (e.g. ≤ 10^-10). In this case, standard Monte Carlo methods are not feasible due to the extraordinarily large number of samples required. To address these problems, we propose an algorithm that combines the main ideas of two very powerful failure probability estimation approaches: the subset simulation (SS) and the multicanonical Monte Carlo (MMC) methods. Unlike standard MMC, which samples the entire domain of the input parameter in each iteration, the proposed subset MMC algorithm adaptively performs MMC simulations in a subset of the state space, which improves the sampling efficiency. With numerical examples we demonstrate that the proposed method is significantly more efficient than both the SS and MMC methods. Moreover, like standard MMC, the proposed algorithm can reconstruct the complete distribution function of the parameter of interest and thus provides more information than just the failure probabilities of the systems.
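The subset-simulation ingredient can be sketched on a one-dimensional toy problem, estimating P(X > t) for a standard normal X. This illustrates plain SS with intermediate thresholds chosen as sample quantiles, not the subset-MMC hybrid proposed in the paper; the chain length and proposal width are arbitrary choices:

```python
import math
import random

def subset_simulation(threshold, n=2000, p0=0.1, seed=1):
    """Subset simulation sketch for p = P(X > threshold), X ~ N(0, 1).
    Each level conditions on exceeding an intermediate threshold chosen as the
    (1 - p0) sample quantile, so only ~p0*n samples are 'rare' at each level."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    prob = 1.0
    for _ in range(50):                      # safety cap on the number of levels
        xs.sort(reverse=True)
        n_keep = int(p0 * n)
        b = xs[n_keep - 1]                   # intermediate threshold
        if b >= threshold:                   # final level reached
            return prob * sum(x > threshold for x in xs) / n
        prob *= p0
        seeds = xs[:n_keep]
        xs = []
        for s in seeds:                      # regrow n samples with MCMC chains
            x = s
            for _ in range(int(1 / p0)):
                y = x + rng.uniform(-1.0, 1.0)
                # Metropolis for N(0,1) restricted to (b, inf)
                if y > b and rng.random() < math.exp((x * x - y * y) / 2):
                    x = y
                xs.append(x)
    return prob
```

The product of the level probabilities p0 times the final exceedance fraction gives the estimate; for threshold 4 the true value is about 3.2e-5, far below what 2000 plain Monte Carlo samples could resolve.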
Shielding analyses of an AB-BNCT facility using Monte Carlo simulations and simplified methods
Directory of Open Access Journals (Sweden)
Lai Bo-Lun
2017-01-01
Full Text Available Accurate Monte Carlo simulations and simplified methods were used to investigate the shielding requirements of a hypothetical accelerator-based boron neutron capture therapy (AB-BNCT) facility that included an accelerator room and a patient treatment room. The epithermal neutron beam for BNCT purposes was generated by coupling a neutron production target with a specially designed beam shaping assembly (BSA), which was embedded in the partition wall between the two rooms. Neutrons were produced from a beryllium target bombarded by 1-mA 30-MeV protons. The MCNP6-generated surface sources around all the exterior surfaces of the BSA were established to facilitate repeated Monte Carlo shielding calculations. In addition, three simplified models based on a point-source line-of-sight approximation were developed and their predictions were compared with the reference Monte Carlo results. The comparison determined which model resulted in better dose estimation, forming the basis of future design activities for the first AB-BNCT facility in Taiwan.
Decision Assistance in Risk Assessment – Monte Carlo Simulations
Directory of Open Access Journals (Sweden)
Emil BURTESCU
2012-01-01
Full Text Available High security must be a primary and permanent concern of the leadership of an organization, and it must be ensured at all times. For this, a risk analysis is compulsory and must be performed during the risk management cycle. Security risk analysis and security risk management components mostly use estimated data throughout the process, so the further evolution of events might not be reflected in the results obtained. Given that hazard must be modeled, this concern is entirely normal; we must therefore find a way to model the events to which a company is exposed, events that damage informational security. In this paper we use the Monte Carlo method to model a set of security parameters that are used in security risk analysis. The frequency of unwanted events and the damage and impact they cause are our main focus, and the approach is applied to both the quantitative and qualitative security risk analysis approaches. The results obtained will serve as a guide for experts toward better allocation of resources for reducing or eliminating risk and will also warn the leadership about certain absolutely necessary investments.
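A Monte Carlo model of the kind described, sampling event frequencies and damages to build an annual loss distribution, might be sketched as follows; the Poisson frequency and lognormal damage assumptions and all parameter values are purely illustrative, not calibrated to any real incident data:

```python
import math
import random

def annual_loss_distribution(rate, mu, sigma, n_years=20000, seed=7):
    """Monte Carlo model of annual security losses: the number of incidents
    per year is Poisson(rate) and each incident's damage is lognormal(mu,
    sigma). Returns the mean annual loss and its 95th percentile."""
    rng = random.Random(seed)
    losses = []
    for _ in range(n_years):
        # sample a Poisson count by inversion of the CDF
        k, p = 0, math.exp(-rate)
        cum, target = p, rng.random()
        while target > cum:
            k += 1
            p *= rate / k
            cum += p
        losses.append(sum(rng.lognormvariate(mu, sigma) for _ in range(k)))
    losses.sort()
    mean = sum(losses) / n_years
    p95 = losses[int(0.95 * n_years)]
    return mean, p95
```

The tail percentile, not the mean, is usually what drives the investment decision: it tells the leadership how bad a plausibly unlucky year can get.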
Monte Carlo simulation of cyclotron resonant scattering features
Schwarm, Fritz-Walter; Schönherr, Gabriele; Wilms, Joern
In the regime of very high magnetic fields, on the order of 10^12 G, the electron momenta perpendicular to the field are quantized due to the discrete Landau levels populated by the electrons. Parallel to the magnetic field the electrons form a continuous momentum distribution. The seed photon continuum is generated, for example, by bremsstrahlung or blackbody radiation. Resonant scattering of a seed photon by an electron may excite the electron to a higher Landau level. The subsequent de-excitation of the excited electron produces additional photons close to the resonance energy. In this way complex cyclotron resonant scattering features (CRSFs) are imprinted on the continuum radiation. Due to the continuous electron momentum distribution parallel to the magnetic field, the scattering photon's energy and angle are mixed by the Lorentz transformation to the electron rest frame, in which the resonant scattering process is carried out. Therefore synthetic spectra of cyclotron lines cannot be accurately calculated analytically. CRSFs have been observed in more than a dozen accreting X-ray binaries. They provide much information about the accretion structure in the observed systems, since the exact line shape is sensitive to many parameters in the column. Typical parameters are, for example, the geometry and the spectral properties of the seed photon sources, the geometry of the column, or the magnetic field and temperature within the column. We present an overview of the Monte Carlo approach to cyclotron line simulation and show results from our cyclosim code. Furthermore we investigate the influence of the accretion geometry on the cyclotron line shape. Our code enables us to perform fully relativistic simulations including the correct cyclotron scattering cross sections and the possibility to cope with parameter gradients, such as magnetic field, temperature, or velocity gradients within the accretion column. Using a Green's function approach these simulations
Monte Carlo simulation on kinetics of batch and semi-batch free radical polymerization
Shao, Jing
2015-10-27
Based on Monte Carlo simulation technology, we proposed a hybrid routine which combines the reaction mechanism with coarse-grained molecular simulation to study the kinetics of free radical polymerization. By comparing with previous experimental and simulation studies, we showed the capability of our Monte Carlo scheme to represent polymerization kinetics in batch and semi-batch processes. Various kinetic information, such as instantaneous monomer conversion, molecular weight, and polydispersity, is readily calculated from the Monte Carlo simulation. Kinetic constants such as the polymerization rate k_p are determined in the simulation without the "steady-state" hypothesis. We explored the mechanisms behind the variation of polymerization kinetics observed in previous studies, as well as polymerization-induced phase separation. Our Monte Carlo simulation scheme is versatile for studying polymerization kinetics in batch and semi-batch processes.
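The kinetic Monte Carlo idea can be sketched with a Gillespie-type simulation of propagation and termination only, a drastic simplification of the full free-radical mechanism (no initiation or transfer steps); the rate constants and counts are arbitrary illustration values:

```python
import math
import random

def gillespie_polymerization(n_monomer=5000, n_radical=20,
                             kp=1.0, kt=0.1, seed=3):
    """Kinetic Monte Carlo (Gillespie) sketch of free radical propagation
    (R + M -> R, chain grows) and termination by combination (R + R -> dead)
    with illustrative rate constants kp, kt in arbitrary units.
    Returns final monomer conversion and the mean dead-chain length."""
    rng = random.Random(seed)
    m = n_monomer
    chains = [0] * n_radical          # chain length carried by each live radical
    dead = []
    t = 0.0
    while m > 0 and len(chains) > 1:
        a_prop = kp * len(chains) * m                      # propagation propensity
        a_term = kt * len(chains) * (len(chains) - 1) / 2  # termination propensity
        a_tot = a_prop + a_term
        t += -math.log(rng.random()) / a_tot               # exponential waiting time
        if rng.random() < a_prop / a_tot:                  # propagation event
            chains[rng.randrange(len(chains))] += 1
            m -= 1
        else:                                              # termination event
            i, j = rng.sample(range(len(chains)), 2)
            dead.append(chains[i] + chains[j])
            for idx in sorted((i, j), reverse=True):
                chains.pop(idx)
    conversion = 1 - m / n_monomer
    mean_len = sum(dead) / len(dead) if dead else 0.0
    return conversion, mean_len
```

Because every event is drawn from the exact propensities, no steady-state assumption on the radical concentration is needed, which is the point made in the abstract above.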
Directory of Open Access Journals (Sweden)
Timothy J Kyng
2014-03-01
Full Text Available The economic valuation of complex financial contracts is often done using Monte-Carlo simulation. We show how to implement this approach using Excel. We discuss Monte-Carlo evaluation for standard single asset European options and then demonstrate how the basic ideas may be extended to evaluate options with exotic multi-asset multi-period features. Single asset option evaluation becomes a special case. We use a typical Executive Stock Option to motivate the discussion, which we analyse using novel theory developed in our previous works. We demonstrate the simulation of the multivariate normal distribution and the multivariate Log-Normal distribution using the Cholesky Square Root of a covariance matrix for replicating the correlation structure in the multi-asset, multi period simulation required for estimating the economic value of the contract. We do this in the standard Black Scholes framework with constant parameters. Excel implementation provides many pedagogical merits due to its relative transparency and simplicity for students. This approach also has relevance to industry due to the widespread use of Excel by practitioners and for graduates who may desire to work in the finance industry. This allows students to be able to price complex financial contracts for which an analytic approach is intractable.
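The core simulation step described above, generating correlated terminal asset prices from a Cholesky factor and averaging discounted payoffs, can be sketched in Python instead of Excel; this assumes Black-Scholes dynamics with constant parameters and a simple two-asset basket call, with all numbers illustrative:

```python
import math
import random

def basket_call_mc(s0, strike, r, sigma, rho, t, n_paths=100000, seed=11):
    """Monte Carlo value of a European call on the average of two assets under
    Black-Scholes dynamics with constant (illustrative) parameters. Correlated
    terminal log-returns are built from the Cholesky factor of the 2x2
    correlation matrix [[1, rho], [rho, 1]]."""
    rng = random.Random(seed)
    chol = [[1.0, 0.0], [rho, math.sqrt(1.0 - rho * rho)]]  # Cholesky factor
    drift = [(r - 0.5 * v * v) * t for v in sigma]
    total = 0.0
    for _ in range(n_paths):
        z = [rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)]      # independent normals
        w = [chol[0][0] * z[0],
             chol[1][0] * z[0] + chol[1][1] * z[1]]          # correlated normals
        st = [s0[i] * math.exp(drift[i] + sigma[i] * math.sqrt(t) * w[i])
              for i in range(2)]
        total += max(0.5 * (st[0] + st[1]) - strike, 0.0)    # basket payoff
    return math.exp(-r * t) * total / n_paths
```

Setting rho = 1 makes the two assets identical, so the basket price collapses to the single-asset Black-Scholes value, a convenient sanity check on the correlation machinery.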
Depth-of-interaction estimates in pixelated scintillator sensors using Monte Carlo techniques
Energy Technology Data Exchange (ETDEWEB)
Sharma, Diksha [Division of Imaging, Diagnostics and Software Reliability, Center for Devices and Radiological Health, Food and Drug Administration, 10903 New Hampshire Ave, Silver Spring, MD 20993 (United States); Sze, Christina; Bhandari, Harish; Nagarkar, Vivek [Radiation Monitoring Devices Inc., Watertown, MA (United States); Badano, Aldo, E-mail: aldo.badano@fda.hhs.gov [Division of Imaging, Diagnostics and Software Reliability, Center for Devices and Radiological Health, Food and Drug Administration, 10903 New Hampshire Ave, Silver Spring, MD 20993 (United States)
2017-01-01
Image quality in thick scintillator detectors can be improved by minimizing parallax errors through depth-of-interaction (DOI) estimation. A novel sensor for low-energy single photon imaging having a thick, transparent, crystalline pixelated micro-columnar CsI:Tl scintillator structure has been described, with possible future application in small-animal single photon emission computed tomography (SPECT) imaging when using thicker structures under development. In order to understand the fundamental limits of this new structure, we introduce cartesianDETECT2, an open-source optical transport package that uses Monte Carlo methods to obtain estimates of DOI for improving spatial resolution of nuclear imaging applications. Optical photon paths are calculated as a function of varying simulation parameters such as columnar surface roughness, bulk absorption, and top-surface absorption. We use scanning electron microscope images to estimate appropriate surface roughness coefficients. Simulation results are analyzed to model and establish patterns between DOI and photon scattering. The effect of varying starting locations of optical photons on the spatial response is studied. Bulk and top-surface absorption fractions were varied to investigate their effect on spatial response as a function of DOI. We investigated the accuracy of our DOI estimation model for a particular screen with various training and testing sets; in all cases the percent error between the estimated and actual DOI over the majority of the detector thickness was within ±5%, with a maximum error of up to ±10% at deeper DOIs. In addition, we found that cartesianDETECT2 is computationally five times more efficient than MANTIS. Findings indicate that DOI estimates can be extracted from a double-Gaussian model of the detector response. We observed that our model predicts DOI in pixelated scintillator detectors reasonably well.
Energy Technology Data Exchange (ETDEWEB)
Radhakrishnan, B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Eisenbach, M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Burress, Timothy A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2017-01-24
A new scaling approach has been proposed for the spin-exchange and dipole–dipole interaction energies as a function of system size. The computed scaling laws are used in atomistic Monte Carlo simulations of magnetic moment evolution to predict the transition from a single domain to a vortex structure as the system size increases. The width of a 180° domain wall extracted from the simulated structures is in close agreement with experimental values for an Fe–Si alloy. In conclusion, the transition size from a single domain to a vortex structure is also in close agreement with theoretically predicted and experimentally measured values for Fe.
de Mul, F.F.M.; Steenbergen, Wiendelt; Greve, Jan
1999-01-01
Doppler Monte Carlo (DMC) simulations of the transport of light through turbid media, e.g., tissue, can be used to predict or to interpret measurements of the blood perfusion of tissue by laser‐Doppler perfusion flowmetry. We describe the physical and mathematical background of Doppler Monte Carlo
On an efficient multiple time step Monte Carlo simulation of the SABR model
A. Leitao Rodriguez (Álvaro); L.A. Grzelak (Lech Aleksander); C.W. Oosterlee (Cornelis)
2017-01-01
In this paper, we will present a multiple time step Monte Carlo simulation technique for pricing options under the Stochastic Alpha Beta Rho model. The proposed method is an extension of the one time step Monte Carlo method that we proposed in an accompanying paper Leitao et al. [Appl.
On an efficient multiple time step Monte Carlo simulation of the SABR model
Leitao Rodriguez, A.; Grzelak, L.A.; Oosterlee, C.W.
2017-01-01
In this paper, we will present a multiple time step Monte Carlo simulation technique for pricing options under the Stochastic Alpha Beta Rho model. The proposed method is an extension of the one time step Monte Carlo method that we proposed in an accompanying paper Leitao et al. [Appl. Math.
Deficiency in Monte Carlo simulations of coupled neutron-gamma-ray fields
Maleka, Peane P.; Maucec, Marko; de Meijer, Robert J.
2011-01-01
The deficiency in Monte Carlo simulations of coupled neutron-gamma-ray field was investigated by benchmarking two simulation codes with experimental data. Simulations showed better correspondence with the experimental data for gamma-ray transport only. In simulations, the neutron interactions with
Herwig: The Evolution of a Monte Carlo Simulation
CERN. Geneva
2015-01-01
Monte Carlo event generation has seen significant developments in the last 10 years starting with preparation for the LHC and then during the first LHC run. I will discuss the basic ideas behind Monte Carlo event generators and then go on to discuss these developments, focussing on the developments in Herwig(++) event generator. I will conclude by presenting the current status of event generation together with some results of the forthcoming new version of Herwig, Herwig 7.
Modeling low-coherence enhanced backscattering using Monte Carlo simulation.
Subramanian, Hariharan; Pradhan, Prabhakar; Kim, Young L; Liu, Yang; Li, Xu; Backman, Vadim
2006-08-20
Constructive interference between coherent waves traveling time-reversed paths in a random medium gives rise to the enhancement of light scattering observed in directions close to backscattering. This phenomenon is known as enhanced backscattering (EBS). According to diffusion theory, the angular width of an EBS cone is proportional to the ratio of the wavelength of light lambda to the transport mean-free-path length l(s)* of a random medium. In biological media a large l(s)* approximately 0.5-2 mm > lambda results in an extremely small (approximately 0.001 degrees) angular width of the EBS cone, making the experimental observation of such narrow peaks difficult. Recently, the feasibility of observing EBS under low spatial coherence illumination (spatial coherence length Lsc < l(s)*) was demonstrated; low spatial coherence acts as a spatial filter rejecting long path lengths and thus results in an increase of more than 100 times in the angular width of low-coherence EBS (LEBS) cones. However, a conventional diffusion approximation-based model of EBS has not been able to explain such a dramatic increase in LEBS width. We present a photon random walk model of LEBS by using Monte Carlo simulation to elucidate the mechanism accounting for the unprecedented broadening of the LEBS peaks. Typically, the exit angles of the scattered photons are not considered in modeling EBS in the diffusion regime. We show that small exit angles are highly sensitive to low-order scattering, which is crucial for accurate modeling of LEBS. Our results show that the predictions of the model are in excellent agreement with the experimental data.
Monte Carlo computer simulation of sedimentation of charged hard spherocylinders
Energy Technology Data Exchange (ETDEWEB)
Viveros-Méndez, P. X., E-mail: xviveros@fisica.uaz.edu.mx; Aranda-Espinoza, S. [Unidad Académica de Física, Universidad Autónoma de Zacatecas, Calzada Solidaridad esq. Paseo, La Bufa s/n, 98060 Zacatecas, Zacatecas, México (Mexico); Gil-Villegas, Alejandro [Departamento de Ingeniería Física, División de Ciencias e Ingenierías, Campus León, Universidad de Guanajuato, Loma del Bosque 103, Lomas del Campestre, 37150 León, Guanajuato, México (Mexico)
2014-07-28
In this article we present a NVT Monte Carlo computer simulation study of sedimentation of an electroneutral mixture of oppositely charged hard spherocylinders (CHSC) with aspect ratio L/σ = 5, where L and σ are the length and diameter of the cylinder and hemispherical caps, respectively, for each particle. This system is an extension of the restricted primitive model for spherical particles, where L/σ = 0, and it is assumed that the ions are immersed in a structureless solvent, i.e., a continuum with dielectric constant D. The system consisted of N = 2000 particles and the Wolf method was implemented to handle the coulombic interactions of the inhomogeneous system. Results are presented for different values of the strength ratio between the gravitational and electrostatic interactions, Γ = (mgσ)/(e{sup 2}/Dσ), where m is the mass per particle, e is the electron's charge and g is the gravitational acceleration value. A semi-infinite simulation cell was used with dimensions L{sub x} ≈ L{sub y} and L{sub z} = 5L{sub x}, where L{sub x}, L{sub y}, and L{sub z} are the box dimensions in Cartesian coordinates, and the gravitational force acts along the z-direction. Sedimentation effects were studied by looking at every layer formed by the CHSC along the gravitational field. By increasing Γ, particles tend to get more packed at each layer and to arrange in local domains with an orientational ordering along two perpendicular axes, a feature not observed in the uncharged system with the same hard-body geometry. This type of arrangement, known as the tetratic phase, has been observed in two-dimensional systems of hard rectangles and rounded hard squares. In this way, the coupling of gravitational and electric interactions in the CHSC system induces the arrangement of particles in layers, with the formation of quasi-two-dimensional tetratic phases near the surface.
Directory of Open Access Journals (Sweden)
Surendra P. Verma
2014-01-01
Full Text Available Using highly precise and accurate Monte Carlo simulations of 20,000,000 replications and 102 independent simulation experiments with extremely low simulation errors and total uncertainties, we evaluated the performance of four single outlier discordancy tests (Grubbs test N2, Dixon test N8, skewness test N14, and kurtosis test N15 for normal samples of sizes 5 to 20. Statistical contaminations of a single observation resulting from parameters called δ from ±0.1 up to ±20 for modeling the slippage of central tendency or ε from ±1.1 up to ±200 for slippage of dispersion, as well as no contamination (δ=0 and ε=±1, were simulated. Because of the use of precise and accurate random and normally distributed simulated data, very large replications, and a large number of independent experiments, this paper presents a novel approach for precise and accurate estimations of power functions of four popular discordancy tests and, therefore, should not be considered as a simple simulation exercise unrelated to probability and statistics. From both criteria of the Power of Test proposed by Hayes and Kinsella and the Test Performance Criterion of Barnett and Lewis, Dixon test N8 performs less well than the other three tests. The overall performance of these four tests could be summarized as N2≅N15>N14>N8.
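The power-function estimation described in this abstract can be illustrated with a much smaller simulation. The sketch below (hypothetical sample size, contamination shift, and replication counts far below the paper's 20,000,000) estimates the power of a two-sided Grubbs-type test against slippage of central tendency.

```python
import numpy as np

def grubbs_stat(x):
    # Two-sided Grubbs statistic: largest absolute deviation from the
    # sample mean, scaled by the sample standard deviation
    return np.max(np.abs(x - x.mean())) / x.std(ddof=1)

def power_of_grubbs(n=10, delta=5.0, reps=20_000, alpha=0.05, seed=1):
    """Monte Carlo power estimate: simulate clean normal samples to get
    a critical value, then contaminate one observation by shifting it by
    delta (slippage of central tendency) and count rejections."""
    rng = np.random.default_rng(seed)
    clean = np.array([grubbs_stat(rng.standard_normal(n)) for _ in range(reps)])
    crit = np.quantile(clean, 1.0 - alpha)
    hits = 0
    for _ in range(reps):
        x = rng.standard_normal(n)
        x[0] += delta                      # single contaminated observation
        hits += grubbs_stat(x) > crit
    return hits / reps

power = power_of_grubbs()
```

The same loop, repeated over a grid of delta values and sample sizes, traces out the power functions that the paper tabulates.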
Evaluation of cobalt-60 energy deposit in mouse and monkey using Monte Carlo simulation
Energy Technology Data Exchange (ETDEWEB)
Woo, Sang Keun; Kim, Wook; Park, Yong Sung; Kang, Joo Hyun; Lee, Yong Jin [Korea Institute of Radiological and Medical Sciences, KIRAMS, Seoul (Korea, Republic of); Cho, Doo Wan; Lee, Hong Soo; Han, Su Cheol [Jeonbuk Department of Inhalation Research, Korea Institute of toxicology, KRICT, Jeongeup (Korea, Republic of)
2016-12-15
Absorbed doses can be calculated using the Monte Carlo N-Particle transport code (MCNP). For internal radiotherapy, absorbed dose is conventionally calculated with software such as OLINDA/EXM or by Monte Carlo simulation. However, OLINDA/EXM cannot calculate individual absorbed doses or doses to non-standard organs such as tumors, whereas Monte Carlo simulation can calculate specific absorbed doses for non-standard organs using individual CT images. For external radiotherapy, absorbed dose can be calculated from the energy deposited in specific organs using Monte Carlo simulation. The energy deposited in each organ differs between species, and even within a species, because organ size, position, and density differ. The aim of this study was to individually evaluate the cobalt-60 energy deposit in mouse and monkey using Monte Carlo simulation. The absorbed energy in the mouse heart was 54.6-fold higher than in the monkey heart; likewise, the factors were 88.4 for lung, 16.0 for liver, and 29.4 for urinary bladder. This indicates that the separation and mass of the organs affect the absorbed energy. These results may help in calculating absorbed doses and in producing more accurate plans for external beam radiotherapy and internal radiotherapy.
Transportation Cost Assessment by Means of a Monte Carlo Simulation in a Transshipment Model
Directory of Open Access Journals (Sweden)
Gordana Dukić
2008-09-01
Full Text Available The task of transport management is to organize the transportof goods from a number of sources to a number of destinationswith minimum total costs. The basic transportation modelassumes direct transport of goods from a source to a destinationwith constant unit transportation costs. In practice, however,goods are frequently transported through several transientpoints where they need to be transshipped. In such circumstancestransport planning and organization become increasinglycomplex. This is especially noticeable in water transport.Most of the issues are directly connected to port operations, asthey are the transshipment hubs. Since transportation is under anumber of influences, in today 's turbulent operating conditionsthe assumption on fixed unit transportation costs cannot betaken as realistic. In order to improve decision making in thetransportation domain, this paper will present a stochastictransshipment model in which cost estimate is based on MonteCarlo simulation. Simulated values of unit costs are used to devisean adequate linear programming model, the solving ofwhich determines the values of total minimum transportationcosts. After repeating the simulation for a sufficient number oftimes, the distribution of total minimum costs can be formed,which is the basis for the pertinent confidence interval estimation.It follows that the design, testing and application of thepresented model requires a combination of quantitative optimizationmethods, simulation and elements of inferential statistics,all with the support of computer and adequate software.
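The scheme described in this abstract — simulate unit costs, solve the resulting linear program, and build the distribution of total minimum costs — can be sketched as follows. The network, cost distribution, and all numbers are invented for illustration, and `scipy.optimize.linprog` stands in for whatever LP solver the authors used.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 2-source x 2-destination problem; unit costs are uncertain
# and modeled as normal draws around nominal values (all numbers invented).
supply = np.array([30.0, 50.0])
demand = np.array([40.0, 40.0])
nominal = np.array([[4.0, 6.0], [5.0, 3.0]])   # cost per unit, source x dest

# Equality constraints: each source ships its supply, each destination
# receives its demand (variables ordered x11, x12, x21, x22)
A_eq = np.array([[1, 1, 0, 0],
                 [0, 0, 1, 1],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1]], dtype=float)
b_eq = np.concatenate([supply, demand])

rng = np.random.default_rng(42)
totals = []
for _ in range(500):                            # Monte Carlo over unit costs
    c = rng.normal(nominal, 0.5).ravel()        # simulated unit costs
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    totals.append(res.fun)

totals = np.array(totals)
se = totals.std(ddof=1) / np.sqrt(totals.size)
ci = (totals.mean() - 1.96 * se, totals.mean() + 1.96 * se)
```

The array `totals` is the simulated distribution of total minimum costs, and `ci` is the corresponding confidence interval estimate for its mean.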
On-the-fly nuclear data processing methods for Monte Carlo simulations of fast spectrum systems
Energy Technology Data Exchange (ETDEWEB)
Walsh, Jon [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-08-31
The presentation summarizes work performed over summer 2015 related to Monte Carlo simulations. A flexible probability table interpolation scheme has been implemented and tested with results comparing favorably to the continuous phase-space on-the-fly approach.
Monte Carlo simulation of diffuse attenuation coefficient in presence of non uniform profiles
Digital Repository Service at National Institute of Oceanography (India)
Desa, E.S.; Desai, R.G.P.; Desa, B.A.E.
This paper presents a Monte Carlo simulation of the vertical depth structure of the downward attenuation coefficient (K sub(d)), and the irradiance reflectance (R) for a given profile of chlorophyll. The results are in quantitative agreement...
Alerstam, Erik; Svensson, Tomas; Andersson-Engels, Stefan
2008-01-01
General-purpose computing on graphics processing units (GPGPU) is shown to dramatically increase the speed of Monte Carlo simulations of photon migration. In a standard simulation of time-resolved photon migration in a semi-infinite geometry, the proposed methodology executed on a low-cost graphics processing unit (GPU) is a factor of 1000 faster than the same simulation performed on a single standard processor. In addition, we address important technical aspects of GPU-based simulations of photon migration. The technique is expected to become a standard method in Monte Carlo simulations of photon migration.
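For readers unfamiliar with the underlying algorithm, a deliberately simplified CPU sketch of time-resolved photon migration in a semi-infinite medium is given below (isotropic scattering and invented optical coefficients; the GPU implementation discussed above parallelizes exactly this kind of independent per-photon loop).

```python
import numpy as np

def photon_migration(n_photons=2000, mu_s=1.0, mu_a=0.01,
                     max_steps=5000, seed=0):
    """Per-photon random walk in a semi-infinite medium (z > 0):
    exponential step lengths, isotropic scattering, absorption handled
    by exponential path-length weighting. Units: mm and mm^-1."""
    rng = np.random.default_rng(seed)
    c = 0.3 / 1.4                 # light speed in tissue, mm/ps (n = 1.4)
    exit_times, weights = [], []
    for _ in range(n_photons):
        pos = np.zeros(3)
        direction = np.array([0.0, 0.0, 1.0])   # launched into the medium
        path = 0.0
        for _ in range(max_steps):
            step = rng.exponential(1.0 / mu_s)
            pos = pos + step * direction
            path += step
            if pos[2] < 0.0:                    # escaped through the surface
                exit_times.append(path / c)
                weights.append(np.exp(-mu_a * path))
                break
            cos_t = rng.uniform(-1.0, 1.0)      # new isotropic direction
            phi = rng.uniform(0.0, 2.0 * np.pi)
            sin_t = np.sqrt(1.0 - cos_t ** 2)
            direction = np.array([sin_t * np.cos(phi),
                                  sin_t * np.sin(phi), cos_t])
    return np.array(exit_times), np.array(weights)

times, weights = photon_migration()
```

A histogram of `times` weighted by `weights` approximates the time-resolved reflectance; on a GPU each photon's loop runs in its own thread, which is where the reported factor-of-1000 speedup comes from.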
Resolution and intensity in neutron spectrometry determined by Monte Carlo simulation
DEFF Research Database (Denmark)
Dietrich, O.W.
1968-01-01
The Monte Carlo simulation technique was applied to the propagation of Bragg-reflected neutrons in mosaic single crystals. The method proved to be very useful for the determination of resolution and intensity in neutron spectrometers.
Instantons in Quantum Annealing: Thermally Assisted Tunneling Vs Quantum Monte Carlo Simulations
Jiang, Zhang; Smelyanskiy, Vadim N.; Boixo, Sergio; Isakov, Sergei V.; Neven, Hartmut; Mazzola, Guglielmo; Troyer, Matthias
2015-01-01
Recent numerical results (arXiv:1512.02206) from Google suggested that the D-Wave quantum annealer may have an asymptotic speed-up over simulated annealing; however, the advantage disappears when it is compared to quantum Monte Carlo (a classical algorithm despite its name). We show analytically that the asymptotic scaling of quantum tunneling is exactly the same as that of the escape rate in quantum Monte Carlo for a class of problems. Thus, the Google result might be explained within our framework. We also find that the transition state in quantum Monte Carlo corresponds to the instanton solution in quantum tunneling problems, which is observed in numerical simulations.
Monte Carlo simulations of the stability of delta-Pu
DEFF Research Database (Denmark)
Landa, A.; Soderlind, P.; Ruban, Andrei
2003-01-01
The transition temperature (T-c) for delta-Pu has been calculated for the first time. A Monte Carlo method is employed for this purpose and the effective cluster interactions are obtained from first-principles calculations incorporated with the Connolly-Williams and generalized perturbation methods...
The quantile regression approach to efficiency measurement: insights from Monte Carlo simulations.
Liu, Chunping; Laporte, Audrey; Ferguson, Brian S
2008-09-01
In the health economics literature there is an ongoing debate over approaches used to estimate the efficiency of health systems at various levels, from the level of the individual hospital - or nursing home - up to that of the health system as a whole. The two most widely used approaches to evaluating the efficiency with which various units deliver care are non-parametric data envelopment analysis (DEA) and parametric stochastic frontier analysis (SFA). Productivity researchers tend to have very strong preferences over which methodology to use for efficiency estimation. In this paper, we use Monte Carlo simulation to compare the performance of DEA and SFA in terms of their ability to accurately estimate efficiency. We also evaluate quantile regression as a potential alternative approach. A Cobb-Douglas production function, random error terms and a technical inefficiency term with different distributions are used to calculate the observed output. The results, based on these experiments, suggest that neither DEA nor SFA can be regarded as clearly dominant, and that, depending on the quantile estimated, the quantile regression approach may be a useful addition to the armamentarium of methods for estimating technical efficiency.
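As a toy version of such an experiment, the sketch below generates Cobb-Douglas production data with half-normal inefficiency and recovers efficiency scores by corrected OLS, a simpler frontier estimator than the DEA, SFA, and quantile-regression methods compared in the paper; all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
# Cobb-Douglas data: ln y = b0 + b1 ln x + v - u, with symmetric noise v
# and half-normal technical inefficiency u >= 0 (parameters invented)
ln_x = rng.uniform(0.0, 2.0, n)
u = np.abs(rng.normal(0.0, 0.3, n))       # inefficiency term
v = rng.normal(0.0, 0.1, n)               # random error term
ln_y = 1.0 + 0.6 * ln_x + v - u

# Corrected OLS frontier: fit the average function, then shift the
# intercept up so the frontier envelopes the observations
X = np.column_stack([np.ones(n), ln_x])
beta, *_ = np.linalg.lstsq(X, ln_y, rcond=None)
resid = ln_y - X @ beta
frontier = X @ beta + resid.max()
eff_hat = np.exp(ln_y - frontier)         # estimated technical efficiency
true_eff = np.exp(-u)                     # known by construction

recovery = np.corrcoef(eff_hat, true_eff)[0, 1]
```

Because the true efficiencies are known by construction, the correlation `recovery` plays the role of the accuracy criterion that the paper uses to rank DEA, SFA, and quantile regression.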
Estimating the parameters of dynamical systems from Big Data using Sequential Monte Carlo samplers
Green, P. L.; Maskell, S.
2017-09-01
In this paper the authors present a method which facilitates computationally efficient parameter estimation of dynamical systems from a continuously growing set of measurement data. It is shown that the proposed method, which utilises Sequential Monte Carlo samplers, is guaranteed to be fully parallelisable (in contrast to Markov chain Monte Carlo methods) and can be applied to a wide variety of scenarios within structural dynamics. Its ability to allow convergence of one's parameter estimates, as more data is analysed, sets it apart from other sequential methods (such as the particle filter).
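A stripped-down illustration of the sequential idea — reweight a particle population as each new batch of data arrives, resampling when the effective sample size degenerates — is sketched below for a one-parameter Gaussian model. This is a generic SMC-style sketch with invented values, not the authors' sampler.

```python
import numpy as np

rng = np.random.default_rng(3)
theta_true, sigma = 2.0, 1.0
n_particles = 5000

particles = rng.normal(0.0, 5.0, n_particles)   # particles from the prior
log_w = np.zeros(n_particles)

for _ in range(10):                             # data arrives in batches
    y = rng.normal(theta_true, sigma, 20)
    # Weight update uses only the newly arrived batch
    log_w += np.sum(-0.5 * ((y[None, :] - particles[:, None]) / sigma) ** 2,
                    axis=1)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    # Resample (and jitter) when the effective sample size degenerates
    if 1.0 / np.sum(w ** 2) < n_particles / 2:
        idx = rng.choice(n_particles, n_particles, p=w)
        particles = particles[idx] + rng.normal(0.0, 0.05, n_particles)
        log_w[:] = 0.0

w = np.exp(log_w - log_w.max())
w /= w.sum()
theta_hat = np.sum(w * particles)
```

Unlike an MCMC chain, the per-particle weight updates here are embarrassingly parallel, which is the parallelisability property the abstract emphasises.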
Comparison of nonstationary generalized logistic models based on Monte Carlo simulation
Directory of Open Access Journals (Sweden)
S. Kim
2015-06-01
Full Text Available Recently, evidence of climate change has been observed in hydrologic data such as rainfall and flow records. The time-dependent characteristics of statistics in hydrologic data are widely defined as nonstationarity. Therefore, various nonstationary GEV and generalized Pareto models have been suggested for frequency analysis of nonstationary annual maximum and POT (peak-over-threshold) data, respectively. However, alternative models are required for nonstationary frequency analysis to capture the complex characteristics of nonstationary data under climate change. This study proposed a nonstationary generalized logistic model including time-dependent parameters. The parameters of the proposed model are estimated using the method of maximum likelihood based on the Newton-Raphson method. In addition, the proposed model is compared by Monte Carlo simulation to investigate the characteristics of the models and their applicability.
Directory of Open Access Journals (Sweden)
Kinsara Abdul Raheem
2014-01-01
Full Text Available Monte Carlo simulations and dose measurements were performed for radionuclides in the whole body and in trunks of different sizes in order to estimate external radiation whole-body doses from patients administered radiopharmaceuticals. Calculations were performed on cylindrical water phantoms 176 cm in height with three body diameters: 24 cm, 30 cm, and 36 cm. The investigated radionuclides were 99mTc, 131I, 123I, 67Ga, 201Tl, and 111In. Measured and MCNP-calculated values were 2-6 times lower than the values calculated by the point-source method. Additionally, the total dose received by the public until a radionuclide has completely disintegrated was calculated. A further purpose of this work is to provide data on whole-body and finger occupational doses received by technologists working in nuclear medicine. The data showed a wide variation in doses that depended on the individual technologist and the position of the dosimeter.
Application of the queueing theory with Monte Carlo simulation to inhalation toxicology.
Wu, G
1998-05-01
Various models have been developed in modelling of inhalation toxicology. The deterministic approach, which has been used to date in most of the models, needs to consider numerous factors, e.g. anatomical structure, breathing frequency, humidity, metabolism rate, partition coefficients, pulmonary ventilation, perfusion rates, unidirectional/cyclic air flow, non-steady-state, steady-state, etc. In the present study, a stochastic approach was used in the modelling of inhalation toxicology, because there is a phenomenological analogy between the queueing system and respiratory system dealing with inhaled toxicants. Using the queueing theory, the amounts of toxicants in the respiratory system, the time needed to remove the accumulated amounts of toxicants from the respiratory system, etc. can be estimated. The Monte Carlo simulation of queueing process was performed to analyse cigarette smoking, and shows the potential use of the queueing theory in inhalation toxicology.
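The analogy described in this abstract can be made concrete with an M/M/∞-style simulation, one plausible queueing reading of the respiratory system (the paper's exact queueing model is not specified here, and all rates below are invented): inhaled toxicant "customers" arrive as a Poisson process and each is cleared after an independent exponential residence time, so the steady-state burden should fluctuate around λ/μ.

```python
import numpy as np

def respiratory_burden(lam=5.0, mu=0.5, t_end=200.0, seed=2):
    """M/M/infinity-style queue: toxicant arrivals follow a Poisson
    process (rate lam per hour) and each unit is cleared after an
    independent exponential residence time (rate mu per hour). Returns
    the retained burden sampled on a time grid after a warm-up period."""
    rng = np.random.default_rng(seed)
    arrivals = []
    t = 0.0
    while True:
        t += rng.exponential(1.0 / lam)          # inter-arrival times
        if t > t_end:
            break
        arrivals.append(t)
    arrivals = np.array(arrivals)
    departures = arrivals + rng.exponential(1.0 / mu, arrivals.size)
    grid = np.linspace(t_end / 2.0, t_end, 200)  # sample after warm-up
    burden = np.array([np.sum(arrivals <= g) - np.sum(departures <= g)
                       for g in grid])
    return burden

burden = respiratory_burden()
mean_burden = burden.mean()    # queueing theory predicts about lam / mu
```

The simulated mean burden can be checked against the analytical steady-state value λ/μ, which is the kind of estimate the abstract says the queueing theory provides.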
Comparison of nonstationary generalized logistic models based on Monte Carlo simulation
Kim, S.; Nam, W.; Ahn, H.; Kim, T.; Heo, J.-H.
2015-06-01
Recently, evidence of climate change has been observed in hydrologic data such as rainfall and flow records. The time-dependent characteristics of statistics in hydrologic data are widely defined as nonstationarity. Therefore, various nonstationary GEV and generalized Pareto models have been suggested for frequency analysis of nonstationary annual maximum and POT (peak-over-threshold) data, respectively. However, alternative models are required for nonstationary frequency analysis to capture the complex characteristics of nonstationary data under climate change. This study proposed a nonstationary generalized logistic model including time-dependent parameters. The parameters of the proposed model are estimated using the method of maximum likelihood based on the Newton-Raphson method. In addition, the proposed model is compared by Monte Carlo simulation to investigate the characteristics of the models and their applicability.
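A minimal sketch of such a time-dependent fit is given below, using the zero-shape (logistic) special case of the generalized logistic distribution and a general-purpose optimizer in place of the paper's Newton-Raphson scheme; the trend, scale, and record length are invented.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(11)
t = np.arange(60)                 # e.g. 60 years of annual maxima
# Nonstationary sample: logistic distribution with a linear trend in the
# location parameter (invented true values: loc = 100 + 0.8 t, scale 10)
x = stats.logistic.rvs(loc=100.0 + 0.8 * t, scale=10.0, random_state=rng)

def neg_log_lik(p):
    """Negative log-likelihood with time-dependent location b0 + b1 t."""
    b0, b1, scale = p
    if scale <= 0.0:
        return np.inf
    return -np.sum(stats.logistic.logpdf(x, loc=b0 + b1 * t, scale=scale))

res = optimize.minimize(neg_log_lik, x0=[x.mean(), 0.1, x.std()],
                        method="Nelder-Mead", options={"maxiter": 5000})
b0_hat, b1_hat, scale_hat = res.x
```

Repeating this fit over many synthetic records, as in the abstract's Monte Carlo comparison, shows how reliably the trend coefficient b1 is recovered.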
Monk, S D; Selwood, M
2011-03-01
A Monte Carlo-based simulation of the transport of a series of monoenergetic neutron sources through first a rectangular block of 0.93 g cm(-3) density polyethylene and secondly through a sphere made of the same substance is presented here. In both instances, the neutron fields are monitored at closely spread intervals through the moderator mass, producing a lot of data in the process. To reduce the amount of data presented, a figure of merit is created by estimating the cross section for each discrete neutron energy and by applying this to the number of neutrons present of each energy giving an arbitrary response figure. This work was undertaken in order to aid the design and development of a novel neutron spectrometer.
Directory of Open Access Journals (Sweden)
Jeffrey A. Walker
2016-10-01
Full Text Available Background Self-contained tests estimate and test the association between a phenotype and mean expression level in a gene set defined a priori. Many self-contained gene set analysis methods have been developed, but the performance of these methods for phenotypes that are continuous rather than discrete, and with multiple nuisance covariates, has not been well studied. Here, I use Monte Carlo simulation to evaluate the performance of both novel and previously published (and readily available via R) methods for inferring effects of a continuous predictor on mean expression in the presence of nuisance covariates. The motivating data are a high-profile dataset which was used to show opposing effects of hedonic and eudaimonic well-being (or happiness) on the mean expression level of a set of genes that has been correlated with social adversity (the CTRA gene set). The original analysis of these data used a linear model (GLS) of fixed effects with correlated error to infer effects of Hedonia and Eudaimonia on mean CTRA expression. Methods The standardized effects of Hedonia and Eudaimonia on CTRA gene set expression estimated by GLS were compared to estimates using multivariate (OLS) linear models and generalized estimating equation (GEE) models. The OLS estimates were tested using O'Brien's OLS test, Anderson's permutation $r_F^2$-test, two permutation F-tests (including GlobalAncova), and a rotation z-test (Roast). The GEE estimates were tested using a Wald test with robust standard errors. The performance (Type I, II, S, and M errors) of all tests was investigated using a Monte Carlo simulation of data explicitly modeled on the re-analyzed dataset. Results GLS estimates are inconsistent between data sets, and, in each dataset, at least one coefficient is large and highly statistically significant. By contrast, effects estimated by OLS or GEE are very small, especially relative to the standard errors. Bootstrap and permutation GLS
Energy Technology Data Exchange (ETDEWEB)
Moskvin, Vadim [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN (United States)]. E-mail: vmoskvin@iupui.edu; DesRosiers, Colleen; Papiez, Lech; Timmerman, Robert; Randall, Marcus; DesRosiers, Paul [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN (United States)
2002-06-21
The Monte Carlo code PENELOPE has been used to simulate photon flux from the Leksell Gamma Knife, a precision method for treating intracranial lesions. Radiation from a single {sup 60}Co assembly traversing the collimator system was simulated, and phase space distributions at the output surface of the helmet for photons and electrons were calculated. The characteristics describing the emitted final beam were used to build a two-stage Monte Carlo simulation of irradiation of a target. A dose field inside a standard spherical polystyrene phantom, usually used for Gamma Knife dosimetry, has been computed and compared with experimental results, with calculations performed by other authors with the use of the EGS4 Monte Carlo code, and data provided by the treatment planning system Gamma Plan. Good agreement was found between these data and results of simulations in homogeneous media. Owing to this established accuracy, PENELOPE is suitable for simulating problems relevant to stereotactic radiosurgery. (author)
A method based on Monte Carlo simulation for the determination of the G(E) function.
Chen, Wei; Feng, Tiancheng; Liu, Jun; Su, Chuanying; Tian, Yanjie
2015-02-01
The G(E) function method is a spectrometric method for exposure dose estimation; this paper describes a method based on the Monte Carlo method to determine the G(E) function of a 4″ × 4″ × 16″ NaI(Tl) detector. Simulated spectra of various monoenergetic gamma rays in the region of 40–3200 keV and the corresponding energy deposited in an air ball in the energy region of the full-energy peak were obtained using the Monte Carlo N-Particle Transport Code. The absorbed dose rate in air was obtained from the deposited energy and divided by the counts of the corresponding full-energy peak to get the G(E) function value at energy E in the spectra. The curve-fitting software 1stOpt was used to determine the coefficients of the G(E) function. Experimental results show that the dose rates calculated using the G(E) function determined by the authors' method agree well with the values obtained by an ionisation chamber, with a maximum deviation of 6.31%.
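The fitting step can be sketched as follows. The energy grid, dose model, and peak efficiencies below are invented stand-ins for the Monte Carlo output, and an ordinary polynomial fit in log E replaces the curve-fitting software used in the paper.

```python
import numpy as np

# Invented stand-ins for the simulated results: monoenergetic lines E
# (keV), absorbed dose rate in air per emitted photon, and full-energy-
# peak counts per emitted photon (toy models, not MCNP output).
E = np.array([60., 100., 200., 400., 662., 1000., 1500., 2000., 3000.])
dose_per_photon = 1e-9 * E ** 1.1          # toy dose model
peak_eff = 0.5 * np.exp(-E / 800.0)        # toy full-energy-peak efficiency

# G(E) is dose per full-energy-peak count; fit it as a polynomial in
# log E so it can be evaluated at any spectrum energy
g = dose_per_photon / peak_eff
coef = np.polyfit(np.log(E), np.log(g), 5)
G = lambda e: np.exp(np.polyval(coef, np.log(e)))

# Dose rate from a measured spectrum: weight peak counts by G(E)
spectrum_counts = np.array([200., 150., 90., 40., 25., 10., 4., 2., 1.])
dose_rate = np.sum(spectrum_counts * G(E))
```

Summing counts(E) · G(E) over the spectrum is the dose-estimation step the abstract describes; only the determination of G(E) itself requires the Monte Carlo simulation.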
Monte Carlo simulations of the Galileo energetic particle detector
Jun, I; Garrett, H B; McEntire, R W
2002-01-01
Monte Carlo radiation transport studies have been performed for the Galileo spacecraft energetic particle detector (EPD) in order to study its response to energetic electrons and protons. Three-dimensional Monte Carlo radiation transport codes, MCNP version 4B (for electrons) and MCNPX version 2.2.3 (for protons), were used throughout the study. The results are presented in the form of 'geometric factors' for the high-energy channels studied in this paper: B1, DC2, and DC3 for electrons and B0, DC0, and DC1 for protons. The geometric factor is the energy-dependent detector response function that relates the incident particle fluxes to instrument count rates. The trend of actual data measured by the EPD was successfully reproduced using the geometric factors obtained in this study.
Observation of Jet Photoproduction and Comparison to Monte Carlo Simulation.
Lincoln, Donald W.
The photon is the carrier of the electromagnetic force. However in addition to its well known nature, the theories of QCD and quantum mechanics would indicate that the photon can also for brief periods of time split into a $q\bar{q}$ pair (an extended photon). How these constituents share energy and momentum is an interesting question and such a measurement was investigated by scattering photons off protons. The post collision kinematics should reveal pre-collision information. Unfortunately, when these constituents exit the collision point, they undergo subsequent interactions (gluon radiation, fragmentation, etc.) which scramble their kinematics. An algorithm was explored which was shown via Monte Carlo techniques to partially disentangle these post collision interactions and reveal the collision kinematics. The presence or absence of large transverse momenta internal ($k_\perp$) to the photon has a significant impact on the ability to reconstruct the kinematics of the leading order calculation hard scatter system. Reconstruction of the next to leading order high $E_\perp$ partons is more straightforward. Since the photon exhibits this unusual behavior only part of the time, many of the collisions recorded will be with a non-extended (or direct) photon. Unless a method for culling only the extended photons out can be invented, this contamination of direct photons must be accounted for. No such culling method is currently known, and so any measurement will necessarily contain both photon types. Theoretical predictions using Monte Carlo methods are compared with the data and are found to reproduce many experimentally measured distributions quite well. Overall the LUND Monte Carlo reproduces the data better than the HERWIG Monte Carlo. As expected at low jet $E_\perp$, the data set seems to be dominated by extended photons, with the mix becoming nearly equal at jet $E_\perp > 4$ GeV. The existence of a large photon $k_\perp$ appears to be favored.
Observation of Jet Photoproduction and Comparison to Monte Carlo Simulation
Energy Technology Data Exchange (ETDEWEB)
Lincoln, Donald W. [Rice Univ., Houston, TX (United States)
1994-01-01
The photon is the carrier of the electromagnetic force. However, in addition to its well-known nature, the theories of QCD and quantum mechanics indicate that the photon can also, for brief periods of time, split into a $q\\bar{q}$ pair (an extended photon). How these constituents share energy and momentum is an interesting question, and such a measurement was investigated by scattering photons off protons. The post-collision kinematics should reveal pre-collision information. Unfortunately, when these constituents exit the collision point, they undergo subsequent interactions (gluon radiation, fragmentation, etc.) which scramble their kinematics. An algorithm was explored which was shown via Monte Carlo techniques to partially disentangle these post-collision interactions and reveal the collision kinematics. The presence or absence of large transverse momenta internal ($k_\\perp$) to the photon has a significant impact on the ability to reconstruct the kinematics of the leading-order hard-scatter system. Reconstruction of the next-to-leading-order high-$E_\\perp$ partons is more straightforward. Since the photon exhibits this unusual behavior only part of the time, many of the recorded collisions will involve a non-extended (or direct) photon. Unless a method can be invented for culling out only the extended photons, this contamination from direct photons must be accounted for. No such culling method is currently known, and so any measurement will necessarily contain both photon types. Theoretical predictions using Monte Carlo methods are compared with the data and are found to reproduce many experimentally measured distributions quite well. Overall, the LUND Monte Carlo reproduces the data better than the HERWIG Monte Carlo. As expected, at low jet $E_\\perp$ the data set seems to be dominated by extended photons, with the mix becoming nearly equal at jet $E_\\perp > 4$ GeV. The existence of a large photon $k_\\perp$ appears to be favored.
Monte Carlo simulation of NSE at reactor and spallation sources
Energy Technology Data Exchange (ETDEWEB)
Zsigmond, G.; Wechsler, D.; Mezei, F. [Hahn-Meitner-Institut Berlin, Berlin (Germany)
2001-03-01
A Monte Carlo (MC) computation study of NSE (neutron spin echo) has been performed by means of VITESS, investigating the classic and TOF-NSE options at spallation sources. The use of white beams in TOF-NSE makes the flipper efficiency as a function of neutron wavelength an important issue. The emphasis was put on exact evaluation of flipper efficiencies for wide wavelength-band instruments. (author)
Zaidi, H
1999-01-01
The many applications of Monte Carlo modelling in nuclear medicine imaging make it desirable to increase the accuracy and computational speed of Monte Carlo codes. The accuracy of Monte Carlo simulations strongly depends on the accuracy of the probability functions and thus on the cross section libraries used for photon transport calculations. A comparison between different photon cross section libraries and parametrizations implemented in Monte Carlo simulation packages developed for positron emission tomography and the most recent Evaluated Photon Data Library (EPDL97) developed by the Lawrence Livermore National Laboratory was performed for several human tissues and common detector materials for energies from 1 keV to 1 MeV. Different photon cross section libraries and parametrizations show quite large variations as compared to the EPDL97 coefficients. This latter library is more accurate and was carefully designed in the form of look-up tables providing efficient data storage, access, and management. Toge...
Bayesian estimates of equation system parameters: an application of integration by Monte Carlo
T. Kloek (Teun); H.K. van Dijk (Herman)
1978-01-01
Monte Carlo (MC) is used to draw parameter values from a distribution defined on the structural parameter space of an equation system. Making use of the prior density, the likelihood, and Bayes' Theorem it is possible to estimate posterior moments of both structural and reduced form
Zhu, Caigang; Liu, Quan
2012-01-01
We present a hybrid method that combines a multilayered scaling method and a perturbation method to speed up the Monte Carlo simulation of diffuse reflectance from a multilayered tissue model with finite-size tumor-like heterogeneities. The proposed method consists of two steps. In the first step, a set of photon trajectory information generated from a baseline Monte Carlo simulation is utilized to scale the exit weight and exit distance of survival photons for the multilayered tissue model. In the second step, another set of photon trajectory information, including the locations of all collision events from the baseline simulation and the scaling result obtained from the first step, is employed by the perturbation Monte Carlo method to estimate diffuse reflectance from the multilayered tissue model with tumor-like heterogeneities. Our method is demonstrated to shorten simulation time by several orders of magnitude. Moreover, this hybrid method works for a larger range of probe configurations and tumor models than the scaling method or the perturbation method alone.
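The baseline simulation that the hybrid method above starts from is a standard photon-weight Monte Carlo. A minimal sketch follows; it uses a semi-infinite, isotropically scattering medium with illustrative optical coefficients, not the paper's multilayered tissue model or its scaling/perturbation steps:

```python
import random, math

def diffuse_reflectance(mu_a, mu_s, n_photons=10000, seed=1):
    """Baseline weighted Monte Carlo estimate of diffuse reflectance
    from a semi-infinite, isotropically scattering medium (1-D depth).
    mu_a, mu_s: absorption and scattering coefficients (1/cm)."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    reflected = 0.0
    for _ in range(n_photons):
        z, uz, w = 0.0, 1.0, 1.0            # depth, direction cosine, weight
        while True:
            s = -math.log(1.0 - rng.random()) / mu_t   # free path length
            z += uz * s
            if z <= 0.0:                     # escaped through the surface
                reflected += w
                break
            w *= albedo                      # deposit the absorbed fraction
            uz = 2.0 * rng.random() - 1.0    # isotropic rescattering
            if w < 1e-3:                     # Russian roulette termination
                if rng.random() < 0.1:
                    w *= 10.0
                else:
                    break
    return reflected / n_photons
```

Scaling and perturbation methods reuse the trajectories generated by such a baseline run instead of resimulating for every new set of optical properties.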
Energy Technology Data Exchange (ETDEWEB)
NONE
2001-01-01
In this report, research results discussed in the 1999 fiscal year at the Nuclear Code Evaluation Committee of the Nuclear Code Research Committee are summarized. The present status of Monte Carlo simulation in nuclear energy research is described, covering not only criticality, shielding, and core analyses but also applications of the Monte Carlo method to risk and radiation-damage analyses, high-energy transport, and nuclear-theory calculations. The 18 papers are indexed individually. (J.P.N.)
Monte Carlo Simulations of Random Frustrated Systems on Graphics Processing Units
Feng, Sheng; Fang, Ye; Hall, Sean; Papke, Ariane; Thomasson, Cade; Tam, Ka-Ming; Moreno, Juana; Jarrell, Mark
2012-02-01
We study the implementation of the classical Monte Carlo simulation for random frustrated models using the multithreaded computing environment provided by the the Compute Unified Device Architecture (CUDA) on modern Graphics Processing Units (GPU) with hundreds of cores and high memory bandwidth. The key for optimizing the performance of the GPU computing is in the proper handling of the data structure. Utilizing the multi-spin coding, we obtain an efficient GPU implementation of the parallel tempering Monte Carlo simulation for the Edwards-Anderson spin glass model. In the typical simulations, we find over two thousand times of speed-up over the single threaded CPU implementation.
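The parallel tempering scheme described above can be sketched in a few lines of single-threaded code (the GPU, multi-spin-coding machinery is omitted); here a 1-D Edwards-Anderson chain with random ±J couplings stands in for the full model, with illustrative temperatures and sizes:

```python
import random, math

def parallel_tempering(n=32, betas=(0.2, 0.5, 1.0, 2.0), sweeps=2000, seed=7):
    """Single-threaded sketch of parallel tempering Monte Carlo for a
    1-D Edwards-Anderson chain with random +-J couplings (periodic).
    Returns the energy per spin of the replica held at each beta."""
    rng = random.Random(seed)
    J = [rng.choice((-1, 1)) for _ in range(n)]       # J[i] couples spins i, i+1
    reps = [[rng.choice((-1, 1)) for _ in range(n)] for _ in betas]

    def energy(s):
        return -sum(J[i] * s[i] * s[(i + 1) % n] for i in range(n))

    def local_field(s, i):
        return J[i - 1] * s[i - 1] + J[i] * s[(i + 1) % n]

    for _ in range(sweeps):
        for s, beta in zip(reps, betas):              # Metropolis sweep per replica
            for i in range(n):
                dE = 2 * s[i] * local_field(s, i)
                if dE <= 0 or rng.random() < math.exp(-beta * dE):
                    s[i] = -s[i]
        for k in range(len(betas) - 1):               # replica-exchange moves
            d = (betas[k + 1] - betas[k]) * (energy(reps[k + 1]) - energy(reps[k]))
            if d >= 0 or rng.random() < math.exp(d):  # accept w.p. min(1, e^d)
                reps[k], reps[k + 1] = reps[k + 1], reps[k]
    return [energy(s) / n for s in reps]
```

The exchange moves let cold replicas escape local minima by routing configurations through high temperatures, which is the point of the method for frustrated systems.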
Wang, Lilie
In ionization chamber radiation dosimetry, the introduction of the ion chamber into a medium unavoidably distorts the radiation field near the chamber, because the chamber cavity material (air) differs from the medium. A replacement correction factor, P_repl, was introduced in order to correct the chamber readings to give an accurate radiation dose in the medium without the presence of the chamber. Generally it is very hard to measure the values of P_repl, since they are intertwined with the chamber wall effect. In addition, the P_repl values always come together with the stopping-power ratio of the two media involved, which makes the problem of determining P_repl even more complicated. Monte Carlo simulation is an ideal method to investigate the replacement correction factors. In this study, four different methods of calculating the values of P_repl by Monte Carlo simulation are discussed. Two of the methods are designated as 'direct' methods in the sense that evaluation of the stopping-power ratio is not necessary. The systematic uncertainties of the two direct methods are estimated to be about 0.1-0.2%, which comes from the ambiguous definition of the energy cutoff Delta used in the Spencer-Attix cavity theory. The two direct methods are used to calculate the values of P_repl for both plane-parallel chambers and cylindrical thimble chambers in either electron beams or photon beams. The calculation results are compared to measurements. For electron beams, good agreement is obtained. For thimble chambers in photon beams, significant discrepancies are observed between calculations and measurements. The experiments are thus investigated and the procedures are simulated by the Monte Carlo method. It is found that the interpretation of the measured data as the replacement correction factors in dosimetry protocols is not correct. In applying the calculation to the BIPM graphite chamber in a 60Co beam, the calculated values of P_repl differ from those
Uncertainty Propagation Analysis for the Monte Carlo Time-Dependent Simulations
Energy Technology Data Exchange (ETDEWEB)
Shaukata, Nadeem; Shim, Hyung Jin [Seoul National University, Seoul (Korea, Republic of)
2015-10-15
In this paper, a conventional method to control the neutron population for super-critical systems is implemented. Instead of considering cycles, the simulation is divided into time intervals. At the end of each time interval, neutron population control is applied to the banked neutrons. Randomly selected neutrons are discarded until the size of the neutron population matches the initial number of neutron histories at the beginning of the time simulation. A time-dependent simulation mode has also been implemented in the development version of the SERPENT 2 Monte Carlo code; in this mode, a sequential population control mechanism has been proposed for modeling prompt super-critical systems. A Monte Carlo method has also been used in the TART code for dynamic criticality calculations: for super-critical systems, the neutron population is allowed to grow over a period of time and is then uniformly combed to return it to the population size at the beginning of the time boundary. In this study, a conventional time-dependent Monte Carlo (TDMC) algorithm is implemented. For super-critical systems the neutron population grows exponentially in the estimation of the neutron density tally, and the number of neutrons being tracked can exceed the memory of the computer. In order to control this exponential growth at the end of each time boundary, a conventional time cut-off population control strategy is included in TDMC. A scale factor is introduced to tally the desired neutron density at the end of each time boundary. The main purpose of this paper is the quantification of uncertainty propagation in neutron densities at the end of each time boundary for super-critical systems. This uncertainty is caused by the introduction of the scale factor. The effectiveness of TDMC is examined for a one-group infinite homogeneous problem (the rod model) and a two-group infinite homogeneous problem. The desired neutron density is tallied by the introduction of
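The uniform-combing step described above (return a grown bank to its target size while preserving the tally through a scale factor) can be sketched as follows; the function name and interface are illustrative, not from any of the cited codes:

```python
import random

def comb_population(bank, n_target, rng=random.Random(0)):
    """Uniform combing: reduce a grown neutron bank back to n_target
    neutrons by sampling at a regular stride with a random offset.
    The returned scale factor preserves the population tally."""
    scale = len(bank) / n_target          # N / N0, applied to subsequent tallies
    offset = rng.random() * scale         # random start keeps the comb unbiased
    kept = [bank[int(offset + i * scale)] for i in range(n_target)]
    return kept, scale
```

Unlike discarding neutrons independently at random, the comb keeps the sample spread evenly over the bank, which reduces the variance added by the population-control step.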
Maucec, M.; Rigollet, C.
The performance of a detection system based on the pulsed fast/thermal neutron analysis technique was assessed using Monte Carlo simulations. The aim was to develop and implement simulation methods, to support and advance the data analysis techniques of the characteristic gamma-ray spectra,
Versluis, R.; Dorsman, R.; Thielen, L.; Roos, M.E.
2009-01-01
A new approach for performing numerical direct simulation Monte Carlo (DSMC) simulations on turbomolecular pumps in the free molecular and transitional flow regimes is described. The chosen approach is to use surfaces that move relative to the grid to model the effect of rotors and stators on a gas
Monte-Carlo Tree Search for Simulated Car Racing
DEFF Research Database (Denmark)
Fischer, Jacob; Falsted, Nikolaj; Vielwerth, Mathias
2015-01-01
Monte Carlo Tree Search (MCTS) has recently seen considerable success in playing certain types of games, most of which are discrete, fully observable zero-sum games. Consequently there is currently considerable interest within the research community in investigating what other games this algorithm...... of the action space. This combination allows the controller to effectively search the tree of potential future states. Results show that it is indeed possible to implement a competent MCTS-based racing controller. The controller generalizes to most road tracks as long as a warm-up period is provided....
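The core ingredient of MCTS — UCB selection at a decision node combined with random playouts — can be sketched on a toy discrete game (take 1 or 2 stones, last take wins); this is a one-ply, root-only illustration, not the racing controller or a full tree implementation:

```python
import math, random

def uct_search(stones, iters=3000, rng=random.Random(42)):
    """Root-level UCB1 move selection with random rollouts on a
    take-1-or-2 Nim game (the player taking the last stone wins)."""
    wins, visits = {}, {}

    def rollout(s):
        """Random playout; True if the player to move at s wins."""
        turn = True
        while s > 0:
            s -= rng.choice((1, 2)) if s > 1 else 1
            if s == 0:
                return turn
            turn = not turn

    for _ in range(iters):
        moves = [m for m in (1, 2) if m <= stones]
        total = sum(visits.get((stones, m), 0) for m in moves) + 1
        def ucb(m):
            n = visits.get((stones, m), 0)
            if n == 0:
                return float("inf")          # try unvisited moves first
            return wins[(stones, m)] / n + math.sqrt(2 * math.log(total) / n)
        m = max(moves, key=ucb)
        # after our move the opponent moves; we win iff their rollout loses
        won = True if stones - m == 0 else not rollout(stones - m)
        visits[(stones, m)] = visits.get((stones, m), 0) + 1
        wins[(stones, m)] = wins.get((stones, m), 0) + (1 if won else 0)

    return max((m for m in (1, 2) if m <= stones),
               key=lambda m: visits.get((stones, m), 0))
```

A full MCTS grows this selection rule into a tree (selection, expansion, simulation, backpropagation); for continuous action spaces, as in the racing paper, the move set must additionally be discretized.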
Sandberg, Mattias
2015-01-07
The Monte Carlo (and Multi-level Monte Carlo) finite element method can be used to approximate observables of solutions to diffusion equations with log normal distributed diffusion coefficients, e.g. modelling ground water flow. Typical models use log normal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high frequency finite element approximation error (i.e. the error from frequencies larger than the mesh frequency) is not negligible and can be larger than the computable low frequency error. This talk will address how the total error can be estimated by the computable error.
Hall, Eric
2016-01-09
The Monte Carlo (and Multi-level Monte Carlo) finite element method can be used to approximate observables of solutions to diffusion equations with lognormal distributed diffusion coefficients, e.g. modeling ground water flow. Typical models use lognormal diffusion coefficients with Hölder regularity of order up to 1/2 a.s. This low regularity implies that the high frequency finite element approximation error (i.e. the error from frequencies larger than the mesh frequency) is not negligible and can be larger than the computable low frequency error. We address how the total error can be estimated by the computable error.
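The multilevel idea behind the two talks above can be sketched on a much simpler problem than a finite element discretization: estimating E[X_T] for a scalar SDE, with levels defined by Euler time-step refinement and coupled fine/coarse paths. All parameter values are illustrative:

```python
import random, math

def mlmc_estimate(levels=4, n0=4000, T=1.0, mu=0.05, sigma=0.2, x0=1.0, seed=3):
    """Multilevel Monte Carlo sketch for E[X_T] of dX = mu X dt + sigma X dW.
    Level l uses an Euler scheme with 2**l steps; each correction term
    couples a fine path with a coarse path driven by the same Brownian
    increments, so its variance shrinks as the levels refine."""
    rng = random.Random(seed)
    total = 0.0
    for l in range(levels + 1):
        nf = 2 ** l                          # fine steps on this level
        dt = T / nf
        n_samples = max(n0 // 2 ** l, 100)   # fewer samples on costly levels
        acc = 0.0
        for _ in range(n_samples):
            xf, xc = x0, x0
            for k in range(nf):
                dw = rng.gauss(0.0, math.sqrt(dt))
                xf += mu * xf * dt + sigma * xf * dw
                if l > 0:
                    if k % 2 == 0:
                        dw_pair = dw         # store first half-increment
                    else:                    # coarse step uses summed increments
                        xc += mu * xc * 2 * dt + sigma * xc * (dw_pair + dw)
            acc += xf - (xc if l > 0 else 0.0)
        total += acc / n_samples
    return total
```

The telescoping sum E[X^L] = E[X^0] + Σ_l E[X^l − X^{l−1}] lets most samples be drawn on the cheap coarse levels; the low pathwise regularity discussed in the talks governs how fast the correction variances decay.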
Guo, Changning; Doub, William H; Kauffman, John F
2010-08-01
Monte Carlo simulations were applied to investigate the propagation of uncertainty in both input variables and response measurements on model prediction for nasal spray product performance design of experiment (DOE) models in the first part of this study, with an initial assumption that the models perfectly represent the relationship between input variables and the measured responses. In this article, we discard the initial assumption and extend the Monte Carlo simulation study to examine the influence of both input variable variation and product performance measurement variation on the uncertainty in DOE model coefficients. The Monte Carlo simulations presented in this article illustrate the importance of careful error propagation during product performance modeling. Our results show that the error estimates based on Monte Carlo simulation result in smaller model coefficient standard deviations than those from regression methods. This suggests that the estimated standard deviations from regression may overestimate the uncertainties in the model coefficients. Monte Carlo simulations provide a simple software solution to understand the propagation of uncertainty in complex DOE models so that design space can be specified with statistically meaningful confidence levels. (c) 2010 Wiley-Liss, Inc. and the American Pharmacists Association
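The Monte Carlo approach to coefficient uncertainty described above can be sketched for the simplest possible "DOE" model, a straight line fitted by least squares: perturb the responses with their measurement noise, refit, and read the coefficient spread off the simulated fits. All numbers are illustrative:

```python
import random, statistics

def coefficient_spread(b0=2.0, b1=0.5, x=(1, 2, 3, 4, 5, 6, 7, 8),
                       sigma_y=0.3, n_sim=2000, seed=11):
    """Monte Carlo propagation of response-measurement noise into the
    coefficients of a simple linear model y = b0 + b1*x, refit by
    closed-form least squares on each simulated data set."""
    rng = random.Random(seed)
    n, xbar = len(x), sum(x) / len(x)
    sxx = sum((xi - xbar) ** 2 for xi in x)
    slopes, intercepts = [], []
    for _ in range(n_sim):
        y = [b0 + b1 * xi + rng.gauss(0.0, sigma_y) for xi in x]
        ybar = sum(y) / n
        b1_hat = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
        slopes.append(b1_hat)
        intercepts.append(ybar - b1_hat * xbar)
    return statistics.stdev(slopes), statistics.stdev(intercepts)
```

For this linear case the simulated spreads can be checked against the analytic values sd(b1) = σ/√Sxx and sd(b0) = σ√(1/n + x̄²/Sxx); for the nonlinear multi-factor DOE models of the paper, the simulation is the practical route.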
Modeling the Biophysical Effects in a Carbon Beam Delivery Line using Monte Carlo Simulation
Cho, Ilsung; Cho, Sungho; Kim, Eun Ho; Song, Yongkeun; Shin, Jae-ik; Jung, Won-Gyun
2016-01-01
Relative biological effectiveness (RBE) plays an important role in designing a uniform dose response for ion beam therapy. In this study, the biological effectiveness of a carbon ion beam delivery system was investigated using Monte Carlo simulation. A carbon ion beam delivery line was designed for the Korea Heavy Ion Medical Accelerator (KHIMA) project. The GEANT4 simulation tool kit was used to simulate carbon beam transport in media. Incident carbon ion beams with energies in the range between 220 MeV/u and 290 MeV/u were chosen to generate secondary particles. The microdosimetric-kinetic (MK) model was applied to describe the RBE of 10% survival in human salivary gland (HSG) cells. The RBE-weighted dose was estimated as a function of the penetration depth in the water phantom along the incident beam direction. A biologically photon-equivalent Spread Out Bragg Peak (SOBP) was designed using the RBE-weighted absorbed dose. Finally, the RBE of mixed beams was predicted as a function of the water phantom depth.
Uncertainty assessment in geodetic network adjustment by combining GUM and Monte-Carlo-simulations
Niemeier, Wolfgang; Tengen, Dieter
2017-06-01
In this article, first ideas are presented to extend the classical concept of geodetic network adjustment by introducing a new method for uncertainty assessment as a two-step analysis. In the first step, the raw data and possible influencing factors are analyzed using uncertainty modeling according to GUM (Guidelines to the Expression of Uncertainty in Measurements). This approach is well established in metrology, but rarely adopted within geodesy. The second step consists of Monte Carlo simulations (MC simulations) for the complete processing chain from raw input data and pre-processing to adjustment computations and quality assessment. To perform these simulations, possible realizations of the raw data and the influencing factors are generated, using probability distributions for all variables and the established concept of pseudo-random number generators. The final result is a point cloud which represents the uncertainty of the estimated coordinates; a confidence region can be assigned to these point clouds as well. This concept may replace the common concept of variance propagation and the quality assessment of adjustment parameters by means of their covariance matrix. It allows a new way for uncertainty assessment in accordance with the GUM concept for uncertainty modelling and propagation. As a practical example, the local tie network at the Metsähovi Fundamental Station, Finland is used, where classical geodetic observations are combined with GNSS data.
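The second step above (repeat the adjustment on perturbed realizations of the raw data and collect the results as a point cloud) can be sketched on a deliberately tiny example: a new point P leveled from two known benchmarks, adjusted by equal-weight averaging. All heights and uncertainties are invented for illustration, not taken from the Metsähovi network:

```python
import random, statistics

def adjusted_height_cloud(h_a=100.000, h_b=105.000,
                          dh_ap=2.515, dh_bp=-2.487,
                          sigma=0.003, n_sim=5000, seed=5):
    """GUM/MC-style uncertainty assessment for a trivial leveling
    adjustment: perturb the two observed height differences to point P
    with their standard uncertainty, re-adjust each time, and return the
    mean and standard deviation of the resulting 'point cloud' (metres)."""
    rng = random.Random(seed)
    cloud = []
    for _ in range(n_sim):
        o1 = dh_ap + rng.gauss(0.0, sigma)              # observation A -> P
        o2 = dh_bp + rng.gauss(0.0, sigma)              # observation B -> P
        cloud.append(((h_a + o1) + (h_b + o2)) / 2.0)   # equal-weight LS
    return statistics.mean(cloud), statistics.stdev(cloud)
```

In a real network the inner step would be the full adjustment computation; the point is that the empirical spread of the cloud replaces analytic variance propagation.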
Energy Technology Data Exchange (ETDEWEB)
Radhakrishnan, B., E-mail: radhakrishnb@ornl.gov; Eisenbach, M.; Burress, T.A.
2017-06-15
Highlights: • Developed new scaling technique for dipole–dipole interaction energy. • Developed new scaling technique for exchange interaction energy. • Used scaling laws to extend atomistic simulations to micrometer length scale. • Demonstrated transition from mono-domain to vortex magnetic structure. • Simulated domain wall width and transition length scale agree with experiments. - Abstract: A new scaling approach has been proposed for the spin exchange and the dipole–dipole interaction energy as a function of the system size. The computed scaling laws are used in atomistic Monte Carlo simulations of magnetic moment evolution to predict the transition from a single domain to a vortex structure as the system size increases. The width of a 180° domain wall extracted from the simulated structures is in close agreement with experimentally measured values for an Fe–Si alloy. The transition size from a single domain to a vortex structure is also in close agreement with theoretically predicted and experimentally measured values for Fe.
Zhang, D.; Cagnon, CH; Villablanca, JP; McCollough, CH; Cody, DD; Zankl, M.; Demarco, JJ; McNitt-Gray, MF
2013-01-01
Purpose: CT neuroperfusion examinations are capable of delivering high radiation dose to the skin or lens of the eyes of a patient and can possibly cause deterministic radiation injury. The purpose of this study is to: (a) estimate peak skin dose and eye lens dose from CT neuroperfusion examinations based on several voxelized adult patient models of different head size and (b) investigate how well those doses can be approximated by some commonly used CT dose metrics or tools, such as CTDI vol...
S-factor calculations for mouse models using Monte-Carlo simulations.
Bitar, A; Lisbona, A; Bardiès, M
2007-12-01
Targeted radionuclide therapy applications require the use of small animals for preclinical experiments. Accurate dose estimation is needed in such animals to explore and analyze the toxicity of injected radiopharmaceuticals. We developed two numerical models to allow for more accurate mouse dosimetry. A frozen nude mouse (30 g) was sliced and digital photographs were taken during the operation. More than 30 organs and tissues were identified and manually segmented. A digital (voxel-based) and a mathematical model were constructed from the segmented images. Important organs were simulated as radiation sources using the Monte-Carlo code MCNP4C. Monoenergetic photons from 0.005 to 2 MeV and monoenergetic electrons from 0.1 to 2.5 MeV were simulated. Activity was assumed to be uniform in all source organs. Results from monoenergetic emissions were integrated over emission spectra. Radionuclide S-factors (Gy/Bq.s) were calculated by taking into account both electron and photon contributions. A comparison of the results obtained with either the voxel-based or the mathematical model was carried out. The voxel-based model was then used to revise dosimetric results obtained previously under the assumption that all emitted energy was absorbed locally. For (188)Re, the self-absorbed doses in xenografted tumors were 39-69% lower than those obtained by assuming local energy deposition. The voxel-based model represents a more realistic anatomical approach. The rapid advancement of computer science and new features added to Monte-Carlo codes permit considerable reduction of computational run time. Cross-doses should not be neglected when medium to high energy beta emitters are being used for preclinical experiments on mice.
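The final assembly of an S-factor from Monte Carlo results is the MIRD-style sum S = Σᵢ yᵢ Eᵢ φᵢ / m over the emission spectrum, where yᵢ is the yield per decay, Eᵢ the emission energy, φᵢ the absorbed fraction (here assumed values, in practice taken from the MCNP runs), and m the target mass. A minimal sketch:

```python
MEV_TO_J = 1.602176634e-13   # exact conversion, MeV to joules

def s_factor(emissions, absorbed_fractions, target_mass_kg):
    """S-factor in Gy per (Bq.s):  S = sum_i y_i * E_i * phi_i / m_target.
    emissions: list of (yield per decay, energy in MeV);
    absorbed_fractions: matching phi(target <- source) values, which would
    normally come from Monte Carlo transport, not be assumed as here."""
    return sum(y * e * MEV_TO_J * phi
               for (y, e), phi in zip(emissions, absorbed_fractions)) / target_mass_kg
```

Multiplying S by the time-integrated activity (Bq·s) in the source organ then gives the absorbed dose to the target organ.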
Yuan, L G; Tang, Y Z; Zhang, Y X; Sun, J; Luo, X Y; Zhu, L X; Zhang, Z; Wang, R; Liu, Y H
2015-08-01
To estimate the valnemulin pharmacokinetic profile in a swine population and to assess a dosage regimen for increasing the likelihood of optimization, this study was performed in 22 culled sows by p.o. administration and in 80 growing-finishing pigs by i.v. administration at a single dose of 10 mg/kg, to develop a population pharmacokinetic model and Monte Carlo simulation. The relationships among plasma concentration, dose, and time of valnemulin in pigs were C_iv(t) = X_0 (8.4191 × 10^-4 e^(-0.2371t) + 1.2788 × 10^-5 e^(-0.0069t)) after i.v. and C_po(t) = X_0 (-8.4964 × 10^-4 e^(-0.5840t) + 8.4195 × 10^-4 e^(-0.2371t) + 7.6869 × 10^-6 e^(-0.0069t)) after p.o. administration. Monte Carlo simulation showed that T(>MIC) was more than 24 h when a single daily dose of 13.5 mg/kg BW was administered p.o. and the MIC was 0.031 mg/L. It was concluded that the current dosage regimen of 10-12 mg/kg BW leads to valnemulin underexposure if the MIC is more than 0.031 mg/L, and could increase the risk of treatment failure and/or drug resistance. © 2015 John Wiley & Sons Ltd.
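A probability-of-target-attainment calculation of the kind used above can be sketched generically: simulate a population with between-subject variability on the PK parameters and count the fraction of subjects whose concentration stays above the MIC for (most of) the dosing interval. The one-compartment oral model and every parameter value and CV below are illustrative assumptions, not the study's population estimates:

```python
import random, math

def pta_t_above_mic(dose_mg_per_kg=13.5, mic=0.031, tau=24.0,
                    n_subj=3000, seed=13):
    """Monte Carlo PTA sketch: fraction of simulated subjects with
    T>MIC >= 90% of the dosing interval tau (hours), one-compartment
    oral model C(t) = D/Vd * ka/(ka-ke) * (e^-ke.t - e^-ka.t)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_subj):
        # lognormal between-subject variability around assumed typical values
        vd = 4.0 * math.exp(rng.gauss(0.0, 0.2))    # L/kg (assumed)
        ke = 0.10 * math.exp(rng.gauss(0.0, 0.2))   # 1/h  (assumed)
        ka = 1.50 * math.exp(rng.gauss(0.0, 0.2))   # 1/h  (assumed)
        coef = dose_mg_per_kg / vd * ka / (ka - ke)
        t_above, dt, t = 0.0, 0.1, 0.1
        while t <= tau:
            c = coef * (math.exp(-ke * t) - math.exp(-ka * t))
            if c > mic:
                t_above += dt
            t += dt
        if t_above >= 0.9 * tau:
            hits += 1
    return hits / n_subj
```

Re-running this over a dose grid is how a breakpoint like the paper's 13.5 mg/kg recommendation is located: the lowest dose whose PTA exceeds the chosen target (commonly 90%).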
Ward, Adam S.; Kelleher, Christa A.; Mason, Seth J. K.; Wagener, Thorsten; McIntyre, Neil; McGlynn, Brian L.; Runkel, Robert L.; Payn, Robert A.
2017-01-01
Researchers and practitioners alike often need to understand and characterize how water and solutes move through a stream in terms of the relative importance of in-stream and near-stream storage and transport processes. In-channel and subsurface storage processes are highly variable in space and time and difficult to measure. Storage estimates are commonly obtained using transient-storage models (TSMs) of the experimentally obtained solute-tracer test data. The TSM equations represent key transport and storage processes with a suite of numerical parameters. Parameter values are estimated via inverse modeling, in which parameter values are iteratively changed until model simulations closely match observed solute-tracer data. Several investigators have shown that TSM parameter estimates can be highly uncertain. When this is the case, parameter values cannot be used reliably to interpret stream-reach functioning. However, authors of most TSM studies do not evaluate or report parameter certainty. Here, we present a software tool linked to the One-dimensional Transport with Inflow and Storage (OTIS) model that enables researchers to conduct uncertainty analyses via Monte-Carlo parameter sampling and to visualize uncertainty and sensitivity results. We demonstrate application of our tool to 2 case studies and compare our results to output obtained from more traditional implementation of the OTIS model. We conclude by suggesting best practices for transient-storage modeling and recommend that future applications of TSMs include assessments of parameter certainty to support comparisons and more reliable interpretations of transport processes.
Directory of Open Access Journals (Sweden)
Daniel Cancelli Romero
2017-10-01
Analytical results are widely used to assess batch-by-batch conformity and pharmaceutical equivalence, as well as in the development of drug products. Despite this, few papers describing the measurement uncertainty estimation associated with these results were found in the literature. Here, we describe a simple procedure used for estimating the measurement uncertainty associated with the dissolution test of acetaminophen tablets. A fractional factorial design was used to define a mathematical model that explains the amount of acetaminophen dissolved (%) as a function of dissolution time (from 20 to 40 minutes), volume of dissolution media (from 800 to 1000 mL), pH of dissolution media (from 2.0 to 6.8), and rotation speed (from 40 to 60 rpm). Using Monte Carlo simulations, we estimated the measurement uncertainty for the dissolution test of acetaminophen tablets (95.2 ± 1.0%, with a 95% confidence level). Rotation speed was the most important source of uncertainty, contributing about 96.2% of the overall uncertainty. Finally, it is important to note that the uncertainty calculated in this paper reflects the expected uncertainty of the dissolution test, and does not consider variations in the content of acetaminophen.
Reliability Assessment of Ultrasonic Nondestructive Inspection Data Using Monte Carlo Simulation
Park, Ik-Keun; Kim, Hyun-Mook
2003-03-01
Ultrasonic NDE is one of the important technologies in the life-time maintenance of nuclear power plants. An ultrasonic inspection system consists of the operator, equipment, and procedure, and the reliability of the system is affected by the capability of each of these components. A performance demonstration round robin was conducted to quantify the capability of ultrasonic inspection for in-service inspection. A small number of teams employing procedures that met or exceeded ASME Sec. XI Code requirements inspected nuclear power plant piping containing various cracks in order to evaluate the capability of detection and sizing. In this paper, a statistical reliability assessment of ultrasonic nondestructive inspection data using Monte Carlo simulation is presented. The results of the probability of detection (POD) analysis using Monte Carlo simulation are compared to those of a logistic probability model. From these results, Monte Carlo simulation was found to be very useful for reliability assessment with small NDE hit/miss data sets.
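The Monte Carlo treatment of small hit/miss data sets can be illustrated with a bootstrap at a single flaw size: resample the detection outcomes many times to put an empirical confidence interval around the POD point estimate. This is a generic sketch, not the paper's specific analysis:

```python
import random

def pod_bootstrap(hits, trials, n_boot=5000, seed=17):
    """Monte Carlo (parametric bootstrap) sketch of POD estimation from
    hit/miss data at one flaw size: returns the point estimate and a
    percentile 90% confidence interval for the probability of detection."""
    rng = random.Random(seed)
    pod = hits / trials
    draws = sorted(sum(1 for _ in range(trials) if rng.random() < pod) / trials
                   for _ in range(n_boot))
    return pod, (draws[int(0.05 * n_boot)], draws[int(0.95 * n_boot)])
```

With, say, 27 detections in 30 trials, the simulated interval makes visible how wide the uncertainty remains for small round-robin data sets, which is the comparison the abstract draws against the fitted logistic POD model.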
PENELOPE, an algorithm and computer code for Monte Carlo simulation of electron-photon showers
Energy Technology Data Exchange (ETDEWEB)
Salvat, F.; Fernandez-Varea, J.M.; Baro, J.; Sempau, J.
1996-07-01
The FORTRAN 77 subroutine package PENELOPE performs Monte Carlo simulation of electron-photon showers in arbitrary materials for a wide energy range, from 1 keV to several hundred MeV. Photon transport is simulated by means of the standard, detailed simulation scheme. Electron and positron histories are generated on the basis of a mixed procedure, which combines detailed simulation of hard events with condensed simulation of soft interactions. A simple geometry package permits the generation of random electron-photon showers in material systems consisting of homogeneous bodies limited by quadric surfaces, i.e. planes, spheres, cylinders, etc. This report is intended not only to serve as a manual for the simulation package, but also to provide the user with the necessary information to understand the details of the Monte Carlo algorithm. (Author) 108 refs.
PENELOPE, an algorithm and computer code for Monte Carlo simulation of electron-photon showers
Energy Technology Data Exchange (ETDEWEB)
Salvat, F.; Fernandez-Varea, J.M.; Baro, J.; Sempau, J.
1996-10-01
The FORTRAN 77 subroutine package PENELOPE performs Monte Carlo simulation of electron-photon showers in arbitrary materials for a wide energy range, from ~1 keV to several hundred MeV. Photon transport is simulated by means of the standard, detailed simulation scheme. Electron and positron histories are generated on the basis of a mixed procedure, which combines detailed simulation of hard events with condensed simulation of soft interactions. A simple geometry package permits the generation of random electron-photon showers in material systems consisting of homogeneous bodies limited by quadric surfaces, i.e. planes, spheres, cylinders, etc. This report is intended not only to serve as a manual for the simulation package, but also to provide the user with the necessary information to understand the details of the Monte Carlo algorithm.
SU-E-T-238: Monte Carlo Estimation of Cerenkov Dose for Photo-Dynamic Radiotherapy
Energy Technology Data Exchange (ETDEWEB)
Chibani, O; Price, R; Ma, C [Fox Chase Cancer Center, Philadelphia, PA (United States); Eldib, A [Fox Chase Cancer Center, Philadelphia, PA (United States); University Cairo (Egypt); Mora, G [de Lisboa, Codex, Lisboa (Portugal)
2014-06-01
Purpose: Estimation of the Cerenkov dose from high-energy megavoltage photon and electron beams in tissue and its impact on radiosensitization using Protoporphyrin IX (PpIX) for tumor-targeting enhancement in radiotherapy. Methods: The GEPTS Monte Carlo code is used to generate dose distributions from an 18 MV Varian photon beam and generic high-energy (45-MV) photon and (45-MeV) electron beams in a voxel-based tissue-equivalent phantom. In addition to calculating the ionization dose, the code scores the Cerenkov energy released in the wavelength range 375–425 nm, corresponding to the peak of the PpIX absorption spectrum (Fig. 1), using the Frank-Tamm formula. Results: The simulations show that the produced Cerenkov dose suitable for activating PpIX is 4000 to 5500 times lower than the overall radiation dose for all considered beams (18 MV, 45 MV and 45 MeV). These results were contradictory to the recent experimental studies by Axelsson et al. (Med. Phys. 38 (2011) p 4127), where the Cerenkov dose was reported to be only two orders of magnitude lower than the radiation dose. Note that our simulation results can be corroborated by a simple model in which the Frank-Tamm formula is applied for electrons with a 2 MeV/cm stopping power generating Cerenkov photons in the 375–425 nm range, assuming these photons have less than 1 mm penetration in tissue. Conclusion: The Cerenkov dose generated by high-energy photon and electron beams may produce minimal clinical effect in comparison with the photon fluence (or dose) commonly used for photo-dynamic therapy. At the present time, it is unclear whether Cerenkov radiation is a significant contributor to the recently observed tumor regression for patients receiving radiotherapy and PpIX versus patients receiving radiotherapy only. The ongoing study will include animal experimentation and investigation of dose rate effects on PpIX response.
Alashrah, Saleh; Kandaiya, Sivamany; Maalej, Nabil; El-Taher, A
2014-12-01
Estimation of the surface dose is very important for patients undergoing radiation therapy. The purpose of this study is to investigate the dose at the surface of a water phantom at a depth of 0.007 cm, as recommended by the International Commission on Radiological Protection and the International Commission on Radiation Units and Measurements, with radiochromic films (RFs), thermoluminescent dosemeters (TLDs) and an ionisation chamber in a 6-MV photon beam. The results were compared with the theoretical calculation using Monte Carlo (MC) simulation software (MCNP5, BEAMnrc and DOSXYZnrc). The RF was calibrated by placing the films at the depth of maximum dose (d(max)) in a solid water phantom and exposing them to doses from 0 to 500 cGy. The films were scanned using a transmission high-resolution HP scanner. The optical density of the film was obtained from the red component of the RGB images using ImageJ software. The per cent surface dose (PSD) and percentage depth dose (PDD) curves were obtained by placing film pieces at the surface and at different depths in the solid water phantom. TLDs were placed at a depth of 10 cm in a solid water phantom for calibration. The TLDs were then placed at different depths in the water phantom and were exposed to obtain the PDD. The obtained PSD and PDD values were compared with those obtained using a cylindrical ionisation chamber. The PSD was also determined using Monte Carlo simulation of a LINAC 6-MV photon beam. The extrapolation method was used to determine the PSD for all measurements. The PSD was 15.0±3.6% for RF. The TLD measurement of the PSD was 16.0±5.0%. The (0.6 cm³) cylindrical ionisation chamber measurement of the PSD was 50.0±3.0%. The theoretical calculations using MCNP5 and DOSXYZnrc yielded PSDs of 15.0±2.0% and 15.7±2.2%. In this study, good agreement was observed between the PSD measurements using RF and TLDs and the Monte Carlo calculation. However, the cylindrical chamber measurement yielded an overestimate of the PSD.
Monte-Carlo simulation of a stochastic differential equation
Arif, ULLAH; Majid, KHAN; M, KAMRAN; R, KHAN; Zhengmao, SHENG
2017-12-01
For solving higher-dimensional diffusion equations with an inhomogeneous diffusion coefficient, Monte Carlo (MC) techniques are considered to be more effective than other algorithms, such as the finite element method or the finite difference method. The inhomogeneity of the diffusion coefficient strongly limits the use of different numerical techniques. For better convergence, higher-order methods have been put forward to allow MC codes to take large step sizes. The main focus of this work is to look for operators that can produce converging results for large step sizes. As a first step, our comparative analysis is applied to a general stochastic problem. Subsequently, our formulation is applied to the problem of pitch angle scattering resulting from Coulomb collisions of charged particles in toroidal devices.
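As a minimal illustration of the MC treatment of a stochastic differential equation, the sketch below integrates a one-dimensional SDE with the Euler-Maruyama scheme (the simplest of the candidate operators; the higher-order schemes and the pitch-angle scattering problem of the paper are beyond this sketch) and checks the sample mean against the exact Ornstein-Uhlenbeck solution:

```python
import math
import random

def euler_maruyama(x0, drift, diff, dt, n_steps, rng):
    """One path of dX = drift(X) dt + diff(X) dW, Euler-Maruyama scheme."""
    x = x0
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))   # Brownian increment
        x += drift(x) * dt + diff(x) * dw
    return x

# Toy check: Ornstein-Uhlenbeck process dX = -X dt + dW, whose exact
# mean at time T is x0 * exp(-T).
rng = random.Random(12345)
T, dt = 1.0, 0.01
paths = [euler_maruyama(1.0, lambda x: -x, lambda x: 1.0,
                        dt, int(T / dt), rng) for _ in range(4000)]
mean_T = sum(paths) / len(paths)             # close to exp(-1)
```

An inhomogeneous diffusion coefficient simply means `diff(x)` varies with position; the scheme is unchanged, but larger step sizes then require the higher-order operators discussed in the abstract.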
CloudMC: a cloud computing application for Monte Carlo simulation.
Miras, H; Jiménez, R; Miras, C; Gomà, C
2013-04-21
This work presents CloudMC, a cloud computing application developed in Windows Azure®, the platform of the Microsoft® cloud, for the parallelization of Monte Carlo simulations in a dynamic virtual cluster. CloudMC is a web application designed to be independent of the Monte Carlo code on which the simulations are based; the simulations just need to be of the form: input files → executable → output files. To study the performance of CloudMC in Windows Azure®, Monte Carlo simulations with PENELOPE were performed on different instance (virtual machine) sizes and for different numbers of instances. The instance size was found to have no effect on the simulation runtime. It was also found that the decrease in time with the number of instances followed Amdahl's law, with a slight deviation due to the increase in the fraction of non-parallelizable time with an increasing number of instances. A simulation that would have required 30 h of CPU time on a single instance was completed in 48.6 min when executed on 64 instances in parallel (a speedup of 37×). Furthermore, the use of cloud computing for parallel computing offers some advantages over conventional clusters: high accessibility, scalability and pay-per-usage. Therefore, it is strongly believed that cloud computing will play an important role in making Monte Carlo dose calculation a reality in future clinical practice.
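The Amdahl's-law behaviour reported above can be checked in a few lines: inverting the law for the reported 37× speedup on 64 instances recovers the implied parallelizable fraction (the 37 and 64 are from the abstract; the inversion formula is standard algebra):

```python
def amdahl_speedup(parallel_fraction, n):
    """Amdahl's law: speedup on n instances for a given parallel fraction."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n)

def parallel_fraction_from_speedup(speedup, n):
    """Invert Amdahl's law: p = (1 - 1/s) * n / (n - 1)."""
    return (1.0 - 1.0 / speedup) * n / (n - 1)

# The reported 37x speedup on 64 instances implies that roughly 98.8% of
# the work is parallelizable, i.e. about 1.2% is serial.
p = parallel_fraction_from_speedup(37.0, 64)
```

The abstract's observation that the non-parallelizable fraction grows with the number of instances means `p` itself would shrink slightly as `n` increases, producing the reported deviation from the ideal curve.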
Monte Carlo simulation of optical coherence tomography signal of the skin nevus
Dolganova, Irina N.; Neganova, Aleksandra S.; Kudrin, Konstantin G.; Zaytsev, Kirill I.; Reshetov, Igor V.
2016-01-01
Monte Carlo (MC) numerical simulation of light propagation in living tissue is widely used for characterization of optical coherence tomography (OCT) signals. Using MC simulation we obtained OCT images of skin with a dysplastic nevus. Two different depth positions of the skin nevus were considered. Comparing these simulated OCT images with an image of skin without a nevus, we show that OCT allows a dysplastic nevus to be detected at different stages of its development.
Directory of Open Access Journals (Sweden)
Liang Li
2016-06-01
Full Text Available Monte Carlo simulations of a three-dimensional Ising model were performed to study the relationship between magnetic Barkhausen noise and elastic stress in steel. The magnetization process was simulated and the dimensionless magnetic Barkhausen noise was calculated by differentiation of the magnetization. The coupling constant of energy exchange in the Ising model is taken to be inversely proportional to the applied tensile stress. The simulation results show that as the coupling constant decreases, the magnetic Barkhausen noise increases, consistent with the experimental results.
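A minimal sketch of the Metropolis kernel at the heart of such Ising simulations (2D here for brevity, whereas the study uses a 3D model; the stress enters only through the coupling constant J, and the Barkhausen signal would then come from differentiating the magnetization along a field sweep):

```python
import math
import random

def metropolis_sweep(spins, L, J, T, rng):
    """One Metropolis sweep of an L x L Ising lattice.  J plays the role
    of the exchange coupling that the study takes as inversely
    proportional to the applied tensile stress."""
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j] +
              spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2.0 * J * spins[i][j] * nn          # energy cost of a flip
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            spins[i][j] *= -1

L, J, T = 16, 1.0, 1.0                 # T well below Tc ~ 2.27 J (2D)
rng = random.Random(7)
spins = [[1] * L for _ in range(L)]    # ordered start
for _ in range(100):
    metropolis_sweep(spins, L, J, T, rng)
m = abs(sum(sum(row) for row in spins)) / (L * L)   # magnetization per spin
```

Below the critical temperature the lattice stays ordered (m near 1); lowering J at fixed T moves the system toward disorder, which is the mechanism behind the stress dependence reported above.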
Meng, Xuehui; Huang, Yixiang; Wu, Shaolong; Liu, Qing
2014-06-01
To explore the application of Monte Carlo simulation in optimizing and adjusting the reimbursement scheme of the New Rural Cooperative Medical System (NCMS), and thereby guide its implementation; optimization of the reimbursement scheme in rural areas of China was also studied. A multi-stage sampling household survey was conducted in Sihui county, Guangdong province, with 4 433 rural residents from 1 179 households in 13 towns surveyed using a self-designed questionnaire. A probit regression model was applied to fit the data and estimate the own-price and cross-price elasticities of healthcare demand for both outpatients and inpatients. A Monte Carlo simulation model was constructed to estimate the effects of various alternative reimbursement schemes, with one thousand replications, each sampling five hundred households. In this way, the reimbursement scheme implemented in Sihui county was optimized. The own-price elasticities of demand for outpatient visits and for inpatient visits in the township hospital center, secondary hospital and tertiary hospital were -0.174, -0.264, -0.675 and -0.429, respectively. Outpatient demand was affected by the per-visit price of the township hospital center and secondary hospital, with cross-price elasticities of demand for outpatient visits of 0.125 and 0.150. Under Scheme B7, the efficiency of the NCMS fund was 17.85%, the reimbursement ratio for healthcare was 25.63%, and the reduction in poverty caused by illness was 18.25%, compared with 9.37% under the implemented Scheme A, so the implemented scheme was in need of optimization. Monte Carlo simulation proved applicable for simulating the effects of optimized alternative NCMS reimbursement schemes, and it provides a new idea and method for optimizing and adjusting such schemes.
Fairbanks, Hillary R.; Doostan, Alireza; Ketelsen, Christian; Iaccarino, Gianluca
2017-07-01
Multilevel Monte Carlo (MLMC) is a recently proposed variation of Monte Carlo (MC) simulation that achieves variance reduction by simulating the governing equations on a series of spatial (or temporal) grids with increasing resolution. Instead of directly employing the fine grid solutions, MLMC estimates the expectation of the quantity of interest from the coarsest grid solutions as well as differences between each pair of consecutive grid solutions. When the differences corresponding to finer grids become smaller, hence less variable, fewer MC realizations of finer grid solutions are needed to compute the difference expectations, thus leading to a reduction in the overall work. This paper presents an extension of MLMC, referred to as multilevel control variates (MLCV), where a low-rank approximation to the solution on each grid, obtained primarily from coarser grid solutions, is used as a control variate for estimating the expectations involved in MLMC. Cost estimates as well as numerical examples are presented to demonstrate the advantage of this new MLCV approach over standard MLMC when the solution of interest admits a low-rank approximation and the cost of simulating finer grids grows fast.
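The MLMC telescoping estimator described above can be illustrated on a toy problem where the "level-l solver" is simply a grid-snapped evaluation of x² (a stand-in for a coarse PDE solve; the low-rank control variates of the paper are not included in this sketch):

```python
import random

def payoff(x, level):
    """Level-l approximation of x**2: snap x to a grid of spacing
    2**-(level+2) before squaring (a stand-in for a coarse solver)."""
    h = 2.0 ** -(level + 2)
    return (h * int(x / h)) ** 2

def mlmc_estimate(n_per_level, rng):
    """Telescoping sum: E[finest level] = E[level 0] + sum of corrections
    E[level l - level (l-1)], each estimated on its own sample set."""
    est = 0.0
    for level, n in enumerate(n_per_level):
        acc = 0.0
        for _ in range(n):
            x = rng.random()
            if level == 0:
                acc += payoff(x, 0)
            else:
                # fine and coarse evaluated on the SAME sample, so the
                # correction has small variance
                acc += payoff(x, level) - payoff(x, level - 1)
        est += acc / n
    return est

rng = random.Random(99)
# Many cheap samples on the coarse level, few on the fine corrections.
est = mlmc_estimate([20000, 5000, 2000, 1000, 500], rng)   # E[x^2] = 1/3
```

The decreasing sample counts per level are exactly the work-saving mechanism the abstract describes: the corrections shrink with level, so fewer fine-grid realizations are needed.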
Varouchakis, Emmanouil; Hristopulos, Dionissios
2013-04-01
estimates. Since pumping tests are not available, we determine the radius of influence using an empirical equation (Bear 1979) that involves the drawdown at the well face, the hydraulic conductivity around the pumping well, and the initial saturated thickness. Since the local variation of the drawdown and the hydraulic conductivity is not known, we use uniform values based on the Monte Carlo analysis below. The initial saturated thickness for all 70 wells is assumed to follow a linear trend estimated from the 10 piezometer readings and from the geological cross-sections available for the basin. Using linear regression analysis of the mean annual groundwater level, we estimate the rate of mean annual level decrease at 1.85 m/yr, with a 95% confidence interval of [1.60, 2.10] m/yr. The optimal hydraulic conductivity over the drawdown and hydraulic conductivity parameter space is determined by means of Monte Carlo sensitivity analysis and leave-one-out cross validation that focus on the reproduction of the measured head values. The head values removed during the validation procedure are estimated using RK. The mean absolute error (MAE) is used as the criterion of optimal performance. The hydraulic head trend function is estimated for each combination of the hydraulic conductivity and the drawdown. The residuals are modeled using several semivariogram models for each tested realization of the hydraulic conductivity and the drawdown. The Monte Carlo simulations show that the MAE is primarily sensitive to the variation of the hydraulic conductivity and less to the drawdown. The minimum MAE is obtained for a hydraulic conductivity of 0.00015 m/s and a drawdown equal to 1.85 m. The recently proposed Spartan semivariogram models for the residuals provide the most accurate estimates. Based on the above procedure, the radius of influence is determined to lie between 105 m and 160 m.
The approach described above improves the MAE by 14% and the RMSE by 10% compared to similar
Rehfeld, Niklas S; Vauclin, Sébastien; Stute, Simon; Buvat, Irène
2010-06-21
Accurate modeling of system response and scatter distribution is crucial for image reconstruction in emission tomography. Monte Carlo simulations are very well suited to calculate these quantities. However, Monte Carlo simulations are also slow and many simulated counts are needed to provide a sufficiently exact estimate of the detection probabilities. In order to overcome these problems, we propose to split the simulation into two parts, the detection system and the object to be imaged (the patient). A so-called 'virtual boundary' that separates these two parts is introduced. Within the patient, particles are simulated conventionally. Whenever a photon reaches the virtual boundary, its detection probability is calculated analytically by evaluating a multi-dimensional B-spline that depends on the photon position, direction and energy. The unknown B-spline knot values that define this B-spline are fixed by a prior 'pre-' simulation that needs to be run once for each scanner type. After this pre-simulation, the B-spline model can be used in any subsequent simulation with different patients. We show that this approach yields accurate results when simulating the Biograph 16 HiREZ PET scanner with Geant4 Application for Emission Tomography (GATE). The execution time is reduced by a factor of about 22× (scanner with voxelized phantom) to 30× (empty scanner) with respect to conventional GATE simulations of the same statistical uncertainty. The pre-simulation and calculation of the B-spline knot values could be performed within half a day on a medium-sized cluster.
MONTE CARLO METHOD AND APPLICATION IN @RISK SIMULATION SYSTEM
Directory of Open Access Journals (Sweden)
Gabriela Ižaríková
2015-12-01
Full Text Available The article presents an example of using the simulation software @Risk, designed for simulation in a Microsoft Excel spreadsheet, and demonstrates its usage as a universal method of solving problems. Simulation means experimenting with computer models based on a real production process in order to optimize the production processes or the system. A simulation model allows performing a number of experiments, analysing, evaluating and optimizing them, and afterwards applying the results to the real system. In general, a simulation model represents the modelled system by mathematical formulations and logical relations. In the model it is possible to distinguish controlled inputs (for instance, investment costs) and random inputs (for instance, demand), which are transformed by the model into outputs (for instance, the mean value of profit). In a simulation experiment, the controlled inputs are chosen at the beginning and the random (stochastic) inputs are generated randomly. Simulations belong to the quantitative tools that can be used as a support for decision making.
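A spreadsheet-style simulation of the kind @Risk automates can be sketched in a few lines; the cost, margin and demand figures below are invented for illustration:

```python
import random

def simulate_profit(n_trials, rng):
    """Toy @Risk-style model: profit = demand * margin - fixed_cost,
    with demand as the random (stochastic) input and fixed_cost and
    margin as controlled inputs.  All numbers are illustrative."""
    fixed_cost, margin = 50_000.0, 12.0          # controlled inputs
    profits = []
    for _ in range(n_trials):
        demand = rng.gauss(10_000.0, 1_500.0)    # random input
        profits.append(demand * margin - fixed_cost)
    return profits

rng = random.Random(2024)
profits = simulate_profit(50_000, rng)
mean_profit = sum(profits) / len(profits)        # analytic mean: 70,000
```

The full distribution of `profits` (not just its mean) is what supports the decision making mentioned in the abstract, e.g. via percentiles or the probability of a loss.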
Monte Carlo Simulation Investigating Threading of Poly(ethylene oxide) in the Melt
Helfer, Carin; Xu, Guoqiang; Mattice, Wayne; Pugh, Coleen
2003-03-01
Polyrotaxanes are composed of multiple cyclic compounds threaded on a linear polymer. The unusual physical and chemical properties predicted to result from the lack of covalent bonds between the cyclic and linear components have generated interest in synthesizing polyrotaxanes. In the current study, a coarse-grained Monte Carlo method on the second nearest neighbor diamond lattice (2nnd lattice) is established to investigate the threading of cyclic PEO molecules by linear PEO chains in the melt. This method allows reverse-mapping of the coarse-grained chains to fully atomistic detail in continuous space at any interval of the simulation. The short-range interactions that result from the local chain conformation are based on the Rotational Isomeric State model. A discretized Lennard-Jones (LJ) potential energy function is used to describe the long-range interactions, with LJ parameters estimated because experimental values are not available. The construction and validation of the simulation will be presented. Results for the fraction of cyclics threaded as a function of either Xc, the mass fraction of cyclics, or Nc, the size of the cyclics, will be discussed. * Supported by funds from the NASA John H. Glenn Research Center.
Extreme Value Predictions using Monte Carlo Simulations with Artificially Increased Wave Height
DEFF Research Database (Denmark)
Jensen, Jørgen Juncher
2010-01-01
It is well known from linear analyses in a stochastic seaway that the mean out-crossing rate ν(r) of a level r is given by ν(0)exp(-0.5β²), where the reliability index β = r/Sr. Here Sr is the standard deviation of the response and, hence, linearly dependent on the significant wave height Hs. For non-linear processes the reliability index depends non-linearly on the response level r, and a good estimate can be found using the First Order Reliability Method (FORM). The reliability index from the FORM analysis is still strictly inversely proportional to the severity of the sea state, i.e. β = c(r)/Hs. A more … height can be used in Monte Carlo simulations to increase the out-crossing rates and thus reduce the necessary length of the time domain simulations by applying a larger significant wave height than actually required for design. The mean out-crossing rate thus obtained can then afterwards be scaled down …
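The scaling idea can be sketched directly from the stated relation ν(r) = ν(0)exp(-0.5β²) with β = c(r)/Hs: estimate c from a run at an inflated Hs, then evaluate the rate at the design Hs (the numbers below are illustrative):

```python
import math

def outcrossing_rate(nu0, c, hs):
    """Mean out-crossing rate nu(r) = nu0 * exp(-0.5 * beta**2), with the
    reliability index beta = c / Hs as stated in the abstract."""
    beta = c / hs
    return nu0 * math.exp(-0.5 * beta ** 2)

def scale_rate(rate_inflated, nu0, hs_inflated, hs_design):
    """Recover the design-sea-state rate from a simulation run at an
    artificially increased significant wave height."""
    beta_inflated = math.sqrt(-2.0 * math.log(rate_inflated / nu0))
    c = beta_inflated * hs_inflated            # invert beta = c / Hs
    return outcrossing_rate(nu0, c, hs_design)

# Round trip: a rate "simulated" at Hs = 12 m scales back to Hs = 9 m.
nu0, c = 0.1, 30.0                             # illustrative values
r12 = outcrossing_rate(nu0, c, 12.0)
r9 = scale_rate(r12, nu0, 12.0, 9.0)
```

In the actual method, `r12` would be an empirical out-crossing count from a Monte Carlo run at the inflated wave height; because the inflated rate is much larger, far shorter time-domain simulations suffice.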
Electron energy distributions through superdense matter by Monte-Carlo simulations
Directory of Open Access Journals (Sweden)
Okabayashi A.
2013-11-01
Full Text Available We have studied the energy distribution of fast electrons passing through a highly compressed core plasma for fast ignition research in inertial confinement fusion. Recent PIC calculations indicate that the collective effect of electric and magnetic fields on the transport may be less significant than binary collisions in the case of a high density fusion pellet. In order to understand the net effect of binary collisions in dense plasma, we calculate electron energy distributions at several viewing angles using an electromagnetic cascade Monte-Carlo simulation, EGS5, to estimate the contribution of multi-collisional processes. Here, the physical parameters in the code were taken from the results of two-dimensional particle-in-cell simulations. As a result, the number of electrons detected on the laser axis in the energy range up to 15 MeV significantly decreases for the superdense region (maximum density 1.6⋅10^25 cm^-3) compared with the low density plasma. The reduction in electron number decreases gradually with increasing observation angle, and the numbers almost coincide beyond 40 degrees.
Energy Technology Data Exchange (ETDEWEB)
Vrugt, Jasper A [Los Alamos National Laboratory; Hyman, James M [Los Alamos National Laboratory; Robinson, Bruce A [Los Alamos National Laboratory; Higdon, Dave [Los Alamos National Laboratory; Ter Braak, Cajo J F [NETHERLANDS; Diks, Cees G H [UNIV OF AMSTERDAM
2008-01-01
Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well constructed MCMC schemes to the appropriate limiting distribution under a variety of different conditions. In practice, however, this convergence is often observed to be disturbingly slow. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves in the Markov chain. Here we show that significant improvements to the efficiency of MCMC simulation can be made by using a self-adaptive Differential Evolution learning strategy within a population-based evolutionary framework. This scheme, entitled DiffeRential Evolution Adaptive Metropolis or DREAM, runs multiple different chains simultaneously for global exploration, and automatically tunes the scale and orientation of the proposal distribution in randomized subspaces during the search. Ergodicity of the algorithm is proved, and various examples involving nonlinearity, high-dimensionality, and multimodality show that DREAM is generally superior to other adaptive MCMC sampling approaches. The DREAM scheme significantly enhances the applicability of MCMC simulation to complex, multi-modal search problems.
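A stripped-down differential-evolution MCMC in the spirit of DREAM (chain-difference proposals only, without the randomized subspaces and adaptive tuning of the full algorithm) might look like the following, sampling a standard normal as a sanity check:

```python
import math
import random

def log_target(x):
    """Log-density of the standard normal target, up to a constant."""
    return -0.5 * x * x

def de_mc(n_chains, n_iter, rng):
    """Minimal differential-evolution MCMC: each chain proposes a jump
    along the difference of two randomly chosen other chains, so the
    proposal scale and orientation adapt to the population spread."""
    gamma = 2.38 / math.sqrt(2.0)          # standard DE-MC scale for d = 1
    chains = [rng.gauss(0.0, 5.0) for _ in range(n_chains)]
    samples = []
    for it in range(n_iter):
        for i in range(n_chains):
            a, b = rng.sample([j for j in range(n_chains) if j != i], 2)
            prop = (chains[i] + gamma * (chains[a] - chains[b])
                    + rng.gauss(0.0, 1e-4))    # small jitter keeps the chain ergodic
            accept = math.exp(min(0.0, log_target(prop) - log_target(chains[i])))
            if rng.random() < accept:          # Metropolis acceptance
                chains[i] = prop
        if it >= n_iter // 2:                  # discard the first half as burn-in
            samples.extend(chains)
    return samples

samples = de_mc(8, 3000, random.Random(3))
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

For a standard normal target the pooled samples should have mean near 0 and variance near 1; DREAM builds on this population mechanism with subspace sampling and outlier-chain handling.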
Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr
2012-01-01
Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via the efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap based algorithm was used to simulate the probability distribution of the efficiency gain estimates and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed. Copyright © 2011 Elsevier Ltd. All rights reserved.
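The bootstrap procedure for a confidence interval on an efficiency-gain-like ratio can be sketched as follows; the per-history "scores" here are synthetic stand-ins, not brachytherapy data:

```python
import random

def variance(xs):
    """Unbiased sample variance."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def bootstrap_ci_gain(scores_a, scores_b, n_boot, rng, alpha=0.05):
    """Percentile bootstrap CI for an efficiency-gain-like statistic,
    here the variance ratio var(A)/var(B) of per-history scores (a toy
    stand-in for the correlated-sampling gain in the paper)."""
    gains = []
    for _ in range(n_boot):
        ra = [rng.choice(scores_a) for _ in scores_a]   # resample with replacement
        rb = [rng.choice(scores_b) for _ in scores_b]
        gains.append(variance(ra) / variance(rb))
    gains.sort()
    lo = gains[int(alpha / 2 * n_boot)]
    hi = gains[int((1 - alpha / 2) * n_boot) - 1]
    return variance(scores_a) / variance(scores_b), lo, hi

rng = random.Random(42)
a = [rng.gauss(0.0, 2.0) for _ in range(200)]    # "conventional MC" scores
b = [rng.gauss(0.0, 1.0) for _ in range(200)]    # "correlated sampling" scores
gain, lo, hi = bootstrap_ci_gain(a, b, 2000, rng)  # true variance ratio: 4
```

The heavy-tailed weight distributions discussed in the paper are exactly the situation where such a nonparametric interval is preferable to the F distribution.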
Exploring uncertainty in glacier mass balance modelling with Monte Carlo simulation
Directory of Open Access Journals (Sweden)
H. Machguth
2008-12-01
Full Text Available By means of Monte Carlo simulations we calculated uncertainty in modelled cumulative mass balance over 400 days at one particular point on the tongue of Morteratsch Glacier, Switzerland, using a glacier energy balance model of intermediate complexity. Before uncertainty assessment, the model was tuned to observed mass balance for the investigated time period and its robustness was tested by comparing observed and modelled mass balance over 11 years, yielding very small deviations. Both systematic and random uncertainties are assigned to twelve input parameters and their respective values estimated from the literature or from available meteorological data sets. The calculated overall uncertainty in the model output is dominated by systematic errors and amounts to 0.7 m w.e. or approximately 10% of total melt over the investigated time span. In order to provide a first order estimate on variability in uncertainty depending on the quality of input data, we conducted a further experiment, calculating overall uncertainty for different levels of uncertainty in measured global radiation and air temperature. Our results show that the output of a well calibrated model is subject to considerable uncertainties, in particular when applied for extrapolation in time and space where systematic errors are likely to be an important issue.
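The Monte Carlo uncertainty propagation itself is generic and can be sketched with a toy melt model (a two-parameter degree-day model, not the twelve-parameter energy-balance model of the study); the sampled standard deviation can be checked against the analytic value for a product of independent normals:

```python
import math
import random

def melt_model(ddf, pdd):
    """Toy degree-day melt model: melt = degree-day factor * positive
    degree days.  A stand-in for the energy balance model of the paper."""
    return ddf * pdd

rng = random.Random(11)
n = 20000
melts = []
for _ in range(n):
    ddf = rng.gauss(6.0, 0.5)      # mm w.e. K^-1 d^-1, assumed uncertainty
    pdd = rng.gauss(300.0, 30.0)   # K d, assumed uncertainty
    melts.append(melt_model(ddf, pdd))

mean = sum(melts) / n
std = math.sqrt(sum((m - mean) ** 2 for m in melts) / (n - 1))
# Analytic check for a product of independent normals:
# var = (6*30)^2 + (300*0.5)^2 + (0.5*30)^2 = 55125, std ~ 234.8
```

Systematic errors, which dominate in the study, would be handled by shifting the input means rather than widening the sampled spreads.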
Directory of Open Access Journals (Sweden)
Lars Kreutzburg
2017-02-01
Full Text Available The total enthalpies of the 16 different spin configurations that can be realized in the unit cell of the archetype spin crossover complex [Fe(phen)2(NCS)2] (phen = 1,10-phenanthroline) were calculated, applying periodic density functional theory combined with the Hubbard model and the Grimme-D2 dispersion correction (DFT+U+D2). The obtained enthalpy differences between the individual spin configurations were used to determine the spin couplings of an Ising-like model, and subsequent Monte Carlo simulations for this model allowed the estimation of the phenomenological interaction parameter Γ of the Slichter-Drickamer model, which is commonly used to describe the cooperativity of the spin transition. The calculation procedure described here, which led to an estimate of about 3 kJ·mol^-1 for Γ, in good agreement with experiment, may be used to predict from first principles how modifications of spin crossover complexes can change the character of the spin transition from gradual to abrupt and vice versa.
An Ab Initio and Kinetic Monte Carlo Simulation Study of Lithium Ion Diffusion on Graphene
Directory of Open Access Journals (Sweden)
Kehua Zhong
2017-07-01
Full Text Available The Li+ diffusion coefficients in Li+-adsorbed graphene systems were determined by combining first-principles calculations based on density functional theory with kinetic Monte Carlo simulations. The calculated results indicate that the interactions between Li ions have a very important influence on lithium diffusion. Based on energy barriers obtained directly from first-principles calculations for single-Li+ and two-Li+ adsorbed systems, a new equation predicting energy barriers for more than two Li ions was deduced. Furthermore, it is found that the temperature dependence of the Li+ diffusion coefficients fits the Arrhenius equation well, rather than the equation from electrochemical impedance spectroscopy (EIS) that is applied to estimate experimental diffusion coefficients. Moreover, the calculated results also reveal that the Li+ concentration dependence of the diffusion coefficients roughly fits the EIS equation in the low concentration region, but deviates seriously from it in the high concentration region. The EIS equation therefore cannot simply be used to estimate the Li+ diffusion coefficient for all Li+-adsorbed graphene systems with various Li+ concentrations. Our work suggests that interactions between Li ions, and between Li ions and host atoms, influence Li+ diffusion, making the concentration dependence of the Li+ diffusion coefficient complex.
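A minimal kinetic Monte Carlo sketch for a single non-interacting Li ion on a square lattice (ignoring the ion-ion interactions that the paper shows to matter) illustrates the rate-based hopping and the extraction of D from the mean square displacement; the attempt frequency and barrier below are illustrative:

```python
import math
import random

def kmc_walk(rate, n_hops, rng, a=1.0):
    """Kinetic Monte Carlo walk of one ion on a square lattice with hop
    rate `rate` per direction; waiting times are exponential with the
    total rate.  Returns squared displacement and elapsed time."""
    x = y = t = 0.0
    total_rate = 4.0 * rate
    for _ in range(n_hops):
        t += -math.log(1.0 - rng.random()) / total_rate   # waiting time
        dx, dy = rng.choice([(a, 0.0), (-a, 0.0), (0.0, a), (0.0, -a)])
        x += dx
        y += dy
    return x * x + y * y, t

def diffusion_coefficient(rate, n_walkers, n_hops, rng):
    """Estimate D from the mean square displacement: D = MSD / (4 t)."""
    msd = tsum = 0.0
    for _ in range(n_walkers):
        r2, t = kmc_walk(rate, n_hops, rng)
        msd += r2
        tsum += t
    return (msd / n_walkers) / (4.0 * tsum / n_walkers)

# Arrhenius rate = nu * exp(-Ea / (kB * T)); for a free walker with hop
# length a = 1, D should equal a^2 * rate analytically.
rng = random.Random(5)
rate = 1.0 * math.exp(-0.3 / (8.617e-5 * 600.0))   # nu=1, Ea=0.3 eV, T=600 K
D = diffusion_coefficient(rate, 500, 200, rng)
```

With interacting ions, the hop barriers (and hence rates) depend on the local configuration, which is where the paper's barrier-prediction equation enters.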
Souris, Kevin; Lee, John Aldo; Sterpin, Edmond
2016-04-01
Accuracy in proton therapy treatment planning can be improved using Monte Carlo (MC) simulations. However, the long computation time of such methods hinders their use in clinical routine. This work aims to develop a fast multipurpose Monte Carlo simulation tool for proton therapy using massively parallel central processing unit (CPU) architectures. A new Monte Carlo code, called MCsquare (many-core Monte Carlo), has been designed and optimized for the latest generation of Intel Xeon processors and Intel Xeon Phi coprocessors. These massively parallel architectures offer the flexibility and the computational power suited to MC methods. The class-II condensed history algorithm of MCsquare provides a fast and yet accurate method of simulating heavy charged particles such as protons, deuterons, and alphas inside voxelized geometries. Hard ionizations, with energy losses above a user-specified threshold, are simulated individually, while soft events are regrouped in a multiple scattering theory. Elastic and inelastic nuclear interactions are sampled from ICRU 63 differential cross sections, thereby allowing for the computation of prompt gamma emission profiles. MCsquare has been benchmarked against the GATE/GEANT4 Monte Carlo application for homogeneous and heterogeneous geometries. Comparisons with GATE/GEANT4 for various geometries show deviations within 2%/1 mm. In spite of the limited memory bandwidth of the coprocessor, the simulation time is below 25 s for 10^7 primary 200 MeV protons in average soft tissues using all Xeon Phi and CPU resources embedded in a single desktop unit. MCsquare exploits the flexibility of CPU architectures to provide a multipurpose MC simulation tool. Optimized code enables the use of accurate MC calculation within a reasonable computation time, adequate for clinical practice. MCsquare also simulates prompt gamma emission and can thus also be used for in vivo range verification.
Monte Carlo simulation of three-dimensional islands
Tan, Sovirith; Lam, Pui-Man
1999-09-01
The usual kinetic Monte Carlo method is adapted to treat off-lattice problems of multilayer growth (coverage θ>1) by molecular-beam epitaxy. This method takes into account the Schwoebel barrier, which emerges naturally from the choice of the interaction potential between the atoms. The method allows a free choice of the lattice mismatch, temperature, deposition flux rate, and interfacial energies. One particular choice of these parameters leads to the three-dimensional (3D) (Volmer-Weber) growth mode, whereas another leads to the 2D-3D (Stranski-Krastanov) growth mode. The 3D islands seem to obey scaling only approximately. Using this method, the surface stress inside a substrate and a (pyramidal) coherent 3D island is computed. Strong relaxations appear not only at the edges of the 3D island (which is expected), but also in the proximity of the edges and inside the 3D island. These particular sites inside the 3D island are located just beneath a step site of the upper layer. Moreover, these sites develop strong corrugations, which later propagate along the layer. Strain-induced modulation of layers is thermally activated, so the steps could act as defects and nucleation sites for propagating roughness, in agreement with some theories and experimental facts.
Monte Carlo simulations for generic granite repository studies
Energy Technology Data Exchange (ETDEWEB)
Chu, Shaoping [Los Alamos National Laboratory; Lee, Joon H [SNL; Wang, Yifeng [SNL
2010-12-08
In a collaborative study between Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL) for the DOE-NE Office of Fuel Cycle Technologies Used Fuel Disposition (UFD) Campaign project, we have conducted preliminary system-level analyses to support the development of a long-term strategy for geologic disposal of high-level radioactive waste. A general modeling framework consisting of a near-field and a far-field submodel for a granite GDSE was developed. A representative far-field transport model for a generic granite repository was merged with an integrated systems (GoldSim) near-field model. Integrated Monte Carlo model runs with the combined near- and far-field transport models were performed, and the parameter sensitivities were evaluated for the combined system. In addition, a subset of radionuclides that are potentially important to repository performance was identified and evaluated in a series of model runs. The analyses were conducted with different waste inventory scenarios. Analyses were also conducted for different repository radionuclide release scenarios. While the results to date are for a generic granite repository, the work establishes the method to be used in the future to provide guidance on the development of a strategy for long-term disposal of high-level radioactive waste in a granite repository.
González, W.; Lallena, A. M.; Alfonso, R.
2011-06-01
Micro-multileaf collimators are devices added to LINAC heads for stereotactic radiosurgery. In this work, the performance of an Elekta Precise LINAC with a dynamic micro-multileaf collimator manufactured by 3D-line has been studied. Monte Carlo simulations based on the PENELOPE code and measurements with three different detectors (PTW Semiflex 31010 chamber, PTW PinPoint 31016 chamber and PTW Diode 60008) have been carried out. Simulations were tuned by reproducing the experimental TPR20,10 quality index, providing a good description of both the PDD curve and the transverse profiles at the two measured depths. The geometry of the micro-multileaf collimator was tested by calculating the transmission through it, and the leaf separation indicated by the manufacturer had to be significantly reduced to reproduce the experimental results. An approximate simulation, in which the transport of the particles traversing the dynamic micro-multileaf collimator was described in a simplified way, was analyzed and provided good agreement with the full simulations. With the MC model fixed, output factors for various field sizes were calculated and compared to the experimental ones, obtaining good agreement. Percentage depth doses (PDDs) and transverse profiles at two depths measured with the diode for small fields were well reproduced by the simulation, while the measurements performed with the PinPoint chamber showed differences in the PDDs at large depths and in the transverse profiles at the penumbra. Monte Carlo simulations and Semiflex and diode measurements performed for a 7.0 cm × 7.0 cm field were in good agreement, while those obtained with the PinPoint chamber showed differences that increased with depth in water. At the phantom entrance, all measurements showed non-negligible differences, which makes Monte Carlo a good option to estimate the absorbed dose in this region.
Hori, Takuma; Shiomi, Junichiro
2013-03-01
Nanostructuring is an efficient way to lower the lattice thermal conductivity and thus enhance the thermoelectric performance of semiconducting materials. Detailed knowledge of phonon transport properties in nanostructures is needed to predict performance and/or optimize structures. The approach of solving the linearized phonon Boltzmann transport equation stochastically by the Monte Carlo method has been demonstrated to be useful for obtaining phonon transport properties in mesoscale and complex structures. In this study, we have performed Monte Carlo simulations to investigate phonon transport properties in nanostructured thermoelectric materials. With the mode-dependent bulk phonon transport properties obtained by first-principles-based calculations, the Monte Carlo simulations are performed to investigate the influence of nanostructure length-scales on the mode-dependent lattice thermal conductivity and its sensitivity to interfacial phonon transmission. This work is partially supported by the Japan Society for the Promotion of Science and JST PRESTO.
Lattice Boltzmann accelerated direct simulation Monte Carlo for dilute gas flow simulations.
Di Staso, G; Clercx, H J H; Succi, S; Toschi, F
2016-11-13
Hybrid particle-continuum computational frameworks permit the simulation of gas flows by locally adjusting the resolution to the degree of non-equilibrium displayed by the flow in different regions of space and time. In this work, we present a new scheme that couples the direct simulation Monte Carlo (DSMC) with the lattice Boltzmann (LB) method in the limit of isothermal flows. The former handles strong non-equilibrium effects, as they typically occur in the vicinity of solid boundaries, whereas the latter is in charge of the bulk flow, where non-equilibrium can be dealt with perturbatively, i.e. according to Navier-Stokes hydrodynamics. The proposed concurrent multiscale method is applied to the dilute gas Couette flow, showing major computational gains when compared with the full DSMC scenarios. In addition, it is shown that the coupling with LB in the bulk flow can speed up the DSMC treatment of the Knudsen layer with respect to the full DSMC case. In other words, LB acts as a DSMC accelerator. This article is part of the themed issue 'Multiscale modelling at the physics-chemistry-biology interface'. © 2016 The Author(s).
Energy Technology Data Exchange (ETDEWEB)
Garcia, Claudio; Costa, Artur; Bittencourt, Euclides [TRANSPETRO - PETROBRAS Transporte, Rio de Janeiro, RJ (Brazil)
2005-07-01
Due to the growing relevance of safety and environmental protection policies in PETROBRAS and its subsidiaries, as well as the requirements of official regulatory agencies and the population, integrity management of oil and gas pipelines became a priority activity in TRANSPETRO, involving several sectors of the company's Support Management Department. Inspection activities using intelligent PIGs, field correlations and replacement of pipeline segments are known to be high-cost operations that require complex logistics. Thus, the adoption of management tools that optimize the available resources is imperative. This study presents the Monte Carlo simulation method as an additional tool for evaluation and management of pipeline structural integrity. The method consists in forecasting the future physical condition of the most significant defects found in intelligent PIG in-line inspections based on a probabilistic approach. Through Monte Carlo simulation, probability functions of failure for each defect are produced, helping managers decide which repairs should be executed in order to reach the desired or accepted risk level. The case that illustrates this study refers to the reconditioning of the ORSOL 14'' (355.6 mm) pipeline. This pipeline was constructed to transfer petroleum from the Urucu production fields to the Solimoes port in Coari, a city in the Brazilian Amazon region. The result of this analysis indicated critical points for repair, in addition to the results obtained by the conventional evaluation (the deterministic ASME B-31G method). Due to the difficulties of mobilizing staff and executing necessary repairs in remote areas like the Amazon forest, the probabilistic tool was extremely useful, improving the pipeline integrity level and avoiding future additional costs. (author)
Estimation of Stochastic Frontier Models with Fixed Effects through Monte Carlo Maximum Likelihood
Directory of Open Access Journals (Sweden)
Grigorios Emvalomatis
2011-01-01
Full Text Available Estimation of nonlinear fixed-effects models is plagued by the incidental parameters problem. This paper proposes a procedure for choosing appropriate densities for integrating the incidental parameters from the likelihood function in a general context. The densities are based on priors that are updated using information from the data and are robust to possible correlation of the group-specific constant terms with the explanatory variables. Monte Carlo experiments are performed in the specific context of stochastic frontier models to examine and compare the sampling properties of the proposed estimator with those of the random-effects and correlated random-effects estimators. The results suggest that the estimator is unbiased even in short panels. An application to a cross-country panel of EU manufacturing industries is presented as well. The proposed estimator produces a distribution of efficiency scores suggesting that these industries are highly efficient, while the other estimators suggest much poorer performance.
Gradient angle estimation by uniform directional simulation on a cone
DEFF Research Database (Denmark)
Ditlevsen, Ove Dalager
1997-01-01
of these projections is derived assuming the limit-state surface to be a hyperplane. This distribution depends on the angle between the cone axis and the normal vector to the hyperplane. Assuming sufficient flatness of the actual limit-state surface within a neighbourhood of the cut point with the cone axis, the cone...... top angle can be chosen small enough that this distribution can be taken as the basis for the formulation of the likelihood function of the angle given the sample of projections. The angle of maximum likelihood is then the indicator of whether the cut point can be taken as a sufficiently accurate...... approximation to a locally most central limit state point. Moreover, the estimated angle can be used to correct the geometric reliability index. Keywords: Directional simulation, effectivity factor, gradient angle estimation, maximum likelihood, model-correction-factor method, Monte Carlo simulation...
Two-stage stochastic linear programming by a series of Monte-Carlo estimators
Directory of Open Access Journals (Sweden)
Kęstutis Žilinskas
2015-07-01
Full Text Available In this paper a stochastic adaptive method has been developed to solve stochastic linear problems by a finite sequence of Monte-Carlo sampling estimators. The method is based on the adaptive regulation of the size of Monte-Carlo samples and a statistical termination procedure taking into consideration statistical modelling accuracy. Our approach distinguishes itself by the treatment of accuracy of the solution in a statistical manner, testing the hypothesis of optimality according to statistical criteria, and estimating confidence intervals of the objective and constraint functions. To avoid “jamming” or “zigzagging” when solving a constrained problem we implement the ε–feasible direction approach. The proposed adjustment of the sample size, taken inversely proportional to the square of the norm of the Monte-Carlo estimate of the gradient, guarantees convergence a.s. at a linear rate. The numerical study and practical examples corroborate the theoretical conclusions and show that the developed procedures make it possible to solve stochastic problems with sufficient accuracy by means of an acceptable amount of computation. DOI: 10.15181/csat.v2i2.891
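The sample-size rule described in the abstract, taking the next Monte-Carlo sample size inversely proportional to the squared norm of the gradient estimate, can be illustrated with a minimal sketch. The one-dimensional stochastic quadratic objective and the constants `lr`, `C`, `n_min`, `n_max` below are all invented for illustration; this is not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_gradient(x, n):
    """Monte Carlo estimate of the gradient of E[(x - Z)^2], Z ~ N(0,1).
    The true gradient is 2*x; each sample contributes 2*(x - z)."""
    z = rng.standard_normal(n)
    return np.mean(2.0 * (x - z))

def adaptive_descent(x0, steps=50, lr=0.1, C=100.0, n_min=10, n_max=100_000):
    """Gradient descent where the next sample size is taken inversely
    proportional to the squared norm of the current gradient estimate
    (clipped to a hypothetical [n_min, n_max] range)."""
    x, n = x0, n_min
    for _ in range(steps):
        g = mc_gradient(x, n)
        x -= lr * g
        n = int(np.clip(C / max(g * g, 1e-12), n_min, n_max))
    return x

x_star = adaptive_descent(5.0)   # true minimizer is x = 0
```

As the iterate approaches the optimum the gradient estimate shrinks, so the rule automatically enlarges the samples and damps the Monte-Carlo noise near convergence.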
The Monte-Carlo simulation on a scintillator neutron detector
Wu, Chong; Tang, Bin; Sun, ZhiJia; Zhang, Qiang; Yang, Zhen; Luo, Wei; Wang, Tuo
2013-10-01
A simulation of the properties of a shifting scintillator neutron detector using 6LiF/ZnS(Ag) scintillation screens is performed. The simulation results show that the light attenuation length of the standard BC704 scintillator is about 0.65 mm. Its thermal neutron detection efficiency, gamma sensitivity and intrinsic spatial resolution can reach around 50.0%, 10-5 and 0.18 mm (along the X-axis), respectively. For this detector, air coupling gives better position resolution than silicone oil coupling. Some of the simulation results are compared with experimental results and found to be in agreement. This work will be helpful for constructing a neutron detector for the high intensity powder diffractometer at the Chinese spallation neutron source.
Hydrogen analysis depth calibration by CORTEO Monte-Carlo simulation
Energy Technology Data Exchange (ETDEWEB)
Moser, M., E-mail: marcus.moser@unibw.de [Universität der Bundeswehr München, Institut für Angewandte Physik und Messtechnik LRT2, Fakultät für Luft- und Raumfahrttechnik, 85577 Neubiberg (Germany); Reichart, P.; Bergmaier, A.; Greubel, C. [Universität der Bundeswehr München, Institut für Angewandte Physik und Messtechnik LRT2, Fakultät für Luft- und Raumfahrttechnik, 85577 Neubiberg (Germany); Schiettekatte, F. [Université de Montréal, Département de Physique, Montréal, QC H3C 3J7 (Canada); Dollinger, G., E-mail: guenther.dollinger@unibw.de [Universität der Bundeswehr München, Institut für Angewandte Physik und Messtechnik LRT2, Fakultät für Luft- und Raumfahrttechnik, 85577 Neubiberg (Germany)
2016-03-15
Hydrogen imaging with sub-μm lateral resolution and sub-ppm sensitivity has become possible with coincident proton–proton (pp) scattering analysis (Reichart et al., 2004). Depth information is evaluated from the energy sum signal with respect to the energy loss of both protons on their path through the sample. To first order, there is no angular dependence due to elastic scattering. To second order, a path length effect due to different energy loss on the paths of the protons causes an angular dependence of the energy sum. Therefore, the energy sum signal has to be de-convoluted depending on the matrix composition, i.e. mainly the atomic number Z, in order to get a depth-calibrated hydrogen profile. Although the path effect can be calculated analytically to first order, multiple scattering effects lead to significant deviations in the depth profile. Hence, in our new approach, we use the CORTEO Monte-Carlo code (Schiettekatte, 2008) in order to calculate the depth of a coincidence event depending on the scattering angle. The code takes the individual detector geometry into account. In this paper we show that the code correctly reproduces measured pp-scattering energy spectra with roughness effects considered. With more than 100 μm thick Mylar-sandwich targets (Si, Fe, Ge) we demonstrate the deconvolution of the energy spectra on our current multistrip detector at the microprobe SNAKE at the Munich tandem accelerator lab. As a result, hydrogen profiles can be evaluated with an accuracy in depth of about 1% of the sample thickness.
Adsorption equilibrium of light hydrocarbon mixtures by monte carlo simulation
Directory of Open Access Journals (Sweden)
V. F. Cabral
2007-12-01
Full Text Available The procedure presented by Cabral et al. (2003) was used to predict the adsorption of multicomponent mixtures of methane, ethane, propane, and n-butane adsorbed on Silicalite S-115 at 300 K. The methodology employed uses the algorithm of molecular simulation for the grand canonical ensemble as an equation of state for the adsorbed phase. The adsorbent surface is modeled as a two-dimensional lattice in which solid heterogeneity is represented by two kinds of sites with different adsorption energies. In all cases presented, the simulations described well the adsorption characteristics of the systems.
Tennant, Marc; Kruger, Estie
2013-02-01
This study developed a Monte Carlo simulation approach to examining the prevalence and incidence of dental decay, using Australian children as a test environment. Monte Carlo simulation has been used for half a century in particle physics (and elsewhere); put simply, population-level outcome probabilities are randomly seeded to drive the production of individual-level data. A total of five runs of the simulation model for all 275,000 12-year-olds in Australia were completed based on 2005-2006 data. Measured by average decayed/missing/filled teeth (DMFT) and the DMFT of the highest 10% of the sample (Sic10), the runs did not differ from each other by more than 2% and the outcome was within 5% of the reported sampled population data. The simulations rested on the population probabilities that are known to be strongly linked to dental decay, namely, socio-economic status and Indigenous heritage. Testing the simulated population found a DMFT of 2.3 for all cases with DMFT > 0 (n = 128,609) and a DMFT of 1.9 for Indigenous cases only (n = 13,749). In the simulated population the Sic25 was 3.3 (n = 68,750). Monte Carlo simulations were created in particle physics as a computational mathematical approach to unknown individual-level effects by resting a simulation on known population-level probabilities. In this study a Monte Carlo simulation approach to childhood dental decay was built, tested and validated. © 2013 FDI World Dental Federation.
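The seeding idea, population-level probabilities driving individual-level records, can be illustrated with a minimal sketch. The risk-stratum share and the Poisson DMFT rates below are invented placeholders, not the study's calibrated figures:

```python
import numpy as np

rng = np.random.default_rng(42)

# Population-level probabilities (assumed for illustration) drive
# individual-level DMFT counts, here via a two-stratum Poisson mixture.
N = 275_000                      # cohort size, as in the study
p_high_risk = 0.20               # assumed share of higher-risk children
rate = np.where(rng.random(N) < p_high_risk, 2.5, 0.8)  # assumed mean DMFT
dmft = rng.poisson(rate)         # one simulated record per child

mean_dmft = dmft.mean()
# Significant Caries index style summary: mean DMFT of the worst 10%
sic10 = np.sort(dmft)[-N // 10:].mean()
```

Repeating such runs with different seeds and comparing the summary statistics against sampled population data is the validation pattern the abstract describes.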
Slope stability effects of fuel management strategies – inferences from Monte Carlo simulations
R. M. Rice; R. R. Ziemer; S. C. Hankin
1982-01-01
A simple Monte Carlo simulation evaluated the effect of several fire management strategies on soil slip erosion and wildfires. The current condition was compared to (1) a very intensive fuelbreak system without prescribed fires, and (2) prescribed fire at four time intervals with (a) current fuelbreaks and (b) intensive fuelbreaks. The intensive fuelbreak system...
Prediction of ICP Pose Uncertainties Using Monte Carlo Simulation with Synthetic Depth Images
DEFF Research Database (Denmark)
Iversen, Thorbjørn Mosekjær; Buch, Anders Glent; Kraft, Dirk
2017-01-01
on the generation of synthetic depth images in a Monte Carlo simulation. In this paper we demonstrate our method for depth sensors which rely on Kinect v1 like technology. We evaluate our method using real depth sensor recordings from the publicly available BigBird dataset. The evaluation shows that the uncertainty...
Generation of Random Numbers and Parallel Random Number Streams for Monte Carlo Simulations
Directory of Open Access Journals (Sweden)
L. Yu. Barash
2012-01-01
Full Text Available Modern methods and libraries for high quality pseudorandom number generation and for generation of parallel random number streams for Monte Carlo simulations are considered. The probability equidistribution property, and the parameter values for which the property holds at dimensions up to the logarithm of the mesh size, are considered for Multiple Recursive Generators.
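One modern way to obtain independent parallel streams of the kind the article surveys is seed-sequence spawning, sketched here with NumPy; the worker count and the π-estimation workload are illustrative choices, not the article's:

```python
import numpy as np

# Spawn statistically independent child streams from one root seed,
# one per parallel Monte Carlo worker.
root = np.random.SeedSequence(2024)
streams = [np.random.default_rng(s) for s in root.spawn(4)]

def estimate_pi(rng, n=100_000):
    """Each worker estimates pi from its own private stream."""
    x, y = rng.random(n), rng.random(n)
    return 4.0 * np.mean(x * x + y * y < 1.0)

estimates = [estimate_pi(rng) for rng in streams]
pi_hat = sum(estimates) / len(estimates)
```

Because the spawned streams are constructed to be non-overlapping, the per-worker estimates can be averaged as if they came from independent runs.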
Confronting uncertainty in model-based geostatistics using Markov Chain Monte Carlo simulation
Minasny, B.; Vrugt, J.A.; McBratney, A.B.
2011-01-01
This paper demonstrates for the first time the use of Markov Chain Monte Carlo (MCMC) simulation for parameter inference in model-based soil geostatistics. We implemented the recently developed DiffeRential Evolution Adaptive Metropolis (DREAM) algorithm to jointly summarize the posterior
DEFF Research Database (Denmark)
Debrabant, Kristian; Samaey, Giovanni; Zieliński, Przemysław
2017-01-01
We present and analyse a micro-macro acceleration method for the Monte Carlo simulation of stochastic differential equations with separation between the (fast) time-scale of individual trajectories and the (slow) time-scale of the macroscopic function of interest. The algorithm combines short...
Hydration structure of Ti(III) and Cr(III): Monte Carlo simulation ...
African Journals Online (AJOL)
Classical Monte Carlo simulations were performed to investigate the solvation structures of Ti(III) and Cr(III) ions in water with only ion-water pair interaction potential and by including three-body correction terms. The hydration structures were evaluated in terms of radial distribution functions, coordination numbers and ...
Modeling root-reinforcement with a Fiber-Bundle Model and Monte Carlo simulation
This paper uses sensitivity analysis and a Fiber-Bundle Model (FBM) to examine assumptions underpinning root-reinforcement models. First, different methods for apportioning load between intact roots were investigated. Second, a Monte Carlo approach was used to simulate plants with heartroot, platero...
The hard ellipsoid-of-revolution fluid I. Monte Carlo simulations
Frenkel, D.; Mulder, B.M.
1985-01-01
We present the results of Monte Carlo simulations on a system of hard ellipsoids of revolution with length-to-breadth ratios a/b = 3, 2·75, 2, 1·25 and b/a = 3, 2·75, 2, 1·25. We identify four distinct phases, viz. isotropic fluid, nematic fluid, ordered solid and plastic solid. The coexistence
Monte Carlo simulations of a two-dimensional hard dimer system
Wojciechowski, K.W.; Branka, A.; Frenkel, D.
1993-01-01
Monte Carlo simulations of a system of two-dimensional hard, homonuclear dimers are reported. The equation-of-state, structural and orientational properties, and the free energy were computed for the fluid phase and several crystalline and non-crystalline (non-periodic) solid structures. The
A systematic framework for Monte Carlo simulation of remote sensing errors map in carbon assessments
S. Healey; P. Patterson; S. Urbanski
2014-01-01
Remotely sensed observations can provide unique perspective on how management and natural disturbance affect carbon stocks in forests. However, integration of these observations into formal decision support will rely upon improved uncertainty accounting. Monte Carlo (MC) simulations offer a practical, empirical method of accounting for potential remote sensing errors...
Confinement effect in diffusion-controlled stepwise polymerization by Monte Carlo simulation
Malvaldi, M; Bruzzone, S; Picchioni, F
2006-01-01
Diffusion-controlled stepwise polymerization of a linear polymer confined in nanoscopic slits is simulated through a Monte Carlo approach. A noticeable influence of the confinement on the kinetics is found. The confinement modifies both the spatial pair distribution function and the diffusive
Testing the Intervention Effect in Single-Case Experiments: A Monte Carlo Simulation Study
Heyvaert, Mieke; Moeyaert, Mariola; Verkempynck, Paul; Van den Noortgate, Wim; Vervloet, Marlies; Ugille, Maaike; Onghena, Patrick
2017-01-01
This article reports on a Monte Carlo simulation study, evaluating two approaches for testing the intervention effect in replicated randomized AB designs: two-level hierarchical linear modeling (HLM) and using the additive method to combine randomization test "p" values (RTcombiP). Four factors were manipulated: mean intervention effect,…
Monte Carlo simulation of diblock copolymer microphases by means of a 'fast' off-lattice model
DEFF Research Database (Denmark)
Besold, Gerhard; Hassager, O.; Mouritsen, Ole G.
1999-01-01
We present a mesoscopic off-lattice model for the simulation of diblock copolymer melts by Monte Carlo techniques. A single copolymer molecule is modeled as a discrete Edwards chain consisting of two blocks with vertices of type A and B, respectively. The volume interaction is formulated in terms...
Quantum Monte Carlo Methods for First Principles Simulation of Liquid Water
Gergely, John Robert
2009-01-01
Obtaining an accurate microscopic description of water structure and dynamics is of great interest to molecular biology researchers and in the physics and quantum chemistry simulation communities. This dissertation describes efforts to apply quantum Monte Carlo methods to this problem with the goal of making progress toward a fully "ab initio"…
Lee, Anthony; Yau, Christopher; Giles, Michael B.; Doucet, Arnaud; Holmes, Christopher C.
2011-01-01
We present a case-study on the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Graphics cards, containing multiple Graphics Processing Units (GPUs), are self-contained parallel computational devices that can be housed in conventional desktop and laptop computers and can be thought of as prototypes of the next generation of many-core processors. For certain classes of population-based Monte Carlo algorithms they offer massively parallel simulation, with the added advantage over conventional distributed multi-core processors that they are cheap, easily accessible, easy to maintain, easy to code, dedicated local devices with low power consumption. On a canonical set of stochastic simulation examples including population-based Markov chain Monte Carlo methods and Sequential Monte Carlo methods, we find speedups from 35 to 500 fold over conventional single-threaded computer code. Our findings suggest that GPUs have the potential to facilitate the growth of statistical modelling into complex data rich domains through the availability of cheap and accessible many-core computation. We believe the speedup we observe should motivate wider use of parallelizable simulation methods and greater methodological attention to their design. PMID:22003276
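The data-parallel pattern that makes such population-based methods GPU-friendly, with every chain advanced by the same instruction stream, can be sketched on the CPU with NumPy vectorization. The target density and all settings below are illustrative, and the paper's reported speedups are of course specific to GPU hardware:

```python
import numpy as np

rng = np.random.default_rng(7)

# Many random-walk Metropolis chains advanced in lockstep, targeting a
# standard normal. Chain count, step count and step size are invented.
n_chains, n_steps, step = 10_000, 500, 1.0
x = rng.standard_normal(n_chains)                 # start in the target

for _ in range(n_steps):
    prop = x + step * rng.standard_normal(n_chains)
    log_alpha = 0.5 * (x * x - prop * prop)       # log N(0,1) density ratio
    accept = np.log(rng.random(n_chains)) < log_alpha
    x = np.where(accept, prop, x)                 # vectorized accept/reject

sample_mean, sample_var = x.mean(), x.var()
```

Every operation above applies the same arithmetic to all chains at once, which is exactly the shape of computation that maps onto thousands of GPU threads.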
Monte Carlo simulation - a powerful tool to support experimental activities in structure reliability
Energy Technology Data Exchange (ETDEWEB)
Yuritzinn, T. [CEA Saclay, Dept. de Mecanique et de Technologie (DRN/DMT/SEMT/LISN), 91 - Gif-sur-Yvette (France); Chapuliot, S. [CEA Saclay, Dept. Modelisation de Systemes et Structures (DM2S/SEMT), 91 - Gif sur Yvette (France); Eid, M. [CEA Saclay, Dept. de Mecanique et de Technologie (DRN/DMT/SERMA/LCA), 91 - Gif-sur-Yvette (France); Masson, R.; Dahl, A.; Moinereau, D. [Electricite de France (EDF), 75 - Paris (France)
2003-07-01
Monte-Carlo Simulation (MCS) can have different uses in supporting structure reliability investigations and assessments. In this paper we focus our interest on the use of MCS as a numerical tool to support the fitting of the experimental data related to toughness experiments. (authors)
High-pressure high-temperature equation of state of graphite from Monte Carlo simulations
Colonna, F.; Fasolino, A.; Meijer, E.J.
2011-01-01
The thermoelastic behavior of graphite is experimentally accessible in a limited range of pressures and temperatures. Here we perform Monte Carlo simulations based on the accurate long range carbon bond-order potential (LCBOPII) in order to study graphite in a wider range of thermodynamic
Peraud, Jean-Philippe
2011-01-01
We present a new Monte Carlo method for obtaining solutions of the Boltzmann equation for describing phonon transport in micro and nanoscale devices. The proposed method can resolve arbitrarily small signals (e.g. temperature differences) at small constant cost and thus represents a considerable improvement compared to traditional Monte Carlo methods whose cost increases quadratically with decreasing signal. This is achieved via a control-variate variance reduction formulation in which the stochastic particle description only solves for the deviation from a nearby equilibrium, while the latter is described analytically. We also show that simulating an energy-based Boltzmann equation results in an algorithm that lends itself naturally to exact energy conservation thereby considerably improving the simulation fidelity. Simulations using the proposed method are used to investigate the effect of porosity on the effective thermal conductivity of silicon. We also present simulations of a recently developed thermal ...
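The control-variate idea, simulating only the deviation from an analytically known nearby quantity, can be shown on a generic toy integral. The integrand and control below are invented for illustration; the paper applies the idea to the phonon Boltzmann equation, not to this example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Estimate E[exp(Z)] for Z ~ N(0,1). The "nearby" quantity is the
# linearization g(z) = 1 + z, whose expectation E[g] = 1 is known
# analytically; only the deviation f - g is sampled stochastically.
n = 100_000
z = rng.standard_normal(n)
f = np.exp(z)
g = 1.0 + z

plain = f.mean()                       # ordinary Monte Carlo
deviational = (f - g).mean() + 1.0     # deviation + known E[g]

true_value = np.exp(0.5)               # exact E[exp(Z)]
```

The deviational estimator has the same mean as the plain one but a smaller sample variance, mirroring how solving only for the deviation from equilibrium lets small signals be resolved at fixed cost.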
Extreme Value Predictions using Monte Carlo Simulations with Artificially Increased Load Spectrum
DEFF Research Database (Denmark)
Jensen, Jørgen Juncher
2011-01-01
an approximation to the mean outcrossing rate. Better accuracy can be obtained by Monte Carlo simulations, but the necessary length of the time domain simulations for very low out-crossing rates might be prohibitively long. In such cases the property mentioned above for the FORM reliability index might be assumed...... to be valid in the Monte Carlo simulations, making it possible to increase the out-crossing rates and thus reduce the necessary length of the time domain simulations by applying a larger load spectrum than relevant from a design point of view. The mean out-crossing rate thus obtained can then afterwards...... be found using the First Order Reliability Method (FORM). The FORM analysis also shows that the reliability index is strictly inversely proportional to the square root of the magnitude of the load spectrum, irrespectively of the non-linearity in the system. However, the FORM analysis only gives...
Directory of Open Access Journals (Sweden)
He Deyu
2016-09-01
Full Text Available Assessing the risks of steering system faults in underwater vehicles is a human-machine-environment (HME systematic safety field that studies faults in the steering system itself, the driver’s human reliability (HR and various environmental conditions. This paper proposed a fault risk assessment method for an underwater vehicle steering system based on virtual prototyping and Monte Carlo simulation. A virtual steering system prototype was established and validated to rectify a lack of historic fault data. Fault injection and simulation were conducted to acquire fault simulation data. A Monte Carlo simulation was adopted that integrated randomness due to the human operator and environment. Randomness and uncertainty of the human, machine and environment were integrated in the method to obtain a probabilistic risk indicator. To verify the proposed method, a case of stuck rudder fault (SRF risk assessment was studied. This method may provide a novel solution for fault risk assessment of a vehicle or other general HME system.
Monte Carlo Simulation to Estimate Likelihood of Direct Lightning Strikes
Mata, Carlos; Medelius, Pedro
2008-01-01
A software tool has been designed to quantify the lightning exposure of the stack at launch pads under different configurations. In order to predict lightning strikes to generic structures, this model uses leaders whose origins (in the x-y plane) are obtained from a 2D random, normal distribution.
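The sampling step described above can be sketched as follows; the spread of the leader origins and the structure's attraction radius are invented placeholders, not the tool's calibrated values:

```python
import numpy as np

rng = np.random.default_rng(3)

# Leader origins drawn from a 2D normal distribution in the x-y plane;
# a strike is counted when an origin falls within an assumed
# attraction radius of the structure at the origin.
n = 1_000_000
sigma = 50.0                  # assumed spread of leader origins (m)
attraction_radius = 10.0      # assumed structure attraction radius (m)

xy = rng.normal(0.0, sigma, size=(n, 2))
hits = np.hypot(xy[:, 0], xy[:, 1]) < attraction_radius
p_strike = hits.mean()

# Analytical check: the radial distance is Rayleigh distributed, so
# P(r < R) = 1 - exp(-R^2 / (2 sigma^2)).
p_exact = 1.0 - np.exp(-attraction_radius**2 / (2 * sigma**2))
```

The closed-form Rayleigh probability makes this toy a convenient self-check before moving to realistic geometries where no analytical answer exists.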
Programs for calibration-based Monte Carlo simulation of recharge areas.
Starn, J Jeffrey; Bagtzoglou, Amvrossios C
2012-01-01
One use of groundwater flow models is to simulate contributing recharge areas to wells or springs. Particle tracking can be used to simulate these recharge areas, but in many cases the modeler is not sure how accurate these recharge areas are because parameters such as hydraulic conductivity and recharge have errors associated with them. The scripts described in this article (GEN_LHS and MCDRIVER_LHS) use the Python scripting language to run a Monte Carlo simulation with Latin hypercube sampling where model parameters such as hydraulic conductivity and recharge are randomly varied for a large number of model simulations, and the probability of a particle being in the contributing area of a well is calculated based on the results of multiple simulations. Monte Carlo simulation provides one useful measure of the variability in modeled particles. The Monte Carlo method described here is unique in that it uses parameter sets derived from the optimal parameters, their standard deviations, and their correlation matrix, all of which are calculated during nonlinear regression model calibration. In addition, this method uses a set of acceptance criteria to eliminate unrealistic parameter sets. Ground Water © 2011, National Ground Water Association. Published 2011. This article is a U.S. Government work and is in the public domain in the USA.
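The core sampling idea, Latin hypercube draws around the calibrated optima using the regression standard deviations and correlation matrix, can be sketched as follows. Parameter names and all numbers are invented; the published scripts are separate Python programs driving a groundwater model:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(11)

# Hypothetical calibration outputs: optimal values, standard
# deviations and correlation matrix from nonlinear regression.
opt  = np.array([-4.0, 0.3])          # e.g. log-conductivity, recharge
sd   = np.array([0.5, 0.05])
corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])

def lhs_normal(n, dim):
    """Latin hypercube sample of independent standard normals:
    one stratified uniform per cell, mapped through the normal inverse CDF."""
    strata = np.tile(np.arange(n), (dim, 1))
    u = (rng.permuted(strata, axis=1).T + rng.random((n, dim))) / n
    return np.vectorize(NormalDist().inv_cdf)(u)

n = 1000
z = lhs_normal(n, 2)
L = np.linalg.cholesky(corr)           # impose the correlation structure
params = opt + (z @ L.T) * sd          # n correlated parameter sets
```

Each row of `params` is one parameter set for a Monte Carlo model run; unrealistic rows would then be screened out by acceptance criteria as the abstract describes.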
A Monte Carlo approach for simulating the propagation of partially coherent x-ray beams
DEFF Research Database (Denmark)
Prodi, A.; Bergbäck Knudsen, Erik; Willendrup, Peter Kjær
2011-01-01
by sampling Huygens-Fresnel waves with Monte Carlo methods and is used to propagate each source realization to the detector plane. The sampling is implemented with a modified Monte Carlo ray tracing scheme where the optical path of each generated ray is stored. Such information is then used in the summation...... of the generated rays at the observation plane to account for coherence properties. This approach is used to simulate simple models of propagation in free space and with reflective and refractive optics. © 2011 COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract...... is permitted for personal use only....
Reliability Assessment of Active Distribution System Using Monte Carlo Simulation Method
Directory of Open Access Journals (Sweden)
Shaoyun Ge
2014-01-01
Full Text Available In this paper we treat the reliability assessment of an active distribution system at low and high DG penetration levels using the Monte Carlo simulation method. The problem is formulated as a two-case program: a low-penetration simulation and a high-penetration simulation. The load shedding strategy and the simulation process are introduced in detail for each FMEA process. Results indicate that the integration of DG can improve the reliability of the system if the system is operated actively.
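A minimal Monte Carlo reliability sketch in the same spirit can be written in a few lines; the failure rates, repair times and the single-draw repair-time approximation are all illustrative assumptions, not the paper's feeder data:

```python
import numpy as np

rng = np.random.default_rng(5)

# Assumed feeder-section data: failure rate (occurrences/year) and
# mean repair time (hours) per section.
fail_rate = np.array([0.10, 0.15, 0.20])
repair_h  = np.array([4.0, 3.0, 5.0])
years = 20_000                          # simulated customer-years

# Sample yearly failure counts, and approximate each year's total
# repair time per section as count x one exponential duration draw
# (a simplification that keeps the expected value exact).
n_fail = rng.poisson(fail_rate, size=(years, 3))
outage_h = n_fail * rng.exponential(repair_h, size=(years, 3))

saifi = n_fail.sum(axis=1).mean()      # interruptions per customer-year
saidi = outage_h.sum(axis=1).mean()    # outage hours per customer-year
```

Comparing such indices between configurations, for instance with and without DG-supported islanding that removes some sections from the outage sum, is the basic shape of the assessment the abstract describes.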
Monte Carlo simulation of image properties of an X-ray intensifying screen
Wang Yi; Wang Kui Lu; Liu Guo Zhi; Liu Ya Qian
2000-01-01
A Monte Carlo simulation program named MCPEP has been developed. Building on an existing simulation program that simulates the transport of X-ray photons and secondary electrons, MCPEP also simulates the light photons in the screen. The performance of an intensifying screen (Gd₂O₂S:Tb) with different thicknesses and different X-ray energies has been analyzed with MCPEP. The calculated light photon probability distribution, average light photon number per absorbed X-ray photon, statistical factor for light emission, X-ray detection efficiency, detective quantum efficiency (DQE) and point spread function (PSF) of the screen are presented.
A virtual source method for Monte Carlo simulation of Gamma Knife Model C
Energy Technology Data Exchange (ETDEWEB)
Kim, Tae Hoon; Kim, Yong Kyun [Hanyang University, Seoul (Korea, Republic of); Chung, Hyun Tai [Seoul National University College of Medicine, Seoul (Korea, Republic of)
2016-05-15
The Monte Carlo simulation method has been used for dosimetry in radiation treatment. Monte Carlo simulation determines the paths and dosimetry of particles using random numbers. Recently, owing to the fast processing capability of computers, it has become possible to treat a patient more precisely. However, the simulation time must be increased to reduce the statistical uncertainty. When generating particles from the cobalt sources in a simulation, many particles are cut off, so accurate simulation takes time. For efficiency, we generated a virtual source with the phase space distribution acquired from a single Gamma Knife channel. We performed simulations using the virtual sources on the 201 channels and compared the measurement with simulations using virtual sources and real sources. A virtual source file was generated to reduce the simulation time of a Gamma Knife Model C. Simulations with a virtual source executed about 50 times faster than the original source code, and there was no statistically significant difference in the simulated results.
PhyloSim - Monte Carlo simulation of sequence evolution in the R statistical computing environment
Directory of Open Access Journals (Sweden)
Massingham Tim
2011-04-01
Full Text Available Abstract Background The Monte Carlo simulation of sequence evolution is routinely used to assess the performance of phylogenetic inference methods and sequence alignment algorithms. Progress in the field of molecular evolution fuels the need for more realistic and hence more complex simulations, adapted to particular situations, yet current software makes unreasonable assumptions such as homogeneous substitution dynamics or a uniform distribution of indels across the simulated sequences. This calls for an extensible simulation framework written in a high-level functional language, offering new functionality and making it easy to incorporate further complexity. Results PhyloSim is an extensible framework for the Monte Carlo simulation of sequence evolution, written in R, using the Gillespie algorithm to integrate the actions of many concurrent processes such as substitutions, insertions and deletions. Uniquely among sequence simulation tools, PhyloSim can simulate arbitrarily complex patterns of rate variation and multiple indel processes, and allows for the incorporation of selective constraints on indel events. User-defined complex patterns of mutation and selection can be easily integrated into simulations, allowing PhyloSim to be adapted to specific needs. Conclusions Close integration with R and the wide range of features implemented offer unmatched flexibility, making it possible to simulate sequence evolution under a wide range of realistic settings. We believe that PhyloSim will be useful to future studies involving simulated alignments.
Applying Monte Carlo Simulation to Launch Vehicle Design and Requirements Analysis
Hanson, J. M.; Beard, B. B.
2010-01-01
This Technical Publication (TP) is meant to address a number of topics related to the application of Monte Carlo simulation to launch vehicle design and requirements analysis. Although the focus is on a launch vehicle application, the methods may be applied to other complex systems as well. The TP is organized so that all the important topics are covered in the main text, and detailed derivations are in the appendices. The TP first introduces Monte Carlo simulation and the major topics to be discussed, including discussion of the input distributions for Monte Carlo runs, testing the simulation, how many runs are necessary for verification of requirements, what to do if results are desired for events that happen only rarely, and postprocessing, including analyzing any failed runs, examples of useful output products, and statistical information for generating desired results from the output data. Topics in the appendices include some tables for requirements verification, derivation of the number of runs required and generation of output probabilistic data with consumer risk included, derivation of launch vehicle models to include possible variations of assembled vehicles, minimization of a consumable to achieve a two-dimensional statistical result, recontact probability during staging, ensuring duplicated Monte Carlo random variations, and importance sampling.
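One of the topics listed above, how many runs are needed for requirements verification, has a simple closed form in the zero-failure case: if all n runs must succeed, then demonstrating success probability p at confidence c requires n >= ln(1 - c) / ln(p). A minimal sketch (the TP's derivations are more general; the numbers here are the classic illustrative case):

```python
import math

def runs_required(p_success, confidence):
    """Smallest n such that observing n successes in n Monte Carlo runs
    demonstrates success probability >= p_success at the given confidence
    (zero-failure binomial test)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(p_success))

# Classic example: verifying 99.73% reliability at 90% confidence
n = runs_required(0.9973, 0.90)
# n == 852
```

If failures are allowed, the run count comes instead from inverting the binomial tail, which is where the consumer-risk tables in the appendices apply.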
Rehman, Fazal-ur; Jamil, K; Zakaullah, M; Abu-Jarad, F; Mujahid, S A
2003-01-01
There are several methods of measuring radon concentrations, but nuclear track detector cylindrical dosimeters are widely employed. In this investigation, the consequence of the effective volumes of the dosimeters on the registration of alpha tracks in a CR-39 detector was studied. In a series of experiments, an optimum radius for a CR-39-based open cylindrical radon dosimeter was found to be about 3 cm. Monte Carlo simulation techniques have been employed to verify the experimental results. In this context, a computer code Monte Carlo simulation dosimetry (MOCSID) was developed. Monte Carlo simulation experiments gave the optimum radius of the dosimeters as 3.0 cm. The experimental results are in good agreement with those obtained by Monte Carlo design calculations. In addition to this, plate-out effects of radon progeny were also studied. It was observed that the contribution of radon progeny (218Po and 214Po) plated-out on the wall of the dosimeters increases with an increase of dosimeter radius and then decreases to zero at a radius of about 3 cm when a point detector is installed at the center of the dosimeter base. In the code MOCSID, different types of random number generators were employed. The results of this research are very useful for designing an optimum size of radon dosimeters.
Monte Carlo simulation of the ELIMED beamline using Geant4
Czech Academy of Sciences Publication Activity Database
Pipek, J.; Romano, F.; Milluzzo, G.; Cirrone, G.A.P.; Cuttone, G.; Amico, A.G.; Margarone, Daniele; Larosa, G.; Leanza, R.; Petringa, G.; Schillaci, Francesco; Scuderi, Valentina
2017-01-01
Vol. 12, Mar (2017), pp. 1-5, article No. C03027. ISSN 1748-0221 R&D Projects: GA MŠk EF15_008/0000162; GA MŠk LQ1606 Grant - others: ELI Beamlines(XE) CZ.02.1.01/0.0/0.0/15_008/0000162 Institutional support: RVO:68378271 Keywords: models and simulations * accelerator applications * beam dynamics * software architectures * event data models * frameworks and databases Subject RIV: BL - Plasma and Gas Discharge Physics Impact factor: 1.220, year: 2016
The Simulation-Tabulation Method for Classical Diffusion Monte Carlo
Hwang, Chi-Ok; Given, James A.; Mascagni, Michael
2001-12-01
Many important classes of problems in materials science and biotechnology require the solution of the Laplace or Poisson equation in disordered two-phase domains in which the phase interface is extensive and convoluted. Green's function first-passage (GFFP) methods solve such problems efficiently by generalizing the “walk on spheres” (WOS) method to allow first-passage (FP) domains to be not just spheres but a wide variety of geometrical shapes. (In particular, this solves the difficulty of slow convergence with WOS by allowing FP domains that contain patches of the phase interface.) Previous studies accomplished this by using geometries for which the Green's function was available in quasi-analytic form. Here, we extend these studies by using the simulation-tabulation (ST) method. We simulate and then tabulate surface Green's functions that cannot be obtained analytically. The ST method is applied to the Solc-Stockmayer model with zero potential, to the mean trapping rate of a diffusing particle in a domain of nonoverlapping spherical traps, and to the effective conductivity for perfectly insulating, nonoverlapping spherical inclusions in a matrix of finite conductivity. In all cases, this class of algorithms provides the most efficient methods known to solve these problems to high accuracy.
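The plain walk-on-spheres method that GFFP generalizes is itself only a few lines. The sketch below (a hypothetical unit-disk Dirichlet problem, not one of the paper's test cases) jumps a walker to a uniform point on the largest circle that fits in the domain until it comes within epsilon of the boundary:

```python
import math
import random

def wos_laplace(p, boundary_value, eps, rng):
    """One walk-on-spheres sample of the harmonic function with the given
    Dirichlet data on the unit circle, started at point p."""
    x, y = p
    while True:
        d = 1.0 - math.hypot(x, y)   # distance to the unit-circle boundary
        if d < eps:                  # absorbed: score nearest boundary point
            r = math.hypot(x, y)
            return boundary_value(x / r, y / r)
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += d * math.cos(theta)     # jump to the first-passage sphere
        y += d * math.sin(theta)

rng = random.Random(1)
g = lambda bx, by: bx                # boundary data g = x; exact solution u = x
u = sum(wos_laplace((0.5, 0.0), g, 1e-3, rng) for _ in range(4000)) / 4000
```

The GFFP generalization replaces the sphere with other first-passage shapes whose surface Green's functions are known or, with the ST method, tabulated from simulation.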
Beam steering uncertainty analysis for Risley prisms based on Monte Carlo simulation
Zhang, Hao; Yuan, Yan; Su, Lijuan; Huang, Fengzhen
2017-01-01
The Risley-prism system is applied in imaging LADAR to achieve precise directing of laser beams. The image quality of LADAR is deeply affected by the laser beam steering quality of the Risley prisms. The ray-tracing method was used to predict the pointing error. The beam steering uncertainty of Risley prisms was investigated through Monte Carlo simulation under the effects of rotation axis jitter and prism rotation error. Case examples were given to elucidate the probability distribution of the pointing error. Furthermore, the effect of the scan pattern on the beam steering uncertainty was also studied. It is found that the demand for the bearing rotational accuracy of the second prism is much more stringent than that of the first prism. Under the effect of rotation axis jitter, the pointing uncertainty in the field of regard is related to the altitude angle of the emerging beam, but it has no relationship with the azimuth angle. The beam steering uncertainty will be affected by the original phase if the scan pattern is a circle. The proposed method can be used to estimate the beam steering uncertainty of Risley prisms, and the conclusions will be helpful in the design and manufacture of this system.
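Under a first-order thin-prism approximation (an assumption of this sketch, not the paper's full ray-tracing model), each prism deflects the beam by a fixed angle in the direction of its rotation angle, and rotation jitter can be propagated to pointing error by straightforward Monte Carlo:

```python
import math
import random
import statistics

def pointing_error_samples(phi1, phi2, jitter_sd, n, rng,
                           delta=math.radians(1.0)):
    """Monte Carlo pointing error of a two-prism (Risley) scanner in the
    first-order thin-prism approximation: each prism deflects the beam by
    a fixed angle `delta` in the direction of its rotation angle, and the
    rotation angles carry Gaussian jitter of `jitter_sd` radians."""
    nominal = (delta * (math.cos(phi1) + math.cos(phi2)),
               delta * (math.sin(phi1) + math.sin(phi2)))
    errs = []
    for _ in range(n):
        a1 = phi1 + rng.gauss(0.0, jitter_sd)
        a2 = phi2 + rng.gauss(0.0, jitter_sd)
        dx = delta * (math.cos(a1) + math.cos(a2)) - nominal[0]
        dy = delta * (math.sin(a1) + math.sin(a2)) - nominal[1]
        errs.append(math.hypot(dx, dy))
    return errs

rng = random.Random(7)
errs = pointing_error_samples(0.0, math.pi / 3, jitter_sd=1e-4,
                              n=5000, rng=rng)
spread = statistics.mean(errs)
```

Sweeping phi1 and phi2 along a scan pattern and repeating this sampling reproduces the kind of field-of-regard uncertainty maps the paper reports.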
Energy Technology Data Exchange (ETDEWEB)
Quesada-Pérez, Manuel; Maroto-Centeno, José Alberto [Departamento de Física, Escuela Politécnica Superior de Linares, Universidad de Jaén, 23700 Linares, Jaén (Spain); Adroher-Benítez, Irene [Grupo de Física de Fluidos y Biocoloides, Departamento de Física Aplicada, Facultad de Ciencias, Universidad de Granada, 18071 Granada (Spain)
2014-05-28
In this work, the size-exclusion partitioning of neutral solutes in crosslinked polymer networks has been studied through Monte Carlo simulations. Two models that provide user-friendly expressions to predict the partition coefficient have been tested over a wide range of volume fractions: Ogston's model (especially devised for fibrous media) and the pore model. The effects of crosslinking and bond stiffness have also been analyzed. Our results suggest that the fiber model can acceptably account for size-exclusion effects in crosslinked gels. Its predictions are good for large solutes if the fiber diameter is assumed to be the effective monomer diameter. For solutes sizes comparable to the monomer dimensions, a smaller fiber diameter must be used. Regarding the pore model, the partition coefficient is poorly predicted when the pore diameter is estimated as the distance between adjacent crosslinker molecules. On the other hand, our results prove that the pore sizes obtained from the pore model by fitting partitioning data of swollen gels are overestimated.
Díaz, Oliver; García, Eloy; Oliver, Arnau; Martí, Joan; Martí, Robert
2017-03-01
Scattered radiation is an undesired signal largely present in most digital breast tomosynthesis (DBT) projection images, as no physical rejection devices, i.e. anti-scatter grids, are regularly employed, in contrast to full-field digital mammography. This scatter signal might reduce the visibility of small objects in the image and potentially affect the detection of small breast lesions. Thus, accurate scatter models are needed to minimise the scattered radiation signal via post-processing algorithms. All prior work on scattered radiation estimation has assumed a rigid breast compression paddle (RP) and reported a large contribution of the scatter signal from the RP in the detector. In this work, however, flexible paddles (FPs) tilting from 0° to 10° are studied using Monte Carlo simulations to analyse whether the scatter distribution differs from RP geometries. After reproducing the Hologic Selenia Dimensions geometry (narrow angle) with two (homogeneous and heterogeneous) compressed breast phantoms, the results illustrate that the scatter distribution recorded at the detector varies by up to 22% between RP and FP geometries (depending on the location), mainly due to the decrease in breast thickness observed for the FP. However, the relative contribution from the paddle itself (3-12% of the total scatter) remains approximately unchanged for both setups, and its magnitude depends on the distance to the breast edge.
OptogenSIM: a 3D Monte Carlo simulation platform for light delivery design in optogenetics.
Liu, Yuming; Jacques, Steven L; Azimipour, Mehdi; Rogers, Jeremy D; Pashaie, Ramin; Eliceiri, Kevin W
2015-12-01
Optimizing light delivery for optogenetics is critical in order to accurately stimulate the neurons of interest while reducing nonspecific effects such as tissue heating or photodamage. Light distribution is typically predicted using the assumption of tissue homogeneity, which oversimplifies light transport in heterogeneous brain. Here, we present an open-source 3D simulation platform, OptogenSIM, which eliminates this assumption. This platform integrates a voxel-based 3D Monte Carlo model, generic optical property models of brain tissues, and a well-defined 3D mouse brain tissue atlas. The application of this platform in brain data models demonstrates that brain heterogeneity has moderate to significant impact depending on application conditions. Estimated light density contours can show the region of any specified power density in the 3D brain space and thus can help optimize the light delivery settings, such as the optical fiber position, fiber diameter, fiber numerical aperture, light wavelength and power. OptogenSIM is freely available and can be easily adapted to incorporate additional brain atlases.
Directory of Open Access Journals (Sweden)
GHAREHPETIAN, G. B.
2009-06-01
Full Text Available The analysis of the risk of partial and total blackouts has a crucial role in determining safe limits in power system design, operation and upgrade. Due to the huge cost of blackouts, it is very important to improve risk assessment methods. In this paper, Monte Carlo simulation (MCS) was used to analyze the risk, and the Gaussian Mixture Method (GMM) has been used to estimate the probability density function (PDF) of the load curtailment, in order to improve the power system risk assessment method. In this improved method, the PDF and a suggested index have been used to analyze the risk of loss of load. The effect of considering the number of generation units of power plants in the risk analysis has been studied too. The improved risk assessment method has been applied to the IEEE 118-bus system and the network of Khorasan Regional Electric Company (KREC), and the PDF of the load curtailment has been determined for both systems. The effect of various network loadings, transmission unavailability, transmission capacity and generation unavailability conditions on blackout risk has been investigated too.
Hueser, J. E.; Brock, F. J.; Melfi, L. T., Jr.; Bird, G. A.
1984-01-01
A new solution procedure has been developed to analyze the flowfield properties in the vicinity of the Inertial Upper Stage/Spacecraft during the first-stage (SRM1) burn. Continuum methods are used to compute the nozzle flow and the exhaust plume flowfield as far as the boundary where the breakdown of translational equilibrium leaves these methods invalid. The Direct Simulation Monte Carlo (DSMC) method is applied everywhere beyond this breakdown boundary. The flowfield distributions of density, velocity, temperature, relative abundance, surface flux density, and pressure are discussed for each species for two sets of boundary conditions: vacuum and freestream. The interaction of the exhaust plume and the freestream with the spacecraft and the two-stream direct interaction are discussed. The results show that the low-density, high-velocity, counterflowing freestream substantially modifies the flowfield properties and the flux density incident on the spacecraft. A freestream bow shock is observed in the data, located forward of the high-density region of the exhaust plume, into which the freestream gas does not penetrate. The total flux density incident on the spacecraft, integrated over the SRM1 burn interval, is estimated to be of the order of 10^22 per square meter (about 1000 atomic layers).
MULTILEVEL MONTE CARLO (MLMC) SIMULATIONS: PERFORMANCE RESULTS FOR SPE10 (XY SLICES)
Energy Technology Data Exchange (ETDEWEB)
Kalchev, Delyan [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Vassilevski, Panayot S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2016-02-26
In this report we first describe a generic multilevel Monte Carlo method and then illustrate its superior performance over a traditional single-level Monte Carlo method for second order elliptic PDEs corresponding to two-dimensional layers in the (x, y)-direction of the Tenth SPE Comparative Solution project (SPE 10), which gives high-contrast permeability coefficients. The SPE10 data set is used as a coarse level in the Monte Carlo method, and the respective permeability coefficient k (provided in the SPE10 dataset) is used as a mean in the simulation. The actual coefficients are drawn from a KL expansion, assuming that the log of the mean is perturbed by log-normally distributed samples.
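The multilevel idea rests on the telescoping identity E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}], with coarse and fine samples coupled on each level. A minimal sketch for a toy problem (Euler-discretized geometric Brownian motion, not the SPE10 elliptic PDE of the report):

```python
import math
import random

def mlmc_gbm(levels, n_per_level, rng, s0=1.0, r=0.05, sigma=0.2, T=1.0):
    """Multilevel Monte Carlo estimate of E[S_T] for geometric Brownian
    motion, Euler scheme with step T/2**l, coarse and fine paths coupled
    by sharing Brownian increments on each level."""
    total = 0.0
    for l in range(levels + 1):
        nf = 2 ** l                    # fine steps on level l
        hf = T / nf
        acc = 0.0
        for _ in range(n_per_level):
            sf = sc = s0
            incs = []
            for step in range(nf):
                dw = rng.gauss(0.0, math.sqrt(hf))
                sf += r * sf * hf + sigma * sf * dw
                incs.append(dw)
                if l > 0 and step % 2 == 1:
                    # coarse path takes one big step with summed increments
                    sc += r * sc * 2 * hf + sigma * sc * (incs[-2] + incs[-1])
            acc += sf - (sc if l > 0 else 0.0)  # level-l correction (or P_0)
        total += acc / n_per_level
    return total

rng = random.Random(3)
est = mlmc_gbm(levels=4, n_per_level=2000, rng=rng)
# exact E[S_T] = exp(0.05) ≈ 1.0513
```

In practice the sample counts per level are chosen from the observed variances so that most of the work stays on the cheap coarse levels, which is the source of the speedup over single-level Monte Carlo.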
Hatch, Harold W.; Jiao, Sally; Mahynski, Nathan A.; Blanco, Marco A.; Shen, Vincent K.
2017-12-01
Virial coefficients are predicted over a large range of both temperatures and model parameter values (i.e., alchemical transformation) from an individual Mayer-sampling Monte Carlo simulation by statistical mechanical extrapolation with minimal increase in computational cost. With this extrapolation method, a Mayer-sampling Monte Carlo simulation of the SPC/E (extended simple point charge) water model quantitatively predicted the second virial coefficient as a continuous function spanning over four orders of magnitude in value and over three orders of magnitude in temperature with less than a 2% deviation. In addition, the same simulation predicted the second virial coefficient if the site charges were scaled by a constant factor, from an increase of 40% down to zero charge. This method is also shown to perform well for the third virial coefficient and the exponential parameter for a Lennard-Jones fluid.
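The quantity being extrapolated has a compact definition: B2(T) = -2*pi * integral of (exp(-u(r)/kT) - 1) * r^2 dr over the Mayer function. A direct quadrature sketch for the Lennard-Jones potential in reduced units (this is the textbook integral, not the Mayer-sampling method of the paper):

```python
import math

def b2_lennard_jones(t_red, r_max=20.0, n=20000):
    """Second virial coefficient of the Lennard-Jones fluid in reduced
    units (sigma = epsilon = 1), by trapezoidal quadrature of the Mayer
    function: B2 = -2*pi * int (exp(-u/T) - 1) r^2 dr."""
    def integrand(r):
        u = 4.0 * (r ** -12 - r ** -6)
        return (math.exp(-u / t_red) - 1.0) * r * r
    r0 = 0.3                           # below r0 the core is effectively hard
    h = (r_max - r0) / n
    s = 0.5 * (integrand(r0) + integrand(r_max))
    for i in range(1, n):
        s += integrand(r0 + i * h)
    core = -(r0 ** 3) / 3.0            # Mayer function is -1 inside the core
    return -2.0 * math.pi * (s * h + core)

# B2 is negative below the Boyle temperature (T* ~ 3.42) and positive above
```

Mayer sampling replaces this one-dimensional quadrature with importance-sampled configurational averages, which is what makes the higher coefficients (B3 and beyond) tractable.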
Kerr, Rex A; Bartol, Thomas M; Kaminsky, Boris; Dittrich, Markus; Chang, Jen-Chien Jack; Baden, Scott B; Sejnowski, Terrence J; Stiles, Joel R
2008-10-13
Many important physiological processes operate at time and space scales far beyond those accessible to atom-realistic simulations, and yet discrete stochastic rather than continuum methods may best represent finite numbers of molecules interacting in complex cellular spaces. We describe and validate new tools and algorithms developed for a new version of the MCell simulation program (MCell3), which supports generalized Monte Carlo modeling of diffusion and chemical reaction in solution, on surfaces representing membranes, and combinations thereof. A new syntax for describing the spatial directionality of surface reactions is introduced, along with optimizations and algorithms that can substantially reduce computational costs (e.g., event scheduling, variable time and space steps). Examples for simple reactions in simple spaces are validated by comparison to analytic solutions. Thus we show how spatially realistic Monte Carlo simulations of biological systems can be far more cost-effective than often is assumed, and provide a level of accuracy and insight beyond that of continuum methods.
Directory of Open Access Journals (Sweden)
Cahyorini Kusumawardani
2010-06-01
Full Text Available A Monte Carlo simulation was performed for Co2+ in 18.6% aqueous ammonia solution at a temperature of 293.16 K, using ab initio pair potentials and three-body potentials for Co-H2O-H2O, Co-NH3-NH3 and Co-H2O-NH3 interactions. The first solvation shell consists on average of 2.9 water and 3.2 ammonia molecules, and the second shell of 10.4 water and 11.2 ammonia molecules. The structure of the solvated ion is discussed in terms of radial distribution functions, angular distributions and coordination numbers. Keywords: Molecular simulation, Monte Carlo simulation, solvation, ab initio
Integrated Building Energy Design of a Danish Office Building Based on Monte Carlo Simulation Method
DEFF Research Database (Denmark)
Sørensen, Mathias Juul; Myhre, Sindre Hammer; Hansen, Kasper Kingo
2017-01-01
… office building located in Aarhus, Denmark. Building geometry, floor plans and employee schedules were obtained from the architects, which is the basis for this study. This study aims to simplify the iterative design process that is based on the traditional trial-and-error method in the late design phases … and improve the collaboration efficiency. The Monte Carlo simulation method is adopted to simulate both the energy performance and indoor climate of the building. Building physics parameters, including characteristics of facades, walls, windows, etc., are taken into consideration, and thousands of combinations … fulfil the requirements and leave additional design freedom for the architects. This study utilizes global design exploration with Monte Carlo simulations in order to form feasible solutions for architects and improve the collaboration efficiency between architects and engineers.
Yasuda, Shugo
2015-01-01
A Monte Carlo simulation of chemotactic bacteria is developed on the basis of kinetic modeling, i.e., the Boltzmann transport equation, and applied to the one-dimensional traveling population wave in a microchannel. In this method, the Monte Carlo method, which calculates the run-and-tumble motions of bacteria, is coupled with a finite volume method that solves the macroscopic transport of the chemical cues in the field. The simulation method can successfully reproduce the traveling population wave of bacteria that was observed experimentally. The microscopic dynamics of the bacteria, e.g., the velocity autocorrelation function and velocity distribution function, are also investigated. It is found that the bacteria which form the traveling population wave create quasi-periodic motions as well as a migratory movement along with the traveling population wave. Simulations are also performed while varying the sensitivity and modulation parameters in the response function of the bacteria. It is found that…
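The run-and-tumble mechanism itself is easy to caricature: a toy one-dimensional walker that tumbles less often when moving up an attractant gradient already produces the net chemotactic drift (a deliberately simplified sketch; the paper couples this to a Boltzmann-equation description and a finite volume solver for the chemical field):

```python
import random

def run_and_tumble_1d(n_bact, n_steps, rng, dt=0.1, v=1.0):
    """Toy 1-D run-and-tumble walk: each bacterium moves at speed v and
    tumbles (reverses direction) with a rate that is lower when it is
    moving up a linear attractant gradient c(x) = x, producing net drift."""
    xs = [0.0] * n_bact
    dirs = [rng.choice((-1.0, 1.0)) for _ in range(n_bact)]
    for _ in range(n_steps):
        for i in range(n_bact):
            up_gradient = dirs[i] > 0           # gradient of c(x) = x is +1
            rate = 0.5 if up_gradient else 2.0  # biased tumbling rates
            if rng.random() < rate * dt:
                dirs[i] = -dirs[i]
            xs[i] += v * dirs[i] * dt
    return xs

rng = random.Random(5)
xs = run_and_tumble_1d(n_bact=300, n_steps=500, rng=rng)
mean_x = sum(xs) / len(xs)   # positive: population drifts up the gradient
```

The paper's scheme replaces the fixed rates with a response functional of the concentration history, which is what produces the self-organized traveling wave rather than simple drift.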
Monte Carlo simulation of MOSFET dosimeter for electron backscatter using the GEANT4 code.
Chow, James C L; Leung, Michael K K
2008-06-01
The aim of this study is to investigate the influence of the body of the metal-oxide-semiconductor field effect transistor (MOSFET) dosimeter in measuring the electron backscatter from lead. The electron backscatter factor (EBF), defined as the ratio of the dose at the tissue-lead interface to the dose at the same point without the presence of backscatter, was calculated by Monte Carlo simulation using the GEANT4 code. Electron beams with energies of 4, 6, 9, and 12 MeV were used in the simulation. It was found that in the presence of the MOSFET body, the EBFs were underestimated by about 2%-0.9% for electron beam energies of 4-12 MeV, respectively. The trend of decreasing EBF with increasing electron energy can be explained by the small MOSFET dosimeter, mainly made of epoxy and silicon, attenuating not only the electron fluence of the beam from upstream but also the electron backscatter generated by the lead underneath the dosimeter. However, this EBF underestimation is of the same order as the statistical uncertainties of the Monte Carlo simulations, which ranged from 1.3% to 0.8% for electron energies of 4-12 MeV, due to the small dosimetric volume. Such a small EBF deviation is therefore insignificant when the uncertainty of the Monte Carlo simulation is taken into account. Corresponding measurements were carried out and agreed with the Monte Carlo results within +/- 2%. Spectra of the energy deposited by the backscattered electrons in dosimetric volumes with and without the lead and MOSFET were determined by Monte Carlo simulations. It was found that in both cases, whether the MOSFET body is present or absent in the simulation, deviations of the electron energy spectra with and without the lead decrease with increasing electron beam energy. Moreover, the softer spectrum of the backscattered electrons when lead is present can result in a reduction of the MOSFET response due to stronger…
Energy Technology Data Exchange (ETDEWEB)
Thiam, Ch.O
2007-10-15
Accurate radiotherapy treatment requires the delivery of a precise dose to the tumour volume and a good knowledge of the dose deposited in the neighbouring zones. Computation of the treatments is usually carried out by a Treatment Planning System (T.P.S.), which needs to be precise and fast. The G.A.T.E. platform for Monte Carlo simulation, based on G.E.A.N.T.4, is an emerging tool for nuclear medicine applications that provides functionalities for fast and reliable dosimetric calculations. In this thesis, we studied in parallel a validation of the G.A.T.E. platform for the modelling of low-energy electron and photon sources and the optimized use of grid infrastructures to reduce simulation computing time. G.A.T.E. was validated for the dose calculation of point kernels for mono-energetic electrons and compared with the results of other Monte Carlo studies. A detailed study was made of the energy deposit during electron transport in G.E.A.N.T.4. In order to validate G.A.T.E. for very low energy photons (<35 keV), three models of radioactive sources used in brachytherapy and containing iodine-125 (2301 of Best Medical International; Symmetra of Uro-Med/Bebig and 6711 of Amersham) were simulated. Our results were analyzed according to the recommendations of Task Group No. 43 of the American Association of Physicists in Medicine (A.A.P.M.). They show a good agreement between G.A.T.E., the reference studies and the A.A.P.M. recommended values. The use of Monte Carlo simulations for a better definition of the dose deposited in the tumour volumes requires long computing times. In order to reduce them, we exploited the E.G.E.E. grid infrastructure, where simulations are distributed using innovative technologies taking into account the grid status. The time necessary for computing a radiotherapy planning simulation using electrons was reduced by a factor of 30. A Web platform based on the G.E.N.I.U.S. portal was developed to make easily available all the methods to submit and manage G.A.T.E. simulations.
Optimizing Noble Gas-Water Interactions via Monte Carlo Simulations.
Warr, Oliver; Ballentine, Chris J; Mu, Junju; Masters, Andrew
2015-11-12
In this work we present optimized noble gas-water Lennard-Jones 6-12 pair potentials for each noble gas. Given the significantly different atomic nature of water and the noble gases, the standard Lorentz-Berthelot mixing rules produce inaccurate unlike molecular interactions between these two species. Consequently, we find simulated Henry's coefficients deviate significantly from their experimental counterparts for the investigated thermodynamic range (293-353 K at 1 and 10 atm), due to a poor unlike potential well term (εij). Where εij is too high or low, so too is the strength of the resultant noble gas-water interaction. This observed inadequacy in using the Lorentz-Berthelot mixing rules is countered in this work by scaling εij for helium, neon, argon, and krypton by factors of 0.91, 0.8, 1.1, and 1.05, respectively, to reach a much improved agreement with experimental Henry's coefficients. Due to the highly sensitive nature of the xenon εij term, coupled with the reasonable agreement of the initial values, no scaling factor is applied for this noble gas. These resulting optimized pair potentials also accurately predict partitioning within a CO2-H2O binary phase system as well as diffusion coefficients in ambient water. This further supports the quality of these interaction potentials. Consequently, they can now form a well-grounded basis for the future molecular modeling of multiphase geological systems.
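The scaled mixing rule is simple to state in code: cross sigma by the arithmetic mean, cross epsilon by the geometric mean times the per-gas factor quoted above (the pure-component parameters in the example are placeholders, not the paper's values):

```python
import math

# Scaled Lorentz-Berthelot mixing for the noble gas-water cross terms.
# The per-gas scaling factors are those quoted in the abstract; xenon is
# left unscaled, as in the paper.
SCALE = {"He": 0.91, "Ne": 0.80, "Ar": 1.10, "Kr": 1.05, "Xe": 1.00}

def mixed_lj(sigma_i, eps_i, sigma_j, eps_j, gas):
    """Cross sigma by the arithmetic mean, cross epsilon by the geometric
    mean times the empirical correction factor for the given noble gas."""
    sigma_ij = 0.5 * (sigma_i + sigma_j)
    eps_ij = SCALE[gas] * math.sqrt(eps_i * eps_j)
    return sigma_ij, eps_ij

# Hypothetical argon and water-oxygen parameters (nm, kJ/mol), for shape only
sigma_ij, eps_ij = mixed_lj(0.340, 0.996, 0.3166, 0.650, "Ar")
```

The scaling factor multiplies only the well depth, so the location of the minimum is untouched while the strength of the gas-water attraction, and hence the simulated Henry's coefficient, shifts.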
Energy Technology Data Exchange (ETDEWEB)
Gonzalez, Jorge A. Carrazana; Ferrera, Eduardo A. Capote; Gomez, Isis M. Fernandez; Castro, Gloria V. Rodriguez; Ricardo, Niury Martinez, E-mail: cphr@cphr.edu.cu [Centro de Proteccion e Higiene de las Radiaciones (CPHR), La Habana (Cuba)
2013-07-01
This work shows how the traceability of the analytical determinations is established using this calibration method. It highlights the advantages offered by Monte Carlo simulation for applying corrections for differences in chemical composition, density and height of the analyzed samples. Likewise, the results obtained by the LVRA in two exercises organized by the International Atomic Energy Agency (IAEA) are presented. In these exercises (an intercomparison and a proficiency test), all reported analytical results were obtained based on efficiency calibrations by Monte Carlo simulation using the DETEFF program.
Neutron spectrum unfolding using genetic algorithm in a Monte Carlo simulation
Energy Technology Data Exchange (ETDEWEB)
Suman, Vitisha [Health Physics Division, Bhabha Atomic Research Centre, Mumbai 400085 (India); Sarkar, P.K., E-mail: pksarkar02@gmail.com [Manipal Centre for Natural Sciences, Manipal University, Manipal 576104 (India)
2014-02-11
A spectrum unfolding technique GAMCD (Genetic Algorithm and Monte Carlo based spectrum Deconvolution) has been developed using the genetic algorithm methodology within the framework of Monte Carlo simulations. Each Monte Carlo history starts with initial solution vectors (population) as randomly generated points in the hyper-dimensional solution space that are related to the measured data by the response matrix of the detection system. The transition of the solution points in the solution space from one generation to another is governed by the genetic algorithm methodology, using the techniques of cross-over (mating) and mutation in a probabilistic manner, adding new solution points to the population. The population size is kept constant by discarding solutions having lesser fitness values (larger differences between measured and calculated results). Solutions having the highest fitness value at the end of each Monte Carlo history are averaged over all histories to obtain the final spectral solution. The present method shows promising results in neutron spectrum unfolding for both under-determined and over-determined problems with simulated test data as well as measured data when compared with some existing unfolding codes. An attractive advantage of the present method is the independence of the final spectra from the initial guess spectra.
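The GAMCD loop can be caricatured for a toy three-channel detector (the response matrix, operators and parameters below are illustrative, not the code's actual implementation): candidate spectra are crossed over and mutated, and the least-fit member is discarded to keep the population size constant:

```python
import random

def ga_unfold(response, measured, pop_size=40, generations=500, rng=None):
    """Toy genetic-algorithm unfolding in the spirit of GAMCD: evolve
    candidate spectra (non-negative vectors) so that folding them through
    the response matrix reproduces the measured counts; fitness is the
    negative sum of squared residuals."""
    rng = rng or random.Random()
    n = len(response[0])

    def fold(spec):
        return [sum(row[j] * spec[j] for j in range(n)) for row in response]

    def fitness(spec):
        return -sum((f - m) ** 2 for f, m in zip(fold(spec), measured))

    pop = [[rng.uniform(0.0, 2.0) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        a, b = rng.sample(pop, 2)
        cut = rng.randrange(1, n)                # one-point crossover
        child = a[:cut] + b[cut:]
        j = rng.randrange(n)                     # Gaussian mutation, clipped
        child[j] = max(0.0, child[j] + rng.gauss(0.0, 0.1))
        pop.append(child)
        pop.remove(min(pop, key=fitness))        # keep population size fixed
    return max(pop, key=fitness)

# Toy 3-channel detector response and a known spectrum to recover
R = [[1.0, 0.5, 0.2], [0.3, 1.0, 0.5], [0.1, 0.3, 1.0]]
true_spec = [1.0, 0.5, 0.8]
meas = [sum(r[j] * true_spec[j] for j in range(3)) for r in R]
solution = ga_unfold(R, meas, rng=random.Random(11))
```

GAMCD wraps this inner evolution in many Monte Carlo histories with different random starting populations and averages the winners, which is what decouples the final spectrum from any single initial guess.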
A Monte Carlo simulation of the packing and segregation of spheres in cylinders
Directory of Open Access Journals (Sweden)
C. R. A. ABREU
1999-12-01
Full Text Available In this work, the Monte Carlo method (MC was extended to simulate the packing and segregation of particles subjected to a gravitational field and confined inside rigid walls. The method was used in systems containing spheres inside cylinders. The calculation of void fraction profiles in both the axial and radial directions was formulated, and some results are presented. In agreement with experimental data, the simulations show that the packed beds present structural ordering near the cylindrical walls up to a distance of about 4 particle diameters. The simulations also indicate that the presence of the cylindrical wall does not seem to have a strong effect on the gravitational segregation phenomenon.
A brief history of the introduction of generalized ensembles to Markov chain Monte Carlo simulations
Berg, Bernd A.
2017-03-01
The most efficient weights for Markov chain Monte Carlo calculations of physical observables are not necessarily those of the canonical ensemble. Generalized ensembles, which do not exist in nature but can be simulated on computers, lead often to a much faster convergence. In particular, they have been used for simulations of first order phase transitions and for simulations of complex systems in which conflicting constraints lead to a rugged free energy landscape. Starting off with the Metropolis algorithm and Hastings' extension, I present a minireview which focuses on the explosive use of generalized ensembles in the early 1990s. Illustrations are given, which range from spin models to peptides.
Gan, Zecheng
2013-01-01
Computer simulation with Monte Carlo is an important tool for investigating the function and equilibrium properties of many systems of biological and soft-matter materials dissolved in solvents. The appropriate treatment of long-range electrostatic interactions is essential for these charged systems, but remains a challenging problem for large-scale simulations. We have developed an efficient Barnes-Hut treecode algorithm for electrostatic evaluation in Monte Carlo simulations of Coulomb many-body systems. The algorithm is based on a divide-and-conquer strategy and a fast update of the octree data structure in each trial move through a local adjustment procedure. We test the accuracy of the tree algorithm and use it in computer simulations of the electric double layer near a spherical interface. It has been shown that the computational cost of the Monte Carlo method with treecode acceleration scales as $\\log N$ in each move. For a typical system with ten thousand particles, by using the new algorithm, the speed has been…
Wu, Di M.; Zhao, S. S.; Lu, Jun Q.; Hu, Xin-Hua
2000-06-01
In Monte Carlo simulations of light propagation in biological tissues, photons propagating in the media are described as classical particles being scattered and absorbed randomly, and their paths are tracked individually. To obtain statistically significant results, however, a large number of photons is needed, and the calculations are time consuming and sometimes impossible with existing computing resources, especially when inhomogeneous boundary conditions are considered. To overcome this difficulty, we have implemented a parallel computing technique in our Monte Carlo simulations; this approach is well justified by the nature of the Monte Carlo method. Utilizing PVM (Parallel Virtual Machine, a parallel computing software package), parallel codes in both C and Fortran have been developed on the massively parallel Cray T3E computer and on a local PC network running Unix/Sun Solaris. Our results show that parallel computing can significantly reduce the running time and make efficient use of low-cost personal computers. In this report, we present a numerical study of light propagation in a slab phantom of skin tissue using the parallel computing technique.
Risk assessment predictions of open dumping area after closure using Monte Carlo simulation
Pauzi, Nur Irfah Mohd; Radhi, Mohd Shahril Mat; Omar, Husaini
2017-10-01
Currently, there are many abandoned open dumping areas that were left without any proper mitigation measures. These open dumping areas could pose a serious hazard to humans and pollute the environment. The objective of this paper is to determine the risk at an open dumping area after it has been closed, using the Monte Carlo simulation method. The risk assessment exercise was conducted at the Kuala Lumpur dumping area. The rapid urbanisation of Kuala Lumpur, coupled with an increase in population, has led to an increase in waste generation and hence to more dumping/landfill areas in Kuala Lumpur. The first stage of this study involved assessment of the dumping area and sample collection. This was followed by measurement of the settlement of the dumping area using an oedometer. The risk of settlement was then predicted using the Monte Carlo simulation method, which calculates both the risk and the long-term settlement. The simulation results show that the risk level of the Kuala Lumpur open dumping area ranges from Level III to Level IV, i.e. from medium to high risk. The settlement (ΔH) is between 3 and 7 meters. Since the risk is between medium and high, mitigation measures are required, such as replacing the top waste soil with new sandy gravel soil. This will increase the strength of the soil and reduce the settlement.
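The Monte Carlo step of such a risk assessment amounts to drawing uncertain soil parameters from assumed distributions and propagating them through a settlement model. The sketch below uses a simplified one-dimensional consolidation formula with purely illustrative parameter ranges, not the paper's site data:

```python
import math
import random

rng = random.Random(42)

def settlement_draw():
    """One Monte Carlo draw of long-term settlement via a simplified
    1D consolidation formula; all parameter ranges are illustrative."""
    H = rng.uniform(15.0, 25.0)        # waste layer thickness (m)
    Cc = rng.uniform(0.3, 0.8)         # compression index
    e0 = rng.uniform(1.0, 2.0)         # initial void ratio
    sigma0 = rng.uniform(50.0, 100.0)  # in-situ effective stress (kPa)
    dsigma = rng.uniform(50.0, 150.0)  # stress increase (kPa)
    return H * Cc / (1.0 + e0) * math.log10((sigma0 + dsigma) / sigma0)

draws = sorted(settlement_draw() for _ in range(10_000))
p05, p50, p95 = draws[500], draws[5_000], draws[9_500]   # percentiles (m)
```

Risk levels would then be assigned by comparing the percentile settlements against threshold values.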
Exploring fluctuations and phase equilibria in fluid mixtures via Monte Carlo simulation
Denton, Alan R.; Schmidt, Michael P.
2013-03-01
Monte Carlo simulation provides a powerful tool for understanding and exploring thermodynamic phase equilibria in many-particle interacting systems. Among the most physically intuitive simulation methods is Gibbs ensemble Monte Carlo (GEMC), which allows direct computation of phase coexistence curves of model fluids by assigning each phase to its own simulation cell. When one or both of the phases can be modelled virtually via an analytic free energy function (Mehta and Kofke 1993 Mol. Phys. 79 39), the GEMC method takes on new pedagogical significance as an efficient means of analysing fluctuations and illuminating the statistical foundation of phase behaviour in finite systems. Here we extend this virtual GEMC method to binary fluid mixtures and demonstrate its implementation and instructional value with two applications: (1) a lattice model of simple mixtures and polymer blends and (2) a free-volume model of a complex mixture of colloids and polymers. We present algorithms for performing Monte Carlo trial moves in the virtual Gibbs ensemble, validate the method by computing fluid demixing phase diagrams, and analyse the dependence of fluctuations on system size. Our open-source simulation programs, coded in the platform-independent Java language, are suitable for use in classroom, tutorial, or computational laboratory settings.
Meng, Xiangcui; Wang, Shangxu; Tang, Genyang; Li, Jingnan; Sun, Chao
2017-06-01
Coda waves are usually regarded as noise in conventional seismic exploration. Our aim is to use the energy of coda waves to estimate the stochastic parameters of random media, which is necessary to characterize the subsurface reservoir and assess the total oil or gas volume in a heterogeneous reservoir. In this paper, we briefly present the Monte Carlo radiative transfer (MCRT) theory in acoustic media, which is often used in seismology to model the envelopes of seismic energy in approximated random media. Then, we estimate the fluctuation strength and correlation length in 2D acoustic heterogeneous media based on the MCRT simulation from synthetic crosswell seismic data. Our results show that sufficient energy information at a range of offsets can alleviate the non-uniqueness of the inversion result. In order to properly balance the energy contributions of direct waves and coda waves in the inversion process, we modify the objective function to compare the logarithms of the RT envelopes with those of the envelopes computed with the finite difference method. This revision of the objective function makes the inversion result more accurate and more stable. Even when there is strong noise in the envelopes of the seismic data, the modified equation tends to estimate the correct values. Moreover, the estimated correlation length and fluctuation strength are influenced by the type of random model used in the MCRT simulation. It is better to choose the type of random media matching the investigated medium when applying the MCRT simulation to estimate its stochastic parameters.
Energy Technology Data Exchange (ETDEWEB)
Mendes, Hitalo R.; Tomal, Alessandra [Universidade Estadual de Campinas (UNICAMP), Campinas, SP (Brazil). Instituto de Fisica Gleb Wataghin
2016-07-01
The dosimetry in pediatric radiology is essential due to the higher risk that children have in comparison to adults. The focus of this study is to present how the dose varies with depth in a 10-year-old child and in a newborn; for this purpose, simulations were made using the Monte Carlo method. Tube potentials of 70 and 90 kVp were considered for the 10-year-old and of 70 and 80 kVp for the newborn. The results show that, in both cases, the dose at the skin surface is larger for the smaller potential value; however, it decreases faster for the larger potential values. Another observation is that, because the newborn is thinner, the ratio between the entrance dose and the exit dose is lower than for the 10-year-old, showing that it is possible to make an image using a smaller entrance skin dose while keeping the same level of exposure at the detector. (author)
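The depth dependence described above can be mimicked with a toy photon-attenuation Monte Carlo: a lower effective attenuation coefficient (standing in for a harder, higher-kVp beam) gives a lower entrance dose and a flatter fall-off. The coefficients and geometry below are assumptions for illustration, not a real beam model:

```python
import math
import random

def depth_dose(mu, thickness, n_bins, n_photons, seed=1):
    """Toy MC: photons are absorbed at exponentially distributed depths;
    dose per bin is taken as the number of absorptions. mu is an assumed
    effective attenuation coefficient (1/cm), not a real beam spectrum."""
    rng = random.Random(seed)
    bins = [0] * n_bins
    width = thickness / n_bins
    for _ in range(n_photons):
        d = -math.log(rng.random()) / mu
        if d < thickness:
            bins[int(d / width)] += 1
    return bins

soft = depth_dose(mu=0.35, thickness=10.0, n_bins=10, n_photons=100_000)  # "low kVp"
hard = depth_dose(mu=0.25, thickness=10.0, n_bins=10, n_photons=100_000)  # "high kVp"
# lower-energy beam: higher entrance dose, steeper fall-off with depth
soft_ratio = soft[0] / soft[-1]
hard_ratio = hard[0] / hard[-1]
```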
Energy Technology Data Exchange (ETDEWEB)
Schoof, Tim
2017-03-08
The reliable quantum mechanical description of thermodynamic properties of fermionic many-body systems at high densities and strong degeneracy is of increasing interest due to recent experimental progress in generating systems that exhibit a non-trivial interplay of quantum, temperature, and coupling effects. While quantum Monte Carlo methods are among the most accurate approaches for the description of the ground state, finite-temperature path integral Monte Carlo (PIMC) simulations cannot correctly describe weakly to moderately coupled and strongly degenerate Fermi systems due to the so-called fermion sign problem. By switching from the coordinate representation to a basis of anti-symmetric Slater-determinants, the Configuration Path Integral Monte Carlo (CPIMC) approach greatly reduces the sign problem and allows for the exact computation of thermodynamic properties in this regime. During this work, the CPIMC algorithm was greatly improved in terms of efficiency and accessible observables. The first successful implementation of the diagrammatic worm algorithm for a general Hamiltonian in Fock space with arbitrary pair interactions gives direct access to the Matsubara Green function. This allows for the reconstruction of dynamic properties from simulations in thermodynamic equilibrium and significantly reduces the statistical variance of derived estimators, such as the one-particle density. The strongly improved MC sampling, the much more efficient calculation of update probabilities, and the successful parallelization to thousands of CPU cores, which have been achieved as part of the new implementation, are essential for the subsequent application of the method to much larger systems than in previous works. This thesis demonstrates the capabilities of the CPIMC approach for a model system of Coulomb interacting fermions in a two-dimensional harmonic trap. The correctness of the CPIMC implementation is verified by rigorous comparisons with an exact
Levien, Ethan; Bressloff, Paul C.
2017-10-01
Many biochemical systems appearing in applications have a multiscale structure so that they converge to piecewise deterministic Markov processes in a thermodynamic limit. The statistics of the piecewise deterministic process can be obtained much more efficiently than those of the exact process. We explore the possibility of coupling sample paths of the exact model to the piecewise deterministic process in order to reduce the variance of their difference. We then apply this coupling to reduce the computational complexity of a Monte Carlo estimator. Motivated by the rigorous results in [1], we show how this method can be applied to realistic biological models with nontrivial scalings.
Energy Technology Data Exchange (ETDEWEB)
Oramas Polo, I.
2014-07-01
This paper presents the simulation of the Park Isocam II gamma camera with the Monte Carlo code SIMIND. This simulation allows a detailed assessment of the functioning of the gamma camera. The parameters evaluated by means of the simulation are: the intrinsic uniformity with different window amplitudes, the system uniformity, the extrinsic spatial resolution, the maximum count rate, the intrinsic sensitivity, the system sensitivity, the energy resolution and the pixel size. The results of the simulation are compared and evaluated against the specifications of the manufacturer of the gamma camera, taking into account the National Protocol for Quality Control of Nuclear Medicine Instruments of the Cuban Medical Equipment Control Center. The simulation reported here demonstrates the validity of the SIMIND Monte Carlo code for evaluating the performance of the Park Isocam II gamma camera, and as a result a computational model of the camera has been obtained. (Author)
Prediction of beam hardening artefacts in computed tomography using Monte Carlo simulations
DEFF Research Database (Denmark)
Thomsen, M.; Bergbäck Knudsen, Erik; Willendrup, Peter Kjær
2015-01-01
We show how radiological images of both single and multi material samples can be simulated using the Monte Carlo simulation tool McXtrace and how these images can be used to make a three dimensional reconstruction. Good numerical agreement between the X-ray attenuation coefficient in experimental and simulated data can be obtained, which allows us to use simulated projections in the linearisation procedure for single material samples and in that way reduce beam hardening artefacts. The simulations can be used to predict beam hardening artefacts in multi material samples with complex geometry, illustrated with an example. Linearisation requires knowledge about the X-ray transmission at varying sample thickness, but in some cases homogeneous calibration phantoms are hard to manufacture, which affects the accuracy of the calibration. Using simulated data overcomes the manufacturing problems...
OWL: A scalable Monte Carlo simulation suite for finite-temperature study of materials
Li, Ying Wai; Yuk, Simuck F.; Cooper, Valentino R.; Eisenbach, Markus; Odbadrakh, Khorgolkhuu
The OWL suite is a simulation package for performing large-scale Monte Carlo simulations. Its object-oriented, modular design enables it to interface with various external packages for energy evaluations. It is therefore applicable to study the finite-temperature properties for a wide range of systems: from simple classical spin models to materials where the energy is evaluated by ab initio methods. This scheme not only allows for the study of thermodynamic properties based on first-principles statistical mechanics, it also provides a means for massive, multi-level parallelism to fully exploit the capacity of modern heterogeneous computer architectures. We will demonstrate how improved strong and weak scaling is achieved by employing novel, parallel and scalable Monte Carlo algorithms, as well as the applications of OWL to a few selected frontier materials research problems. This research was supported by the Office of Science of the Department of Energy under contract DE-AC05-00OR22725.
Exact pseudofermion action for Monte Carlo simulation of domain-wall fermion
Directory of Open Access Journals (Sweden)
Yu-Chih Chen
2014-11-01
Full Text Available We present an exact pseudofermion action for hybrid Monte Carlo simulation (HMC) of one-flavor domain-wall fermion (DWF), with the effective 4-dimensional Dirac operator equal to the optimal rational approximation of the overlap-Dirac operator with kernel H = cHw(1 + dγ5Hw)^(−1), where c and d are constants. Using this exact pseudofermion action, we perform HMC of one-flavor QCD, and compare its characteristics with the widely used rational hybrid Monte Carlo (RHMC) algorithm. Moreover, to demonstrate the practicality of the exact one-flavor algorithm (EOFA), we perform the first dynamical simulation of (1+1)-flavor QCD with DWF.
Direct Measurement of Power Dissipated by Monte Carlo Simulations on CPU and FPGA Platforms
DEFF Research Database (Denmark)
Albicocco, Pietro; Papini, Davide; Nannarelli, Alberto
In this technical report, we describe how power dissipation measurements on different computing platforms (a desktop computer and an FPGA board) are performed by using a Hall effect-based current sensor. The chosen application is a Monte Carlo simulation for European option pricing, which is a popular algorithm used in financial computations. The Hall effect probe measurements complement the measurements performed on the core of the FPGA by a built-in Xilinx power monitoring system.
Energy Technology Data Exchange (ETDEWEB)
Villafan-Vidales, H.I.; Arancibia-Bulnes, C.A.; Dehesa-Carrasco, U. [Centro de Investigacion en Energia, Universidad Nacional Autonoma de Mexico, Privada Xochicalco s/n, Col. Centro, A.P. 34, Temixco, Morelos 62580 (Mexico); Romero-Paredes, H. [Departamento de Ingenieria de Procesos e Hidraulica, Universidad Autonoma Metropolitana-Iztapalapa, Av. San Rafael Atlixco No.186, Col. Vicentina, A.P. 55-534, Mexico D.F 09340 (Mexico)
2009-01-15
Radiative heat transfer in a solar thermochemical reactor for the thermal reduction of cerium oxide is simulated with the Monte Carlo method. The directional characteristics and the power distribution of the concentrated solar radiation that enters the cavity are obtained by carrying out a Monte Carlo ray tracing of a paraboloidal concentrator. The reactor is considered to contain a gas/particle suspension directly exposed to the concentrated solar radiation. The suspension is treated as a non-isothermal, non-gray, absorbing, emitting, and anisotropically scattering medium. The transport coefficients of the particles are obtained from Mie-scattering theory using the optical properties of cerium oxide. From the simulations, the aperture radius and the particle concentration were optimized to match the characteristics of the considered concentrator. (author)
Directory of Open Access Journals (Sweden)
N Heidarloo
2017-08-01
Full Text Available Intraoperative electron radiotherapy is a radiotherapy method that delivers a high single fraction of radiation dose to the patient in one session during surgery. The beam shaper applicator is one of the applicators recently employed with this radiotherapy method, and it is of considerable use in the treatment of large tumors. In this study, the dosimetric characteristics of the electron beam produced by the LIAC intraoperative radiotherapy accelerator in conjunction with this applicator were evaluated through Monte Carlo simulation with the MCNP code. The results showed that the electron beam produced with the beam shaper applicator has the desirable dosimetric characteristics, so that the mentioned applicator can be considered for clinical purposes. Furthermore, the good agreement between the results of the simulation and practical dosimetry confirms the applicability of the Monte Carlo method in determining the dosimetric parameters of electron beam intraoperative radiotherapy.
Application of Monte Carlo Simulation of Total Skin Electron Therapy for Treatment Optimization.
Shokrani, Parvaneh
A technique for total skin irradiation was simulated using the Monte Carlo method. By means of this simulation, the Total Skin Electron Therapy (TSET) technique was optimized. The purpose of the optimization process was to select the properties of the optimal TSET secondary scatterer. The optimization process was divided into four steps. First, the geometry of the treatment head of a Philips SL-20 linear accelerator and the geometry of the TSET technique were simulated. Using a combination of the EGS4 Monte Carlo code system, geometry routines, and a package of variance reduction techniques, the spectrum of the electron beam at the exit window of the treatment head and at the treatment plane (located at 300 cm) was calculated. Second, to confirm the accuracy of the calculations, the calculated depth dose curves, field uniformity and bremsstrahlung contamination for a control scatterer were compared to measured values. Third, a group of candidate materials for the optimal TSET scatterer was selected; the treatment field characteristics produced by these materials were calculated and the optimal scatterer was chosen. Fourth, the selection of the optimal scatterer was confirmed by physical measurements. Physical measurements showed that the EGS4 Monte Carlo code system, together with the TSET user code developed in this research, simulated the TSET technique accurately. However, there were some problem areas: the central axis surface dose was underestimated by the simulations, and there was an inconsistency in the radial distribution of the bremsstrahlung contamination. Monte Carlo simulation of the TSET technique predicted a 0.059 mm thickness of lead as the optimal scatterer. This scatterer was predicted to produce a minimum uniformity of 73%, a d_{80%} of between 1.8-2.1 cm and a 30% increase in dose rate (as compared to the control scatterer). Moreover, the Monte Carlo simulations of the TSET technique and...
Morgan, Grant B.; Zhu, Min; Johnson, Robert L.; Hodge, Kari J.
2014-01-01
Common estimators of interrater reliability include Pearson product-moment correlation coefficients, Spearman rank-order correlations, and the generalizability coefficient. The purpose of this study was to examine the accuracy of estimators of interrater reliability when varying the true reliability, number of scale categories, and number of…
DEFF Research Database (Denmark)
Jensen, Jørgen Juncher
2010-01-01
It is well known from linear analyses in a stochastic seaway that the mean out-crossing rate of a level r is given through the reliability index, defined as r divided by the standard deviation. Hence, the reliability index becomes inversely proportional to the significant wave height. For non-linear processes the mean out-crossing rate depends non-linearly on the response level r, and a good estimate can be found using the First Order Reliability Method (FORM), see e.g. Jensen and Capul (2006). The FORM analysis also shows that the reliability index is strictly inversely proportional to the significant wave height irrespective of the non-linearity in the system. However, the FORM analysis only gives an approximation to the mean out-crossing rate. A more exact result can be obtained by Monte Carlo simulations, but the necessary length of the time domain simulations for very low out-crossing rates...
Ye, Hong-zhou; Jiang, Hong
2014-01-01
Materials with spin-crossover (SCO) properties hold great potential for information storage and have therefore attracted considerable attention in recent decades. The hysteresis phenomena accompanying SCO are attributed to intermolecular cooperativity, whose underlying mechanism may have a vibronic origin. In this work, a new vibronic Ising-like model, in which the elastic coupling between SCO centers is included by considering harmonic stretching and bending (SAB) interactions, is proposed and solved by Monte Carlo simulations. The key parameters in the new model, $k_1$ and $k_2$, corresponding to the elastic constants of the stretching and bending modes, respectively, can be directly related to the macroscopic bulk and shear moduli of the material under study, which can be readily estimated either from experimental measurements or from first-principles calculations. The convergence issue in the MC simulations of the thermal hysteresis has been carefully checked, and it was found that the stable hysteresis loop can...
Effect of dead materials on calorimeter response and Monte Carlo simulation
Energy Technology Data Exchange (ETDEWEB)
Dharmaratna, W.G.D. (Florida State Univ., Tallahassee, FL (United States). Dept. of Physics)
1992-01-01
The D0 calorimeter system, a cylindrical central calorimeter and two end calorimeters, is constructed with minimal cracks and dead regions in order to provide essentially hermetic coverage over the full solid angle. The effect of the few existing construction features that could perturb the uniformity of the calorimeter is studied in detail with beam tests. The results, with correction algorithms, are presented, and a comparison with the Monte Carlo simulation is made.
Monte Carlo simulations and fractional kinetics considerations for the Higuchi equation.
Dokoumetzidis, Aristides; Kosmidis, Kosmas; Macheras, Panos
2011-10-10
We highlight some physical and mathematical aspects relevant to the derivation and use of the Higuchi equation. More specifically, the application of the Higuchi equation to different geometries is discussed, Monte Carlo simulations verifying the validity of the Higuchi law in one and two dimensions are presented, and the Higuchi equation is derived under alternative boundary conditions making use of fractional calculus. Copyright © 2010 Elsevier B.V. All rights reserved.
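The one-dimensional Monte Carlo verification mentioned above can be sketched as a lattice random walk with one absorbing (releasing) boundary; at early times the released fraction grows roughly like sqrt(t), as the Higuchi equation predicts. Lattice size, particle count and times below are illustrative assumptions:

```python
import random

def release_fraction(n_sites=100, n_particles=2000, t_max=400, seed=3):
    """Toy 1D Monte Carlo of Higuchi-type release: particles random-walk on
    a lattice; the boundary at site 0 is absorbing (drug release), the back
    wall is reflecting. Returns the cumulative released fraction per step."""
    rng = random.Random(seed)
    positions = [rng.randrange(n_sites) for _ in range(n_particles)]
    released = 0
    out = []
    for _ in range(t_max):
        nxt = []
        for p in positions:
            p += rng.choice((-1, 1))
            if p < 0:
                released += 1        # escaped through the release surface
                continue
            if p >= n_sites:
                p = n_sites - 1      # reflecting back wall
            nxt.append(p)
        positions = nxt
        out.append(released / n_particles)
    return out

frac = release_fraction()
# early-time release should grow roughly like sqrt(t): doubling of the
# released fraction when t is quadrupled
```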
Monte Carlo simulation of air sampling methods for the measurement of radon decay products.
Sima, Octavian; Luca, Aurelian; Sahagia, Maria
2017-08-01
A stochastic model of the processes involved in the measurement of the activity of 222Rn decay products was developed. The distributions of the relevant factors, including air sampling and radionuclide collection, are propagated by Monte Carlo simulation to the final distribution of the measurement results. The uncertainties of the 222Rn decay product concentrations in air are thereby realistically evaluated. Copyright © 2017 Elsevier Ltd. All rights reserved.
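The propagation step of such a stochastic measurement model can be sketched by sampling each factor from its distribution and recomputing the derived concentration. The nominal values and uncertainties below are illustrative assumptions, not those of the paper:

```python
import random
import statistics

rng = random.Random(7)

def concentration_draw():
    """One draw of a simplified air-sampling measurement model:
    concentration = counts / (efficiency * flow * time). All nominal
    values and uncertainties are illustrative."""
    counts = rng.gauss(5000, 5000 ** 0.5)   # Poisson counts, normal approx.
    efficiency = rng.gauss(0.25, 0.01)      # detector efficiency
    flow_rate = rng.gauss(20.0, 0.5)        # sampler flow (L/min)
    t_sample = 10.0                         # sampling time (min), fixed
    return counts / (efficiency * flow_rate * t_sample)

draws = [concentration_draw() for _ in range(20_000)]
mean = statistics.fmean(draws)
rel_u = statistics.stdev(draws) / mean      # combined relative uncertainty
```

The Monte Carlo result can be checked against the usual quadrature sum of the relative uncertainties of the input factors.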
McStas 1.1: A tool for building neutron Monte Carlo simulations
DEFF Research Database (Denmark)
Lefmann, K.; Nielsen, K.; Tennant, D.A.
2000-01-01
McStas is a project to develop general tools for the creation of simulations of neutron scattering experiments. In this paper, we briefly introduce McStas and describe a particular application of the program: the Monte Carlo calculation of the resolution function of a standard triple-axis neutron scattering instrument. The method compares well with the analytical calculations of Popovici. (C) 2000 Elsevier Science B.V. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Nomura, Yasushi [Department of Fuel Cycle Safety Research, Nuclear Safety Research Center, Tokai Research Establishment, Japan Atomic Energy Research Institute, Tokai, Ibaraki (Japan); Tamaki, Hitoshi [Department of Safety Research Technical Support, Tokai Research Establishment, Japan Atomic Energy Research Institute, Tokai, Ibaraki (Japan); Kanai, Shigeru [Fuji Research Institute Corporation, Tokyo (Japan)
2000-04-01
In a plant system consisting of complex equipment and components, such as a reprocessing facility, there may be a grace time between an initiating event and the resultant serious accident, allowing operating personnel to take remedial actions and thus terminate the ongoing accident sequence. A component Monte Carlo simulation computer program, TITAN, has been developed to analyze such complex reliability models, including grace time, and to obtain accident occurrence frequencies without difficulty. Firstly, the basic methods of the component Monte Carlo simulation for obtaining an accident occurrence frequency are introduced, and the basic performance, such as precision, convergence, and parallelization of the calculation, is shown through calculation of a prototype accident sequence model. As an example of applicability to a real-scale plant model, a red oil explosion in a German reprocessing plant model is simulated to show that TITAN can give an accident occurrence frequency with relatively good accuracy. Moreover, results of uncertainty analyses by TITAN are presented to illustrate further capabilities, and a new input-data format suited to the component Monte Carlo simulation is proposed. The present paper describes the calculational method, performance, real-scale applicability, and the new proposal for the TITAN code. In the Appendixes, a conventional analytical method is shown that avoids complex and laborious calculation in obtaining a strict solution of the accident occurrence frequency, for comparison with the Monte Carlo method. The user's manual and the list/structure of the program are also contained in the Appendixes to facilitate usage of the TITAN computer program. (author)
Radiation hydrodynamics simulations of massive star formation using Monte Carlo radiation transfer
Harries, Tim J.; Haworth, Tom J.; Acreman, David
2012-01-01
We present a radiation hydrodynamics simulation of the formation of a massive star using a Monte Carlo treatment for the radiation field. We find that strong, high speed bipolar cavities are driven by the radiation from the protostar, and that accretion occurs stochastically from a circumstellar disc. We have computed spectral energy distributions and images at each timestep, which may in future be used to compare our models with photometric, spectroscopic, and interferometric observations of...
PeneloPET, a Monte Carlo PET simulation tool based on PENELOPE: features and validation
Energy Technology Data Exchange (ETDEWEB)
Espana, S; Herraiz, J L; Vicente, E; Udias, J M [Grupo de Fisica Nuclear, Departmento de Fisica Atomica, Molecular y Nuclear, Universidad Complutense de Madrid, Madrid (Spain); Vaquero, J J; Desco, M [Unidad de Medicina y CirugIa Experimental, Hospital General Universitario Gregorio Maranon, Madrid (Spain)], E-mail: jose@nuc2.fis.ucm.es
2009-03-21
Monte Carlo simulations play an important role in positron emission tomography (PET) imaging, as an essential tool for the research and development of new scanners and for advanced image reconstruction. PeneloPET, a PET-dedicated Monte Carlo tool, is presented and validated in this work. PeneloPET is based on PENELOPE, a Monte Carlo code for the simulation of the transport in matter of electrons, positrons and photons, with energies from a few hundred eV to 1 GeV. PENELOPE is robust, fast and very accurate, but it may be unfriendly to people not acquainted with the FORTRAN programming language. PeneloPET is an easy-to-use application which allows comprehensive simulations of PET systems within PENELOPE. Complex and realistic simulations can be set by modifying a few simple input text files. Different levels of output data are available for analysis, from sinogram and lines-of-response (LORs) histogramming to fully detailed list mode. These data can be further exploited with the preferred programming language, including ROOT. PeneloPET simulates PET systems based on crystal array blocks coupled to photodetectors and allows the user to define radioactive sources, detectors, shielding and other parts of the scanner. The acquisition chain is simulated in high level detail; for instance, the electronic processing can include pile-up rejection mechanisms and time stamping of events, if desired. This paper describes PeneloPET and shows the results of extensive validations and comparisons of simulations against real measurements from commercial acquisition systems. PeneloPET is being extensively employed to improve the image quality of commercial PET systems and for the development of new ones.
Energy Technology Data Exchange (ETDEWEB)
Kim, Hyun Suk; Ye, Sung Joon [Seoul National University, Seoul (Korea, Republic of); Smith, Martin B.; Koslowsky, Martin R. [Bubble Technology Industries Inc., Chalk River (Canada); Kwak, Sung Woo [Korea Institute of Nuclear Nonproliferation And Control (KINAC), Daejeon (Korea, Republic of); Kim, Gee Hyun [Sejong University, Seoul (Korea, Republic of)
2017-03-15
Simultaneous detection of neutrons and gamma rays has become much more practicable by taking advantage of good gamma-ray discrimination properties using the pulse shape discrimination (PSD) technique. Recently, we introduced a commercial CLYC system in Korea, and performed initial characterization and simulation studies of the CLYC detector system to provide references for the future implementation of the dual-mode scintillator system in various studies and applications. We evaluated a CLYC detector with 95% 6Li enrichment using various gamma-ray sources and a 252Cf neutron source, validating our Monte Carlo simulation results against measurement experiments. Absolute full-energy peak efficiency values were calculated for the gamma-ray sources and the neutron source using MCNP6 and compared with measurements of the calibration sources. In addition, the behavioral characteristics of neutrons were validated by comparing simulations and experiments on neutron moderation with various polyethylene (PE) moderator thicknesses. Both results showed good agreement in the overall characteristics of the gamma and neutron detection efficiencies, with a consistent ~20% discrepancy. Furthermore, moderation of neutrons emitted from 252Cf showed similarities between the simulation and the experiment, in terms of their relative ratios depending on the thickness of the PE moderator. A CLYC detector system was thus characterized for its energy resolution and detection efficiency, and the Monte Carlo simulations of the detector system were validated experimentally. Validation of the simulation results in the overall trend of the CLYC detector behavior provides the fundamental basis and validity for follow-up Monte Carlo simulation studies for the development of our dual-particle imager using a rotational modulation collimator.
SKIRT: The design of a suite of input models for Monte Carlo radiative transfer simulations
Baes, M.; Camps, P.
2015-09-01
The Monte Carlo method is the most popular technique to perform radiative transfer simulations in a general 3D geometry. The algorithms behind and acceleration techniques for Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. On the contrary, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can be either analytical toy models or numerical models defined on grids or a set of particles) and the extensive use of decorators that combine and alter these building blocks to more complex structures. For a number of decorators, e.g. those that add spiral structure or clumpiness, we provide a detailed description of the algorithms that can be used to generate random positions. Advantages of this decorator-based design include code transparency, the avoidance of code duplication, and an increase in code maintainability. Moreover, since decorators can be chained without problems, very complex models can easily be constructed out of simple building blocks. Finally, based on a number of test simulations, we demonstrate that our design using customised random position generators is superior to a simpler design based on a generic black-box random position generator.
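The generic black-box approach that the decorator-based design improves on is plain rejection sampling of positions from the 3D density. A minimal sketch follows; the Plummer-sphere profile and all parameters are illustrative assumptions, not SKIRT code:

```python
import math
import random

def sample_positions(density, bound, n, rho_max, seed=11):
    """Rejection-sample random positions from an (unnormalised) 3D density:
    propose uniformly in a cube, accept with probability density/rho_max."""
    rng = random.Random(seed)
    pts = []
    while len(pts) < n:
        x, y, z = (rng.uniform(-bound, bound) for _ in range(3))
        if rng.random() * rho_max < density(x, y, z):
            pts.append((x, y, z))
    return pts

def plummer(x, y, z, a=1.0):
    """Plummer-sphere density profile (normalisation is irrelevant here)."""
    r2 = x * x + y * y + z * z
    return (1.0 + r2 / (a * a)) ** -2.5

pts = sample_positions(plummer, bound=5.0, n=2000, rho_max=1.0)
mean_r = sum(math.sqrt(x * x + y * y + z * z) for x, y, z in pts) / len(pts)
```

Customised generators of the kind the paper describes replace this black box with exact inversion along each coordinate, and decorators (clumpiness, spiral structure) transform positions drawn from the undecorated building block.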
Directory of Open Access Journals (Sweden)
M. S. Mayeed
2014-01-01
Full Text Available An NVT Monte Carlo simulation has been performed by applying the reptation algorithm to a simplified off-lattice polymer model of perfluoropolyether Z. Bulk conditions were simulated first to compare the average radius of gyration with bulk experimental results. The model was then tested for its ability to describe dynamics. After this, it was applied to observe the replenishment of nanoscale ultrathin liquid films on flat solid carbon surfaces. The replenishment rate for trenches of different widths (8, 12, and 16 nm) and several molecular weights between two films of perfluoropolyether Z from the Monte Carlo simulation is compared to that obtained by solving the diffusion equation using the experimental diffusion coefficients of Ma et al. (1999), with room conditions in both cases. Replenishment per Monte Carlo cycle appears to be a constant multiple of replenishment per second, at least up to a 2 nm replenished film thickness of the trenches over the carbon surface. Good agreement is achieved between the experimental results and the dynamics of molecules using reptation moves in the ultrathin liquid films on solid surfaces.
On Non-Asymptotic Optimal Stopping Criteria in Monte Carlo Simulations
Bayer, Christian
2014-01-01
We consider the setting of estimating the mean of a random variable by a sequential stopping rule Monte Carlo (MC) method. The performance of a typical second-moment-based sequential stopping rule MC method is shown to be unreliable in this setting, both by numerical examples and through analysis. By analysis and approximations, we construct a higher-moment-based stopping rule, which is shown in numerical examples to perform more reliably and only slightly less efficiently than the second-moment-based stopping rule.
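The second-moment rule critiqued above can be sketched as follows: keep drawing samples until the confidence-interval half-width, estimated from the running sample variance, falls below a tolerance. This is a minimal Python sketch of the baseline rule only (not the authors' higher-moment construction); all parameter names are illustrative:

```python
import math
import random

def mc_mean_with_stopping(sample, tol=0.01, z=1.96, batch=1000, max_n=10**6):
    """Second-moment sequential stopping rule: sample in batches until the
    z-level confidence-interval half-width drops below tol (or max_n is hit)."""
    n, s, s2 = 0, 0.0, 0.0
    while n < max_n:
        for _ in range(batch):
            x = sample()
            s += x
            s2 += x * x
        n += batch
        mean = s / n
        var = max(s2 / n - mean * mean, 0.0)   # running sample variance
        half_width = z * math.sqrt(var / n)    # CI half-width estimate
        if half_width < tol:
            break
    return mean, half_width, n

random.seed(1)
est, hw, n = mc_mean_with_stopping(lambda: random.uniform(0, 1), tol=0.005)
```

The abstract's warning applies exactly here: because `half_width` is itself estimated from the samples, an unlucky early batch can underestimate the variance and stop the loop too soon, which is what motivates a higher-moment rule.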
Molecular dynamics and dynamic Monte-Carlo simulation of irradiation damage with focused ion beams
Ohya, Kaoru
2017-03-01
The focused ion beam (FIB) has become an important tool for micro- and nanostructuring of samples, for example by milling, deposition and imaging. However, this leads to damage of the surface on the nanometer scale from implanted projectile ions and recoiled material atoms. It is therefore important to investigate each kind of damage quantitatively. We present a dynamic Monte-Carlo (MC) simulation code to simulate the morphological and compositional changes of a multilayered sample under ion irradiation, and a molecular dynamics (MD) simulation code to simulate dose-dependent changes in the backscattering-ion (BSI)/secondary-electron (SE) yields of a crystalline sample. Recent progress with these codes is also presented, covering simulations of the surface morphology and Mo/Si layer intermixing in an EUV lithography mask irradiated with FIBs, and of the effect of crystalline orientation on BSI and SE yields in relation to channeling contrast in scanning ion microscopes.
Characterization of a cylindrical plastic β-detector with Monte Carlo simulations of optical photons
Energy Technology Data Exchange (ETDEWEB)
Guadilla, V., E-mail: victor.guadilla@ific.uv.es [Instituto de Física Corpuscular, CSIC-Universidad de Valencia, E-46071 Valencia (Spain); Algora, A. [Instituto de Física Corpuscular, CSIC-Universidad de Valencia, E-46071 Valencia (Spain); Institute of Nuclear Research of the Hungarian Academy of Sciences, Debrecen H-4026 (Hungary); Tain, J.L.; Agramunt, J. [Instituto de Física Corpuscular, CSIC-Universidad de Valencia, E-46071 Valencia (Spain); Äystö, J. [University of Jyvaskyla, Department of Physics, P.O. Box 35, FI-40014 (Finland); Briz, J.A.; Cucoanes, A. [Subatech, CNRS/IN2P3, Nantes, EMN, F-44307 Nantes (France); Eronen, T. [University of Jyvaskyla, Department of Physics, P.O. Box 35, FI-40014 (Finland); Estienne, M.; Fallot, M. [Subatech, CNRS/IN2P3, Nantes, EMN, F-44307 Nantes (France); Fraile, L.M. [Universidad Complutense, Grupo de Física Nuclear, CEI Moncloa, E-28040 Madrid (Spain); Ganioğlu, E. [Department of Physics, Istanbul University, 34134 Istanbul (Turkey); Gelletly, W. [Instituto de Física Corpuscular, CSIC-Universidad de Valencia, E-46071 Valencia (Spain); Department of Physics, University of Surrey, GU2 7XH Guildford (United Kingdom); Gorelov, D.; Hakala, J.; Jokinen, A. [University of Jyvaskyla, Department of Physics, P.O. Box 35, FI-40014 (Finland); Jordan, D. [Instituto de Física Corpuscular, CSIC-Universidad de Valencia, E-46071 Valencia (Spain); Kankainen, A.; Kolhinen, V.; Koponen, J. [University of Jyvaskyla, Department of Physics, P.O. Box 35, FI-40014 (Finland); and others
2017-05-11
In this work we report on the Monte Carlo study performed to understand and reproduce experimental measurements of a new plastic β-detector with cylindrical geometry. Since energy deposition simulations differ from the experimental measurements for such a geometry, we show how simulating the production and transport of optical photons allows one to obtain the shapes of the experimental spectra. Moreover, given the computational effort associated with this kind of simulation, we develop a method to convert simulations of deposited energy into collected light, depending only on the interaction point in the detector. This method represents a useful solution when extensive simulations have to be done, as in the case of the calculation of the response function of the spectrometer in a total absorption γ-ray spectroscopy analysis.
Tajik-Mansoury, M. A.; Rajabi, H.; Mazdarani, H.
2017-03-01
The S-value is a standard measure in cellular dosimetry. S-values are calculated by applying analytical methods or by Monte Carlo simulation. In Monte Carlo simulation, particles are either tracked individually event-by-event, or close events are condensed and processed collectively in larger steps. Both of these methods have been employed for the estimation of cellular S-values, but there is no consistency between the published results. In the present paper, we used the Geant4-DNA track-structure physics model as the reference to estimate the cellular S-values. We compared the results with the corresponding values obtained from the following three condensed-history physics models of Geant4: Penelope, Livermore and standard. The geometry and source were exactly the same in all the simulations. We utilized mono-energetic electrons with an initial kinetic energy in the range 1-700 keV as the source of radiation. We also compared our results with the MIRD S-values. We first drew an overall comparison between the different data series and then compared the dependence of the results on the energy of the particles and the size of the scoring compartments. The overall comparison indicated a very good linear correlation (R² > 91%) and small bias (3%) between the results of the track-structure model and the condensed-history physics models. The bias between MIRD and the results of Monte Carlo track-structure simulation was considerable (-8%). However, the point-by-point comparison revealed differences of up to 28% between the condensed-history and the track-structure MC codes for self-absorption S-values in the 10-50 keV energy range. For the cross-absorption S-values, the difference was up to 34%. In this energy range, the difference between the MIRD S-values and the Geant4-DNA results was up to 68%. Our findings suggest that the consistency/inconsistency of the results obtained with different MC simulations depends on the size of the scoring volumes and the energy of the particles.
A concept for optimizing avalanche rescue strategies using a Monte Carlo simulation approach
Reiweger, Ingrid; Genswein, Manuel; Paal, Peter; Schweizer, Jürg
2017-01-01
Recent technical and strategic developments have increased the survival chances for avalanche victims. Still, hundreds of people, primarily recreationists, get caught and buried by snow avalanches every year. About 100 die each year in the European Alps, and many more worldwide. Refining concepts for avalanche rescue means optimizing the procedures such that the survival chances are maximized in order to save the greatest possible number of lives. Avalanche rescue involves several parameters related to terrain, natural hazards, the people affected by the event, the rescuers, and the applied search and rescue equipment. The numerous parameters and their complex interaction make it unrealistic for a rescuer to take, in the urgency of the situation, the best possible decisions without clearly structured, easily applicable decision support systems. In order to analyse which measures lead to the best possible survival outcome in the complex environment of an avalanche accident, we present a numerical approach, namely a Monte Carlo simulation. We demonstrate the application of Monte Carlo simulations for two typical, yet tricky questions in avalanche rescue: (1) calculating how deep one should probe in the first passage of a probe line depending on search area, and (2) determining for how long resuscitation should be performed on a specific patient while others are still buried. In both cases, we demonstrate that optimized strategies can be calculated with the Monte Carlo method, provided that the necessary input data are available. Our Monte Carlo simulations also suggest that with a strict focus on the "greatest good for the greatest number", today's rescue strategies can be further optimized in the best interest of patients involved in an avalanche accident.
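The first question above (how deep to probe in the first passage of a probe line) lends itself to a toy Monte Carlo trade-off: probing deeper finds more victims, but each probe stroke takes longer and slows the line's advance. The sketch below uses invented burial-depth statistics and a linear time model purely for illustration; none of the numbers come from the study:

```python
import random

def optimal_probe_depth(burial_depths_m, candidate_depths, time_per_m=1.0):
    """Score each candidate probe depth by detections per unit probing time:
    deeper probing detects a larger fraction of the sampled burials, but the
    time spent per probe hole grows with depth (illustrative linear model)."""
    best = None
    for d in candidate_depths:
        found = sum(1 for b in burial_depths_m if b <= d)
        # detections per total time spent probing all holes to depth d
        rate = found / (d * time_per_m * len(burial_depths_m))
        if best is None or rate > best[1]:
            best = (d, rate)
    return best[0]

random.seed(7)
# Hypothetical burial-depth sample (metres), lognormal-shaped for illustration.
burials = [random.lognormvariate(0.0, 0.5) for _ in range(10000)]
depth = optimal_probe_depth(burials, [0.5, 1.0, 1.5, 2.0, 2.5])
```

With a realistic burial-depth distribution and field-calibrated timing data in place of these placeholders, the same loop yields the kind of optimized first-pass probing depth the abstract describes.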
Datema, C P; Eijk, C W E
2002-01-01
Experiments were carried out to investigate the possible use of neutron backscattering for the detection of landmines buried in the soil. Several landmines, buried in a sand-pit, were positively identified. A series of Monte Carlo simulations were performed to study the complexity of the neutron backscattering process and to optimize the geometry of a future prototype. The results of these simulations indicate that this method shows great potential for the detection of non-metallic landmines (with a plastic casing), for which so far no reliable method has been found.
Particle-in-cell/Monte Carlo simulation of filamentary barrier discharges
Weili, FAN; Zhengming, SHENG; Fucheng, LIU
2017-11-01
The plasma behavior of filamentary barrier discharges in helium is simulated using a two-dimensional (2D) particle-in-cell/Monte Carlo model. Four different phases have been suggested in terms of the development of the discharge: the Townsend phase, the space-charge-dominated phase, the formation of the cathode layer, and the extinguishing phase. The spatio-temporal evolution of the particle densities, velocities of the charged particles, electric fields, and surface charges has been demonstrated. Our simulation provides insights into the underlying mechanism of the discharge and explains many dynamical behaviors of dielectric barrier discharge (DBD) filaments.
DEFF Research Database (Denmark)
Debrabant, Kristian; Samaey, Giovanni; Zieliński, Przemysław
2017-01-01
We present and analyse a micro-macro acceleration method for the Monte Carlo simulation of stochastic differential equations with separation between the (fast) time-scale of individual trajectories and the (slow) time-scale of the macroscopic function of interest. The algorithm combines short bursts of path simulations with extrapolation of a number of macroscopic state variables forward in time. The new microscopic state, consistent with the extrapolated variables, is obtained by a matching operator that minimises the perturbation caused by the extrapolation. We provide a proof...
Monte Carlo simulation of a single detector unit for the neutron detector array NEDA
Energy Technology Data Exchange (ETDEWEB)
Jaworski, G. [Faculty of Physics, Warsaw University of Technology, ul. Koszykowa 75, 00-662 Warszawa (Poland); Heavy Ion Laboratory, University of Warsaw, ul. Pasteura 5A, PL 02-093 Warszawa (Poland); Palacz, M., E-mail: palacz@slcj.uw.edu.pl [Heavy Ion Laboratory, University of Warsaw, ul. Pasteura 5A, PL 02-093 Warszawa (Poland); Nyberg, J. [Department of Physics and Astronomy, Uppsala University, Uppsala (Sweden); Angelis, G. de [INFN, Laboratori Nazionali di Legnaro, Legnaro (Italy); France, G. de [GANIL, Caen (France); Di Nitto, A. [INFN Sezione di Napoli, Napoli (Italy); Egea, J. [Department of Electronic Engineering, University of Valencia, Burjassot (Valencia) (Spain); IFIC-CSIC, University of Valencia, Valencia (Spain); Erduran, M.N. [Faculty of Engineering and Natural Sciences, Istanbul Sabahattin Zaim University Istanbul (Turkey); Ertuerk, S. [Nigde Universitesi, Fen-Edebiyat Falkueltesi, Fizik Boeluemue, Nigde (Turkey); Farnea, E. [INFN Sezione di Padova, Padua (Italy); Gadea, A. [IFIC-CSIC, University of Valencia, Valencia (Spain); Gonzalez, V. [Department of Electronic Engineering, University of Valencia, Burjassot (Valencia) (Spain); Gottardo, A. [Padova University, Padua (Italy); Hueyuek, T. [IFIC-CSIC, University of Valencia, Valencia (Spain); Kownacki, J. [Heavy Ion Laboratory, University of Warsaw, ul. Pasteura 5A, PL 02-093 Warszawa (Poland); Pipidis, A. [INFN, Laboratori Nazionali di Legnaro, Legnaro (Italy); Roeder, B. [LPC-Caen, ENSICAEN, IN2P3/CNRS et Universite de Caen, Caen (France); Soederstroem, P.-A. [Department of Physics and Astronomy, Uppsala University, Uppsala (Sweden); Sanchis, E. [Department of Electronic Engineering, University of Valencia, Burjassot (Valencia) (Spain); Tarnowski, R. [Heavy Ion Laboratory, University of Warsaw, ul. Pasteura 5A, PL 02-093 Warszawa (Poland); and others
2012-05-01
A study of the dimensions and performance of a single detector of the future neutron detector array NEDA was performed by means of Monte Carlo simulations, using GEANT4. Two different liquid scintillators were evaluated: the hydrogen based BC501A and the deuterated BC537. The efficiency and the probability that one neutron will trigger a signal in more than one detector were investigated as a function of the detector size. The simulations were validated comparing the results to experimental measurements performed with two existing neutron detectors, with different geometries, based on the liquid scintillator BC501.
Monte Carlo simulation of spectrum changes in a photon beam due to a brass compensator
Energy Technology Data Exchange (ETDEWEB)
Custidiano, E.R., E-mail: ernesto7661@gmail.com [Department of Physics, FaCENA, UNNE, Av., Libertad 5470, C.P.3400, Corrientes (Argentina); Valenzuela, M.R., E-mail: meraqval@gmail.com [Department of Physics, FaCENA, UNNE, Av., Libertad 5470, C.P.3400, Corrientes (Argentina); Dumont, J.L., E-mail: Joseluis.Dumont@elekta.com [Elekta CMS Software, St.Louis, MO (United States); McDonnell, J., E-mail: josemc@express.com.ar [Cumbres Institute, Riobamba 1745, C.P.2000, Rosario, Santa Fe (Argentina); Rene, L, E-mail: luismrene@gmail.com [Radiotherapy Center, Crespo 953, C.P.2000, Rosario, Santa Fe (Argentina); Rodriguez Aguirre, J.M., E-mail: juakcho@gmail.com [Department of Physics, FaCENA, UNNE, Av., Libertad 5470, C.P.3400, Corrientes (Argentina)
2011-06-15
Monte Carlo simulations were used to study the changes in the incident spectrum when a poly-energetic photon beam passes through a static brass compensator. The simulated photon beam spectrum was evaluated by comparing it against the incident spectrum. We also discriminated the changes in the transmitted spectrum produced by each of the microscopic processes (i.e. Rayleigh scattering, the photoelectric effect, Compton scattering, and pair production). The results show that the relevant process is the Compton effect, as expected for composite materials of intermediate atomic number in the energy range considered.
Erdem, Riza; Aydiner, Ekrem
2009-03-01
Voltage-gated ion channels are key molecules for the generation and propagation of electrical signals in excitable cell membranes. The voltage-dependent switching of these channels between conducting and nonconducting states is a major factor in controlling the transmembrane voltage. In this study, a statistical mechanics model of these molecules has been discussed on the basis of a two-dimensional spin model. A new Hamiltonian and a new Monte Carlo simulation algorithm are introduced to simulate such a model. It was shown that the results match well the experimental data obtained from batrachotoxin-modified sodium channels in the squid giant axon using the cut-open axon technique.
Complete model description of an electron beam using ACCEPT Monte Carlo simulation code
Energy Technology Data Exchange (ETDEWEB)
Weiss, D.E. [Corporate Research Process Technologies Lab., St. Paul, MN (United States); Kensek, R.P. [Sandia National Labs., Albuquerque, NM (United States)
1993-12-31
A 3D model of a low voltage electron beam has been constructed using the ITS/ACCEPT Monte Carlo code in order to validate the code for this application and improve upon 1D slab geometry simulations. A line source description update to the code allows complete simulation of a low voltage electron beam with any filament length. Faithful reproduction of the geometric elements involved, especially the window support structure, can account for 90-95% of the dose received by routine dosimetry. With a 3D model, dose distributions in non-web articles can be determined and the effects of equipment modifications can be anticipated in advance.
Theory and Monte-Carlo simulation of adsorbates on corrugated surfaces
DEFF Research Database (Denmark)
Vives, E.; Lindgård, P.-A.
1993-01-01
Phase transitions in systems of adsorbed molecules on corrugated surfaces are studied by means of Monte Carlo simulation. In particular, we have studied the phase diagram of D2 on graphite as a function of coverage and temperature. We have demonstrated the existence of an intermediate gamma-phase between the commensurate and incommensurate phase, stabilized by defects. Special attention has been given to the study of the epitaxial rotation angles of the different phases. Available experimental data is in agreement with the simulations and with a general theory for the epitaxial rotation which takes...
Monte Carlo simulations of neutron-scattering instruments using McStas
DEFF Research Database (Denmark)
Nielsen, K.; Lefmann, K.
2000-01-01
Monte Carlo simulations have become an essential tool for improving the performance of neutron-scattering instruments, since the level of sophistication in the design of instruments is defeating purely analytical methods. The program McStas, being developed at Risø National Laboratory, includes an extension language that makes it easy to adapt it to the particular requirements of individual instruments, and thus provides a powerful and flexible tool for constructing such simulations. McStas has been successfully applied in such areas as neutron guide design, flux optimization, non-Gaussian resolution...
Shuttle vertical fin flowfield by the direct simulation Monte Carlo method
Hueser, J. E.; Brock, F. J.; Melfi, L. T.
1985-01-01
The flow properties in a model flowfield simulating the shuttle vertical fin were determined using the Direct Simulation Monte Carlo method. The case analyzed corresponds to an orbit height of 225 km with the freestream velocity vector orthogonal to the fin surface. Contour plots of the flowfield distributions of density, temperature, velocity and flow angle are presented. The results also include the mean molecular collision frequency (which reaches 1/60 sec near the surface), the collision frequency density (which approaches 7 x 10 to the 18/cu m sec at the surface) and the mean free path (19 m at the surface).
Monte Carlo simulation of pulsed neutron experiments on samples of variable mass density
Energy Technology Data Exchange (ETDEWEB)
Dabrowska, Joanna; Drozdowicz, Krzysztof E-mail: Krzysztof.Drozdowicz@ifj.edu.pl
2000-04-01
A method is presented to facilitate the interpretation of a pulsed neutron experiment (the variable geometric buckling experiment) when the mass densities of individual samples differ. A generalisation of the classic expression, which connects the fundamental mode decay constant to the thermal neutron diffusion parameters and to the geometrical buckling, is presented. The method has been tested (on polyethylene) by means of a computer simulation of the experiments. The simulation has been based on a Monte Carlo method, using the MCNP code. The described generalised buckling method is especially recommended for experiments with bulk materials.
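The classic expression referred to above, relating the fundamental-mode decay constant to the thermal neutron diffusion parameters and the geometrical buckling, is commonly written (to second order in the buckling) as:

```latex
\lambda = v\Sigma_a + D_0 B^2 - C B^4 + \cdots
```

where $\lambda$ is the decay constant of the fundamental mode, $v\Sigma_a$ the thermal neutron absorption rate, $D_0$ the diffusion constant, $C$ the diffusion cooling coefficient, and $B^2$ the geometrical buckling of the sample. The generalisation proposed in the paper extends this relation to samples of differing mass density.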
Commissioning of a Varian Clinac iX 6 MV photon beam using Monte Carlo simulation
Dirgayussa, I. Gde Eka; Yani, Sitti; Rhani, M. Fahdillah; Haryanto, Freddy
2015-09-01
Monte Carlo modelling of a linear accelerator is the first and most important step in Monte Carlo dose calculations in radiotherapy. Monte Carlo is considered today to be the most accurate and detailed calculation method in different fields of medical physics. In this research, we developed a photon beam model of a Varian Clinac iX 6 MV equipped with a Millennium MLC 120 for dose calculation purposes, using the BEAMnrc/DOSXYZnrc Monte Carlo system based on the underlying EGSnrc particle transport code. The Monte Carlo commissioning of the LINAC head was divided into three stages: designing the head model using BEAMnrc, characterizing this model using BEAMDP, and analyzing the differences between simulation and measurement data using DOSXYZnrc. In the first step, to reduce simulation time, the virtual treatment head was built in two parts (a patient-dependent component and a patient-independent component). The incident electron energy was varied over 6.1 MeV, 6.2 MeV, 6.3 MeV, 6.4 MeV, and 6.6 MeV, and the FWHM (full width at half maximum) of the source was 1 mm. The phase-space file from the virtual model was characterized using BEAMDP. The results of the MC calculations using DOSXYZnrc in a water phantom, percent depth doses (PDDs) and beam profiles at a depth of 10 cm, were compared with measurements. The commissioning is considered complete when the difference between measured and calculated relative depth-dose data along the central axis and dose profiles at a depth of 10 cm is ≤ 5%. The effect of beam width on percentage depth doses and beam profiles was studied. Results of the virtual model were in close agreement with measurements at an incident electron energy of 6.4 MeV. Our results showed that the photon beam width could be tuned using the large-field beam profile at the depth of maximum dose. The Monte Carlo model developed in this study accurately represents the Varian Clinac iX with the Millennium MLC 120 and can be used for reliable patient dose calculations.
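The ≤ 5% acceptance criterion used above amounts to a point-by-point relative dose comparison, which can be sketched as follows. The function and all dose values are illustrative placeholders, not the study's data:

```python
def within_tolerance(measured, simulated, tol_percent=5.0):
    """Point-by-point relative dose-difference check: every simulated point
    must agree with the measured point within tol_percent of the measured
    value (a simplified stand-in for a full commissioning comparison)."""
    for m, s in zip(measured, simulated):
        if abs(s - m) / m * 100.0 > tol_percent:
            return False
    return True

# Hypothetical normalised PDD points (percent dose vs. depth).
measured  = [100.0, 95.2, 88.7, 81.4, 74.6]
simulated = [ 99.1, 96.0, 87.9, 80.2, 75.9]
ok = within_tolerance(measured, simulated)
```

In practice, commissioning comparisons often use gamma analysis (combining dose difference and distance-to-agreement) rather than a pure dose-difference test; the sketch above covers only the simpler criterion the abstract states.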
Energy Technology Data Exchange (ETDEWEB)
Sahoo, G.S. [Health Physics Division, Bhabha Atomic Research Centre, Mumbai 400085 (India); Tripathy, S.P., E-mail: sam.tripathy@gmail.com [Health Physics Division, Bhabha Atomic Research Centre, Mumbai 400085 (India); Homi Bhabha National Institute, Mumbai 400094 (India); Molokanov, A.G.; Aleynikov, V.E. [Joint Institute for Nuclear Research, Dubna 141980 (Russian Federation); Sharma, S.D. [Homi Bhabha National Institute, Mumbai 400094 (India); Radiological Physics & Advisory Division, Bhabha Atomic Research Centre, Mumbai 400085 (India); Bandyopadhyay, T. [Health Physics Division, Bhabha Atomic Research Centre, Mumbai 400085 (India); Homi Bhabha National Institute, Mumbai 400094 (India)
2016-05-11
In this work, we have used CR-39 detectors to estimate the LET (linear energy transfer) spectrum of secondary particles due to a 171 MeV proton beam at different depths of water, including the Bragg peak region. The measured LET spectra were compared with those obtained from FLUKA Monte Carlo simulation. The absorbed dose (D_LET) and dose equivalent (H_LET) were estimated using the LET spectra. The values of D_LET and H_LET per incident proton fluence were found to increase with the depth of water and were maximum at the Bragg peak. - Highlights: • Measurement of LET spectra using CR-39 detectors at different depths of water. • Comparison of measured spectra with FLUKA Monte Carlo simulation. • Absorbed dose and dose equivalent were found to increase with depth of water.
Energy Technology Data Exchange (ETDEWEB)
Winnischofer, Herbert; Araujo, Marcio Peres de; Dias Junior, Lauro Camargo; Novo, Joao Batista Marques [Universidade Federal do Parana (UFPR), Curitiba, PR (Brazil)
2010-07-01
Software based on the Monte Carlo method has been developed for teaching important cases of mechanisms found in luminescence and in excited-state decay kinetics, including multiple decays, consecutive decays and coupled-system decays. The Monte Carlo method allows the student to easily simulate and visualize the luminescence mechanisms, focusing on the probabilities of the related steps. The software CINESTEX was written for the FreeBASIC compiler; it assumes first-order kinetics and any number of excited states, where pathways are allowed with probabilities assigned by the user.
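The approach described above (first-order kinetics with user-assigned step probabilities) can be sketched for the simplest case of one excited state with two competing decay channels. This is a Python illustration of the idea, not CINESTEX's FreeBASIC code, and all names and probabilities are invented:

```python
import random

def simulate_decays(n_excited, p_rad, p_nonrad, steps):
    """Monte Carlo of a single excited state with two first-order channels:
    in each time step, every excited molecule decays radiatively with
    probability p_rad, non-radiatively with probability p_nonrad, or
    survives. Returns the population history and the photon count."""
    excited, photons, history = n_excited, 0, []
    for _ in range(steps):
        decayed_rad = decayed_nr = 0
        for _ in range(excited):
            r = random.random()
            if r < p_rad:
                decayed_rad += 1          # radiative decay emits a photon
            elif r < p_rad + p_nonrad:
                decayed_nr += 1           # non-radiative decay, no photon
        excited -= decayed_rad + decayed_nr
        photons += decayed_rad
        history.append(excited)
    return history, photons

random.seed(3)
history, photons = simulate_decays(n_excited=20000, p_rad=0.02,
                                   p_nonrad=0.01, steps=50)
```

The population decays exponentially with total per-step probability p_rad + p_nonrad, and the fraction of decays that emit a photon converges to the branching ratio p_rad / (p_rad + p_nonrad), which is exactly the kind of relationship such a teaching tool lets students visualize.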
Ródenas, José
2017-11-01
All materials exposed to a neutron flux can be activated, independently of the kind of neutron source. In this study, a nuclear reactor has been considered as the neutron source. In particular, the activation of control rods in a BWR is studied to obtain the doses produced around the storage pool for irradiated fuel when control rods are withdrawn from the reactor and installed in this pool. It is very important to calculate these doses because they can affect plant workers in the area. The MCNP code, based on the Monte Carlo method, has been applied to simulate the activation reactions produced in the control rods inserted into the reactor. The obtained activities are introduced as input into another MC model to estimate the doses they produce. The comparison of simulation results with experimental measurements allows the validation of the developed models. The developed MC models have also been applied to simulate the activation of other materials, such as components of a stainless steel sample introduced into a training reactor. These models, once validated, can be applied to other situations and materials where a neutron flux can be found, not only nuclear reactors: for instance, activation analysis with an Am-Be source, neutrography techniques in both medical applications and non-destructive analysis of materials, civil engineering applications using a Troxler gauge, and analysis of materials in the decommissioning of nuclear power plants.
Efficient Simulation of Secondary Fluorescence Via NIST DTSA-II Monte Carlo.
Ritchie, Nicholas W M
2017-06-01
Secondary fluorescence, the final term in the familiar matrix correction triumvirate Z·A·F, is the most challenging for Monte Carlo models to simulate. In fact, only two implementations of Monte Carlo models commonly used to simulate electron probe X-ray spectra can calculate secondary fluorescence: PENEPMA and NIST DTSA-II (DTSA-II is discussed herein). These two models share many physical models, but there are some important differences in the way each implements X-ray emission, including secondary fluorescence. PENEPMA is based on PENELOPE, a general-purpose software package for simulation of both relativistic and subrelativistic electron/positron interactions with matter. On the other hand, NIST DTSA-II was designed exclusively for simulation of X-ray spectra generated by subrelativistic electrons. NIST DTSA-II uses variance reduction techniques unsuited to general-purpose code. These optimizations help NIST DTSA-II to be orders of magnitude more computationally efficient while retaining detector position sensitivity. Simulations execute in minutes rather than hours and can model differences that result from detector position. Both PENEPMA and NIST DTSA-II are capable of handling complex sample geometries, and we will demonstrate that both are of similar accuracy when modeling experimental secondary fluorescence data from the literature.
Simulation of Gamma Rays Attenuation Through Matters Using the Monte Carlo Program
Sukara, S.; Rimjeam, S.
2017-09-01
This research focuses on simulation of radiation attenuation using a Monte Carlo program called GEANT4. In the simulation, properties and geometries of the shielding system, including the thickness and elemental composition of the shielding material, can be varied. The radiation, in the gamma-ray regime, is considered to be emitted from a Cs-137 radioactive source. The number of gamma photons at the specific energy of 661.7 keV is calculated to compare the radiation attenuation of different shielding materials with variable thickness. In addition, an experimental investigation was performed for three materials (lead, aluminum and iron) using a NaI(Tl) scintillation detector. Attenuation coefficients from the XCOM database were also calculated for comparison with the simulation. Both the XCOM and simulation data, as well as the experimental results, agree well with theoretical expectations. Consequently, the results from Monte Carlo simulation with the program GEANT4 can be used to design radiation shielding systems for radioactive laboratories, particle accelerator institutes, radiotherapy areas in hospitals, nuclear power plants, etc.
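At its core, the narrow-beam attenuation being simulated above reduces to sampling exponential free paths against the total attenuation coefficient and counting photons that cross the slab uncollided. The following is a minimal Python sketch of that good-geometry case, with an illustrative coefficient roughly in the range of lead at 662 keV (the exact value is an assumption, not taken from the study):

```python
import math
import random

def transmitted_fraction(mu_cm, thickness_cm, n_photons=200000):
    """Pencil-beam Monte Carlo of narrow-beam attenuation: each photon gets
    an exponentially distributed free path (total linear attenuation
    coefficient mu_cm); any interaction removes it from the beam, so only
    photons whose first free path exceeds the slab thickness are counted."""
    passed = 0
    for _ in range(n_photons):
        # sample free path from p(x) = mu * exp(-mu * x)
        path = -math.log(1.0 - random.random()) / mu_cm
        if path > thickness_cm:
            passed += 1
    return passed / n_photons

random.seed(11)
f = transmitted_fraction(mu_cm=1.2, thickness_cm=1.0)  # illustrative mu
```

The Monte Carlo estimate converges to the Beer-Lambert value exp(-mu·x), which is the check that an XCOM-style comparison performs; a full GEANT4 simulation additionally tracks scattered photons, buildup, and detector response.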
Monte-Carlo simulations of neutron shielding for the ATLAS forward region
Stekl, I; Kovalenko, V E; Vorobel, V; Leroy, C; Piquemal, F; Eschbach, R; Marquet, C
2000-01-01
The effectiveness of different types of neutron shielding for the ATLAS forward region has been studied by means of Monte-Carlo simulations and compared with the results of an experiment performed at the CERN PS. The simulation code is based on GEANT, FLUKA, MICAP and GAMLIB. GAMLIB is a new library including processes with gamma-rays produced in (n, gamma) and (n, n'gamma) neutron reactions, and is interfaced to the MICAP code. The effectiveness of shielding against neutrons and gamma-rays, composed of different materials such as pure polyethylene, borated polyethylene, lithium-filled polyethylene, lead and iron, was compared. The results from the Monte-Carlo simulations were compared to the results obtained from the experiment. The simulation results reproduce the experimental data well. This agreement supports the correctness of the simulation code used to describe the generation, spreading and absorption of neutrons (down to thermal energies) and gamma-rays in the shielding materials.
Constraining physical parameters of ultra-fast outflows in PDS 456 with Monte Carlo simulations
Hagino, K.; Odaka, H.; Done, C.; Gandhi, P.; Takahashi, T.
2014-07-01
Deep absorption lines with an extremely high velocity of ˜0.3c observed in PDS 456 spectra strongly indicate the existence of ultra-fast outflows (UFOs). However, the launching and acceleration mechanisms of UFOs are still uncertain. One possible way to address this is to constrain the physical parameters as a function of distance from the source. In order to study the spatial dependence of the parameters, it is essential to adopt 3-dimensional Monte Carlo simulations that treat radiative transfer in arbitrary geometry. We have developed a new simulation code for X-ray radiation reprocessed in AGN outflows. Our code implements radiative transfer in a 3-dimensional biconical disk wind geometry, based on the Monte Carlo simulation framework MONACO (Watanabe et al. 2006, Odaka et al. 2011). Our simulations reproduce the FeXXV and FeXXVI absorption features seen in the spectra. Broad Fe emission lines, which reflect the geometry and viewing angle, are also successfully reproduced. By comparing the simulated spectra with Suzaku data, we obtained constraints on the physical parameters. We discuss launching and acceleration mechanisms of UFOs in PDS 456 based on our analysis.
Risk management of a torrential flood construction project using the Monte Carlo simulation
Directory of Open Access Journals (Sweden)
Baumgertel Aleksandar
2016-01-01
Full Text Available Projects for the regulation of torrent basins carry various unforeseen adverse effects that may result in breached deadlines, increased costs, a reduction of quality etc. The paper presents the basic characteristics and most frequent risks associated with erosion control. Furthermore, it provides an overview of risk management through its basic stages, starting from risk identification and risk analysis to risk responses, including the methods used for risk analysis. Among the quantitative methods for risk analysis, the Monte Carlo method is presented as the one most frequently used in simulations. The Monte Carlo method is a stochastic simulation method consisting of the following stages: the identification of the criterion and relevant variables, the allocation of probability distributions to the relevant variables, the determination of correlation coefficients among the relevant variables, simulation execution and result analysis. This method was applied in the analysis of the total cost of the project for the regulation of the Dumača River basin in order to determine the funding that would serve as a backup in case of unforeseen events with a negative impact. The project includes basin regulation in the form of a complex flow profile and the lining of zones where necessary for stability. The total cost is the sum of the costs of all works (preliminary works, earthworks, masonry works, concrete works and finishing works). The Monte Carlo simulation for the cost analysis is carried out using the Oracle Crystal Ball software, with its basic steps described in the paper. The funding needed as a financial backup in case of unforeseen events with negative effects is obtained from the simulated total cost of the project.
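The cost-simulation stages listed above can be sketched in a few lines. The work items match the categories named in the abstract, but the (low, most likely, high) cost figures are invented for illustration, and a triangular distribution stands in for whatever distributions the study actually assigned:

```python
import random

# Hypothetical (low, most likely, high) cost estimates per work item,
# in arbitrary currency units; the figures are illustrative only.
WORK_ITEMS = {
    "preliminary works": (8, 10, 14),
    "earthworks":        (40, 50, 70),
    "masonry works":     (25, 30, 45),
    "concrete works":    (60, 75, 100),
    "finishing works":   (10, 12, 18),
}

def simulate_total_costs(n_runs=20_000, seed=7):
    """Sample each work item's cost from a triangular distribution and
    sum them, yielding an empirical distribution of the total cost."""
    rng = random.Random(seed)
    return [sum(rng.triangular(lo, hi, mode) for lo, mode, hi in WORK_ITEMS.values())
            for _ in range(n_runs)]

totals = sorted(simulate_total_costs())
p90 = totals[int(0.9 * len(totals))]
base = sum(mode for _, mode, _ in WORK_ITEMS.values())
print(f"baseline={base}, P90={p90:.1f}, contingency={p90 - base:.1f}")
```

The gap between a high percentile (here P90) and the sum of most-likely costs is precisely the "financial backup" figure the study derives, and a tool like Crystal Ball automates this same sampling loop.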
GATE Monte Carlo simulation of dose distribution using MapReduce in a cloud computing environment.
Liu, Yangchuan; Tang, Yuguo; Gao, Xin
2017-12-01
The GATE Monte Carlo simulation platform has good application prospects in treatment planning and quality assurance. However, accurate dose calculation using GATE is time consuming. The purpose of this study is to implement a novel cloud computing method for accurate GATE Monte Carlo simulation of dose distribution using MapReduce. An Amazon Machine Image installed with Hadoop and GATE is created to set up Hadoop clusters on Amazon Elastic Compute Cloud (EC2). Macros, the input files for GATE, are split into a number of self-contained sub-macros. Through Hadoop Streaming, the sub-macros are executed by GATE in Map tasks and the sub-results are aggregated into final outputs in Reduce tasks. As an evaluation, GATE simulations were performed in a cubical water phantom for X-ray photons of 6 and 18 MeV. The parallel simulation on the cloud computing platform is as accurate as the single-threaded simulation on a local server. The cloud-based simulation time is approximately inversely proportional to the number of worker nodes. For the simulation of 10 million photons on a cluster with 64 worker nodes, speed-ups of 41× and 32× were achieved compared to the single-worker-node case and the single-threaded case, respectively. A test of Hadoop's fault tolerance showed that the simulation correctness was not affected by the failure of some worker nodes. The results verify that the proposed method provides a feasible cloud computing solution for GATE.
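The split-and-aggregate pattern works because Monte Carlo histories are independent: a run of N histories equals the merged tallies of k runs of N/k histories with independent seeds. This can be sketched without Hadoop or GATE; run_sub_simulation below is a toy stand-in for executing one sub-macro, not a call into the real GATE:

```python
import random

def run_sub_simulation(n_histories, seed):
    """Map task: a toy stand-in for one GATE sub-macro. Here each 'history'
    simply scores a hit with some probability; a real task would invoke GATE."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_histories) if rng.random() < 0.3)
    return n_histories, hits

def map_reduce(total_histories, n_workers):
    """Split the photon budget into self-contained sub-jobs (map), then merge
    the partial tallies (reduce). Statistically equivalent to one long run
    because each sub-job gets an independent seed."""
    per_job = total_histories // n_workers
    partials = [run_sub_simulation(per_job, seed=1000 + w) for w in range(n_workers)]
    n_total = sum(n for n, _ in partials)
    hits_total = sum(h for _, h in partials)
    return hits_total / n_total

print(map_reduce(1_000_000, n_workers=64))
```

Fault tolerance follows from the same structure: a failed map task can simply be re-executed with its own seed without touching the other partial tallies.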
A modular method to handle multiple time-dependent quantities in Monte Carlo simulations
Shin, J.; Perl, J.; Schümann, J.; Paganetti, H.; Faddegon, B. A.
2012-06-01
A general method for handling time-dependent quantities in Monte Carlo simulations was developed to make such simulations more accessible to the medical community for a wide range of applications in radiotherapy, including fluence and dose calculation. To describe time-dependent changes in the most general way, we developed a grammar of functions that we call ‘Time Features’. When a simulation quantity, such as the position of a geometrical object, an angle, a magnetic field, a current, etc, takes its value from a Time Feature, that quantity varies over time. The operation of time-dependent simulation was separated into distinct parts: the Sequence samples time values either sequentially at equal increments or randomly from a uniform distribution (allowing quantities to vary continuously in time), and then each time-dependent quantity is calculated according to its Time Feature. Due to this modular structure, time-dependent simulations, even in the presence of multiple time-dependent quantities, can be efficiently performed in a single simulation with any given time resolution. This approach has been implemented in TOPAS (TOol for PArticle Simulation), designed to make Monte Carlo simulations with Geant4 more accessible to both clinical and research physicists. To demonstrate the method, three clinical situations were simulated: a variable water column used to verify constancy of the Bragg peak of the Crocker Lab eye treatment facility of the University of California, the double-scattering treatment mode of the passive beam scattering system at Massachusetts General Hospital (MGH), where a spinning range modulator wheel accompanied by beam current modulation produces a spread-out Bragg peak, and the scanning mode at MGH, where time-dependent pulse shape, energy distribution and magnetic fields control Bragg peak positions. Results confirm the clinical applicability of the method.
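The separation between the Sequence (which supplies time values) and the Time Features (which map a time value to each quantity) can be sketched as follows. The names and functional forms here are our own illustration, not the TOPAS API:

```python
import math
import random

# Each time-dependent quantity is a function of time ('Time Feature'),
# and a Sequence supplies the times at which all quantities are evaluated
# together; parameter values are invented for illustration.
time_features = {
    "wheel_angle_deg": lambda t: (360.0 * t / 0.1) % 360.0,   # 10 Hz modulator wheel
    "beam_current_nA": lambda t: 5.0 + 2.0 * math.sin(2 * math.pi * t / 0.1),
    "water_column_cm": lambda t: 2.0 + 0.5 * t,               # slowly advancing column
}

def sequence(t_end, n_samples, mode="sequential", seed=0):
    """Yield time values either at equal increments or uniformly at random,
    mirroring the two sampling modes described in the abstract."""
    rng = random.Random(seed)
    for i in range(n_samples):
        yield (i + 0.5) * t_end / n_samples if mode == "sequential" else rng.uniform(0.0, t_end)

for t in sequence(t_end=0.2, n_samples=4):
    state = {name: f(t) for name, f in time_features.items()}
    print(f"t={t:.3f}s", state)
</```

Because every quantity is re-evaluated from the same sampled time, an arbitrary number of time-dependent quantities stays mutually consistent within a single simulation, which is the modularity the method is after.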
Monte Carlo simulation of mixed neutron-gamma radiation fields and dosimetry devices
Energy Technology Data Exchange (ETDEWEB)
Zhang, Guoqing
2011-12-22
Monte Carlo methods based on random sampling are widely used in different fields for their capability of solving problems with a large number of coupled degrees of freedom. In this work, Monte Carlo methods are successfully applied to the simulation of the mixed neutron-gamma field in an interim storage facility and of neutron dosimeters of different types. Details are discussed in two parts: In the first part, the method of simulating an interim storage facility loaded with CASTORs is presented. A CASTOR is rather large (several meters) and its wall is very thick (tens of centimeters). Obtaining dose rates outside a CASTOR with reasonable errors usually costs hours or even days, and simulating a large number of CASTORs in an interim storage facility would take weeks or even months per calculation. Variance reduction techniques were therefore used to reduce the calculation time and to achieve reasonable relative errors. Source clones were applied to avoid unnecessary repeated calculations. In addition, the simulations were performed on a cluster system. With these calculation techniques, the efficiency of the calculations can be improved considerably. In the second part, the methods of simulating the response of neutron dosimeters are presented. An Alnor albedo dosimeter was modelled in MCNP and simulated in the facility to calculate the calibration factor for its evaluated response to a Cf-252 source. The angular response of Makrofol detectors to fast neutrons has also been investigated. As a kind of solid-state nuclear track detector (SSNTD), Makrofol can detect fast neutrons by recording the neutron-induced heavy charged recoils. To obtain the information on the charged recoils, general-purpose Monte Carlo codes were used for transporting the incident neutrons. The response of Makrofol to fast neutrons depends on several factors. Based on the parameters which affect the track revealing, the formation of visible tracks was determined.
Evaluation of the seismic hazard for 20 cities in Romania using Monte Carlo based simulations
Pavel, Florin; Vacareanu, Radu
2017-07-01
This work focuses on the evaluation of the seismic hazard for Romania using earthquake catalogues generated by a Monte Carlo approach. The seismicity of Romania can be attributed to the Vrancea intermediate-depth seismic source and to 13 other crustal seismic sources. The recurrence times of large magnitude seismic events (both crustal and subcrustal), as well as the moment release rates are computed using simulated earthquake catalogues. The results show that the largest contribution to the overall moment release for the crustal seismic sources is from the seismic regions in Bulgaria, while the seismic regions in Romania contribute less than 5% of the overall moment release. In addition, the computations show that the moment release rate for the Vrancea subcrustal seismic source is about ten times larger than that of all the crustal seismic sources. Finally, the Monte Carlo approach is used to evaluate the seismic hazard for 20 cities in Romania with populations larger than 100,000 inhabitants. The results show some differences between the seismic hazard values obtained through Monte-Carlo simulation and those in the Romanian seismic design code P100-1/2013, notably for cities situated in the western part of Romania that are influenced by local crustal seismic sources.
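The catalogue-generation step described above (Poisson occurrence in time, Gutenberg-Richter magnitudes, recurrence times of large events read off the synthetic catalogue) can be sketched as follows. The b-value, rate and magnitude bounds are illustrative, not calibrated to the Vrancea source or any crustal zone:

```python
import math
import random

def truncated_gr_magnitude(u, b=1.0, m_min=4.0, m_max=7.8):
    """Invert the doubly truncated Gutenberg-Richter CDF for a uniform u."""
    beta = b * math.log(10.0)
    c = 1.0 - math.exp(-beta * (m_max - m_min))
    return m_min - math.log(1.0 - u * c) / beta

def simulate_catalogue(years, annual_rate, seed=42):
    """Poisson occurrence in time plus Gutenberg-Richter magnitudes gives a
    synthetic catalogue from which recurrence times of large events can be
    counted directly. All parameter values are illustrative."""
    rng = random.Random(seed)
    catalogue, t = [], rng.expovariate(annual_rate)
    while t < years:
        catalogue.append((t, truncated_gr_magnitude(rng.random())))
        t += rng.expovariate(annual_rate)
    return catalogue

cat = simulate_catalogue(years=10_000, annual_rate=2.0)
big = [m for _, m in cat if m >= 6.5]
print(f"{len(cat)} events; mean recurrence of M>=6.5: {10_000 / len(big):.0f} yr")
```

Summing the scalar seismic moment implied by each magnitude over such a catalogue is also how moment release rates per source zone, as reported above, can be estimated.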
Development of a space radiation Monte Carlo computer simulation based on the FLUKA and ROOT codes
Pinsky, L; Ferrari, A; Sala, P; Carminati, F; Brun, R
2001-01-01
This NASA funded project is proceeding to develop a Monte Carlo-based computer simulation of the radiation environment in space. With actual funding only initially in place at the end of May 2000, the study is still in the early stage of development. The general tasks have been identified and personnel have been selected. The code to be assembled will be based upon two major existing software packages. The radiation transport simulation will be accomplished by updating the FLUKA Monte Carlo program, and the user interface will employ the ROOT software being developed at CERN. The end-product will be a Monte Carlo-based code which will complement the existing analytic codes such as BRYNTRN/HZETRN presently used by NASA to evaluate the effects of radiation shielding in space. The planned code will possess the ability to evaluate the radiation environment for spacecraft and habitats in Earth orbit, in interplanetary space, on the lunar surface, or on a planetary surface such as Mars. Furthermore, it will be usef...
Monte Carlo simulation of radiative heat transfer in coarse fibrous media
Energy Technology Data Exchange (ETDEWEB)
Nisipeanu, E.; Jones, P.D.
1999-07-01
Radiative transfer through a medium made up of a multitude of randomly oriented opaque cylindrical fibers is examined using Monte Carlo simulation of multiple surface radiative exchange for energy bundles interacting with each fiber in their path. The method is termed Monte Carlo Discontinuous Medium (MCDM). As compared to radiative continuum methods, the present approach does not require specification of an extinction coefficient, scattering albedo, or scattering phase function. Instead, only the volume fraction, fiber diameter, and fiber material complex index of refraction are required as parameters. Although the MCDM method is only strictly valid in the geometric limit, comparison with previous experiments on the edge of this limit (5 < x < 11) is qualitatively good. For the low (solid) volume fractions considered here, agreement is excellent between MCDM results and radiative continuum results, the latter being solved both by Monte Carlo simulation and by exact integral solution of the Radiative Transfer Equation (RTE). MCDM results show a sensitivity to directional bias of the fibers in the medium, suggesting that bias parameters are necessary to solve radiative transfer in media with non-random fiber orientations. MCDM results for fibrous media are very similar to those for spherical suspensions at the same volume fraction and scatterer diameter, suggesting that the precise shape of a scattering particle may be relatively unimportant for radiative heat transfer through randomly oriented solid matrix materials.
Methods for Monte Carlo simulation of the exospheres of the moon and Mercury
Hodges, R. R., Jr.
1980-01-01
A general form of the integral equation of exospheric transport on moon-like bodies is derived in a form that permits arbitrary specification of time-varying physical processes affecting atom creation and annihilation, atom-regolith collisions, adsorption and desorption, and nonplanetocentric acceleration. Because these processes usually defy analytic representation, the Monte Carlo method of solution of the transport equation, the only viable alternative, is described in detail, with separate discussions of the specification of physical processes as probabilistic functions. Proof of the validity of the Monte Carlo exosphere simulation method is provided in the form of a comparison of analytic and Monte Carlo solutions to three classical, and analytically tractable, exosphere problems. One of the key phenomena in moon-like exosphere simulations, the distribution of velocities of the atoms leaving the regolith, depends mainly on the nature of collisions of free atoms with rocks. It is shown that on the moon and Mercury, elastic collisions of helium atoms with a Maxwellian distribution of vibrating, bound atoms produce a nearly Maxwellian distribution of helium velocities, despite the absence of speeds in excess of the escape speed in the impinging helium velocity distribution.
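The consequence of a Maxwellian launch distribution can be illustrated with a small sketch: sampling launch speeds from a Maxwellian and counting those exceeding the escape speed gives the per-hop escape probability that drives helium loss. The temperature and the lunar escape speed below are approximate textbook values, not parameters taken from the paper:

```python
import math
import random

K_B = 1.380649e-23      # Boltzmann constant, J/K
M_HE = 6.6465e-27       # mass of a helium-4 atom, kg
V_ESC_MOON = 2376.0     # approximate lunar escape speed, m/s

def sample_maxwellian_speed(temperature, mass, rng):
    """Draw a speed from a Maxwellian by sampling three Gaussian velocity
    components, mirroring the assumption that atoms leave the regolith
    with a near-Maxwellian velocity distribution."""
    sigma = math.sqrt(K_B * temperature / mass)
    return math.sqrt(sum(rng.gauss(0.0, sigma) ** 2 for _ in range(3)))

def escape_fraction(temperature, n=200_000, seed=3):
    """Monte Carlo estimate of the fraction of helium atoms leaving the
    surface faster than the escape speed (a toy surrogate for one
    ballistic hop in a full exosphere simulation)."""
    rng = random.Random(seed)
    fast = sum(1 for _ in range(n)
               if sample_maxwellian_speed(temperature, M_HE, rng) > V_ESC_MOON)
    return fast / n

print(f"escaping fraction at 400 K: {escape_fraction(400.0):.3f}")
```

Even a few percent escape probability per hop depletes helium over a modest number of ballistic hops, which is why the near-Maxwellian tail produced by regolith collisions matters so much in these simulations.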
Nonequilibrium candidate Monte Carlo is an efficient tool for equilibrium simulation
Energy Technology Data Exchange (ETDEWEB)
Nilmeier, J. P.; Crooks, G. E.; Minh, D. D. L.; Chodera, J. D.
2011-10-24
Metropolis Monte Carlo simulation is a powerful tool for studying the equilibrium properties of matter. In complex condensed-phase systems, however, it is difficult to design Monte Carlo moves with high acceptance probabilities that also rapidly sample uncorrelated configurations. Here, we introduce a new class of moves based on nonequilibrium dynamics: candidate configurations are generated through a finite-time process in which a system is actively driven out of equilibrium, and accepted with criteria that preserve the equilibrium distribution. The acceptance rule is similar to the Metropolis acceptance probability, but related to the nonequilibrium work rather than the instantaneous energy difference. Our method is applicable to sampling from either a single thermodynamic state or a mixture of thermodynamic states, and allows both coordinates and thermodynamic parameters to be driven in nonequilibrium proposals. While generating finite-time switching trajectories incurs an additional cost, driving some degrees of freedom while allowing others to evolve naturally can lead to large enhancements in acceptance probabilities, greatly reducing structural correlation times. Using nonequilibrium driven processes vastly expands the repertoire of useful Monte Carlo proposals in simulations of dense solvated systems.
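A minimal sketch of such a move for a 1D double well, using a palindromic protocol that temporarily flattens the barrier and accepting the candidate with min(1, exp(-beta*w)) where w is the accumulated protocol work. The toy potential, protocol and step sizes are our own illustration, not a system from the paper:

```python
import math
import random

def u_target(x):
    return (x * x - 1.0) ** 2            # double well with minima at x = +/-1 (beta = 1)

def u_lambda(x, lam):
    # Bridge toward a flat single well as lam -> 1; the cyclic protocol
    # returns lam to 0, so the endpoints are the target potential.
    return (1.0 - lam) * u_target(x) + lam * 0.5 * x * x

def metropolis_step(x, lam, rng, step=0.5):
    """One detailed-balance-preserving move at fixed lam (contributes no work)."""
    y = x + rng.uniform(-step, step)
    if rng.random() < math.exp(min(0.0, u_lambda(x, lam) - u_lambda(y, lam))):
        return y
    return x

def ncmc_move(x, rng, n_prot=20):
    """One nonequilibrium candidate move: drive lam 0 -> 1 -> 0, accumulating
    protocol work at each lam switch, propagating with Metropolis in between,
    and accepting the final configuration with min(1, exp(-w))."""
    lams = [i / n_prot for i in range(1, n_prot + 1)]
    lams += lams[-2::-1] + [0.0]         # palindromic path back to lam = 0
    y, w, lam_old = x, 0.0, 0.0
    for lam in lams:
        w += u_lambda(y, lam) - u_lambda(y, lam_old)   # work from the switch
        y = metropolis_step(y, lam, rng)
        lam_old = lam
    return y if rng.random() < math.exp(min(0.0, -w)) else x

rng = random.Random(11)
x, samples = 1.0, []
for _ in range(5000):
    x = ncmc_move(x, rng)
    samples.append(x)
print(sum(samples) / len(samples))   # near 0 by symmetry if both wells are sampled
```

Plain Metropolis crosses the barrier only rarely; the driven proposal carries the walker over the temporarily flattened barrier in one move, which is the correlation-time reduction the method advertises.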
Accelerated rescaling of single Monte Carlo simulation runs with the Graphics Processing Unit (GPU).
Yang, Owen; Choi, Bernard
2013-01-01
To interpret fiber-based and camera-based measurements of remitted light from biological tissues, researchers typically use analytical models, such as the diffusion approximation to light transport theory, or stochastic models, such as Monte Carlo modeling. To achieve rapid (ideally real-time) measurement of tissue optical properties, especially in clinical situations, there is a critical need to accelerate Monte Carlo simulation runs. In this manuscript, we report on our approach using the Graphics Processing Unit (GPU) to accelerate rescaling of single Monte Carlo runs to rapidly calculate diffuse reflectance values for different sets of tissue optical properties. We selected MATLAB to enable non-specialists in C and CUDA-based programming to use the generated open-source code. We developed a software package with four abstraction layers. To calculate a set of diffuse reflectance values from a simulated tissue with homogeneous optical properties, our rescaling GPU-based approach achieves a reduction in computation time of several orders of magnitude as compared to other GPU-based approaches. Specifically, our GPU-based approach generated a diffuse reflectance value in 0.08 ms. The transfer time from CPU to GPU memory currently is a limiting factor with GPU-based calculations. However, for calculation of multiple diffuse reflectance values, our GPU-based approach still can lead to processing that is ~3400 times faster than other GPU-based approaches.
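The rescaling idea itself (run one absorption-free simulation, record path lengths, then reweight with Beer-Lambert factors to get reflectance at any absorption coefficient) can be sketched on the CPU without CUDA or MATLAB. The scattering model below is a crude isotropic stand-in for a real photon-transport code:

```python
import math
import random

def baseline_paths(mu_s, n_photons=20_000, seed=9):
    """Run a single scattering-only Monte Carlo in a semi-infinite medium and
    record, for each photon that escapes back through the surface, its total
    path length. Isotropic scattering; a crude stand-in for a full MC code."""
    rng = random.Random(seed)
    paths = []
    for _ in range(n_photons):
        z, length, uz = 0.0, 0.0, 1.0     # launched straight into the medium
        for _ in range(1000):              # hard cap on scattering events
            step = -math.log(rng.random()) / mu_s
            length += step
            z += uz * step
            if z < 0.0:                    # escaped through the surface
                paths.append(length - z / uz)  # trim overshoot beyond z = 0
                break
            uz = rng.uniform(-1.0, 1.0)    # isotropic: cos(theta) uniform
    return paths, n_photons

def reflectance(paths, n_total, mu_a):
    """Rescale the single absorption-free run to any mu_a by Beer-Lambert
    weighting of the recorded path lengths ('white Monte Carlo' style)."""
    return sum(math.exp(-mu_a * L) for L in paths) / n_total

paths, n = baseline_paths(mu_s=10.0)
for mu_a in (0.01, 0.1, 1.0):
    print(mu_a, reflectance(paths, n, mu_a))
```

The reweighting step is an embarrassingly parallel reduction over the stored path lengths, which is exactly why it maps so well onto a GPU.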
Energy Technology Data Exchange (ETDEWEB)
Feck, Norbert; Wagner, Hermann-Josef [Bochum Univ. (Germany). Lehrstuhl fuer Energiesysteme und Energiewirtschaft
2008-07-01
Uncertainties in the data of life cycle assessments of new energy systems can be accounted for by Monte Carlo simulation. The uncertain data are represented by probability distributions, and a stochastic simulation is then performed. The article presents the results of such a simulation for a geothermal heating plant using hot-dry-rock technology. (orig.)
Tzamicha, E; Yakoumakis, E; Tsalafoutas, I A; Dimitriadis, A; Georgiou, E; Tsapaki, V; Chalazonitis, A
2015-11-01
To estimate the mean glandular dose in contrast-enhanced digital mammography, using the EGSnrc Monte Carlo code and a female adult voxel phantom. The automatic exposure control of a full-field digital mammography system was used for the selection of the X-ray spectrum and the exposure settings for dual-energy imaging. Measurements of the air kerma and of the half value layers were performed, and a Monte Carlo simulation of the digital mammography system was used to compute the mean glandular dose for breast phantoms of various thicknesses and glandularities and for different X-ray spectra (low and high energy). For breast phantoms 2.0-8.0 cm thick with 0.1-100% glandular fraction, CC view acquisition, from AEC settings, can result in a mean glandular dose of 0.450 ± 0.022 to 2.575 ± 0.033 mGy for low energy images and 0.061 ± 0.021 to 0.232 ± 0.033 mGy for high energy images. In MLO view acquisition, mean glandular dose values ranged between 0.488 ± 0.007 and 2.080 ± 0.021 mGy for low energy images and between 0.065 ± 0.012 and 0.215 ± 0.010 mGy for high energy images. The low kV part of contrast-enhanced digital mammography is the main contributor to the total mean glandular breast dose. The results of this study can be used to provide an estimated mean glandular dose for individual cases. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Christelle Garnier
2008-05-01
Full Text Available We address the problem of phase noise (PHN and carrier frequency offset (CFO mitigation in multicarrier receivers. In multicarrier systems, phase distortions cause two effects: the common phase error (CPE and the intercarrier interference (ICI which severely degrade the accuracy of the symbol detection stage. Here, we propose a non-pilot-aided scheme to jointly estimate PHN, CFO, and multicarrier signal in time domain. Unlike existing methods, non-pilot-based estimation is performed without any decision-directed scheme. Our approach to the problem is based on Bayesian estimation using sequential Monte Carlo filtering commonly referred to as particle filtering. The particle filter is efficiently implemented by combining the principles of the Rao-Blackwellization technique and an approximate optimal importance function for phase distortion sampling. Moreover, in order to fully benefit from time-domain processing, we propose a multicarrier signal model which includes the redundancy information induced by the cyclic prefix, thus leading to a significant performance improvement. Simulation results are provided in terms of bit error rate (BER and mean square error (MSE to illustrate the efficiency and the robustness of the proposed algorithm.
An Implicit Monte Carlo Method for Simulation of Impurity Transport in Divertor Plasma
Suzuki, Akiko; Takizuka, Tomonori; Shimizu, Katsuhiro; Hayashi, Nobuhiko; Hatayama, Akiyoshi; Ogasawara, Masatada
1997-02-01
A new "implicit" Monte Carlo (IMC) method has been developed to simulate ionization and recombination processes of impurity ions in divertor plasmas. The IMC method takes into account many ionization and recombination processes during a time step Δt. The time step is not limited by the condition Δt ≪ τmin (τmin: the minimum characteristic time of atomic processes), which must be adopted in conventional Monte Carlo methods. We incorporate this method into a one-dimensional impurity transport model. In this transport calculation, impurity ions are followed with a time step about 10 times larger than that used in conventional methods. The average charge state of impurities and the radiative cooling rate, L(Te), are calculated at the electron temperature Te in divertor plasmas. These results are compared with those obtained from the simple noncoronal model.
Geometry dependence of the sign problem in quantum Monte Carlo simulations
Iglovikov, V. I.; Khatami, E.; Scalettar, R. T.
2015-07-01
The sign problem is the fundamental limitation to quantum Monte Carlo simulations of the statistical mechanics of interacting fermions. Determinant quantum Monte Carlo (DQMC) is one of the leading methods to study lattice fermions, such as the Hubbard Hamiltonian, which describe strongly correlated phenomena including magnetism, metal-insulator transitions, and possibly exotic superconductivity. Here, we provide a comprehensive dataset on the geometry dependence of the DQMC sign problem for different densities, interaction strengths, temperatures, and spatial lattice sizes. We supplement these data with several observations concerning general trends in the data, including the dependence on spatial volume and how this can be probed by examining decoupled clusters, the scaling of the sign in the vicinity of a particle-hole symmetric point, and the correlation between the total sign and the signs for the individual spin species.
Auxiliary-Field Quantum Monte Carlo Simulations of Strongly-Correlated Molecules and Solids
Energy Technology Data Exchange (ETDEWEB)
Chang, C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Morales, M. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2016-11-10
We propose a method of implementing projected wave functions for second-quantized auxiliary-field quantum Monte Carlo (AFQMC) techniques. The method is based on expressing the two-body projector as one-body terms coupled to binary Ising fields. To benchmark the method, we choose to study the two-dimensional (2D) one-band Hubbard model with repulsive interactions using the constrained-path MC (CPMC). The CPMC uses a trial wave function to guide the random walks so that the so-called fermion sign problem can be eliminated. The trial wave function also serves as the importance function in Monte Carlo sampling. As such, the quality of the trial wave function has a direct impact on the efficiency and accuracy of the simulations.
Power-feedwater temperature operating domain for Sbwr applying Monte Carlo simulation
Energy Technology Data Exchange (ETDEWEB)
Aguilar M, L. A.; Quezada G, S.; Espinosa M, E. G.; Vazquez R, A.; Varela H, J. R.; Cazares R, R. I.; Espinosa P, G., E-mail: sequega@gmail.com [Universidad Autonoma Metropolitana, Unidad Iztapalapa, San Rafael Atlixco No. 186, Col. Vicentina, 09340 Mexico D. F. (Mexico)
2014-10-15
In this work, an analysis of the effects of feedwater temperature on reactor power in a simplified boiling water reactor (Sbwr), applying a methodology based on Monte Carlo simulation, is presented. The Monte Carlo methodology was applied systematically to establish the operating domain. Since the Sbwr is not yet in operation, the analysis of the nuclear and thermal-hydraulic processes must rely on numerical modeling, with the purpose of developing or confirming the design basis and qualifying existing or new computer codes to enable reliable analyses. The results show that the reactor power is inversely proportional to the feedwater temperature: the reactor power changes by 8% when the feedwater temperature changes by 8%. (Author)
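The reported inverse relation can be turned into a toy uncertainty-propagation sketch: sample an uncertain feedwater temperature, push each sample through a surrogate power model, and read an operating band off the resulting distribution. The surrogate model, reference values and input distribution below are all assumptions for illustration, not the study's model:

```python
import random

def power_model(t_feed_c, t_ref_c=216.0, p_ref_mw=2000.0, sensitivity=-1.0):
    """Illustrative linear surrogate (not the study's actual model): a 1%
    increase in feedwater temperature lowers power by about 1%, reflecting
    the reported inverse relation."""
    return p_ref_mw * (1.0 + sensitivity * (t_feed_c - t_ref_c) / t_ref_c)

def operating_band(n=50_000, seed=21):
    """Propagate feedwater-temperature uncertainty (here Gaussian with a
    5 degC spread) through the surrogate to obtain a probabilistic band
    for reactor power."""
    rng = random.Random(seed)
    powers = sorted(power_model(rng.gauss(216.0, 5.0)) for _ in range(n))
    return powers[int(0.05 * n)], powers[int(0.95 * n)]

lo, hi = operating_band()
print(f"90% band: {lo:.0f}-{hi:.0f} MW")
```

Repeating this for each uncertain input is the systematic Monte Carlo sweep the abstract describes for mapping out the power-feedwater temperature operating domain.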
Allam, Kh A
2017-12-01
In this work, a new methodology based on Monte Carlo simulation is developed for external dose calculation in tunnels and mines. The tunnel is modeled as a cylindrical shape of finite thickness with an entrance and with or without an exit. A photon transport model was applied for the exposure dose calculations. New software based on the Monte Carlo solution was designed and programmed using the Delphi programming language. The deviation between the calculated external dose due to radioactive nuclei in a mine tunnel and the corresponding experimental data lies in the range 7.3-19.9%. The variation of the specific external dose rate with position in the tunnel and with building material density and composition was studied. The new model is more flexible for calculating the real external dose in any cylindrical tunnel structure. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Acceleration of Monte Carlo simulations through spatial updating in the grand canonical ensemble.
Orkoulas, G
2007-08-28
A new grand canonical Monte Carlo algorithm for continuum fluid models is proposed. The method is based on a generalization of sequential Monte Carlo algorithms for lattice gas systems. The elementary moves, particle insertions and removals, are constructed by analogy with those of a lattice gas. The updating is implemented by selecting points in space (spatial updating) either at random or in a definite order (sequentially). The type of move, insertion or removal, is deduced based on the local environment of the selected points. Results on two-dimensional square-well fluids indicate that the sequential version of the proposed algorithm converges faster than standard grand canonical algorithms for continuum fluids. Due to the nature of the updating, additional reduction of simulation time may be achieved by parallel implementation through domain decomposition.
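For reference, the conventional grand canonical algorithm that the spatial-updating scheme is compared against reduces, for an ideal gas, to a few lines (this sketch is the standard algorithm, not the paper's spatial-updating variant, and the energy is zero so the acceptance rules involve only the activity z times the volume V):

```python
import random

def gcmc_ideal_gas(zv, n_sweeps=40_000, seed=5):
    """Textbook grand canonical MC for an ideal gas: attempt an insertion or
    a removal with equal probability, accepting with min(1, zV/(N+1)) or
    min(1, N/zV) respectively; the particle number then fluctuates around
    its equilibrium (Poisson) distribution with mean zV."""
    rng = random.Random(seed)
    n, history = 0, []
    for _ in range(n_sweeps):
        if rng.random() < 0.5:                          # attempt insertion
            if rng.random() < min(1.0, zv / (n + 1)):
                n += 1
        elif n > 0:                                     # attempt removal
            if rng.random() < min(1.0, n / zv):
                n -= 1
        history.append(n)
    return history

hist = gcmc_ideal_gas(zv=10.0)
burn = hist[len(hist) // 4:]
print(sum(burn) / len(burn))   # fluctuates around zV = 10
```

In the spatial-updating scheme, the random choice of move type is replaced by a deterministic decision based on the occupancy of the selected point, which is what enables the sequential sweep and the domain-decomposed parallelism mentioned above.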
A study on T-shape Compton suppression spectrometer by Monte Carlo simulation
Kiang, L. L.; Tsou, R. H.; Lin, W. J.; Lin, Simon C.; Kiang, G. C.; Teng, P. K.; Li, S. D.
1993-04-01
The geometrical parameters which affect the suppression factor for a T-shape HPGe-NaI(Tl) Compton suppression spectrometer are studied by a Monte Carlo simulation method. We simulate two combinations of this sort of Compton suppression spectrometer (a 113 and a 146 cm³ high-purity germanium detector inserted into a 22.9 cm Ø × 25.4 cm and a 14.0 cm Ø × 17.8 cm annular NaI(Tl) shield detector, respectively). The simulated suppression factors are consistent with the results from the experiment. The optimal position of the HPGe detector within the shield for Compton suppression is at about half the radius of the NaI(Tl) detector. Further simulation finds that the suppression factor is independent of the length of the shield but dependent on its radius. Other geometrical topologies for the shield are also discussed.
Direct simulation Monte Carlo method for gas cluster ion beam technology
Insepov, Z
2003-01-01
A direct simulation Monte Carlo method has been developed and applied to the simulation of a supersonic Ar gas expansion through a converging-diverging nozzle, with stagnation pressures of P₀ = 0.1-10 atm, at various temperatures. A body-fitted coordinate system has been developed that allows modeling of nozzles of arbitrary shape. A wide selection of nozzle sizes and apex angles, with diffuse and specular atomic reflection laws at the nozzle walls, has been studied. The results of the nozzle simulation were used to obtain a scaling law, P₀T₀^(19/8)d^α L_n^β = const., for the constant mean cluster sizes that are formed in conical nozzles. Hagena's formula, valid for conical nozzles with a constant length, has thus been extended to conical nozzles with variable lengths, based on our simulation results.
Monte Carlo simulation of the response of a pixellated 3D photo-detector in silicon
Dubaric, E; Froejdh, C; Norlin, B
2002-01-01
The charge transport and X-ray photon absorption in three-dimensional (3D) X-ray pixel detectors have been studied using numerical simulations. The charge transport has been modelled using the drift-diffusion simulator MEDICI, while photon absorption has been studied using MCNP. The response of the entire pixel detector system in terms of charge sharing, line spread function and modulation transfer function, has been simulated using a system level Monte Carlo simulation approach. A major part of the study is devoted to the effect of charge sharing on the energy resolution in 3D-pixel detectors. The 3D configuration was found to suppress charge sharing much better than conventional planar detectors.
Ab initio molecular dynamics simulation of liquid water by quantum Monte Carlo
Energy Technology Data Exchange (ETDEWEB)
Zen, Andrea, E-mail: a.zen@ucl.ac.uk [Dipartimento di Fisica, “La Sapienza” - Università di Roma, piazzale Aldo Moro 5, 00185 Rome (Italy); London Centre for Nanotechnology, University College London, London WC1E 6BT (United Kingdom); Luo, Ye, E-mail: xw111luoye@gmail.com; Mazzola, Guglielmo, E-mail: gmazzola@phys.ethz.ch; Sorella, Sandro, E-mail: sorella@sissa.it [SISSA–International School for Advanced Studies, Via Bonomea 26, 34136 Trieste (Italy); Democritos Simulation Center CNR–IOM Istituto Officina dei Materiali, 34151 Trieste (Italy); Guidoni, Leonardo, E-mail: leonardo.guidoni@univaq.it [Dipartimento di Fisica, “La Sapienza” - Università di Roma, piazzale Aldo Moro 5, 00185 Rome (Italy); Dipartimento di Scienze Fisiche e Chimiche, Università degli Studi dell’ Aquila, via Vetoio, 67100 L’ Aquila (Italy)
2015-04-14
Although liquid water is ubiquitous in chemical reactions at the roots of life and climate on the earth, the prediction of its properties by high-level ab initio molecular dynamics simulations still represents a formidable task for quantum chemistry. In this article, we present a room temperature simulation of liquid water based on the potential energy surface obtained by a many-body wave function through quantum Monte Carlo (QMC) methods. The simulated properties are in good agreement with recent neutron scattering and X-ray experiments, particularly concerning the position of the oxygen-oxygen peak in the radial distribution function, at variance with previous density functional theory attempts. Given the excellent performance of QMC on large scale supercomputers, this work opens new perspectives for predictive and reliable ab initio simulations of complex chemical systems.
Monte Carlo simulation of heavy ion induced kinetic electron emission from an Al surface
Ohya, K
2002-01-01
A Monte Carlo simulation is performed in order to study heavy ion induced kinetic electron emission from an Al surface. In the simulation, excitation of conduction band electrons by the projectile ion and recoiling target atoms is treated on the basis of the partial wave expansion method, and the cascade multiplication process of the excited electrons is simulated as well as the collision cascade of the recoiling target atoms. Experimental electron yields near conventional threshold energies of heavy ions are reproduced by assuming a lowering of the apparent surface barrier for the electrons. The present calculation derives, from the calculated total electron yield, the components for electron excitation by the projectile ion, the recoiling target atoms and the electron cascades. The component from the recoiling target atoms increases with increasing projectile mass, whereas the component from the electron cascade decreases. Although the components from the projectile ion and the electron cascade increase with...
Modeling turbulence in underwater wireless optical communications based on Monte Carlo simulation.
Vali, Zahra; Gholami, Asghar; Ghassemlooy, Zabih; Michelson, David G; Omoomi, Masood; Noori, Hamed
2017-07-01
Turbulence affects the performance of underwater wireless optical communications (UWOC). Although multiple scattering and absorption have previously been investigated by means of physical simulation models, a physical simulation model for UWOC with turbulence is still needed. In this paper, we propose a Monte Carlo simulation model for UWOC in turbulent oceanic clear water, which is far less computationally intensive than approaches based on computational fluid dynamics. The model is based on the variation of the refractive index along a horizontal link. Results show that the proposed simulation model correctly reproduces the lognormal probability density function of the received intensity for the weak and moderate turbulence regimes, and they match well with experimental data reported for weak turbulence. Furthermore, the scintillation index and turbulence-induced power loss are presented as functions of link span for different refractive index variations.
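The lognormal intensity statistics that the model reproduces can be checked with a few lines of Monte Carlo. This is an illustrative sketch, not the authors' code; the log-amplitude standard deviation is a hypothetical weak-turbulence value, and intensity is normalised to unit mean.

```python
import math
import random

def scintillation_index(samples):
    """Empirical scintillation index sigma_I^2 = <I^2>/<I>^2 - 1."""
    n = len(samples)
    m1 = sum(samples) / n
    m2 = sum(s * s for s in samples) / n
    return m2 / (m1 * m1) - 1.0

random.seed(1)
sigma_x = 0.1  # log-amplitude std dev (hypothetical weak turbulence)
# Lognormal fading: I = exp(2X - 2*sigma_x^2), X ~ N(0, sigma_x^2), so <I> = 1
samples = [math.exp(2.0 * random.gauss(0.0, sigma_x) - 2.0 * sigma_x**2)
           for _ in range(200_000)]

empirical = scintillation_index(samples)
analytic = math.exp(4.0 * sigma_x**2) - 1.0  # exact value for lognormal fading
```

For weak turbulence the empirical scintillation index converges to the closed-form lognormal value, which is the consistency check such simulation models are typically validated against.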
Computational physics an introduction to Monte Carlo simulations of matrix field theory
Ydri, Badis
2017-01-01
This book is divided into two parts. In the first part we give an elementary introduction to computational physics consisting of 21 simulations which originated from a formal course of lectures and laboratory simulations delivered since 2010 to physics students at Annaba University. The second part is much more advanced and deals with the problem of how to set up working Monte Carlo simulations of matrix field theories which involve finite dimensional matrix regularizations of noncommutative and fuzzy field theories, fuzzy spaces and matrix geometry. The study of matrix field theory in its own right has also become very important to the proper understanding of all noncommutative, fuzzy and matrix phenomena. The second part, which consists of 9 simulations, was delivered informally to doctoral students who are working on various problems in matrix field theory. Sample codes as well as sample key solutions are also provided for convenience and completeness. An appendix containing an executive Arabic summary of t...
Implementation of a modified Monte Carlo simulation for evaluating barrier option prices
Directory of Open Access Journals (Sweden)
Kazem Nouri
2017-03-01
In this paper, we apply an improved version of Monte Carlo methods to pricing barrier options. This kind of option may match risk-hedging needs more closely than standard options. Barrier options behave like a plain vanilla option with one exception: a zero payoff may occur before expiry if the option ceases to exist; accordingly, barrier options are cheaper than similar standard vanilla options. We apply a new Monte Carlo method to compute the prices of single and double barrier options written on stocks. The basic idea of the new method is to use uniformly distributed random numbers and an exit probability in order to perform a robust estimation of the first time the stock price hits the barrier. Using uniformly distributed random numbers decreases the first-hitting-time estimation error in comparison with standard Monte Carlo and similar methods. It is numerically shown that the result of our method is closer to the exact value and that the first-hitting-time error is reduced.
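The exit-probability idea is commonly implemented via a Brownian-bridge crossing probability between monitoring steps. The sketch below prices a down-and-out call this way; it is a generic illustration under standard Black-Scholes dynamics, not the authors' exact method, and all numerical parameters in the usage line are hypothetical.

```python
import math
import random

def down_and_out_call(s0, k, barrier, r, sigma, t, steps, paths, seed=0):
    """Monte Carlo price of a down-and-out call under geometric Brownian
    motion. Between time steps, a Brownian-bridge crossing probability
    accounts for barrier breaches that the discrete path does not see."""
    rng = random.Random(seed)
    dt = t / steps
    drift = (r - 0.5 * sigma**2) * dt
    vol = sigma * math.sqrt(dt)
    total = 0.0
    for _ in range(paths):
        s, survive = s0, 1.0
        for _ in range(steps):
            s_next = s * math.exp(drift + vol * rng.gauss(0.0, 1.0))
            if s <= barrier or s_next <= barrier:
                survive = 0.0           # knocked out at a monitoring point
                break
            # probability the continuous bridge stayed above the barrier
            a = math.log(s / barrier) * math.log(s_next / barrier)
            survive *= 1.0 - math.exp(-2.0 * a / (sigma**2 * dt))
            s = s_next
        total += survive * max(s - k, 0.0)
    return math.exp(-r * t) * total / paths

price = down_and_out_call(s0=100.0, k=100.0, barrier=90.0,
                          r=0.05, sigma=0.2, t=1.0, steps=50, paths=20_000)
```

Weighting the payoff by the product of survival probabilities avoids the bias of checking the barrier only at discrete dates, which is the main source of first-hitting-time error in plain Monte Carlo.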
Parameter estimation in channel network flow simulation
Directory of Open Access Journals (Sweden)
Han Longxi
2008-03-01
Simulations of water flow in channel networks require estimated values of roughness for all the individual channel segments that make up a network. When the number of individual channel segments is large, the parameter calibration workload is substantial and a high level of uncertainty in estimated roughness cannot be avoided. In this study, all the individual channel segments are graded according to the factors determining the value of roughness. It is assumed that channel segments with the same grade have the same value of roughness. Based on observed hydrological data, an optimal model for roughness estimation is built. The procedure of solving the optimal problem using the optimal model is described. In a test of its efficacy, this estimation method was applied successfully in the simulation of tidal water flow in a large complicated channel network in the lower reach of the Yangtze River in China.
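The grading idea can be sketched as a tiny calibration problem: segments sharing a grade share one Manning roughness, chosen to minimise the misfit against observed discharges. All segment data and the coarse grid search below are invented for illustration; the study's optimal model is more elaborate.

```python
import itertools

# Hypothetical segments: (grade, area m^2, hydraulic radius m, slope,
# observed discharge m^3/s). Segments of the same grade share one roughness.
segments = [
    ("A", 12.0, 1.5, 4e-4, 13.1),
    ("A", 10.0, 1.2, 5e-4, 10.2),
    ("B", 20.0, 2.0, 3e-4, 17.5),
    ("B", 18.0, 1.8, 3e-4, 14.8),
]

def manning_q(n, area, radius, slope):
    """Manning's equation: Q = (1/n) * A * R^(2/3) * S^(1/2)."""
    return area * radius ** (2.0 / 3.0) * slope ** 0.5 / n

def misfit(n_by_grade):
    """Sum of squared errors between modelled and observed discharges."""
    return sum((manning_q(n_by_grade[g], a, r, s) - q_obs) ** 2
               for g, a, r, s, q_obs in segments)

# Coarse grid search over plausible Manning n values for the two grades
candidates = [0.020 + 0.002 * i for i in range(16)]
best = min(({"A": na, "B": nb}
            for na, nb in itertools.product(candidates, candidates)),
           key=misfit)
```

Grading collapses one roughness parameter per segment into one per grade, which is what makes the calibration tractable when a network has hundreds of segments.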
Validation of cross sections for Monte Carlo simulation of the photoelectric effect
Han, Min Cheol; Pia, Maria Grazia; Basaglia, Tullio; Batic, Matej; Hoff, Gabriela; Kim, Chan Hyeong; Saracco, Paolo
2016-01-01
Several total and partial photoionization cross section calculations, based on both theoretical and empirical approaches, are quantitatively evaluated with statistical analyses using a large collection of experimental data retrieved from the literature to identify the state of the art for modeling the photoelectric effect in Monte Carlo particle transport. Some of the examined cross section models are available in general purpose Monte Carlo systems, while others have been implemented and subjected to validation tests for the first time to estimate whether they could improve the accuracy of particle transport codes. The validation process identifies Scofield's 1973 non-relativistic calculations, tabulated in the Evaluated Photon Data Library (EPDL), as the one best reproducing experimental measurements of total cross sections. Specialized total cross section models, some of which derive from more recent calculations, do not provide significant improvements. Scofield's non-relativistic calculations are not surp...
Monte Carlo simulations of multiphase flow incorporating spatial variability of hydraulic properties
Essaid, Hedeff I.; Hess, Kathryn M.
1993-01-01
To study the effect of spatial variability of sediment hydraulic properties on multiphase flow, oil infiltration into a hypothetical glacial outwash aquifer, followed by oil extraction, was simulated using a cross-sectional multiphase flow model. The analysis was simplified by neglecting capillary hysteresis. The first simulation used a uniform mean permeability and mean retention curve. This was followed by 50 Monte Carlo simulations conducted using 50 spatially variable permeability realizations and corresponding spatially variable retention curves. For the type of correlation structure considered in this study, which is similar to that of glacial outwash deposits, use of mean hydraulic properties reproduces the ensemble average oil saturation distribution obtained from the Monte Carlo simulations. However, spatial variability causes the oil saturation distribution in an individual oil lens to differ significantly from that of the mean lens. Oil saturations at a given location may be considerably higher than would be predicted using uniform mean properties. During cleanup by oil extraction from a well, considerably more oil may remain behind in the heterogeneous case than in the spatially uniform case.
Srna - Monte Carlo codes for proton transport simulation in combined and voxelized geometries
Directory of Open Access Journals (Sweden)
Ilić Radovan D.
2002-01-01
This paper describes new Monte Carlo codes for proton transport simulations in complex geometrical forms and in materials of different composition. The SRNA codes were developed for three-dimensional (3D) dose distribution calculation in proton therapy and dosimetry. The model of these codes is based on the theory of proton multiple scattering and a simple model of compound nucleus decay. The developed package consists of two codes: SRNA-2KG and SRNA-VOX. The first code simulates proton transport in combined geometry that can be described by planes and second-order surfaces. The second one uses the voxelized geometry of material zones and is specifically adapted for use with patient computed tomography data. Transition probabilities for both codes are given by the SRNADAT program. In this paper, we present the models and algorithms of our programs, as well as the results of the numerical experiments we have carried out applying them, along with the results of proton transport simulation obtained through the PETRA and GEANT programs. The simulation of proton beam characterization by means of the Multi-Layer Faraday Cup and the spatial distribution of positron emitters obtained by our program indicate the imminent application of Monte Carlo techniques in clinical practice.
Monte Carlo Simulation Of Soot Evolution along Lagrangian Trajectories in a Turbulent Flame
Abdelgadir, Ahmed; Zhou, Kun; Attili, Antonio; Bisetti, Fabrizio
2013-11-01
A newly developed Monte Carlo method is used to simulate soot formation and growth in a turbulent n-heptane/air flame. The Monte Carlo method is used to simulate the soot evolution along selected Lagrangian trajectories obtained from a direct numerical simulation of a turbulent sooting jet flame [Attili et al., Direct and Large-Eddy Simulation 9, Springer, 2013] based on a high-order method of moments. The method adopts an operator splitting approach, which splits the deterministic processes (nucleation, surface growth and oxidation) from coagulation, which is treated stochastically. The purpose of this work is to assess the solution based on the moment method and to investigate the soot particle size distribution (PSD) that is not available in methods of moments. Nucleation and coagulation have the greatest effect on the PSD; therefore, various coagulation models are considered. Along each trajectory, one or more rapid nucleation events occur, affecting the shape of the PSD. It is shown that oxidation and surface growth affect the PSD quantitatively, but do not change the shape significantly.
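The operator-splitting idea (deterministic surface growth, stochastic coagulation) can be sketched on a toy particle population. The constant coagulation kernel, growth rate and initial population below are invented; a real soot code would use physical kernels and include nucleation and oxidation.

```python
import random

def step(particles, dt, growth_rate, coag_kernel, rng):
    """One operator-splitting step: a deterministic growth sub-step for
    every particle, then a stochastic (Monte Carlo) coagulation sub-step."""
    # (i) deterministic sub-step: surface growth as a simple ODE update
    particles = [v + growth_rate * dt for v in particles]
    # (ii) stochastic sub-step: candidate pair merges, accepted by kernel ratio
    n = len(particles)
    k_max = max(coag_kernel(u, v) for u in particles for v in particles)
    n_events = int(0.5 * n * (n - 1) * k_max * dt)  # majorant event count
    for _ in range(n_events):
        if len(particles) < 2:
            break
        i, j = rng.sample(range(len(particles)), 2)
        if rng.random() < coag_kernel(particles[i], particles[j]) / k_max:
            v = particles[i] + particles[j]        # merge conserves volume
            particles = [p for k, p in enumerate(particles) if k not in (i, j)]
            particles.append(v)
    return particles

rng = random.Random(0)
pop = [1.0] * 200   # hypothetical monodisperse initial population
for _ in range(20):
    pop = step(pop, dt=0.01, growth_rate=0.5,
               coag_kernel=lambda u, v: 0.1, rng=rng)
```

Coagulation reduces the particle count while conserving total volume, and growth adds volume deterministically, so the evolving list of particle volumes carries the full size distribution that a method of moments cannot resolve.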
CAD-based Monte Carlo Program for Integrated Simulation of Nuclear System SuperMC
Wu, Yican; Song, Jing; Zheng, Huaqing; Sun, Guangyao; Hao, Lijuan; Long, Pengcheng; Hu, Liqin
2014-06-01
The Monte Carlo (MC) method has distinct advantages for simulating complicated nuclear systems and is envisioned as a routine method for nuclear design and analysis in the future. High-fidelity simulation with the MC method, coupled with multi-physical phenomenon simulation, has a significant impact on the safety, economy and sustainability of nuclear systems. However, great challenges to current MC methods and codes prevent their application in real engineering projects. SuperMC is a CAD-based Monte Carlo program for integrated simulation of nuclear systems developed by the FDS Team, China, making use of hybrid MC-deterministic methods and advanced computer technologies. The design aim, architecture and main methodology of SuperMC are presented in this paper. SuperMC2.1, the latest version for neutron, photon and coupled neutron-photon transport calculation, has been developed and validated by using a series of benchmarking cases such as the fusion reactor ITER model and the fast reactor BN-600 model. SuperMC is still evolving toward a general and routine tool for nuclear systems.
Monte Carlo simulation of near infrared autofluorescence measurements of in vivo skin.
Wang, Shuang; Zhao, Jianhua; Lui, Harvey; He, Qingli; Zeng, Haishan
2011-12-02
The autofluorescence properties of normal human skin in the near-infrared (NIR) spectral range were studied using Monte Carlo simulation. The light-tissue interactions, including scattering, absorption and anisotropic propagation of the regenerated autofluorescence photons in the skin tissue, were taken into account in the theoretical modeling. Skin was represented as a turbid seven-layered medium. To facilitate the simulation, ex vivo NIR autofluorescence spectra and images from different skin layers were measured from frozen skin vertical sections to define the intrinsic fluorescence properties. Monte Carlo simulation was then used to study how the intrinsic fluorescence spectra were distorted by tissue reabsorption and scattering during in vivo measurements. We found that the reconstructed model skin spectra were in good agreement with the measured in vivo skin spectra from the same anatomical site as the ex vivo tissue sections, demonstrating the usefulness of this modeling. We also found that a difference exists over the melanin fluorescence wavelength range (880-910 nm) between the simulated spectrum and the measured in vivo skin spectrum from a different anatomical site. This difference suggests that melanin content may affect in vivo skin autofluorescence properties, which deserves further investigation.
Gong, Xingchu; Li, Yao; Chen, Huali; Qu, Haibin
2015-01-01
A design space approach was applied to optimize the extraction process of Danhong injection. Dry matter yield and the yields of five active ingredients were selected as process critical quality attributes (CQAs). Extraction number, extraction time, and the mass ratio of water and material (W/M ratio) were selected as critical process parameters (CPPs). Quadratic models between CPPs and CQAs were developed with determination coefficients higher than 0.94. Active ingredient yields and dry matter yield increased as the extraction number increased. Monte Carlo simulation with models established using a stepwise regression method was applied to calculate the probability-based design space. Step length showed little effect on the calculation results. A higher number of simulations led to results with lower dispersion. Data generated in a Monte Carlo simulation following a normal distribution led to a design space of smaller size. An optimized calculation condition was obtained with 10,000 simulations, a calculation step length of 0.01, a significance level of 0.35 for adding or removing terms in the stepwise regression, and a normal distribution for data generation. The design space with a probability higher than 0.95 of attaining the CQA criteria was calculated and verified successfully. Normal operating ranges of 8.2-10 g/g W/M ratio, 1.25-1.63 h extraction time, and two extractions were recommended. The optimized calculation conditions can conveniently be used in design space development for other pharmaceutical processes.
A constant-time kinetic Monte Carlo algorithm for simulation of large biochemical reaction networks
Slepoy, Alexander; Thompson, Aidan P.; Plimpton, Steven J.
2008-05-01
The time evolution of species concentrations in biochemical reaction networks is often modeled using the stochastic simulation algorithm (SSA) [Gillespie, J. Phys. Chem. 81, 2340 (1977)]. The computational cost of the original SSA scaled linearly with the number of reactions in the network. Gibson and Bruck developed a logarithmic scaling version of the SSA which uses a priority queue or binary tree for more efficient reaction selection [Gibson and Bruck, J. Phys. Chem. A 104, 1876 (2000)]. More generally, this problem is one of dynamic discrete random variate generation which finds many uses in kinetic Monte Carlo and discrete event simulation. We present here a constant-time algorithm, whose cost is independent of the number of reactions, enabled by a slightly more complex underlying data structure. While applicable to kinetic Monte Carlo simulations in general, we describe the algorithm in the context of biochemical simulations and demonstrate its competitive performance on small- and medium-size networks, as well as its superior constant-time performance on very large networks, which are becoming necessary to represent the increasing complexity of biochemical data for pathways that mediate cell function.
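The selection step these methods optimize can be illustrated with a composition-rejection sampler, the general idea behind constant-time variate generation. This is a hedged sketch, not the authors' implementation: the propensity values are hypothetical, and for simplicity the group totals are recomputed on each call, where a real constant-time code maintains them incrementally.

```python
import math
import random

class CompositionRejection:
    """Select a reaction index with probability proportional to its
    propensity. Propensities (all > 0) are binned into groups covering
    [2^g, 2^(g+1)); a group is picked by its total weight, then a member
    by rejection sampling, so the inner loop cost does not grow with the
    number of reactions."""

    def __init__(self, propensities, rng):
        self.rng = rng
        self.groups = {}  # group id -> list of (index, propensity)
        for i, a in enumerate(propensities):
            g = int(math.floor(math.log2(a)))
            self.groups.setdefault(g, []).append((i, a))

    def sample(self):
        totals = {g: sum(a for _, a in m) for g, m in self.groups.items()}
        r = self.rng.random() * sum(totals.values())
        for g, t in totals.items():  # few groups: log2 of propensity range
            if r < t:
                break
            r -= t
        members = self.groups[g]
        upper = 2.0 ** (g + 1)       # rejection bound within the chosen group
        while True:
            i, a = members[self.rng.randrange(len(members))]
            if self.rng.random() * upper <= a:
                return i

rng = random.Random(0)
sampler = CompositionRejection([1.0, 2.0, 4.0, 8.0], rng)
counts = [0, 0, 0, 0]
for _ in range(30_000):
    counts[sampler.sample()] += 1
```

Because every propensity in a group lies within a factor of two of the group's upper bound, the rejection loop accepts with probability at least one half, which is what bounds the expected selection cost independently of network size.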
Energy Technology Data Exchange (ETDEWEB)
Smekens, F; Freud, N; Letang, J M; Babot, D [CNDRI (Nondestructive Testing using Ionizing Radiations) Laboratory, INSA-Lyon, 69621 Villeurbanne Cedex (France); Adam, J-F; Elleaume, H; Esteve, F [INSERM U-836, Equipe 6 ' Rayonnement Synchrotron et Recherche Medicale' , Institut des Neurosciences de Grenoble (France); Ferrero, C; Bravin, A [European Synchrotron Radiation Facility, Grenoble (France)], E-mail: francois.smekens@insa-lyon.fr
2009-08-07
A hybrid approach, combining deterministic and Monte Carlo (MC) calculations, is proposed to compute the distribution of dose deposited during stereotactic synchrotron radiation therapy treatment. The proposed approach divides the computation into two parts: (i) the dose deposited by primary radiation (coming directly from the incident x-ray beam) is calculated in a deterministic way using ray casting techniques and energy-absorption coefficient tables and (ii) the dose deposited by secondary radiation (Rayleigh and Compton scattering, fluorescence) is computed using a hybrid algorithm combining MC and deterministic calculations. In the MC part, a small number of particle histories are simulated. Every time a scattering or fluorescence event takes place, a splitting mechanism is applied, so that multiple secondary photons are generated with a reduced weight. The secondary events are further processed in a deterministic way, using ray casting techniques. The whole simulation, carried out within the framework of the Monte Carlo code Geant4, is shown to converge towards the same results as the full MC simulation. The speed of convergence is found to depend notably on the splitting multiplicity, which can easily be optimized. To assess the performance of the proposed algorithm, we compare it to state-of-the-art MC simulations, accelerated by the track length estimator technique (TLE), considering a clinically realistic test case. It is found that the hybrid approach is significantly faster than the MC/TLE method. The gain in speed in a test case was about 25 for a constant precision. Therefore, this method appears to be suitable for treatment planning applications.
Directory of Open Access Journals (Sweden)
Vincenza Di Stefano
2009-11-01
The Multicomb variance reduction technique has been introduced into direct Monte Carlo simulation of submicrometric semiconductor devices. The method has been implemented in bulk silicon. The simulations show that the statistical variance of hot electrons is reduced at some computational cost. The method is efficient and easy to implement in existing device simulators.
The Cherenkov Telescope Array production system for Monte Carlo simulations and analysis
Arrabito, L.; Bernloehr, K.; Bregeon, J.; Cumani, P.; Hassan, T.; Haupt, A.; Maier, G.; Moralejo, A.; Neyroud, N.; for the CTA Consortium
2017-10-01
The Cherenkov Telescope Array (CTA), an array of many tens of Imaging Atmospheric Cherenkov Telescopes deployed on an unprecedented scale, is the next-generation instrument in the field of very high energy gamma-ray astronomy. An average data stream of about 0.9 GB/s for about 1300 hours of observation per year is expected, therefore resulting in 4 PB of raw data per year and a total of 27 PB/year, including archive and data processing. The start of CTA operation is foreseen in 2018 and it will last about 30 years. The installation of the first telescopes in the two selected locations (Paranal, Chile and La Palma, Spain) will start in 2017. In order to select the best site candidate to host CTA telescopes (in the Northern and in the Southern hemispheres), massive Monte Carlo simulations have been performed since 2012. Once the two sites have been selected, we have started new Monte Carlo simulations to determine the optimal array layout with respect to the obtained sensitivity. Taking into account that CTA may be finally composed of 7 different telescope types coming in 3 different sizes, many different combinations of telescope position and multiplicity as a function of the telescope type have been proposed. This last Monte Carlo campaign represented a huge computational effort, since several hundreds of telescope positions have been simulated, while for future instrument response function simulations, only the operating telescopes will be considered. In particular, during the last 18 months, about 2 PB of Monte Carlo data have been produced and processed with different analysis chains, with a corresponding overall CPU consumption of about 125 M HS06 hours. In these proceedings, we describe the employed computing model, based on the use of grid resources, as well as the production system setup, which relies on the DIRAC interware. Finally, we present the envisaged evolutions of the CTA production system for the off-line data processing during CTA operations and
Dwarf galaxy mass estimators versus cosmological simulations
González-Samaniego, Alejandro; Bullock, James S.; Boylan-Kolchin, Michael; Fitts, Alex; Elbert, Oliver D.; Hopkins, Philip F.; Kereš, Dušan; Faucher-Giguère, Claude-André
2017-12-01
We use a suite of high-resolution cosmological dwarf galaxy simulations to test the accuracy of commonly used mass estimators from Walker et al. (2009) and Wolf et al. (2010), both of which depend on the observed line-of-sight velocity dispersion and the 2D half-light radius of the galaxy, Re. The simulations are part of the Feedback in Realistic Environments (FIRE) project and include 12 systems with stellar masses spanning 10^5-10^7 M⊙ that have structural and kinematic properties similar to those of observed dispersion-supported dwarfs. Both estimators are found to be quite accurate: M_Wolf/M_true = 0.98^{+0.19}_{-0.12} and M_Walker/M_true = 1.07^{+0.21}_{-0.15}, with errors reflecting the 68 per cent range over all simulations. The excellent performance of these estimators is remarkable given that they each assume spherical symmetry, a supposition that is broken in our simulated galaxies. Though our dwarfs have negligible rotation support, their 3D stellar distributions are flattened, with short-to-long axis ratios c/a ≃ 0.4-0.7. The median accuracy of the estimators shows no trend with asphericity. Our simulated galaxies have sphericalized stellar profiles in 3D that follow a nearly universal form, one that transitions from a core at small radius to a steep fall-off ∝ r^{-4.2} at large r; they are well fit by Sérsic profiles in projection. We find that the most important empirical quantity affecting mass estimator accuracy is Re. Determining Re by an analytic fit to the surface density profile produces a better estimated mass than if the half-light radius is determined via direct summation.
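As commonly quoted, the two estimators reduce to simple closed forms in the line-of-sight dispersion and Re. The sketch below assumes the standard expressions M_Wolf ≈ 4 σ² Re / G and M_Walker ≈ 5 σ² Re / (2G), and the dwarf's dispersion and radius are hypothetical numbers chosen for illustration.

```python
G = 4.30091e-6  # gravitational constant in kpc * (km/s)^2 / Msun

def mass_wolf(sigma_los, r_e):
    """Wolf et al. (2010): M(r_1/2) ~= 4 sigma_los^2 R_e / G, the mass
    within the 3D half-light radius r_1/2 ~= (4/3) R_e."""
    return 4.0 * sigma_los**2 * r_e / G

def mass_walker(sigma_los, r_e):
    """Walker et al. (2009): M(R_e) ~= 5 sigma_los^2 R_e / (2 G)."""
    return 2.5 * sigma_los**2 * r_e / G

# Hypothetical dwarf: sigma_los = 9 km/s, R_e = 0.3 kpc
mw = mass_wolf(9.0, 0.3)
mk = mass_walker(9.0, 0.3)
```

Both formulas scale as σ² Re / G and differ only in the numerical prefactor and the radius at which the mass is quoted, which is why the paper can compare them on an equal footing against the true enclosed masses.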
Zou, Yonghong; Christensen, Erik R; Zheng, Wei; Wei, Hua; Li, An
2014-11-01
A stochastic process was developed to simulate the stepwise debromination pathways for polybrominated diphenyl ethers (PBDEs). The stochastic process uses an analogue Markov Chain Monte Carlo (AMCMC) algorithm to generate PBDE debromination profiles. The acceptance or rejection of the randomly drawn stepwise debromination reactions was determined by a maximum likelihood function. The experimental observations at certain time points were used as target profiles; therefore, the stochastic processes are capable of representing the effects of reaction conditions on the selection of debromination pathways. The application of the model is illustrated by adopting the experimental results of decabromodiphenyl ether (BDE209) in hexane exposed to sunlight. Inferences that were not obvious from experimental data were suggested by model simulations. For example, BDE206 shows much higher accumulation in the first 30 min of sunlight exposure, whereas model simulation suggests that BDE206 and BDE207 had comparable yields from BDE209. The reason for the higher BDE206 level is that BDE207 has the highest depletion in producing octa products. Compared to a previous version of the stochastic model based on stochastic reaction sequences (SRS), the AMCMC approach was determined to be more efficient and robust. Due to the feature of only requiring experimental observations as input, the AMCMC model is expected to be applicable to a wide range of PBDE debromination processes, e.g. microbial, photolytic, or joint effects in natural environments.
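The likelihood-driven accept/reject rule at the heart of such samplers is the standard Metropolis step. The sketch below is generic, not the authors' AMCMC: it fits a single first-order depletion rate to three synthetic observations generated from k = 0.03 (all numbers hypothetical).

```python
import math
import random

def metropolis(log_like, x0, proposal_sd, n_steps, rng):
    """Generic Metropolis sampler: a proposed move is accepted with
    probability min(1, L(x') / L(x)), i.e. by a likelihood ratio."""
    x, ll = x0, log_like(x0)
    chain = []
    for _ in range(n_steps):
        x_new = x + rng.gauss(0.0, proposal_sd)
        ll_new = log_like(x_new)
        if math.log(rng.random()) < ll_new - ll:   # accept or reject
            x, ll = x_new, ll_new
        chain.append(x)
    return chain

# Synthetic target profile: exp(-k t) at three times, generated with k = 0.03
t_obs = [10.0, 30.0, 60.0]
c_obs = [0.74, 0.41, 0.165]

def log_like(k):
    """Gaussian misfit (sd 0.01) between modelled and observed values."""
    if k <= 0:
        return float("-inf")
    sse = sum((math.exp(-k * t) - c) ** 2 for t, c in zip(t_obs, c_obs))
    return -sse / (2.0 * 0.01**2)

rng = random.Random(7)
chain = metropolis(log_like, x0=0.05, proposal_sd=0.005, n_steps=5_000, rng=rng)
k_hat = sum(chain[1_000:]) / len(chain[1_000:])   # posterior mean after burn-in
```

Treating observed profiles as the target of a likelihood, rather than fitting rate constants directly, is what lets this class of samplers infer pathway preferences that are not obvious from the raw data.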
Jarvis, N.; Kreuger, J.; Lindahl, A.; Gärdenäs, A.; Alavi, G.; Roulier, S.
The objective of this study was to identify the main controls on the export of pesticides from a small agricultural catchment in south Sweden (Vemmenhög, 9 km²), specifically highlighting and contrasting the impacts of management practices (pesticide application timing and dose), spatial variation in soil properties, and the significance of small-scale properties related to soil structure and macropore flow. A process-based simulation model (MACRO) applicable at the soil profile scale was used to estimate diffuse leaching. Model parameterisation was based on comprehensive data on soil properties and management practices collected in the catchment from soil surveys and farmer questionnaires, and on column breakthrough experiments for a non-reactive tracer and the herbicide MCPA. Results are presented from two different approaches to upscaling model predictions: (i) stochastic (Monte Carlo) simulations based on Latin hypercube sampling, accounting for parameter correlation using pedotransfer functions derived from local data, and (ii) deterministic aggregation based on three landscape 'elements': hilltops, midslopes and hollows. Model simulations were compared with concentrations of MCPA measured at the field scale in tile drain outflow, and at the catchment outlet. The contribution from point sources is estimated from the difference between the measured export of MCPA from the catchment and the upscaled model predictions.
Energy Technology Data Exchange (ETDEWEB)
Ghoos, K., E-mail: kristel.ghoos@kuleuven.be [KU Leuven, Department of Mechanical Engineering, Celestijnenlaan 300A, 3001 Leuven (Belgium); Dekeyser, W. [KU Leuven, Department of Mechanical Engineering, Celestijnenlaan 300A, 3001 Leuven (Belgium); Samaey, G. [KU Leuven, Department of Computer Science, Celestijnenlaan 200A, 3001 Leuven (Belgium); Börner, P. [Institute of Energy and Climate Research (IEK-4), FZ Jülich GmbH, D-52425 Jülich (Germany); Baelmans, M. [KU Leuven, Department of Mechanical Engineering, Celestijnenlaan 300A, 3001 Leuven (Belgium)
2016-10-01
The plasma and neutral transport in the plasma edge of a nuclear fusion reactor is usually simulated using coupled finite volume (FV)/Monte Carlo (MC) codes. However, under conditions of future reactors like ITER and DEMO, convergence issues become apparent. This paper examines the convergence behaviour and the numerical error contributions with a simplified FV/MC model for three coupling techniques: Correlated Sampling, Random Noise and Robbins-Monro. In addition, practical procedures to estimate the errors in complex codes are proposed. Moreover, first results with more complex models show that an order-of-magnitude speedup can be achieved without any loss in accuracy by making use of averaging in the Random Noise coupling technique.
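The Robbins-Monro coupling technique can be sketched on a toy noisy fixed-point iteration: with step sizes a_n = 1/n, the iterates converge to the fixed point of the expected map even though each evaluation is corrupted by Monte Carlo noise. The linear map and noise level below are invented for illustration.

```python
import random

def robbins_monro(noisy_map, x0, n_iter, rng):
    """Robbins-Monro iteration x <- (1 - a_n) x + a_n F(x), a_n = 1/n.
    The decaying step sizes average out the Monte Carlo noise in F,
    unlike plain fixed-point iteration, whose error floor is set by
    the noise level."""
    x = x0
    for n in range(1, n_iter + 1):
        a = 1.0 / n
        x = (1.0 - a) * x + a * noisy_map(x, rng)
    return x

# Toy coupled iteration: expected map F(x) = 0.5 x + 1, fixed point x* = 2
def noisy_map(x, rng):
    return 0.5 * x + 1.0 + rng.gauss(0.0, 0.5)   # MC noise of std 0.5

rng = random.Random(3)
x_rm = robbins_monro(noisy_map, x0=0.0, n_iter=20_000, rng=rng)
```

This is the averaging effect the paper exploits: the statistical error of the coupled FV/MC iteration shrinks with the iteration count instead of stalling at the per-evaluation Monte Carlo noise.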
Improving computational efficiency of Monte-Carlo simulations with variance reduction
Turner, A
2013-01-01
CCFE perform Monte-Carlo transport simulations on large and complex tokamak models such as ITER. Such simulations are challenging since streaming and deep penetration effects are equally important. In order to make such simulations tractable, both variance reduction (VR) techniques and parallel computing are used. It has been found that the application of VR techniques in such models significantly reduces the efficiency of parallel computation due to 'long histories'. VR in MCNP can be accomplished using energy-dependent weight windows. The weight window represents an 'average behaviour' of particles, and large deviations in the arriving weight of a particle give rise to extreme amounts of splitting being performed and a long history. When running on parallel clusters, a long history can have a detrimental effect on the parallel efficiency - if one process is computing the long history, the other CPUs complete their batch of histories and wait idle. Furthermore some long histories have been found to be effect...
Wetting of polymer liquids: Monte Carlo simulations and self-consistent field calculations
Müller, M
2003-01-01
Using Monte Carlo simulations and self-consistent field (SCF) theory we study the surface and interface properties of a coarse grained off-lattice model. In the simulations we employ the grand canonical ensemble together with a reweighting scheme in order to measure surface and interface free energies and discuss various methods for accurately locating the wetting transition. In the SCF theory, we use a partial enumeration scheme to incorporate single-chain properties on all length scales and use a weighted density functional for the excess free energy. The results of various forms of the density functional are compared quantitatively to the simulation results. For the theory to be accurate, it is important to decompose the free energy functional into a repulsive and an attractive part, with different approximations for the two parts. Measuring the effective interface potential for our coarse grained model we explore routes for controlling the equilibrium wetting properties. (i) Coating of the substrate by an...
Lessons from Monte Carlo simulations of the performance of a dual-readout fiber calorimeter
Akchurin, N; Cardini, A; Cascella, M; De Pedis, D; Ferrari, R; Fracchia, S; Franchino, S; Fraternali, M; Gaudio, G; Genova, P; Hauptman, J; La Rotonda, L; Lee, S; Livan, M; Meoni, E; Pinci, D; Policicchio, A; Saraiva, J G; Scuri, F; Sill, A; Venturelli, T; Wigmans, R
2014-01-01
The RD52 calorimeter uses the dual-readout principle to detect both electromagnetic and hadronic showers, as well as muons. Scintillation and Cherenkov light provide the two signals which, in combination, allow for superior hadronic performance. In this paper, we report on detailed, GEANT4 based Monte Carlo simulations of the performance of this instrument. The results of these simulations are compared in great detail to measurements that have been carried out and published by the DREAM Collaboration. This comparison makes it possible to understand subtle details of the shower development in this unusual particle detector. It also allows for predictions of the improvement in the performance that may be expected for larger detectors of this type. These studies also revealed some inadequacies in the GEANT4 simulation packages, especially for hadronic showers, but also for the Cherenkov signals from electromagnetic showers.
Monte Carlo simulations of microchannel plate detectors. I. Steady-state voltage bias results.
Wu, Ming; Kruschwitz, Craig A; Morgan, Dane V; Morgan, Jiaming
2008-07-01
X-ray detectors based on straight-channel microchannel plates (MCPs) are a powerful diagnostic tool for two-dimensional, time-resolved imaging and time-resolved x-ray spectroscopy in the fields of laser-driven inertial confinement fusion and fast Z-pinch experiments. Understanding the behavior of microchannel plates as used in such detectors is critical to understanding the data obtained. The subject of this paper is a Monte Carlo computer code we have developed to simulate the electron cascade in a MCP under a static applied voltage. Also included in the simulation is elastic reflection of low-energy electrons from the channel wall, which is important at lower voltages. When model results were compared to measured MCP sensitivities, good agreement was found. Spatial resolution simulations of MCP-based detectors were also presented and found to agree with experimental measurements.
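The electron cascade can be caricatured as a branching process in which each wall strike multiplies every electron by a random secondary yield. This toy model, with invented parameters, is far simpler than the code described here (it ignores channel geometry, fields and elastic reflection), but it shows why the mean gain grows geometrically with the number of strikes.

```python
import math
import random

def poisson(lam, rng):
    """Poisson variate via Knuth's multiplication algorithm (fine for
    the small means used here)."""
    l, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= l:
            return k
        k += 1

def mcp_gain(n_strikes, delta, rng):
    """Toy cascade: each wall strike replaces every electron by a
    Poisson(delta) number of secondaries; returns the final count."""
    electrons = 1
    for _ in range(n_strikes):
        electrons = sum(poisson(delta, rng) for _ in range(electrons))
        if electrons == 0:       # the cascade can die out early
            break
    return electrons

rng = random.Random(5)
trials = [mcp_gain(8, 1.8, rng) for _ in range(2_000)]
avg = sum(trials) / len(trials)  # mean gain ~ delta^n_strikes = 1.8^8 ~ 110
```

The gain distribution is broad (some cascades die out, others run away), which is one reason full Monte Carlo treatment, including low-energy elastic reflection, is needed to reproduce measured MCP sensitivities.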
Monte Carlo Sampling with Linear Inverse Kinematics for Simulation of Protein Flexible Regions.
Hayward, Steven; Kitao, Akio
2015-08-11
A Monte Carlo linear inverse-kinematics method for the simulation of protein chains with fixed ends is introduced. It includes backbone bond-angle bending and simultaneous loop and ring closure to allow full proline ring flexibility. An obstacle to linear null-space methods is the eventual drift of the end group. Maintenance of the end group at its initial position by occasional reset is performed in a way that is consistent with the overall methodology and minimally disruptive to the current conformation. The implementation permitted multiple rigid regions within the chain, enabling the simulation of domain movements where domains are rigid bodies connected by flexible interdomain regions. The method was tested on polyalanine, polyglycine, loop 6 of triosephosphate isomerase, and glutamine binding protein. Simulations of glutamine binding protein, where only 11 of the 226 residues at the interdomain bending regions were flexible, accurately reproduced the experimentally determined domain movement.
Hybrid Monte-Carlo simulation of interacting tight-binding model of graphene
Smith, Dominik
2013-01-01
In this work, results are presented of Hybrid-Monte-Carlo simulations of the tight-binding Hamiltonian of graphene, coupled to an instantaneous long-range two-body potential which is modeled by a Hubbard-Stratonovich auxiliary field. We present an investigation of the spontaneous breaking of the sublattice symmetry, which corresponds to a phase transition from a conducting to an insulating phase and which occurs when the effective fine-structure constant $\\alpha$ of the system crosses above a certain threshold $\\alpha_C$. Qualitative comparisons to earlier works on the subject (which used larger system sizes and higher statistics) are made and it is established that $\\alpha_C$ is of a plausible magnitude in our simulations. Also, we discuss differences between simulations using compact and non-compact variants of the Hubbard field and present a quantitative comparison of distinct discretization schemes of the Euclidean time-like dimension in the Fermion operator.
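Hybrid Monte Carlo itself is easy to demonstrate on a toy action. The sketch below applies the algorithm (momentum refresh, leapfrog trajectory, Metropolis accept/reject) to the one-dimensional Gaussian action S(x) = x²/2 rather than the graphene fermion operator; all parameter values are illustrative.

```python
import math, random

def hmc_sample(n_samples=4000, n_steps=10, eps=0.3, seed=2):
    """Minimal Hybrid Monte Carlo for the toy action S(x) = x^2/2 (a
    standard normal target): refresh the momentum, integrate a leapfrog
    trajectory, then apply a Metropolis accept/reject on the change in
    the Hamiltonian H = S(x) + p^2/2."""
    rng = random.Random(seed)
    grad = lambda y: y                       # dS/dx
    action = lambda y: 0.5 * y * y
    x, out = 0.0, []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)              # momentum refresh
        xn, pn = x, p - 0.5 * eps * grad(x)  # leapfrog: half momentum step
        for _ in range(n_steps - 1):
            xn += eps * pn
            pn -= eps * grad(xn)
        xn += eps * pn
        pn -= 0.5 * eps * grad(xn)           # final half momentum step
        dH = (action(xn) + 0.5 * pn * pn) - (action(x) + 0.5 * p * p)
        if dH <= 0.0 or rng.random() < math.exp(-dH):
            x = xn                           # Metropolis accept
        out.append(x)
    return out
```

In the lattice setting the same scheme is applied to the Hubbard-Stratonovich field, with the fermion determinant folded into the action.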
Directory of Open Access Journals (Sweden)
Mansoor Ahmed Siddiqui
2017-06-01
Full Text Available This research work is aimed at optimizing the availability of a framework comprising two units linked together in series configuration, utilizing the Markov model and Monte Carlo (MC) simulation techniques. In this article, effort has been made to develop a maintenance model that incorporates three distinct states for each unit, while taking into account their different levels of deterioration. Calculations are carried out using the proposed model for two distinct cases of corrective repair, namely perfect and imperfect repairs, with and without opportunistic maintenance. Initially, results are obtained using an analytical technique, i.e., the Markov model. Validation of these results is later carried out with the help of MC simulation. In addition, MC simulation-based codes also work well for frameworks that follow non-exponential failure and repair rates, and thus overcome the limitations of the Markov model.
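A minimal sketch of the MC validation step, under simplifying assumptions (two states per unit instead of the paper's three, exponential failure and repair, made-up rates): simulate each unit as an on/off Markov chain and estimate the fraction of time the series system is up, which can then be checked against the analytic steady-state availability.

```python
import random

def series_availability_mc(lam1=0.01, mu1=0.1, lam2=0.02, mu2=0.2,
                           horizon=3e4, dt=0.05, seed=3):
    """Monte Carlo estimate of the steady-state availability of two
    repairable units in series (the system is up only when both units
    are up).  Each unit is a two-state Markov chain with failure rate
    lam and repair rate mu, advanced with a small time step dt."""
    rng = random.Random(seed)
    up = [True, True]
    rates = [(lam1, mu1), (lam2, mu2)]
    steps = int(horizon / dt)
    up_steps = 0
    for _ in range(steps):
        if up[0] and up[1]:
            up_steps += 1
        for i, (lam, mu) in enumerate(rates):
            # transition probability over dt: failure if up, repair if down
            if rng.random() < (lam if up[i] else mu) * dt:
                up[i] = not up[i]
    return up_steps / steps

# Analytic steady state: A = (mu1/(lam1+mu1)) * (mu2/(lam2+mu2)) ~ 0.826
```

The same loop works unchanged if the exponential transitions are replaced by non-exponential ones, which is the advantage of MC simulation the abstract points out.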
Ohzeki, Masayuki
2017-01-23
Quantum annealing is a generic solver of optimization problems that uses fictitious quantum fluctuations. Its simulation in classical computing is often performed using quantum Monte Carlo simulation via the Suzuki-Trotter decomposition. However, the negative sign problem sometimes emerges in the simulation of quantum annealing with an elaborate driver Hamiltonian, since it belongs to a class of non-stoquastic Hamiltonians. In the present study, we propose an alternative way to avoid the negative sign problem involved in a particular class of non-stoquastic Hamiltonians. To check the validity of the method, we demonstrate it by applying it to a simple problem that includes the anti-ferromagnetic XX interaction, a typical instance of the non-stoquastic Hamiltonians.
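The Suzuki-Trotter decomposition mentioned above can be checked numerically on a toy system. The sketch below approximates the partition function of a single spin in transverse and longitudinal fields, H = -Γσx - hσz, by M imaginary-time slices and compares it with the exact result; this stoquastic example is sign-problem-free, unlike the non-stoquastic case the paper addresses.

```python
import math

def mat_mul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trotter_Z(beta, gamma, h, M):
    """Suzuki-Trotter approximation of Z = tr exp(-beta*H) for a single
    spin with H = -gamma*sigma_x - h*sigma_z, using M imaginary-time
    slices: Z_M = tr[(exp(dt*gamma*sx) * exp(dt*h*sz))^M], dt = beta/M."""
    dt = beta / M
    a, b = dt * gamma, dt * h
    Ex = [[math.cosh(a), math.sinh(a)],     # exp(a*sigma_x)
          [math.sinh(a), math.cosh(a)]]
    Ez = [[math.exp(b), 0.0],               # exp(b*sigma_z)
          [0.0, math.exp(-b)]]
    T = mat_mul(Ex, Ez)
    P = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(M):
        P = mat_mul(P, T)
    return P[0][0] + P[1][1]

def exact_Z(beta, gamma, h):
    """Exact partition function: the eigenvalues of H are -e and +e."""
    return 2.0 * math.cosh(beta * math.hypot(gamma, h))

# The Trotter error shrinks as the number of slices M grows
```

For a non-stoquastic driver, some matrix elements of the slice operator become negative, and the path weights can no longer all be interpreted as probabilities; that is the sign problem the paper sidesteps.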
Drixler, Fabian F
2015-04-01
This article quantifies the frequency of infanticide and abortion in one region of Japan by comparing observed fertility in a sample of 4.9 million person-years (1660-1872) with a Monte Carlo simulation of how many conceptions and births that population should have experienced. The simulation uses empirical values for the determinants of fertility from Eastern Japan itself as well as the best available studies of comparable populations. This procedure reveals that in several decades of the eighteenth century, at least 40% of pregnancies must have ended in either an induced abortion or an infanticide. In addition, the simulation results imply a rapid decline in the incidence of infanticide and abortion during the nineteenth century, when in a reverse fertility transition, this premodern family-planning regime gave way to a new age of large families.
Light-emitting diode lamp design by Monte Carlo photon simulation
Lee, Song Jae
2001-05-01
In this presentation, basic elements of light-emitting diode (LED) lamp design are discussed. In practical applications of LED lamps, the far-field photon distribution pattern is one of the important considerations. Both the reflecting cup and the lens surface profile employed in the design can be flexibly adjusted by a few parameters, such that the far-field photon distribution pattern is rather easily manipulated. For simulation of LED lamps, we have used a Monte Carlo photon simulation method. Based on the simulation results, we can verify or explain the effect of the various LED lamp design parameters on far-field patterns. Some of the important design examples are LED lamps with far-field patterns that are either tilted by a certain angle in the vertical direction or double-lobed in the horizontal direction. LED lamps with this type of far-field pattern may find application in special outdoor displays, for instance in large stadiums.
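As a minimal illustration of Monte Carlo photon simulation for far-field patterns, the sketch below histograms sampled emission angles for a bare Lambertian die (no reflecting cup or lens, hypothetical photon and bin counts); a real lamp simulation would trace each photon through the cup and lens surfaces before binning.

```python
import math, random

def far_field_histogram(n_photons=20000, n_bins=6, seed=4):
    """Toy Monte Carlo far-field pattern of a bare Lambertian LED die:
    sample the emission polar angle with radiant intensity ~ cos(theta)
    over the hemisphere and histogram photon counts in equal-width
    angle bins from 0 to 90 degrees."""
    rng = random.Random(seed)
    counts = [0] * n_bins
    for _ in range(n_photons):
        # inverse-CDF Lambertian sampling: P(Theta <= t) = sin(t)^2
        theta = math.asin(math.sqrt(rng.random()))
        counts[min(int(theta / (math.pi / 2) * n_bins), n_bins - 1)] += 1
    return counts
```

The per-bin photon count peaks near 45 degrees (proportional to cos θ sin θ), while the radiant intensity itself falls as cos θ; reshaping this histogram, e.g. into a tilted or double-lobed pattern, is exactly what the cup and lens parameters control.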
DSMC calculations for the double ellipse. [direct simulation Monte Carlo method]
Moss, James N.; Price, Joseph M.; Celenligil, M. Cevdet
1990-01-01
The direct simulation Monte Carlo (DSMC) method involves the simultaneous computation of the trajectories of thousands of simulated molecules in simulated physical space. Rarefied flow about the double ellipse for test case 6.4.1 has been calculated with the DSMC method of Bird. The gas is assumed to be nonreacting nitrogen flowing at a 30 degree incidence with respect to the body axis, and for the surface boundary conditions, the wall is assumed to be diffuse with full thermal accommodation and at a constant wall temperature of 620 K. A parametric study is presented that considers the effect of variations of computational domain, gas model, cell size, and freestream density on surface quantities.
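One ingredient of this abstract, the diffuse wall with full thermal accommodation, can be sketched directly: re-emitted molecules are sampled from a Maxwellian at the wall temperature (620 K and the nitrogen molecular mass are taken from the abstract), with the flux-weighted distribution for the wall-normal velocity component.

```python
import math, random

def diffuse_reflection(n=50000, T_wall=620.0, m=4.65e-26, seed=5):
    """Sketch of the DSMC diffuse-wall boundary condition with full
    thermal accommodation: molecules are re-emitted with velocities
    drawn from a Maxwellian at the wall temperature (620 K nitrogen,
    m = 4.65e-26 kg).  Returns the mean translational energy of the
    re-emitted molecules."""
    k_B = 1.380649e-23
    s = math.sqrt(k_B * T_wall / m)       # thermal speed scale
    rng = random.Random(seed)
    e_sum = 0.0
    for _ in range(n):
        vt1 = rng.gauss(0.0, s)           # tangential components: Gaussian
        vt2 = rng.gauss(0.0, s)
        # wall-normal component: flux-weighted (Rayleigh) distribution
        vn = s * math.sqrt(-2.0 * math.log(1.0 - rng.random()))
        e_sum += 0.5 * m * (vt1 ** 2 + vt2 ** 2 + vn ** 2)
    return e_sum / n

# Mean energy of a flux-sampled Maxwellian is 2*k_B*T (vs 1.5*k_B*T in the bulk)
```

A full DSMC code combines this boundary condition with free flight of the simulated molecules and a stochastic intermolecular collision step on the cell grid.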
Migration of Monte Carlo simulation of high energy atmospheric showers to GRID infrastructure
Energy Technology Data Exchange (ETDEWEB)
Vazquez, Adolfo; Contreras, Jose Luis [Grupo de Altas EnergIas Departamento de Fisica Atomica, Molecular y Nuclear Universidad Complutense de Madrid Avenida Complutense s/n, 28040 Madrid - Spain (Spain); Calle, Ignacio de la; Ibarra, Aitor; Tapiador, Daniel, E-mail: avazquez@gae.ucm.e [INSA. IngenierIa y Servicios Aeroespaciales S.A. Paseo Pintor Rosales 34, 28008 Madrid - Spain (Spain)
2010-04-01
A system to run Monte Carlo simulations in a Grid environment is presented. The proposed architectural design uses the current resources of the MAGIC Virtual Organization on EGEE and can be easily generalized to support the simulation of any similar experiment, such as that of the planned future European project, the Cherenkov Telescope Array. The proposed system is based on a client/server architecture and provides the user with a single access point to the simulation environment through a remote graphical user interface, the client. The client can be accessed via web browser, using web service technology, with no additional software installation required on the user side. The server processes the user request and uses a database for both the data catalogue and job management inside the Grid. The design, first production tests, and lessons learned from the system are discussed here.
Zhao, Huijuan; Zhang, Shunqi; Wang, Zhaoxia; Miao, Hui; Du, Zhen; Jiang, Jingying
2008-02-01
This article addresses optical parameter reconstruction technology for frequency-domain measurement of near-infrared diffused light. To mimic the cervix, a cylindrical model with a hole in the middle is used in the simulations and experiments. Given the structure of the cervix, Monte Carlo simulation is adopted to describe photon migration in tissue, and perturbation Monte Carlo is used for the reconstruction of the optical properties of the cervix. The difficulties in reconstructing cervical optical properties from frequency-domain measurements are the description of the tissue boundary, the expression of the frequency-domain signal, and the development of a rapid reconstruction method for clinical use. To obtain the frequency-domain signal in Monte Carlo simulation, discrete Fourier transformation of the photon migration history in the time domain is employed. By combining perturbation Monte Carlo simulation with the LM optimization technique, a rapid reconstruction algorithm is constructed, in which only one Monte Carlo simulation is needed. The reconstruction method is validated by simulations and experiments on a solid phantom. Simulation results show that the error in the reconstructed absorption coefficient is less than 3% for a certain range of optical properties. The algorithm is also shown to be robust to the initial guess of the optical properties and to noise. Experimental results show that the absorption coefficient can be reconstructed with an error of less than 10%. The reconstruction for one set of measurement data can be completed within one minute.
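The discrete-Fourier-transform step described above can be sketched as follows: given simulated photon arrival times and weights, the frequency-domain signal at modulation frequency f is S(f) = Σ_i w_i exp(-i 2π f t_i), whose magnitude and argument give the measured amplitude and phase. The arrival-time distribution below is hypothetical, standing in for a real photon-migration history.

```python
import cmath, math, random

def frequency_domain_signal(times, weights, f_mod):
    """Frequency-domain diffuse signal from a time-domain photon history:
    S(f) = sum_i w_i * exp(-1j*2*pi*f*t_i).  Returns (amplitude, phase)."""
    s = sum(w * cmath.exp(-2j * math.pi * f_mod * t)
            for t, w in zip(times, weights))
    return abs(s), cmath.phase(s)

# Hypothetical photon history: exponential arrival times with a 1 ns mean
rng = random.Random(6)
times = [rng.expovariate(1.0 / 1e-9) for _ in range(10000)]
weights = [1.0] * len(times)
amp, phase = frequency_domain_signal(times, weights, 100e6)  # 100 MHz source
```

Longer average path lengths shift the phase further negative and damp the amplitude, which is the information the frequency-domain reconstruction inverts.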
Monte-Carlo simulation of defect-cluster nucleation in metals during irradiation
Energy Technology Data Exchange (ETDEWEB)
Nakasuji, Toshiki, E-mail: t-nakasuji@iae.kyoto-u.ac.jp [Graduate School of Energy Science, Kyoto University, Uji, Kyoto 611-0011 (Japan); Morishita, Kazunori [Institute of Advanced Energy, Kyoto University, Uji, Kyoto 611-0011 (Japan); Ruan, Xiaoyong [Graduate School of Energy Science, Kyoto University, Uji, Kyoto 611-0011 (Japan)
2017-02-15
Highlights: • Monte Carlo simulations were performed to investigate the nucleation process of copper-vacancy clusters in Fe. • Nucleation paths were obtained as a function of temperature and damage rate. - Abstract: A multiscale modeling approach was applied to investigate the nucleation process of CRPs (copper-rich precipitates, i.e., copper-vacancy clusters) in α-Fe containing 1 at.% Cu during irradiation. Monte Carlo simulations were performed to investigate the nucleation process, with rate-theory equation analysis used to evaluate the concentration of displacement defects and the molecular dynamics technique used to determine CRP thermal stabilities in advance. Our MC simulations showed that there is a long incubation period at first, followed by rapid growth of CRPs. The incubation period depends on irradiation conditions such as the damage rate and temperature. The composition of a CRP varies with time during nucleation: the copper content of CRPs is relatively high at first and then decreases as the precipitate size increases. A widely accepted model of the CRP nucleation process is finally proposed.
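Cluster-nucleation Monte Carlo of this kind is usually a kinetic (residence-time) algorithm. The sketch below is a deliberately minimal version with made-up attachment and detachment rates for a single cluster; it reproduces the qualitative picture of a stochastic incubation time before the cluster reaches a critical size.

```python
import math, random

def kmc_nucleation_time(k_attach=5.0, k_detach=3.0, n_crit=20, seed=7):
    """Minimal kinetic Monte Carlo (residence-time algorithm) for cluster
    nucleation: a cluster of size n grows by monomer attachment (rate
    k_attach) or shrinks by detachment (rate k_detach, only if n > 1).
    Returns the simulated time for the cluster to reach size n_crit."""
    rng = random.Random(seed)
    n, t = 1, 0.0
    while n < n_crit:
        rates = [k_attach, k_detach if n > 1 else 0.0]
        total = sum(rates)
        # exponential waiting time until the next event
        t += -math.log(1.0 - rng.random()) / total
        if rng.random() * total < rates[0]:
            n += 1                            # attachment event
        else:
            n -= 1                            # detachment event
    return t

# Averaging over many seeds gives the mean incubation time before rapid growth
```

In the paper's setting the rates would come from rate-theory defect concentrations and MD-derived cluster stabilities rather than constants.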
Temperature dependence of magnetic order in Fe/(Ga,Mn)As studied by Monte Carlo simulations
Energy Technology Data Exchange (ETDEWEB)
Polesya, Svitlana; Minar, Jan; Ebert, Hubert [LMU Muenchen, Dept. Chemie und Biochemie/Phys. Chemie, Butenandtstrasse 11, D-81377 Muenchen (Germany); Back, Christian [Institut fuer Experimentelle Physik, Univ. Regensburg (Germany)
2008-07-01
The magnetic order of the heterogeneous interface system (GaMn)As/Fe at finite temperatures has been studied by Monte Carlo simulations. The ground-state magnetic properties were determined within ab initio electronic structure calculations using the SPR-TB-KKR Green's function method. All calculations have been performed for the semi-infinite system of (GaMn)As with 5% Mn covered by a 7 ML Fe film. The temperature-dependent properties of this system (with and without external magnetic field) have been studied using MC simulation. The exchange coupling within the Fe and (GaMn)As subsystems was found to be dominantly long-range ferromagnetic, whereas the coupling of Fe and Mn moments close to the interface is strongly antiferromagnetic. The Monte Carlo simulations lead to a Curie temperature of about 1000 K for the Fe film. Within the (GaMn)As subsystem, due to the polarisation induced by the Fe film, the average magnetisation at room temperature is still about 70% of its T=0 value for several layers close to the interface. These results are in full agreement with recent experimental findings.
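The finite-temperature MC step can be illustrated with a toy Metropolis simulation. The sketch below uses a small two-dimensional Ising ferromagnet (not the spin model with ab initio exchange constants used in the abstract) to show how the magnetization obtained by Monte Carlo sampling drops between low and high temperature.

```python
import math, random

def ising_magnetization(L=8, T=1.0, J=1.0, sweeps=400, seed=8):
    """Metropolis Monte Carlo for an L x L ferromagnetic Ising model with
    periodic boundaries; returns |magnetization| per spin averaged over
    the second half of the run (temperature T in units of J/k_B)."""
    rng = random.Random(seed)
    s = [[1] * L for _ in range(L)]
    mags = []
    for sweep in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (s[(i + 1) % L][j] + s[(i - 1) % L][j] +
                  s[i][(j + 1) % L] + s[i][(j - 1) % L])
            dE = 2.0 * J * s[i][j] * nb        # energy cost of flipping
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                s[i][j] = -s[i][j]             # Metropolis accept
        if sweep >= sweeps // 2:
            mags.append(abs(sum(map(sum, s))) / (L * L))
    return sum(mags) / len(mags)

# Below T_c ~ 2.27 J the magnetization stays near 1; well above it, near 0
```

Replacing the uniform coupling J with site-pair exchange constants from an electronic-structure calculation turns this toy into the kind of simulation the abstract describes.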
Study of magnetic properties for Co double-nanorings: Monte Carlo simulation
Energy Technology Data Exchange (ETDEWEB)
Ye, Qingying, E-mail: qyye@fjnu.edu.cn [College of Physics and Energy, Fujian Provincial Key Laboratory of Quantum Manipulation and New Energy Materials, Fujian Normal University, Fuzhou, 350007 (China); Chen, Shuiyuan [College of Physics and Energy, Fujian Provincial Key Laboratory of Quantum Manipulation and New Energy Materials, Fujian Normal University, Fuzhou, 350007 (China); Electrical and Computer Engineering, Northeastern University, Boston, 02115 (United States); Liu, Jingyao; Huang, Chao; Huang, Shengkai [College of Physics and Energy, Fujian Provincial Key Laboratory of Quantum Manipulation and New Energy Materials, Fujian Normal University, Fuzhou, 350007 (China); Huang, Zhigao, E-mail: zghuang@fjnu.edu.cn [College of Physics and Energy, Fujian Provincial Key Laboratory of Quantum Manipulation and New Energy Materials, Fujian Normal University, Fuzhou, 350007 (China)
2016-06-15
In this paper, a cobalt double-nanoring (Co D-N-ring) structure model was constructed. Based on the Monte Carlo (MC) simulation method combined with the Fast Fourier Transformation Micromagnetism (FFTM) method, the magnetic properties of Co D-N-rings with different geometric dimensions have been studied. The simulated results indicate that the magnetization steps in the hysteresis loops are the result of special spin configurations (SCs), i.e., the onion-type and vortex-type states, which are very different from those in many other nanostructures, such as nanometer thin films and nanotubes. Besides, Co D-N-rings with different geometric dimensions present interesting magnetization behavior, which is determined by changes in both the SCs and the exchange interaction in the Co D-N-rings. - Highlights: • A double-nanoring structure (D-N-rings) was proposed for cobalt nanometer thin films. • The Monte Carlo method combined with the FFTM method was used to simulate the magnetic properties of the Co D-N-rings. • Magnetization dynamic processes of the Co D-N-rings were obtained and interpreted through the evolution of the spin configurations. • The geometric dimensions strongly influence the magnetization behavior of the Co D-N-rings, which is determined by changes in both the SCs and the exchange interaction.
A study on the shielding of Iodine-131 using Monte Carlo Simulation
Energy Technology Data Exchange (ETDEWEB)
Jang, Dong Gun; Yang, Seoung Oh; Kim, Jung Ki; Lee, Sang Ho; Choi, Hyung Seok; Bae, Cheol Woo [Dongnam Institute of Radiological and Medical Sciences Cancer Center, Busan (Korea, Republic of)
2014-06-15
This study was designed to investigate bremsstrahlung and the radiation dose due to beta rays. Radiation attenuation in an I-131 treatment ward was analyzed for radioprotective aprons. Shielding materials containing lead or water were modeled in a Monte Carlo simulation, and the resulting interaction spectra were analyzed. The shielding materials were categorized according to thickness: 0.25 mm and 0.5 mm thick lead and 0.1 mm and 0.2 mm thick water shielding materials were configured in the Monte Carlo simulation for this study. A lead-only shielding method and a water-plus-lead shielding method were examined. As a result, when the 0.5 mm thick lead shielding was used, the radiation dose was similar to the result obtained with the water-plus-lead shielding. In the case of the 0.25 mm thick lead shielding, the shielding effect was somewhat smaller. However, that shielding method still reduced the dose by about 60% compared with no shielding material.
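A stripped-down version of such a shielding calculation is photon attenuation through a layered stack. The sketch below uses absorption-only transport with hypothetical attenuation coefficients (not I-131 or apron data), so the Monte Carlo answer can be checked against the analytic transmission exp(-Σ μᵢdᵢ).

```python
import math, random

def transmitted_fraction(layers, n_photons=100000, seed=9):
    """Toy Monte Carlo of normal-incidence photon attenuation through a
    stack of shielding layers, each given as (thickness_cm, mu_per_cm),
    with absorption only (no scattering or buildup).  Returns the
    fraction of photons that exit the stack."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(n_photons):
        alive = True
        for thickness, mu in layers:
            # sample an exponential free path inside this layer
            if -math.log(1.0 - rng.random()) / mu < thickness:
                alive = False          # absorbed before crossing the layer
                break
        if alive:
            survived += 1
    return survived / n_photons

# Illustrative (hypothetical) coefficients: 0.1 cm of a water-like layer
# with mu = 2/cm followed by 0.025 cm of a lead-like layer with mu = 30/cm
stack = [(0.1, 2.0), (0.025, 30.0)]
```

A realistic apron calculation would add scattering, the bremsstrahlung source spectrum, and energy-dependent attenuation coefficients, which is what the full simulation in the abstract provides.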