WorldWideScience

Sample records for 3-d monte-carlo analysis

  1. Investigation of the Power Coefficient of Reactivity of 3D CANDU Reactor through Detailed Monte Carlo Analysis

    The heat is removed by the heavy water coolant, which is completely separated from the stationary moderator. Due to the good neutron economy of the CANDU reactor, natural uranium fuel is used without enrichment. Because of this unique core configuration, there is less resonance absorption of neutrons in the fuel, which leads to a relatively small fuel temperature coefficient (FTC). The value of the FTC can even be positive due to 239Pu buildup during fuel depletion and also to neutron up-scattering by the oxygen atoms in the fuel. Unlike the pressurized light water reactor, CANDU-6 is well known to have a positive coolant void reactivity (CVR) and coolant temperature coefficient (CTC). Traditional reactor analysis uses the asymptotic scattering kernel, which neglects the thermal motion of nuclides such as U-238. However, it is well accepted that the thermal movement of the target can affect a scattering reaction in the vicinity of a scattering resonance and enhance neutron capture by the capture resonance. Some recent works have revealed that the thermal motion of U-238 affects the scattering reaction and that the resulting Doppler broadening of the scattering resonances enhances the FTC of thermal reactors, including PWRs, by 10-15%. In order to observe the impact of the Doppler broadening of the scattering resonances on the criticality and the FTC, a recent investigation was done for a clean and fresh CANDU fuel lattice using the Monte Carlo code MCNPX. In ref. 3 the so-called DBRC (Doppler Broadened Rejection Correction) method was adopted to consider the thermal movement of U-238. In this study, the safety parameters of CANDU-6 are re-evaluated using the continuous-energy Monte Carlo code SERPENT 2, which uses the DBRC method to simulate the thermal motion of U-238. The analysis is performed for a full 3-D CANDU-6 core and the PCR is evaluated near equilibrium burnup. For a high-fidelity Monte Carlo calculation
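
    As an illustration of the DBRC idea (not taken from the paper): candidate target-motion samples are accepted in proportion to the 0 K cross section, so the resonance structure survives in the sampled relative energies instead of being washed out by the constant-cross-section assumption of the asymptotic kernel. The sketch below uses a made-up single-resonance cross section and arbitrary units; `sigma_0K`, the Gaussian spread and all numbers are assumptions for illustration, not SERPENT 2's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigma_0K(E):
    # Toy 0 K cross section with one narrow scattering resonance (arbitrary units).
    return 1.0 + 50.0 / (1.0 + ((E - 6.67) / 0.05) ** 2)

def sample_relative_energy_dbrc(E_n, width=0.2, n_tries=10000):
    """Sample the neutron-target relative energy near a resonance, DBRC-style.

    A candidate is drawn from a Doppler-like spread around the neutron energy
    (a crude stand-in for free-gas kinematics), then accepted in proportion to
    the 0 K cross section so the resonance shape is preserved.
    """
    grid = np.linspace(E_n - 2 * width, E_n + 2 * width, 400)
    sigma_max = sigma_0K(grid).max()          # bound for the rejection step
    for _ in range(n_tries):
        E_rel = rng.normal(E_n, width)        # candidate relative energy
        if rng.random() < sigma_0K(E_rel) / sigma_max:
            return E_rel                      # accepted: resonance-weighted sample
    return E_n                                # fallback (practically unreachable)

print([round(sample_relative_energy_dbrc(6.6), 3) for _ in range(5)])
```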

  2. Combination of Monte Carlo and transfer matrix methods to study 2D and 3D percolation

    Saleur, H.; Derrida, B.

    1985-07-01

    In this paper we develop a method which combines the transfer matrix and the Monte Carlo methods to study the problem of site percolation in 2 and 3 dimensions. We use this method to calculate the properties of strips (2D) and bars (3D). Using a finite size scaling analysis, we obtain estimates of the threshold and of the exponents which confirm values already known. We discuss the advantages and the limitations of our method by comparing it with usual Monte Carlo calculations.
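
    For contrast with the transfer-matrix hybrid, a plain Monte Carlo estimate of the crossing probability on a finite 2D site-percolation lattice can be sketched as follows (Python; lattice sizes and trial counts are arbitrary choices, and this is the "usual Monte Carlo" baseline rather than the authors' combined method). Finite-size scaling of how such crossing curves sharpen with size is what yields threshold and exponent estimates.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(1)

def crosses(grid):
    """True if occupied sites connect the left edge to the right edge (4-neighbour)."""
    rows, cols = grid.shape
    seen = np.zeros_like(grid, dtype=bool)
    q = deque()
    for r in range(rows):
        if grid[r, 0]:
            seen[r, 0] = True
            q.append((r, 0))
    while q:
        r, c = q.popleft()
        if c == cols - 1:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols and grid[rr, cc] and not seen[rr, cc]:
                seen[rr, cc] = True
                q.append((rr, cc))
    return False

def crossing_probability(p, size, trials=400):
    return sum(crosses(rng.random((size, size)) < p) for _ in range(trials)) / trials

# The crossing curve steepens with lattice size near the 2D site threshold (~0.5927);
# finite-size scaling of such curves yields threshold and exponent estimates.
for p in (0.55, 0.59, 0.63):
    print(p, crossing_probability(p, size=24))
```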

  3. Transmutation efficiency in the prismatic deep burner HTR concept by a 3D Monte Carlo depletion analysis

    This paper summarizes studies performed on the Deep-Burner Modular Helium Reactor (DB-MHR) design concept. Feasibility and sensitivity studies as well as fuel-cycle studies with a probabilistic methodology are presented. Current investigations of design strategies in one-pass and two-pass scenarios, and the computational tools, are also presented. Computations on the prismatic design were performed on a full-core 3D model basis. The probabilistic MCNP-MONTEBURNS-ORIGEN chain, with either JEF2.2 or B-VI libraries, was used. One or two independently depleting media per assembly were accounted for. Due to the calculation time necessary to perform MCNP5 calculations with sufficient accuracy, the different parameters of the depletion calculations have to be optimized according to the desired accuracy of the results. Three strategies were compared: the two-pass scenario with driver and transmuter fuel loaded in three rings, the one-pass scenario with driver fuel only in a three-ring geometry, and finally the one-pass scenario in four rings. The two-pass scenario is the deepest burner, with about a 70% reduction in actinide mass from the PWR discharged fuel. However, the small difference obtained for incineration (∼5%) calls the value of this scenario into question, given the difficulty of the process for the transmuter fuel (TF). Finally, the main advantage of the two-pass scenario is the reduction of actinide activity. (author)

  4. Continuous-energy Monte Carlo methods for calculating generalized response sensitivities using TSUNAMI-3D

    This work introduces a new approach for calculating the sensitivity of generalized neutronic responses to nuclear data uncertainties using continuous-energy Monte Carlo methods. The GEneralized Adjoint Responses in Monte Carlo (GEAR-MC) method has enabled the calculation of high resolution sensitivity coefficients for multiple, generalized neutronic responses in a single Monte Carlo calculation with no nuclear data perturbations or knowledge of nuclear covariance data. The theory behind the GEAR-MC method is presented here and proof of principle is demonstrated by calculating sensitivity coefficients for responses in several 3D, continuous-energy Monte Carlo applications. (author)
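
    For orientation, the sensitivity coefficient that such methods target is the standard relative-to-relative derivative of a response R with respect to a cross section σ_x:

```latex
S_{R,\sigma_x} \;=\; \frac{\partial R / R}{\partial \sigma_x / \sigma_x}
\;=\; \frac{\sigma_x}{R}\,\frac{\partial R}{\partial \sigma_x}
```

    GEAR-MC estimates these coefficients during the forward Monte Carlo run itself, with no perturbed data libraries required.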

  5. 3D Monte Carlo radiation transfer modelling of photodynamic therapy

    Campbell, C. Louise; Christison, Craig; Brown, C. Tom A.; Wood, Kenneth; Valentine, Ronan M.; Moseley, Harry

    2015-06-01

    The effects of ageing and skin type on Photodynamic Therapy (PDT) for different treatment methods have been theoretically investigated. A multilayered Monte Carlo Radiation Transfer model is presented in which daylight-activated PDT and conventional PDT are compared. It was found that light penetrates deeper through older skin with a lighter complexion, which translates into a deeper effective treatment depth. The effect of ageing was found to be larger for darker skin types. The investigation further supports the use of daylight as a potential light source for PDT, with effective treatment depths of about 2 mm achievable.
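
    As a flavour of what such a model computes (an illustration, not the paper's code): a deliberately minimal photon-packet random walk in semi-infinite tissue. The optical coefficients below are hypothetical and scattering is taken as isotropic, whereas the actual model is multilayered with realistic phase functions; a deeper mean absorption depth for the weakly absorbing parameter set mirrors the paper's qualitative finding.

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_absorption_depth(mu_a, mu_s, n_photons=5000):
    """Minimal MC of photon packets in a semi-infinite medium (depth only).
    mu_a, mu_s: absorption/scattering coefficients in 1/mm (hypothetical)."""
    mu_t = mu_a + mu_s
    depths = []
    for _ in range(n_photons):
        z, cos_t = 0.0, 1.0                            # enter at surface, heading down
        while True:
            z += cos_t * rng.exponential(1.0 / mu_t)   # free flight to next event
            if z < 0.0:                                # escaped back out of the tissue
                break
            if rng.random() < mu_a / mu_t:             # absorbed at this depth
                depths.append(z)
                break
            cos_t = 2.0 * rng.random() - 1.0           # isotropic scatter (simplification)
    return np.mean(depths)

print(mean_absorption_depth(mu_a=0.1, mu_s=5.0))    # weak absorber: deeper penetration
print(mean_absorption_depth(mu_a=0.3, mu_s=15.0))   # strong absorber: shallower
```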

  6. Continuous-energy Monte Carlo methods for calculating generalized response sensitivities using TSUNAMI-3D

    Perfetti, Christopher M. [ORNL]; Rearden, Bradley T. [ORNL]

    2014-01-01

    This work introduces a new approach for calculating sensitivity coefficients for generalized neutronic responses to nuclear data uncertainties using continuous-energy Monte Carlo methods. The approach presented in this paper, known as the GEAR-MC method, allows for the calculation of generalized sensitivity coefficients for multiple responses in a single Monte Carlo calculation with no nuclear data perturbations or knowledge of nuclear covariance data. The theory behind the GEAR-MC method is presented here, and proof of principle is demonstrated by using the GEAR-MC method to calculate sensitivity coefficients for responses in several 3D, continuous-energy Monte Carlo applications.

  7. A combination of Monte Carlo and transfer matrix methods to study 2D and 3D percolation

    Saleur, H.; Derrida, B.

    1985-01-01

    In this paper we develop a method which combines the transfer matrix and the Monte Carlo methods to study the problem of site percolation in 2 and 3 dimensions. We use this method to calculate the properties of strips (2D) and bars (3D). Using a finite size scaling analysis, we obtain estimates of the threshold and of the exponents which confirm values already known. We discuss the advantages and the limitations of our method by comparing it with usual Monte Carlo calculations.

  9. A highly heterogeneous 3D PWR core benchmark: deterministic and Monte Carlo method comparison

    Physical analyses of LWR potential performance with regard to fuel utilization require a substantial effort dedicated to validating the deterministic models used for these analyses. Advances in both codes and computer technology make it possible to validate these models on complex 3D core configurations close to the physical situations encountered (both steady-state and transient configurations). In this paper, we used the Monte Carlo transport code TRIPOLI-4 to describe a whole 3D large-scale and highly heterogeneous LWR core. The aim of this study is to validate the deterministic CRONOS2 code against the Monte Carlo code TRIPOLI-4 in a relevant PWR core configuration. To this end, a 3D pin-by-pin model with a large number of volumes (4.3 million) and media (around 23,000) was established to precisely characterize the core at the equilibrium cycle, using refined burnup and moderator density maps. The configuration selected for this analysis is a very heterogeneous PWR high-conversion core with fissile (MOX fuel) and fertile (depleted uranium) zones. Furthermore, a tight-pitch lattice is selected (to increase conversion of 238U into 239Pu), which leads to a harder neutron spectrum than a standard PWR assembly. This benchmark shows two main points. First, independent replicas are an appropriate method to achieve a fair variance estimation when the dominance ratio is near 1. Second, the two-energy-group diffusion operator gives satisfactory results compared to TRIPOLI-4, even with a highly heterogeneous neutron flux map and a harder spectrum
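
    The independent-replica point can be demonstrated on a toy model (an illustration, not the benchmark itself). Below, an AR(1) chain stands in for cycle-wise k_eff estimates whose strong autocorrelation mimics a dominance ratio near 1; the naive standard error computed from one run is overly optimistic, while the spread of independent replica means is fair by construction. All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

def fake_keff_cycles(n_cycles, rho=0.95):
    """Stand-in for cycle-wise k_eff estimates: an AR(1) chain whose strong
    autocorrelation mimics a dominance ratio near 1 (toy model only)."""
    x = np.empty(n_cycles)
    x[0] = rng.normal()
    for i in range(1, n_cycles):
        x[i] = rho * x[i - 1] + np.sqrt(1 - rho**2) * rng.normal()
    return 1.0 + 0.001 * x

n_rep, n_cycles = 50, 200
run_means = np.array([fake_keff_cycles(n_cycles).mean() for _ in range(n_rep)])

# Naive SE from a single run assumes independent cycles -- too optimistic here.
one_run = fake_keff_cycles(n_cycles)
print("naive SE of a run mean   :", one_run.std(ddof=1) / np.sqrt(n_cycles))
# Fair SE of a run mean, from the spread of independent replicas.
print("replica SE of a run mean :", run_means.std(ddof=1))
```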

  10. Feasibility and value of fully 3D Monte Carlo reconstruction in single photon emission computed tomography

    The accuracy of Single Photon Emission Computed Tomography (SPECT) images is degraded by physical effects, namely photon attenuation, Compton scatter and spatially varying collimator response. The 3D nature of these effects is usually neglected by the methods used to correct for them. To deal with the 3D nature of the problem, a 3D projector modeling the spread of photons in 3D can be used in iterative tomographic reconstruction. The 3D projector can be estimated analytically with some approximations, or using precise Monte Carlo simulations. The latter approach has not yet been applied to fully 3D reconstruction due to impractical storage and computation time. The goal of this paper was to determine the gain to be expected from fully 3D Monte Carlo (F3DMC) modeling of the projector in iterative reconstruction, compared to conventional 2D and 3D reconstruction methods. As a proof of concept, two small datasets were considered. The projections of the two phantoms were simulated using the Monte Carlo simulation code GATE, as was the corresponding projector, taking into account all physical effects (attenuation, scatter, camera point spread function) affecting the imaging process. F3DMC was implemented by using this 3D projector in a maximum likelihood expectation maximization (MLEM) iterative reconstruction. To assess the value of F3DMC, data were reconstructed using four methods: filtered backprojection (FBP); MLEM without attenuation correction (MLEM); MLEM with attenuation correction, Jaszczak scatter correction and 3D correction for depth-dependent spatial resolution using an analytical model (MLEMC); and F3DMC. Our results suggest that F3DMC mainly improves imaging sensitivity and signal-to-noise ratio (SNR): sensitivity is multiplied by about 10^3 and SNR is increased by 20 to 70% compared to MLEMC. Computation of a more robust projector and application of the method to more realistic datasets are currently under investigation. (authors)
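
    For orientation, the MLEM iteration into which the (here Monte Carlo estimated) projector a_ij enters is the standard update; with y_i the measured projections and x_j the current activity estimate,

```latex
x_j^{(k+1)} \;=\; \frac{x_j^{(k)}}{\sum_i a_{ij}}
\sum_i a_{ij}\, \frac{y_i}{\sum_{j'} a_{ij'}\, x_{j'}^{(k)}}
```

    F3DMC and MLEMC differ only in how a_ij is obtained (full Monte Carlo versus analytical approximations), not in the iteration itself.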

  11. Variance reduction in Monte Carlo analysis of rarefied gas diffusion.

    Perlmutter, M.

    1972-01-01

    The problem of rarefied diffusion between parallel walls is solved using the Monte Carlo method. The diffusing molecules are evaporated or emitted from one of the two parallel walls and diffuse through another molecular species. The Monte Carlo analysis treats the diffusing molecule as undergoing a Markov random walk, and the local macroscopic properties are found as the expected value of the random variable, the random walk payoff. By biasing the transition probabilities and changing the collision payoffs, the expected Markov walk payoff is retained but its variance is reduced so that the Monte Carlo result has a much smaller error.
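
    The mechanism described, biasing the transition probabilities and correcting the payoff with a likelihood-ratio weight so the expectation is unchanged, can be shown on a one-dimensional toy walk (an illustration, not the paper's rarefied-gas problem; all probabilities and boundaries are invented):

```python
import numpy as np

rng = np.random.default_rng(4)

def walk(p_true=0.4, p_sim=0.4, goal=10, lose=-10):
    """One biased Markov walk from 0. Payoff is 1 if `goal` is hit before `lose`,
    times the likelihood-ratio weight that corrects the biased sampling back
    to the true transition probabilities."""
    x, w = 0, 1.0
    while lose < x < goal:
        if rng.random() < p_sim:
            x, w = x + 1, w * (p_true / p_sim)
        else:
            x, w = x - 1, w * ((1 - p_true) / (1 - p_sim))
    return w if x == goal else 0.0

def estimate(p_sim, n=20000):
    v = np.array([walk(p_sim=p_sim) for _ in range(n)])
    return v.mean(), v.std(ddof=1) / np.sqrt(n)

print("analog sampling :", estimate(0.4))   # unbiased, large relative error
print("biased sampling :", estimate(0.6))   # same expectation, much smaller error
```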

  12. Bayesian phylogeny analysis via stochastic approximation Monte Carlo

    Cheon, Sooyoung

    2009-11-01

    Monte Carlo methods have received much attention in the recent literature on phylogeny analysis. However, conventional Markov chain Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, tend to get trapped in a local mode when simulating from the posterior distribution of phylogenetic trees, rendering the inference ineffective. In this paper, we apply an advanced Monte Carlo algorithm, the stochastic approximation Monte Carlo (SAMC) algorithm, to Bayesian phylogeny analysis. Our method is compared with two popular Bayesian phylogeny software packages, BAMBE and MrBayes, on simulated and real datasets. The numerical results indicate that our method outperforms BAMBE and MrBayes. Among the three methods, SAMC produces the consensus trees with the highest similarity to the true trees and the model parameter estimates with the smallest mean square errors, while costing the least CPU time. © 2009 Elsevier Inc. All rights reserved.

  13. MCMG: a 3-D multigroup P3 Monte Carlo code and its benchmarks

    In this paper a 3-D multigroup Monte Carlo neutron transport code, MCMG, has been developed from the coupled neutron-photon transport Monte Carlo code MCNP. The continuous-energy cross section library of the MCNP code is replaced by multigroup cross section data generated by a lattice transport code such as WIMS. MCMG retains the strong capabilities of MCNP for geometry treatment, tallying, variance reduction techniques and plotting. The multigroup neutron scattering cross sections adopt the Pn (n ≤ 3) approximation. The test results are in good agreement with the results of other methods and with experiments. The number of energy groups can vary from a few groups to many groups, and either macroscopic or microscopic cross sections can be used. (author)

  14. Implementation of 3D Lattice Monte Carlo Simulation on a Cluster of Symmetric Multiprocessors

    雷咏梅; 蒋英; et al.

    2002-01-01

    This paper presents a new approach to parallelizing 3D lattice Monte Carlo algorithms used in the numerical simulation of polymers on ZiQiang 2000, a cluster of symmetric multiprocessors (SMPs). The combined load for cell and energy calculations over the time step is balanced together to form a single spatial decomposition. Basic aspects and strategies of running Monte Carlo calculations on parallel computers are studied. The different steps involved in porting the software to a parallel architecture based on ZiQiang 2000 running under Linux and MPI are described briefly. It is found that parallelization becomes more advantageous when either the lattice is very large or the model contains many cells and chains.

  15. Adaptive Multi-GPU Exchange Monte Carlo for the 3D Random Field Ising Model

    Navarro, C A; Deng, Youjin

    2015-01-01

    The study of disordered spin systems through Monte Carlo simulations has proven to be a hard task due to the adverse energy landscape present in the low-temperature regime, which makes it difficult for the simulation to escape from a local minimum. Replica-based algorithms such as Exchange Monte Carlo (also known as parallel tempering) are effective at overcoming this problem, reaching equilibrium on disordered spin systems such as the spin glass or random field models by exchanging information between replicas at neighbouring temperatures. In this work we present a multi-GPU Exchange Monte Carlo method designed for the simulation of the 3D Random Field Ising Model. The implementation is based on a two-level parallelization scheme that allows the method to scale its performance in the presence of faster GPUs as well as multiple GPUs. In addition, we modified the original algorithm by adapting the set of temperatures according to the exchange rate observed from short trial runs, leading to an increased exchange rate...
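
    The exchange step at the heart of Exchange Monte Carlo is compact enough to sketch (Python; a 1D double-well potential stands in for the 3D Random Field Ising Model, and all parameters are arbitrary). Neighbouring temperatures swap configurations with probability min(1, e^Δ), where Δ = (β_j − β_i)(E_j − E_i), so cold replicas can inherit states that crossed energy barriers at high temperature:

```python
import numpy as np

rng = np.random.default_rng(5)

def energy(x):
    """Toy double-well stand-in for a rough disordered-spin energy landscape."""
    return 8.0 * (x**2 - 1.0)**2

betas = np.array([0.2, 0.5, 1.0, 2.0, 4.0])   # inverse temperatures, one replica each
xs = rng.normal(size=betas.size)              # one configuration per replica

def metropolis_sweep(xs, betas, n_steps=10):
    for i in range(betas.size):
        for _ in range(n_steps):
            prop = xs[i] + rng.normal(scale=0.5)
            if rng.random() < np.exp(-betas[i] * (energy(prop) - energy(xs[i]))):
                xs[i] = prop

def exchange(xs, betas):
    """Attempt swaps between neighbouring temperatures (the EMC step)."""
    for i in range(betas.size - 1):
        d = (betas[i + 1] - betas[i]) * (energy(xs[i + 1]) - energy(xs[i]))
        if rng.random() < np.exp(min(0.0, d)):   # accept with prob min(1, e^d)
            xs[i], xs[i + 1] = xs[i + 1], xs[i]

for _ in range(1000):
    metropolis_sweep(xs, betas)
    exchange(xs, betas)
print("coldest replica settled near a minimum:", xs[-1])
```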

  16. ORPHEE research reactor: 3D core depletion calculation using Monte-Carlo code TRIPOLI-4®

    Damian, F.; Brun, E.

    2014-06-01

    ORPHEE is a research reactor located at CEA Saclay. It aims at producing neutron beams for experiments. It is a pool-type reactor (heavy water), and the core is cooled by light water. Its thermal power is 14 MW. The ORPHEE core is 90 cm high and has a cross section of 27×27 cm². It is loaded with eight fuel assemblies characterized by various numbers of fuel plates. The fuel plates are composed of aluminium and High Enriched Uranium (HEU). It is a once-through core with a fuel cycle length of approximately 100 Equivalent Full Power Days (EFPD) and a maximum burnup of 40%. Various analyses in progress at CEA concern the determination of the core neutronic parameters during irradiation. Given the geometrical complexity of the core and the quasi-absence of thermal feedback in nominal operation, the 3D core depletion calculations are performed with the Monte Carlo code TRIPOLI-4® [1,2,3]. A preliminary validation of the depletion calculation was performed on a 2D core configuration by comparison with the deterministic transport code APOLLO2 [4]. The analysis showed the reliability of TRIPOLI-4® in calculating a complex core configuration using a large number of depleting regions with a high level of confidence.

  17. Development of 3d reactor burnup code based on Monte Carlo method and exponential Euler method

    Burnup analysis plays a key role in fuel breeding, transmutation and post-processing in nuclear reactors. Burnup codes based on one-dimensional and two-dimensional transport methods have difficulty meeting the accuracy requirements. A three-dimensional burnup analysis code based on the Monte Carlo method and the exponential Euler method has been developed. The coupled code combines the advantages of the Monte Carlo method for neutron transport calculation in complex geometry with those of FISPACT for fast and precise inventory calculation; the resonance self-shielding effect in the inventory calculation can also be considered. The IAEA benchmark test problem was adopted for code validation. Good agreement was shown in comparison with other participants' results. (authors)
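
    The exponential Euler idea: over one burnup step the loss rate λ and production rate P of each nuclide are frozen (the loss rate coming from decay plus flux-weighted absorption supplied by the Monte Carlo solution), and the resulting linear equation dN/dt = -λN + P is solved exactly. A minimal sketch with an invented two-nuclide chain and arbitrary rates:

```python
import numpy as np

def exponential_euler_step(N, lam, prod, dt):
    """One exponential-Euler step for dN/dt = -lam*N + prod, with lam and prod
    frozen over the step -- the frozen linear ODE is then solved exactly."""
    decay = np.exp(-lam * dt)
    return N * decay + np.where(lam > 0, prod * (1 - decay) / lam, prod * dt)

# Toy two-nuclide chain: parent -> daughter (hypothetical rates, arbitrary units).
lam = np.array([1e-4, 5e-5])              # loss rates of parent and daughter
N   = np.array([1.0e24, 0.0])             # initial number densities
for _ in range(100):                      # 100 burnup steps of one hour each
    prod = np.array([0.0, lam[0] * N[0]]) # daughter fed by parent losses
    N = exponential_euler_step(N, lam, prod, dt=3600.0)
print(N)
```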

  18. Monte Carlo methods for the reliability analysis of Markov systems

    This paper presents Monte Carlo methods for the reliability analysis of Markov systems. Markov models are useful in treating dependencies between components. The present paper shows how the adjoint Monte Carlo method for the continuous time Markov process can be derived from the method for the discrete-time Markov process by a limiting process. The straightforward extensions to the treatment of mean unavailability (over a time interval) are given. System unavailabilities can also be estimated; this is done by making the system failed states absorbing, and not permitting repair from them. A forward Monte Carlo method is presented in which the weighting functions are related to the adjoint function. In particular, if the exact adjoint function is known then weighting factors can be constructed such that the exact answer can be obtained with a single Monte Carlo trial. Of course, if the exact adjoint function is known, there is no need to perform the Monte Carlo calculation. However, the formulation is useful since it gives insight into choices of the weight factors which will reduce the variance of the estimator
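
    An analog forward Monte Carlo estimator of the kind being improved here fits in a few lines for a single repairable component (an illustration; the failure and repair rates are invented). The adjoint-related weights discussed in the paper would act on exactly such histories to reduce the variance of this estimator:

```python
import numpy as np

rng = np.random.default_rng(6)

def unavailability(t_miss, lam=1e-3, mu=1e-1, n_hist=50000):
    """Analog forward MC for one repairable component with exponential
    failure (rate lam) and repair (rate mu); estimates P(down at t_miss)."""
    down = 0
    for _ in range(n_hist):
        t, up = 0.0, True
        while True:
            t += rng.exponential(1.0 / (lam if up else mu))  # next transition time
            if t >= t_miss:
                break                    # state is frozen when we pass t_miss
            up = not up
        down += not up
    return down / n_hist

# Analytic check: (lam/(lam+mu)) * (1 - exp(-(lam+mu)*t)) ~ 0.0099 at t = 500.
print(unavailability(t_miss=500.0))
```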

  19. Monte Carlo methods for direct calculation of 3D dose distributions for photon fields in radiotherapy

    Even with state-of-the-art treatment planning systems, photon dose calculation can be erroneous under certain circumstances. In these cases, Monte Carlo methods promise higher accuracy. We have used the photon transport code CHILD of the GSF-Forschungszentrum, which was developed to calculate dose in diagnostic radiation protection. The code was refined for application in radiotherapy with high-energy photon irradiation and is intended for dose verification in individual cases. The irradiation phantom can be entered as any desired 3D matrix or generated automatically from an individual CT database. The particle transport takes into account pair production, the photoelectric effect and the Compton effect, with certain approximations. Efficiency is increased by the method of 'fractional photons'. The generated secondary electrons are followed in the continuous-slowing-down approximation (CSDA) without scattering. The developed Monte Carlo code Monaco Matrix was tested with simple homogeneous and heterogeneous phantoms through comparisons with simulations of the well-known but slower EGS4 code. The use of a point source with a direction-independent energy spectrum as the simplest model of the radiation field from the accelerator head is shown to be sufficient for simulating actual accelerator depth dose curves. Good agreement (<2%) was found for depth dose curves in water and in bone. With complex test phantoms and comparisons with EGS4-calculated dose profiles, some drawbacks in the code were found. Thus, the implementation of electron multiple-scattering should lead to step-by-step improvement of the algorithm. (orig.)

  20. Monte Carlo Radiation Analysis of a Spacecraft Radioisotope Power System

    Wallace, M.

    1994-01-01

    A Monte Carlo statistical computer analysis was used to create neutron and photon radiation predictions for the General Purpose Heat Source Radioisotope Thermoelectric Generator (GPHS RTG). The GPHS RTG is being used on several NASA planetary missions. Analytical results were validated using measured health physics data.

  1. Discretized mesh tools and related treatment for hybrid transport application with 3D discrete ordinates and Monte Carlo

    Hybrid methods of neutron transport have greatly increased in use, for example in applications that use both Monte Carlo and deterministic transport methods to calculate quantities of interest, such as the flux and eigenvalue in a nuclear reactor. Many 3D parallel Sn codes apply a Cartesian mesh, so for nuclear reactors the representation of curved fuels (cylinders, spheres, etc.) is affected, resulting in deviations in both mass and exact geometry in the computer model. In addition, we discuss auto-conversion techniques with our 3D Cartesian mesh generation tools that allow full generation of MCNP5 inputs (Cartesian mesh and multigroup XS) from a basis PENTRAN Sn model. For a PWR assembly eigenvalue problem, we explore the errors associated with this Cartesian discrete mesh representation and perform an analysis to calculate a slope parameter that relates the pcm change to the percent areal/volumetric deviation (areal → 2D problems, volumetric → 3D problems). This analysis demonstrates a linear relationship between pcm change and areal/volumetric deviation using multigroup MCNP on a PWR assembly, compared to a reference exact combinatorial MCNP geometry calculation. For the same MCNP multigroup problems, we also characterize this linear relationship in discrete ordinates (3D PENTRAN). Finally, for 3D Sn models, we show an application of corner fractioning, a volume-weighted recovery of underrepresented target fuel mass, which reduced the pcm error to < 100 compared to reference Monte Carlo in the application to a PWR assembly. (author)

  2. APPLICATION OF BAYESIAN MONTE CARLO ANALYSIS TO A LAGRANGIAN PHOTOCHEMICAL AIR QUALITY MODEL. (R824792)

    Uncertainties in ozone concentrations predicted with a Lagrangian photochemical air quality model have been estimated using Bayesian Monte Carlo (BMC) analysis. Bayesian Monte Carlo analysis provides a means of combining subjective "prior" uncertainty estimates developed ...
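
    Schematically, Bayesian Monte Carlo draws parameter sets from the prior, runs the model for each, and weights each run by the likelihood of the observations; posterior uncertainty is then read from the weighted ensemble. A minimal sketch (the "model", prior and observation below are invented stand-ins, not the Lagrangian photochemical model):

```python
import numpy as np

rng = np.random.default_rng(7)

def model(theta, x):
    """Stand-in for the air quality model: predicted ozone vs. a rate parameter."""
    return 40.0 + 30.0 * theta * np.exp(-x)

# Prior uncertainty on the model parameter (hypothetical lognormal prior).
theta = rng.lognormal(mean=0.0, sigma=0.3, size=20000)

# A synthetic observation with known measurement error.
x_obs, y_obs, sd = 1.0, 52.0, 2.0
pred = model(theta, x_obs)

# BMC: weight each prior sample by its likelihood given the observation...
w = np.exp(-0.5 * ((pred - y_obs) / sd) ** 2)
w /= w.sum()

# ...then posterior summaries are weighted statistics over the prior ensemble.
post_mean = np.sum(w * pred)
post_sd = np.sqrt(np.sum(w * (pred - post_mean) ** 2))
print("prior mean/sd    :", pred.mean(), pred.std())
print("posterior mean/sd:", post_mean, post_sd)
```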

  3. OptogenSIM: a 3D Monte Carlo simulation platform for light delivery design in optogenetics.

    Liu, Yuming; Jacques, Steven L; Azimipour, Mehdi; Rogers, Jeremy D; Pashaie, Ramin; Eliceiri, Kevin W

    2015-12-01

    Optimizing light delivery for optogenetics is critical in order to accurately stimulate the neurons of interest while reducing nonspecific effects such as tissue heating or photodamage. Light distribution is typically predicted under the assumption of tissue homogeneity, which oversimplifies light transport in the heterogeneous brain. Here, we present an open-source 3D simulation platform, OptogenSIM, which eliminates this assumption. This platform integrates a voxel-based 3D Monte Carlo model, generic optical property models of brain tissues, and a well-defined 3D mouse brain tissue atlas. Application of this platform to brain data models demonstrates that brain heterogeneity has a moderate to significant impact depending on application conditions. Estimated light density contours can show the region at any specified power density in the 3D brain space and can thus help optimize light delivery settings, such as the optical fiber position, fiber diameter, fiber numerical aperture, light wavelength and power. OptogenSIM is freely available and can be easily adapted to incorporate additional brain atlases. PMID:26713200

  4. The impact of Monte Carlo simulation. A scientometric analysis of scholarly literature

    A scientometric analysis of Monte Carlo simulation and Monte Carlo codes has been performed over a set of representative scholarly journals related to radiation physics. The results of this study are reported and discussed. They document and quantitatively appraise the role of Monte Carlo methods and codes in scientific research and engineering applications. (author)

  5. Benchmark for a 3D Monte Carlo boiling water reactor fluence computational package - MF3D

    A detailed three dimensional model of a quadrant of an operating BWR has been developed using MCNP to calculate flux spectrum and fluence levels at various locations in the reactor system. The calculational package, MF3D, was benchmarked against test data obtained over a complete fuel cycle of the host BWR. The test package included activation wires sensitive in both the fast and thermal ranges. Comparisons between the calculational results and test data are good to within ten percent, making the MF3D package an accurate tool for neutron and gamma fluence computation in BWR pressure vessel internals. (orig.)

  6. Monte Carlo analysis of a multicolour LED light engine

    Chakrabarti, Maumita; Thorseth, Anders; Jepsen, Jørgen;

    2015-01-01

    A new Monte Carlo simulation tool for analysing colour feedback systems is presented, used here to quantify the colour uncertainties and achievable stability in a multicolour dynamic LED system. The Monte Carlo analysis is based on an experimental investigation of a multicolour LED light engine designed for white tuneable studio lighting. The measured sensitivities to the various factors influencing the colour uncertainty for a similar system are incorporated. The method aims to provide uncertainties in the achievable chromaticity coordinates as output over the tuneable range, e.g. expressed in correlated colour temperature (CCT), chromaticity distance from the Planckian locus (Duv) and colour rendering indices (CRIs) for that dynamic system. Data for the uncertainty in chromaticity are analysed in the u', v' Uniform Chromaticity Scale diagram for the light output by comparing the

  7. Improvement of 3D Monte Carlo localization using a depth camera and terrestrial laser scanner

    S. Kanai

    2015-05-01

    Effective and accurate localization in three-dimensional indoor environments is a key requirement for indoor navigation and lifelong robotic assistance. So far, Monte Carlo Localization (MCL) has offered one of the promising solutions for indoor localization. Previous work on MCL has mostly been limited to 2D motion estimation in a planar map, and a few 3D MCL approaches have recently been proposed. However, their localization accuracy and efficiency remain at an unsatisfactory level (errors of a few hundred millimetres at up to a few FPS), or have not been fully verified against precise ground truth. Therefore, the purpose of this study is to improve the accuracy and efficiency of 6DOF motion estimation in 3D MCL for indoor localization. First, a terrestrial laser scanner is used to create a precise 3D mesh model as an environment map, and a professional-level depth camera is installed as the outer sensor. GPU scene simulation is also introduced to speed up the prediction phase of MCL. For further improvement, GPGPU programming is implemented to accelerate the likelihood estimation phase, and anisotropic particle propagation based on observations from an inertial sensor is introduced into MCL. Improvements in localization accuracy and efficiency are verified by comparison with a previous MCL method. As a result, it was confirmed that the GPGPU-based algorithm was effective in increasing the computational efficiency to 10-50 FPS when the number of particles remained below a few hundred. On the other hand, the inertial sensor-based algorithm reduced the localization error to a median of 47 mm even with fewer particles. The results show that the proposed 3D MCL method outperforms the previous one in accuracy and efficiency.
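
    The MCL cycle being accelerated here (predict with the motion model, weight by observation likelihood, resample) fits in a short sketch. The version below is reduced to 1D with an invented corridor map, whereas the paper's is 6DOF against a laser-scanned mesh using a depth camera:

```python
import numpy as np

rng = np.random.default_rng(8)

def mcl_step(particles, odom, meas, meas_fn, meas_sd=0.05, odom_sd=0.02):
    """One Monte Carlo Localization cycle in 1D.
    particles: (N,) poses; odom: commanded motion increment;
    meas: observed range; meas_fn: map lookup giving expected range at a pose."""
    # 1. Predict: propagate particles through the motion model plus noise.
    particles = particles + odom + rng.normal(scale=odom_sd, size=particles.size)
    # 2. Weight: likelihood of the observation at each particle pose.
    w = np.exp(-0.5 * ((meas_fn(particles) - meas) / meas_sd) ** 2)
    w /= w.sum()
    # 3. Resample proportionally to the weights.
    return particles[rng.choice(particles.size, size=particles.size, p=w)]

meas_fn = lambda x: 10.0 - x                      # toy corridor: range to the far wall
particles = rng.uniform(0.0, 10.0, size=1000)     # global initialization
true_x = 2.0
for _ in range(20):
    true_x += 0.1                                 # robot moves down the corridor
    z = meas_fn(true_x) + rng.normal(scale=0.05)  # noisy range reading
    particles = mcl_step(particles, 0.1, z, meas_fn)
print("estimate:", particles.mean(), " truth:", true_x)
```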

  8. Spectral history model in DYN3D: Verification against coupled Monte-Carlo thermal-hydraulic code BGCore

    Highlights: • A Pu-239 based spectral history method was tested on a 3D BWR single-assembly case. • Burnup of a BWR fuel assembly was performed with the nodal code DYN3D. • The reference solution was obtained with the coupled Monte Carlo thermal-hydraulic code BGCore. • The proposed method accurately reproduces the moderator density history effect for the BWR test case. - Abstract: This research focuses on the verification of a recently developed methodology accounting for spectral history effects in 3D full-core nodal simulations. The traditional deterministic core simulation procedure includes two stages: (1) generation of homogenized macroscopic cross section sets and (2) application of these sets to obtain a full 3D core solution with nodal codes. The standard approach adopts the branch methodology, in which the branches represent all expected combinations of operational conditions as a function of burnup (the main branch). The main branch is produced for constant, usually averaged, operating conditions (e.g. coolant density). As a result, the spectral history effects associated with coolant density variation are not properly taken into account. A number of methods to solve this problem (such as micro-depletion and spectral indexes) were developed and implemented in modern nodal codes. Recently, we proposed a new and robust method to account for history effects. The methodology was implemented in DYN3D and involves modification of the few-group cross section sets. The method utilizes the local Pu-239 concentration as an indicator of spectral history. The method was verified for PWR and VVER applications. However, the spectrum variation in a BWR core is more pronounced due to the stronger coolant density change. The purpose of the current work is to investigate the applicability of the method to BWR analysis. The proposed methodology was verified against the recently developed BGCore system, which couples Monte Carlo neutron transport with depletion and thermal-hydraulic solvers and

  9. Use of Serpent Monte-Carlo code for development of 3D full-core models of Gen-IV fast spectrum reactors and preparation of safety parameters/cross-section data for transient analysis with FAST code system

    This work presents a new methodology which uses the Serpent Monte Carlo (MC) code to generate a multi-group beginning-of-life (BOL) cross section (XS) database file that is compatible with the PARCS 3D reactor core simulator and allows simulation of transients with the FAST code system. The applicability of the methodology was tested on the European Sodium-cooled Fast Reactor (ESFR) design with oxide fuel proposed by CEA (France). The k-effective, power peaking factors and safety parameters (such as the Doppler constant, coolant density coefficient, fuel axial expansion coefficient, diagrid expansion coefficient and control rod worth) calculated by PARCS/TRACE were compared with the results of the Serpent MC code. The comparison indicates overall reasonable agreement between the conceptually different (deterministic and stochastic) codes. The new development makes it possible, in principle, to use the Serpent MC code for cross section generation for the PARCS code to perform transient analyses for fast reactors. The advantages and limitations of this methodology are discussed in the paper. (author)
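
    The core of any such cross-section generation step is flux-weighted condensation of continuous-energy data into the few-group constants a core simulator needs; for group g and reaction x,

```latex
\Sigma_{x,g} \;=\; \frac{\int_{E_g}^{E_{g-1}} \Sigma_x(E)\,\phi(E)\,dE}
                        {\int_{E_g}^{E_{g-1}} \phi(E)\,dE},
```

    where both integrals are accumulated as tallies during the Monte Carlo run. (This is the standard definition, stated here for orientation; the details of Serpent's homogenization options are in its documentation.)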

  10. Asymptotic analysis of spatial discretizations in implicit Monte Carlo

    Densmore, Jeffery D. [Los Alamos National Laboratory]

    2009-01-01

    We perform an asymptotic analysis of spatial discretizations in Implicit Monte Carlo (IMC). We consider two asymptotic scalings: one that represents a time step that resolves the mean-free time, and one that corresponds to a fixed, optically large time step. We show that only the latter scaling results in a valid spatial discretization of the proper diffusion equation, and thus we conclude that IMC only yields accurate solutions when using optically large spatial cells if time steps are also optically large. We demonstrate the validity of our analysis with a set of numerical examples.

  11. Asymptotic analysis of spatial discretizations in implicit Monte Carlo

    Densmore, Jeffery D. [Los Alamos National Laboratory]

    2008-01-01

    We perform an asymptotic analysis of spatial discretizations in Implicit Monte Carlo (IMC). We consider two asymptotic scalings: one that represents a time step that resolves the mean-free time, and one that corresponds to a fixed, optically large time step. We show that only the latter scaling results in a valid spatial discretization of the proper diffusion equation, and thus we conclude that IMC only yields accurate solutions when using optically large spatial cells if time steps are also optically large. We demonstrate the validity of our analysis with a set of numerical examples.

  12. Conceptual detector development and Monte Carlo simulation of a novel 3D breast computed tomography system

    Ziegle, Jens; Müller, Bernhard H.; Neumann, Bernd; Hoeschen, Christoph

    2016-03-01

    A new 3D breast computed tomography (CT) system is under development, enabling imaging of microcalcifications in a fully uncompressed breast, including posterior chest wall tissue. The system uses a steered electron beam impinging on small tungsten targets surrounding the breast to emit X-rays. A realization of the corresponding detector concept is presented in this work, and it is modeled through Monte Carlo simulations in order to quantify first characteristics of transmission and secondary photons. The modeled system comprises a vertical arrangement of linear detectors held by a case that also hosts the breast. Detectors are separated by gaps to allow the passage of X-rays towards the breast volume. The detectors located directly opposite the gaps detect incident X-rays. Mechanically moving parts in an imaging system increase the duration of image acquisition and thus can cause motion artifacts. A major advantage of the presented system design is therefore the combination of fixed detectors and a fast steering electron beam, which enables a greatly reduced scan time. Potential motion artifacts are thereby reduced, so that the visualization of small structures such as microcalcifications is improved. The simulation of a single projection shows high attenuation by parts of the detector electronics, causing low count levels at the opposing detectors, which would require a flat-field correction; it also shows a secondary-to-transmission ratio of all counted X-rays of less than 1 percent. Additionally, a single slice with details of various sizes was reconstructed using filtered backprojection. The smallest detail still visible in the reconstructed image has a size of 0.2 mm.

  13. Feasibility and value of fully 3D Monte Carlo reconstruction in single-photon emission computed tomography

    The accuracy of Single-Photon Emission Computed Tomography images is degraded by physical effects, namely photon attenuation, Compton scatter and spatially varying collimator response. The 3D nature of these effects is usually neglected by the methods used to correct for them. To deal with the 3D nature of the problem, a 3D projector modeling the spread of photons in 3D can be used in iterative tomographic reconstruction. The 3D projector can be estimated analytically with some approximations, or using precise Monte Carlo simulations. The latter approach has not yet been applied to fully 3D reconstruction due to impractical storage and computation time. The goal of this paper was to determine the gain to be expected from fully 3D Monte Carlo (F3DMC) modeling of the projector in iterative reconstruction, compared to conventional 2D and 3D reconstruction methods. As a proof of concept, two small datasets were considered. The projections of the two phantoms were simulated using the Monte Carlo simulation code GATE, as was the corresponding projector, taking into account all physical effects (attenuation, scatter, camera point spread function) affecting the imaging process. F3DMC was implemented by using this 3D projector in a maximum likelihood expectation maximization (MLEM) iterative reconstruction. To assess the value of F3DMC, data were reconstructed using four methods: filtered backprojection; MLEM without attenuation correction (MLEM); MLEM with attenuation correction, Jaszczak scatter correction and 3D correction for depth-dependent spatial resolution using an analytical model (MLEMC); and F3DMC. Our results suggest that F3DMC mainly improves imaging sensitivity and signal-to-noise ratio (SNR): sensitivity is multiplied by about 10^3 and SNR is increased by 20-70% compared to MLEMC. Computation of a more robust projector and application of the method to more realistic datasets are currently under investigation

  14. Review of neutron noise analysis theory by Monte Carlo simulation

    Some debates on the theory of neutron noise analysis for reactor kinetic parameter measurement took place before 1970, but no report firmly settling these debates has been found, and a question was raised when neutron noise experiments for the TRIGA and HANARO reactors in Korea were performed. In order to clarify this question, the neutron noise experiment is simulated by the Monte Carlo method. This simulation confirms that the widely used equation is approximately valid and that the confusion was caused by the explanation of the derivation of the equation. The Rossi-α technique is one of the representative noise analysis methods for reactor kinetic parameter measurement, but different opinions were raised about the chain-reaction-related term in the equation. The equation originally derived at the Los Alamos National Laboratory (LANL) has been widely accepted. However, the other derivations were supported by strict mathematics and experiments as well, and the reason for the discrepancy had not been clarified. Since it is a problem of basic concept, arising before the effects of neutron energy or geometry are included, Monte Carlo simulation of the simplest reactor model can clarify it. For this purpose, the experiment measuring the neutron noise is simulated, and the result is that the original equation is approximately valid. However, it is judged that the explanation given by the authors who first derived the equation is not entirely correct; Orndoff, who performed the first experiment with the Rossi-α technique, explained it rather correctly
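
    For orientation, the Rossi-α distribution at issue has the standard two-term form: the probability of a detection in dτ at delay τ after a trigger count is

```latex
p(\tau)\,d\tau \;=\; C\,d\tau \;+\; A\,e^{-\alpha\tau}\,d\tau,
```

    where C is the uncorrelated (accidental) background, the exponential term carries the chain-correlated counts whose amplitude A was the debated quantity, and α is the prompt-neutron decay constant (at delayed critical, α = β_eff/Λ) from which the kinetic parameters are extracted.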

  15. Monte Carlo analysis of Musashi TRIGA mark II reactor core

    Matsumoto, Tetsuo [Atomic Energy Research Laboratory, Musashi Institute of Technology, Kawasaki, Kanagawa (Japan)]

    1999-08-01

    The analysis of the TRIGA-II core at the Musashi Institute of Technology Research Reactor (Musashi reactor, 100 kW) was performed with the three-dimensional continuous-energy Monte Carlo code MCNP4A. Effective multiplication factors (k_eff) for several fuel-loading patterns including the initial core criticality experiment, the fuel element and control rod reactivity worths, and neutron flux measurements were used in the validation process of the physical model and neutron cross section data from the ENDF/B-V evaluation. The calculated k_eff overestimated the experimental data by about 1.0% Δk/k for both the initial core and the several fuel-loading arrangements. The calculated reactivity worths of control rods and fuel elements agree well with the measured ones within the uncertainties. The calculated neutron flux distributions were consistent with the experimental ones, which were measured by activation methods at the sample irradiation tubes. All in all, the agreement between the MCNP predictions and the experimentally determined values is good, which indicates that the Monte Carlo model is adequate to simulate the Musashi TRIGA-II reactor core. (author)

  16. Stratified source-sampling techniques for Monte Carlo eigenvalue analysis

    In 1995, at a conference on criticality safety, a special session was devoted to the Monte Carlo 'Eigenvalue of the World' problem. Argonne presented a paper at that session in which the anomalies originally observed in that problem were reproduced in a much simplified model-problem configuration, and removed by a version of stratified source-sampling. In this paper, stratified source-sampling techniques are generalized and applied to three different Eigenvalue of the World configurations which take into account real-world statistical noise sources not included in the model problem, but which differ in the amount of neutronic coupling among the constituents of each configuration. It is concluded that, in Monte Carlo eigenvalue analysis of loosely coupled arrays, the use of stratified source-sampling reduces the probability of encountering an anomalous result relative to conventional source-sampling methods. However, this gain in reliability is substantially less than that observed in the model-problem results

  17. Simulations with the Hybrid Monte Carlo algorithm: implementation and data analysis

    Schaefer, Stefan

    2011-01-01

    This tutorial gives a practical introduction to the Hybrid Monte Carlo algorithm and the analysis of Monte Carlo data. The method is exemplified on the ϕ⁴ theory, for which all steps from the derivation of the relevant formulae to the actual implementation in a computer program are discussed in detail. It concludes with the analysis of Monte Carlo data, in particular their autocorrelations.
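
    A minimal HMC update of the kind the tutorial builds up can be written for a single degree of freedom (Python; the quadratic action below is a stand-in, and the lattice ϕ⁴ action would replace S and its gradient):

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy action S(q) = q^2/2 (single site); samples then follow exp(-S), i.e. N(0, 1).
S = lambda q: 0.5 * q**2
grad_S = lambda q: q

def hmc_step(q, eps=0.2, n_leap=10):
    """One Hybrid Monte Carlo update: draw a momentum, integrate Hamilton's
    equations with leapfrog, then accept/reject on the total-energy change."""
    p = rng.normal()
    q_new, p_new = q, p
    p_new -= 0.5 * eps * grad_S(q_new)          # initial half kick
    for _ in range(n_leap - 1):
        q_new += eps * p_new                    # drift
        p_new -= eps * grad_S(q_new)            # full kick
    q_new += eps * p_new
    p_new -= 0.5 * eps * grad_S(q_new)          # final half kick
    dH = (S(q_new) + 0.5 * p_new**2) - (S(q) + 0.5 * p**2)
    return q_new if rng.random() < np.exp(-dH) else q   # Metropolis test

qs, q = [], 0.0
for _ in range(5000):
    q = hmc_step(q)
    qs.append(q)
print("mean, var:", np.mean(qs), np.var(qs))    # should be close to 0 and 1
```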

  18. Iterative Monte Carlo analysis of spin-dependent parton distributions

    Sato, Nobuo; Melnitchouk, W.; Kuhn, S. E.; Ethier, J. J.; Accardi, A.; Jefferson Lab Angular Momentum Collaboration

    2016-04-01

    We present a comprehensive new global QCD analysis of polarized inclusive deep-inelastic scattering, including the latest high-precision data on longitudinal and transverse polarization asymmetries from Jefferson Lab and elsewhere. The analysis is performed using a new iterative Monte Carlo fitting technique which generates stable fits to polarized parton distribution functions (PDFs) with statistically rigorous uncertainties. Inclusion of the Jefferson Lab data leads to a reduction in the PDF errors for the valence and sea quarks, as well as in the gluon polarization uncertainty at x ≳ 0.1. The study also provides the first determination of the flavor-separated twist-3 PDFs and the d2 moment of the nucleon within a global PDF analysis.

  19. Iterative Monte Carlo analysis of spin-dependent parton distributions

    Sato, Nobuo; Kuhn, S E; Ethier, J J; Accardi, A

    2016-01-01

    We present a comprehensive new global QCD analysis of polarized inclusive deep-inelastic scattering, including the latest high-precision data on longitudinal and transverse polarization asymmetries from Jefferson Lab and elsewhere. The analysis is performed using a new iterative Monte Carlo fitting technique which generates stable fits to polarized parton distribution functions (PDFs) with statistically rigorous uncertainties. Inclusion of the Jefferson Lab data leads to a reduction in the PDF errors for the valence and sea quarks, as well as in the gluon polarization uncertainty at x ≳ 0.1. The study also provides the first determination of the flavor-separated twist-3 PDFs and the d2 moment of the nucleon within a global PDF analysis.

  20. Monte Carlo uncertainty analysis for an iron shielding benchmark experiment

    Fischer, U.; Tsige-Tamirat, H. [Association Euratom-FZK, Forschungszentrum Karlsruhe (Germany)]; Perel, R.L. [Hebrew Univ., Jerusalem (Israel)]; Wu, Y. [Institute of Plasma Physics, Hefei (China)]

    1998-07-01

    This work is devoted to the computational uncertainty analysis of an iron benchmark experiment performed previously at the Technical University of Dresden (TUD). The analysis is based on the use of a novel Monte Carlo approach for calculating sensitivities of point detectors and focuses on the new 56Fe evaluation of the European Fusion File EFF-3. The calculated uncertainties of the neutron leakage fluxes are shown to be significantly smaller than with previous data. Above 5 MeV the calculated uncertainties are larger than the experimental ones. As the measured neutron leakage fluxes are underestimated by about 10-20% in that energy range, it is concluded that the 56Fe cross-section data have to be further improved. (authors)

  1. Dose prediction and process optimization in a gamma sterilization facility using 3-D Monte Carlo code

    A model of a gamma sterilizer was built using the ITS/ACCEPT Monte Carlo code and verified through dosimetry. Individual dosimetry measurements in homogeneous material were pooled to represent larger bodies that could be simulated in a reasonable time. With the assumptions and simplifications described, dose predictions were within 2-5% of dosimetry. The model was used to simulate product movement through the sterilizer and to predict information useful for process optimization and facility design

  2. On Monte Carlo Simulation and Analysis of Electricity Markets

    This dissertation is about how Monte Carlo simulation can be used to analyse electricity markets. There is a wide range of applications for simulation; for example, players in the electricity market can use simulation to decide whether or not an investment can be expected to be profitable, and authorities can by means of simulation find out which consequences a certain market design can be expected to have on electricity prices, environmental impact, etc. In the first part of the dissertation, the focus is on which electricity market models are suitable for Monte Carlo simulation. The starting point is a definition of an ideal electricity market. Such an electricity market is partly practical from a mathematical point of view (it is simple to formulate and does not require overly complex calculations) and partly a representation of the best possible resource utilisation. The definition of the ideal electricity market is followed by an analysis of how reality differs from the ideal model, what consequences the differences have on the rules of the electricity market and the strategies of the players, and how non-ideal properties can be included in a mathematical model. In particular, questions about environmental impact, forecast uncertainty and grid costs are studied. The second part of the dissertation treats the Monte Carlo technique itself. To reduce the number of samples necessary to obtain accurate results, variance reduction techniques can be used. Here, six different variance reduction techniques are studied and possible applications are pointed out. The conclusions of these studies are turned into a method for efficient simulation of basic electricity markets. The method is applied to some test systems, and the results show that the chosen variance reduction techniques can produce equal or better results using 99% fewer samples compared to simulating the same system without any variance reduction technique. More complex electricity market models

  3. Neutronic analysis of the PULSTAR reactor using Monte Carlo simulations

    Neutronic analysis of the PULSTAR nuclear reactor was performed in support of its utilization and power upgrade from 1 MWth to 2 MWth. The PULSTAR is an open-pool research reactor currently fueled with UO2 enriched to 4% in U-235. Detailed models of its core were constructed using the MCNP6 Monte Carlo code and its standard nuclear data libraries. The models covered all eight variations of the core, starting with the first critical core in 1972 up to the current core configured in 2011. Three-dimensional heterogeneous models were constructed that faithfully reflect the geometry of the core and its surroundings, using the original as-built engineering drawings. The Monte Carlo simulations benefited extensively from measurements performed upon the loading of each core and during its subsequent operation. These include power distribution and peaking measurements, depletion measurements (reflecting a core's excess reactivity), and measurements of reactivity feedback coefficients. Furthermore, to support the PULSTAR's fuel needs, the simulations explored the utilization of a locally existing inventory of fresh UO2 fuel enriched to 6% in U-235. The analysis shows reasonable agreement between the results of the MCNP6 simulations and the available measured data. In general, most discrepancies between simulations and measurements may be attributed to limited knowledge of the exact conditions of the historical measurements and of the procedures used to analyze the measured data. Nonetheless, the results indicate the ability of the constructed models to support safety analysis and licensing actions in relation to the ongoing upgrades of the PULSTAR reactor. (author)

  4. Implementation and analysis of an adaptive multilevel Monte Carlo algorithm

    Hoel, Hakon

    2014-01-01

    We present an adaptive multilevel Monte Carlo (MLMC) method for weak approximations of solutions to Itô stochastic differential equations (SDEs). The work [11] proposed and analyzed an MLMC method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a single-level Euler-Maruyama Monte Carlo method from O(TOL^-3) to O(TOL^-2 log(TOL^-1)^2) for a mean square error of O(TOL^2). Later, the work [17] presented an MLMC method using a hierarchy of adaptively refined, non-uniform time discretizations, and, as such, it may be considered a generalization of the uniform time discretization MLMC method. This work improves the adaptive MLMC algorithms presented in [17] and also provides mathematical analysis of the improved algorithms. In particular, we show that under some assumptions our adaptive MLMC algorithms are asymptotically accurate and essentially have the correct complexity, but with improved control of the complexity constant factor in the asymptotic analysis. Numerical tests include one case with singular drift and one with stopped diffusion, where the complexity of a uniform single-level method is O(TOL^-4). For both these cases the results confirm the theory, exhibiting savings in the computational cost for achieving the accuracy O(TOL): from O(TOL^-3) for the adaptive single-level algorithm to essentially O(TOL^-2 log(TOL^-1)^2) for the adaptive MLMC algorithm. © 2014 by Walter de Gruyter Berlin/Boston 2014.
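
    The telescoping estimator behind MLMC is easy to sketch for geometric Brownian motion with uniform Euler-Maruyama levels (Python; a uniform-mesh illustration only, so it corresponds to the method of [11] rather than the adaptive refinement contributed by this work):

```python
import numpy as np

rng = np.random.default_rng(10)

# Weak approximation of E[X_T] for the GBM SDE dX = a X dt + b X dW,
# with n_l = 2**l Euler-Maruyama steps on level l.
a, b, T, X0 = 0.05, 0.2, 1.0, 1.0

def euler_pair(l, n_samples):
    """Coupled fine/coarse Euler paths driven by the same Brownian increments;
    returns samples of P_l - P_{l-1} (and of P_0 itself on level 0)."""
    nf = 2 ** l
    dt = T / nf
    dW = rng.normal(scale=np.sqrt(dt), size=(n_samples, nf))
    Xf = np.full(n_samples, X0)
    for i in range(nf):
        Xf = Xf * (1 + a * dt + b * dW[:, i])
    if l == 0:
        return Xf
    Xc = np.full(n_samples, X0)
    for i in range(nf // 2):        # coarse path: pairs of fine increments combined
        Xc = Xc * (1 + a * 2 * dt + b * (dW[:, 2 * i] + dW[:, 2 * i + 1]))
    return Xf - Xc

# Telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].
L, N = 5, 20000
estimate = sum(euler_pair(l, N).mean() for l in range(L + 1))
print("MLMC estimate:", estimate, " exact:", X0 * np.exp(a * T))
```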

  5. Status of vectorized Monte Carlo for particle transport analysis

    The conventional particle transport Monte Carlo algorithm is ill suited for modern vector supercomputers because the random nature of the particle transport process in the history based algorithm inhibits construction of vectors. An alternative, event-based algorithm is suitable for vectorization and has been used recently to achieve impressive gains in performance on vector supercomputers. This review describes the event-based algorithm and several variations of it. Implementations of this algorithm for applications in particle transport are described, and their relative merits are discussed. The implementation of Monte Carlo methods on multiple vector parallel processors is considered, as is the potential of massively parallel processors for Monte Carlo particle transport simulations

  6. Full 3D Monte Carlo simulation of pit-type defect evolution during extreme ultraviolet lithography multilayer deposition

    To model key aspects of surface morphology evolution and to overcome one of the main barriers to the implementation of extreme ultraviolet lithography in semiconductor processing, 3D Monte Carlo simulation of ion-beam deposition on pit-type defects was performed. Typical pit defects have depths in the 5–20 nm range and are about 10 times as wide. The aspect ratio of a defect cross section, defined as depth divided by the full width at half maximum, was used to measure the defect profile (decoration) as a function of film thickness. Previous attempts to model this system used 2D level-set methods; 3D calculations using these methods were found to be too computationally intensive. In an effort to model the system in 3D, the simulation in this study used the solid-on-solid aggregation model to deposit particles onto initial substrate defects. Surface diffusion was then simulated to relax the defect. Aspect ratio decay data were collected from the simulated defects and analyzed. The model was validated for defect evolution by comparing simulations to experimental scanning transmission electron microscopy data. The statistics of effective activation energy were used to show that observed defects have important geometric differences which define a unique aspect-ratio decay path. Close fitting to the observed case was used to validate Monte Carlo physical models of thin-film growth for use in predicting the multilayer profile of pit-type defects. - Highlights: • Pit-type defects in multilayers are modeled using Monte Carlo methods. • Simulation substrates are derived from atomic force microscopy (AFM) scans of defects. • AFM-scanned defect simulations fit the physical observations closely. • Activation energy statistics on the surface show unique aspect-ratio decay paths. • A test applying the fitted case to a different situation works accurately.

  7. 3-D Monte Carlo neutron-photon transport code JMCT and its algorithms

    The JMCT Monte Carlo neutron-photon transport code has been developed on the basis of the JCOGIN toolbox. JCOGIN provides geometry operations, tallies, domain decomposition, and parallel computation over particles (MPI) and spatial domains (OpenMP). CAD view data are supported by the JMCT preprocessor. A full-core pin-by-pin model of the Chinese Qinshan-II nuclear power station was designed and simulated with JMCT. Detailed pin-power distributions and keff results are shown in this paper. (author)

  8. TART 2000: A Coupled Neutron-Photon, 3-D, Combinatorial Geometry, Time Dependent, Monte Carlo Transport Code

    TART2000 is a coupled neutron-photon, 3 Dimensional, combinatorial geometry, time dependent Monte Carlo radiation transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART2000 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART2000 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART2000 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART2000 and its data files

  9. TART98 a coupled neutron-photon 3-D, combinatorial geometry time dependent Monte Carlo Transport code

    Cullen, D E

    1998-11-22

    TART98 is a coupled neutron-photon, 3 Dimensional, combinatorial geometry, time dependent Monte Carlo radiation transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART98 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART98 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART98 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART98 and its data files.

  10. The Monte Carlo SRNA-VOX code for 3D proton dose distribution in voxelized geometry using CT data

    Ilic, Radovan D [Laboratory of Physics (010), Vinca Institute of Nuclear Sciences, PO Box 522, 11001 Belgrade (Serbia and Montenegro); Spasic-Jokic, Vesna [Laboratory of Physics (010), Vinca Institute of Nuclear Sciences, PO Box 522, 11001 Belgrade (Serbia and Montenegro); Belicev, Petar [Laboratory of Physics (010), Vinca Institute of Nuclear Sciences, PO Box 522, 11001 Belgrade (Serbia and Montenegro); Dragovic, Milos [Center for Nuclear Medicine MEDICA NUCLEARE, Bulevar Despota Stefana 69, 11000 Belgrade (Serbia and Montenegro)

    2005-03-07

    This paper describes the application of the SRNA Monte Carlo package to proton transport simulations in complex geometry and different material compositions. The SRNA package was developed for 3D dose distribution calculations in proton therapy and dosimetry, and it is based on the theory of multiple scattering. The decay of proton-induced compound nuclei was simulated by the Russian MSDM model and by our own model using ICRU 63 data. The package consists of two codes: SRNA-2KG, which simulates proton transport in combinatorial geometry, and SRNA-VOX, which uses voxelized geometry based on CT data and a conversion of Hounsfield numbers to tissue elemental composition. Transition probabilities for both codes are prepared by the SRNADAT code. The simulation of proton beam characterization with a multi-layer Faraday cup, the spatial distribution of positron emitters obtained by the SRNA-2KG code, and the intercomparison of computational codes in radiation dosimetry indicate the immediate applicability of Monte Carlo techniques in clinical practice. In this paper, we briefly present the physical model implemented in the SRNA package, the ISTAR proton dose planning software, as well as the results of numerical experiments with proton beams to obtain 3D dose distributions in eye and breast tumours.

  11. LISA data analysis using Markov chain Monte Carlo methods

    The Laser Interferometer Space Antenna (LISA) is expected to simultaneously detect many thousands of low-frequency gravitational wave signals. This presents a data analysis challenge that is very different from the one encountered in ground-based gravitational wave astronomy. LISA data analysis requires the identification of individual signals from a data stream containing an unknown number of overlapping signals. Because of the signal overlaps, a global fit to all the signals has to be performed in order to avoid biasing the solution. However, performing such a global fit requires the exploration of an enormous parameter space with a dimension upwards of 50 000. Markov chain Monte Carlo (MCMC) methods offer a very promising solution to the LISA data analysis problem. MCMC algorithms are able to efficiently explore large parameter spaces, simultaneously providing parameter estimates, error analysis, and even model selection. Here we present the first application of MCMC methods to simulated LISA data and demonstrate the great potential of the MCMC approach. Our implementation uses a generalized F-statistic to evaluate the likelihoods, and simulated annealing to speed convergence of the Markov chains. As a final step we supercool the chains to extract maximum likelihood estimates, and estimates of the Bayes factors for competing models. We find that the MCMC approach is able to correctly identify the number of signals present, extract the source parameters, and return error estimates consistent with Fisher information matrix predictions.
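
    The annealed Metropolis-Hastings machinery described above can be illustrated on a drastically simplified problem: one sinusoid in Gaussian noise, with the inverse temperature ramped up so the chain roams freely before locking onto the likelihood peak. The signal model, proposal widths, and annealing schedule below are illustrative; a real LISA analysis explores a vastly larger space and uses the F-statistic likelihood.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data stream: one sinusoid (amp 1.0, freq 2.0 Hz) in noise.
t = np.linspace(0.0, 10.0, 2000)
data = np.sin(2 * np.pi * 2.0 * t) + rng.normal(0.0, 2.0, t.size)

def log_like(amp, freq):
    resid = data - amp * np.sin(2 * np.pi * freq * t)
    return -0.5 * np.sum(resid**2) / 2.0**2       # known noise sigma = 2

amp, freq = 0.5, 1.9                              # starting guess
chain = []
for i in range(20_000):
    beta = min(1.0, 0.01 * 1.0005**i)             # simulated-annealing ramp
    amp_p = amp + rng.normal(0.0, 0.02)           # random-walk proposals
    freq_p = freq + rng.normal(0.0, 0.005)
    if np.log(rng.random()) < beta * (log_like(amp_p, freq_p)
                                      - log_like(amp, freq)):
        amp, freq = amp_p, freq_p
    chain.append((amp, freq))

post = np.array(chain[15_000:])                   # samples after beta -> 1
print("amp  = %.3f +/- %.3f" % (post[:, 0].mean(), post[:, 0].std()))
print("freq = %.4f +/- %.4f" % (post[:, 1].mean(), post[:, 1].std()))
```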

  12. Reactor physics analysis method based on Monte Carlo homogenization

    Background: Many new concepts of nuclear energy systems with complicated geometric structures and diverse energy spectra have been put forward to meet the future demands of the nuclear energy market. The traditional deterministic neutronics analysis method has been challenged in two respects: one is the ability to handle generic geometry; the other is the multi-spectrum applicability of the multi-group cross section libraries. The Monte Carlo (MC) method excels in both geometry and spectrum, but faces the problems of long computation times and slow convergence. Purpose: This work aims to find a novel scheme that combines the advantages of the deterministic core analysis method and the MC method. Methods: A new two-step core analysis scheme is proposed to combine the geometry modeling capability and continuous-energy cross section libraries of the MC method with the higher computational efficiency of the deterministic method. First, MC simulations are performed at the assembly level, and the assembly-homogenized multi-group cross sections are tallied at the same time. Then, the core diffusion calculations are done with these multi-group cross sections. Results: The new scheme achieves high efficiency while maintaining acceptable precision. Conclusion: The new scheme can be used as an effective tool for the design and analysis of innovative nuclear energy systems, which has been verified by numerical tests. (authors)
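
    The deterministic second step of such a two-step scheme can be pictured with a small fission-source power iteration on a one-dimensional, two-group diffusion model. The homogenized constants below are illustrative placeholders for quantities that the MC assembly calculations would tally.

```python
import numpy as np

# Illustrative two-group homogenized constants (in the scheme above these
# would come from MC assembly tallies): diffusion coefficient D [cm],
# absorption Sa [1/cm], nu*Sigma_f [1/cm], downscatter Ss12 [1/cm].
D = np.array([1.4, 0.4])
Sa = np.array([0.010, 0.085])
nuSf = np.array([0.006, 0.120])
Ss12 = 0.017

N, H = 100, 200.0                         # mesh cells, core size [cm]
h = H / N

def diffusion_matrix(g, removal):
    # 1D finite differences with zero-flux boundary conditions.
    A = np.diag(np.full(N, 2.0 * D[g] / h**2 + removal))
    off = np.full(N - 1, -D[g] / h**2)
    return A + np.diag(off, 1) + np.diag(off, -1)

A1 = diffusion_matrix(0, Sa[0] + Ss12)    # fast: absorption + downscatter
A2 = diffusion_matrix(1, Sa[1])           # thermal

phi1, phi2, k = np.ones(N), np.ones(N), 1.0
for _ in range(200):                      # power iteration on fission source
    S = (nuSf[0] * phi1 + nuSf[1] * phi2) / k
    phi1 = np.linalg.solve(A1, S)         # all fission neutrons born fast
    phi2 = np.linalg.solve(A2, Ss12 * phi1)
    k = np.sum(nuSf[0] * phi1 + nuSf[1] * phi2) / np.sum(S)

print(f"k_eff ~ {k:.5f}")
```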

  13. The Development of WARP - A Framework for Continuous Energy Monte Carlo Neutron Transport in General 3D Geometries on GPUs

    Bergmann, Ryan

    Graphics processing units, or GPUs, have gradually increased in computational power from the small, job-specific boards of the early 1990s to the programmable powerhouses of today. Compared to more common central processing units, or CPUs, GPUs have a higher aggregate memory bandwidth, much higher floating-point operations per second (FLOPS), and lower energy consumption per FLOP. Because one of the main obstacles in exascale computing is power consumption, many new supercomputing platforms are gaining much of their computational capacity by incorporating GPUs into their compute nodes. Since CPU-optimized parallel algorithms are not directly portable to GPU architectures (or at least not without losing substantial performance), transport codes need to be rewritten to execute efficiently on GPUs. Unless this is done, reactor simulations cannot take full advantage of these new supercomputers. WARP, which can stand for "Weaving All the Random Particles," is a three-dimensional (3D) continuous energy Monte Carlo neutron transport code developed in this work to efficiently implement a continuous energy Monte Carlo neutron transport algorithm on a GPU. WARP accelerates Monte Carlo simulations while preserving the benefits of using the Monte Carlo method, namely, very few physical and geometrical simplifications. WARP is able to calculate multiplication factors, flux tallies, and fission source distributions for time-independent problems, and can run in either criticality or fixed-source mode. WARP can transport neutrons in unrestricted arrangements of parallelepipeds, hexagonal prisms, cylinders, and spheres. WARP uses an event-based algorithm, but with some important differences. Moving data is expensive, so WARP uses a remapping vector of pointer/index pairs to direct GPU threads to the data they need to access. The remapping vector is sorted by reaction type after every transport iteration using a high-efficiency parallel radix sort, which serves to keep the
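
    The remapping idea is easy to picture outside CUDA: the particle state lives in flat arrays, and an index vector sorted by next-event type lets each pass touch a contiguous block of one kind of work. In the sketch below numpy's stable argsort stands in for the GPU radix sort, and the reaction codes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Flat particle-state arrays (structure-of-arrays layout, as on a GPU).
n = 12
energy = rng.uniform(1e-8, 2.0, n)           # MeV, illustrative
reaction = rng.integers(0, 3, n)             # 0 scatter, 1 capture, 2 fission

# Remapping vector: indices sorted by reaction type. The bulk data stays
# in place; only the cheap pointer/index vector is reordered per iteration.
remap = np.argsort(reaction, kind="stable")

for code, name in enumerate(("scatter", "capture", "fission")):
    block = remap[reaction[remap] == code]   # contiguous work for one kernel
    print(f"{name:8s} kernel processes particles {block.tolist()}")
```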

  14. COLLI-PTB, Neutron Fluence Spectra for 3-D Collimator System by Monte-Carlo

    1 - Description of program or function: For optimizing collimator systems (shieldings) for fast neutrons with energies between 10 keV and 20 MeV. Only elastic and inelastic neutron scattering processes are involved. An isotropic angular distribution in the center-of-mass system is assumed for inelastic scattering. 2 - Method of solution: The Monte Carlo method with importance sampling, splitting, and Russian roulette is used. Neutron attenuation and scattering kinematics are taken into account. 3 - Restrictions on the complexity of the problem: Energy range from 10 keV to 20 MeV. Any bin width is possible for the output spectra, which are confined to 40 equidistant channels.

  15. A Multivariate Time Series Method for Monte Carlo Reactor Analysis

    A robust multivariate time series method has been established for Monte Carlo calculations of neutron multiplication problems. The method is termed the Coarse Mesh Projection Method (CMPM) and can be implemented using coarse statistical bins for the acquisition of nuclear fission source data. A novel aspect of CMPM is the combination of the general technical principle of projection pursuit from the signal processing discipline with the neutron multiplication eigenvalue problem of the nuclear engineering discipline. CMPM enables reactor physicists to accurately evaluate major eigenvalue separations of nuclear reactors with continuous energy Monte Carlo calculations. CMPM was incorporated into the MCNP Monte Carlo particle transport code of Los Alamos National Laboratory. The great advantage of CMPM over the traditional fission matrix method is demonstrated for the three-dimensional modeling of the initial core of a pressurized water reactor.

  16. Analysis of error in Monte Carlo transport calculations

    The Monte Carlo method for neutron transport calculations suffers, in part, because of the inherent statistical errors associated with the method. Without an estimate of these errors in advance of the calculation, it is difficult to decide which estimator and biasing scheme to use. Recently, integral equations have been derived that, when solved, predict errors in Monte Carlo calculations in nonmultiplying media. The present work allows error prediction in nonanalog Monte Carlo calculations of multiplying systems, even when supercritical. Nonanalog techniques such as biased kernels, particle splitting, and Russian roulette are incorporated. The equations derived here allow prediction of how much a specific variance reduction technique reduces the number of histories required, to be weighed against the change in time required for the calculation of each history. 1 figure, 1 table.
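
    The trade-off being quantified - variance per history versus cost per history - can be seen in a toy transmission problem. The sketch below compares analog transport with a simple nonanalog scheme (implicit capture plus Russian roulette) for a 1D scattering/absorbing slab; the cross sections, slab width, and roulette threshold are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

SIG_T, SIG_S, X = 1.0, 0.5, 8.0        # total and scattering xs [1/cm], width [cm]

def run(n_hist, nonanalog):
    score = np.zeros(n_hist)
    for i in range(n_hist):
        x, mu, w = 0.0, 1.0, 1.0       # normally incident, unit weight
        while True:
            x += mu * rng.exponential(1.0 / SIG_T)
            if x >= X:
                score[i] = w           # transmission tally
                break
            if x < 0.0:
                break                  # escaped through the front face
            if nonanalog:
                w *= SIG_S / SIG_T     # implicit capture
                if w < 0.1:            # Russian roulette on low weights
                    if rng.random() < 0.5:
                        break
                    w *= 2.0
            elif rng.random() > SIG_S / SIG_T:
                break                  # analog absorption
            mu = rng.uniform(-1.0, 1.0)  # isotropic scattering
    return score

for tag, flag in (("analog", False), ("implicit capture + RR", True)):
    s = run(20_000, flag)
    print(f"{tag:22s} T = {s.mean():.2e} +/- {s.std(ddof=1) / np.sqrt(s.size):.1e}")
```

    Both estimators are unbiased, so the two transmission values agree within statistics; what changes is the variance per history and the time per history, which is exactly the balance the derived equations let one predict in advance.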

  17. Monte Carlo Alpha Iteration Algorithm for a Subcritical System Analysis

    Hyung Jin Shim

    2015-01-01

    The α-k iteration method, which searches for the fundamental-mode alpha eigenvalue via iterative updates of the fission source distribution, has been successfully used for Monte Carlo (MC) alpha-static calculations of supercritical systems. However, for deep subcritical systems the α-k iteration method suffers from a gigantic number of neutron generations or a huge neutron weight, which leads to abnormal termination of the MC calculations. In order to stably estimate the prompt neutron decay constant (α) of prompt subcritical systems regardless of subcriticality, we propose a new MC alpha-static calculation method named the α iteration algorithm. The new method is derived by directly applying the power method to the α-mode eigenvalue equation, and its calculational stability is achieved by controlling the number of time source neutrons, which are generated in proportion to α divided by the neutron speed in MC neutron transport simulations. The effectiveness of the α iteration algorithm is demonstrated for two-group homogeneous problems with varying subcriticality by comparison with analytic solutions. The applicability of the proposed method is evaluated for an experimental benchmark of a thorium-loaded accelerator-driven system.
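
    A deterministic analogue of this power-method idea fits in a few lines for an infinite homogeneous two-group medium, where the alpha mode satisfies (α/v_g)φ_g = [(F + S − R)φ]_g, so that α is an eigenvalue of V(F + S − R) with V = diag(v1, v2). The group constants and speeds below are illustrative; a spectral shift keeps the iteration converging to the algebraically largest (fundamental) α.

```python
import numpy as np

# Illustrative two-group infinite-medium data.
v = np.array([1.0e7, 2.2e5])            # group speeds [cm/s]
R = np.diag([0.027, 0.090])             # removal = absorption + downscatter [1/cm]
S = np.array([[0.0, 0.0],
              [0.017, 0.0]])            # scattering group 1 -> 2 [1/cm]
F = np.array([[0.005, 0.110],
              [0.0, 0.0]])              # nu*Sigma_f, fission neutrons born fast
M = np.diag(v) @ (F + S - R)            # alpha-mode operator V(F + S - R)

# Shifted power iteration: the shift makes all eigenvalues of M + shift*I
# positive, so the iteration converges to the fundamental (largest) alpha.
shift = 1.1 * np.max(np.sum(np.abs(M), axis=1))
phi = np.ones(2)
for _ in range(500):
    phi = (M + shift * np.eye(2)) @ phi
    phi /= np.linalg.norm(phi)

alpha = phi @ (M @ phi) / (phi @ phi)   # Rayleigh quotient at convergence
print(f"alpha ~ {alpha:.4e} 1/s (negative: subcritical)")
print("direct check:", np.max(np.linalg.eigvals(M).real))
```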

  18. Monte Carlo simulation for moment-independent sensitivity analysis

    Moment-independent sensitivity analysis (SA) is one of the most popular SA techniques. It aims at measuring the contribution of input variables to the probability density function (PDF) of the model output. However, compared with the variance-based indices, robust and efficient methods are less available for computing the moment-independent SA indices (also called delta indices). In this paper, Monte Carlo simulation (MCS) methods for moment-independent SA are investigated. A double-loop MCS method, which has the advantages of high accuracy and easy programming, is developed first. Then, to reduce the computational cost, a single-loop MCS method is proposed. The latter method has several advantages. First, only a single set of samples is needed for computing all the indices, so it can overcome the "curse of dimensionality". Second, it is suitable for problems with dependent inputs. Third, it is purely based on model output evaluation and density estimation, and can therefore be used for models with high-order (>2) interactions. Finally, several numerical examples are introduced to demonstrate the advantages of the proposed methods.
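
    The double-loop estimator is the easier of the two to sketch: the outer loop samples the conditioning input Xi, the inner loop rebuilds the conditional output density, and the index averages half the L1 distance between conditional and unconditional PDFs, δi = ½ E_Xi[∫ |fY(y) − f_{Y|Xi}(y)| dy]. The test model and sample sizes below are illustrative.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)

def model(x1, x2):                       # illustrative test model
    return x1 + 0.5 * x2**2

# Unconditional output density from a base sample (standard normal inputs).
x1, x2 = rng.normal(size=(2, 20_000))
y_all = model(x1, x2)
grid = np.linspace(y_all.min(), y_all.max(), 400)
dy = grid[1] - grid[0]
f_y = gaussian_kde(y_all)(grid)

def delta(index, n_outer=200, n_inner=2000):
    shifts = []
    for xi in rng.normal(size=n_outer):          # outer loop over X_i
        others = rng.normal(size=n_inner)        # inner loop over the rest
        y_c = model(xi, others) if index == 0 else model(others, xi)
        f_c = gaussian_kde(y_c)(grid)            # conditional PDF estimate
        shifts.append(0.5 * np.sum(np.abs(f_y - f_c)) * dy)
    return np.mean(shifts)

print("delta_1 ~", round(delta(0), 3))
print("delta_2 ~", round(delta(1), 3))
```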

  1. Further experience in Bayesian analysis using Monte Carlo Integration

    Dijk, Herman; Kloek, Teun

    1980-01-01

    An earlier paper [Kloek and Van Dijk (1978)] is extended in three ways. First, Monte Carlo integration is performed in a nine-dimensional parameter space of Klein's model I [Klein (1950)]. Second, Monte Carlo is used as a tool for the elicitation of a uniform prior on a finite region by making use of several types of prior information. Third, special attention is given to procedures for the construction of importance functions which make use of nonlinear optimization methods.
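
    In its simplest form, the machinery referred to above is self-normalized importance sampling of posterior moments. The sketch below uses a one-parameter Gaussian location model with a flat prior and an overdispersed Student-t importance function centred at the posterior mode; the model, data, and tuning constants are all illustrative rather than Klein's model I.

```python
import numpy as np

rng = np.random.default_rng(5)

# Illustrative data: Gaussian observations with known unit variance,
# so the exact posterior under a flat prior is N(mean(y), 1/n).
y = rng.normal(1.5, 1.0, 30)
mode, scale = y.mean(), 1.0 / np.sqrt(y.size)

def log_post(theta):
    return -0.5 * np.sum((y[None, :] - theta[:, None]) ** 2, axis=1)

# Importance function: Student-t(5), centred at the mode, overdispersed.
df, spread = 5, 1.5 * scale
draws = mode + spread * rng.standard_t(df, 100_000)
log_q = -0.5 * (df + 1) * np.log1p(((draws - mode) / spread) ** 2 / df)

lw = log_post(draws) - log_q                 # log importance weights
w = np.exp(lw - lw.max())                    # stabilised, self-normalised
mean = np.sum(w * draws) / np.sum(w)
sd = np.sqrt(np.sum(w * (draws - mean) ** 2) / np.sum(w))
print(f"IS posterior mean {mean:.4f} (exact {mode:.4f})")
print(f"IS posterior sd   {sd:.4f} (exact {scale:.4f})")
```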

  2. Monte Carlo analysis of radiative transport in oceanographic lidar measurements

    Cupini, E.; Ferro, G. [ENEA, Divisione Fisica Applicata, Centro Ricerche Ezio Clementel, Bologna (Italy); Ferrari, N. [Bologna Univ., Bologna (Italy). Dipt. Ingegneria Energetica, Nucleare e del Controllo Ambientale

    2001-07-01

    The analysis of oceanographic lidar system measurements is often carried out with semi-empirical methods, since there is only a rough understanding of the effects of many environmental variables. The development of techniques for assessing the accuracy of lidar measurements is needed to evaluate the effects of various environmental situations, as well as of different experimental geometric configurations and boundary conditions. A Monte Carlo simulation model represents a tool that is particularly well suited for answering these important questions. The PREMAR-2F Monte Carlo code has been developed taking into account the main molecular and non-molecular components of the marine environment. The laser radiation interaction processes of diffusion, re-emission, refraction, and absorption are treated. In particular, the following are considered: Rayleigh elastic scattering, produced by atoms and molecules with small dimensions with respect to the laser emission wavelength (i.e. water molecules); Mie elastic scattering, arising from atoms or molecules with dimensions comparable to the laser wavelength (hydrosols); Raman inelastic scattering, typical of water; absorption by water and by inorganic (sediments) and organic (phytoplankton and CDOM) hydrosols; and the fluorescence re-emission of chlorophyll and yellow substances. PREMAR-2F is an extension of a code for the simulation of radiative transport in atmospheric environments (PREMAR-2). The approach followed in PREMAR-2 was to combine conventional Monte Carlo techniques with analytical estimates of the probability that the receiver has a contribution from photons coming back, after an interaction, into the field of view of the lidar fluorosensor collecting apparatus. This offers an effective means of modelling a lidar system with realistic geometric constraints. The resulting semianalytic Monte Carlo radiative transfer model has been developed in the frame of the Italian Research Program for Antarctica (PNRA) and it is

  3. A 3D photon superposition/convolution algorithm and its foundation on results of Monte Carlo calculations

    Ulmer, W.; Pyyry, J.; Kaissl, W.

    2005-04-01

    Based on previous publications on a triple Gaussian analytical pencil beam model and on Monte Carlo calculations using the Monte Carlo codes GEANT-Fluka (versions 95, 98, 2002) and BEAMnrc/EGSnrc, a three-dimensional (3D) superposition/convolution algorithm for photon beams (6 MV, 18 MV) is presented. Tissue heterogeneity is taken into account by the electron density information of CT images. A clinical beam consists of a superposition of divergent pencil beams. A slab geometry was used as a phantom model to test computed results against measurements. An essential result is the existence of further dose build-up and build-down effects in the domain of density discontinuities. These effects have increasing magnitude for small field sizes and densities <= 0.25 g cm-3, in particular with regard to field sizes considered in stereotaxy. They could be confirmed by measurements (mean standard deviation 2%). A practical impact is the dose distribution at transitions from bone to soft tissue, lung, or cavities. This work has partially been presented at WC 2003, Sydney.
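
    The superposition/convolution principle itself is compact: the broad-beam dose in a plane is the fluence map convolved with the lateral pencil-beam kernel. The sketch below does this in 2D with a three-Gaussian kernel in the spirit of the triple-Gaussian model; the weights, widths, and field size are invented, and divergence and density scaling are ignored.

```python
import numpy as np
from scipy.signal import fftconvolve

# Lateral grid [mm] for one depth plane of a homogeneous phantom.
x = np.linspace(-50.0, 50.0, 201)
X, Y = np.meshgrid(x, x)
r2 = X**2 + Y**2

# Triple-Gaussian pencil-beam kernel: illustrative weights and sigmas [mm].
w = (0.70, 0.25, 0.05)
s = (2.0, 6.0, 15.0)
kernel = sum(wi / (2 * np.pi * si**2) * np.exp(-r2 / (2 * si**2))
             for wi, si in zip(w, s))

# Uniform fluence over a 30 mm x 30 mm field; dose = fluence (*) kernel.
fluence = ((np.abs(X) < 15.0) & (np.abs(Y) < 15.0)).astype(float)
dose = fftconvolve(fluence, kernel, mode="same") * (x[1] - x[0]) ** 2

profile = dose[100]                       # lateral profile through the axis
inside = x[profile > 0.5 * profile.max()]
print(f"profile FWHM ~ {inside[-1] - inside[0]:.1f} mm for the 30 mm field")
```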

  4. Fully 3D tomographic reconstruction by Monte Carlo simulation of the system matrix in preclinical PET with iodine 124

    Immuno-PET imaging can be used to assess pharmacokinetics in radioimmunotherapy. When using iodine-124, quantitative PET imaging is limited by physics-based degrading factors within the detection system and the object, such as the long positron range in water and the complex spectrum of gamma photons. The objective of this thesis was to develop a fully 3D tomographic reconstruction method (S(MC)2PET) using Monte Carlo simulations for estimating the system matrix, in the context of preclinical imaging with iodine-124. The Monte Carlo simulation platform GATE was used for that purpose. System matrices of several levels of complexity were calculated, all including at least a model of the PET system response function. Physical processes in the object were either neglected or taken into account using a precise or a simplified object description. The impact of modelling refinement and of the statistical variance of the system matrix elements was evaluated on the final reconstructed images. These studies showed that a high level of complexity did not always improve qualitative and quantitative results, owing to the high variance of the associated system matrices. (author)
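
    The two ingredients named above - a Monte Carlo estimated system matrix and an iterative statistical reconstruction - can be miniaturized to a 1D toy. Below, each column of the matrix is estimated by simulating decays in one voxel through an invented blurring, lossy "detector", and the activity is recovered with standard MLEM iterations; every number in the sketch is illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

n_vox = n_bin = 32
n_sim = 20_000                         # simulated decays per voxel

def detect(j):
    # Toy detection physics: 20% loss and a Gaussian positional blur.
    if rng.random() < 0.2:
        return None
    i = int(round(j + rng.normal(0.0, 1.5)))
    return i if 0 <= i < n_bin else None

# Monte Carlo estimate of the system matrix A[i, j] = P(bin i | decay in j).
A = np.zeros((n_bin, n_vox))
for j in range(n_vox):
    for _ in range(n_sim):
        i = detect(j)
        if i is not None:
            A[i, j] += 1.0 / n_sim

truth = np.zeros(n_vox)
truth[10:14], truth[22] = 50.0, 80.0
counts = rng.poisson(A @ truth)        # one noisy acquisition

x = np.ones(n_vox)                     # MLEM reconstruction
sens = np.maximum(A.sum(axis=0), 1e-12)
for _ in range(100):
    x *= (A.T @ (counts / np.maximum(A @ x, 1e-12))) / sens

print(np.round(x, 1))
```

    With finite n_sim the columns of A carry statistical noise, which is precisely the variance effect studied above: increasing n_sim smooths the reconstruction at the cost of simulation time.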

  5. OMEGA, Subcritical and Critical Neutron Transport in General 3-D Geometry by Monte-Carlo

    1 - Description of problem or function: OMEGA is a Monte Carlo code for the solution of the stationary neutron transport equation with k-eff as the eigenvalue. A three-dimensional geometry is permitted, consisting of a very general arrangement of three basic shapes (columns with circular, rectangular, or hexagonal cross section, with a finite height and different material layers along their axes). The main restriction is that all the basic shapes must have parallel axes. Most real arrangements of fissile material inside and outside a reactor (e.g., in a fuel storage or transport container) can be described without approximation. The main field of application is the estimation of criticality safety. Many years of experience and comparison with reference cases have shown that the code, together with the built-in cross section libraries, gives reliable results. The following results can be calculated: - the effective multiplication factor k-eff; - the flux distribution; - reaction rates; - spatially and energetically condensed cross sections for later use in a subsequent OMEGA run. A running job may be interrupted and continued later, possibly with an increased number of batches for improved statistical accuracy. The geometry as well as the k-eff results may be visualized. The use of the code is demonstrated by many illustrative examples. 2 - Method of solution: The Monte Carlo method is used, with neutrons starting from an initial source distribution. The histories of a generation (or batch) of neutrons are followed from collision to collision until the histories are terminated by capture, fission, or leakage. For the solution of the eigenvalue problem, the starting positions of the neutrons for a given generation are determined by the fission points of the preceding generation. The summation of the results starts only after some initial generations, when the spatial part of the fission source has converged. At present the code uses the BNAB-78 subgroup library of the

  6. Monte-Carlo Application for Nondestructive Nuclear Waste Analysis

    Carasco, C.; Engels, R.; Frank, M.; Furletov, S.; Furletova, J.; Genreith, C.; Havenith, A.; Kemmerling, G.; Kettler, J.; Krings, T.; Ma, J.-L.; Mauerhofer, E.; Neike, D.; Payan, E.; Perot, B.; Rossbach, M.; Schitthelm, O.; Schumann, M.; Vasquez, R.

    2014-06-01

    Radioactive waste has to undergo a process of quality checking in order to verify its conformance with national regulations prior to transport, intermediate storage, and final disposal. Within the quality checking of radioactive waste packages, non-destructive assays are required to characterize their radio-toxic and chemo-toxic contents. The Institute of Energy and Climate Research - Nuclear Waste Management and Reactor Safety of the Forschungszentrum Jülich develops, in the framework of cooperations, nondestructive analytical techniques for the routine characterization of radioactive waste packages at industrial scale. During the phase of research and development, Monte Carlo techniques are used to simulate the transport of particles, especially photons, electrons, and neutrons, through matter and to obtain the response of detection systems. The radiological characterization of low and intermediate level radioactive waste drums is performed by segmented γ-scanning (SGS). To precisely and accurately reconstruct the isotope-specific activity content in waste drums from SGS measurements, an innovative method called SGSreco was developed. The Geant4 code was used to simulate the response of the collimated detection system for waste drums with different activity and matrix configurations. These simulations allow a far more detailed optimization, validation, and benchmarking of SGSreco, since the construction of test drums covering a broad range of activity and matrix properties is time consuming and cost intensive. The MEDINA (Multi Element Detection based on Instrumental Neutron Activation) test facility was developed to identify and quantify non-radioactive elements and substances in radioactive waste drums. MEDINA is based on prompt and delayed gamma neutron activation analysis (P&DGNAA) using a 14 MeV neutron generator. MCNP simulations were carried out to study the response of the MEDINA facility in terms of gamma spectra and the time dependence of the neutron energy spectrum

  7. A Markov chain Monte Carlo analysis of the CMSSM

    We perform a comprehensive exploration of the Constrained MSSM parameter space employing a Markov chain Monte Carlo technique and a Bayesian analysis. We compute superpartner masses and other collider observables, as well as the cold dark matter abundance, and compare them with experimental data. We include uncertainties arising from theoretical approximations as well as from residual experimental errors of relevant Standard Model parameters. We delineate probability distributions of the CMSSM parameters, the collider and cosmological observables, and the dark matter direct detection cross section. We derive 68% probability intervals for the CMSSM parameters (including m1/2 and m0), for the gluino, squark, and chargino masses, for BR(Bs→μ+μ-), for the SUSY contribution to (g-2)μ, and for the spin-independent WIMP-proton cross section σpSI relevant for direct WIMP detection. We highlight a complementarity between LHC and WIMP dark matter searches in exploring the CMSSM parameter space. We further expose a number of correlations among the observables, in particular between BR(Bs→μ+μ-) and BR(B-bar→Xsγ) or σpSI. Once SUSY is discovered, this and other correlations may prove helpful in distinguishing the CMSSM from other supersymmetric models. We investigate the robustness of our results in terms of the assumed ranges of CMSSM parameters and the effect of the (g-2)μ anomaly, which shows some tension with the other observables. We find that the results for m0, and for the observables which strongly depend on it, are sensitive to our assumptions, while our conclusions for the other variables are robust.

  8. Benchmark analysis of the TRIGA MARK II research reactor using Monte Carlo techniques

    This study deals with the neutronic analysis of the current core configuration of the 3-MW TRIGA MARK II research reactor at the Atomic Energy Research Establishment (AERE), Savar, Dhaka, Bangladesh, and the validation of the results by benchmarking against experimental, operational, and available Final Safety Analysis Report (FSAR) values. The 3-D continuous-energy Monte Carlo code MCNP4C was used to develop a versatile and accurate full-core model of the TRIGA core. The model represents in detail all components of the core with literally no physical approximation. All fresh fuel and control elements as well as the vicinity of the core were precisely described. Continuous-energy cross-section data from ENDF/B-VI and ENDF/B-V and S(α,β) scattering functions from the ENDF/B-VI library were used. The consistency and accuracy of both the Monte Carlo simulation and the neutron transport physics were established by benchmarking against the TRIGA experiments. The effective multiplication factor, power distribution and peaking factors, neutron flux distribution, and reactivity experiments comprising control rod worths, critical rod height, excess reactivity, and shutdown margin were used in the validation process. The MCNP predictions and the experimentally determined values are found to be in very good agreement, which indicates that the simulation of the TRIGA reactor is treated adequately.

  9. Benchmark of Atucha-2 PHWR RELAP5-3D control rod model by Monte Carlo MCNP5 core calculation

    Pecchia, M.; D'Auria, F. [San Piero A Grado Nuclear Research Group GRNSPG, Univ. of Pisa, via Diotisalvi, 2, 56122 - Pisa (Italy)]; Mazzantini, O. [Nucleoelectrica Argentina Sociedad Anonima NA-SA, Buenos Aires (Argentina)]

    2012-07-01

    Atucha-2 is a Siemens-designed PHWR reactor under construction in the Republic of Argentina. Its geometrical complexity and peculiarities require the adoption of advanced Monte Carlo codes for performing realistic neutronic simulations. Therefore, core models of the Atucha-2 PHWR were developed using MCNP5. In this work a methodology was set up to collect the flux in the hexagonal mesh by which the Atucha-2 core is represented. The scope of this activity is to evaluate the effect of obliquely inserted control rods on the neutron flux, in order to validate the RELAP5-3D©/NESTLE three-dimensional neutron kinetics coupled thermal-hydraulic model, applied by GRNSPG/UNIPI for performing selected transients of Chapter 15 of the FSAR of Atucha-2. (authors)

  10. Incorporation of electron tunnelling phenomenon into 3D Monte Carlo simulation of electrical percolation in graphite nanoplatelet composites

    The percolation threshold problem in insulating polymers filled with exfoliated conductive graphite nanoplatelets (GNPs) is re-examined in this 3D Monte Carlo simulation study. GNPs are modelled as solid discs wrapped by electrically conductive layers of a certain thickness, which represents half of the electron tunnelling distance. Two scenarios of 'impenetrable' and 'penetrable' GNPs are implemented in the simulations. The percolation thresholds for both scenarios are plotted versus the electron tunnelling distance for various GNP thicknesses. The assumption of successful dispersion and exfoliation, and the incorporation of the electron tunnelling phenomenon in the impenetrable simulations, suggest that the simulated percolation thresholds are lower bounds for any experimental study. Finally, the simulation results are discussed and compared with other experimental studies.
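
    The geometric rule behind such simulations is easy to demonstrate: two fillers are connected when their conductive shells overlap, and the composite percolates when a connected cluster spans the sample. For brevity the sketch below uses spherical fillers instead of discs, checked with a union-find search; the box size, core radius, and tunnelling distance are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

L_box, r_core, d_tunnel = 10.0, 0.3, 0.2   # illustrative units
reach = 2 * (r_core + d_tunnel / 2)        # contact distance between centres

def percolates(n):
    pts = rng.uniform(0.0, L_box, (n, 3))
    parent = list(range(n))
    def find(a):                           # union-find with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    d2 = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
    for a, b in zip(*np.nonzero(d2 < reach**2)):
        if a < b:
            parent[find(a)] = find(b)
    left = {find(i) for i in range(n) if pts[i, 0] < reach}
    right = {find(i) for i in range(n) if pts[i, 0] > L_box - reach}
    return bool(left & right)              # a cluster spans the box in x

for n in (800, 1100, 1400, 1700):
    prob = np.mean([percolates(n) for _ in range(10)])
    print(f"n = {n:4d} fillers -> spanning probability ~ {prob:.1f}")
```

    Sweeping d_tunnel reproduces the qualitative trend discussed above: a larger tunnelling distance (thicker conductive shell) shifts the spanning transition to lower filler counts.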

  11. TIMOC-72, 3-D Time-Dependent Homogeneous or Inhomogeneous Neutron Transport by Monte-Carlo

    1 - Nature of physical problem solved: TIMOC solves the energy- and time-dependent (or stationary), homogeneous or inhomogeneous neutron transport equation in three-dimensional geometries. The program can treat all commonly used scattering kernels, such as absorption, fission, isotropic and anisotropic elastic scattering, level excitation, the evaporation model, and the energy transfer matrix model, which includes (n,2n) reactions. The exchangeable geometry routines consist at present of (a) periodic multilayer slab, spherical, and cylindrical lattices, (b) an elaborate three-dimensional cylindrical geometry which allows all kinds of subdivisions, (c) the very flexible O5R geometry routine, which is able to describe any combination of bodies with surfaces of second order. The program samples the stationary or time-energy-region dependent fluxes, the transmission ratios between geometrical regions, and the following integral quantities or eigenvalues: the leakage rate, the slowing-down density, the production-to-source ratio, the multiplication factor based on flux and collision estimators, the mean production time, the mean destruction time, the time distribution of production and destruction, the fission rates, the energy-dependent absorption rates, and the energy deposition due to elastic scattering for the different geometrical regions. 2 - Method of solution: TIMOC is a Monte Carlo program and uses several, partially optional, variance reducing techniques, such as the method of expected values (weight factors), Russian roulette, the method of fractional generated neutrons, double sampling, semi-systematic sampling, and the method of expected leakage probability. Within the neutron lifetime, a discrete energy value is assigned after each collision process. The nuclear data input is, however, done with group-averaged cross sections. The program can generate the neutron fluxes either resulting from an external source or in the form of fundamental mode distributions by a special

  12. MONTE CARLO SIMULATION APPLIED TO ECONOMIC AND FINANCIAL ANALYSIS OF AN AGRIBUSINESS PROJECT

    Danilo Simões; Lucas Raul Scherrer

    2014-01-01

    In practice, all management decisions involving an organization, regardless of size, carry uncertainties which lead to different levels of risk. Monte Carlo simulation allows risk analysis by designing probabilistic models. From a deterministic model of economic viability indicators, commonly used for investment project decisions, a probabilistic model was developed with Monte Carlo method simulations in order to carry out the economic and financial analysis of an agroindustrial ...
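
    The general recipe - re-evaluating a deterministic viability indicator under random draws of its uncertain inputs - fits in a few lines. The net-present-value model below is a generic illustration with invented cash flows and distributions, not the data of the study.

```python
import numpy as np

rng = np.random.default_rng(8)

n = 100_000                                     # Monte Carlo trials
invest = 1_000_000.0                            # initial outlay
years = np.arange(1, 11)                        # 10-year horizon

# Uncertain inputs (all distributions are invented for the sketch).
rate = rng.triangular(0.08, 0.10, 0.14, n)      # discount rate
level = rng.normal(180_000.0, 30_000.0, (n, 1)) # base annual net revenue
cash = level * 1.02 ** (years - 1)              # mild nominal growth

npv = np.sum(cash / (1.0 + rate[:, None]) ** years, axis=1) - invest
print(f"mean NPV       : {npv.mean():12,.0f}")
print(f"P(NPV < 0)     : {np.mean(npv < 0.0):12.3f}")
print(f"5th percentile : {np.percentile(npv, 5):12,.0f}")
```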

  13. Effects of stochastic noise on a three-dimensional Monte Carlo depletion analysis of the H.B. Robinson reactor

    Monte Carlo depletion calculations for nuclear reactors are affected by the presence of stochastic noise in the local flux estimates produced during the calculation. The effects of this random noise and its propagation between timesteps during long depletion simulations are not well understood. To improve this understanding, a series of Monte Carlo depletion simulations have been conducted for a 3-D, eighth-core model of the H.B. Robinson PWR. The studies were performed by using the in-line depletion capability of the MC21 Monte Carlo code to produce multiple independent depletion simulations. Global and local results from each simulation are compared in order to determine the variance among the different depletion realizations. These comparisons indicate that global quantities, such as eigenvalue (keff), do not tend to diverge among the independent depletion calculations. However, local quantities, such as fuel concentration, can deviate wildly between independent depletion realizations, especially at high burnup levels. Analysis and discussion of the results from the study are provided, along with several new observations regarding the propagation of random noise during Monte Carlo depletion calculations. (author)

  14. Development of a 3D program for calculation of multigroup Dancoff factor based on Monte Carlo method in cylindrical geometry

    Highlights: • The code works on the basis of Monte Carlo and escape probability methods. • The sensitivity of the Dancoff factor to the number of energy groups and to the type and arrangement of neighboring fuels is considered. • The sensitivity of the Dancoff factor to the control rod height is considered. • High efficiency is achieved through the method of sampling the neutron flight direction from the fuel surface. • The sensitivity of k to the Dancoff factor is considered. - Abstract: The evaluation of multigroup constants in reactor calculations depends on several parameters; among them, the Dancoff factor is used for the calculation of the resonance integral as well as of the flux depression in the resonance region in heterogeneous systems. This paper focuses on the computer program (MCDAN-3D) developed for the calculation of the multigroup black and gray Dancoff factors in three-dimensional geometry, based on Monte Carlo and escape probability methods. The developed program is capable of calculating the Dancoff factor for an arbitrary arrangement of fuel rods with different cylindrical fuel dimensions and control rods of various lengths inserted in the reactor core. The program calculates the black and gray Dancoff factors for generated neutron fluxes of cosine and constant shape in the axial fuel direction. The effects of clad and moderator are examined by studying the sensitivity of the Dancoff factor to variations of the fuel arrangement and of the neutron energy group for CANDU37 and VVER1000 fuel assemblies. The MCDAN-3D outcomes show excellent agreement with the MCNPX code. The calculated Dancoff factors are then used for cell criticality calculations with the WIMS code.

  15. Analytical band Monte Carlo analysis of electron transport in silicene

    Yeoh, K. H.; Ong, D. S.; Ooi, C. H. Raymond; Yong, T. K.; Lim, S. K.

    2016-06-01

    An analytical band Monte Carlo (AMC) with linear energy band dispersion has been developed to study the electron transport in suspended silicene and silicene on aluminium oxide (Al2O3) substrate. We have calibrated our model against the full band Monte Carlo (FMC) results by matching the velocity-field curve. Using this model, we discover that the collective effects of charge impurity scattering and surface optical phonon scattering can degrade the electron mobility down to about 400 cm2 V‑1 s‑1 and thereafter it is less sensitive to the changes of charge impurity in the substrate and surface optical phonon. We also found that further reduction of mobility to ∼100 cm2 V‑1 s‑1 as experimentally demonstrated by Tao et al (2015 Nat. Nanotechnol. 10 227) can only be explained by the renormalization of Fermi velocity due to interaction with Al2O3 substrate.

  16. Accelerated Monte Carlo Simulation for Safety Analysis of the Advanced Airspace Concept

    Thipphavong, David

    2010-01-01

    Safe separation of aircraft is a primary objective of any air traffic control system. An accelerated Monte Carlo approach was developed to assess the level of safety provided by a proposed next-generation air traffic control system. It combines features of fault tree and standard Monte Carlo methods. It runs more than one order of magnitude faster than the standard Monte Carlo method while providing risk estimates that only differ by about 10%. It also preserves component-level model fidelity that is difficult to maintain using the standard fault tree method. This balance of speed and fidelity allows sensitivity analysis to be completed in days instead of weeks or months with the standard Monte Carlo method. Results indicate that risk estimates are sensitive to transponder, pilot visual avoidance, and conflict detection failure probabilities.

  17. Development of an analysis software for comparison between proton treatment planning system and Monte Carlo simulation

    Kim, Dae Hyun; Suh, Tae Suk [Dept. of Biomedical Engineering, College of Medicine, The Catholic University of Korea, Seoul (Korea, Republic of); Park, Sey Joon; Yoo, Seung Hoon; Lee, Se Byeong [Proton Therapy Center, National Cancer Center, Goyang (Korea, Republic of); Shin, Jung Wook [Dept. of Radiation Oncology, University of California, San Francisco (United States)

    2011-11-15

    Currently, many proton therapy facilities are used for radiotherapy to treat cancer. The main advantage of proton therapy is the absence of exit dose, which offers a highly conformal dose to the treatment target as well as better normal organ sparing. Most treatment planning systems (TPS) in proton therapy calculate the dose distribution using a pencil beam algorithm (PBA). The PBA is suitable for clinical proton therapy because of its fast computation time. However, the PBA shows accuracy limitations, mainly because of the one-dimensional density scaling of proton pencil beams in water. Recently, we developed Monte Carlo simulation tools for the design of the proton therapy facility at the National Cancer Center (NCC) using the GEANT4 toolkit (version GEANT4.9.2p02). Monte Carlo simulation is expected to reproduce the precise influence of complex geometries and material varieties, which is difficult to introduce into the PBA. The data format of the Monte Carlo simulation results differs from DICOM-RT; consequently, we need analysis software for comparing TPS results with Monte Carlo simulations. The main objective of this research is to develop an analysis toolkit for verifying the precision and accuracy of the proton treatment planning system and to analyze the dose calculation algorithm of proton therapy using Monte Carlo simulation. In this work, we developed analysis software for GEANT4-based medical applications. This toolkit is capable of evaluating the accuracy of the dose calculated by the TPS against Monte Carlo simulation.
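
    The basic check such a toolkit performs can be sketched as a voxel-wise gamma-index comparison between a TPS dose curve and a Monte Carlo dose curve. Both depth-dose curves below are synthetic stand-ins (a crude Bragg-peak shape with a 1 mm shift and noise between them); the 3%/3 mm criterion is simply the common clinical choice.

```python
import numpy as np

rng = np.random.default_rng(14)

z = np.arange(0.0, 160.0, 1.0)                   # depth [mm]

def bragg(peak_depth):
    # Crude Bragg-like curve: slow rise, sharp distal falloff (illustrative).
    curve = np.where(z <= peak_depth, 0.3 + 0.7 * (z / peak_depth) ** 3, 0.0)
    return curve / curve.max()

d_tps = bragg(120.0)                             # "pencil beam" curve
d_mc = bragg(121.0) + rng.normal(0.0, 0.005, z.size)  # "Monte Carlo" curve

dd, dta = 0.03, 3.0                              # 3% dose / 3 mm distance
gamma = np.empty(z.size)
for i in range(z.size):                          # 1D gamma index per point
    g2 = ((d_mc - d_tps[i]) / dd) ** 2 + ((z - z[i]) / dta) ** 2
    gamma[i] = np.sqrt(g2.min())

print(f"gamma pass rate (gamma <= 1): {np.mean(gamma <= 1.0):.1%}")
```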

  1. Optimization of scintillation-detector timing systems using Monte Carlo analysis

    Monte Carlo analysis is used to model the statistical noise associated with scintillation-detector photoelectron emissions and photomultiplier tube operation. Additionally, the impulse response of a photomultiplier tube, front-end amplifier, and constant-fraction discriminator (CFD) is modeled, so that the effects of front-end bandwidth and of the constant-fraction delay and fraction can be evaluated for timing-system optimizations. Such timing-system analysis is useful for detectors having low photoelectron-emission rates, including Bismuth Germanate (BGO) scintillation detectors used in Positron Emission Tomography (PET) systems. Monte Carlo timing resolution for a BGO/photomultiplier scintillation detector with a CFD timing system is presented as a function of constant-fraction delay for 511-keV coincident gamma rays in the presence of Compton scatter. Monte Carlo results are in good agreement with measured results when a tri-exponential BGO scintillation model is used. The Monte Carlo simulation is extended to include CFD energy-discrimination performance. Monte Carlo energy-discrimination performance is experimentally verified along with timing performance (Monte Carlo timing resolution of 3.22 ns FWHM versus measured resolution of 3.30 ns FWHM) for a front-end rise time of 10 ns (10-90%), CFD delay of 8 ns, and CFD fraction of 20%.
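
    The chain being optimized can be emulated end to end: draw photoelectron times from a scintillation pulse shape, shape the anode signal with a front-end response, and locate the CFD zero crossing. The sketch below uses a bi-exponential pulse rather than the tri-exponential fit mentioned above, and the photoelectron yield, transit-time spread, and shaping constant are invented.

```python
import numpy as np

rng = np.random.default_rng(9)

TAU_R, TAU_D = 2.0, 300.0          # scintillation rise/decay [ns] (BGO-like)
N_PE = 300                         # photoelectrons per event, illustrative
DT = 0.1                           # time grid [ns]
BINS = np.arange(0.0, 400.0 + DT, DT)
KERNEL = np.exp(-np.arange(0.0, 50.0, DT) / 10.0)  # front-end response

def trigger_time(delay, fraction=0.2):
    # Sum of two exponentials samples the bi-exponential pulse shape.
    t_pe = rng.exponential(TAU_R, N_PE) + rng.exponential(TAU_D, N_PE)
    t_pe += rng.normal(0.0, 1.0, N_PE)             # PMT transit-time spread
    pulse, _ = np.histogram(t_pe, bins=BINS)
    sig = np.convolve(pulse, KERNEL)[:pulse.size]  # shaped anode signal
    d = int(round(delay / DT))
    cfd = fraction * sig[d:] - sig[:-d]            # attenuated minus delayed
    cross = np.nonzero((cfd[:-1] > 0) & (cfd[1:] <= 0))[0]
    return DT * (cross[0] + d) if cross.size else np.nan

for delay in (4.0, 8.0, 16.0):
    t = np.array([trigger_time(delay) for _ in range(200)])
    print(f"CFD delay {delay:4.1f} ns -> timing FWHM ~ {2.355 * np.nanstd(t):.2f} ns")
```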

  2. Monte-Carlo based uncertainty analysis: Sampling efficiency and sampling convergence

    Monte Carlo analysis has become nearly ubiquitous since its introduction, now over 65 years ago. It is an important tool in many assessments of the reliability and robustness of systems, structures, or solutions. As the deterministic core simulation can be lengthy, the computational cost of Monte Carlo can be a limiting factor. To reduce that computational expense as much as possible, sampling efficiency and convergence for Monte Carlo are investigated in this paper. The first section shows that non-collapsing space-filling sampling strategies, illustrated here with the maximin and uniform Latin hypercube designs, greatly enhance the sampling efficiency and render a desired level of accuracy of the outcomes attainable with far fewer runs. In the second section it is demonstrated that standard sampling statistics are inapplicable to Latin hypercube strategies. A sample-splitting approach is put forward, which in combination with replicated Latin hypercube sampling allows assessing the accuracy of Monte Carlo outcomes. The assessment in turn permits halting the Monte Carlo simulation when the desired levels of accuracy are reached. Both measures form fairly noncomplex upgrades of the current state of the art in Monte-Carlo based uncertainty analysis but give substantial further progress with respect to its applicability.
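
    Both points - the efficiency gain of Latin hypercube designs and the need for replication to judge their accuracy - show up in a few lines of code. The sketch below compares the replicate-to-replicate spread of plain Monte Carlo and Latin hypercube estimates of the mean of an invented 5-input model; since the usual iid variance formula does not apply to LHS, the spread across replicates plays the role of the error estimate.

```python
import numpy as np
from scipy.stats import qmc

rng = np.random.default_rng(10)

dim, n, reps = 5, 100, 200

def model(u):                          # u in the unit hypercube
    return np.sum(u**2, axis=1)        # true mean = dim / 3

est_mc, est_lhs = [], []
for k in range(reps):                  # replicated designs of equal size
    est_mc.append(model(rng.random((n, dim))).mean())
    lhs = qmc.LatinHypercube(d=dim, seed=k).random(n)
    est_lhs.append(model(lhs).mean())

print(f"true mean  {dim / 3:.4f}")
print(f"plain MC : mean {np.mean(est_mc):.4f}, replicate spread {np.std(est_mc):.4f}")
print(f"LHS      : mean {np.mean(est_lhs):.4f}, replicate spread {np.std(est_lhs):.4f}")
```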

  3. pyNSMC: A Python Module for Null-Space Monte Carlo Uncertainty Analysis

    White, J.; Brakefield, L. K.

    2015-12-01

    The null-space Monte Carlo technique is a non-linear uncertainty analysis technique that is well suited to high-dimensional inverse problems. While the technique is powerful, the existing workflow for completing null-space Monte Carlo is cumbersome, requiring the use of multiple command-line utilities, several sets of intermediate files, and even a text editor. pyNSMC is an open-source Python module that automates the workflow of null-space Monte Carlo uncertainty analyses. The module is fully compatible with the PEST and PEST++ software suites and leverages existing functionality of pyEMU, a Python framework for linear-based uncertainty analyses. pyNSMC greatly simplifies the existing workflow for null-space Monte Carlo by taking advantage of object-oriented design facilities in Python. The core of pyNSMC is the ensemble class, which draws and stores realized random vectors and also provides functionality for exporting and visualizing results. By relieving users of the tedium associated with file handling and command-line utility execution, pyNSMC instead focuses the user on the important steps and assumptions of null-space Monte Carlo analysis. Furthermore, pyNSMC facilitates learning through flow charts and results visualization, which are available at many points in the algorithm. The ease of use of the pyNSMC workflow is compared to the existing workflow for null-space Monte Carlo for a synthetic groundwater model with hundreds of estimable parameters.
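
    Independent of the PEST tooling, the central null-space Monte Carlo draw can be sketched with plain linear algebra: random parameter fluctuations are projected onto the (near-)null space of the model Jacobian, so each realization perturbs the parameters while leaving the simulated observations almost unchanged. The toy Jacobian, dimensions, and ensemble size below are all invented.

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy inverse problem: 8 observations, 40 parameters -> a 32-dim null space.
n_obs, n_par = 8, 40
J = rng.normal(size=(n_obs, n_par))      # model Jacobian at calibration
p_cal = rng.normal(size=n_par)           # calibrated parameter vector

# Null-space basis from the SVD: right singular vectors beyond the rank.
U, s, Vt = np.linalg.svd(J)
rank = int(np.sum(s > 1e-10 * s[0]))
V2 = Vt[rank:].T                         # (n_par, n_par - rank)

# Draw realizations by keeping only the null-space component of each vector.
ensemble = np.array([p_cal + V2 @ (V2.T @ rng.normal(size=n_par))
                     for _ in range(500)])

# To first order the simulated observations are unchanged:
print("max |J (p - p_cal)| over ensemble:",
      float(np.abs((ensemble - p_cal) @ J.T).max()))
```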

  4. Finite-Time Analysis of Stratified Sampling for Monte Carlo

    Carpentier, Alexandra; Munos, Rémi

    2011-01-01

    We consider the problem of stratified sampling for Monte Carlo integration. We model this problem in a multi-armed bandit setting, where the arms represent the strata, and the goal is to estimate a weighted average of the mean values of the arms. We propose a strategy that samples the arms according to an upper bound on their standard deviations and compare its estimation quality to an ideal allocation that would know the standard deviations of the strata. We provide...
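
    The allocation rule at the heart of such strategies is easy to state in code: with stratum weights w_k and standard deviations σ_k, the (Neyman) allocation draws n_k ∝ w_k·σ_k samples from stratum k. The sketch below uses known σ_k and invented Gaussian strata, whereas the bandit strategy discussed above must learn the σ_k from upper confidence bounds.

```python
import numpy as np

rng = np.random.default_rng(12)

# Invented strata: weights, means, and (assumed known) standard deviations.
weights = np.array([0.4, 0.3, 0.2, 0.1])
means = np.array([0.0, 2.0, -1.0, 5.0])
stds = np.array([0.1, 3.0, 0.5, 1.0])
budget = 10_000

# Neyman allocation: samples proportional to weight * std, at least 2 each.
alloc = np.maximum((budget * weights * stds
                    / np.sum(weights * stds)).astype(int), 2)

est, var = 0.0, 0.0
for w, m, s, nk in zip(weights, means, stds, alloc):
    x = rng.normal(m, s, nk)               # draws from this stratum
    est += w * x.mean()
    var += w**2 * x.var(ddof=1) / nk       # stratified variance estimate

print(f"estimate {est:.4f} +/- {np.sqrt(var):.4f} (true {np.sum(weights * means):.4f})")
```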

  5. IM3D: A parallel Monte Carlo code for efficient simulations of primary radiation displacements and damage in 3D geometry

    Li, Yong Gang; Yang, Yang; Short, Michael P.; Ding, Ze Jun; Zeng, Zhi; Li, Ju

    2015-12-01

    SRIM-like codes have limitations in describing general 3D geometries when modeling radiation displacements and damage in nanostructured materials. A universal, computationally efficient, and massively parallel 3D Monte Carlo code, IM3D, has been developed with excellent parallel scaling performance. IM3D is based on fast indexing of scattering integrals and the SRIM stopping power database, and allows the user a choice of the Constructive Solid Geometry (CSG) or Finite Element Triangle Mesh (FETM) method for constructing 3D shapes and microstructures. For 2D films and multilayers, IM3D perfectly reproduces SRIM results, and can be ∼10² times faster in serial execution and >10⁴ times faster using parallel computation. For 3D problems, it provides a fast approach for analyzing the spatial distributions of primary displacements and defect generation under ion irradiation. Herein we also provide a detailed discussion of our open-source collision cascade physics engine, revealing the true meaning and limitations of the "Quick Kinchin-Pease" and "Full Cascades" options. The issues of femtosecond to picosecond timescales in defining displacement versus damage, and the limitations of the displacements-per-atom (DPA) unit in quantifying radiation damage (such as its inadequacy in quantifying the degree of chemical mixing), are also discussed.

  6. Stratospheric trace gases from SCIAMACHY limb measurements using 3D full spherical Monte Carlo radiative transfer model Tracy-II

    Pukite, Janis [Max- Planck-Institut fuer Chemie, Mainz (Germany); Institute of Atomic Physics and Spectroscopy, University of Latvia (Latvia); Kuehl, Sven; Wagner, Thomas [Max- Planck-Institut fuer Chemie, Mainz (Germany); Deutschmann, Tim; Platt, Ulrich [Institut fuer Umweltphysik, University of Heidelberg (Germany)

    2007-07-01

    A two-step method for the retrieval of stratospheric trace gases (NO2, BrO, OClO) from SCIAMACHY limb observations in the UV/VIS spectral region is presented: first, DOAS is applied to the spectra, yielding slant column densities (SCDs) of the respective trace gases; second, the SCDs are converted into vertical concentration profiles by radiative transfer modeling. The Monte Carlo method benefits from conceptual simplicity and allows realizing the concept of full spherical geometry of the atmosphere, together with its 3D properties, which are important for a realistic description of the limb geometry. The implementation of a 3D box air mass factor concept allows accounting for horizontal gradients of trace gases. An important point is the effect of horizontal gradients on the profile inversion. This is of special interest in polar regions, where the Sun elevation is typically low and the photochemistry can vary strongly along the long absorption paths. We investigate the influence of horizontal gradients by applying 3-dimensional radiative transfer modelling.

  7. An Advanced Neutronic Analysis Toolkit with Inline Monte Carlo capability for VHTR Analysis

    Monte Carlo capability has been combined with a production LWR lattice physics code to allow analysis of high temperature gas reactor configurations, accounting for the double heterogeneity due to the TRISO fuel. The Monte Carlo code MCNP5 has been used in conjunction with CPM3, which was the testbench lattice physics code for this project. MCNP5 is used to perform two calculations for the geometry of interest, one with homogenized fuel compacts and the other with heterogeneous fuel compacts, where the TRISO fuel kernels are resolved by MCNP5.

  8. Shape analysis of blocking dips: Monte Carlo vs. analytical results

    Angular blocking dips around the axis in an Al single crystal of α-particles of about 2 MeV produced at a depth of 0.2 μm are calculated for several values of the mean transverse displacement v⊥τ of the decaying nucleus within the range 0 ≤ v⊥τ ≤ 260 pm. Calculations have been made both by an extensive multistring Monte Carlo simulation and by a continuum model with diffusion. As far as the Monte Carlo method is concerned, the influence of the (small) solid angle of particle emission and of the 'single interaction' approximation has been investigated. The analytical calculations, performed on the basis of a Moliere (thermally averaged) multistring potential, show, for large v⊥τ, a clear dependence of the blocking dips on the recoil direction and a sharp peak at very small angles. The shapes of the dips obtained by the two methods are in overall good agreement, while a very satisfactory comparison has been found for the dip widths and the related parameters used in many lifetime measurements. (author)

  9. Correlating variability of modeling parameters with non-isothermal stack performance: Monte Carlo simulation of a portable 3D planar solid oxide fuel cell stack

    Highlights: • A Monte Carlo simulation of a SOFC stack model is conducted for sensitivity analysis. • The non-isothermal stack model allows fast computation for statistical modeling. • Modeling parameters are ranked in view of their correlations with stack performance. • Rankings are different when varying the parameters simultaneously and individually. • Rankings change with the variability of the parameters and positions in the stack. - Abstract: The development of fuel cells has progressed to portable applications recently. This paper conducts a Monte Carlo simulation (MCS) of a spatially-smoothed non-isothermal model to correlate the performance of a 3D 5-cell planar solid oxide fuel cell (P-SOFC) stack with the variability of modeling parameters regarding material and geometrical properties and operating conditions. The computationally cost-efficient P-SOFC model for the MCS captures the leading-order transport phenomena and electrochemical mechanics of the 3D stack. Sensitivity analysis is carried out in two scenarios: first, by varying modeling parameters individually, and second by varying them simultaneously. The stochastic parameters are ranked according to the strength of their correlations with global and local stack performances. As a result, different rankings are obtained for the two scenarios. Moreover, in the second scenario, the rankings change with the nominal values and variability of the stochastic parameters as well as local positions within the stack, because of compensating or reinforcing effects between the varying parameters. Apart from the P-SOFCs, the present MCS can be extended to other types of fuel cells equipped with parallel flow channels. The fast stack model allows statistical modeling of a large stack of hundreds of cells for high-power applications without a prohibitive computational cost

  10. 3D imaging using combined neutron-photon fan-beam tomography: A Monte Carlo study.

    Hartman, J; Yazdanpanah, A Pour; Barzilov, A; Regentova, E

    2016-05-01

    The application of combined neutron-photon tomography for 3D imaging is examined using MCNP5 simulations for objects of simple shapes and different materials. Two-dimensional transmission projections were simulated for fan-beam scans using 2.5 MeV deuterium-deuterium and 14 MeV deuterium-tritium neutron sources, and high-energy X-ray sources of 1 MeV, 6 MeV and 9 MeV. Photons enable assessment of the electron density and the related mass density, while neutrons aid in estimating the product of the density and the material-specific microscopic cross section; the ratio between the two provides the composition, while CT allows shape evaluation. Using the developed imaging technique, objects and their material compositions have been visualized. PMID: 26953978
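    The density/composition argument can be made concrete for a single ray. The sketch below (Python) assumes illustrative transmissions, a known path length from the CT shape reconstruction, and a roughly material-independent mass attenuation coefficient at MeV photon energies; none of the numbers come from the study.

    ```python
    import math

    x = 10.0          # path length through the object [cm] (from shape recon)
    T_photon  = 0.18  # measured 6 MeV photon transmission (assumed)
    T_neutron = 0.05  # measured 14 MeV neutron transmission (assumed)

    # Photons: T = exp(-mu*x); at MeV energies mu/rho is nearly material-
    # independent (~0.025 cm^2/g assumed here), so the photon ray gives density.
    mu = -math.log(T_photon) / x          # linear attenuation [1/cm]
    rho = mu / 0.025                      # mass density [g/cm^3]

    # Neutrons: T = exp(-Sigma*x) with Sigma = rho*(N_A/A)*sigma, so the
    # neutron-to-photon ratio cancels density and isolates composition.
    Sigma = -math.log(T_neutron) / x      # macroscopic cross section [1/cm]
    signature = Sigma / mu                # material-specific composition signature

    print(f"rho ~ {rho:.2f} g/cm^3, Sigma/mu = {signature:.2f}")
    ```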

  11. Hydrogen adsorption and desorption with 3D silicon nanotube-network and film-network structures: Monte Carlo simulations

    Li, Ming; Kang, Zhan, E-mail: zhankang@dlut.edu.cn [State Key Laboratory of Structural Analysis for Industrial Equipment, Dalian University of Technology, Dalian 116024 (China); Huang, Xiaobo [Suzhou Nuclear Power Research Institute, Suzhou 215000 (China)]

    2015-08-28

    Hydrogen is clean, sustainable, and renewable, and is thus viewed as a promising energy carrier. However, its industrial utilization is greatly hampered by the lack of an effective hydrogen storage and release method. Carbon nanotubes (CNTs) were viewed as potential hydrogen containers, but it has been shown that pure CNTs cannot attain the desired target capacity of hydrogen storage. In this paper, we present a numerical study on the material-driven and structure-driven hydrogen adsorption of 3D silicon networks and propose a deformation-driven hydrogen desorption approach based on molecular simulations. Two types of 3D nanostructures, the silicon nanotube-network (Si-NN) and the silicon film-network (Si-FN), are first investigated in terms of hydrogen adsorption and desorption capacity with grand canonical Monte Carlo simulations. It is revealed that the hydrogen storage capacity is determined by the lithium doping ratio and the geometrical parameters, and that the maximum hydrogen uptake can be achieved by a 3D nanostructure with an optimal configuration and doping ratio obtained through a design optimization technique. For hydrogen desorption, a mechanical-deformation-driven hydrogen-release approach is proposed. Compared with temperature/pressure change-induced hydrogen desorption methods, the proposed approach is so effective that nearly complete hydrogen desorption can be achieved by Si-FN nanostructures under sufficient compression, without structural failure being observed. The approach is also reversible, since the mechanical deformation in Si-FN nanostructures can be elastically recovered, which suggests good reusability. This study may shed light on the mechanism of hydrogen adsorption and desorption and thus provide useful guidance toward the engineering design of microstructural hydrogen (or other gas) adsorption materials.
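    The grand canonical Monte Carlo machinery used here can be illustrated with a toy model. The sketch below (Python, not the authors' code) applies the standard GCMC insertion/deletion acceptance rules to a non-interacting gas, for which the mean particle number is known analytically; a real uptake calculation would add the H2-adsorbent interaction energy to both exponents. Volume, chemical potential and thermal wavelength are reduced-unit placeholders.

    ```python
    import math, random

    random.seed(1)
    V = 1000.0       # box volume (reduced units, assumed)
    beta_mu = -1.0   # beta * chemical potential (assumed)
    lambda3 = 1.0    # thermal de Broglie volume Lambda^3 (assumed)

    N, samples = 0, []
    for step in range(200_000):
        if random.random() < 0.5:                    # trial insertion
            acc = V / (lambda3 * (N + 1)) * math.exp(beta_mu)  # ideal gas: dU = 0
            if random.random() < min(1.0, acc):
                N += 1
        elif N > 0:                                  # trial deletion
            acc = lambda3 * N / V * math.exp(-beta_mu)
            if random.random() < min(1.0, acc):
                N -= 1
        if step > 50_000:                            # equilibration cut (assumed)
            samples.append(N)

    print(f"<N> = {sum(samples)/len(samples):.1f} "
          f"(ideal-gas check: {V * math.exp(beta_mu) / lambda3:.1f})")
    ```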

  14. General purpose dynamic Monte Carlo with continuous energy for transient analysis

    Sjenitzer, B. L.; Hoogenboom, J. E. [Delft Univ. of Technology, Dept. of Radiation, Radionuclide and Reactors, Mekelweg 15, 2629JB Delft (Netherlands)

    2012-07-01

    For safety assessments, transient analysis is an important tool. It can predict maximum temperatures during regular reactor operation or during an accident scenario. Despite the importance of this kind of analysis, the state of the art still uses rather crude methods, like diffusion theory and point kinetics. For reference calculations it is preferable to use the Monte Carlo method. In this paper the dynamic Monte Carlo method is implemented in the general-purpose Monte Carlo code Tripoli4, and the method is extended for use with continuous energy. The first results of Dynamic Tripoli demonstrate that this kind of calculation is indeed accurate and that the results are achieved in a reasonable amount of time. With the method implemented in Tripoli it is now possible to perform an exact transient calculation in arbitrary geometry. (authors)
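    The difference from a static eigenvalue calculation can be seen in a toy version of the idea: follow each neutron on an explicit clock and let the population evolve. The sketch below (Python) is a one-speed, infinite-medium branching model with assumed constants, not Tripoli4's algorithm; it reproduces the analytic growth N(t) = N0*exp((k-1)t/l) of a slightly supercritical system.

    ```python
    import math, random

    random.seed(2)
    k = 1.02          # mean neutrons emitted per absorption (assumed)
    lifetime = 1e-3   # mean neutron lifetime [s] (assumed)
    t_end = 0.05      # length of the simulated transient [s]

    stack = [0.0] * 1000          # birth times of the initial neutrons
    alive_at_end = 0
    while stack:
        t = stack.pop() + random.expovariate(1.0 / lifetime)
        if t >= t_end:
            alive_at_end += 1
            continue
        # Absorption: emit 1 or 2 neutrons so the mean offspring number is k.
        for _ in range(2 if random.random() < (k - 1.0) else 1):
            stack.append(t)

    expected = 1000 * math.exp((k - 1.0) * t_end / lifetime)
    print(f"simulated {alive_at_end} neutrons alive, analytic ~ {expected:.0f}")
    ```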

  15. Image quality assessment of LaBr3-based whole-body 3D PET scanners: a Monte Carlo evaluation

    The main thrust of this work is the investigation and design of a whole-body PET scanner based on new lanthanum bromide scintillators. We use Monte Carlo simulations to generate data for a 3D PET scanner based on LaBr3 detectors, and to assess the count-rate capability and the reconstructed image quality of phantoms with hot and cold spheres using contrast and noise parameters. Previously we have shown that LaBr3 has very high light output, excellent energy resolution and fast timing properties, which can lead to the design of a time-of-flight (TOF) whole-body PET camera. The data presented here illustrate the performance of LaBr3 without the additional benefit of TOF information, although our intention is to develop a scanner with TOF measurement capability. The only drawbacks of LaBr3 are its lower stopping power and photo-fraction, which affect both sensitivity and spatial resolution. However, in 3D PET imaging, where energy resolution is very important for reducing scattered coincidences in the reconstructed image, the image quality attained in a non-TOF LaBr3 scanner can potentially equal or surpass that achieved with other high-sensitivity scanners. Our results show that there is a gain in NEC arising from the reduced scatter and random fractions in a LaBr3 scanner. The reconstructed image resolution is slightly worse than with a high-Z scintillator, but at increased count-rates, reduced pulse pileup leads to an image resolution similar to that of LSO. Image quality simulations predict reduced contrast for small hot spheres compared to an LSO scanner, but improved noise characteristics at similar clinical activity levels.
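    The count-rate comparison rests on the noise-equivalent count (NEC) rate. A small sketch with one common definition follows (illustrative rates only, not the paper's data); a 2R randoms term is used instead when randoms are estimated by delayed-window subtraction.

    ```python
    def nec(trues, scatter, randoms, k_randoms=1.0):
        """NEC = T^2 / (T + S + k*R); k = 2 for delayed-window randoms subtraction."""
        return trues**2 / (trues + scatter + k_randoms * randoms)

    # Lower scatter and random fractions (as reported for LaBr3) raise NEC
    # at the same trues rate -- numbers below are purely illustrative:
    print(nec(trues=300e3, scatter=90e3, randoms=120e3))   # LaBr3-like fractions
    print(nec(trues=300e3, scatter=150e3, randoms=200e3))  # higher-fraction case
    ```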

  16. Advanced Mesh-Enabled Monte Carlo Capability for Multi-Physics Reactor Analysis

    Wilson, Paul; Evans, Thomas; Tautges, Tim

    2012-12-24

    This project will accumulate high-precision fluxes throughout the reactor geometry on a non-orthogonal grid of cells to support multi-physics coupling, in order to more accurately calculate parameters such as reactivity coefficients and to generate multi-group cross sections. This work will be based upon recent developments to incorporate advanced geometry and mesh capability in a modular Monte Carlo toolkit with computational science technology that is in use in related reactor simulation software development. Coupling this capability with production-scale Monte Carlo radiation transport codes can provide advanced and extensible test-beds for these developments. Continuous energy Monte Carlo methods are generally considered to be the most accurate computational tool for simulating radiation transport in complex geometries, particularly neutron transport in reactors. Nevertheless, there are several limitations for their use in reactor analysis. Most significantly, there is a trade-off between the fidelity of results in phase space, statistical accuracy, and the amount of computer time required for simulation. Consequently, to achieve an acceptable level of statistical convergence in the high-fidelity results required for modern coupled multi-physics analysis, the required computer time makes Monte Carlo methods prohibitive for design iterations and detailed whole-core analysis. More subtly, the statistical uncertainty is typically not uniform throughout the domain, and the simulation quality is limited by the regions with the largest statistical uncertainty. In addition, the formulation of neutron scattering laws in continuous energy Monte Carlo methods makes it difficult to calculate the adjoint neutron fluxes required to properly determine important reactivity parameters. Finally, most Monte Carlo codes available for reactor analysis have relied on orthogonal hexahedral grids for tallies that do not conform to the geometric boundaries and are thus generally not well suited to such geometries.

  17. The present status of shielding analysis with nuclear data for the continuous energy Monte Carlo code MCNP

    The following three problems are analyzed with the continuous energy Monte Carlo code MCNP using JENDL-3.2, JENDL-3.3 and ENDF/B-VI: (1) shielding analysis of the Winfrith ASPIS iron deep-penetration experiment; (2) shielding analysis of the TN-12A spent fuel transport cask experiment; (3) shielding analysis of a modular shielding house for keeping spent fuel transportable casks. (author)

  18. Reliability analysis of tunnel surrounding rock stability by Monte-Carlo method

    XI Jia-mi; YANG Geng-she

    2008-01-01

    Discussed are the advantages of the improved Monte-Carlo method and the feasibility of the proposed approach for reliability analysis of tunnel surrounding rock stability. On the basis of deterministic parsing for tunnel surrounding rock, a reliability computing method for surrounding rock stability was derived from the improved Monte-Carlo method. The computing method considers the randomness of the related parameters and therefore satisfies the relativity among parameters. The proposed method can reasonably determine the reliability of surrounding rock stability. Calculation results show that this method is a scientific method for discriminating and checking surrounding rock stability.
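    The recipe the abstract describes fits in a few lines of Monte Carlo. The sketch below (Python) samples correlated rock parameters, evaluates a performance function g = capacity - demand, and estimates the failure probability as the fraction of samples with g < 0; the limit-state function and all statistics are invented placeholders, not the paper's formulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 100_000

    # Correlated rock parameters: friction angle [deg], cohesion [MPa] (assumed).
    mean = [30.0, 5.0]
    cov = [[9.0, 0.9],
           [0.9, 0.25]]
    phi, c = rng.multivariate_normal(mean, cov, n).T

    support = 1.2                                              # capacity [MPa] (assumed)
    demand = 4.0 / (1.0 + np.tan(np.radians(phi))) - 0.5 * c   # toy demand model
    g = support - demand                                       # performance function

    pf = np.mean(g < 0.0)
    se = np.sqrt(pf * (1.0 - pf) / n)                          # standard error
    print(f"Pf = {pf:.4f} +/- {se:.4f}")
    ```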

  19. Towards integration of compositional risk analysis using Monte Carlo simulation and security testing

    Viehmann, Johannes

    2014-01-01

    This short paper describes ongoing efforts to combine concepts of security risk analysis with security testing into a single process. Using risk analysis artefact composition and Monte Carlo simulation to calculate likelihood values, the method described here is intended to become applicable for complex large scale systems with dynamically changing probability values.

  20. A Markov Chain Monte Carlo Approach to Confirmatory Item Factor Analysis

    Edwards, Michael C.

    2010-01-01

    Item factor analysis has a rich tradition in both the structural equation modeling and item response theory frameworks. The goal of this paper is to demonstrate a novel combination of various Markov chain Monte Carlo (MCMC) estimation routines to estimate parameters of a wide variety of confirmatory item factor analysis models. Further, I show…

  1. Determining the Number of Principal Components to Retain via Parallel Analysis: Alternatives to Monte Carlo Analyses.

    Lautenschlager, Gary J.

    The parallel analysis method for determining the number of components to retain in a principal components analysis has received a recent resurgence of support and interest. However, researchers and practitioners desiring to use this criterion have been hampered by the required Monte Carlo analyses needed to develop the criteria. Two recent…
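    For readers who do want the Monte Carlo route, the core of Horn's parallel analysis is short; the sketch below (Python, synthetic data, simulation count and percentile assumed) retains components whose observed eigenvalues exceed a percentile of eigenvalues obtained from random data of the same dimensions.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def parallel_analysis(data, n_sims=200, percentile=95):
        """Retain components whose eigenvalues beat random-data eigenvalues."""
        n, p = data.shape
        obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
        rand = np.empty((n_sims, p))
        for i in range(n_sims):
            r = rng.standard_normal((n, p))
            rand[i] = np.linalg.eigvalsh(np.corrcoef(r, rowvar=False))[::-1]
        threshold = np.percentile(rand, percentile, axis=0)
        return int(np.sum(obs > threshold))

    # Synthetic example: two latent factors behind eight observed variables.
    latent = rng.standard_normal((500, 2))
    data = latent @ rng.standard_normal((2, 8)) + 0.8 * rng.standard_normal((500, 8))
    print(f"components to retain: {parallel_analysis(data)}")
    ```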

  2. Construction of the quantitative analysis environment using Monte Carlo simulation

    Axial-section images of a thoracic phantom were acquired to construct source and density maps for Monte Carlo (MC) simulation. The phantom was a Heart/Liver Type HL (Kyoto Kagaku Co., Ltd.); the single photon emission CT (SPECT)/CT machine was a Symbia T6 (Siemens) with an LMEGP (low-medium energy general purpose) collimator. Maps were constructed from the CT images with in-house software written in Visual Studio C# (Microsoft). The SIMIND code (simulation of imaging nuclear detectors) was used for the MC simulation, the Prominence processor (Nihon Medi-Physics) for filtering and image reconstruction, and a DELL Precision T7400 for all image processing. For the actual experiment, the myocardial portion of the phantom was given 15 MBq of 99mTc, assuming 2% uptake at a dose of 740 MBq, and SPECT images were acquired and reconstructed with a Butterworth filter and the filtered back projection method. CT images were similarly obtained in 0.3 mm thick slices, filed in DICOM (digital imaging and communication in medicine) format, and then processed with SIMIND to map the source and density. Physical and measurement factors such as attenuation, scattering, spatial resolution deterioration and statistical fluctuation were examined in ideal images by sequentially excluding and simulating each factor. The gamma energy spectrum, SPECT projections and reconstructed images given by the simulation agreed well with the measured data, confirming the precision of the MC simulation. The physical and measurement factors could be evaluated individually, suggesting the usefulness of the simulation for assessing the precision of their correction. (T.T.)

  3. The seasonal KPSS test when neglecting seasonal dummies: a Monte Carlo analysis

    El Montasser, Ghassen; Boufateh, Talel; Issaoui, Fakhri

    2013-01-01

    This paper shows, through a Monte Carlo analysis, the effect of neglecting seasonal deterministics on the seasonal KPSS test. We found that the test is in most cases heavily oversized and not convergent in this setting. In addition, a Bartlett-type non-parametric correction of the error variances did not noticeably change the test's rejection frequencies.

  4. Taxometrics, Polytomous Constructs, and the Comparison Curve Fit Index: A Monte Carlo Analysis

    Walters, Glenn D.; McGrath, Robert E.; Knight, Raymond A.

    2010-01-01

    The taxometric method effectively distinguishes between dimensional (1-class) and taxonic (2-class) latent structure, but there is virtually no information on how it responds to polytomous (3-class) latent structure. A Monte Carlo analysis showed that the mean comparison curve fit index (CCFI; Ruscio, Haslam, & Ruscio, 2006) obtained with 3…

  5. Generalization of Markov Monte Carlo reliability analysis to include non-Markovian maintenance strategies

    The Lagrangian approach to Markov Monte Carlo methods for systems reliability analysis is generalized to include non-Markovian phenomena in which system components are replaced. The method is then employed to analyze the unreliability and unavailability of a number of redundant systems in which maintenance is carried out by batch or time replacement of aging components. (orig.)

  6. Performance analysis for neutronics benchmark experiments with partial adjoint contribution estimated by forward Monte Carlo calculation

    Highlights:
    • Performance estimation of nuclear-data benchmarks was investigated.
    • The point detector contribution plays a benchmark role not only for the neutron producing the detector contribution but equally for all upstream transport neutrons.
    • New functions were defined to quantify how well the contribution can be interpreted for benchmarking.
    • Benchmark performance can be evaluated by a forward Monte Carlo calculation alone.

    Abstract: The author's group has been investigating how the performance estimation of nuclear-data benchmarks, using experiments and their analysis by Monte Carlo codes, should be carried out, especially at 14 MeV. We have recently found that a detector contribution plays a benchmark role not only for the neutron producing the detector contribution but equally for all the upstream neutrons in the neutron history. This result suggests that the benchmark performance can be evaluated by a forward Monte Carlo calculation alone. In this study, we therefore defined new functions that quantify how well the contribution can be utilized for benchmarking with the point detector, and showed that they are deeply related to the newly introduced "partial adjoint contribution". By preparing these functions before benchmark experiments, one can know beforehand how well, and for which nuclear data, the experimental results can serve as benchmarks in forward Monte Carlo calculations.

  7. Applying Monte Carlo Simulation to Launch Vehicle Design and Requirements Analysis

    Hanson, J. M.; Beard, B. B.

    2010-01-01

    This Technical Publication (TP) is meant to address a number of topics related to the application of Monte Carlo simulation to launch vehicle design and requirements analysis. Although the focus is on a launch vehicle application, the methods may be applied to other complex systems as well. The TP is organized so that all the important topics are covered in the main text, and detailed derivations are in the appendices. The TP first introduces Monte Carlo simulation and the major topics to be discussed, including discussion of the input distributions for Monte Carlo runs, testing the simulation, how many runs are necessary for verification of requirements, what to do if results are desired for events that happen only rarely, and postprocessing, including analyzing any failed runs, examples of useful output products, and statistical information for generating desired results from the output data. Topics in the appendices include some tables for requirements verification, derivation of the number of runs required and generation of output probabilistic data with consumer risk included, derivation of launch vehicle models to include possible variations of assembled vehicles, minimization of a consumable to achieve a two-dimensional statistical result, recontact probability during staging, ensuring duplicated Monte Carlo random variations, and importance sampling.
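    One of the topics listed, how many runs are necessary for verification of requirements, has a compact answer in the special case where every run must succeed. The sketch below applies the standard success-run (binomial) bound n >= ln(1-c)/ln(p); the reliability and confidence values are illustrative, not taken from the TP.

    ```python
    import math

    def runs_required(p, confidence):
        """Zero-failure runs needed to claim success probability >= p
        at the given confidence level (success-run binomial argument)."""
        return math.ceil(math.log(1.0 - confidence) / math.log(p))

    print(runs_required(0.997, 0.90))    # 767 runs for 99.7% reliability, 90% conf.
    print(runs_required(0.9999, 0.90))   # rare-event requirements grow sharply
    ```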

  8. MCMini: Monte Carlo on GPGPU

    Marcus, Ryan C. [Los Alamos National Laboratory]

    2012-07-25

    MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.

  9. New strategies of sensitivity analysis capabilities in continuous-energy Monte Carlo code RMC

    Highlights:
    • Data decomposition techniques are proposed for memory reduction.
    • New strategies are put forward and implemented in the RMC code to improve the efficiency and accuracy of sensitivity calculations.
    • A capability to compute region-specific sensitivity coefficients is developed in the RMC code.

    Abstract: The iterated fission probability (IFP) method has been demonstrated to be an accurate alternative for estimating the adjoint-weighted parameters in continuous-energy Monte Carlo forward calculations. However, the memory requirements of this method are huge, especially when a large number of sensitivity coefficients are desired. Therefore, data decomposition techniques are proposed in this work. Two parallel strategies based on the neutron production rate (NPR) estimator and the fission neutron population (FNP) estimator for adjoint fluxes, as well as a more efficient algorithm which has multiple overlapping blocks (MOB) in a cycle, are investigated and implemented in the continuous-energy Reactor Monte Carlo code RMC for sensitivity analysis. Furthermore, a region-specific sensitivity analysis capability is developed in RMC. These new strategies, algorithms and capabilities are verified against analytic solutions of a multi-group infinite-medium problem and against results from other software packages including MCNP6, TSUNAMI-1D and multi-group TSUNAMI-3D. While the results generated by the NPR and FNP strategies agree within 0.1% of the analytic sensitivity coefficients, the MOB strategy surprisingly produces sensitivity coefficients exactly equal to the analytic ones. Meanwhile, the results generated by the three strategies in RMC are in agreement with those produced by other codes within a few percent. Moreover, the MOB strategy performs the most efficient sensitivity coefficient calculations (offering as much as an order of magnitude gain in FoMs over MCNP6), followed by the NPR and FNP strategies, and then MCNP6. The results also reveal that these …

  10. 3D Monte-Carlo transport calculations of whole slab reactor cores: validation of deterministic neutronic calculation routes

    Palau, J.M. [CEA Cadarache, Service de Physique des Reacteurs et du Cycle, Lab. de Projets Nucleaires, 13 - Saint-Paul-lez-Durance (France)]

    2005-07-01

    This paper presents how Monte-Carlo calculations (the French TRIPOLI4 poly-kinetic code with appropriate pre-processing and post-processing software called OVNI) are used in the case of 3-dimensional heterogeneous benchmarks (slab reactor cores) to reduce model biases and enable a thorough and detailed analysis of the performance of deterministic methods and their associated data libraries with respect to key neutron parameters (reactivity, local power). Outstanding examples of the application of these tools are presented regarding the new numerical methods implemented in the French lattice code APOLLO2 (advanced self-shielding models, the new IDT characteristics method implemented within the discrete-ordinates flux solver model) and the JEFF3.1 nuclear data library (checked against the previous JEF2.2 file). In particular, by performing multigroup/point-wise TRIPOLI4 (assembly and core) calculations, we have pointed out the efficiency (in terms of accuracy and computation time) of the new IDT method developed in APOLLO2. In addition, by performing 3-dimensional TRIPOLI4 calculations of the whole slab core (a few million elementary volumes), the high quality of the new JEFF3.1 nuclear data files and revised evaluations (U-235, U-238, Hf) for reactivity prediction of slab core critical experiments has been demonstrated. As feedback from the whole validation process, improvements in terms of nuclear data (mainly Hf capture cross-sections) and numerical methods (advanced quadrature formulas accounting for validation results, validation of new self-shielding models, parallelization) are suggested to further improve the APOLLO2-CRONOS2 standard calculation route. (author)

  12. The Null Space Monte Carlo Uncertainty Analysis of Heterogeneity for Preferential Flow Simulation

    Ghasemizade, M.; Radny, D.; Schirmer, M.

    2014-12-01

    Preferential flow paths can have a huge impact on the amount and timing of runoff generation, particularly in areas where subsurface flow dominates this process. Many different approaches have been suggested to simulate preferential flow mechanisms. However, the efficiency of such approaches is rarely investigated in a predictive sense. The main reason is that models which simulate preferential flows require many parameters. This can lead to a dramatic increase in model run times, especially in the context of highly nonlinear models which are themselves demanding. In this research we attempted to simulate the daily recharge values of a weighing lysimeter, including preferential flows, with the 3-D physically based model HydroGeoSphere. To accomplish that, we used the matrix pore concept with varying hydraulic conductivities within the lysimeter to represent heterogeneity. It was assumed that spatially correlated heterogeneity is the main driver triggering preferential flow paths. In order to capture the spatial distribution of hydraulic conductivity values we used pilot points and geostatistical model structures. Since the hydraulic conductivity values at each pilot point act as parameters, the model is highly parameterized. We therefore used the robust and newly developed null space Monte Carlo method for analyzing the uncertainty of the model outputs. Results of the uncertainty analysis show that the pilot-point method is reliable for representing preferential flow paths.

  13. Comparison of 3D and 4D Monte Carlo optimization in robotic tracking stereotactic body radiotherapy of lung cancer

    Chan, Mark K.H. [Tuen Mun Hospital, Department of Clinical Oncology, Hong Kong (S.A.R) (China); Werner, Rene [The University Medical Center Hamburg-Eppendorf, Department of Computational Neuroscience, Hamburg (Germany); Ayadi, Miriam [Leon Berard Cancer Center, Department of Radiation Oncology, Lyon (France); Blanck, Oliver [University Clinic of Schleswig-Holstein, Department of Radiation Oncology, Luebeck (Germany); CyberKnife Center Northern Germany, Guestrow (Germany)]

    2014-09-20

    To investigate the adequacy of three-dimensional (3D) Monte Carlo (MC) optimization (3DMCO) and the potential of four-dimensional (4D) dose renormalization (4DMCrenorm) and optimization (4DMCO) for CyberKnife (Accuray Inc., Sunnyvale, CA) radiotherapy planning in lung cancer. For 20 lung tumors, 3DMCO and 4DMCO plans were generated with the planning target volume PTV5mm = gross tumor volume (GTV) plus 5 mm, assuming 3 mm for tracking errors (PTV3mm) and 2 mm for residual organ deformations. Three fractions of 60 Gy were prescribed to ≥ 95% of the PTV5mm. Each 3DMCO plan was recalculated by 4D MC dose calculation (4DMCrecal) to assess the dosimetric impact of organ deformations. The 4DMCrecal plans were renormalized (4DMCrenorm) to 95% dose coverage of the PTV5mm for comparison with the 4DMCO plans. A 3DMCO plan was considered adequate if the 4DMCrecal plan showed ≥ 95% of the PTV3mm receiving 60 Gy and doses to other organs at risk (OARs) below the limits. In seven lesions, 3DMCO was inadequate, providing < 95% dose coverage to the PTV3mm. Comparison of the 4DMCrecal and 3DMCO plans showed that organ deformations resulted in lower OAR doses. Renormalizing the 4DMCrecal plans could produce OAR doses higher than the tolerances in some 4DMCrenorm plans. Dose conformity of the 4DMCrenorm plans was inferior to that of the 3DMCO and 4DMCO plans. The 4DMCO plans did not always achieve OAR dose reductions compared to the 3DMCO and 4DMCrenorm plans. This study indicates that 3DMCO with 2 mm margins for organ deformations may be inadequate for CyberKnife-based lung stereotactic body radiotherapy (SBRT). Renormalizing the 4DMCrecal plans could produce degraded dose conformity and increased OAR doses; 4DMCO can resolve this problem. (orig.)

  14. The Monte Carlo atmospheric radiative transfer model McArtim: Introduction and validation of Jacobians and 3D features

    A new Monte Carlo atmospheric radiative transfer model is presented which is designed to support the interpretation of UV/vis/near-IR spectroscopic measurements of scattered sunlight in the atmosphere. The integro-differential equation describing the underlying transport process and its formal solution are discussed. A stochastic approach to solving the differential equation, the Monte Carlo method, is deduced and its application to the formal solution is demonstrated. It is shown how model photon trajectories of the resulting ray-tracing algorithm are used to estimate functionals of the radiation field such as radiances, actinic fluxes and light path integrals. In addition, Jacobians of the former quantities with respect to optical parameters of the atmosphere are analyzed. Model output quantities are validated against measurements, by self-consistency tests and through intercomparisons with other radiative transfer models.

  15. Statistical analysis and Monte Carlo simulation of growing self-avoiding walks on percolation

    The two-dimensional growing self-avoiding walk on percolation was investigated by statistical analysis and Monte Carlo simulation. We obtained expressions for the mean square displacement and the effective exponent as functions of time and percolation probability by statistical analysis and compared them with simulations. We also obtained a reduced time to scale the motion of walkers in growing self-avoiding walks on regular and percolation lattices.

  16. The energy analysis for the monte carlo simulations of a diffusive shock

    Wang, Xin; Yan, Yihua

    2011-01-01

    According to the shock jump conditions, the total fluid mass, momentum, and energy should be conserved in the entire simulation box. We perform dynamical Monte Carlo simulations with a multiple scattering law for energy analysis. The various energy functions of time are obtained by monitoring the total particle mass, momentum, and energy in the simulation box. In conclusion, the energy analysis indicates that the smaller the energy losses in the prescribed scattering law are, the harder...

  17. Uncertainty Analysis of Power Grid Investment Capacity Based on Monte Carlo

    Qin, Junsong; Liu, Bingyi; Niu, Dongxiao

    The factors influencing the investment capacity of a power grid are analyzed, and an investment capacity analysis model is built using depreciation cost, sales price and sales quantity, net profit, financing, and GDP of the secondary industry as variables. After carrying out Kolmogorov-Smirnov tests, the probability distribution of each influence factor is obtained. Finally, the uncertainty analysis results for grid investment capacity are obtained by Monte Carlo simulation.
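    A sketch of that workflow in code (Python; the fitted distribution families, the toy capacity relation and every number are assumptions for illustration): fit a candidate distribution to historical data, check it with a Kolmogorov-Smirnov test, then propagate the fitted distributions by Monte Carlo.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)

    history = rng.normal(120.0, 15.0, 60)        # stand-in for net-profit records
    loc, scale = stats.norm.fit(history)
    ks = stats.kstest(history, "norm", args=(loc, scale))
    print(f"KS p-value for the normal fit: {ks.pvalue:.2f}")

    n = 100_000
    net_profit   = rng.normal(loc, scale, n)
    depreciation = rng.triangular(30.0, 40.0, 55.0, n)      # assumed triangular
    financing    = rng.lognormal(mean=3.5, sigma=0.3, size=n)
    capacity = 0.6 * net_profit + depreciation + 0.8 * financing   # toy relation

    p10, p50, p90 = np.percentile(capacity, [10, 50, 90])
    print(f"investment capacity P10/P50/P90: {p10:.0f}/{p50:.0f}/{p90:.0f}")
    ```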

  18. Risk Analysis of Tilapia Recirculating Aquaculture Systems: A Monte Carlo Simulation Approach

    Kodra, Bledar

    2007-01-01

    The purpose of this study is to modify an existing static analytical model developed for re-circulating aquaculture systems through the incorporation of risk considerations to evaluate the economic viability of the system. In addition, the objective of this analysis is to provide a well-documented, risk-based analytical system so that individuals (investors/lenders) c...

  19. Modeling Elicitation effects in contingent valuation studies: a Monte Carlo Analysis of the bivariate approach

    Genius, Margarita; Strazzera, Elisabetta

    2005-01-01

    A Monte Carlo analysis is conducted to assess the validity of the bivariate modeling approach for detection and correction of different forms of elicitation effects in Double Bound Contingent Valuation data. Alternative univariate and bivariate models are applied to several simulated data sets, each one characterized by a specific elicitation effect, and their performance is assessed using standard selection criteria. The bivariate models include the standard Bivariate Probit model, and an al...

  20. Risk analysis and Monte Carlo simulation applied to the generation of drilling AFE estimates

    This paper presents a method for developing an authorization-for-expenditure (AFE)-generating model and illustrates the technique with a specific offshore field development case study. The model combines Monte Carlo simulation and statistical analysis of historical drilling data to generate more accurate, risked AFE estimates. In addition to the general method, two examples of making AFE time estimates for North Sea wells with the presented techniques are given.
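    A minimal sketch of a risked AFE estimate (Python; the phase list, medians and spreads are invented placeholders, not North Sea statistics): per-phase costs drawn from lognormals are summed by Monte Carlo to give percentile figures instead of a single deterministic number.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    phases = {                    # (median cost in $MM, sigma of log-cost), assumed
        "mobilization": (1.5, 0.20),
        "drilling":     (6.0, 0.35),
        "completion":   (3.5, 0.30),
        "contingency":  (0.8, 0.50),
    }
    n = 50_000
    total = np.zeros(n)
    for median, sigma in phases.values():
        total += rng.lognormal(mean=np.log(median), sigma=sigma, size=n)

    p10, p50, p90 = np.percentile(total, [10, 50, 90])
    print(f"AFE ($MM): P10={p10:.1f}  P50={p50:.1f}  P90={p90:.1f}")
    ```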

  1. Timing resolution of scintillation-detector systems: a Monte Carlo analysis

    Choong, Woon-Seng

    2009-01-01

    Recent advancements in fast scintillating materials and fast photomultiplier tubes (PMTs) have stimulated renewed interest in time-of-flight (TOF) positron emission tomography (PET). It is well known that the improvement in the timing resolution in PET can significantly reduce the noise variance in the reconstructed image, resulting in improved image quality. In order to evaluate the timing performance of scintillation detectors used in TOF PET, we use a Monte Carlo analysis to model the physi...

  2. A Monte Carlo computer program for analysis of backscattering and sputtering in practical vacuum systems

    A Monte Carlo computer program originally developed for analysis of molecular gas flow in axi-symmetric vacuum systems has been extended to include modelling of high energy backscattering and sputtering processes. This report describes the input data required by the computer program together with the results produced. A general description is given of the program operation and the backscattering and sputtering modelling used. An example calculation is included to illustrate practical application of the program. (author)

  3. Present status of Monte Carlo seminar for sub-criticality safety analysis in Japan

    This paper provides an overview of the methods and results of a series of sub-criticality safety analysis seminars for nuclear fuel cycle facilities using the Monte Carlo method, held in Japan from July 2000 to July 2003. In these seminars, the MCNP-4C2 system (MS-DOS version) was installed on notebook personal computers for the participants. The fundamental theory of reactor physics and Monte Carlo simulation, as well as the contents of the MCNP manual, were covered in lectures. Effective neutron multiplication factors and neutron spectra were calculated for examples such as the JCO deposit tank, the JNC uranium solution storage tank, the JNC plutonium solution storage tank and the JAERI TCA core. In some of the seminars, the management of nuclear fuel cycle facilities for safety was discussed in order to prevent criticality accidents. (author)

  4. A study on the radioactivity analysis of decommissioning concrete using Monte Carlo simulation

    Seo, Bum Kyoung; Kim, Gye Hong; Chung, Un Soo; Lee, Keun Woo; Oh, Won Zin; Park, Jin Ho [KAERI, Taejon (Korea, Republic of)]

    2004-07-01

    In order to decommission the shielding concrete of KRR (Korea Research Reactor)-1 and 2, the activation level and range due to neutron irradiation during operation must be determined exactly. To determine the activation level and range, core samples must be taken and analyzed. However, there are difficulties in sample preparation and in determining the measurement efficiency because of self-absorption. In this study, the full-energy efficiency of the HPGe detector measured with a standard source was compared with the value calculated by Monte Carlo simulation. Self-absorption effects due to changes in the density and composition of the concrete were also calculated using the Monte Carlo method. The results will be used for radioactivity analysis of real concrete core samples in the future.

  6. Monte Carlo Calculation for Landmine Detection using Prompt Gamma Neutron Activation Analysis

    Park, Seungil; Kim, Seong Bong; Yoo, Suk Jae [Plasma Technology Research Center, Gunsan (Korea, Republic of); Shin, Sung Gyun; Cho, Moohyun [POSTECH, Pohang (Korea, Republic of); Han, Seunghoon; Lim, Byeongok [Samsung Thales, Yongin (Korea, Republic of)

    2014-05-15

    The identification and demining of landmines is a very important issue for public safety and economic development. Several methods have been proposed in the past to address it. In Korea, the National Fusion Research Institute (NFRI) is developing a landmine detector using prompt gamma neutron activation analysis (PGNAA) as part of a complex sensor-based landmine detection system. In this paper, the Monte Carlo calculation results for this system are presented. A Monte Carlo calculation was carried out for the design of the landmine detector using PGNAA. To account for the soil effect, the average soil composition was analyzed and applied to the calculation. These results have been used to determine the specifications of the landmine detector.

  7. Mercury + VisIt: Integration of a Real-Time Graphical Analysis Capability into a Monte Carlo Transport Code

    Validation of the problem definition and analysis of the results (tallies) produced during a Monte Carlo particle transport calculation can be a complicated, time-intensive process. The time required for a person to create an accurate, validated combinatorial geometry (CG) or mesh-based representation of a complex problem, free of common errors such as gaps and overlapping cells, can range from days to weeks. The ability to interrogate the internal structure of a complex, three-dimensional (3-D) geometry prior to running the transport calculation can improve the user's confidence in the validity of the problem definition. With regard to the analysis of results, the process of extracting tally data from printed tables within a file is laborious and not an intuitive approach to understanding the results. The ability to display tally information overlaid on top of the problem geometry can decrease the time required for analysis and increase the user's understanding of the results. To this end, our team has integrated VisIt, a parallel, production-quality visualization and data analysis tool, into Mercury, a massively parallel Monte Carlo particle transport code. VisIt provides an API for real-time visualization of a simulation as it is running. The user may select which plots to display from the VisIt GUI, or by sending VisIt a Python script from Mercury. The frequency at which plots are updated can be set, and the user can visualize the simulation results as the calculation runs.

  8. Number of iterations needed in Monte Carlo Simulation using reliability analysis for tunnel supports

    E. Bukaçi

    2016-06-01

    There are many methods in geotechnical engineering which could take advantage of Monte Carlo simulation to establish the probability of failure, since closed-form solutions are almost impossible to use in most cases. The problem that arises with using Monte Carlo simulation is the number of iterations needed for a particular simulation. This article shows why it is important to calculate the number of iterations needed for a Monte Carlo simulation used in reliability analysis for tunnel supports with the convergence-confinement method. The number of iterations needed is calculated with two methods. In the first method, the analyst has to assume a distribution function for the performance function. The other method suggested by this article is to calculate the number of iterations based on the convergence of the factor of interest in the calculation. Reliability analysis is performed for the diversion tunnel in Rrëshen, Albania, using both of the methods mentioned, and the results are compared.
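    A sketch of the article's second method (Python; the performance function and the coefficient-of-variation stopping rule are our assumptions, not the paper's tunnel model): keep sampling until the failure-probability estimate has converged to a stated relative tolerance, rather than assuming a distribution for the performance function.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def g(x):                        # toy performance function; g < 0 is failure
        return 3.0 - x

    def mc_until_converged(rel_tol=0.05, batch=10_000, max_trials=5_000_000):
        fails = trials = 0
        while trials < max_trials:
            x = rng.normal(0.0, 1.0, batch)
            fails += np.count_nonzero(g(x) < 0.0)
            trials += batch
            if fails >= 10:                                # need some failures
                pf = fails / trials
                if np.sqrt((1.0 - pf) / (pf * trials)) < rel_tol:   # CoV check
                    return pf, trials
        return fails / trials, trials

    pf, n = mc_until_converged()
    print(f"Pf = {pf:.5f} after {n} iterations")
    ```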

  9. Time Series Analysis of Monte Carlo Fission Sources - I: Dominance Ratio Computation

    In the nuclear engineering community, the error propagation of the Monte Carlo fission source distribution through cycles is known to be a linear Markov process when the number of histories per cycle is sufficiently large. In the statistics community, linear Markov processes with linear observation functions are known to have an autoregressive moving average (ARMA) representation of orders p and p - 1. Therefore, one can perform ARMA fitting of the binned Monte Carlo fission source in order to compute physical and statistical quantities relevant to nuclear criticality analysis. In this work, the ARMA fitting of a binned Monte Carlo fission source has been successfully developed as a method to compute the dominance ratio, i.e., the ratio of the second-largest to the largest eigenvalues. The method is free of binning mesh refinement and does not require the alteration of the basic source iteration cycle algorithm. Numerical results are presented for problems with one-group isotropic, two-group linearly anisotropic, and continuous-energy cross sections. Also, a strategy for the analysis of eigenmodes higher than the second-largest eigenvalue is demonstrated numerically.
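    The ARMA idea can be demonstrated on synthetic data. In the sketch below (Python with statsmodels), an AR(1) series stands in for the binned fission source fluctuations, which is our simplification of the ARMA(p, p-1) fitting described above; the known autoregressive coefficient plays the role of the dominance ratio and is recovered by the fit.

    ```python
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(8)

    true_dr = 0.85                      # "dominance ratio" of the synthetic series
    noise = rng.standard_normal(5_000)
    series = np.zeros(5_000)
    for t in range(1, 5_000):           # AR(1): s[t] = dr * s[t-1] + e[t]
        series[t] = true_dr * series[t - 1] + noise[t]

    fit = ARIMA(series, order=(1, 0, 0)).fit()
    print(f"estimated dominance ratio: {fit.arparams[0]:.3f} (true {true_dr})")
    ```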

  10. Continuous energy Monte Carlo analysis of neutron shielding benchmark experiments with cross sections in JENDL-3

    Ueki, Kohtaro; Ohashi, Atsuto (Ship Research Inst., Mitaka, Tokyo (Japan)); Kawai, Masayoshi

    1993-04-01

    The iron, carbon and beryllium cross sections in JENDL-3 have been tested by continuous energy Monte Carlo analysis of neutron shielding benchmark experiments. The iron cross sections were tested with analyses of the ORNL and Winfrith experiments using fission neutron sources, and of the LLNL iron experiment using a D-T neutron source. The carbon and beryllium cross sections were tested with the JAERI-FNS TOF experiments using a D-T neutron source. Revision of the subroutine TALLYD and an appropriate weight-window parameter assignment were implemented in the MCNP code. In consequence, the fractional standard deviation (FSD) for each energy bin becomes small enough that the Monte Carlo results for the neutron energy spectra can be regarded as reliable. The Monte Carlo calculations with JENDL-3 show good agreement with the benchmark experiments over a wide energy range, as a whole. In particular, for the Winfrith iron experiment, the results with JENDL-3 give better agreement just below the iron 24 keV window than those with ENDF/B-IV. For the JAERI-FNS TOF graphite experiment, the calculated angular fluxes with JENDL-3 agree more closely than those with ENDF/B-IV at several peaks and dips caused by inelastic scattering. However, distinct underestimation is observed in the calculated energy spectrum with JENDL-3 between 0.8 and 3.0 MeV for the two iron experiments using fission neutron sources. (author).

  11. Perturbation analysis for Monte Carlo continuous cross section models

    Sensitivity analysis, including both its forward and adjoint applications, collectively referred to hereinafter as Perturbation Analysis (PA), is an essential tool to complete Uncertainty Quantification (UQ) and Data Assimilation (DA). PA-assisted UQ and DA have traditionally been carried out for reactor analysis problems using deterministic, as opposed to stochastic, models for radiation transport. This is because PA requires many model executions to quantify how variations in input data, primarily cross sections, affect variations in the model's responses, e.g., detector readings, flux distribution, multiplication factor, etc. Although stochastic models are often sought for their higher accuracy, their repeated execution is at best computationally expensive and in reality intractable for typical reactor analysis problems involving many input data and output responses. Deterministic methods, however, achieve the computational efficiency needed to carry out the PA analysis by reducing the problem dimensionality via various spatial and energy homogenization assumptions. This, however, introduces modeling error components into the PA results which propagate to the subsequent UQ and DA analyses. The introduced errors are problem specific and are therefore expected to limit the applicability of UQ and DA analyses to reactor systems that satisfy the introduced assumptions. This manuscript introduces a new method to complete PA that employs a continuous cross section stochastic model and is performed in a computationally efficient manner. If successful, the modeling error components introduced by deterministic methods could be eliminated, thereby allowing for wider applicability of DA and UQ results. Two MCNP models demonstrate the application of the new method: a critical Pu sphere (Jezebel) and a Pu fast metal array (the Russian BR-1). The PA is completed for reaction rate densities, reaction rate ratios, and the multiplication factor. (author)

  12. SU-C-201-06: Utility of Quantitative 3D SPECT/CT Imaging in Patient Specific Internal Dosimetry of 153-Samarium with GATE Monte Carlo Package

    Fallahpoor, M; Abbasi, M [Tehran University of Medical Sciences, Vali-Asr Hospital, Tehran, Tehran (Iran, Islamic Republic of); Sen, A [University of Houston, Houston, TX (United States); Parach, A [Shahid Sadoughi University of Medical Sciences, Yazd, Yazd (Iran, Islamic Republic of); Kalantari, F [UT Southwestern Medical Center, Dallas, TX (United States)

    2015-06-15

    Purpose: Patient-specific 3-dimensional (3D) internal dosimetry in targeted radionuclide therapy is essential for efficient treatment. Two major steps to achieve reliable results are: 1) generating quantitative 3D images of the radionuclide distribution and attenuation coefficients, and 2) using a reliable method for dose calculation based on the activity and attenuation maps. In this research, internal dosimetry for 153-Samarium (153-Sm) was performed using SPECT/CT images coupled with the GATE Monte Carlo package. Methods: A 50-year-old woman with bone metastases from breast cancer was prescribed 153-Sm treatment (gamma: 103 keV; beta: 0.81 MeV). A SPECT/CT scan was performed with a Siemens Symbia T scanner. SPECT and CT images were registered using the default registration software. SPECT quantification was achieved by compensating for all image-degrading factors, including body attenuation, Compton scattering and the collimator-detector response (CDR). The triple energy window method was used to estimate and eliminate the scattered photons. Iterative ordered-subsets expectation maximization (OSEM) with correction for attenuation and distance-dependent CDR was used for image reconstruction. Bilinear energy mapping was used to convert Hounsfield units in the CT image to an attenuation map. Organ borders were defined by itk-SNAP toolkit segmentation on the CT image. GATE was then used for the internal dose calculation. The Specific Absorbed Fractions (SAFs) and S-values were reported according to the MIRD schema. Results: The results showed that the largest SAFs and S-values are in osseous organs, as expected. The S-value for the lung is the highest after the spine, which can be important in 153-Sm therapy. Conclusion: We presented the utility of SPECT/CT images and Monte Carlo simulation for patient-specific dosimetry as a reliable and accurate method. It has several advantages over template-based or simplified dose estimation methods. With the advent of high-speed computers, Monte Carlo can be used for treatment planning.
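    The triple energy window step in the Methods has a closed-form estimate; the sketch below implements the standard TEW formula with illustrative counts and window widths (not patient data).

    ```python
    def tew_scatter(c_lower, c_upper, w_lower, w_upper, w_main):
        """Trapezoidal scatter estimate inside the main (photopeak) window."""
        return (c_lower / w_lower + c_upper / w_upper) * w_main / 2.0

    c_main = 15_000                   # photopeak-window counts (103 keV, assumed)
    scatter = tew_scatter(c_lower=1_800, c_upper=900,
                          w_lower=4.0, w_upper=4.0, w_main=20.0)  # widths in keV
    primary = max(c_main - scatter, 0.0)
    print(f"scatter ~ {scatter:.0f} counts, primary ~ {primary:.0f} counts")
    ```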

  14. Successful vectorization - reactor physics Monte Carlo code

    Most particle transport Monte Carlo codes in use today are based on the "history-based" algorithm, wherein one particle history at a time is simulated. Unfortunately, the "history-based" approach (present in all Monte Carlo codes until recent years) is inherently scalar and cannot be vectorized. In particular, the history-based algorithm cannot take advantage of vector architectures, which characterize the largest and fastest computers at the current time, vector supercomputers such as the Cray X/MP or IBM 3090/600. However, substantial progress has been made in recent years in developing and implementing a vectorized Monte Carlo algorithm. This algorithm follows portions of many particle histories at the same time and forms the basis for all successful vectorized Monte Carlo codes that are in use today. This paper describes the basic vectorized algorithm along with descriptions of several variations that have been developed by different researchers for specific applications. These applications have been mainly in the areas of neutron transport in nuclear reactor and shielding analysis and photon transport in fusion plasmas. The relative merits of the various approach schemes will be discussed and the present status of known vectorization efforts will be summarized along with available timing results, including results from the successful vectorization of 3-D general geometry, continuous energy Monte Carlo. (orig.)

  15. A Monte Carlo based spent fuel analysis safeguards strategy assessment

    Fensin, Michael L [Los Alamos National Laboratory; Tobin, Stephen J [Los Alamos National Laboratory; Swinhoe, Martyn T [Los Alamos National Laboratory; Menlove, Howard O [Los Alamos National Laboratory; Sandoval, Nathan P [Los Alamos National Laboratory]

    2009-01-01

    assessment process, the techniques employed to automate the coupled facets of the assessment process, and the standard burnup/enrichment/cooling-time dependent spent fuel assembly library. We also clearly define the diversion scenarios that will be analyzed during the standardized assessments. Though this study is currently limited to generic PWR assemblies, it is expected that the results of the assessment will yield adequate knowledge of spent fuel analysis strategies to help the down-selection process for other reactor types.

  16. Monte Carlo Neutronics and Thermal Hydraulics Analysis of Reactor Cores with Multilevel Grids

    Bernnat, W.; Mattes, M.; Guilliard, N.; Lapins, J.; Zwermann, W.; Pasichnyk, I.; Velkov, K.

    2014-06-01

    Power reactors are composed of assemblies with fuel pin lattices or other repeated structures with several grid levels, which can be modeled in detail by Monte Carlo neutronics codes such as MCNP6 using the corresponding lattice options, even for large cores. Except for fresh cores at beginning of life, there is a varying material distribution due to burnup in the different fuel pins. Additionally, for power states the fuel and moderator temperatures and moderator densities vary according to the power distribution and cooling conditions. Therefore, a coupling of the neutronics code with a thermal hydraulics code is necessary. Depending on the level of detail of the analysis, a very large number of cells with different materials and temperatures must be considered. The assignment of different material properties to all elements of a multilevel grid is very elaborate and may exceed program limits if the standard input procedure is used; therefore, an internal assignment is used which overrides uniform input parameters. The temperature dependency of continuous energy cross sections, probability tables for the unresolved resonance region and thermal neutron scattering laws is taken into account by interpolation, requiring only a limited number of data sets generated for different temperatures. The method is applied with MCNP6 and proven for several full core reactor models. For the coupling of MCNP6 with thermal hydraulics, appropriate interfaces were developed for the GRS system code ATHLET for liquid coolant and the IKE thermal hydraulics code ATTICA-3D for gaseous coolant. Examples will be shown for different applications: PWRs with square and hexagonal lattices, fast reactors (SFR) with hexagonal lattices, and HTRs with pebble-bed and prismatic lattices.
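
    The coupling scheme described here is, in essence, a fixed-point (Picard) iteration between the two solvers: the neutronics code produces a power distribution, the thermal-hydraulics code turns it into temperature and density fields, and the exchange repeats until the fields stop changing. The sketch below uses stand-in solver functions and illustrative feedback coefficients; it is not the MCNP6/ATHLET interface itself.

```python
import numpy as np

def neutronics_solve(fuel_temp):
    """Stand-in for an MCNP6-like solver: returns a normalized power
    profile; Doppler feedback flattens power where the fuel is hot."""
    shape = np.sin(np.linspace(0.1, np.pi - 0.1, fuel_temp.size))
    power = shape / (1.0 + 1e-4 * (fuel_temp - 600.0))
    return power / power.sum()

def th_solve(power, inlet_temp=560.0):
    """Stand-in for ATHLET-like thermal hydraulics: coolant heats up
    along the channel, fuel temperature follows the local power."""
    coolant = inlet_temp + 50.0 * np.cumsum(power)
    return coolant + 2000.0 * power   # crude fuel temperature model

fuel_temp = np.full(20, 600.0)        # initial guess (K)
for it in range(100):
    power = neutronics_solve(fuel_temp)
    new_temp = th_solve(power)
    # under-relaxation stabilizes the Picard iteration
    fuel_temp_next = 0.5 * fuel_temp + 0.5 * new_temp
    if np.max(np.abs(fuel_temp_next - fuel_temp)) < 0.1:
        fuel_temp = fuel_temp_next
        break
    fuel_temp = fuel_temp_next

print(f"converged after {it + 1} iterations")
```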

  17. Benchmark of Atucha-2 PHWR RELAP5-3D control rod model by Monte Carlo MCNP5 core calculation

    Atucha-2 is a Siemens-designed PHWR reactor under construction in the Republic of Argentina. Its geometrical complexity and peculiarities require the adoption of advanced Monte Carlo codes for performing realistic neutronic simulations. Therefore, core models of the Atucha-2 PHWR were developed using MCNP5. In this work a methodology was set up to collect the flux in the hexagonal mesh by which the Atucha-2 core is represented. The scope of this activity is to evaluate the effect of obliquely inserted control rods on the neutron flux, in order to validate the RELAP5-3D/NESTLE three-dimensional neutron kinetics coupled thermal-hydraulic model applied by GRNSPG/UNIPI for performing selected transients of Chapter 15 of the Atucha-2 FSAR. (authors)

  18. Analysis of the tritium breeding ratio benchmark experiments using the Monte Carlo code TRIPOLI-4

    Tritium breeding is an essential element of fusion nuclear technology: a tritium breeding ratio greater than unity is necessary for self-sufficient fueling. To simulate the transport of 14 MeV neutrons from the D-T fusion reaction in tritium breeding systems, 3D realistic modeling with a Monte Carlo code and point-wise nuclear data is recommended. The continuous-energy TRIPOLI-4 Monte Carlo transport code has been widely used for radiation shielding, criticality safety, and fission reactor physics. To support the ITER TBM (test blanket module) neutronics study with the TRIPOLI-4 code, this paper presents the TRIPOLI-4 simulation of the TBR (tritium breeding ratio) for six OKTAVIAN spherical assemblies of Osaka University: Li, Li-C, Pb-Li, Pb-Li-C, Be-Li, and Be-Li-C. It also investigates the impact of the nuclear data libraries ENDF/B-VI.4, ENDF/B-VII.0, JEFF-3.1, JENDL-3.3, and FENDL-2.1 on the TBR calculations. In general, TRIPOLI-4 produced satisfactory C/E values; only the beryllium data of the JEFF-3.1 library introduce larger uncertainties.

  19. A vectorized Monte Carlo method with pseudo-scattering for neutron transport analysis

    A vectorized Monte Carlo method has been developed for neutron transport analysis on the vector supercomputer HITAC S810. In this method, a multi-particle tracking algorithm is adopted, and fundamental processing such as pseudo-random number generation is modified to use the vector processor effectively. The flight analysis of this method is characterized by a new algorithm with pseudo-scattering, which was verified by comparing its results with those of the conventional algorithm. The method realized a speed-up by a factor of about 10: roughly 7 times from vectorization and 1.5 times from the new flight-analysis algorithm.
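
    Pseudo-scattering is also known as delta-tracking or Woodcock tracking: flight distances are sampled against a single majorant cross section valid everywhere, and each tentative collision is then accepted or rejected; a rejected ('pseudo') collision leaves the particle direction unchanged. Because the inner loop no longer needs region-boundary tracking, it vectorizes naturally over many particles. A hedged one-dimensional sketch (toy two-region geometry, pure absorber, illustrative cross sections):

```python
import numpy as np

rng = np.random.default_rng(1)

def delta_track(n_particles, slab=4.0):
    """Woodcock (pseudo-scattering) tracking through two regions with
    different total cross sections; returns the transmission fraction."""
    sigma = lambda x: 0.5 if x < 2.0 else 1.5   # region-wise Sigma_t (1/cm)
    sigma_maj = 1.5                              # majorant over all regions
    transmitted = 0
    for _ in range(n_particles):
        x = 0.0
        while True:
            x += rng.exponential(1.0 / sigma_maj)  # flight vs. majorant
            if x >= slab:
                transmitted += 1
                break
            # accept a real collision with probability Sigma_t(x)/Sigma_maj,
            # otherwise it is a pseudo-scattering event: keep flying
            if rng.random() < sigma(x) / sigma_maj:
                break                              # absorbed (toy model)
    return transmitted / n_particles

# Analytic check for this pure absorber: exp(-(0.5*2 + 1.5*2)) = exp(-4)
print(delta_track(200000), np.exp(-4.0))
```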

  20. 3D Direct Simulation Monte Carlo Modelling of the Inner Gas Coma of Comet 67P/Churyumov-Gerasimenko: A Parameter Study

    Liao, Y.; Su, C. C.; Marschall, R.; Wu, J. S.; Rubin, M.; Lai, I. L.; Ip, W. H.; Keller, H. U.; Knollenberg, J.; Kührt, E.; Skorov, Y. V.; Thomas, N.

    2016-03-01

    Direct Simulation Monte Carlo (DSMC) is a powerful numerical method for studying rarefied gas flows such as cometary comae and has been used by several authors over the past decade to study cometary outflow. However, investigating the parameter space of such simulations is time consuming because 3D DSMC is computationally highly intensive. For the target of ESA's Rosetta mission, comet 67P/Churyumov-Gerasimenko, we have identified the extent to which modifications of several parameters influence the 3D flow and gas temperature fields, and have attempted to establish how reliably the initial conditions can be inferred from in situ and remote sensing measurements. A large number of DSMC runs were completed with varying input parameters. In this work, we present the simulation results and assess the sensitivity of the solutions to certain inputs. It is found that among cases of water outgassing, the surface production rate distribution is the most influential variable for the flow field.

  1. Monte-Carlo Analysis of the Flavour Changing Neutral Current $b \to s\gamma$ at BaBar

    Smith, D. [Imperial College, London (United Kingdom)

    2001-09-01

    The main theme of this thesis is a Monte-Carlo analysis of the rare Flavour Changing Neutral Current (FCNC) decay b→sγ. The analysis develops techniques that could be applied to real data, to discriminate between signal and background events in order to make a measurement of the branching ratio of this rare decay using the BaBar detector. Also included in this thesis is a description of the BaBar detector and the work I have undertaken in the development of the electronic data acquisition system for the Electromagnetic calorimeter (EMC), a subsystem of the BaBar detector.

  2. First Monte Carlo analysis of fragmentation functions from single-inclusive $e^+ e^-$ annihilation

    Sato, N; Melnitchouk, W; Hirai, M; Kumano, S; Accardi, A

    2016-01-01

    We perform the first iterative Monte Carlo (IMC) analysis of fragmentation functions constrained by all available data from single-inclusive $e^+ e^-$ annihilation into pions and kaons. The IMC method eliminates potential bias in traditional analyses based on single fits, introduced by fixing parameters that are not well constrained by the data, and provides a statistically rigorous determination of uncertainties. Our analysis reveals specific differences between the fragmentation functions obtained with the new IMC methodology and those from previous analyses, especially for light quarks and for strange quark fragmentation to kaons.

  3. Analysis of communication costs for domain decomposed Monte Carlo methods in nuclear reactor analysis

    A domain decomposed Monte Carlo communication kernel is used to carry out performance tests to establish the feasibility of using Monte Carlo techniques for practical Light Water Reactor (LWR) core analyses. The results of the prototype code are interpreted in the context of simplified performance models which elucidate key scaling regimes of the parallel algorithm.

  4. MKENO-DAR: a direct angular representation Monte Carlo code for criticality safety analysis

    By improving the Monte Carlo code MULTI-KENO, the MKENO-DAR (Direct Angular Representation) code has been developed for detailed criticality safety analysis. A function was added to MULTI-KENO to represent anisotropic scattering rigorously. With this function, the scattering angle of a neutron is determined not by the average scattering angle μ-bar of the P1 Legendre polynomial but by random sampling from a probability distribution function constructed from the higher-order Legendre polynomials. The code is available for the FACOM-M380 computer. This report is a computer code manual for MKENO-DAR. (author)
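
    The idea of direct angular representation can be illustrated compactly: instead of using only the mean scattering cosine, sample μ from the reconstructed density f(μ) = Σ_l (2l+1)/2 · f_l · P_l(μ) by rejection. The sketch below uses made-up Legendre moments purely for illustration; it is not the MKENO-DAR sampling routine.

```python
import numpy as np
from numpy.polynomial.legendre import legval

rng = np.random.default_rng(2)

def sample_mu(f_l, n, f_max=None):
    """Sample scattering cosines mu in [-1, 1] from the PDF
    f(mu) = sum_l (2l+1)/2 * f_l * P_l(mu) by rejection sampling.
    f_l are the Legendre moments (f_0 = 1); illustrative values below."""
    coeffs = np.array([(2 * l + 1) / 2.0 * f for l, f in enumerate(f_l)])
    if f_max is None:                    # crude bound on the PDF maximum
        grid = np.linspace(-1.0, 1.0, 2001)
        f_max = legval(grid, coeffs).max() * 1.05
    out = np.empty(n)
    filled = 0
    while filled < n:
        mu = rng.uniform(-1.0, 1.0, n - filled)
        keep = rng.uniform(0.0, f_max, mu.size) < legval(mu, coeffs)
        k = keep.sum()
        out[filled:filled + k] = mu[keep]
        filled += k
    return out

# Mildly forward-peaked illustrative moments: f_0=1, f_1=mu_bar=0.3, f_2=0.1
mus = sample_mu([1.0, 0.3, 0.1], 100000)
print(mus.mean())   # should be close to mu_bar = 0.3
```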

  5. FTREE. Single-history Monte Carlo analysis for radiation detection and measurement

    This work introduces FTREE, which describes the radiation cascades following the impingement of a source particle on matter. The ensuing radiation field is characterised interaction by interaction, accounting for each generation of secondaries recursively. Each progeny is uniquely differentiated and catalogued into a family tree; the kinship is identified without ambiguity. This mode of observation, analysis and presentation goes beyond present-day detector technologies, beyond conventional Monte Carlo simulations and beyond standard pedagogy. It is able to observe rare events far out in the Gaussian tail which would have been lost in averaging: events less probable, but no less correct in physics. (author)

  6. Microlens assembly error analysis for light field camera based on Monte Carlo method

    Li, Sai; Yuan, Yuan; Zhang, Hao-Wei; Liu, Bin; Tan, He-Ping

    2016-08-01

    This paper describes a numerical analysis of microlens assembly errors in light field cameras using the Monte Carlo method. Assuming that there were no manufacturing errors, a home-built program was used to simulate images affected by the coupling distance error, movement error and rotation error that can appear during microlens installation. By examining these images, the sub-aperture images and the refocused images, we found that the images present different degrees of blur and deformation for different microlens assembly errors, while the sub-aperture images present aliasing, obscured images and other distortions that result in unclear refocused images.

  7. Markov chain Monte Carlo linkage analysis of a complex qualitative phenotype.

    Hinrichs, A; Lin, J H; Reich, T; Bierut, L; Suarez, B K

    1999-01-01

    We tested a new computer program, LOKI, that implements a reversible jump Markov chain Monte Carlo (MCMC) technique for segregation and linkage analysis. Our objective was to determine whether this software, designed for use with continuously distributed phenotypes, has any efficacy when applied to the discrete disease states of the simulated Mordor data from GAW Problem 1. Although we were able to identify the genomic location of two of the three quantitative trait loci by repeated application of the software, the MCMC sampler experienced significant mixing problems, indicating that the method, as currently formulated in LOKI, is not suitable for the discrete phenotypes in this data set. PMID:10597502

  8. Comparison between Monte Carlo simulation and measurement with a 3D polymer gel dosimeter for dose distributions in biological samples

    In this research, we used a 135 MeV/nucleon carbon-ion beam to irradiate a biological sample composed of fresh chicken meat and bones, which was placed in front of a PAGAT gel dosimeter, and compared the measured and simulated transverse-relaxation-rate (R2) distributions in the gel dosimeter. We experimentally measured the three-dimensional R2 distribution, which records the dose induced by particles penetrating the sample, by using magnetic resonance imaging. The obtained R2 distribution reflected the heterogeneity of the biological sample. We also conducted Monte Carlo simulations using the PHITS code by reconstructing the elemental composition of the biological sample from its computed tomography images while taking into account the dependence of the gel response on the linear energy transfer. The simulation reproduced the experimental distal edge structure of the R2 distribution with an accuracy under about 2 mm, which is approximately the same as the voxel size currently used in treatment planning. (paper)

  9. Grain size distribution and topology in 3D grain growth simulation with large-scale Monte Carlo method

    Hao Wang; Guo-quan Liu; Xiang-ge Qin

    2009-01-01

    Three-dimensional normal grain growth was simulated using a Potts model Monte Carlo algorithm. The quasi-stationary grain size distribution obtained from the simulation agreed well with the experimental result for pure iron. The Weibull function with parameter β=2.77 and the Yu-Liu function with parameter v=2.71 fit the quasi-stationary grain size distribution well. The grain volume distribution decreases exponentially with increasing grain volume. The distribution of the boundary area of grains has a peak at S/⟨S⟩=0.5, where S is the boundary area of a grain and ⟨S⟩ is the mean boundary area of all grains in the system. The lognormal function fits the face number distribution well, and the peak of the face number distribution is at f=10. The mean radius of f-faced grains is not proportional to the face number, but is related to it by a curve that is convex upward. In the 2D cross-section, both the perimeter law and the Aboav-Weaire law are observed to hold.

  10. Comparison between Monte Carlo simulation and measurement with a 3D polymer gel dosimeter for dose distributions in biological samples

    Furuta, T.; Maeyama, T.; Ishikawa, K. L.; Fukunishi, N.; Fukasaku, K.; Takagi, S.; Noda, S.; Himeno, R.; Hayashi, S.

    2015-08-01

    In this research, we used a 135 MeV/nucleon carbon-ion beam to irradiate a biological sample composed of fresh chicken meat and bones, which was placed in front of a PAGAT gel dosimeter, and compared the measured and simulated transverse-relaxation-rate (R2) distributions in the gel dosimeter. We experimentally measured the three-dimensional R2 distribution, which records the dose induced by particles penetrating the sample, by using magnetic resonance imaging. The obtained R2 distribution reflected the heterogeneity of the biological sample. We also conducted Monte Carlo simulations using the PHITS code by reconstructing the elemental composition of the biological sample from its computed tomography images while taking into account the dependence of the gel response on the linear energy transfer. The simulation reproduced the experimental distal edge structure of the R2 distribution with an accuracy under about 2 mm, which is approximately the same as the voxel size currently used in treatment planning.

  11. 3D Monte Carlo particle-in-cell simulations of critical ionization velocity experiments in the ionosphere

    Proper interpretation of space-based critical ionization velocity (CIV) experiments depends upon understanding the expected results from in-situ or remote sensors. A three-dimensional electromagnetic particle-in-cell code with Monte Carlo charged particle-neutral collisions has been developed to model CIV interactions in typical neutral gas release experiments. In the model, the released neutral gas is taken to be a spherical cloud traveling with a constant density and velocity $\vec{v}_n$ across the geomagnetic field $\vec{B}_0$. The dynamics of the plasma ionized from the neutral cloud are studied, and the induced instabilities are discussed. The simulations show that the newly ionized plasma evolves to form an ''asymmetric sphere-sheet tail'' structure: the ions mainly drift with the neutral cloud and expand in the $\vec{v} \times \vec{B}_0$ direction, while the electrons are trapped by the magnetic field and form a curved ''sheet-like'' tail which spreads along the $\vec{B}_0$ direction. The ionization rate determines the structure's shape. Significant ion density enhancement occurs only in the core region of the neutral gas cloud. It is shown that the detection of CIV in an ionospheric gas release experiment depends critically on the sensor location.

  12. Uncertainty Assessment of the Core Thermal-Hydraulic Analysis Using the Monte Carlo Method

    Choi, Sun Rock; Yoo, Jae Woon; Hwang, Dae Hyun; Kim, Sang Ji [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2010-10-15

    In the core thermal-hydraulic design of a sodium-cooled fast reactor, uncertainty factor analysis is a critical issue for assuring safe and reliable operation. Deviations from the nominal values need to be quantitatively considered by statistical thermal design methods. Hot channel factors (HCF) were employed to evaluate the uncertainty in early designs such as the CRBRP. The improved thermal design procedure (ISTP) calculates the overall uncertainty based on the root-sum-square technique and sensitivity analyses of each design parameter. Another way to consider the uncertainties is the Monte Carlo method (MCM): all the input uncertainties are randomly sampled according to their probability density functions, and the resulting distribution of the output quantity is analyzed. This directly estimates the uncertainty effects and propagation characteristics for the present thermal-hydraulic model, but it requires substantial computation time to obtain a reliable result because the accuracy depends on the sampling size. In this paper, the analysis of uncertainty factors using the Monte Carlo method is described. As a benchmark model, the ORNL 19-pin test is employed to validate the current uncertainty analysis method. The thermal-hydraulic calculation is conducted using the MATRA-LMR program, which was developed at KAERI based on the subchannel approach. The results are compared with those of the hot channel factors and the improved thermal design procedure.
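
    The Monte Carlo procedure described here reduces to a simple recipe: sample every uncertain input from its probability density function, run the thermal-hydraulic model once per sample, and read the output uncertainty off the resulting distribution. A minimal sketch with a stand-in algebraic model (not MATRA-LMR) and illustrative input PDFs:

```python
import numpy as np

rng = np.random.default_rng(3)

def peak_clad_temp(power, flow, inlet_temp):
    """Stand-in for a subchannel code such as MATRA-LMR: a toy algebraic
    model of peak cladding temperature (K)."""
    return inlet_temp + 400.0 * power / flow

n = 10000
# Sample each input from its assumed PDF (all values are illustrative)
power = rng.normal(1.00, 0.02, n)     # relative power, 2% std dev
flow = rng.normal(1.00, 0.03, n)      # relative flow rate
inlet = rng.normal(628.0, 3.0, n)     # inlet temperature (K)

pct = peak_clad_temp(power, flow, inlet)
print(f"mean = {pct.mean():.1f} K, std = {pct.std():.1f} K, "
      f"95th percentile = {np.percentile(pct, 95):.1f} K")
```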

  13. Monte Carlo analysis of Very High Temperature gas-cooled Reactor for hydrogen production

    This work was pursued over two years. In the first year, the focus was the development of a Monte Carlo analysis method for a pebble-type VHTR core, using a zero-power reactor. The pebble-bed cores of the HTR-PROTEUS critical facility in Switzerland were selected as the benchmark model, and detailed full-scope MCNP modeling was carried out. In particular, accurate and effective modeling of the UO2 particles and their distributions in the fuel pebbles was pursued, as well as of the pebble distribution within the core region. After the detailed MCNP modeling of the whole facility, analyses of the nuclear characteristics were carried out, and the results were compared with experiments and with those of other research groups. The effective multiplication factors (keff) were calculated for the two HTR-PROTEUS cores, and the homogenization effect of the TRISO fuel on criticality was investigated. Control rod and shutdown rod worths were also calculated, and criticality calculations with a different cross-section library and various reflector thicknesses were carried out. In the second year, the Monte Carlo analysis method developed in the first year was applied to a core with thermal power. The pebble-bed cores of the HTR-10 test reactor in China were selected as the benchmark model. After detailed full-scope MCNP modeling, the Monte Carlo analysis results calculated in this work were verified against the benchmark results for the first criticality state and the initial core.

  14. 3D-personalized Monte Carlo dosimetry in 90Y-microspheres therapies of primary and secondary hepatic cancers: absorbed dose and biological effective dose considerations

    Full text of publication follows. Purpose: a 3D-Personalized Monte Carlo Dosimetry (PMCD) method was developed for treatment planning in nuclear medicine and applied to Selective Internal Radiation Therapy (SIRT) using 90Y-microspheres for unresectable hepatic cancers. Methods: The PMCD method was evaluated for 20 patients treated for hepatic metastases or hepatocellular carcinoma at the European Hospital Georges Pompidou (Paris). First, regions of interest were outlined on the patient CT images. Using the OEDIPE software, patient-specific voxel phantoms were created. 99mTc-MAA SPECT data were then used to generate 3D matrices of cumulated activity. Absorbed doses and Biologically Effective Doses (BED) were calculated at the voxel scale using the MCNPX Monte Carlo transport code. Finally, OEDIPE was used to determine the maximum injectable activity (MIA) for tolerance criteria on the organs at risk (OARs), i.e. the lungs and non-tumoral liver (NTL). Tolerance criteria based on mean absorbed doses, mean BED, Dose-Volume Histograms (DVHs) or BED-Volume Histograms (BVHs) were considered. These MIAs were compared to those of the Partition Model with tolerance criteria on mean absorbed doses, a conventional method applied in clinical practice. Results: compared to Partition Model recommendations, performing dosimetry with the PMCD method makes it possible to increase the activity prescription while ensuring the radiation protection of the OARs. Moreover, tolerance criteria based on DVHs enhance treatment planning efficiency by taking advantage of the parallel character of the liver and the lungs, whose functions are not impaired if the level of irradiation to a fraction of the organ is kept sufficiently low. Finally, multi-cycle treatments based on tolerance criteria on mean BED and BVHs were considered to go further in dose optimization, taking into account biological considerations such as cell repair and radiosensitivity. Conclusion: besides its feasibility
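
    The biologically effective dose invoked above comes from the linear-quadratic model; for an exponentially decaying dose rate, as in 90Y therapy, a standard textbook form (not necessarily the exact expression implemented in OEDIPE) is

```latex
\mathrm{BED} = D \left( 1 + \frac{\lambda}{\mu + \lambda} \cdot \frac{D}{\alpha/\beta} \right)
```

    where D is the total absorbed dose, α/β the tissue radiosensitivity ratio, μ the sublethal-damage repair rate constant and λ the effective decay constant of the dose rate.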

  15. Use of Monte Carlo simulations for cultural heritage X-ray fluorescence analysis

    Brunetti, Antonio, E-mail: brunetti@uniss.it [Polcoming Department, University of Sassari (Italy); Golosio, Bruno [Polcoming Department, University of Sassari (Italy); Schoonjans, Tom; Oliva, Piernicola [Chemical and Pharmaceutical Department, University of Sassari (Italy)

    2015-06-01

    The analytical study of Cultural Heritage objects often requires merely a qualitative determination of composition and manufacturing technology. However, sometimes a qualitative estimate is not sufficient, for example when dealing with multilayered metallic objects. Under such circumstances a quantitative estimate of the chemical contents of each layer is sometimes required in order to determine the technology that was used to produce the object. A quantitative analysis is often complicated by the surface state: roughness, corrosion, incrustations that remain even after restoration, due to efforts to preserve the patina. Furthermore, restorers will often add a protective layer on the surface. In all these cases standard quantitative methods such as the fundamental parameter based approaches are generally not applicable. An alternative approach is presented based on the use of Monte Carlo simulations for quantitative estimation. - Highlights: • We present an application of fast Monte Carlo codes for Cultural Heritage artifact analysis. • We show applications to complex multilayer structures. • The methods allow estimating both the composition and the thickness of multilayer, such as bronze with patina. • The performance in terms of accuracy and uncertainty is described for the bronze samples.

  16. Speciation model selection by Monte Carlo analysis of optical absorption spectra: Plutonium(IV) nitrate complexes

    Standard modeling approaches can produce the most likely values of the formation constants of metal-ligand complexes if a particular set of species containing the metal ion is known or assumed to exist in solution equilibrium with complexing ligands. Identifying the most likely set of species when more than one set is plausible is a more difficult problem to address quantitatively. A Monte Carlo method of data analysis is described that measures the relative abilities of different speciation models to fit optical spectra of open-shell actinide ions. The best model(s) can be identified from among a larger group of models initially judged to be plausible. The method is demonstrated by analyzing the absorption spectra of aqueous Pu(IV) titrated with nitrate ion at constant 2 molal ionic strength in aqueous perchloric acid. The best speciation model supported by the data is shown to include three Pu(IV) species with nitrate coordination numbers 0, 1, and 2. Formation constants are β1=3.2±0.5 and β2=11.2±1.2, where the uncertainties are 95% confidence limits estimated by propagating raw data uncertainties using Monte Carlo methods. Principal component analysis independently indicates three Pu(IV) complexes in equilibrium. (c) 2000 Society for Applied Spectroscopy

  17. Use of Monte Carlo simulations for cultural heritage X-ray fluorescence analysis

    The analytical study of Cultural Heritage objects often requires merely a qualitative determination of composition and manufacturing technology. However, sometimes a qualitative estimate is not sufficient, for example when dealing with multilayered metallic objects. Under such circumstances a quantitative estimate of the chemical contents of each layer is sometimes required in order to determine the technology that was used to produce the object. A quantitative analysis is often complicated by the surface state: roughness, corrosion, incrustations that remain even after restoration, due to efforts to preserve the patina. Furthermore, restorers will often add a protective layer on the surface. In all these cases standard quantitative methods such as the fundamental parameter based approaches are generally not applicable. An alternative approach is presented based on the use of Monte Carlo simulations for quantitative estimation. - Highlights: • We present an application of fast Monte Carlo codes for Cultural Heritage artifact analysis. • We show applications to complex multilayer structures. • The methods allow estimating both the composition and the thickness of multilayer, such as bronze with patina. • The performance in terms of accuracy and uncertainty is described for the bronze samples

  18. Neutronic Analysis of the 3 MW TRIGA MARK II Research Reactor, Part I: Monte Carlo Simulation

    This study deals with the neutronic analysis of the current core configuration of a 3 MW TRIGA MARK II research reactor at Atomic Energy Research Establishment (AERE), Savar, Dhaka, Bangladesh and validation of the results by benchmarking with the experimental, operational and available Final Safety Analysis Report (FSAR) values. The three-dimensional continuous-energy Monte Carlo code MCNP4C was used to develop a versatile and accurate full-core model of the TRIGA core. The model represents in detail all components of the core with literally no physical approximation. All fresh fuel and control elements as well as the vicinity of the core were precisely described. Continuous energy cross-section data from ENDF/B-VI and S(α, β) scattering functions from the ENDF/B-V library were used. The validation of the model against benchmark experimental results is presented. The MCNP predictions and the experimentally determined values are found to be in very good agreement, which indicates that the Monte Carlo model is correctly simulating the TRIGA reactor. (author)

  19. Monte Carlo simulation for slip rate sensitivity analysis in Cimandiri fault area

    Pratama, Cecep, E-mail: great.pratama@gmail.com [Graduate Program of Earth Science, Faculty of Earth Science and Technology, ITB, JalanGanesa no. 10, Bandung 40132 (Indonesia); Meilano, Irwan [Geodesy Research Division, Faculty of Earth Science and Technology, ITB, JalanGanesa no. 10, Bandung 40132 (Indonesia); Nugraha, Andri Dian [Global Geophysical Group, Faculty of Mining and Petroleum Engineering, ITB, JalanGanesa no. 10, Bandung 40132 (Indonesia)

    2015-04-24

    Slip rate is used to estimate the earthquake recurrence relationship, which has the greatest influence on the hazard level. We examine the slip rate contribution to Peak Ground Acceleration (PGA) in probabilistic seismic hazard maps (10% probability of exceedance in 50 years, or a 500-year return period). Hazard curves of PGA have been investigated for Sukabumi using a PSHA (Probabilistic Seismic Hazard Analysis). We observe that the largest influence on the hazard estimate comes from the crustal fault. A Monte Carlo approach has been developed to assess the sensitivity, and the properties of the Monte Carlo simulations have been assessed. The uncertainty and coefficient of variation of the slip rate for the Cimandiri Fault area have been calculated. We observe that the seismic hazard estimates are sensitive to the fault slip rate, with a seismic hazard uncertainty of about 0.25 g. For the specific site, we found that the seismic hazard estimate for Sukabumi is between 0.4904 – 0.8465 g, with an uncertainty between 0.0847 – 0.2389 g and a COV between 17.7% – 29.8%.

  20. Development of a component Monte Carlo program for accident sequence analysis to apply for reprocessing facility

    In consideration of its application to a reprocessing facility, where a variety of causal events such as equipment failures and human errors might occur and the event progression takes place with a relatively substantial time delay before reaching the accident stage, a component Monte Carlo program for accident sequence analysis has been developed to pursue chronologically, and in an exact manner, the probabilistic behavior of each component failure and repair. In comparison with an analytical formulation and its calculated results, this Monte Carlo technique is shown to predict reasonable results. Then, taking a sample problem from a German reprocessing facility model, an accident sequence of red-oil explosion in a plutonium evaporator is analyzed to give a comprehensive interpretation of the statistical variation range and the computer time elapsed for the random walk history calculations. Furthermore, to address its applicability to practical plant systems with complex component constitutions, the possibility of a drastic speed-up of the computation is shown by parallelization of the computer program. (author)
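
    The chronological simulation described above can be sketched as an event-driven random walk over component states: each component alternates between 'up' and 'down' with exponentially distributed holding times, and a history is scored as an accident if a chosen criterion is met. The following toy sketch uses invented failure and repair rates and a made-up accident criterion (all redundant components down simultaneously); it is not the program described in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def run_history(t_mission=1000.0, lam=1e-3, mu=0.1, n_comp=3):
    """Chronological Monte Carlo walk of one plant history: n_comp
    redundant components fail (rate lam) and are repaired (rate mu);
    an 'accident' occurs if all are down simultaneously.
    All rates are illustrative."""
    t = 0.0
    up = np.ones(n_comp, dtype=bool)
    next_event = rng.exponential(1.0 / lam, n_comp)  # first failure times
    while t < t_mission:
        i = int(np.argmin(next_event))
        t = next_event[i]
        if t >= t_mission:
            break
        up[i] = not up[i]                  # toggle failed <-> repaired
        rate = mu if not up[i] else lam    # schedule the next transition
        next_event[i] = t + rng.exponential(1.0 / rate)
        if not up.any():                   # all components down
            return True
    return False

n = 20000
accidents = sum(run_history() for _ in range(n))
print(f"accident probability ~ {accidents / n:.2e}")
```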

  1. Monte Carlo simulation for slip rate sensitivity analysis in Cimandiri fault area

    Slip rate is used to estimate the earthquake recurrence relationship, which has the greatest influence on the hazard level. We examine the slip rate contribution to Peak Ground Acceleration (PGA) in probabilistic seismic hazard maps (10% probability of exceedance in 50 years, or a 500-year return period). Hazard curves of PGA have been investigated for Sukabumi using a PSHA (Probabilistic Seismic Hazard Analysis). We observe that the largest influence on the hazard estimate comes from the crustal fault. A Monte Carlo approach has been developed to assess the sensitivity, and the properties of the Monte Carlo simulations have been assessed. The uncertainty and coefficient of variation of the slip rate for the Cimandiri Fault area have been calculated. We observe that the seismic hazard estimates are sensitive to the fault slip rate, with a seismic hazard uncertainty of about 0.25 g. For the specific site, we found that the seismic hazard estimate for Sukabumi is between 0.4904 – 0.8465 g, with an uncertainty between 0.0847 – 0.2389 g and a COV between 17.7% – 29.8%.

  2. Monte Carlo analysis of direct measurements of the fission neutron yield per absorption by 233U and 235U of monochromatic neutrons

    Monte Carlo analysis of the measurements of Smith et al. of the number of fission neutrons produced per neutron absorbed, $\eta$, for 2200 m/sec neutrons absorbed by 233U and 235U yields: $\eta_{2200}^{233} = 2.2993 \pm 0.0082$ and $\eta_{2200}^{235} = 2.0777 \pm 0.0064$. The standard deviations include Monte Carlo, cross section, and experimental uncertainties. The Monte Carlo analysis was confirmed by calculating measured quantities used by the experimentalists in determining $\eta_{2200}$.

  3. AIRTRANS, Time-Dependent, Energy Dependent 3-D Neutron Transport, Gamma Transport in Air by Monte-Carlo

    1 - Nature of physical problem solved: The function of the AIRTRANS system is to calculate by Monte Carlo methods the radiation field produced by neutron and/or gamma-ray sources which are located in the atmosphere. The radiation field is expressed as the time- and energy-dependent flux at a maximum of 50 point detectors in the atmosphere. The system calculates uncollided fluxes analytically and collided fluxes by the 'once-more collided' flux-at-a-point technique. Energy-dependent response functions can be applied to the fluxes to obtain desired flux functionals, such as doses, at the detector point. AIRTRANS can also be employed to generate sources of secondary gamma radiation. 2 - Method of solution: Neutron interactions treated in the calculational scheme include elastic (isotropic and anisotropic) scattering, inelastic (discrete level and continuum) scattering, and absorption. Charged particle reactions, e.g. (n,p), are treated as absorptions. A built-in kernel option can be employed to take neutrons from 150 keV down to thermal energy, thus eliminating the need for particle tracking in this energy range. Another option, used in conjunction with the neutron transport problem, creates an 'interaction tape' which describes all the collision events that can lead to the production of secondary gamma rays; this interaction tape can subsequently be used to generate a source of secondary gamma rays. The gamma-ray interactions considered include Compton scattering, pair production, and the photoelectric effect; the latter two processes are treated as absorption events. Incorporated in the system is an option to use a simple importance sampling technique for detectors that are many mean free paths from the source. In essence, particles which fly far from the source are split into fragments, the degree of fragmentation being proportional to the penetration distance from the source. Each fragment is tracked separately, thus increasing the percentage of computer time spent

  4. BOT3P: a mesh generation software package for transport analysis with deterministic and Monte Carlo codes

    BOT3P consists of a set of standard Fortran 77 programs that gives users of the deterministic transport codes DORT, TORT, TWODANT, THREEDANT, PARTISN and the sensitivity code SUSD3D some useful diagnostic tools to prepare and check the geometry of their input data files for both Cartesian and cylindrical geometries, including graphical display modules. Users can produce the geometrical and material distribution data for all the cited codes for both two-dimensional and three-dimensional applications and, in 3-dimensional Cartesian geometry only, for the Monte Carlo transport code MCNP, starting from the same BOT3P input. Moreover, BOT3P stores the fine-mesh arrays and the material zone map in a binary file, the content of which can easily be interfaced to any deterministic or Monte Carlo transport code. This makes it possible to compare directly, for the same geometry, the effects stemming from the use of different data libraries and solution approaches on transport analysis results. BOT3P Version 5.0 lets users optionally compute, with the desired precision, the area/volume error of material zones with respect to the theoretical values, if any, arising from the stair-cased representation of the geometry, and automatically update material densities over whole zone domains to conserve masses. A local (per-mesh) density correction approach is also available. BOT3P is designed to run on Linux/UNIX platforms and is publicly available from the Organization for Economic Cooperation and Development (OECD)/Nuclear Energy Agency (NEA) Data Bank. Through the use of BOT3P, radiation transport problems with complex 3-dimensional geometrical structures can be modelled easily, as a relatively small amount of engineer-time is required and refinement is achieved by changing a few parameters. This tool is useful for solving very large challenging problems, as successfully demonstrated not only in some complex neutron shielding and criticality benchmarks but also in a power

  5. Benchmarking of the 3-D CAD-based Discrete Ordinates code “ATTILA” for dose rate calculations against experiments and Monte Carlo calculations

    Shutdown dose rate (SDDR) inside and around the diagnostics ports of ITER is assessed at PPPL/UCLA using the 3-D FEM discrete ordinates code ATTILA, along with its updated FORNAX transmutation/decay gamma library. Other ITER partners assess the SDDR using codes based on the Monte Carlo (MC) approach (e.g. the MCNP code) for the transport calculation and the radioactivity inventory code FISPACT, or other equivalent decay data libraries, for the dose rate assessment. To reveal the range of discrepancies in the results obtained by various analysts, an extensive experimental and calculational benchmarking effort has been undertaken to validate the capability of ATTILA for dose rate assessment. On the experimental validation front, the comparison was performed using the measured data from two SDDR experiments performed at the FNG facility, Italy. Comparison was made to the experimental data and to the MC results obtained by other analysts. On the calculational validation front, ATTILA's predictions were compared to other results at key locations inside a calculation benchmark whose configuration duplicates an upper diagnostics port plug (UPP) in ITER. Both the serial and parallel versions of ATTILA-7.1.0 were used in the PPPL/UCLA analysis, performed with the FENDL-2.1/FORNAX databases. In the first FNG experiment, it was shown that ATTILA's dose rates are largely overestimated (by ∼30–60%) with the ANSI/ANS-6.1.1 flux-to-dose factors, whereas the ICRP-74 factors give better agreement (10–20%) with the experimental data and with the MC results at all cooling times. In the second experiment, there is an underestimation of the SDDR calculated by both MCNP and ATTILA based on ANSI/ANS-6.1.1 for cooling times up to ∼4 days after irradiation; thereafter, an overestimation is observed (∼5–10% with MCNP and ∼10–15% with ATTILA). As for the calculation benchmark, the agreement is much better based on the ICRP-74 1996 data. The divergence among all dose rate results at ∼11 days cooling time is no

  6. Monte Carlo depletion analysis of a PWR integral fuel burnable absorber by MCNAP

    The MCNAP is a personal-computer-based continuous energy Monte Carlo (MC) neutronics analysis program written in the C++ language. For the purpose of examining its qualification, a comparison of the depletion analyses of three PWR fuel assemblies with integral fuel burnable absorbers, performed by the MCNAP and by deterministic fuel assembly (FA) design vendor codes, is presented. It is demonstrated that the continuous energy MC calculation by the MCNAP can provide a very accurate neutronics analysis method for burnable absorber FAs. It is also demonstrated that parallel MC computation with multiple PCs enables one to complete the lifetime depletion analysis of the FAs within hours instead of days. (orig.)

  7. Data uncertainty analysis for safety assessment of HLW disposal by the Monte Carlo simulation

    Based on the conceptual model of the Reference Case, which is defined as the baseline for the various cases in the safety assessment of the H12 report, a new probabilistic simulation code that allows rapid evaluation of the effect of data uncertainty has been developed. Using this code, a probabilistic simulation was performed by the Monte Carlo method, and the conservativeness and sufficiency of the safety assessment in the H12 report, which had been performed deterministically, were confirmed. In order to identify the important parameters, this study includes an analysis of the sensitivity structure between the inputs and the output; cluster analysis and multiple regression analysis for each cluster were applied. As a result, the transmissivity had a strong influence on the uncertainty of the system performance. Furthermore, this approach was confirmed to identify both the globally sensitive parameters and the locally sensitive parameters that strongly influence parts of the space of simulation results. (author)

  8. The timing resolution of scintillation-detector systems: Monte Carlo analysis

    Recent advancements in fast scintillating materials and fast photomultiplier tubes (PMTs) have stimulated renewed interest in time-of-flight (TOF) positron emission tomography (PET). It is well known that the improvement in the timing resolution in PET can significantly reduce the noise variance in the reconstructed image resulting in improved image quality. In order to evaluate the timing performance of scintillation detectors used in TOF PET, we use Monte Carlo analysis to model the physical processes (crystal geometry, crystal surface finish, scintillator rise time, scintillator decay time, photoelectron yield, PMT transit time spread, PMT single-electron response, amplifier response and time pick-off method) that can contribute to the timing resolution of scintillation-detector systems. In the Monte Carlo analysis, the photoelectron emissions are modeled by a rate function, which is used to generate the photoelectron time points. The rate function, which is simulated using Geant4, represents the combined intrinsic light emissions of the scintillator and the subsequent light transport through the crystal. The PMT output signal is determined by the superposition of the PMT single-electron response resulting from the photoelectron emissions. The transit time spread and the single-electron gain variation of the PMT are modeled in the analysis. Three practical time pick-off methods are considered in the analysis. Statistically, the best timing resolution is achieved with the first photoelectron timing. The calculated timing resolution suggests that a leading edge discriminator gives better timing performance than a constant fraction discriminator and produces comparable results when a two-threshold or three-threshold discriminator is used. For a typical PMT, the effect of detector noise on the timing resolution is negligible. The calculated timing resolution is found to improve with increasing mean photoelectron yield, decreasing scintillator decay time and
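
    The rate-function approach lends itself to a compact sketch: draw photoelectron emission times from the scintillation pulse shape, smear each with the PMT transit time spread, and let a leading-edge discriminator fire on the arrival of the n-th photoelectron. The parameters below (bi-exponential pulse shape, TTS, threshold) are illustrative stand-ins, not the Geant4-derived rate function used in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

def event_time(n_pe_mean=2000, tau_rise=0.1, tau_decay=40.0,
               tts_sigma=0.15, threshold_pe=5):
    """Time pick-off for one scintillation event (times in ns).
    Emission times follow a bi-exponential pulse shape (sampled as the sum
    of two exponentials); the PMT transit time spread (TTS) smears each
    photoelectron; as a simplified proxy for a leading-edge discriminator,
    the trigger fires at the arrival of the threshold-th photoelectron."""
    n_pe = rng.poisson(n_pe_mean)
    t_emit = rng.exponential(tau_decay, n_pe) + rng.exponential(tau_rise, n_pe)
    t_arrive = t_emit + rng.normal(0.0, tts_sigma, n_pe)
    return np.sort(t_arrive)[threshold_pe - 1]

times = np.array([event_time() for _ in range(5000)])
fwhm = 2.355 * times.std()
# sqrt(2) combines two identical detectors in coincidence (approximation)
print(f"coincidence timing resolution ~ {np.sqrt(2) * fwhm:.3f} ns FWHM")
```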

  9. The timing resolution of scintillation-detector systems: Monte Carlo analysis.

    Choong, Woon-Seng

    2009-11-01

    Recent advancements in fast scintillating materials and fast photomultiplier tubes (PMTs) have stimulated renewed interest in time-of-flight (TOF) positron emission tomography (PET). It is well known that the improvement in the timing resolution in PET can significantly reduce the noise variance in the reconstructed image resulting in improved image quality. In order to evaluate the timing performance of scintillation detectors used in TOF PET, we use Monte Carlo analysis to model the physical processes (crystal geometry, crystal surface finish, scintillator rise time, scintillator decay time, photoelectron yield, PMT transit time spread, PMT single-electron response, amplifier response and time pick-off method) that can contribute to the timing resolution of scintillation-detector systems. In the Monte Carlo analysis, the photoelectron emissions are modeled by a rate function, which is used to generate the photoelectron time points. The rate function, which is simulated using Geant4, represents the combined intrinsic light emissions of the scintillator and the subsequent light transport through the crystal. The PMT output signal is determined by the superposition of the PMT single-electron response resulting from the photoelectron emissions. The transit time spread and the single-electron gain variation of the PMT are modeled in the analysis. Three practical time pick-off methods are considered in the analysis. Statistically, the best timing resolution is achieved with the first photoelectron timing. The calculated timing resolution suggests that a leading edge discriminator gives better timing performance than a constant fraction discriminator and produces comparable results when a two-threshold or three-threshold discriminator is used. For a typical PMT, the effect of detector noise on the timing resolution is negligible. The calculated timing resolution is found to improve with increasing mean photoelectron yield, decreasing scintillator decay time and

  10. Validation of Atucha-2 PHWR helios and Relap5-3D model by Monte Carlo cell and core calculations - 335

    Within the framework of the Second Agreement 'Nucleoelectrica Argentina-SA - University of Pisa', a complex three-dimensional (3D) neutron kinetics (NK) coupled thermal-hydraulic (TH) RELAP5-3D model of the Atucha-2 PHWR has been developed and validated. The homogenized cross-section database was produced by the lattice physics code HELIOS. In order to increase the level of confidence in the results of such sophisticated models, an independent Monte Carlo code model, based on the MONTEBURNS package (MCNP5 + ORIGEN), has been set up. The scope of this activity is to obtain a systematic check of the deterministic codes' results. This need is particularly felt in the case of Atucha-2 reactor modeling, owing to its peculiarities (e.g., oblique control rods, positive void coefficient) and because, if approved by the Argentinean Safety Authority, the RELAP5-3D 3D NK TH model will constitute the first application of coupled neutron-kinetics/thermal-hydraulics code techniques to a reactor licensing project. (authors)

  11. Current status of safety analysis code MARS and uncertainty quantification by Monte-Carlo method

    The MARS (Multi-dimensional Analysis of Reactor Safety) code has been developed since 1997 for realistic multi-dimensional thermal-hydraulic system analysis of light water reactor transients. The backbones of MARS are the RELAP5/MOD3.2.1.2 and COBRA-TF codes of the USNRC; these two codes were consolidated into a single code by integrating the hydrodynamic solution schemes. A new multi-dimensional TH model has been developed and extended to enable integrated coupled TH analysis through a code coupling technique (DLL). The motivation for uncertainty quantification of MARS is twofold: 1) to provide "best estimate plus uncertainty" analysis for licensing of commercial power reactors with realistic margins, and 2) to support design and/or validation related analysis for research and production reactors. An assessment of the current LBLOCA uncertainty analysis methodology has been made using data from the integral thermal-hydraulic experiment LOFT L2-5. A Monte Carlo calculation has been performed and compared with the tolerance level determined by Wilks' formula; the calculation could be done within reasonable CPU time on a PC cluster system. The Monte Carlo exercise shows that the 95% upper limit value can be obtained with 95% confidence level by Wilks' formula, although one must accept a 5% risk of PCT under-prediction. The results also show that the statistical fluctuation of the limit value using the first-order Wilks formula is as large as the PCT uncertainty itself. The main conclusion is that it is desirable to increase the order of the Wilks formula beyond the second order to obtain a reliable safety margin for the current design features. (author)
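
    Wilks' formula fixes the number of code runs needed for a one-sided tolerance statement: with N runs, the largest computed value bounds the 95th percentile with confidence 1 − 0.95^N, which first exceeds 95% at N = 59; higher orders (using the second-largest value, and so on) need more runs but give a more stable limit value. A short computation of the minimal N:

```python
from math import comb

def wilks_n(gamma=0.95, beta=0.95, order=1):
    """Smallest N such that the order-th largest of N runs bounds the
    gamma-quantile with confidence beta (one-sided Wilks formula)."""
    n = order
    while True:
        n += 1
        # confidence = P(at least `order` of N samples exceed the quantile)
        conf = 1.0 - sum(comb(n, k) * (1 - gamma) ** k * gamma ** (n - k)
                         for k in range(order))
        if conf >= beta:
            return n

for order in (1, 2, 3):
    print(f"order {order}: N = {wilks_n(order=order)}")  # 59, 93, 124
```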

  12. Refined Monte Carlo analysis of the H.B. Robinson-2 reactor pressure vessel dosimetry benchmark

    Highlights: → Activation of in- and ex-vessel radiometric dosimeters is studied with MCNPX. → Influences of the neutron source definition and cross-section libraries are examined. → The 237Np(n,f) energy cut-off is set at 10 eV to cover the reaction completely. → Different methods for deriving activities from reaction rates are compared. → Uncertainties are evaluated and are below 10%, final C/E ratios being within 15%. - Abstract: A refined analysis, based on use of the Monte Carlo code MCNPX-2.4.0, is presented for the 'H.B. Robinson-2 pressure vessel dosimetry benchmark', which is part of the Radiation Shielding and Dosimetry Experiments Database (SINBAD). First, the performance of the Monte Carlo methodology is reassessed relative to the reported deterministic results obtained with DORT; the analysis is accompanied by a quantitative evaluation of the optimal energy cut-off value for each of the in- and ex-vessel dosimeters that were employed. Second, a more realistic definition of the neutron source is implemented than that proposed in the benchmark: the current procedure for power-to-neutron-source-strength conversion, as well as for explicitly considering the burnup-dependent, fuel-assembly-wise average fission neutron spectrum, is found to affect the calculated values significantly. In addition to the modelling refinements made, different approaches are tested for deriving the dosimeter activities, such that the neutron source time-evolution and the activity decay can be taken into account more accurately. Finally, in order to achieve a certain assessment of uncertainties, several sensitivity studies are carried out, e.g. with respect to the nuclear data used for the dosimeters, as well as to the assumed physical location of the dosimeters. In spite of some apparent degradation in the prediction of experimental results when refining the Monte Carlo modelling, the final calculation/experiment (C/E) ratios for the measured dosimeter activities remain within 15%.

  13. Noninvasive spectral imaging of skin chromophores based on multiple regression analysis aided by Monte Carlo simulation

    Nishidate, Izumi; Wiswadarma, Aditya; Hase, Yota; Tanaka, Noriyuki; Maeda, Takaaki; Niizeki, Kyuichi; Aizu, Yoshihisa

    2011-08-01

    In order to visualize melanin and blood concentrations and oxygen saturation in human skin tissue, a simple imaging technique based on multispectral diffuse reflectance images acquired at six wavelengths (500, 520, 540, 560, 580 and 600 nm) was developed. The technique utilizes multiple regression analysis aided by Monte Carlo simulation of diffuse reflectance spectra. Using the absorbance spectrum as the response variable and the extinction coefficients of melanin, oxygenated hemoglobin, and deoxygenated hemoglobin as predictor variables, multiple regression analysis provides regression coefficients. Concentrations of melanin and total blood are then determined from the regression coefficients using conversion vectors that are deduced numerically in advance, while oxygen saturation is obtained directly from the regression coefficients. Experiments with a tissue-like agar gel phantom validated the method. In vivo experiments on human skin of the hand during upper limb occlusion and of the inner forearm exposed to UV irradiation demonstrated the ability of the method to evaluate physiological reactions of human skin tissue.
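
    The regression step is a small per-pixel least-squares problem: the absorbance at the six wavelengths is the response, and the chromophore extinction coefficients (plus an offset) are the predictors. The sketch below uses placeholder extinction values; the Monte-Carlo-derived conversion vectors of the paper are not reproduced.

```python
import numpy as np

wavelengths = [500, 520, 540, 560, 580, 600]   # nm
# Columns: offset, melanin, HbO2, Hb extinction coefficients at the six
# wavelengths -- the numbers below are illustrative placeholders only.
E = np.array([
    [1.0, 0.97, 0.48, 0.60],
    [1.0, 0.88, 0.56, 0.55],
    [1.0, 0.80, 0.70, 0.66],
    [1.0, 0.73, 0.65, 0.75],
    [1.0, 0.66, 0.61, 0.72],
    [1.0, 0.60, 0.15, 0.20],
])

def regression_coefficients(absorbance):
    """Least-squares fit A(lambda) ~ a0 + a1*eps_mel + a2*eps_HbO2 + a3*eps_Hb.
    The coefficients a1..a3 would then be mapped to concentrations via
    Monte-Carlo-derived conversion vectors (not reproduced here)."""
    coef, *_ = np.linalg.lstsq(E, absorbance, rcond=None)
    return coef

# Synthetic absorbance spectrum for one pixel
true = np.array([0.1, 0.4, 0.3, 0.2])          # a0, melanin, HbO2, Hb
a = E @ true + np.random.default_rng(6).normal(0, 0.005, 6)
print(regression_coefficients(a))               # should be close to `true`
```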

  14. A bottom collider vertex detector design, Monte-Carlo simulation and analysis package

    A detailed simulation of the BCD vertex detector is underway. Specifications and global design issues are briefly reviewed. The BCD design based on double-sided strip detectors is described in more detail. The GEANT3-based Monte-Carlo program and the analysis package used to estimate detector performance are discussed in detail. The current status of the expected resolution and signal-to-noise ratio for the ''golden'' CP-violating mode Bd → π+π- is presented. These calculations have been done at FNAL energy (√s = 2.0 TeV). Emphasis is placed on design issues, analysis techniques and related software rather than physics potential. 20 refs., 46 figs

  15. Core-scale solute transport model selection using Monte Carlo analysis

    Malama, Bwalya; James, Scott C

    2013-01-01

    Model applicability to core-scale solute transport is evaluated using breakthrough data from column experiments conducted with conservative tracers tritium (H-3) and sodium-22, and the retarding solute uranium-232. The three models considered are single-porosity, double-porosity with single-rate mobile-immobile mass-exchange, and the multirate model, which is a deterministic model that admits the statistics of a random mobile-immobile mass-exchange rate coefficient. The experiments were conducted on intact Culebra Dolomite core samples. Previously, data were analyzed using single- and double-porosity models although the Culebra Dolomite is known to possess multiple types and scales of porosity, and to exhibit multirate mobile-immobile-domain mass transfer characteristics at field scale. The data are reanalyzed here and null-space Monte Carlo analysis is used to facilitate objective model selection. Prediction (or residual) bias is adopted as a measure of the model structural error. The analysis clearly shows ...

  16. Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations

    In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by developing a strongly correlated ("coupled") stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc., hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz–Kalos–Lebowitz algorithm's philosophy, where events are divided in classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples including adsorption, desorption, and diffusion Kinetic Monte Carlo that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary
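
    The variance-reduction idea can be demonstrated with the simplest coupling the authors compare against, common random numbers: two runs of a toy adsorption/desorption chain driven by the same random stream yield a finite-difference sensitivity estimator with far lower variance than two independent runs. The goal-oriented couplings of the paper go further by optimizing the coupling for the observable, which this sketch does not attempt.

```python
import numpy as np

def final_coverage(adsorb_rate, n_sites=100, n_steps=5000, seed=0):
    """Toy lattice adsorption/desorption chain (embedded KMC, unit
    desorption rate) driven by an explicit random stream so that two
    runs can share the same stream."""
    rng = np.random.default_rng(seed)
    occupied = 0
    for _ in range(n_steps):
        r_ads = adsorb_rate * (n_sites - occupied)
        r_des = 1.0 * occupied
        if rng.random() < r_ads / (r_ads + r_des):
            occupied += 1
        else:
            occupied -= 1
    return occupied / n_sites

eps, ka, n_rep = 0.01, 2.0, 200
# Coupled (common random numbers): same seed for both rate values
d_crn = [(final_coverage(ka + eps, seed=s) - final_coverage(ka, seed=s)) / eps
         for s in range(n_rep)]
# Independent streams: different seeds for each run
d_ind = [(final_coverage(ka + eps, seed=2 * s) -
          final_coverage(ka, seed=2 * s + 1)) / eps for s in range(n_rep)]
print(f"CRN estimator:         {np.mean(d_crn):.3f} +/- {np.std(d_crn):.3f}")
print(f"independent estimator: {np.mean(d_ind):.3f} +/- {np.std(d_ind):.3f}")
```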

  17. Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations

    Arampatzis, Georgios; Katsoulakis, Markos A.

    2014-03-01

    In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by developing a strongly correlated ("coupled") stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc., hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz-Kalos-Lebowitz algorithm's philosophy, where events are divided in classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples including adsorption, desorption, and diffusion Kinetic Monte Carlo that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB

  18. Quantum Monte Carlo for Noncovalent Interactions: Analysis of Protocols and Simplified Scheme Attaining Benchmark Accuracy

    Dubecký, Matúš; Jurečka, Petr; Mitas, Lubos; Hobza, Pavel; Otyepka, Michal

    2014-01-01

    Reliable theoretical predictions of noncovalent interaction energies, which are important e.g. in drug-design and hydrogen-storage applications, belong to longstanding challenges of contemporary quantum chemistry. In this respect, the fixed-node diffusion Monte Carlo (FN-DMC) is a promising alternative to the commonly used "gold standard" coupled-cluster CCSD(T)/CBS method for its benchmark accuracy and favourable scaling, in contrast to other correlated wave function approaches. This work is focused on the analysis of protocols and possible tradeoffs for FN-DMC estimations of noncovalent interaction energies and proposes a significantly more efficient yet accurate computational protocol using simplified explicit correlation terms. Its performance is illustrated on a number of weakly bound complexes, including water dimer, benzene/hydrogen, T-shape benzene dimer and stacked adenine-thymine DNA base pair complex. The proposed protocol achieves excellent agreement (~0.2 kcal/mol) with respect to the reli...

  19. 2D Monte Carlo analysis of radiological risk assessment for the food intake in Korea

    Most public health risk assessments assume and combine a series of average, conservative and worst-case values to derive an acceptable point estimate of risk. To improve the quality of risk information, insight into the uncertainty in the assessments is needed, and more emphasis is being put on probabilistic risk assessment. Probabilistic risk assessment studies use probability distributions for one or more variables of the risk equation in order to quantitatively characterize variability and uncertainty. In this study, an advanced technique called two-dimensional Monte Carlo analysis (2D MCA) is applied to the estimation of internal doses from intake of radionuclides in foodstuffs and drinking water in Korea. The variables of the risk model along with the parameters of these variables are described in terms of probability density functions (PDFs). In addition, sensitivity analyses were performed to identify the factors important to the radiation doses. (author)
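
    A minimal sketch of the 2D MCA structure follows: an outer loop samples epistemic uncertainty, an inner loop samples inter-individual variability, and a simple ingestion-dose model D = C x IR x DCF stands in for the paper's risk equations. All distributions and values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
N_OUTER, N_INNER = 200, 1000     # uncertainty samples / variability samples
DCF = 1.3e-8                     # Sv/Bq, fixed dose conversion factor (assumed)

p95_doses = []
for _ in range(N_OUTER):
    # epistemic uncertainty: the true mean concentration (Bq/kg) is not known exactly
    mean_conc = rng.lognormal(mean=np.log(50.0), sigma=0.3)
    # aleatory variability: individual concentrations and intake rates
    conc = rng.lognormal(np.log(mean_conc), 0.5, size=N_INNER)      # Bq/kg
    intake = rng.normal(300.0, 60.0, size=N_INNER).clip(min=0)      # kg/year
    dose = conc * intake * DCF                                      # Sv/year
    p95_doses.append(np.percentile(dose, 95))

# Uncertainty band on the 95th-percentile individual dose
lo, hi = np.percentile(p95_doses, [5, 95])
print(f"95th-percentile dose: {lo:.2e} .. {hi:.2e} Sv/year (90% uncertainty band)")
```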

  20. Predictive uncertainty analysis of a saltwater intrusion model using null-space Monte Carlo

    Herckenrath, Daan; Langevin, Christian D.; Doherty, John

    2011-01-01

    Because of the extensive computational burden and perhaps a lack of awareness of existing methods, rigorous uncertainty analyses are rarely conducted for variable-density flow and transport models. For this reason, a recently developed null-space Monte Carlo (NSMC) method for quantifying prediction uncertainty was applied. Random noise was added to the observations to approximate realistic field conditions. The NSMC method was used to calculate 1000 calibration-constrained parameter fields. If the dimensionality of the solution space was set appropriately, the estimated uncertainty range from the NSMC analysis encompassed the truth. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. Reducing the dimensionality of the null-space for the processing of the random parameter sets did not result in any significant gains in efficiency and compromised the ability...
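
    A schematic illustration of the null-space idea, under strong simplifying assumptions: a linear model with a known Jacobian stands in for the calibrated flow model, and no recalibration step is shown.

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_par = 8, 20                    # fewer observations than parameters

J = rng.normal(size=(n_obs, n_par))     # Jacobian of observations w.r.t. parameters
p_cal = rng.normal(size=n_par)          # calibrated parameter field

# SVD: rows of Vt beyond the solution-space dimension span the null space.
# Here the dimension is set to rank(J); NSMC would typically truncate lower
# and polish each field with a quick recalibration.
U, s, Vt = np.linalg.svd(J)
n_sol = np.linalg.matrix_rank(J)
V_null = Vt[n_sol:].T                   # (n_par, n_par - n_sol) null-space basis

fields = []
for _ in range(1000):
    xi = rng.normal(size=V_null.shape[1])
    fields.append(p_cal + V_null @ xi)  # calibration-constrained realization

# Simulated observations barely change across realizations:
spread = np.std([J @ f for f in fields], axis=0)
print("max std of simulated observations:", spread.max())
```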

  1. Ligand-receptor binding kinetics in surface plasmon resonance cells: A Monte Carlo analysis

    Carroll, Jacob; Forsten-Williams, Kimberly; Täuber, Uwe C

    2016-01-01

    Surface plasmon resonance (SPR) chips are widely used to measure association and dissociation rates for the binding kinetics between two species of chemicals, e.g., cell receptors and ligands. It is commonly assumed that ligands are spatially well mixed in the SPR region, and hence a mean-field rate equation description is appropriate. This approximation however ignores the spatial fluctuations as well as temporal correlations induced by multiple local rebinding events, which become prominent for slow diffusion rates and high binding affinities. We report detailed Monte Carlo simulations of ligand binding kinetics in an SPR cell subject to laminar flow. We extract the binding and dissociation rates by means of the techniques frequently employed in experimental analysis that are motivated by the mean-field approximation. We find major discrepancies in a wide parameter regime between the thus extracted rates and the known input simulation values. These results underscore the crucial quantitative importance of s...

  2. Outlier detection in near-infrared spectroscopic analysis by using Monte Carlo cross-validation

    2008-01-01

    An outlier detection method is proposed for near-infrared spectral analysis. The underlying philosophy of the method is that, in random test (Monte Carlo) cross-validation, the probability of outliers appearing in good models with a smaller prediction residual error sum of squares (PRESS), or in bad models with a larger PRESS, should differ markedly from that of normal samples. The method first builds a large number of PLS models using random test cross-validation, then sorts the models by PRESS, and finally recognizes the outliers according to the accumulative probability of each sample in the sorted models. For validation of the proposed method, four data sets, including three published data sets and a large data set of tobacco lamina, were investigated. The proposed method proved to be highly efficient and accurate compared with the conventional leave-one-out (LOO) cross-validation method.
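
    A condensed sketch of the screening logic, with ordinary least squares standing in for PLS, synthetic data, and a simplified per-sample score in place of the paper's accumulative-probability criterion; all values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, n_models = 60, 5, 2000
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + 0.1 * rng.normal(size=n)
y[:3] += 3.0                                   # plant three outliers

press = np.empty(n_models)
in_train = np.zeros((n_models, n), dtype=bool)
for k in range(n_models):                      # random-split (Monte Carlo) CV
    train = rng.choice(n, size=int(0.7 * n), replace=False)
    test = np.setdiff1d(np.arange(n), train)
    beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
    press[k] = np.sum((y[test] - X[test] @ beta) ** 2)
    in_train[k, train] = True

# Score each sample by the mean PRESS of the models that held it out:
# outliers inflate PRESS whenever they land in the test set.
score = np.array([press[~in_train[:, i]].mean() for i in range(n)])
print("suspected outliers:", np.argsort(score)[-5:])
```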

  3. Marathon: An Open Source Software Library for the Analysis of Markov-Chain Monte Carlo Algorithms.

    Rechner, Steffen; Berger, Annabell

    2016-01-01

    We present the software library marathon, which is designed to support the analysis of sampling algorithms that are based on the Markov-Chain Monte Carlo principle. The main application of this library is the computation of properties of so-called state graphs, which represent the structure of Markov chains. We demonstrate applications and the usefulness of marathon by investigating the quality of several bounding methods on four well-known Markov chains for sampling perfect matchings and bipartite graphs. In a set of experiments, we compute the total mixing time and several of its bounds for a large number of input instances. We find that the upper bound gained by the famous canonical path method is often several orders of magnitude larger than the total mixing time and deteriorates with growing input size. In contrast, the spectral bound is found to be a precise approximation of the total mixing time. PMID:26824442
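
    On a toy scale, the quantities marathon computes can be reproduced directly from a transition matrix. The chain below, a lazy random walk on a cycle, is an assumed example, not one of the paper's benchmark chains.

```python
import numpy as np

n, eps = 8, 0.25
P = np.zeros((n, n))
for i in range(n):                              # lazy random walk on a cycle
    P[i, i] = 0.5
    P[i, (i - 1) % n] = 0.25
    P[i, (i + 1) % n] = 0.25
pi = np.full(n, 1.0 / n)                        # uniform stationary distribution

def mixing_time(P, pi, eps):
    """Smallest t with worst-case total variation distance to pi below eps."""
    dist, t = np.eye(len(pi)), 0
    while 0.5 * np.abs(dist - pi).sum(axis=1).max() > eps:
        dist, t = dist @ P, t + 1
    return t

lam = sorted(abs(np.linalg.eigvals(P)))[-2]     # second-largest eigenvalue modulus
spectral_bound = np.log(1.0 / (eps * pi.min())) / (1.0 - lam)
print("total mixing time:", mixing_time(P, pi, eps))
print("spectral upper bound:", spectral_bound)
```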

  4. Monte Carlo analysis of Doppler reactivity coefficient for UO2 pin cell geometry

    Monte Carlo analysis has been performed to investigate the impact of the exact resonance elastic scattering model on the Doppler reactivity coefficient for the UO2 pin cell geometry with a parabolic temperature profile. As a result, the exact scattering model affects the coefficient similarly for both the flat and parabolic temperature profiles; it increases the contribution of uranium-238 resonance capture in the energy region from ∼16 eV to ∼150 eV, and does so uniformly in the radial direction. Then the following conclusions hold for both the exact and asymptotic resonance scattering models. The Doppler reactivity coefficient is well reproduced with the definition of the effective fuel temperature (equivalent flat temperature) proposed by Grandi et al. In addition, the effective fuel temperature volume-averaged over the entire fuel region overestimates the magnitude of the reference Doppler reactivity coefficient, but the calculated value can be significantly improved by dividing the fuel region into a few equi-volumes. (author)
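
    The effective-temperature discussion can be made concrete with a small numerical check on an assumed parabolic pellet profile; all temperatures and dimensions below are illustrative, not the paper's values.

```python
import numpy as np

Tc, Ts, R = 1200.0, 600.0, 0.41            # centerline/surface temps (K), radius (cm)
r = np.linspace(0.0, R, 100001)
T = Tc - (Tc - Ts) * (r / R) ** 2          # parabolic radial profile

# Volume weighting in a cylinder is ~ r dr (uniform grid, so dr cancels)
T_vol = (T * r).sum() / r.sum()
print(f"volume-averaged T: {T_vol:.1f} K")  # analytic value: (Tc + Ts)/2 = 900 K

# A few equi-volume rings retain radial shape information a single average loses
edges = R * np.sqrt(np.linspace(0.0, 1.0, 5))   # 4 rings of equal volume
for a, b in zip(edges[:-1], edges[1:]):
    m = (r >= a) & (r <= b)
    print(f"ring {a:.3f}-{b:.3f} cm: T = {(T[m] * r[m]).sum() / r[m].sum():.0f} K")
```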

  5. New approach to spectrum analysis. Iterative Monte Carlo simulations and fitting

    A novel spectrum analysis code which combines the Monte Carlo simulations with spectrum fitting is introduced. The shapes used in the fitting are obtained from the simulations. The code is developed especially to analyze complex alpha particle energy spectra - such as those obtained from non-processed air filters, swipe samples or isolated particles emitting alpha radiation. In addition to activities of the nuclides present in the sample, the code can provide source characterization. In particular, the code can be used to characterize samples of nuclear material, i.e. those containing fissionable isotopes such as 235U or 239Pu. In the present paper we illustrate the use of the code to identify and quantify alpha-particle emitting isotopes in a depleted U projectile found in Kosovo. (author)

  6. Contrast to Noise Ratio and Contrast Detail Analysis in Mammography: A Monte Carlo Study

    Metaxas, V.; Delis, H.; Kalogeropoulou, C.; Zampakis, P.; Panayiotakis, G.

    2015-09-01

    The mammographic spectrum is one of the major factors affecting image quality in mammography. In this study, a Monte Carlo (MC) simulation model was used to evaluate the image quality characteristics of various mammographic spectra. The anode/filter combinations evaluated were those traditionally used in mammography, for tube voltages between 26 and 30 kVp. The imaging performance was investigated in terms of Contrast to Noise Ratio (CNR) and Contrast Detail (CD) analysis, involving human observers and utilizing a mathematical CD phantom. Soft spectra provided the best characteristics in terms of both CNR and CD scores, while tube voltage had a limited effect. W-anode spectra filtered with k-edge filters demonstrated an improved performance that was sometimes better than that of softer x-ray spectra produced by Mo or Rh anodes. Regarding the filter material, k-edge filters showed superior performance compared to Al filters.

  7. Marathon: An Open Source Software Library for the Analysis of Markov-Chain Monte Carlo Algorithms

    Rechner, Steffen; Berger, Annabell

    2016-01-01

    We present the software library marathon, which is designed to support the analysis of sampling algorithms that are based on the Markov-Chain Monte Carlo principle. The main application of this library is the computation of properties of so-called state graphs, which represent the structure of Markov chains. We demonstrate applications and the usefulness of marathon by investigating the quality of several bounding methods on four well-known Markov chains for sampling perfect matchings and bipartite graphs. In a set of experiments, we compute the total mixing time and several of its bounds for a large number of input instances. We find that the upper bound gained by the famous canonical path method is often several orders of magnitude larger than the total mixing time and deteriorates with growing input size. In contrast, the spectral bound is found to be a precise approximation of the total mixing time. PMID:26824442

  8. Heat-Flux Analysis of Solar Furnace Using the Monte Carlo Ray-Tracing Method

    An understanding of the concentrated solar flux is critical for the analysis and design of solar-energy-utilization systems. The current work focuses on the development of an algorithm that uses the Monte Carlo ray-tracing method with excellent flexibility and expandability; the method considers both solar limb darkening and the surface slope error of reflectors in analyzing the solar flux. A comparison of the modeling results with measurements at the solar furnace of the Korea Institute of Energy Research (KIER) shows good agreement within a measurement uncertainty of 10%. The model evaluates the concentration performance of the KIER solar furnace as a tracking accuracy of 2 mrad and a maximum attainable concentration ratio of 4400 suns. Flux variations according to measurement position and flux distributions depending on acceptance angles provide detailed information for the design of chemical reactors or secondary concentrators

  9. Enrichment effects on CANDU-SEU spent fuel Monte Carlo shielding analysis

    Shielding analyses are an essential component of nuclear safety, their main task being the estimation of radiation doses in order to reduce them below specified limit values. According to IAEA data, more than 10 million packages containing radioactive materials are transported worldwide annually. All the problems arising from the assurance of safe radioactive material transport must be carefully settled. Over the last decade, a general trend toward raising the discharge fuel burnup has been recorded worldwide, both for operating reactors and future reactor projects. For CANDU-type reactors, the most attractive solution seems to be the utilization of SEU and RU fuels. The basic tasks accomplished by shielding calculations in a nuclear safety analysis consist in calculating dose rates to prevent any risks, both for personnel protection and for the impact on the environment, during spent fuel manipulation, transport and storage. The paper aims to study the effects induced by fuel enrichment variation on CANDU-SEU spent fuel photon dose rates in a Monte Carlo shielding analysis applied to spent fuel transport after a defined cooling period in the NPP pools. The fuel bundle designs considered here have 43 Zircaloy rods filled with SEU fuel pellets, the fuel having different enrichments in U-235. All the geometrical and material data related to the cask were considered according to the type B shipping cask model. After a photon source profile calculation using the ORIGEN-S code, the shielding calculations were performed with the Monte Carlo MORSE-SGC code, both codes being included in ORNL's SCALE 5 system. The photon dose rates at the shipping cask wall and in air, at different distances from the cask, have been estimated. Finally, a comparison of photon dose rates for different fuel enrichments has been performed. (author)

  10. On the feasibility of a homogenised multi-group Monte Carlo method in reactor analysis

    The use of homogenised multi-group cross sections to speed up Monte Carlo calculation has been studied to some extent, but the method is not widely implemented in modern calculation codes. This paper presents a calculation scheme in which homogenised material parameters are generated using the PSG continuous-energy Monte Carlo reactor physics code and used by MORA, a new full-core Monte Carlo code entirely based on homogenisation. The theory of homogenisation and its implementation in the Monte Carlo method are briefly introduced. The PSG-MORA calculation scheme is put into practice in two fundamentally different test cases: a small sodium-cooled fast reactor (JOYO) and a large PWR core. It is shown that the homogenisation results in a dramatic increase in efficiency. The results are in reasonably good agreement with reference PSG and MCNP5 calculations, although fission source convergence becomes a problem in the PWR test case. (authors)

  11. Experience with Monte Carlo variance reduction using adjoint solutions in HYPER neutronics analysis

    Variance reduction techniques using adjoint solutions are applied to the Monte Carlo calculation of the HYPER (HYbrid Power Extraction Reactor) core neutronics. The applied variance reduction techniques are geometry splitting and weight windows. The weight bounds and the cell importances needed for these techniques are generated from an adjoint discrete ordinates calculation with the two-dimensional TWODANT code. The variances of the flux distributions from the Monte Carlo calculations with these variance reduction techniques are compared with the results of standard Monte Carlo calculations. It is shown that, for the HYPER core neutronics, the variance reduction techniques using adjoint solutions result in a decrease in the efficiency of the Monte Carlo calculation

  12. Analysis of some splitting and roulette algorithms in shield calculations by the Monte Carlo method

    Different schemes of using the splitting and roulette methods in calculation of radiation transport in nuclear facility shields by the Monte Carlo method are considered. Efficiency of the considered schemes is estimated on the example of test calculations
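
    The two weight games named above can be sketched compactly; both preserve the expected total weight, which is the property that keeps the final estimate unbiased. The window bounds below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
W_LOW, W_HIGH, W_SURVIVE = 0.25, 2.0, 1.0

def apply_weight_window(weight):
    """Return a list of particle weights (possibly empty) after the game."""
    if weight > W_HIGH:                        # splitting: many lighter copies
        n = int(np.ceil(weight / W_SURVIVE))
        return [weight / n] * n
    if weight < W_LOW:                         # Russian roulette: kill or boost
        if rng.random() < weight / W_SURVIVE:
            return [W_SURVIVE]                 # survives with increased weight
        return []                              # killed
    return [weight]                            # inside the window: unchanged

# Expectation check: the mean total weight equals the input weight.
for w in (0.05, 0.6, 5.0):
    totals = [sum(apply_weight_window(w)) for _ in range(100000)]
    print(f"w={w}: E[total] ~ {np.mean(totals):.3f}")
```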

  13. The effect of load imbalances on the performance of Monte Carlo algorithms in LWR analysis

    A model is developed to predict the impact of particle load imbalances on the performance of domain-decomposed Monte Carlo neutron transport algorithms. Expressions for upper bound performance “penalties” are derived in terms of simple machine characteristics, material characterizations and initial particle distributions. The hope is that these relations can be used to evaluate tradeoffs among different memory decomposition strategies in next generation Monte Carlo codes, and perhaps as a metric for triggering particle redistribution in production codes

  14. Study of the quantitative analysis approach of maintenance by the Monte Carlo simulation method

    This study examines the quantitative evaluation of the maintenance activities of a nuclear power plant by the Monte Carlo simulation method. To this end, the concept of quantitative maintenance evaluation, whose examination was advanced in the Japan Society of Maintenology and the International Institute of Universality (IUU), is first summarized. A basic examination of the quantitative evaluation of maintenance was then carried out for a simple feedwater system by the Monte Carlo simulation method. (author)

  15. Analysis of possibility to apply new mathematical methods (R-function theory) in Monte Carlo simulation of complex geometry

    This analysis is part of the report on 'Implementation of the geometry module of the 05R code in another Monte Carlo code', chapter 6.0: establishment of future activity related to geometry in the Monte Carlo method. The introduction points out some problems in solving complex three-dimensional models, which induce the need for developing more efficient geometry modules in Monte Carlo calculations. The second part includes the formulation of the problem and the geometry module. Two fundamental questions to be solved are defined: (1) for a given point, it is necessary to determine the material region or boundary to which it belongs, and (2) for a given direction, all crossing points with material regions should be determined. The third part deals with the possible connection to Monte Carlo calculations for computer simulation of geometry objects. R-function theory enables the creation of a geometry module based on the same logic (complex regions are constructed by set operations on elementary regions) as existing geometry codes. R-functions can efficiently replace the functions of three-valued logic in all significant models. They are even more appropriate for application since three-valued logic is not typical for digital computers, which operate in two-valued logic. This shows that there is a need for work in this field. It is shown that there is a possibility to develop an interactive code for computer modeling of geometry objects in parallel with the development of the geometry module
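
    The R-function idea can be sketched concretely: implicit regions f >= 0 are combined with real-valued analogues of AND/OR, so point classification reduces to evaluating one ordinary function. The conjunction/disjunction formulas below are the standard R0 system; the shapes and sample points are assumptions.

```python
import numpy as np

def r_and(f, g):   # R-conjunction: >= 0 exactly where both f and g are >= 0
    return f + g - np.sqrt(f * f + g * g)

def r_or(f, g):    # R-disjunction: >= 0 exactly where f or g is >= 0
    return f + g + np.sqrt(f * f + g * g)

def sphere(x, y, z, r=1.0):
    return r * r - (x * x + y * y + z * z)

def slab(z, z0=-0.5, z1=0.5):
    return np.minimum(z - z0, z1 - z)          # could itself be an R-conjunction

# Region = (sphere AND slab): classify sample points with one function call
pts = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.9], [2.0, 0.0, 0.0]])
x, y, z = pts.T
f = r_and(sphere(x, y, z), slab(z))
for p, v in zip(pts, f):
    print(p, "inside" if v >= 0 else "outside")
```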

  16. Three-dimensional polarized Monte Carlo atmospheric radiative transfer model (3DMCPOL): 3D effects on polarized visible reflectances of a cirrus cloud

    A polarized atmospheric radiative transfer model for the computation of radiative transfer inside three-dimensional inhomogeneous media is described. This code is based on Monte Carlo methods and takes into account the polarization state of the light; the specific issues introduced by this consideration are presented. After validation of the model by comparisons with adding-doubling computations, examples of reflectances simulated for a synthetic inhomogeneous cirrus cloud are analyzed and compared with reflectances obtained under the classical assumption of a plane-parallel homogeneous cloud (1D approximation). As polarized reflectance is known to saturate at an optical thickness of about 3, one could expect it to be less sensitive to 3D effects than total reflectance. However, at high spatial resolution (80 m), values of polarized reflectance much higher than those predicted by the 1D theory can be reached. The study of the reflectances of a step cloud shows that these large values are the result of illumination and shadowing effects similar to those often observed in total reflectances. In addition, we show that at coarser spatial resolution (10 km), the so-called plane-parallel bias leads to a non-negligible overestimation of the polarized reflectances of about 7-8%.

  17. An Evaluation of the Adjoint Flux Using the Collision Probability Method for the Hybrid Monte Carlo Radiation Shielding Analysis

    The analog Monte Carlo method has low calculation efficiency in deep-penetration problems such as radiation shielding analysis. In order to increase the calculation efficiency, variance reduction techniques have been introduced and applied to shielding calculations, and the hybrid Monte Carlo method was introduced to optimize the variance reduction. To determine the parameters used in the hybrid Monte Carlo method, the adjoint flux should be calculated by deterministic methods. In this study, the collision probability method is applied to calculate the adjoint flux. The solution of the integral transport equation in the collision probability method is modified to calculate the adjoint flux approximately, even for complex and arbitrary geometries, and a C++ program was developed for this calculation. Using the calculated adjoint flux, the importance parameter of each cell in the shielding material is determined and used for variance reduction of the transport calculation. In order to evaluate the calculation efficiency of the proposed method, shielding calculations are performed with MCNPX 2.7. The results show that the proposed method can efficiently increase the FOM of the transport calculation. It is expected that the proposed method can be utilized to improve calculation efficiency in thick shielding calculations

  18. Analysis of polytype stability in PVT grown silicon carbide single crystal using competitive lattice model Monte Carlo simulations

    Guo, Hui-Jun; Huang, Wei; Liu, Xi; Gao, Pan; Zhuo, Shi-Yi; Xin, Jun; Yan, Cheng-Feng; Zheng, Yan-Qing; Yang, Jian-Hua; Shi, Er-Wei

    2014-09-01

    Polytype stability is very important for high quality SiC single crystal growth. However, the growth conditions for the 4H, 6H and 15R polytypes are similar, and the mechanism of polytype stability is not clear. The kinetics aspects, such as surface-step nucleation, are important. The kinetic Monte Carlo method is a common tool to study surface kinetics in crystal growth. However, the present lattice models for kinetic Monte Carlo simulations cannot solve the problem of the competitive growth of two or more lattice structures. In this study, a competitive lattice model was developed for kinetic Monte Carlo simulation of the competition growth of the 4H and 6H polytypes of SiC. The site positions are fixed at the perfect crystal lattice positions without any adjustment of the site positions. Surface steps on seeds and large ratios of diffusion/deposition have positive effects on the 4H polytype stability. The 3D polytype distribution in a physical vapor transport method grown SiC ingot showed that the facet preserved the 4H polytype even if the 6H polytype dominated the growth surface. The theoretical and experimental results of polytype growth in SiC suggest that retaining the step growth mode is an important factor to maintain a stable single 4H polytype during SiC growth.

  19. Analysis of polytype stability in PVT grown silicon carbide single crystal using competitive lattice model Monte Carlo simulations

    Hui-Jun Guo

    2014-09-01

    Polytype stability is very important for high quality SiC single crystal growth. However, the growth conditions for the 4H, 6H and 15R polytypes are similar, and the mechanism of polytype stability is not clear. The kinetics aspects, such as surface-step nucleation, are important. The kinetic Monte Carlo method is a common tool to study surface kinetics in crystal growth. However, the present lattice models for kinetic Monte Carlo simulations cannot solve the problem of the competitive growth of two or more lattice structures. In this study, a competitive lattice model was developed for kinetic Monte Carlo simulation of the competition growth of the 4H and 6H polytypes of SiC. The site positions are fixed at the perfect crystal lattice positions without any adjustment of the site positions. Surface steps on seeds and large ratios of diffusion/deposition have positive effects on the 4H polytype stability. The 3D polytype distribution in a physical vapor transport method grown SiC ingot showed that the facet preserved the 4H polytype even if the 6H polytype dominated the growth surface. The theoretical and experimental results of polytype growth in SiC suggest that retaining the step growth mode is an important factor to maintain a stable single 4H polytype during SiC growth.

  20. Derivation of landslide-triggering thresholds by Monte Carlo simulation and ROC analysis

    Peres, David Johnny; Cancelliere, Antonino

    2015-04-01

    Rainfall thresholds for landslide triggering are useful in early warning systems to be implemented in prone areas. Direct statistical analysis of historical records of rainfall and landslide data presents different shortcomings, typically due to incompleteness of landslide historical archives, imprecise knowledge of the triggering instants, unavailability of a rain gauge located near the landslides, etc. In this work, a Monte Carlo approach to derive and evaluate landslide-triggering thresholds is presented. Such an approach contributes to overcoming some of the above-mentioned shortcomings of direct empirical analysis of observed data. The proposed Monte Carlo framework consists in the combination of a stochastic rainfall model with a hydrological and slope-stability model. Specifically, 1000-year-long hourly synthetic rainfall and related slope stability factor of safety data are generated by coupling the Neyman-Scott rectangular pulses model with the TRIGRS unsaturated model (Baum et al., 2008) and a linear-reservoir water table recession model. Triggering and non-triggering rainfall events are then distinguished and analyzed to derive stochastic-input physically based thresholds that optimize the trade-off between correct and wrong predictions. For this purpose, receiver operating characteristic (ROC) indices are used (see the sketch below). An application of the method to the highly landslide-prone area of the Peloritani mountains in north-eastern Sicily (Italy) is carried out. A threshold for the area is derived and successfully validated by comparison with thresholds proposed by other researchers. Moreover, the uncertainty in threshold derivation due to variability of rainfall intensity within events and to antecedent rainfall is investigated. Results indicate that the variability of intensity during rainfall events significantly influences the rainfall intensity and duration associated with landslide triggering. A representation of rainfall as constant-intensity hyetographs globally leads to
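
    The threshold-selection step can be sketched as follows. A crude synthetic event generator stands in for the Neyman-Scott/TRIGRS chain, a power-law threshold I = a*D^b is assumed, and the Youden index (a common ROC criterion, assumed here since the abstract does not name its exact index) scores each candidate.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
D = rng.lognormal(np.log(12.0), 0.8, size=n)           # event duration (h)
I = rng.lognormal(np.log(4.0), 0.7, size=n)            # mean intensity (mm/h)
trig = I * np.sqrt(D) + rng.normal(0, 4, n) > 25.0     # assumed triggering rule

def youden(a, b):
    """TPR - FPR of the power-law threshold I > a * D**b."""
    pred = I > a * D ** b
    tpr = (pred & trig).sum() / max(trig.sum(), 1)
    fpr = (pred & ~trig).sum() / max((~trig).sum(), 1)
    return tpr - fpr

best = max(((a, b) for a in np.linspace(5, 60, 56)
                   for b in np.linspace(-1.0, 0.0, 21)),
           key=lambda ab: youden(*ab))
print("best threshold: I = %.1f * D^%.2f, Youden = %.3f" % (*best, youden(*best)))
```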

  1. Improving Markov Chain Monte Carlo algorithms in LISA Pathfinder Data Analysis

    The LISA Pathfinder mission (LPF) aims to test key technologies for the future LISA mission. The LISA Technology Package (LTP) on-board LPF will consist of an exhaustive suite of experiments and its outcome will be crucial for the future detection of gravitational waves. In order to achieve maximum sensitivity, we need to have an understanding of every instrument on-board and parametrize the properties of the underlying noise models. The Data Analysis team has developed algorithms for parameter estimation of the system. A very promising one implemented for LISA Pathfinder data analysis is the Markov Chain Monte Carlo. A series of experiments are going to take place during flight operations and each experiment is going to provide us with essential information for the next in the sequence. Therefore, it is a priority to optimize and improve our tools available for data analysis during the mission. Using a Bayesian framework analysis allows us to apply prior knowledge for each experiment, which means that we can efficiently use our prior estimates for the parameters, making the method more accurate and significantly faster. This, together with other algorithm improvements, will lead us to our main goal, which is no other than creating a robust and reliable tool for parameter estimation during the LPF mission.
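
    A bare-bones sketch of the Bayesian idea described above, with an assumed one-parameter model: the posterior combines a Gaussian prior (standing in for knowledge carried over from a previous experiment) with the current likelihood, sampled by random-walk Metropolis-Hastings.

```python
import numpy as np

rng = np.random.default_rng(0)
true_theta = 1.7
data = true_theta + 0.5 * rng.normal(size=50)          # noisy measurements

def log_post(theta, prior_mu=1.5, prior_sd=0.4):
    log_prior = -0.5 * ((theta - prior_mu) / prior_sd) ** 2
    log_like = -0.5 * np.sum(((data - theta) / 0.5) ** 2)
    return log_prior + log_like

theta, chain = 0.0, []
for _ in range(20000):
    prop = theta + 0.1 * rng.normal()                  # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop                                   # accept
    chain.append(theta)

burn = np.array(chain[5000:])                          # discard burn-in
print(f"posterior mean {burn.mean():.3f} +/- {burn.std():.3f}")
```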

  2. Coupled neutronic thermo-hydraulic analysis of full PWR core with Monte-Carlo based BGCore system

    Highlights:
    → A new thermal-hydraulic (TH) feedback module was integrated into the MCNP-based depletion system BGCore.
    → A coupled neutronic-TH analysis of a full PWR core was performed with the upgraded BGCore system.
    → The BGCore results were verified against those of the 3D nodal diffusion code DYN3D.
    → Very good agreement in major core operational parameters between the BGCore and DYN3D results was observed.
    Abstract: BGCore reactor analysis system was recently developed at Ben-Gurion University for calculating in-core fuel composition and spent fuel emissions following discharge. It couples the Monte Carlo transport code MCNP with an independently developed burnup and decay module SARAF. Most of the existing MCNP based depletion codes (e.g. MOCUP, Monteburns, MCODE) tally directly the one-group fluxes and reaction rates in order to prepare one-group cross sections necessary for the fuel depletion analysis. BGCore, on the other hand, uses a multi-group (MG) approach for generation of one-group cross-sections. This coupling approach significantly reduces the code execution time without compromising the accuracy of the results. Substantial reduction in the BGCore code execution time allows consideration of problems with a much higher degree of complexity, such as introduction of thermal hydraulic (TH) feedback into the calculation scheme. Recently, a simplified TH feedback module, THERMO, was developed and integrated into the BGCore system. To demonstrate the capabilities of the upgraded BGCore system, a coupled neutronic TH analysis of a full PWR core was performed. The BGCore results were compared with those of the state-of-the-art 3D deterministic nodal diffusion code DYN3D. Very good agreement in major core operational parameters including k-eff eigenvalue, axial and radial power profiles, and temperature distributions between the BGCore and DYN3D results was observed. This agreement confirms the consistency of the implementation of the TH feedback module

  3. Monte Carlo transport calculations and analysis for reactor pressure vessel neutron fluence

    The application of Monte Carlo methods for reactor pressure vessel (RPV) neutron fluence calculations is examined. As many commercial nuclear light water reactors approach the end of their design lifetime, it is of great consequence that reactor operators and regulators be able to characterize the structural integrity of the RPV accurately for financial reasons, as well as safety reasons, due to the possibility of plant life extensions. The Monte Carlo method, which offers explicit three-dimensional geometric representation and continuous energy and angular simulation, is well suited for this task. A model of the Three Mile Island unit 1 reactor is presented for determination of RPV fluence; Monte Carlo (MCNP) and deterministic (DORT) results are compared for this application; and numerous issues related to performing these calculations are examined. Synthesized three-dimensional deterministic models are observed to produce results that are comparable to those of Monte Carlo methods, provided the two methods utilize the same cross-section libraries. Continuous energy Monte Carlo methods are shown to predict more (15 to 20%) high-energy neutrons in the RPV than deterministic methods

  4. Statistical Modification Analysis of Helical Planetary Gears based on Response Surface Method and Monte Carlo Simulation

    ZHANG Jun; GUO Fan

    2015-01-01

    Tooth modification techniques are widely used in the gear industry to improve the meshing performance of gearings. However, few of the present studies on tooth modification consider the influence of inevitable random errors on gear modification effects. In order to investigate the effect of uncertainties in tooth modification amounts on the dynamic behavior of a helical planetary gear system, an analytical dynamic model including tooth modification parameters is proposed to carry out a deterministic analysis of the dynamics of a helical planetary gear. The dynamic meshing forces as well as the dynamic transmission errors of the sun-planet 1 gear pair with and without tooth modifications are computed and compared to show the effectiveness of tooth modifications on gear dynamics enhancement. Using the response surface method, a fitted regression model for the dynamic transmission error (DTE) fluctuations is established to quantify the relationship between modification amounts and DTE fluctuations. By shifting the inevitable random errors arising from the manufacturing and installation process to tooth modification amount variations, a statistical tooth modification model is developed and a methodology combining Monte Carlo simulation and the response surface method is presented for uncertainty analysis of tooth modifications. The uncertainty analysis reveals that the system's dynamic behaviors do not obey the normal distribution rule even though the design variables are normally distributed. In addition, a deterministic modification amount will not necessarily achieve an optimal result for both static and dynamic transmission error fluctuation reduction simultaneously.
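
    The combined strategy can be sketched on an assumed two-variable stand-in for the gear model: fit a quadratic response surface to a handful of "expensive" runs, then Monte Carlo sample the cheap surrogate with normally distributed inputs; the non-zero output skewness illustrates the non-normality noted above.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_model(x1, x2):                 # assumed stand-in for the dynamic model
    return 1.0 + 0.8 * x1 - 0.5 * x2 + 0.3 * x1 * x2 + 0.2 * x2**2

# Design points and response-surface fit (full quadratic in two variables)
x1, x2 = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
x1, x2 = x1.ravel(), x2.ravel()
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
beta, *_ = np.linalg.lstsq(X, expensive_model(x1, x2), rcond=None)

# Monte Carlo on the surrogate: normally distributed modification errors
s1 = rng.normal(0.2, 0.15, size=100000)
s2 = rng.normal(-0.1, 0.2, size=100000)
S = np.column_stack([np.ones_like(s1), s1, s2, s1 * s2, s1**2, s2**2])
y = S @ beta
print(f"mean {y.mean():.3f}, std {y.std():.3f}, skewness "
      f"{np.mean(((y - y.mean()) / y.std())**3):.3f}")
```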

  5. Application of the Monte Carlo thermal design analysis to evaluate uncertainties of the PWR core using the THALES subchannel code

    In order to maintain the safety of the reactor core, the minimum DNBR (Departure from Nucleate Boiling Ratio) in the PWR (Pressurized-Water Reactor) core must remain higher than the DNBR limit during Condition I and II events. Therefore, it is important to adequately evaluate the thermal performance of the PWR core. To realistically evaluate the relationship among the uncertainties and reduce the conservatism resulting from unknown phenomena, the Monte Carlo method is being used in many areas requiring a statistical approach. In particular, the Monte Carlo method is drawing attention as a method for the evaluation of the thermal performance of the PWR core. For the best-estimate evaluation of the uncertainties in the PWR core, KEPCO Nuclear Fuel (hereinafter KEPCO NF) has been developing a thermal design analysis based on the Monte Carlo method, for which various studies are conducted as follows. To generate the Gaussian random numbers, Gaussian random number generators are investigated; in this paper, the Box-Muller, Polar, GRAND, and Ziggurat methods are briefly reviewed. The random numbers are generated on the basis of the nominal value and uncertainty of each parameter. If the normal distribution is acceptable at the 5% significance level through normality tests, the random numbers are used for the Monte Carlo thermal design analysis. Using the subchannel code THALES (Thermal Hydraulic AnaLyzer for Enhanced Simulation of core) developed by KEPCO NF, subchannel analyses are carried out with the core operating parameters randomized, and then the DNBR distribution is derived. Finally, if the DNBR distribution is statistically combined with the uncertainties of the other parameters, the DNBRT distribution can be obtained. From the DNBRT distribution, the DNBR limit is determined to avoid DNB (Departure from Nucleate Boiling) with a 95% probability at a 95% confidence level. Through the example calculation, it is verified that
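
    One of the generators named above, the basic Box-Muller transform, fits in a few lines: pairs of uniform deviates are turned into independent standard normal deviates. The quick moment check below stands in for a formal normality test.

```python
import numpy as np

def box_muller(n, rng):
    """Return two arrays of n independent standard normal deviates."""
    u1 = 1.0 - rng.random(n)          # shift to (0, 1] so log(u1) is finite
    u2 = rng.random(n)
    r = np.sqrt(-2.0 * np.log(u1))
    return r * np.cos(2 * np.pi * u2), r * np.sin(2 * np.pi * u2)

rng = np.random.default_rng(0)
z1, z2 = box_muller(500000, rng)
z = np.concatenate([z1, z2])
# For a standard normal: mean 0, variance 1, skewness 0, kurtosis 3
print(f"mean {z.mean():+.4f}, var {z.var():.4f}, "
      f"skew {np.mean(z**3):+.4f}, kurtosis {np.mean(z**4):.4f}")
```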

  6. Numerical experiment on variance biases and Monte Carlo neutronics analysis with thermal hydraulic feedback

    The Monte Carlo (MC) power method based on a fixed number of fission sites at the beginning of each cycle is known to cause biases in the variances of the k-eigenvalue (keff) and fission reaction rate estimates. Because of the biases, the apparent variances of the keff and fission reaction rate estimates from a single MC run tend to be smaller or larger than the real variances of the corresponding quantities, depending on the degree of inter-generational correlation of the sample. We demonstrate this through a numerical experiment involving 100 independent MC runs for the neutronics analysis of a 17 x 17 fuel assembly of a pressurized water reactor (PWR). We also demonstrate through the numerical experiment that Gelbard and Prael's batch method and Ueki et al.'s covariance estimation method enable one to estimate the approximate real variances of the keff and fission reaction rate estimates from a single MC run. We then show that the use of the approximate real variances from the two bias-predicting methods, instead of the apparent variances, provides an efficient MC power iteration scheme that is required in the MC neutronics analysis of a real system to determine the pin power distribution consistent with the thermal hydraulic (TH) conditions of the individual pins of the system. (authors)
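
    The apparent-versus-real variance issue, and the batch remedy, can be demonstrated on an assumed AR(1) sequence standing in for correlated cycle-wise MC estimates; the naive variance of the mean is badly biased, while the variance of batch means is not.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho = 100000, 0.9
x = np.empty(n)
x[0] = rng.normal()
for i in range(1, n):                        # AR(1): strong cycle-to-cycle correlation
    x[i] = rho * x[i - 1] + np.sqrt(1 - rho**2) * rng.normal()

naive = x.var(ddof=1) / n                    # "apparent" variance of the mean
batches = x.reshape(200, -1).mean(axis=1)    # 200 batches of 500 cycles
batch = batches.var(ddof=1) / len(batches)   # batch estimate of the real variance

true = (1 + rho) / (1 - rho) / n             # asymptotic variance of the AR(1) mean
print(f"naive {naive:.2e}  batch {batch:.2e}  theory {true:.2e}")
```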

  7. Use of Monte Carlo Bootstrap Method in the Analysis of Sample Sufficiency for Radioecological Data

    There are operational difficulties in obtaining samples for radioecological studies. Population data may no longer be available during the study, and obtaining new samples may not be possible. These problems sometimes force the researcher to work with a small number of data points; it is then difficult to know whether the number of samples is sufficient to estimate the desired parameter, so the analysis of sample sufficiency is critical. Classical statistical methods are not well suited to analyzing sample sufficiency in radioecology, because naturally occurring radionuclides have a random distribution in soil and the data usually contain outliers and gaps with missing values. The present work applies the Monte Carlo bootstrap method to the analysis of sample sufficiency, with quantitative estimation of a single variable such as the specific activity of a natural radioisotope present in plants. The pseudo-population was a small sample of 14 values of the specific activity of 226Ra in forage palm (Opuntia spp.). A computational procedure to calculate the number of sample values was implemented in the R software. The resampling process with replacement took the 14 values of the original sample and produced 10,000 bootstrap samples for each round. The estimated average θ was then calculated for samples with 2, 5, 8, 11 and 14 values randomly selected. The results showed that if the researcher works with only 11 sample values, the average parameter will be within a confidence interval with 90% probability. (Author)
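
    A condensed sketch of the resampling procedure (in Python/NumPy rather than the paper's R, and with made-up activity values): subsamples of increasing size are bootstrapped, and the width of the confidence interval of the mean shows how it tightens.

```python
import numpy as np

rng = np.random.default_rng(0)
sample = np.array([12.1, 9.4, 15.3, 8.7, 11.0, 30.2, 10.5,
                   13.8, 9.9, 12.7, 7.6, 14.4, 11.9, 10.2])  # Bq/kg, assumed values

for m in (2, 5, 8, 11, 14):
    sub = rng.choice(sample, size=m, replace=False)          # subsample of size m
    boot_means = [rng.choice(sub, size=m, replace=True).mean()
                  for _ in range(10000)]                     # bootstrap resampling
    lo, hi = np.percentile(boot_means, [5, 95])              # 90% interval
    print(f"n={m:2d}: mean {np.mean(boot_means):6.2f}, "
          f"90% CI [{lo:6.2f}, {hi:6.2f}]")
```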

  8. Criticality qualification of a new Monte Carlo code for reactor core analysis

    In order to accurately simulate Accelerator Driven Systems (ADS), the utilization of at least two computational tools is necessary (the thermal-hydraulic problem is not considered in the frame of this work), namely: (a) a High Energy Physics (HEP) code system dealing with the 'accelerator part' of the installation, i.e. the computation of the spectrum, intensity and spatial distribution of the neutron source created by (p, n) reactions of a proton beam on a target, and (b) a neutronics code system handling the 'reactor part' of the installation, i.e. criticality calculations, neutron transport, fuel burn-up and fission product evolution. In the present work, a single computational tool, able to analyze an ADS in its entirety and also to perform core analysis for a conventional fission reactor, is proposed. The code is based on the well-qualified HEP code GEANT (version 3), transformed to perform criticality calculations. The performance of the code is tested against two qualified neutronics code systems, the diffusion/transport SCALE-CITATION code system and the Monte Carlo TRIPOLI code, in the case of a research reactor core analysis. A satisfactory agreement was exhibited by the three codes.

  9. Monte Carlo shielding comparative analysis applied to TRIGA HEU and LEU spent fuel transport

    The paper is a comparative study of the effects of LEU (low-enriched uranium) and HEU (highly enriched uranium) fuel utilization on the shielding analysis for spent fuel transport. A comparison against the measured data for HEU spent fuel, available from the last stage of spent fuel repatriation completed in the summer of 2008, is also presented. All geometrical and material data for the shipping cask were considered according to the approved NAC-LWT cask model. The shielding analysis estimates radiation doses at the shipping cask wall surface, and in air at 1 m and 2 m, respectively, from the cask, by means of the 3-dimensional Monte Carlo MORSE-SGC code. Before loading into the shipping cask, the TRIGA spent fuel source terms and spent fuel parameters were obtained by means of the ORIGEN-S code. Both codes are included in ORNL's SCALE 5 program package. 60Co radioactivity is important for HEU spent fuel, while the actinide contribution to total fuel radioactivity is low. For LEU spent fuel, 60Co radioactivity is insignificant and the actinide contribution to total fuel radioactivity is high. Dose rates for both HEU and LEU fuel contents are below regulatory limits, the LEU spent fuel photon dose rates being greater than the HEU ones. The comparison between HEU spent fuel theoretical and measured dose rates at selected measuring points shows good agreement, the calculated values being greater than the measured ones both at the cask wall surface (about 34% relative difference) and in air at 1 m distance from the cask surface (about 15% relative difference). (authors)

  10. Monte Carlo analysis of thermochromatography as a fast separation method for nuclear forensics

    Nuclear forensic science has become increasingly important for global nuclear security, and enhancing the timeliness of forensic analysis has been established as an important objective in the field. New, faster techniques must be developed to meet this objective. Current approaches for the analysis of minor actinides, fission products, and fuel-specific materials require time-consuming chemical separation coupled with measurement through either nuclear counting or mass spectrometry. These very sensitive measurement techniques can be hindered by impurities or incomplete separation in even the most painstaking chemical separations. High-temperature gas-phase separation or thermochromatography has been used in the past for the rapid separations in the study of newly created elements and as a basis for chemical classification of that element. This work examines the potential for rapid separation of gaseous species to be applied in nuclear forensic investigations. Monte Carlo modeling has been used to evaluate the potential utility of the thermochromatographic separation method, albeit this assessment is necessarily limited due to the lack of available experimental data for validation.

  11. Monte Carlo analysis of thermochromatography as a fast separation method for nuclear forensics

    Nuclear forensic science has become increasingly important for global nuclear security, and enhancing the timeliness of forensic analysis has been established as an important objective in the field. New, faster techniques must be developed to meet this objective. Current approaches for the analysis of minor actinides, fission products, and fuel-specific materials require time-consuming chemical separation coupled with measurement through either nuclear counting or mass spectrometry. These very sensitive measurement techniques can be hindered by impurities or incomplete separation in even the most painstaking chemical separations. High-temperature gas-phase separation or thermochromatography has been used in the past for the rapid separations in the study of newly created elements and as a basis for chemical classification of that element. This work examines the potential for rapid separation of gaseous species to be applied in nuclear forensic investigations. Monte Carlo modeling has been used to evaluate the potential utility of the thermochromatographic separation method, albeit this assessment is necessarily limited due to the lack of available experimental data for validation. (author)

  12. Criticality qualification of a new Monte Carlo code for reactor core analysis

    Catsaros, N. [Institute of Nuclear Technology - Radiation Protection, NCSR 'DEMOKRITOS', P.O. Box 60228, 15310 Aghia Paraskevi (Greece); Gaveau, B. [MAPS, Universite Paris VI, 4 Place Jussieu, 75005 Paris (France); Jaekel, M. [Laboratoire de Physique Theorique, Ecole Normale Superieure, 24 rue Lhomond, 75231 Paris (France); Maillard, J. [MAPS, Universite Paris VI, 4 Place Jussieu, 75005 Paris (France); CNRS-IDRIS, Bt 506, BP167, 91403 Orsay (France); CNRS-IN2P3, 3 rue Michel Ange, 75794 Paris (France); Maurel, G. [Faculte de Medecine, Universite Paris VI, 27 rue de Chaligny, 75012 Paris (France); MAPS, Universite Paris VI, 4 Place Jussieu, 75005 Paris (France); Savva, P., E-mail: savvapan@ipta.demokritos.g [Institute of Nuclear Technology - Radiation Protection, NCSR 'DEMOKRITOS', P.O. Box 60228, 15310 Aghia Paraskevi (Greece); Silva, J. [MAPS, Universite Paris VI, 4 Place Jussieu, 75005 Paris (France); Varvayanni, M.; Zisis, Th. [Institute of Nuclear Technology - Radiation Protection, NCSR 'DEMOKRITOS', P.O. Box 60228, 15310 Aghia Paraskevi (Greece)

    2009-11-15

    In order to accurately simulate Accelerator Driven Systems (ADS), the utilization of at least two computational tools is necessary (the thermal-hydraulic problem is not considered in the frame of this work), namely: (a) a High Energy Physics (HEP) code system dealing with the 'accelerator part' of the installation, i.e. the computation of the spectrum, intensity and spatial distribution of the neutron source created by (p, n) reactions of a proton beam on a target, and (b) a neutronics code system handling the 'reactor part' of the installation, i.e. criticality calculations, neutron transport, fuel burn-up and fission product evolution. In the present work, a single computational tool, able to analyze an ADS in its entirety and also to perform core analysis for a conventional fission reactor, is proposed. The code is based on the well-qualified HEP code GEANT (version 3), transformed to perform criticality calculations. The performance of the code is tested against two qualified neutronics code systems, the diffusion/transport SCALE-CITATION code system and the Monte Carlo TRIPOLI code, in the case of a research reactor core analysis. A satisfactory agreement was exhibited by the three codes.

  13. MULTI-KENO: a Monte Carlo code for criticality safety analysis

    By modifying the Monte Carlo code KENO-IV, the MULTI-KENO code was developed for criticality safety analysis. The following functions were added to the code: (1) to divide a system into many sub-systems named super boxes, where the size of the box types in each super box can be selected independently, (2) to output a graphical view of a system for examining geometrical input data, (3) to solve fixed source problems, (4) to permit intersection of core boundaries and inner geometries, and (5) to output an ANISN-type neutron balance table. With function (1), many cases that previously required the general geometry option of KENO-IV can be treated as box-type geometry. In such cases, the input data are simpler and the required computer time shorter than with KENO-IV. The code is now available for the FACOM-M200 computer and the CDC 6600 computer. This report is a computer code manual for MULTI-KENO. (author)

  14. Markov chain Monte Carlo analysis to constrain dark matter properties with directional detection

    Directional detection is a promising dark matter search strategy. Indeed, weakly interacting massive particle (WIMP)-induced recoils would present a direction dependence toward the Cygnus constellation, while background-induced recoils exhibit an isotropic distribution in the Galactic rest frame. Taking advantage of these characteristic features, and even in the presence of a sizeable background, it has recently been shown that data from forthcoming directional detectors could lead either to a competitive exclusion or to a conclusive discovery, depending on the value of the WIMP-nucleon cross section. However, it is possible to further exploit these upcoming data by using the strong dependence of the WIMP signal on the WIMP mass and the local WIMP velocity distribution. Using a Markov chain Monte Carlo analysis of recoil events, we show for the first time the possibility to constrain the unknown WIMP parameters, both from particle physics (mass and cross section) and the Galactic halo (velocity dispersion along the three axes), leading to an identification of non-baryonic dark matter.

  15. Monte Carlo burnup analysis code development and application to an incore thermionic space nuclear power system

    In the design of the incore thermionic reactor system developed under the Advanced Thermionic Initiative (ATI), the fuel is highly enriched uranium dioxide and the moderating medium is zirconium hydride. The traditional burnup and fuel depletion analysis codes have been found to be inadequate for these calculations, largely because of the materials and geometry modeled, and because the neutron spectra assumed for codes such as LEOPARD and ORIGEN do not even closely fit that of a small thermal reactor using ZrH as moderator. More sophisticated codes, such as the transport lattice code WIMS, often lack some materials, such as ZrH. Thus a new method that could accurately calculate the neutron spectrum and the appropriate reaction rates within the fuel element is needed. The method developed combines the accuracy of the Monte Carlo Neutron/Photon (MCNP) method, used to calculate reaction rates for the important isotopes, with a time-dependent depletion routine that calculates the temporal effects on isotope concentrations. This effort required the modification of MCNP itself to perform the additional task of accomplishing burnup calculations. The modified version, called MCNPBURN, evolved into a general dual-purpose code which can be used for standard calculations as well as for burn-up
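
    The depletion step that such a code must perform can be sketched with a toy three-nuclide chain; the one-group flux and cross sections below are assumed constants where MCNPBURN would instead tally them from the transport solution.

```python
import numpy as np
from scipy.linalg import expm

phi = 3.0e13                  # one-group flux (n/cm^2/s), assumed
barn = 1.0e-24
sig_f_235, sig_c_235 = 400.0 * barn, 80.0 * barn   # illustrative cross sections
sig_c_238 = 2.0 * barn
lam_fp = 1.0e-9               # lumped fission-product decay constant (1/s), assumed

# State vector: [U-235, U-238, lumped fission products]
A = np.array([
    [-(sig_f_235 + sig_c_235) * phi, 0.0,              0.0],
    [0.0,                            -sig_c_238 * phi, 0.0],
    [2.0 * sig_f_235 * phi,          0.0,              -lam_fp],  # 2 FP per fission
])

N = np.array([1.0e21, 2.0e22, 0.0])     # initial densities (atoms/cm^3), assumed
dt = 30 * 86400.0                        # 30-day burn step
for step in range(3):
    N = expm(A * dt) @ N                 # exact solution of dN/dt = A N over dt
    print(f"after {30 * (step + 1):3d} days: N = {N}")
```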

  16. Monte Carlo analysis of the MEGA microlensing events towards M31

    Ingrosso, G; De Paolis, F; Jetzer, P; Nucita, A A; Strafella, F; Jetzer, Ph.

    2005-01-01

    We perform an analytical study and a Monte Carlo (MC) analysis of the main features of microlensing events in pixel lensing observations towards M31. Our main aim is to investigate the lens nature and location of the 14 candidate events found by the MEGA collaboration. Assuming a reference model for the mass distribution in M31 and the standard model for our galaxy, we estimate the MACHO-to-self lensing probability and the event time duration towards M31. Reproducing the MEGA observing conditions, as a result we get the MC event number density distribution as a function of the event full-width half-maximum duration t_1/2 and the magnitude at maximum R_max. For a MACHO mass of 0.5 solar masses we find typical values of t_1/2 ≃ 20 days and R_max ≃ 22, for both MACHO-lensing and self-lensing events occurring beyond about 10 arcminutes from the M31 center. A comparison of the observed features (t_1/2 and R_max) with our MC results shows that for a MAC...

  17. Benchmark analysis of TRIGA mark II reactivity experiment using a continuous energy Monte Carlo code MCNP

    The benchmark analysis of reactivity experiments in the TRIGA-II core at the Musashi Institute of Technology Research Reactor (Musashi reactor; 100 kW) was performed by a three-dimensional continuous-energy Monte Carlo code MCNP4A. The reactivity worth and integral reactivity curves of the control rods as well as the reactivity worth distributions of fuel and graphite elements were used in the validation process of the physical model and neutron cross section data from the ENDF/B-V evaluation. The calculated values of integral reactivity curves of the control rods were in agreement with the experimental data obtained by the period method. The integral worth measured by the rod drop method was also consistent with the calculation. The calculated values of the fuel and the graphite element worth distributions were consistent with the measured ones within the statistical error estimates. These results showed that the exact core configuration including the control rod positions to reproduce the fission source distribution in the experiment must be introduced into the calculation core for obtaining the precise solution. It can be concluded that our simulation model of the TRIGA-II core is precise enough to reproduce the control rod worth, fuel and graphite elements reactivity worth distributions. (author)

  18. The use of Monte Carlo analysis for exposure assessment of an estuarine food web

    Iannuzzi, T.J.; Shear, N.M.; Harrington, N.W.; Henning, M.H. [McLaren/Hart Environmental Engineering Corp., Portland, ME (United States). ChemRisk Div.

    1995-12-31

    Despite apparent agreement within the scientific community that probabilistic methods of analysis offer substantially more informative exposure predictions than those offered by the traditional point estimate approach, few risk assessments conducted or approved by state and federal regulatory agencies have used probabilistic methods. Among the likely deterrents to application of probabilistic methods to ecological risk assessment is the absence of "standard" data distributions that are considered applicable to most conditions for a given ecological receptor. Indeed, point estimates of ecological exposure factor values for a limited number of wildlife receptors have only recently been published. The Monte Carlo method of probabilistic modeling has received increasing support as a promising technique for characterizing uncertainty and variation in estimates of exposure to environmental contaminants. An evaluation of literature on the behavior, physiology, and ecology of estuarine organisms was conducted in order to identify those variables that most strongly influence uptake of xenobiotic chemicals from sediments, water and food sources. The ranges, central tendencies, and distributions of several key parameter values for polychaetes (Nereis sp.), mummichog (Fundulus heteroclitus), blue crab (Callinectes sapidus), and striped bass (Morone saxatilis) in east coast estuaries were identified. Understanding the variation in such factors, which include feeding rate, growth rate, feeding range, excretion rate, respiration rate, body weight, lipid content, food assimilation efficiency, and chemical assimilation efficiency, is critical to understanding the mechanisms that control the uptake of xenobiotic chemicals in aquatic organisms, and to the ability to estimate bioaccumulation from chemical exposures in the aquatic environment.

  19. A Monte Carlo/response surface strategy for sensitivity analysis: application to a dynamic model of vegetative plant growth

    Lim, J. T.; Gold, H. J.; Wilkerson, G. G.; Raper, C. D., Jr. (Principal Investigator)

    1989-01-01

    We describe the application of a strategy for conducting a sensitivity analysis for a complex dynamic model. The procedure involves preliminary screening of parameter sensitivities by numerical estimation of linear sensitivity coefficients, followed by generation of a response surface based on Monte Carlo simulation. Application is to a physiological model of the vegetative growth of soybean plants. The analysis provides insights as to the relative importance of certain physiological processes in controlling plant growth. Advantages and disadvantages of the strategy are discussed.

  20. Monte Carlo simulations of GeoPET experiments: 3D images of tracer distributions (18F, 124I and 58Co) in Opalinus clay, anhydrite and quartz

    Zakhnini, Abdelhamid; Kulenkampff, Johannes; Sauerzapf, Sophie; Pietrzyk, Uwe; Lippmann-Pipke, Johanna

    2013-08-01

    Understanding conservative fluid flow and reactive tracer transport in soils and rock formations requires quantitative transport visualization methods in 3D+t. After a decade of research and development we established GeoPET as a non-destructive method with unrivalled sensitivity and selectivity and with adequate spatial and temporal resolution, by applying Positron Emission Tomography (PET), a nuclear medicine imaging method, to dense rock material. Requirements for reaching the physical limit of image resolution of nearly 1 mm are (a) a high-resolution PET camera, like our ClearPET scanner (Raytest), and (b) appropriate correction methods for scatter and attenuation of 511 keV photons in the dense geological material. The latter are by far more significant in dense geological material than in human and small animal body tissue (water). Here we present data from Monte Carlo simulations (MCS) reflecting selected GeoPET experiments. The MCS consider all nuclear physical processes involved in the measurement with the ClearPET system and allow us to quantify the sensitivity of the method and the scatter fractions in geological media as a function of material (quartz, Opalinus clay and anhydrite compared to water), PET isotope (18F, 58Co and 124I), and geometric system parameters. The synthetic data sets obtained by MCS are the basis for detailed performance assessment studies allowing for image quality improvements. A scatter correction method is applied exemplarily by subtracting projections of simulated scattered coincidences from experimental data sets prior to image reconstruction with an iterative reconstruction process.

  1. Monte Carlo estimation of scatter effects on quantitative myocardial blood flow and perfusable tissue fraction using 3D-PET and 15O-water

    Hirano, Yoshiyuki; Koshino, Kazuhiro; Watabe, Hiroshi; Fukushima, Kazuhito; Iida, Hidehiro

    2012-11-01

    In clinical cardiac positron emission tomography using 15O-water, significant tracer accumulation is observed not only in the heart but also in the liver and lung, which are partially outside the field-of-view. In this work, we investigated the effects of scatter on quantitative myocardial blood flow (MBF) and perfusable tissue fraction (PTF) by a precise Monte Carlo simulation (Geant4) and a numerical human model. We assigned activities to the heart, liver, and lung of the human model with varying ratios of organ activities according to an experimental time activity curve, and created dynamic sinograms. The sinogram data were reconstructed by filtered backprojection. By comparing a scatter-corrected image (SC) with a true image (TRUE), we evaluated the accuracy of the scatter correction. TRUE was reconstructed using a scatter-eliminated sinogram, which can be obtained only in simulations. A scatter-uncorrected image (W/O SC) and an attenuation-uncorrected image (W/O AC) were also reconstructed. Finally, we calculated MBF and PTF with a single tissue-compartment model for the four types of images. As a result, scatter was corrected accurately, and the MBFs derived from all types of images were consistent with the MBF obtained from TRUE. Meanwhile, only the PTF of the SC was in agreement with the PTF of TRUE. From the simulation results, we concluded that quantitative MBF is less affected by scatter and absorption in 3D-PET using 15O-water. However, scatter correction is essential for accurate PTF.

  2. TH-C-12A-08: New Compact 10 MV S-Band Linear Accelerator: 3D Finite-Element Design and Monte Carlo Dose Simulations

    Purpose: To design a new compact S-band linac waveguide capable of producing a 10 MV x-ray beam while maintaining the length (27.5 cm) of current 6 MV waveguides, allowing higher x-ray energies to be used in our linac-MRI systems with the same footprint. Methods: The finite element software COMSOL Multiphysics was used to design an accelerator cavity matching one published in an experimental breakdown study, to ensure that the modeled cavities do not exceed the published threshold electric fields. This cavity was used as the basis for designing an accelerator waveguide, in which each cavity of the full waveguide was tuned to resonate at 2.997 GHz by adjusting the cavity diameter. The RF field solution within the waveguide was calculated and, together with an electron-gun phase space generated using Opera3D/SCALA, was input into the electron tracking software PARMELA to compute the electron phase space striking the x-ray target. This target phase space was then used in BEAM Monte Carlo simulations to generate percent depth dose curves for the new linac, which were in turn used to re-optimize the waveguide geometry. Results: The shunt impedance, Q-factor, and peak-to-mean electric field ratio were matched to those published for the breakdown study to within 0.1% error. After tuning the full waveguide, the peak surface fields are calculated to be 207 MV/m, 13% below the breakdown threshold. The simulated beam has a d-max depth of 2.42 cm and a D10/20 value of 1.59, compared to 2.45 cm and 1.59, respectively, for a simulated Varian 10 MV linac, with a bremsstrahlung production efficiency 20% lower than that of the simulated Varian 10 MV linac. Conclusion: This work demonstrates the design of a functional 27.5 cm waveguide producing 10 MV photons with characteristics similar to a Varian 10 MV linac

  3. Performance analysis based on a Monte Carlo simulation of a liquid xenon PET detector

    Liquid xenon is a very attractive medium for position-sensitive gamma-ray detectors in a very wide range of applications, notably in medical radionuclide imaging. Recently, the authors proposed a liquid xenon detector for positron emission tomography (PET). In this paper, some aspects of the performance of a liquid xenon PET detector prototype were studied by means of Monte Carlo simulation

  4. Analysis of the distribution of X-ray characteristic production using the Monte Carlo methods

    The Monte Carlo method has been applied to the simulation of electron trajectories in a bulk sample, and hence to the distribution of signals produced in an electron microprobe. Results for the function φ(ρz) are compared with experimental data. Some conclusions are drawn with respect to the parameters involved in the Gaussian model. (Author)

  5. Exploring Monte Carlo methods

    Dunn, William L

    2012-01-01

    Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. Among the examples treated is the famous "Buffon's needle problem."
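
    The Buffon's needle problem mentioned at the end of the abstract is the canonical introductory Monte Carlo exercise; a short, self-contained estimate of pi (needle length and line spacing chosen arbitrarily here, with L <= d):

      import numpy as np

      rng = np.random.default_rng(1)
      n = 1_000_000
      L, d = 1.0, 2.0  # needle length L, line spacing d (short-needle case, L <= d)

      x = rng.uniform(0, d / 2, n)           # distance from needle centre to nearest line
      theta = rng.uniform(0, np.pi / 2, n)   # acute angle between needle and the lines

      p_cross = np.mean(x <= (L / 2) * np.sin(theta))
      # P(cross) = 2L / (pi * d), hence pi is estimated by 2L / (d * P)
      print("pi estimate:", 2 * L / (d * p_cross))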

  6. Statistical analysis for discrimination of prompt gamma ray peak induced by high energy neutron: Monte Carlo simulation study

    Kim, Moo-Sub; Jung, Joo-Young; Suh, Tae Suk [College of Medicine, Catholic University of Korea, Seoul (Korea, Republic of)

    2015-05-15

    The purpose of this research was a statistical analysis for the discrimination of prompt gamma ray peaks induced by 14.1 MeV neutrons from spectra using Monte Carlo simulation. For the simulation, the information of eighteen detector materials was used to simulate spectra from the neutron capture reaction. To the best of our knowledge, the results in this study are the first reported data regarding the discrimination of high energy prompt gamma ray peaks over so many cases (eighteen detector materials and nine prompt gamma ray peaks). Reliable data based on the Monte Carlo method and statistical methods under identical conditions were deduced. Our results are important data for PGAA studies of peak detection in actual experiments.

  8. Status of software for PGNAA bulk analysis by the Monte Carlo - Library Least-Squares (MCLLS) approach

    The Center for Engineering Applications of Radioisotopes (CEAR) has been working for about ten years on the Monte Carlo - Library Least-Squares (MCLLS) approach for treating the nonlinear inverse analysis problem of PGNAA bulk analysis. This approach consists essentially of using Monte Carlo simulation to generate the libraries of all the elements to be analyzed plus any other required libraries. These libraries are then used in the linear Library Least-Squares (LLS) approach with unknown sample spectra to analyze for all elements in the sample. The other libraries include all sources of background, which include: (1) gamma-rays emitted by the neutron source, (2) prompt gamma-rays produced in the analyzer construction materials, (3) natural gamma-rays from K-40 and the uranium and thorium decay chains, and (4) prompt and decay gamma-rays produced in the NaI detector by neutron activation. A number of unforeseen problems have arisen in pursuing this approach, including: (1) the neutron activation of the most common detector (NaI) used in bulk analysis PGNAA systems, (2) the nonlinearity of this detector, and (3) difficulties in obtaining detector response functions for this (and other) detectors. These problems have been addressed by CEAR recently and have either been solved or are almost solved at the present time. Development of the Monte Carlo simulation for all of the libraries has been finished except for the prompt gamma-ray library from the activation of the NaI detector. The treatment of the coincidence schemes for Na and particularly I must first be determined to complete the Monte Carlo simulation of this last library. (author)
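
    The linear Library Least-Squares step at the heart of MCLLS can be illustrated with synthetic data: hypothetical single-element library spectra (Gaussian peaks at invented energies, not real prompt gamma lines) are mixed, Poisson noise is added, and the element weights are recovered by least squares.

      import numpy as np

      rng = np.random.default_rng(2)
      E = np.linspace(0.0, 10.0, 512)  # energy axis, MeV

      def peak(mu, sigma=0.15):
          return np.exp(-0.5 * ((E - mu) / sigma) ** 2)

      # Hypothetical libraries (Monte Carlo-generated in the real method):
      libs = {
          "elem A": peak(2.2),
          "elem B": peak(7.6) + 0.8 * peak(7.9),
          "elem C": peak(3.5) + 0.7 * peak(4.9),
          "background": 0.2 * np.exp(-E / 3.0),  # source gammas, structural prompt gammas, etc.
      }
      A = np.column_stack(list(libs.values()))

      true_w = np.array([5.0, 2.0, 3.0, 10.0])
      measured = rng.poisson(A @ true_w * 100) / 100.0  # noisy "unknown sample" spectrum

      # Linear library least-squares: weights minimizing ||A w - measured||
      w, *_ = np.linalg.lstsq(A, measured, rcond=None)
      for name, wi in zip(libs, w):
          print(f"{name}: {wi:.2f}")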

  9. Criticality analysis of thermal reactors for two energy groups applying Monte Carlo and neutron Albedo method

    The Albedo method applied to criticality calculations of nuclear reactors is characterized by following the neutron currents, allowing detailed analyses of the physical phenomena involved in the interaction of neutrons with the core-reflector set through the determination of the probabilities of reflection, absorption, and transmission, and hence detailed assessments of the variation of the effective neutron multiplication factor, keff. In the present work, motivated by the excellent results presented in dissertations applying the method to thermal reactors and shieldings, the Albedo methodology is described for the criticality analysis of thermal reactors using two energy groups, admitting variable core coefficients for each re-entrant current. Using the Monte Carlo code KENO IV, the relation between the total fraction of neutrons absorbed in the reactor core and the fraction of neutrons that never entered the reflector but were absorbed in the core was analyzed. As references for the comparison and analysis of the results obtained by the Albedo method, the one-dimensional deterministic code ANISN (ANIsotropic SN transport code) and the Diffusion method were used. The keff results determined by the Albedo method for the type of reactor analyzed showed excellent agreement, with relative errors smaller than 0.78% between the Albedo method and ANISN, and smaller than 0.35% with respect to the Diffusion method, demonstrating the effectiveness of the Albedo method applied to criticality analysis. The ease of application, simplicity and clarity of the Albedo method make it a valuable instrument for neutronic calculations applied to nonmultiplying and multiplying media. (author)

  10. Analysis of uncertainty quantification method by comparing Monte-Carlo method and Wilks' formula

    An analysis of the uncertainty quantification related to LBLOCA using Monte-Carlo calculation has been performed and compared with the tolerance level determined by Wilks' formula. The uncertainty range and distribution of each input parameter associated with the LOCA phenomena were determined based on previous PIRT results and documentation during the BEMUSE project. Calculations were conducted for 3,500 cases within a 2-week CPU time on a 14-PC cluster system. The Monte-Carlo exercise shows that the 95% upper limit PCT value can be obtained well, with a 95% confidence level, using Wilks' formula, although a 5% risk of PCT under-prediction must be endured. The results also show that the statistical fluctuation of the limit value using Wilks' first order is as large as the uncertainty value itself. It is therefore desirable to increase the order of Wilks' formula beyond the second order to estimate a reliable safety margin for the design features. It is also shown that, with ever increasing computational capability, the Monte-Carlo method is accessible for nuclear power plant safety analysis within a realistic time frame.
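
    For reference, the one-sided Wilks sample sizes discussed in such studies follow from a binomial order-statistics argument; the sketch below reproduces the classic 59-run (first-order) and 93-run (second-order) 95%/95% values.

      import math

      def wilks_n(coverage=0.95, confidence=0.95, order=1):
          """Smallest n such that the order-th largest of n runs bounds the
          'coverage' quantile with probability 'confidence' (one-sided)."""
          n = order
          while True:
              conf = 1.0 - sum(math.comb(n, j) * (1 - coverage) ** j * coverage ** (n - j)
                               for j in range(order))
              if conf >= confidence:
                  return n
              n += 1

      print(wilks_n(order=1))  # 59: classic first-order 95%/95% sample size
      print(wilks_n(order=2))  # 93: second-order bound, less statistical fluctuation

    With 3,500 cases, as in the record, the 95th percentile itself is pinned down far more tightly than by the 59-run bound, which is precisely the fluctuation issue the authors raise.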

  11. Shielding analysis of proton therapy accelerators: a demonstration using Monte Carlo-generated source terms and attenuation lengths.

    Lai, Bo-Lun; Sheu, Rong-Jiun; Lin, Uei-Tyng

    2015-05-01

    Monte Carlo simulations are generally considered the most accurate method for complex accelerator shielding analysis. Simplified models based on point-source line-of-sight approximation are often preferable in practice because they are intuitive and easy to use. A set of shielding data, including source terms and attenuation lengths for several common targets (iron, graphite, tissue, and copper) and shielding materials (concrete, iron, and lead) were generated by performing Monte Carlo simulations for 100-300 MeV protons. Possible applications and a proper use of the data set were demonstrated through a practical case study, in which shielding analysis on a typical proton treatment room was conducted. A thorough and consistent comparison between the predictions of our point-source line-of-sight model and those obtained by Monte Carlo simulations for a 360° dose distribution around the room perimeter showed that the data set can yield fairly accurate or conservative estimates for the transmitted doses, except for those near the maze exit. In addition, this study demonstrated that appropriate coupling between the generated source term and empirical formulae for radiation streaming can be used to predict a reasonable dose distribution along the maze. This case study proved the effectiveness and advantage of applying the data set to a quick shielding design and dose evaluation for proton therapy accelerators. PMID:25811254
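
    For each scoring point, the point-source line-of-sight model named above reduces to an inverse-square law with exponential attenuation along the slant path through the shield. A sketch with assumed (not tabulated) source-term and attenuation-length values:

      import math

      # Illustrative values only; the paper tabulates Monte Carlo-generated
      # source terms and attenuation lengths per target/shield combination.
      H0  = 2.0e-15  # source term, Sv*m^2 per proton (assumed)
      lam = 120.0    # attenuation length in concrete, g/cm^2 (assumed)
      rho = 2.35     # concrete density, g/cm^3

      def dose_rate(r_m, t_cm, protons_per_s):
          """H = H0 * exp(-rho * t / lam) / r^2, scaled by the source strength."""
          return H0 * math.exp(-rho * t_cm / lam) * protons_per_s / r_m ** 2

      # Example: 2 m concrete wall, scoring point 5 m from the target, 1e11 p/s
      print(f"{dose_rate(5.0, 200.0, 1e11) * 3600:.3e} Sv/h")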

  12. Performance Analysis of Korean Liquid metal type TBM based on Monte Carlo code

    The objective of this project is to analyze the nuclear performance of the Korean HCML (Helium Cooled Molten Lithium) TBM (Test Blanket Module) which will be installed in ITER (International Thermonuclear Experimental Reactor). The project analyzes the neutronic design and nuclear performance of the Korean HCML ITER TBM through transport calculations with MCCARD. In detail, numerical experiments were conducted to analyze the neutronic design of the Korean HCML TBM and the DEMO fusion blanket and to improve the nuclear performance. The results of the numerical experiments performed in this project will be further utilized for a design optimization of the Korean HCML TBM. Monte Carlo transport calculations evaluating the TBR (Tritium Breeding Ratio) and the EMF (Energy Multiplication Factor) were conducted to analyze the nuclear performance of the Korean HCML TBM. The activation characteristics and shielding performance of the Korean HCML TBM were analyzed using ORIGEN and MCCARD. We proposed neutronic methodologies for analyzing the nuclear characteristics of the fusion blanket, which were applied to the blanket analysis of a DEMO fusion reactor. In the results, the TBR of the Korean HCML ITER TBM is 0.1352 and the EMF is 1.362. Taking into account the limitation on the Li amount in an ITER TBM, it is expected that the tritium self-sufficiency condition can be satisfied through a change of the Li quantity and enrichment. In the activation and shielding analysis, the activity drops to 1.5% of the initial value and the decay heat drops to 0.02% of the initial amount 10 years after plasma shutdown

  13. Uncertainty analysis in the simulation of an HPGe detector using the Monte Carlo Code MCNP5

    A gamma spectrometer including an HPGe detector is commonly used for environmental radioactivity measurements. Many works have focused on the simulation of the HPGe detector using Monte Carlo codes such as MCNP5. However, the simulation of this kind of detector presents important difficulties due to the lack of information from manufacturers and due to the loss of intrinsic properties in aging detectors. Some parameters such as the active volume or the Ge dead layer thickness are often unknown and are estimated during simulations. In this work, a detailed model of an HPGe detector and of a petri dish containing a certified gamma source has been developed. The certified gamma source contains nuclides covering the energy range between 50 and 1800 keV. As a result of the simulation, the Pulse Height Distribution (PHD) is obtained and the efficiency curve can be calculated from the net peak areas, taking into account the certified activity of the source. In order to avoid errors in the net area calculation, the simulated PHD is treated using the GammaVision software. Furthermore, it is proposed to use the Noether-Wilks formula to carry out an uncertainty analysis of the model, with the main goal of determining the efficiency curve of this detector and its associated uncertainty. The uncertainty analysis has been focused on the dead layer thickness at different positions of the crystal. Results confirm the important role of the dead layer thickness in the low energy range of the efficiency curve. In the high energy range (from 300 to 1800 keV) the main contribution to the absolute uncertainty is due to variations in the active volume. (author)

  14. Monte Carlo shielding comparative analysis applied to TRIGA HEU and LEU spent fuel transport

    Margeanu, C. A.; Iorgulis, C. [Reactor Physics, Nuclear Fuel Performances and Nuclear Safety Department, Institute for Nuclear Research Pitesti, P.O Box 78, Pitesti (Romania); Margeanu, S. [Radiation Protection Department, Institute for Nuclear Research Pitesti, Pitesti (Romania); Barbos, D. [TRIGA Research Reactor Department, Institute for Nuclear Research Pitesti, Pitesti (Romania)

    2009-07-01

    The paper is a comparative study of the effects of LEU (low enriched uranium) and HEU (highly enriched uranium) fuel utilization on the shielding analysis during spent fuel transport. A comparison against the measured data for HEU spent fuel, available from the last stage of spent fuel repatriation fulfilled in the summer of 2008, is also presented. All geometrical and material data for the shipping cask were considered according to the approved NAC-LWT cask model. The shielding analysis estimates radiation doses at the shipping cask wall surface, and in air at 1 m and 2 m, respectively, from the cask, by means of the 3-dimensional Monte Carlo MORSE-SGC code. Before loading into the shipping cask, TRIGA spent fuel source terms and spent fuel parameters were obtained by means of the ORIGEN-S code. Both codes are included in ORNL's SCALE 5 program package. 60Co radioactivity is important for HEU spent fuel, while the actinide contribution to total fuel radioactivity is low. For LEU spent fuel, 60Co radioactivity is insignificant and the actinide contribution to total fuel radioactivity is high. Dose rates for both HEU and LEU fuel contents are below regulatory limits, the LEU spent fuel photon dose rates being greater than the HEU ones. The comparison between theoretical and measured HEU spent fuel dose rates at selected measuring points shows good agreement, the calculated values being greater than the measured ones both at the cask wall surface (about 34% relative difference) and in air at 1 m distance from the cask surface (about 15% relative difference). (authors)

  15. In-silico analysis on biofabricating vascular networks using kinetic Monte Carlo simulations

    We present a computational modeling approach to study the fusion of multicellular aggregate systems in a novel scaffold-less biofabrication process known as 'bioprinting'. In this novel technology, live multicellular aggregates are used as fundamental building blocks to make tissues or organs (collectively known as bio-constructs) via the layer-by-layer deposition technique or other methods; the printed bio-constructs, embedded in maturogens consisting of nutrient-rich bio-compatible hydrogels, are then placed in bioreactors to undergo the cellular aggregate fusion process to form the desired functional bio-structures. Our approach reported here is an agent-based modeling method which uses the kinetic Monte Carlo (KMC) algorithm to evolve the cellular system on a lattice. In this method, the cells and the hydrogel media in which the cells are embedded are coarse-grained to material points on a three-dimensional (3D) lattice, where the cell-cell and cell-medium interactions are quantified by adhesion and cohesion energies. In a multicellular aggregate system with a fixed number of cells and a fixed amount of hydrogel media, where the effects of cell differentiation, proliferation and death are tactically neglected, the interaction energy is primarily dictated by the interfacial energy between cell and cell as well as between cell and medium particles on the lattice, based on the differential adhesion hypothesis. By using transition state theory to track the time evolution of the multicellular system while minimizing the interfacial energy, KMC is shown to be an efficient time-dependent simulation tool to study the evolution of the multicellular aggregate system. In this study, numerical experiments are presented to simulate fusion and cell sorting during the biofabrication process of vascular networks, in which the bio-constructs are fabricated via engineering designs. The results predict the feasibility of fabricating the vascular networks.
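
    A two-dimensional, Metropolis-style toy version of such a lattice model (the paper evolves a 3D lattice with a rejection-free KMC algorithm; the adhesion/cohesion energies and fluctuation temperature below are arbitrary) reproduces the interfacial-energy-driven clustering qualitatively:

      import numpy as np

      rng = np.random.default_rng(3)
      N = 40
      lat = (rng.random((N, N)) < 0.3).astype(int)  # 1 = cell, 0 = medium (hydrogel)

      # Assumed pairwise energies: cell-cell cohesion strongest (differential adhesion)
      J = {(0, 0): 0.0, (1, 1): -1.0, (0, 1): -0.4, (1, 0): -0.4}

      def site_energy(i, j):
          return sum(J[(lat[i, j], lat[(i + di) % N, (j + dj) % N])]
                     for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

      def interface():
          return int(np.sum(lat != np.roll(lat, 1, 0)) + np.sum(lat != np.roll(lat, 1, 1)))

      print("cell-medium interface bonds before:", interface())
      T = 0.5
      for step in range(150_000):
          i, j = rng.integers(0, N, 2)
          di, dj = ((1, 0), (-1, 0), (0, 1), (0, -1))[rng.integers(0, 4)]
          k, l = (i + di) % N, (j + dj) % N
          if lat[i, j] == lat[k, l]:
              continue
          e_old = site_energy(i, j) + site_energy(k, l)
          lat[i, j], lat[k, l] = lat[k, l], lat[i, j]      # trial swap
          dE = site_energy(i, j) + site_energy(k, l) - e_old
          if dE > 0 and rng.random() >= np.exp(-dE / T):
              lat[i, j], lat[k, l] = lat[k, l], lat[i, j]  # reject: undo the swap
      print("cell-medium interface bonds after: ", interface())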

  16. Development of CAD-Based Geometry Processing Module for a Monte Carlo Particle Transport Analysis Code

    The Monte Carlo (MC) particle transport analysis of a complex system such as a research reactor, an accelerator, or a fusion facility may require accurate modeling of complicated geometry. Manual modeling using the text interface of an MC code to define the geometrical objects is tedious, lengthy and error-prone. This problem can be overcome by taking advantage of the modeling capability of computer aided design (CAD) systems. There have been two kinds of approaches to developing MC code systems utilizing CAD data: external format conversion and CAD-kernel-embedded MC simulation. The first approach includes several interfacing programs such as McCAD, MCAM, and GEOMIT, which were developed to automatically convert CAD data into MCNP geometry input data. This approach makes the most of existing MC codes without any modifications, but implies latent data inconsistency due to differences between the geometry modeling systems. In the second approach, an MC code utilizes the CAD data for direct particle tracking or for conversion to an internal data structure of constructive solid geometry (CSG) and/or boundary representation (B-rep) modeling with the help of a CAD kernel. MCNP-BRL and OiNC have demonstrated their capabilities for CAD-based MC simulations. Recently we have developed a CAD-based geometry processing module for MC particle simulation using the OpenCASCADE (OCC) library. In the developed module, CAD data can be used for particle tracking through primitive CAD surfaces (hereafter, CAD-based tracking) or for internal conversion to the CSG data structure. In this paper, the performances of the text-based model, CAD-based tracking, and internal CSG conversion are compared using an in-house MC code, McSIM, equipped with the developed CAD-based geometry processing module

  17. Dynamic fault tree analysis using Monte Carlo simulation in probabilistic safety assessment

    Durga Rao, K. [Bhabha Atomic Research Centre, Mumbai (India)], E-mail: durga_k_rao@yahoo.com; Gopika, V.; Sanyasi Rao, V.V.S.; Kushwaha, H.S. [Bhabha Atomic Research Centre, Mumbai (India); Verma, A.K.; Srividya, A. [Indian Institute of Technology Bombay, Mumbai (India)

    2009-04-15

    Traditional fault tree (FT) analysis is widely used for reliability and safety assessment of complex and critical engineering systems. The behavior of components of complex systems and their interactions, such as sequence- and functional-dependent failures, spares and dynamic redundancy management, and priority of failure events, cannot be adequately captured by traditional FTs. Dynamic fault trees (DFTs) extend traditional FTs by defining additional gates, called dynamic gates, to model these complex interactions. Markov models are used in solving dynamic gates. However, the state space becomes too large for calculation with Markov models when the number of gate inputs increases. In addition, the Markov model is applicable only to exponential failure and repair distributions. Modeling test and maintenance information on spare components is also very difficult. To address these difficulties, a Monte Carlo simulation-based approach is used in this work to solve dynamic gates. The approach is first applied to a problem available in the literature that has non-repairable components. The obtained results are in good agreement with those in the literature. The approach is then applied to a simplified scheme of the electrical power supply system of a nuclear power plant (NPP), which is a complex repairable system with tested and maintained spares. The results obtained using this approach are in good agreement with those obtained using an analytical approach. In addition to point estimates of reliability measures, failure time and repair time distributions are also obtained from the simulation. Finally, a case study on the reactor regulation system (RRS) of an NPP is carried out to demonstrate the application of the simulation-based DFT approach to large-scale problems.
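
    The essence of solving dynamic gates by sampling rather than by Markov models can be shown in a few lines; the failure rates, mission time and gate choices below are illustrative assumptions only.

      import numpy as np

      rng = np.random.default_rng(4)
      n = 1_000_000
      t_mission = 1000.0                    # hours
      lam_a, lam_b = 1e-3, 2e-3             # assumed exponential failure rates

      # Cold-spare (CSP) gate: the spare cannot fail while dormant, so the
      # system fails at t_primary + t_spare_active. Sampling also works for
      # non-exponential laws, where Markov models no longer apply.
      t_sys = rng.exponential(1 / lam_a, n) + rng.exponential(1 / lam_b, n)
      print("CSP unreliability :", np.mean(t_sys <= t_mission))

      # Priority-AND (PAND) gate: fails only if A fails before B, both
      # within the mission time.
      t_a = rng.exponential(1 / lam_a, n)
      t_b = rng.exponential(1 / lam_b, n)
      print("PAND unreliability:", np.mean((t_a < t_b) & (t_b <= t_mission)))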

  19. Romania Monte Carlo Methods Application to CANDU Spent Fuel Comparative Analysis

    Romania has a single NPP, at Cernavoda, planned for 5 PHWR reactors of the CANDU6 type of 705 MW(e) each; Unit 1 has been operational since December 1996, Unit 2 is under construction, and Units 3-5 are being conserved. The worldwide development of nuclear energy is accompanied by the accumulation of huge quantities of spent nuclear fuel. In view of the possible impact upon the population and the environment, the spent fuel characteristics must be well known in all activities associated with the nuclear fuel cycle, namely transportation, storage, reprocessing and disposal. The aim of the paper is to apply Monte Carlo methods to CANDU spent fuel analysis, starting from the discharge moment, followed by spent fuel transport after a defined cooling period, and finishing with the intermediate dry storage. Three CANDU fuels were considered as radiation sources: the standard 37-rod fuel bundle with natural UO2 and with SEU fuel, and a 43-rod fuel bundle with SEU fuel. After a criticality calculation using the KENO-VI code, the criticality coefficient and the actinide and fission product concentrations are obtained. Using the ORIGEN-S code, the photon source profiles are calculated and the spent fuel characteristics are estimated. For the shielding calculations the MORSE-SGC code was used. Regarding spent fuel transport, the photon dose rates at the shipping cask wall and in air, at different distances from the cask, are estimated. The shielding calculation for the spent fuel intermediate dry storage is performed, and the photon dose rates at the storage basket wall (the active element of the Cernavoda NPP intermediate dry storage) are obtained. A comparison between the 3 types of CANDU fuels is presented. (authors)

  20. A Bayesian analysis of rare B decays with advanced Monte Carlo methods

    Beaujean, Frederik

    2012-11-12

    Searching for new physics in rare B meson decays governed by b → s transitions, we perform a model-independent global fit of the short-distance couplings C7, C9, and C10 of the ΔB=1 effective field theory. We assume the standard-model set of b → sγ and b → sl+l- operators with real-valued Ci. A total of 59 measurements by the experiments BaBar, Belle, CDF, CLEO, and LHCb of observables in B→K*γ, B→K(*)l+l-, and Bs→μ+μ- decays are used in the fit. Our analysis is the first of its kind to harness the full power of the Bayesian approach to probability theory. All main sources of theory uncertainty explicitly enter the fit in the form of nuisance parameters. We make optimal use of the experimental information to simultaneously constrain the Wilson coefficients as well as hadronic form factors - the dominant theory uncertainty. Generating samples from the posterior probability distribution to compute marginal distributions and predict observables by uncertainty propagation is a formidable numerical challenge for two reasons. First, the posterior has multiple well separated maxima and degeneracies. Second, the computation of the theory predictions is very time consuming. A single posterior evaluation requires O(1 s), and a few million evaluations are needed. Population Monte Carlo (PMC) provides a solution to both issues; a mixture density is iteratively adapted to the posterior, and samples are drawn in a massively parallel way using importance sampling. The major shortcoming of PMC is the need for cogent knowledge of the posterior at the initial stage. In an effort towards a general black-box Monte Carlo sampling algorithm, we present a new method to extract the necessary information in a reliable and automatic manner from Markov chains with the help of hierarchical clustering. Exploiting the latest 2012 measurements, the fit reveals a flipped-sign solution in addition to a standard-model-like solution for the couplings Ci. The two solutions are related

  2. Implementation of 3D Lattice Monte Carlo Simulation on a Cluster of Symmetric Multiprocessors

    雷咏梅; 蒋英; 冯捷

    2002-01-01

    This paper presents a new approach to parallelizing 3D lattice Monte Carlo algorithms used in the numerical simulation of polymers on ZiQiang 2000, a cluster of symmetric multiprocessors (SMPs). The combined load of the cell and energy calculations over the time step is balanced together to form a single spatial decomposition. Basic aspects and strategies of running Monte Carlo calculations on parallel computers are studied. The different steps involved in porting the software onto a parallel architecture based on ZiQiang 2000 running under Linux and MPI are described briefly. It is found that parallelization becomes more advantageous when either the lattice is very large or the model contains many cells and chains.

  3. Monte Carlo simulation applied to order economic analysis

    Abraão Freires Saraiva Júnior

    2011-03-01

    The use of mathematical and statistical methods can help managers to deal with decision-making difficulties in the business environment. Some of these decisions are related to optimizing the use of productive capacity in order to obtain greater economic gains for the company. Within this perspective, this study aims to establish metrics to support the economic decision of whether or not to process orders in a company whose products have great variability in variable direct costs per unit, which generates accounting uncertainties. To achieve this objective, a five-step method is proposed, built from the integration of Management Accounting and Operations Research techniques, with emphasis on Monte Carlo simulation. The method is applied to a didactic example that uses real data obtained through field research carried out in a plastic products industry that employs recycled material. Finally, it is concluded that Monte Carlo simulation is effective for treating the variability of unit variable direct costs and that the proposed method is useful to support decision-making related to order acceptance.
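
    A minimal sketch of the Monte Carlo step of such an order-acceptance analysis (the price, quantity, fixed cost and triangular cost distribution are invented for illustration):

      import numpy as np

      rng = np.random.default_rng(5)
      n = 100_000

      price, qty = 12.0, 5_000      # unit price and order size (assumed)
      fixed_cost = 8_000.0          # order-specific fixed cost (assumed)

      # Unit variable direct cost is uncertain (recycled raw material);
      # a triangular distribution is one simple modeling choice.
      unit_cost = rng.triangular(7.0, 9.0, 13.0, n)

      margin = (price - unit_cost) * qty - fixed_cost
      print(f"expected margin: {margin.mean():,.0f}")
      print(f"P(margin < 0)  : {np.mean(margin < 0):.1%}")

    The decision metric is then the whole margin distribution, e.g. the probability of a loss, rather than a single deterministic cost estimate.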

  4. Evaluation of CASMO-3 and HELIOS for Fuel Assembly Analysis from Monte Carlo Code

    Shim, Hyung Jin; Song, Jae Seung; Lee, Chung Chan

    2007-05-15

    This report presents a study comparing deterministic lattice physics calculations with Monte Carlo calculations for LWR fuel pin and assembly problems. The study has focused on comparing results from the lattice physics codes CASMO-3 and HELIOS against those from the continuous-energy Monte Carlo code McCARD. The comparisons include kinf, isotopic number densities, and pin power distributions. The CASMO-3 and HELIOS calculations for the kinf's of the LWR fuel pin problems show good agreement with McCARD within 956 pcm and 658 pcm, respectively. For the assembly problems with Gadolinia burnable poison rods, the largest difference between the kinf's is 1463 pcm with CASMO-3 and 1141 pcm with HELIOS. RMS errors for the pin power distributions of CASMO-3 and HELIOS are within 1.3% and 1.5%, respectively.

  5. A numerical analysis of antithetic variates in Monte Carlo radiation transport with geometrical surface splitting

    A numerical study for the effective implementation of the antithetic variates technique with geometric splitting/Russian roulette in Monte Carlo radiation transport calculations is presented. The study is based on the theory of Monte Carlo errors, in which a set of coupled integral equations is solved for the first and second moments of the score and for the expected number of flights per particle history. Numerical results are obtained for particle transmission through an infinite homogeneous slab shield composed of an isotropically scattering medium. Two types of antithetic transformations are considered. The results indicate that the antithetic transformations always lead to a reduction in variance and an increase in efficiency, provided optimal antithetic parameters are chosen. A substantial gain in efficiency is obtained by incorporating antithetic transformations in rule-of-thumb splitting. The advantage gained for thick slabs (∼20 mfp) with low scattering probability (0.1-0.5) is attractively large. (author)
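
    The variance-reduction mechanism of antithetic variates is easy to demonstrate on a toy integrand (a generic illustration, not the slab-transmission problem of the paper): for a monotone integrand, the pair (u, 1-u) produces negatively correlated scores.

      import numpy as np

      rng = np.random.default_rng(6)
      n = 50_000
      f = lambda u: np.exp(u)  # integrate f on [0,1]; exact mean is e - 1

      u = rng.random(2 * n)                 # plain MC, 2n evaluations
      plain = f(u)

      v = rng.random(n)                     # antithetic MC, also 2n evaluations
      anti = 0.5 * (f(v) + f(1.0 - v))      # each pair averaged into one score

      print("plain     :", plain.mean(), "var of mean:", plain.var(ddof=1) / (2 * n))
      print("antithetic:", anti.mean(),  "var of mean:", anti.var(ddof=1) / n)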

  6. Regeneration and Fixed-Width Analysis of Markov Chain Monte Carlo Algorithms

    Latuszynski, Krzysztof

    2009-07-01

    In the thesis we take the split chain approach to analyzing Markov chains and use it to establish fixed-width results for estimators obtained via Markov chain Monte Carlo procedures (MCMC). Theoretical results include necessary and sufficient conditions in terms of regeneration for central limit theorems for ergodic Markov chains and a regenerative proof of a CLT version for uniformly ergodic Markov chains with E_π f^2 < ∞. To obtain asymptotic confidence intervals for MCMC estimators, strongly consistent estimators of the asymptotic variance are essential. We relax the assumptions required to obtain such estimators. Moreover, under a drift condition, nonasymptotic fixed-width results for MCMC estimators are obtained for a general state space setting (not necessarily compact) and a not necessarily bounded target function f. The last chapter is devoted to the idea of adaptive Monte Carlo simulation and provides convergence results and a law of large numbers for adaptive procedures under a path-stability condition for transition kernels.

  7. Monte Carlo Renormalization Group Analysis of the Lattice φ^4 Model in D=3,4

    Itakura, M

    1999-01-01

    We present a simple yet sophisticated method to capture renormalization group flow in Monte Carlo simulation, which provides important information on critical phenomena. We applied the method to the D=3,4 lattice φ^4 model and obtained a renormalization flow diagram which reproduces well the theoretically predicted behavior of the continuum φ^4 model. We also show that the method can be easily applied to much more complicated models, such as frustrated spin models.

  8. Monte Carlo analysis of the terahertz difference frequency generation susceptibility in quantum cascade laser structures.

    Jirauschek, Christian; Okeil, Hesham; Lugli, Paolo

    2015-01-26

    Based on self-consistent ensemble Monte Carlo simulations coupled to the optical field dynamics, we investigate the giant nonlinear susceptibility giving rise to terahertz difference frequency generation in quantum cascade laser structures. Specifically, the dependence on temperature, bias voltage and frequency is considered. It is shown that the optical nonlinearity is temperature insensitive and covers a broad spectral range, as required for widely tunable room temperature terahertz sources. The obtained results are consistent with available experimental data. PMID:25835923

  10. Single pin BWR benchmark problem for coupled Monte Carlo - Thermal hydraulics analysis

    Ivanov, A.; Sanchez, V. [Karlsruhe Inst. of Technology, Inst. for Neutron Physics and Reactor Technology, Herman-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Hoogenboom, J. E. [Delft Univ. of Technology, Faculty of Applied Sciences, Mekelweg 15, 2629 JB Delft (Netherlands)

    2012-07-01

    As part of the European NURISP research project, a single pin BWR benchmark problem was defined. The aim of this initiative is to test the coupling strategies between Monte Carlo and subchannel codes developed by different project participants. In this paper the results obtained by the Delft Univ. of Technology and Karlsruhe Inst. of Technology will be presented. The benchmark problem was simulated with the following coupled codes: TRIPOLI-SUBCHANFLOW, MCNP-FLICA, MCNP-SUBCHANFLOW, and KENO-SUBCHANFLOW. (authors)

  11. Nonlinear Stochastic stability analysis of Wind Turbine Wings by Monte Carlo Simulations

    Larsen, Jesper Winther; Iwankiewicz, R.; Nielsen, Søren R.K.

    2007-01-01

    ... under narrow-banded excitation, and it is shown that the qualitative behaviour of the strange attractor is very similar for the periodic and almost periodic responses, whereas the strange attractor for the chaotic case loses structure as the excitation becomes narrow-banded. Furthermore, the characteristic behaviour of the strange attractor is shown to be identifiable by the so-called information dimension. Due to the complexity of the coupled nonlinear structural system, all analyses are carried out via Monte Carlo simulations.

  12. Final Technical Report - Large Deviation Methods for the Analysis and Design of Monte Carlo Schemes in Physics and Chemistry - DE-SC0002413

    Dupuis, Paul [Brown University

    2014-03-14

    This proposal is concerned with applications of Monte Carlo to problems in physics and chemistry where rare events degrade the performance of standard Monte Carlo. One class of problems is concerned with computation of various aspects of the equilibrium behavior of some Markov process via time averages. The problem to be overcome is that rare events interfere with the efficient sampling of all relevant parts of phase space. A second class concerns sampling transitions between two or more stable attractors. Here, rare events do not interfere with the sampling of all relevant parts of phase space, but make Monte Carlo inefficient because of the very large number of samples required to obtain variance comparable to the quantity estimated. The project uses large deviation methods for the mathematical analyses of various Monte Carlo techniques, and in particular for algorithmic analysis and design. This is done in the context of relevant application areas, mainly from chemistry and biology.

  13. Analysis of void coefficient in fast spectrum BWR core with Monte Carlo code 'MVP'

    An innovative large BWR core concept has been proposed, aiming at fuel breeding as well as a negative void reactivity coefficient. The core consists of two types of MOX fuel assemblies. One is a triangular tight lattice bundle 1.6 m in active core height and the other is the same bundle 0.8 m in height. The ratio of flow area to fuel area of the bundle is set at about 0.5 in order to increase the breeding ratio. A neutron-streaming channel, consisting of a cavity-can containing helium gas and a flow gap between the cavity-can and the channel box, is located above each short bundle. It decreases the void reactivity coefficient by enhancing neutron leakage from the core when the void fraction increases in the flow gap. A core composed of tight lattice bundles provides a much harder neutron spectrum than that of conventional BWRs but a slightly softer one than that of typical FBRs. The cavity-can and the flow gap cause a steep gradient of the neutron flux. The neutronics of such a complicated core structure could not be properly analyzed by conventional analysis methods. In particular, the analysis of the void reactivity coefficient requires a sophisticated method because it deals with a small change in core composition. In the analysis of the void reactivity coefficient, we adopted the three-dimensional Monte Carlo code 'MVP', which has been developed by JAERI and has many advantages such as an easy input form for lattice structures, a short run time and a continuous neutron energy method. The continuous neutron energy method is important for the analysis of this core because fission reactions occur mainly in the resonance energy region, where the evaluation of accurate cross sections is difficult with conventional methods. The library used is JENDL-3.2. The multi-layer structure of lattices is also essential for the analysis because the hard spectrum and relatively long neutron mean free path require modeling of the full core with a large number of bundles. The analysis indicates that ...

  14. Monte Carlo-based multiphysics coupling analysis of x-ray pulsar telescope

    Li, Liansheng; Deng, Loulou; Mei, Zhiwu; Zuo, Fuchang; Zhou, Hao

    2015-10-01

    X-ray pulsar telescope (XPT) is a complex optical payload involving the optical, mechanical, electrical and thermal disciplines. Multiphysics coupling analysis (MCA) plays an important role in improving its in-orbit performance. However, conventional MCA methods encounter two serious problems when dealing with the XPT. One is that neither the energy nor the reflectivity information of the X-rays can be taken into consideration, which misrepresents the essence of the XPT. The other is that the coupling data cannot be transferred automatically among the different disciplines, leading to computational inefficiency and high design cost. Therefore, a new MCA method for the XPT is proposed based on the Monte Carlo method and total reflection theory. The main idea, procedures and operational steps of the proposed method are addressed in detail. Firstly, the method takes both the energy and the reflectivity information of the X-rays into consideration simultaneously, and formulates the thermal-structural coupling equation and the multiphysics coupling analysis model based on the finite element method; the thermal-structural coupling analysis under different working conditions is then implemented. Secondly, the mirror deformations are obtained using a construction geometry function, a polynomial function is adopted to fit the deformed mirror, and the fitting error is evaluated. Thirdly, the focusing performance of the XPT is evaluated by the RMS. Finally, a Wolter-I XPT is taken as an example to verify the proposed MCA method. The simulation results show that the thermal-structural coupling deformation is bigger than the others, and the variation law of the deformation effect on the focusing performance has been obtained. The focusing performances under thermal-structural, thermal and structural deformations degraded by 30.01%, 14.35% and 7.85%, respectively, with RMS values of the dispersion spot of 2.9143 mm, 2.2038 mm and 2.1311 mm. As a result, the validity of the proposed method is verified through simulation.

  15. Source convergence diagnostics using Boltzmann entropy criterion application to different OECD/NEA criticality benchmarks with the 3-D Monte Carlo code Tripoli-4

    The measurement of the stationarity of Monte Carlo fission source distributions in keff calculations plays a central role in the ability to discriminate between fake and 'true' convergence (in the case of a high dominance ratio or of loosely coupled systems). Recent theoretical developments have been made in the study of source convergence diagnostics using Shannon entropy. We first recall those results, and then generalize them using the expression of the Boltzmann entropy, highlighting the gain in terms of the variety of physical problems that can be treated. Finally, we present the results of several OECD/NEA benchmarks using the Tripoli-4 Monte Carlo code enhanced with this new criterion. (authors)
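
    The Shannon entropy diagnostic that the Boltzmann criterion generalizes can be sketched in a few lines: the entropy of the binned fission-source distribution is tracked cycle by cycle, and its stationarity indicates that the source has converged. The random-walk 'source iteration' below is a stand-in for illustration only.

      import numpy as np

      def shannon_entropy(counts):
          p = counts / counts.sum()
          p = p[p > 0.0]
          return -np.sum(p * np.log2(p))

      rng = np.random.default_rng(7)
      nbins, nsrc = 64, 10_000
      pos = rng.random(nsrc) * 0.1  # deliberately poor initial source guess
      for cycle in range(31):
          pos = np.clip(pos + 0.05 * rng.standard_normal(nsrc), 0.0, 1.0)
          H = shannon_entropy(np.histogram(pos, bins=nbins, range=(0.0, 1.0))[0])
          if cycle % 5 == 0:
              print(f"cycle {cycle:2d}: H = {H:.3f} bits")
      # Once H fluctuates around a plateau, the inactive cycles can be ended.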

  16. Small angle neutron scattering by unfolded proteins: analysis using Monte Carlo simulation and molecular mechanics

    Small Angle Neutron Scattering (SANS) experiments have been performed on highly unfolded phosphoglycerate kinase (PGK) obtained by denaturation in 4M guanidinium chloride. The data were initially interpreted using analytical models in which the scattering density associated with the protein was represented as a Freely Jointed Chain (FJC) of contiguous spheres. We have recently developed from the same data a Monte Carlo simulation technique with experimental constraints for sampling the configurational distribution of various low resolution models, including the FJC. In all these models the unfolded protein is pictured as a chain of N contiguous spheres, where N is an independent parameter. The models differ, however, by the degree of interpenetration of neighbours higher than second order. Configurationally averaged scattering profiles coming from different models and different N are fitted to the data at very low q using the method described in a previous communication, and the similarity of the model curve to the experiment is examined over the rest of the q range. With this method we have demonstrated that models incorporating an excluded volume condition reproduce the SANS profiles markedly better than the FJC. The best agreement was obtained for an excluded volume chain model (EVC) of 82 spheres with a hard core of 0.7, i.e. an interpenetration of 0.3. For PGK this corresponds to 5 aa/sphere. Sphere model configurations can then be used to generate configurations at the atomic level using molecular mechanics. Different models of the local conformation of the polypeptide chain can thus be tested. Reconstruction of the scattering curve from atomic level configurations demonstrates that in the intermediate q region the SANS signal is sensitive to the overall phi/psi distribution of the protein, being an indicator of the presence or absence of native secondary structure. Our analysis also demonstrates that at high q the SANS signal is driven by the associated solvent and counter-ion cloud.

  17. Monte Carlo analysis of an ODE Model of the Sea Urchin Endomesoderm Network

    Klipp, Edda

    2009-08-01

    Background: Gene Regulatory Networks (GRNs) control the differentiation, specification and function of cells at the genomic level. The levels of interactions within large GRNs are of enormous depth and complexity. Details about many GRNs are emerging, but in most cases it is unknown to what extent they control a given process, i.e. the grade of completeness is uncertain. This uncertainty stems from limited experimental data, which is the main bottleneck for creating detailed dynamical models of cellular processes. Parameter estimation for each node is often infeasible for very large GRNs. We propose a method, based on random parameter estimations through Monte-Carlo simulations, to measure the completeness grades of GRNs. Results: We developed a heuristic to assess the completeness of large GRNs, using ODE simulations under different conditions and randomly sampled parameter sets to detect parameter-invariant effects of perturbations. To test this heuristic, we constructed the first ODE model of the whole sea urchin endomesoderm GRN, one of the best studied large GRNs. We find that nearly 48% of the parameter-invariant effects correspond with experimental data, which is 65% of the expected optimal agreement obtained from a submodel for which kinetic parameters were estimated and used for simulations. Randomized versions of the model reproduce only 23.5% of the experimental data. Conclusion: The method described in this paper enables an evaluation of the network topologies of GRNs without requiring any parameter values. The benefit of this method is exemplified in the first mathematical analysis of the complete Endomesoderm Network Model. The predictions we provide deliver candidate nodes in the network that are likely to be erroneous or to miss unknown connections, which may need additional experiments to improve the network topology. This mathematical model can serve as a scaffold for detailed and more realistic models. We propose that our method can ...

  18. Statistical analysis for discrimination of prompt gamma ray peak induced by high energy neutron: Monte Carlo simulation study

    The purpose of this research is a statistical analysis for the discrimination of prompt gamma ray peaks induced by 14.1 MeV neutrons from spectra using Monte Carlo simulation. For the simulation, the information of 18 detector materials was used to simulate spectra from the neutron capture reaction. The discrimination of nine prompt gamma ray peaks from the simulation of each detector material was performed. We present several comparison indexes of energy resolution performance, depending on the detector material, using the simulation and statistics for prompt gamma activation analysis. (author)

  19. Monte Carlo analysis of the Neutron Standards Laboratory of the CIEMAT

    Vega C, H. R. [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Cipres No. 10, Fracc. La Penuela, 98068 Zacatecas (Mexico); Mendez V, R. [Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas, Av. Complutense 40, 28040 Madrid (Spain); Guzman G, K. A., E-mail: fermineutron@yahoo.com [Universidad Politecnica de Madrid, Departamento de Ingenieria Nuclear, C. Jose Gutierrez Abascal 2, 28006 Madrid (Spain)

    2014-10-15

    The neutron field produced by the calibration sources of the Neutron Standards Laboratory of the Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas (CIEMAT) was characterized by means of Monte Carlo methods. The laboratory has two neutron calibration sources, 241AmBe and 252Cf, which are stored in a water pool and are placed on the calibration bench using remotely controlled systems. To characterize the neutron field, a three-dimensional model of the room was built that included the stainless steel bench, the irradiation table and the storage pool. The source models included the double steel encapsulation as cladding. To determine the effect produced by the presence of the different components of the room, the neutron spectra, the total fluence and the ambient dose equivalent rate at 100 cm from the source were considered during the characterization. The walls, floor and ceiling of the room cause the largest modification of the spectra and of the integral values of the fluence and the ambient dose equivalent rate. (Author)

  20. The applicability of certain Monte Carlo methods to the analysis of interacting polymers

    Krapp, D.M. Jr. [Univ. of California, Berkeley, CA (United States)

    1998-05-01

    The authors consider polymers, modeled as self-avoiding walks with interactions on a hexagonal lattice, and examine the applicability of certain Monte Carlo methods for estimating their mean properties at equilibrium. Specifically, the authors use the pivoting algorithm of Madras and Sokal and Metropolis rejection to locate the phase transition, which is known to occur at β_crit ≈ 0.99, and to recalculate the known value of the critical exponent ν ≈ 0.58 of the system for β = β_crit. Although the pivoting-Metropolis algorithm works well for short walks (N < 300), for larger N the Metropolis criterion combined with the self-avoidance constraint leads to an unacceptably small acceptance fraction. In addition, the algorithm becomes effectively non-ergodic, getting trapped in valleys whose centers are local energy minima in phase space, leading to convergence towards different values of ν. The authors use a variety of tools, e.g. entropy estimation and histograms, to improve the results for large N, but they are only of limited effectiveness. Their estimate of β_crit using smaller values of N is 1.01 ± 0.01, and the estimate for ν at this value of β is 0.59 ± 0.005. They conclude that even a seemingly simple system and a Monte Carlo algorithm which satisfies, in principle, ergodicity and detailed balance conditions can in practice fail to sample phase space accurately and thus not allow accurate estimation of thermal averages. This should serve as a warning to people who use Monte Carlo methods in complicated polymer folding calculations. The structure of the phase space combined with the algorithm itself can lead to surprising behavior, and simply increasing the number of samples in the calculation does not necessarily lead to more accurate results.
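
    For concreteness, here is a minimal sketch of a pivot move with Metropolis acceptance for an interacting self-avoiding walk; a square lattice and a simple nearest-neighbour contact energy are used for brevity (the paper works on a hexagonal lattice):

```python
import numpy as np

rng = np.random.default_rng(2)
ROTS = [np.array([[0, -1], [1, 0]]), np.array([[-1, 0], [0, -1]]),
        np.array([[0, 1], [-1, 0]])]          # proper rotations of Z^2

def energy(walk, beta):
    """-beta per non-bonded nearest-neighbour contact."""
    occupied = {tuple(p): i for i, p in enumerate(walk)}
    contacts = 0
    for i, p in enumerate(walk):
        for d in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            j = occupied.get((p[0] + d[0], p[1] + d[1]))
            if j is not None and abs(i - j) > 1:
                contacts += 1
    return -beta * contacts / 2               # each contact counted twice

def pivot_step(walk, beta):
    k = rng.integers(1, len(walk) - 1)        # pivot site
    R = ROTS[rng.integers(3)]
    new = walk.copy()
    new[k + 1:] = walk[k] + (walk[k + 1:] - walk[k]) @ R.T
    if len({tuple(p) for p in new}) < len(new):
        return walk                           # self-avoidance violated
    dE = energy(new, beta) - energy(walk, beta)
    if dE <= 0 or rng.random() < np.exp(-dE):
        return new                            # Metropolis acceptance
    return walk

walk = np.column_stack([np.arange(100), np.zeros(100, dtype=int)])
for _ in range(2000):
    walk = pivot_step(walk, beta=0.5)
print("end-to-end distance:", np.linalg.norm(walk[-1] - walk[0]))
```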

  1. Application of direct simulation Monte Carlo method for analysis of AVLIS evaporation process

    A computation code based on the direct simulation Monte Carlo (DSMC) method was developed in order to analyze the atomic vapor evaporation in atomic vapor laser isotope separation (AVLIS). The atomic excitation temperatures of the gadolinium atom were calculated for a model with five low-lying states. The calculated results were compared with experiments obtained by laser absorption spectroscopy. Two types of DSMC simulations, differing in their inelastic collision procedures, were carried out. It was concluded that the energy transfer is forbidden unless the total energy of the colliding atoms exceeds a threshold value. (author)

  2. Random vibration analysis of switching apparatus based on Monte Carlo method

    ZHAI Guo-fu; CHEN Ying-hua; REN Wan-bin

    2007-01-01

    The performance of switching apparatus containing mechanical contacts in a vibration environment is an important element in judging the apparatus's reliability. A piecewise-linear, two-degrees-of-freedom mathematical model accounting for contact loss was built in this work, and the vibration performance of the model under random external Gaussian white-noise excitation was investigated using Monte Carlo simulation in Matlab/Simulink. The simulation showed that the spectral content and statistical characteristics of the contact force agree closely with reality. The random vibration behaviour of the contact system was solved using time-domain (numerical) simulation in this paper. The conclusions reached here are of great importance for the reliability design of switching apparatus.
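
    A rough sketch of such a simulation (all parameter values are invented, and plain Euler integration in Python stands in for the Matlab/Simulink implementation): the contact force follows a piecewise-linear law that drops to zero when the parts separate.

```python
import numpy as np

rng = np.random.default_rng(3)
m1, m2 = 0.01, 0.005              # contact member masses [kg] (assumed)
k1, c1 = 1.0e4, 1.0               # spring/damper of mass 1 [N/m, N s/m]
k3, c3 = 2.0e4, 0.5               # spring/damper of mass 2
k2, c2 = 5.0e4, 2.0               # contact stiffness and damping
f0 = 0.5                          # static contact preload [N]
dt, n_steps, sigma = 1e-5, 200_000, 2.0

x = np.zeros(4)                   # state [x1, v1, x2, v2]
fc_hist = np.empty(n_steps)
for i in range(n_steps):
    x1, v1, x2, v2 = x
    # Piecewise-linear contact law: force vanishes on separation
    fc = max(0.0, f0 + k2 * (x2 - x1) + c2 * (v2 - v1))
    w = sigma * rng.normal() / np.sqrt(dt)   # Gaussian white-noise excitation
    a1 = (-k1 * x1 - c1 * v1 + fc + m1 * w) / m1
    a2 = (-k3 * x2 - c3 * v2 - fc) / m2
    x = x + dt * np.array([v1, a1, v2, a2])
    fc_hist[i] = fc
print("mean contact force: %.3f N, contact-loss fraction: %.4f"
      % (fc_hist.mean(), np.mean(fc_hist == 0.0)))
```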

  3. XSBench. The development and verification of a performance abstraction for Monte Carlo reactor analysis

    We isolate the most computationally expensive steps of a robust nuclear reactor core Monte Carlo particle transport simulation. The hot kernel is then abstracted into a simplified proxy application, designed to mimic the key performance characteristics of the full application. A series of performance verification tests and analyses are carried out to investigate the low-level performance parameters of both the simplified kernel and the full application. The kernel's performance profile is found to closely match that of the application, making it a convenient test bed for performance analyses on cutting edge platforms and experimental next-generation high performance computing architectures. (author)
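
    The hot kernel in question is the macroscopic cross-section lookup; a schematic version (assumed data layout, with synthetic random data standing in for real cross sections) looks like this:

```python
import numpy as np

rng = np.random.default_rng(4)
N_GRID, N_NUCLIDES = 10_000, 50
energy_grid = np.sort(rng.uniform(1e-5, 20.0, N_GRID))    # MeV, unionized
xs_table = rng.uniform(0.1, 10.0, (N_GRID, N_NUCLIDES))   # barns (synthetic)
density = rng.uniform(1e-4, 1e-1, N_NUCLIDES)             # atoms/(b*cm)

def macro_total(E):
    """Binary search on the unionized grid, then linear interpolation."""
    i = np.searchsorted(energy_grid, E) - 1
    i = np.clip(i, 0, N_GRID - 2)
    f = (E - energy_grid[i]) / (energy_grid[i + 1] - energy_grid[i])
    micro = (1 - f) * xs_table[i] + f * xs_table[i + 1]
    return float(density @ micro)            # Sigma_t = sum_j N_j * sigma_j

# Random energy lookups like these dominate runtime in the full code
lookups = rng.uniform(1e-5, 20.0, 10_000)
totals = [macro_total(E) for E in lookups]
print("mean macroscopic total xs:", np.mean(totals), "1/cm")
```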

  4. Variance analysis of the Monte-Carlo perturbation source method in inhomogeneous linear particle transport problems

    The perturbation source method may be a powerful Monte-Carlo means to calculate small effects in a particle field. In a preceding paper we formulated this method for inhomogeneous linear particle transport problems, describing the particle fields by solutions of Fredholm integral equations, and derived formulae for the second moment of the difference event point estimator. In the present paper we analyse the general structure of its variance, point out the variance peculiarities, discuss the dependence on certain transport games and on the generation procedures of the auxiliary particles, and draw conclusions on how to improve this method.

  5. A Monte Carlo simulation of neutron activation analysis of bulk objects

    Fantidis, J.G. [Faculty of Engineering, Department of Electrical and Computer Engineering, Laboratory of Nuclear Technology, Democritus University of Thrace, Vas. Sofias 12, 67100 Xanthi (Greece); Nicolaou, G. [Faculty of Engineering, Department of Electrical and Computer Engineering, Laboratory of Nuclear Technology, Democritus University of Thrace, Vas. Sofias 12, 67100 Xanthi (Greece)], E-mail: nicolaou@ee.duth.gr; Tsagas, N.F. [Faculty of Engineering, Department of Electrical and Computer Engineering, Laboratory of Nuclear Technology, Democritus University of Thrace, Vas. Sofias 12, 67100 Xanthi (Greece)

    2009-03-15

    A PGNAA facility comprising an isotopic neutron source has been simulated using the Monte Carlo code MCNPX. The facility is envisaged for elemental composition studies of biomedical, environmental and industrial bulk objects. The study aimed to improve the detection sensitivity for prompt gamma-rays emitted by a bulk object and measured in the presence of higher-energy ones. An appropriate collimator, a filter between the neutron source and the object, and an optimisation of the positioning of the neutron beam and the detector relative to the analysed object were the means used to improve the desired sensitivity. The simulation is demonstrated for the in-vivo PGNAA of boron in the human liver.

  6. A Monte Carlo simulation of neutron activation analysis of bulk objects

    A PGNAA facility comprising an isotopic neutron source has been simulated using the Monte Carlo code MCNPX. The facility is envisaged for elemental composition studies of biomedical, environmental and industrial bulk objects. The study aimed to improve the detection sensitivity for prompt gamma-rays emitted by a bulk object and measured in the presence of higher-energy ones. An appropriate collimator, a filter between the neutron source and the object, and an optimisation of the positioning of the neutron beam and the detector relative to the analysed object were the means used to improve the desired sensitivity. The simulation is demonstrated for the in-vivo PGNAA of boron in the human liver.

  7. Analysis of skin tissues spatial fluorescence distribution by the Monte Carlo simulation

    Churmakov, D Y; Piletsky, S A; Greenhalgh, D A

    2003-01-01

    A novel Monte Carlo technique for simulating the spatial fluorescence distribution within human skin is presented. The computational model of skin takes into account the spatial distribution of fluorophores that would arise from the structure of collagen fibres, in contrast to the epidermis and stratum corneum, where the distribution of fluorophores is assumed to be homogeneous. The simulation results suggest that the distribution of autofluorescence is significantly suppressed in the near-infrared spectral region, whereas the spatial distribution of fluorescence sources within a sensor layer embedded in the epidermis is localized at an effective depth.

  8. Monte Carlo Radiative Transfer

    Whitney, Barbara A

    2011-01-01

    I outline methods for calculating the solution of Monte Carlo Radiative Transfer (MCRT) in scattering, absorption and emission processes of dust and gas, including polarization. I provide a bibliography of relevant papers on methods with astrophysical applications.
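
    As a minimal illustration of the MCRT idea (not from the paper: a plane-parallel slab with isotropic scattering and assumed optical depth and albedo), the following tallies the fraction of photon packets transmitted through the slab:

```python
import numpy as np

rng = np.random.default_rng(5)
tau_max, albedo, n_photons = 2.0, 0.9, 100_000
escaped = 0
for _ in range(n_photons):
    tau, mu, alive = 0.0, 1.0, True          # optical depth, direction cosine
    while alive:
        tau += mu * -np.log(rng.random())    # sample path to next event
        if tau < 0.0:
            alive = False                    # back out the illuminated face
        elif tau > tau_max:
            escaped += 1
            alive = False                    # transmitted through the slab
        elif rng.random() < albedo:
            mu = 2.0 * rng.random() - 1.0    # isotropic scatter
        else:
            alive = False                    # absorbed
print("transmitted fraction:", escaped / n_photons)
```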

  9. An Analysis of the Nuclear Data Libraries' Impact on the Criticality Computations Performed using Monte Carlo Codes

    The major aim of this work is a sensitivity analysis related to the influence of the different nuclear data libraries on the k-infinity values and on the void coefficient estimations performed for various CANDU fuel projects, and on the simulations related to the replacement of the original stainless steel adjuster rods by cobalt assemblies in the CANDU reactor core. The computations are performed using the Monte Carlo transport codes MCNP5 and MONTEBURNS 1.0 for the actual, detailed geometry and material composition of the fuel bundles and reactivity devices. Some comparisons with deterministic and probabilistic codes involving the WIMS library are also presented

  10. Sodium void reactivity effect analysis using the newly developed exact perturbation theory in Monte-Carlo code TRIPOLI-4®

    The analysis of the void reactivity effect is of prominent interest for Sodium-cooled Fast Reactor (SFR) safety. Indeed, in case of a sodium leakage of the primary circuit, void reactivity represents the main passive negative feedback to ensure reactivity control. The core can be designed to maximize neutron leakage and lower the average neutron multiplication factor in the event of sodium disappearing from within assemblies; thus, the nuclear chain reaction is stopped. The most promising solution is to place a sodium region above the fuel in order for neutrons to be reflected when the region is filled and to escape when the region is empty. In terms of simulation, this configuration is a challenge for usual calculation schemes: 1. Deterministic codes are typically limited in their ability to homogenize a sub-critical medium such as the sodium plenum. 2. Monte Carlo codes are typically not able to split the total reactivity effect into different components, which prevents straightforward uncertainty analysis. Furthermore, since experimental values can sometimes be small, Monte Carlo codes may not converge within a reasonable computation time. A new feature recently made available in the Monte Carlo code TRIPOLI-4®, based on the Exact Perturbation Theory, allows very small reactivity perturbations to be computed accurately and reactivity effects to be estimated for distinct isotope cross-sections. In the first part of this paper, this new feature of the code is described; it is then applied in the second part to a core configuration composed of several layers of fuel and fertile zones below a sodium plenum. Reactivity and its contributions from specific reactions and energy groups are calculated and compared with the results of the deterministic code ERANOS. The aim of this work is twofold: (1) achieve a numerical validation of the new TRIPOLI-4® features and (2) identify where deterministic codes might be less accurate and why, even when using them at full capacity (S16...

  11. Monte Carlo transition probabilities

    Lucy, L. B.

    2001-01-01

    Transition probabilities governing the interaction of energy packets and matter are derived that allow Monte Carlo NLTE transfer codes to be constructed without simplifying the treatment of line formation. These probabilities are such that the Monte Carlo calculation asymptotically recovers the local emissivity of a gas in statistical equilibrium. Numerical experiments with one-point statistical equilibrium problems for Fe II and Hydrogen confirm this asymptotic behaviour. In addition, the re...

  12. Application of Monte Carlo Methods to Perform Uncertainty and Sensitivity Analysis on Inverse Water-Rock Reactions with NETPATH

    McGraw, David [Desert Research Inst. (DRI), Reno, NV (United States); Hershey, Ronald L. [Desert Research Inst. (DRI), Reno, NV (United States)

    2016-06-01

    Methods were developed to quantify uncertainty and sensitivity for NETPATH inverse water-rock reaction models and to calculate dissolved inorganic carbon, carbon-14 groundwater travel times. The NETPATH models calculate upgradient groundwater mixing fractions that produce the downgradient target water chemistry along with amounts of mineral phases that are either precipitated or dissolved. Carbon-14 groundwater travel times are calculated based on the upgradient source-water fractions, carbonate mineral phase changes, and isotopic fractionation. Custom scripts and statistical code were developed for this study to facilitate modifying input parameters, running the NETPATH simulations, extracting relevant output, postprocessing the results, and producing graphs and summaries. The scripts read user-specified values for each constituent's coefficient of variation, distribution, sensitivity parameter, maximum dissolution or precipitation amounts, and number of Monte Carlo simulations. Monte Carlo methods for analysis of parametric uncertainty assign a distribution to each uncertain variable, sample from those distributions, and evaluate the ensemble output. The uncertainty in input affected the variability of outputs, namely source-water mixing, phase dissolution and precipitation amounts, and carbon-14 travel time. Although NETPATH may provide models that satisfy the constraints, it is up to the geochemist to determine whether the results are geochemically reasonable. Two example water-rock reaction models from previous geochemical reports were considered in this study. Sensitivity analysis was also conducted to evaluate the change in output caused by a small change in input, one constituent at a time. Results were standardized to allow for sensitivity comparisons across all inputs, which results in a representative value for each scenario. The approach yielded insight into the uncertainty in water-rock reactions and travel times. For example, there was little...
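
    A schematic version of such a driver loop, with a stand-in algebraic function in place of a NETPATH run and invented constituent names and distributions:

```python
import numpy as np

rng = np.random.default_rng(6)
means = {"Ca": 2.0, "Mg": 0.8, "DIC": 4.5}      # hypothetical constituents
cv = {"Ca": 0.10, "Mg": 0.15, "DIC": 0.05}      # coefficients of variation

def travel_time(inputs):
    """Stand-in for a NETPATH run plus the carbon-14 travel-time step."""
    return 1e3 * inputs["DIC"] / (inputs["Ca"] + 0.5 * inputs["Mg"])

# Monte Carlo uncertainty: sample all constituents jointly, evaluate ensemble
samples = [travel_time({k: rng.normal(means[k], cv[k] * means[k])
                        for k in means}) for _ in range(10_000)]
print("travel time: %.0f +/- %.0f yr" % (np.mean(samples), np.std(samples)))

# One-at-a-time sensitivity, standardized to be comparable across inputs
base = travel_time(means)
for k in means:
    bumped = dict(means, **{k: 1.01 * means[k]})
    s = (travel_time(bumped) - base) / base / 0.01   # d(ln T)/d(ln x_k)
    print("standardized sensitivity to %s = %.2f" % (k, s))
```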

  13. Development and Application of MCNP5 and KENO-VI Monte Carlo Models for the Atucha-2 PHWR Analysis

    M. Pecchia

    2011-01-01

    The geometrical complexity and the peculiarities of Atucha-2 PHWR require the adoption of advanced Monte Carlo codes for performing realistic neutronic simulations. Core models of Atucha-2 PHWR were developed using both the MCNP5 and KENO-VI codes. The developed models were applied to calculating reactor criticality states at beginning of life, reactor cell constants, and control rod volumes. The last two applications were relevant for performing subsequent three-dimensional neutron kinetic analyses, since it was necessary to correctly evaluate the effect of each oblique control rod in each cell discretizing the reactor. These corrective factors were then applied to the cell cross sections calculated by the two-dimensional deterministic lattice physics code HELIOS. These results were implemented in the RELAP-3D model to perform safety analyses for the licensing process.

  14. Development and Application of MCNP5 and KENO-VI Monte Carlo Models for the Atucha-2 PHWR Analysis

    The geometrical complexity and the peculiarities of Atucha-2 PHWR require the adoption of advanced Monte Carlo codes for performing realistic neutronic simulations. Core models of Atucha-2 PHWR were developed using both the MCNP5 and KENO-VI codes. The developed models were applied to calculating reactor criticality states at beginning of life, reactor cell constants, and control rod volumes. The last two applications were relevant for performing subsequent three-dimensional neutron kinetic analyses, since it was necessary to correctly evaluate the effect of each oblique control rod in each cell discretizing the reactor. These corrective factors were then applied to the cell cross sections calculated by the two-dimensional deterministic lattice physics code Helios. These results were implemented in the RELAP-3D model to perform safety analyses for the licensing process.

  15. Algorithmic choices in WARP – A framework for continuous energy Monte Carlo neutron transport in general 3D geometries on GPUs

    Highlights: • WARP, a GPU-accelerated Monte Carlo neutron transport code, has been developed. • The NVIDIA OptiX high-performance ray tracing library is used to process geometric data. • The unionized cross section representation is modified for higher performance. • Reference remapping is used to keep the GPU busy as the neutron batch population reduces. • Reference remapping is done using a key-value radix sort on neutron reaction type. - Abstract: In recent supercomputers, general purpose graphics processing units (GPGPUs) are a significant fraction of the supercomputer’s total computational power. GPGPUs have different architectures from central processing units (CPUs), and for the Monte Carlo neutron transport codes used in nuclear engineering to take advantage of these coprocessor cards, transport algorithms must be changed to execute efficiently on them. WARP is a continuous energy Monte Carlo neutron transport code that has been written to do this. The main thrust of WARP is to adapt previous event-based transport algorithms to the new GPU hardware; the algorithmic choices for all parts of the code are presented in this paper, and the remapping idea is sketched below. It is found that remapping history data references increases the GPU processing rate when histories start to complete. The main reason for this is that completed data are eliminated from the address space, threads are kept busy, and memory bandwidth is not wasted on checking completed data. Remapping also allows the interaction kernels to be launched concurrently, improving efficiency. The OptiX ray tracing framework and CUDPP library are used for geometry representation and parallel dataset-side operations, ensuring high performance and reliability.
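
    The remapping idea can be illustrated in a few lines; here numpy's stable argsort stands in for the GPU key-value radix sort, and the reaction types and data layout are invented:

```python
import numpy as np

rng = np.random.default_rng(7)
ELASTIC, CAPTURE, FISSION, DONE = 0, 1, 2, 3
n = 1_000_000
rxn = rng.integers(0, 4, n).astype(np.uint8)      # reaction sampled per history
energy = rng.uniform(1e-6, 20.0, n)

# Key-value sort: keys are reaction types, values are history indices
order = np.argsort(rxn, kind="stable")            # radix-like sort on keys
sorted_rxn = rxn[order]

# Each "kernel launch" now receives one dense, contiguous index block;
# completed (DONE) histories sit at the end and are simply never launched.
blocks = {r: order[np.searchsorted(sorted_rxn, r):
                   np.searchsorted(sorted_rxn, r + 1)]
          for r in (ELASTIC, CAPTURE, FISSION)}
energy[blocks[ELASTIC]] *= rng.uniform(0.3, 1.0, blocks[ELASTIC].size)
print({r: b.size for r, b in blocks.items()})
```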

  16. Monte Carlo Based Calibration and Uncertainty Analysis of a Coupled Plant Growth and Hydrological Model

    Houska, Tobias; Multsch, Sebastian; Kraft, Philipp; Frede, Hans-Georg; Breuer, Lutz

    2014-05-01

    Computer simulations are widely used to support decision making and planning in the agriculture sector. On the one hand, many plant growth models use simplified hydrological processes and structures, e.g. by the use of a small number of soil layers or by the application of simple water flow approaches. On the other hand, in many hydrological models plant growth processes are poorly represented. Hence, fully coupled models with a high degree of process representation would allow a more detailed analysis of the dynamic behaviour of the soil-plant interface. We used the Python programming language to couple two such highly process-oriented independent models and to calibrate both models simultaneously. The Catchment Modelling Framework (CMF) simulated soil hydrology based on the Richards equation and the Van-Genuchten-Mualem retention curve. CMF was coupled with the Plant growth Modelling Framework (PMF), which predicts plant growth on the basis of radiation use efficiency, degree days, water shortage and dynamic root biomass allocation. The Monte Carlo based Generalised Likelihood Uncertainty Estimation (GLUE) method was applied to parameterize the coupled model and to investigate the related uncertainty of model predictions. Overall, 19 model parameters (4 for CMF and 15 for PMF) were analysed through 2 × 10^6 model runs randomly drawn from a uniformly distributed parameter space. Three objective functions were used to evaluate the model performance, i.e. coefficient of determination (R2), bias and model efficiency according to Nash-Sutcliffe (NSE). The model was applied to three sites with different management in Muencheberg (Germany) for the simulation of winter wheat (Triticum aestivum L.) in a cross-validation experiment. Field observations for model evaluation included soil water content and the dry matter of roots, storage organs, stems and leaves. The best parameter sets resulted in an NSE of 0.57 for the simulation of soil moisture across all three sites. The shape...
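
    A compact sketch of the GLUE procedure on a stand-in model (the exponential toy model, priors and the 0.5 acceptance threshold are assumptions; the actual study used the coupled CMF/PMF model and 2 × 10^6 runs):

```python
import numpy as np

rng = np.random.default_rng(8)
t = np.linspace(0, 10, 50)
obs = 2.0 * np.exp(-0.3 * t) + rng.normal(0, 0.05, t.size)  # synthetic data

def model(a, k):                      # stand-in for the coupled CMF/PMF model
    return a * np.exp(-k * t)

def nse(sim):
    """Nash-Sutcliffe model efficiency."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

params = rng.uniform([0.5, 0.05], [4.0, 1.0], size=(20_000, 2))
scores = np.array([nse(model(a, k)) for a, k in params])

behavioural = scores > 0.5            # GLUE acceptance threshold
w = scores[behavioural] - 0.5         # informal likelihood weights
sims = np.array([model(a, k) for a, k in params[behavioural]])
mean_pred = (w[:, None] * sims).sum(0) / w.sum()
print(f"{behavioural.sum()} behavioural sets, peak prediction "
      f"{mean_pred.max():.2f} vs observed {obs.max():.2f}")
```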

  17. Oxygen distribution in tumors: A qualitative analysis and modeling study providing a novel Monte Carlo approach

    Lagerlöf, Jakob H., E-mail: Jakob@radfys.gu.se [Department of Radiation Physics, Göteborg University, Göteborg 41345 (Sweden); Kindblom, Jon [Department of Oncology, Sahlgrenska University Hospital, Göteborg 41345 (Sweden); Bernhardt, Peter [Department of Radiation Physics, Göteborg University, Göteborg 41345, Sweden and Department of Nuclear Medicine, Sahlgrenska University Hospital, Göteborg 41345 (Sweden)

    2014-09-15

    Purpose: To construct a Monte Carlo (MC)-based simulation model for analyzing the dependence of tumor oxygen distribution on different variables related to tumor vasculature [blood velocity, vessel-to-vessel proximity (vessel proximity), and inflowing oxygen partial pressure (pO2)]. Methods: A voxel-based tissue model containing parallel capillaries with square cross-sections (sides of 10 μm) was constructed. Green's function was used for the diffusion calculations and Michaelis-Menten kinetics to manage oxygen consumption. The model was tuned to approximately reproduce the oxygenation status of a renal carcinoma; the depth oxygenation curves (DOC) were fitted with an analytical expression to facilitate rapid MC simulations of tumor oxygen distribution. DOCs were simulated with three variables at three settings each (blood velocity, vessel proximity, and inflowing pO2), which resulted in 27 combinations of conditions. To create a model that simulated variable oxygen distributions, the oxygen tension at a specific point was randomly sampled with trilinear interpolation in the dataset from the first simulation. Six correlations between blood velocity, vessel proximity, and inflowing pO2 were hypothesized. Variable models with correlated parameters were compared to each other and to a nonvariable, DOC-based model to evaluate the differences in simulated oxygen distributions and tumor radiosensitivities for different tumor sizes. Results: For tumors with radii ranging from 5 to 30 mm, the nonvariable DOC model tended to generate normal or log-normal oxygen distributions, with a cut-off at zero. The pO2 distributions simulated with the six-variable DOC models were quite different from the distributions generated with the nonvariable DOC model; in the former case the variable models simulated oxygen distributions that were more similar to in vivo results found in the literature. For larger tumors, the oxygen distributions became...

  18. Oxygen distribution in tumors: A qualitative analysis and modeling study providing a novel Monte Carlo approach

    Purpose: To construct a Monte Carlo (MC)-based simulation model for analyzing the dependence of tumor oxygen distribution on different variables related to tumor vasculature [blood velocity, vessel-to-vessel proximity (vessel proximity), and inflowing oxygen partial pressure (pO2)]. Methods: A voxel-based tissue model containing parallel capillaries with square cross-sections (sides of 10 μm) was constructed. Green's function was used for the diffusion calculations and Michaelis-Menten kinetics to manage oxygen consumption. The model was tuned to approximately reproduce the oxygenation status of a renal carcinoma; the depth oxygenation curves (DOC) were fitted with an analytical expression to facilitate rapid MC simulations of tumor oxygen distribution. DOCs were simulated with three variables at three settings each (blood velocity, vessel proximity, and inflowing pO2), which resulted in 27 combinations of conditions. To create a model that simulated variable oxygen distributions, the oxygen tension at a specific point was randomly sampled with trilinear interpolation in the dataset from the first simulation. Six correlations between blood velocity, vessel proximity, and inflowing pO2 were hypothesized. Variable models with correlated parameters were compared to each other and to a nonvariable, DOC-based model to evaluate the differences in simulated oxygen distributions and tumor radiosensitivities for different tumor sizes. Results: For tumors with radii ranging from 5 to 30 mm, the nonvariable DOC model tended to generate normal or log-normal oxygen distributions, with a cut-off at zero. The pO2 distributions simulated with the six-variable DOC models were quite different from the distributions generated with the nonvariable DOC model; in the former case the variable models simulated oxygen distributions that were more similar to in vivo results found in the literature. For larger tumors, the oxygen distributions became truncated in the lower end, due...
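
    The variable-model sampling step, random sampling of oxygen tension with trilinear interpolation in the simulated dataset, can be sketched as follows (grid values and coordinates are made up):

```python
import numpy as np

rng = np.random.default_rng(9)
axes = [np.array([0.0, 0.5, 1.0])] * 3         # normalized variable settings
po2 = rng.uniform(5.0, 60.0, (3, 3, 3))        # mmHg, stand-in for DOC data

def trilinear(grid, axes, pt):
    """Trilinear interpolation by collapsing one axis at a time."""
    c = grid
    for ax, x in zip(axes, pt):
        i = int(np.clip(np.searchsorted(ax, x) - 1, 0, len(ax) - 2))
        f = (x - ax[i]) / (ax[i + 1] - ax[i])
        c = (1 - f) * c[i] + f * c[i + 1]      # collapse leading axis
    return float(c)

# Random (velocity, proximity, inflowing pO2) samples drawn per tumor voxel
for pt in rng.random((5, 3)):
    print("sampled pO2 = %.1f mmHg at" % trilinear(po2, axes, pt),
          np.round(pt, 2))
```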

  19. Improving PWR core simulations by Monte Carlo uncertainty analysis and Bayesian inference

    Castro, Emilio; Buss, Oliver; Garcia-Herranz, Nuria; Hoefer, Axel; Porsch, Dieter

    2016-01-01

    A Monte Carlo-based Bayesian inference model is applied to the prediction of reactor operation parameters of a PWR nuclear power plant. In this non-perturbative framework, high-dimensional covariance information describing the uncertainty of microscopic nuclear data is combined with measured reactor operation data in order to provide statistically sound, well founded uncertainty estimates of integral parameters, such as the boron letdown curve and the burnup-dependent reactor power distribution. The performance of this methodology is assessed in a blind test approach, where we use measurements of a given reactor cycle to improve the prediction of the subsequent cycle. As it turns out, the resulting improvement of the prediction quality is impressive. In particular, the prediction uncertainty of the boron letdown curve, which is of utmost importance for the planning of the reactor cycle length, can be reduced by one order of magnitude by including the boron concentration measurement information of the previous...

  20. Analysis of Light Transport Features in Stone Fruits Using Monte Carlo Simulation.

    Chizhu Ding

    The propagation of light in stone fruit tissue was modeled using the Monte Carlo (MC) method. Peaches were used as the representative model of stone fruits. The effects of the fruit core and the skin on light transport features in the peaches were assessed. It is suggested that the skin, flesh and core should be considered separately, with different parameters, to accurately simulate light propagation in intact stone fruit. The detection efficiency was evaluated by the percentage of effective photons and the detection sensitivity of the flesh tissue. The fruit skin decreases the detection efficiency, especially in the region close to the incident point. The choices of the source-detector distance, detection angle and source intensity were discussed. Accurate MC simulations may result in better insight into light propagation in stone fruit and aid in achieving optimal fruit quality inspection without extensive experimental measurements.

  1. Analysis of aerial survey data on Florida manatee using Markov chain Monte Carlo.

    Craig, B A; Newton, M A; Garrott, R A; Reynolds, J E; Wilcox, J R

    1997-06-01

    We assess population trends of the Atlantic coast population of Florida manatee, Trichechus manatus latirostris, by reanalyzing aerial survey data collected between 1982 and 1992. To do so, we develop an explicit biological model that accounts for the method by which the manatees are counted, the mammals' movement between surveys, and the behavior of the population total over time. Bayesian inference, enabled by Markov chain Monte Carlo, is used to combine the survey data with the biological model. We compute marginal posterior distributions for all model parameters and predictive distributions for future counts. Several conclusions, such as a decreasing population growth rate and low sighting probabilities, are consistent across different prior specifications. PMID:9192449
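
    A toy version of this kind of inference (a deliberately simplified growth-and-sighting model with invented numbers, not the paper's full biological model): counts are treated as binomial sightings of an exponentially growing population, and a random-walk Metropolis sampler explores the posterior.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
years = np.arange(11)
counts = np.array([820, 850, 790, 900, 930, 880, 960, 1000, 940, 1010, 1050])

def log_post(theta):
    """Log posterior for (initial size N0, growth rate r, sighting prob p)."""
    n0, r, p = theta
    if n0 <= 0 or not (0 < p < 1):
        return -np.inf
    N = np.round(n0 * np.exp(r * years)).astype(int)
    return (stats.binom.logpmf(counts, N, p).sum()
            + stats.norm.logpdf(r, 0.0, 0.1))   # weak prior on growth rate

theta = np.array([2000.0, 0.02, 0.45])
chain, lp = [], log_post(theta)
for _ in range(20_000):
    prop = theta + rng.normal(0, [50.0, 0.005, 0.01])
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:     # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
chain = np.array(chain[5000:])                  # discard burn-in
print("posterior mean growth rate r = %.3f" % chain[:, 1].mean())
```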

  2. Calculation and analysis of heat source of PWR assemblies based on Monte Carlo method

    When fission occurs in the nuclear fuel of a reactor core, numerous neutrons and γ rays are released, depositing energy in the fuel components and giving rise to effects such as thermal stress and radiation damage that influence the safe operation of the reactor. The three-dimensional Monte Carlo transport code MCNP, with a continuous cross-section database based on the ENDF/B series, was used to calculate the heat deposition rate in reference assemblies of a PWR loaded in an 18-month short refueling cycle mode, obtaining precise values for the control rod, the thimble plug and the new Gd-bearing burnable poison rod, so as to provide a basis for reactor design and safety verification. (authors)

  3. Benchmark analysis of criticality experiments in the TRIGA mark II using a continuous energy Monte Carlo code MCNP

    The criticality analysis of the TRIGA-II benchmark experiment at the Musashi Institute of Technology Research Reactor (MuITR, 100 kW) was performed with the three-dimensional continuous-energy Monte Carlo code MCNP4A. To minimize errors due to an inexact geometry model, all fresh fuels and control rods as well as the vicinity of the core were precisely modeled. Effective multiplication factors (keff) in the initial core critical experiment and in the excess reactivity adjustment for several fuel-loading patterns, as well as the fuel element reactivity worth distributions, were used in the validation process of the physical model and neutron cross-section data from the ENDF/B-V evaluation. The calculated keff overestimated the experimental data by about 1.0%Δk/k for both the initial core and the several fuel-loading arrangements (fuels or graphite elements added only to the outer ring), but the discrepancy increased to 1.8%Δk/k for some fuel-loading patterns (graphite elements inserted into the inner ring). The comparison of the fuel element worth distributions showed the same tendency. All in all, the agreement between the MCNP predictions and the experimentally determined values is good, which indicates that the Monte Carlo model is adequate for simulating the criticality of the TRIGA-II reactor. (author)

  4. Monte Carlo simulation for soot dynamics

    Zhou Kun

    2012-01-01

    A new Monte Carlo method termed Comb-like frame Monte Carlo is developed to simulate the soot dynamics. Detailed stochastic error analysis is provided. Comb-like frame Monte Carlo is coupled with the gas phase solver Chemkin II to simulate soot formation in a 1-D premixed burner stabilized flame. The simulated soot number density, volume fraction, and particle size distribution all agree well with the measurements available in the literature. The origin of the bimodal particle size distribution is revealed with quantitative proof.

  5. Monte carlo simulation for soot dynamics

    Zhou, Kun

    2012-01-01

    A new Monte Carlo method termed Comb-like frame Monte Carlo is developed to simulate the soot dynamics. Detailed stochastic error analysis is provided. Comb-like frame Monte Carlo is coupled with the gas phase solver Chemkin II to simulate soot formation in a 1-D premixed burner stabilized flame. The simulated soot number density, volume fraction, and particle size distribution all agree well with the measurements available in the literature. The origin of the bimodal particle size distribution is revealed with quantitative proof.

  6. SimulRad: a Java interface for a Monte-Carlo simulation code to visualize in 3D the early stages of water radiolysis

    Using a Fortran step-by-step Monte-Carlo simulation code of liquid water radiolysis and the Java programming language, we have developed a Java interface called SimulRad. This interface enables a user, in a three-dimensional environment, either to visualize the spatial distribution of all reactive species present in the track of an ionizing particle at a chosen simulation time, or to present an animation of the chemical development of the particle track over a chosen time interval (between ∼10^-12 and 10^-6 s). It also allows one to select a particular radiation-induced cluster of species to view, in fine detail, the chemical reactions that occur between these species.

  7. Estimate of the melanin content in human hairs by the inverse Monte-Carlo method using a system for digital image analysis

    Based on digital image analysis and the inverse Monte-Carlo method, a proximate analysis method is developed and the optical properties of hairs of different types are estimated in three spectral ranges corresponding to the three colour components. The scattering and absorption properties of hairs are separated for the first time by using the inverse Monte-Carlo method. The content of different types of melanin in hairs is estimated from the absorption coefficient. It is shown that the dominating type of melanin in dark hairs is eumelanin, whereas in light hairs pheomelanin dominates. (special issue devoted to multiple radiation scattering in random media)
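
    A schematic inverse Monte-Carlo loop (our illustration: a toy semi-infinite photon random walk serves as the forward model, and all optical constants and the target reflectance are made up): the absorption coefficient is adjusted by bisection until the simulated colour-channel reflectance matches the measured one.

```python
import numpy as np

rng = np.random.default_rng(11)

def mc_reflectance(mu_a, mu_s, n_photons=5_000):
    """Fraction of photons re-emitted from a semi-infinite medium."""
    mu_t = mu_a + mu_s
    reflected = 0
    for _ in range(n_photons):
        z, mu = 0.0, 1.0                     # depth and direction cosine
        while True:
            z += mu * -np.log(rng.random()) / mu_t
            if z < 0:
                reflected += 1               # escaped back out
                break
            if rng.random() >= mu_s / mu_t:  # absorbed
                break
            mu = 2 * rng.random() - 1        # isotropic scatter
    return reflected / n_photons

measured, mu_s = 0.30, 20.0                  # per-channel target, mm^-1
lo, hi = 0.01, 10.0
for _ in range(15):                          # bisection on mu_a
    mid = 0.5 * (lo + hi)
    if mc_reflectance(mid, mu_s) > measured:
        lo = mid                             # too reflective: more absorption
    else:
        hi = mid
print("retrieved absorption coefficient ~ %.2f mm^-1" % (0.5 * (lo + hi)))
```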

  8. Monte Carlo analysis of direct measurements of the thermal eta (.025 eV) for 233U and 235U (LWBR development program)

    Significant inconsistencies have been observed between measured values of η and of ν, which are related by η = ν/(1+α). In support of the LWBR program, manganese bath measurements of η for 233U and 235U employing monoenergetic 0.025 eV neutrons were analyzed using Monte Carlo methods and ENDF-4 cross sections. The calculated (η*/η_2200) ratios are essentially independent of the values assumed for η_2200. The standard deviation on our calculated values of η includes Monte Carlo, cross-section, and experimental uncertainties. The Monte Carlo analysis was confirmed by calculating the measured quantities used by the experimentalists in their reduction of η* to η. (4 figures, 12 tables) (U.S.)

  9. Monte Carlo optimization of sample dimensions of a 241Am-Be source-based PGNAA setup for water rejects analysis

    Idiri, Z.; Mazrou, H.; Beddek, S.; Amokrane, A.; Azbouche, A.

    2007-07-01

    The present paper describes the optimization of the sample dimensions of a 241Am-Be neutron source-based prompt gamma neutron activation analysis (PGNAA) setup intended for in situ analysis of environmental water rejects. The optimal dimensions were achieved following extensive Monte Carlo neutron flux calculations using the MCNP5 computer code. A validation process was performed for the proposed preliminary setup with measurements of the thermal neutron flux by the activation technique using indium foils, bare and covered with a cadmium sheet. Sensitivity calculations were subsequently performed to simulate real conditions of in situ analysis by determining the thermal neutron flux perturbations in samples according to changes in chlorine and organic matter concentrations. The desired optimal sample dimensions were finally achieved once the established constraints regarding neutron damage to the semiconductor gamma detector, pulse pile-up, dead time and radiation hazards were fully met.

  10. Integrated Markov Chain Monte Carlo (MCMC) analysis of primordial non-Gaussianity (fNL) in the recent CMB data

    We have performed a Markov Chain Monte Carlo (MCMC) analysis of primordial non-Gaussianity (fNL) using the WMAP bispectrum and power spectrum. In our analysis, we have simultaneously constrained fNL and the cosmological parameters so that the uncertainties of the cosmological parameters can properly propagate into the fNL estimation. Investigating the parameter likelihoods deduced from the MCMC samples, we find a slight deviation from a Gaussian shape, which makes a Fisher-matrix estimation less accurate. Therefore, we have estimated the confidence interval of fNL by exploring the parameter likelihood without using the Fisher matrix. We find that the best-fit values of our analysis agree well with other results, but the confidence interval is slightly different.

  11. A Monte Carlo study of an energy-weighted algorithm for radionuclide analysis with a plastic scintillation detector

    Nuisance and false alarms due to naturally occurring radioactive material (NORM) are major problems facing radiation portal monitors (RPMs) for the screening of illicit radioactive materials in airports and ports. Based on energy-weighted counts, we suggest an algorithm that distinguishes radioactive nuclides with a plastic scintillation detector that has poor energy resolution. Our simulation study, using a Monte Carlo method, demonstrated that man-made radionuclides can be separated from NORM by using a conventional RPM. - Highlights: • Radiation portal monitor using plastic scintillator was modeled and the energy spectra of six radionuclides were assessed. • Energy-weighted algorithm which enables radionuclide analysis with plastic scintillator was suggested and evaluated. • The cases of moving and shielding effect were evaluated and simultaneous radionuclide identification was carried out. • Analysis of the simulated spectra with suggested method shows clear results to enable the radionuclide identification
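
    The energy-weighting idea itself is compact; in this sketch (synthetic spectra, a crude response model, and arbitrary source energies, none of which are from the paper) each event contributes its deposited energy rather than a unit count, which shifts the emphasis of the histogram toward the smeared high-energy features:

```python
import numpy as np

rng = np.random.default_rng(12)
edges = np.linspace(0.0, 3.0, 301)            # MeV bin edges

def simulate_deposits(full_energy, n=200_000):
    """Crude stand-in for a plastic-scintillator response: a Compton-like
    continuum up to the full energy with 15% Gaussian smearing."""
    e = full_energy * np.sqrt(rng.random(n))  # continuum weighted upward
    return np.clip(e * rng.normal(1.0, 0.15, n), 0, None)

deposits = np.concatenate([simulate_deposits(1.46),    # e.g. 40K (NORM)
                           simulate_deposits(0.66)])   # e.g. 137Cs (man-made)
plain, _ = np.histogram(deposits, edges)
weighted, _ = np.histogram(deposits, edges, weights=deposits)

centers = 0.5 * (edges[:-1] + edges[1:])
print("plain-count maximum bin:     %.2f MeV" % centers[np.argmax(plain)])
print("energy-weighted maximum bin: %.2f MeV" % centers[np.argmax(weighted)])
```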

  12. Analysis of the KANT experiment on beryllium using TRIPOLI-4 Monte Carlo code

    Beryllium is an important material in fusion technology for multiplying neutrons in blankets. However, beryllium nuclear data are differently presented in modern nuclear data evaluations. Recent investigations with the TRIPOLI-4 Monte Carlo simulation of the tritium breeding ratio (TBR) demonstrated that beryllium reaction data are the main source of the calculation uncertainties between ENDF/B-VII.0 and JEFF-3.1. To clarify the calculation uncertainties from data libraries on beryllium, in this study TRIPOLI-4 calculations of the Karlsruhe Neutron Transmission (KANT) experiment have been performed by using ENDF/B-VII.0 and new JEFF-3.1.1 data libraries. The KANT Experiment on beryllium has been used to validate neutron transport codes and nuclear data libraries. An elaborated KANT experiment benchmark has been compiled and published in the NEA/SINBAD database and it has been used as reference in the present work. The neutron multiplication in bulk beryllium assemblies was considered with a central D-T neutron source. Neutron leakage spectra through the 5, 10, and 17 cm thick spherical beryllium shells were calculated and five-group partial leakage multiplications were reported and discussed. In general, improved C/E ratios on neutron leakage multiplications have been obtained. Both ENDF/B-VII.0 and JEFF-3.1.1 beryllium data libraries of TRIPOLI-4 are acceptable now for fusion neutronics calculations.

  13. Monte Carlo Simulation of the EXO Gaseous Xenon Time Projection Chamber and Neural Network Analysis

    Leonard, Francois

    Neutrinoless double beta decay has attracted much interest since its observation would reveal the neutrino masses and determine the Majorana nature of the particle. EXO is among the next generation of experiments dedicated to the search for this phenomenon. A part of the collaboration is developing a gas phase time projection chamber prototype to study the performance of this technique for measuring the half-life of neutrinoless double beta decay in 136Xe. A Monte Carlo simulation of this prototype has been developed using the Geant4 toolkit and the Garfield and Maxwell programs to simulate ionizing events in the detector, the production and propagation of the scintillation and electroluminescence signals and their distribution on CsI photocathodes. The simulation was used to study the uniformity of light deposition on the photocathodes, the effect of the natural gamma background radiation on the detector and its response to calibration gamma sources. Furthermore, data produced with this simulation were analyzed with a neural network algorithm using the multi-layer perceptron class implemented in ROOT. The performance of this algorithm was studied for vertex reconstruction of ionizing events in the detector as well as for classification of tracks for background rejection.

  14. Analysis of probabilistic short run marginal cost using Monte Carlo method

    Gutierrez-Alcaraz, G.; Navarrete, N.; Tovar-Hernandez, J.H.; Fuerte-Esquivel, C.R. [Inst. Tecnologico de Morelia, Michoacan (Mexico). Dept. de Ing. Electrica y Electronica; Mota-Palomino, R. [Inst. Politecnico Nacional (Mexico). Escuela Superior de Ingenieria Mecanica y Electrica

    1999-11-01

    The structure of the electricity supply industry is undergoing dramatic changes in order to provide new service options. The main aim of this restructuring is to allow generating units the freedom of selling electricity to anybody they wish at a price determined by market forces. Several methodologies have been proposed in order to quantify the different costs associated with the new services offered by electrical utilities operating under a deregulated market. The new wave of pricing is heavily influenced by economic principles designed to price products to elastic market segments on the basis of marginal costs. Hence, spot pricing provides the economic structure for many of the new services. At the same time, the pricing is influenced by uncertainties associated with the electric system state variables which define its operating point. In this paper, nodal probabilistic short-run marginal costs are calculated, considering the load, the production cost and the availability of generators as random variables. The effect of the electrical network is evaluated using linearized models. A thermal economic dispatch is used to simulate each operational condition generated by the Monte Carlo method on a small fictitious power system in order to assess the effect of the random variables on the energy trading. This is carried out first by introducing each random variable one by one, and finally by considering the random interaction of all of them.

  15. Analysis of carbon deposition on the first wall of LHD by Monte Carlo simulation

    The deposition of impurities on the surfaces of plasma confinement devices is one of the essential issues for present devices and also for future fusion devices. In the Large Helical Device (LHD), it is necessary to reveal the fundamental characteristics of impurity transport and deposition by simulation studies along with experimental studies. In the present paper, a simulation scheme for carbon deposition on the first wall of LHD and its results are discussed. The geometry of the LHD divertor and the configuration of the plasma are newly implemented in the Monte Carlo code ERO. The profiles of the background plasma are calculated numerically by a 1D two-fluid model along a magnetic field line. The spatial distributions of the carbon impurities are investigated for a typical set of plasma parameters in LHD. The simulation results indicate that the deposition is caused by neutral carbon particles from the two facing divertor plates. The divertor opposite to the first wall makes a smaller contribution than the adjacent one because of ionization in the divertor plasma. Chemically sputtered impurities cause more deposition near the divertor than physically sputtered ones because the atomic processes of methane molecules lead to isotropic particle velocities (copyright 2010 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  16. Economic analysis using Monte Carlo simulation on Xs reservoir Badak field east Kalimantan

    Badak field, located in the Mahakam river delta in East Kalimantan, is a gas producer. The field was discovered in 1972 by VICO. Badak field is the main gas supplier to the Bontang LNG plant, from which gas is exported to Japan, South Korea and Taiwan, as well as being used as the main feed for the East Kalimantan fertilizer plant. To meet the gas demand, field development as well as exploration wells are continued. For these exploration wells, gas-in-place determination, gas production rate and economic evaluation play an important role. The effect of altering the gas production rate on net present value, and the effect of altering the discount factor on the rate-of-return curve, obtained using Monte Carlo simulation, are presented in this paper. Based on the simulation results, the upper limit of the initial gas in place is 1.82 BSCF and the lower limit is 0.27 BSCF; the most likely net present value (in million US $) corresponds to a rate of return ranging from -30 to 33.5 percent.

  17. Monte Carlo analysis of the Neutron Standards Laboratory of the CIEMAT

    The neutron field produced by the calibration sources of the Neutron Standards Laboratory of the Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas (CIEMAT) was characterized by means of Monte Carlo methods. The laboratory has two neutron calibration sources, 241AmBe and 252Cf, which are stored in a water pool and are placed on the calibration bench using remotely controlled systems. To characterize the neutron field, a three-dimensional model of the room was built that included the stainless steel bench, the irradiation table and the storage pool. The source models included the double steel encapsulation as cladding. To determine the effect produced by the presence of the different components of the room, the neutron spectra, the total fluence and the ambient dose equivalent rate at 100 cm from the source were considered during the characterization. The walls, floor and ceiling of the room cause the largest modification of the spectra and of the integral values of the fluence and the ambient dose equivalent rate. (Author)

  18. Comparison of Sensitivity Analysis Techniques in Monte Carlo Codes for Multi-Region Criticality Calculations

    Recently, sensitivity and uncertainty (S/U) techniques have been used to determine the area of applicability (AOA) of critical experiments used for code and data validation. These techniques require the computation of energy-dependent sensitivity coefficients for multiple reaction types for every nuclide in each system included in the validation. The sensitivity coefficients, as used for this application, predict the relative change in the system multiplication factor due to a relative change in a given cross-section data component or material number density. Thus, a sensitivity coefficient S for some macroscopic cross section Σ is expressed as S = (Σ/k)(∂k/∂Σ), where k is the effective neutron multiplication factor for the system. The sensitivity coefficient for the density of a material is equivalent to that of the total macroscopic cross section. Two distinct techniques have been employed in Monte Carlo radiation transport codes for the computation of sensitivity coefficients. The first, and most commonly employed, is the differential sampling technique. The second is the adjoint-based perturbation theory approach. This paper briefly describes each technique and presents the results of a simple test case, pointing out discrepancies in the computed results and proposing a remedy to these discrepancies.
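
    As a worked illustration of the definition (not of either computation technique), the sensitivity coefficient can be checked by central differences on a one-group toy expression for k, with made-up data:

```python
# One-group bare-reactor toy model: k = nuSigma_f / (Sigma_a + D*B^2)
nu_sigma_f, sigma_a, db2 = 0.0550, 0.0400, 0.0125   # made-up one-group data

def k_eff(sig_a):
    return nu_sigma_f / (sig_a + db2)

# Central difference evaluation of S = (Sigma/k) * dk/dSigma
h = 1e-6
dk = (k_eff(sigma_a + h) - k_eff(sigma_a - h)) / (2 * h)
S = sigma_a / k_eff(sigma_a) * dk
print("absorption sensitivity S = %.4f" % S)   # analytic: -Sigma_a/(Sigma_a+D*B^2)
```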

  19. Markov chain Monte Carlo based analysis of post-translationally modified VDAC1 gating kinetics

    Shivendra Tewari

    2015-01-01

    The voltage-dependent anion channel (VDAC) is the main conduit for the permeation of solutes (including nucleotides and metabolites) of up to 5 kDa across the mitochondrial outer membrane (MOM). Recent studies suggest that VDAC activity is regulated via post-translational modifications (PTMs). Yet the nature and effect of these modifications are not understood. Herein, single-channel currents of wild-type, nitrosated and phosphorylated VDAC are analyzed using a generalized continuous-time Markov chain Monte Carlo (MCMC) method. The developed method describes three distinct conducting states (open, half-open, and closed) of VDAC1 activity. Lipid bilayer experiments are also performed to record single VDAC activity under un-phosphorylated and phosphorylated conditions, and are analyzed using the developed stochastic search method. Experimental data show significant alteration in VDAC gating kinetics and conductance as a result of PTMs. The effect of PTMs on VDAC kinetics is captured in the parameters associated with the identified Markov model. The stationary distributions of the Markov model suggest that nitrosation of VDAC not only decreased its conductance but also significantly locked VDAC in a closed state. On the other hand, the stationary distributions of the model associated with un-phosphorylated and phosphorylated VDAC suggest a reversal in channel conformation from a relatively closed state to an open state. Model analyses of the nitrosated data suggest that the faster reaction of nitric oxide with the Cys-127 thiol group might be responsible for the biphasic effect of nitric oxide on basal VDAC conductance.

  20. Fundamentals of Monte Carlo

    Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a PowerPoint presentation that serves as lecture material for the Parallel Computing summer school. It covers the fundamentals of Monte Carlo. Welcome to Los Alamos, the birthplace of “Monte Carlo” for computational physics. Stanislaw Ulam, John von Neumann, and Nicholas Metropolis are credited as the founders of modern Monte Carlo methods. The name “Monte Carlo” was chosen in reference to the Monte Carlo Casino in Monaco (purportedly a place where Ulam’s uncle went to gamble). The central idea (for us) – to use computer-generated “random” numbers to determine expected values or estimate equation solutions – has since spread to many fields. "The first thoughts and attempts I made to practice [the Monte Carlo Method] were suggested by a question which occurred to me in 1946 as I was convalescing from an illness and playing solitaires. The question was what are the chances that a Canfield solitaire laid out with 52 cards will come out successfully? After spending a lot of time trying to estimate them by pure combinatorial calculations, I wondered whether a more practical method than “abstract thinking” might not be to lay it out say one hundred times and simply observe and count the number of successful plays... Later [in 1946], I described the idea to John von Neumann, and we began to plan actual calculations." - Stanislaw Ulam.
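
    Ulam's "lay it out one hundred times" idea in miniature, estimating a card probability by simulation instead of combinatorics (here the probability that a shuffled 52-card deck leaves no card in its original position, the derangement probability, which tends to 1/e ≈ 0.3679):

```python
import numpy as np

rng = np.random.default_rng(13)
n_trials, hits = 100_000, 0
for _ in range(n_trials):
    deck = rng.permutation(52)
    if np.all(deck != np.arange(52)):   # no card left in its original spot
        hits += 1
print("Monte Carlo estimate: %.4f (exact ~ 0.3679)" % (hits / n_trials))
```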

  1. Approach of technical decision-making by element flow analysis and Monte-Carlo simulation of municipal solid waste stream

    TIAN Bao-guo; SI Ji-tao; ZHAO Yan; WANG Hong-tao; HAO Ji-ming

    2007-01-01

    This paper deals with the procedure and methodology that can be used to select the optimal treatment and disposal technology for municipal solid waste (MSW), and to provide practical and effective technical support to policy-making, on the basis of a study of the solid waste management status and development trends in China and abroad. Focusing on the various treatment and disposal technologies and processes for MSW, this study established a Monte-Carlo mathematical model of cost minimization for MSW handling subject to environmental constraints. A new method of element stream (such as C, H, O, N, S) analysis in combination with economic stream analysis of MSW was developed. By following the streams of different treatment processes consisting of various techniques from generation, separation, transfer, transport, treatment, recycling and disposal of the wastes, the element constitution as well as its economic distribution in terms of possibility functions was identified. Every technique step was evaluated economically. The Monte-Carlo method was then applied for model calibration. Sensitivity analysis was also carried out to identify the most sensitive factors. Model calibration indicated that landfill with power generation from landfill gas was economically the optimal technology at the present stage, under the condition that more than 58% of the C, H, O, N and S goes to landfill. Whether or not to generate electricity was the most sensitive factor. If the landfilling cost increases, MSW separation treatment is recommended, with screening first, followed by partial incineration and partial composting, with residue landfilling. The possibility of selecting incineration as the optimal technology is affected by the city scale. For big cities and metropolises with large MSW generation, the possibility of constructing large-scale incineration facilities increases, whereas for middle-sized and small cities, the effectiveness of incinerating waste decreases.

  2. Approach of technical decision-making by element flow analysis and Monte-Carlo simulation of municipal solid waste stream.

    Tian, Bao-Guo; Si, Ji-Tao; Zhao, Yan; Wang, Hong-Tao; Hao, Ji-Ming

    2007-01-01

    This paper deals with the procedure and methodology that can be used to select the optimal treatment and disposal technology for municipal solid waste (MSW), and to provide practical and effective technical support to policy-making, on the basis of a study of the solid waste management status and development trends in China and abroad. Focusing on the various treatment and disposal technologies and processes for MSW, this study established a Monte-Carlo mathematical model of cost minimization for MSW handling subject to environmental constraints. A new method of element stream (such as C, H, O, N, S) analysis in combination with economic stream analysis of MSW was developed. By following the streams of different treatment processes consisting of various techniques from generation, separation, transfer, transport, treatment, recycling and disposal of the wastes, the element constitution as well as its economic distribution in terms of possibility functions was identified. Every technique step was evaluated economically. The Monte-Carlo method was then applied for model calibration. Sensitivity analysis was also carried out to identify the most sensitive factors. Model calibration indicated that landfill with power generation from landfill gas was economically the optimal technology at the present stage, under the condition that more than 58% of the C, H, O, N and S goes to landfill. Whether or not to generate electricity was the most sensitive factor. If the landfilling cost increases, MSW separation treatment is recommended, with screening first, followed by partial incineration and partial composting, with residue landfilling. The possibility of selecting incineration as the optimal technology is affected by the city scale. For big cities and metropolises with large MSW generation, the possibility of constructing large-scale incineration facilities increases, whereas for middle-sized and small cities, the effectiveness of incinerating waste decreases. PMID:17915696

  3. Mass flow rate sensitivity and uncertainty analysis in natural circulation boiling water reactor core from Monte Carlo simulations

    Our aim was to evaluate the sensitivity and uncertainty of the performance of a natural circulation boiling water reactor (NCBWR) with respect to the core mass flow rate. This analysis was carried out through Monte Carlo simulations of sizes up to 40,000, and a size of 25,000 repetitions was considered valid for routine applications. A simplified boiling water reactor (SBWR) was used as an application example of the Monte Carlo method. The numerical code simulating SBWR performance comprises a one-dimensional thermal-hydraulics model with non-equilibrium thermodynamics and a non-homogeneous flow approximation, together with one-dimensional fuel rod heat transfer. The neutron processes were simulated with a point reactor kinetics model with six groups of delayed neutrons. The sensitivity was evaluated in terms of 99% confidence intervals of the mean, to understand the range of mean values that may represent the entire statistical population of performance variables. Regression analysis with mass flow rate as the predictor variable showed statistically valid linear correlations for both neutron flux and fuel temperature, and a quadratic relationship for the void fraction. No statistically valid correlation was observed for the total heat flux as a function of the mass flow rate, although the heat flux at individual nodes was positively correlated with this variable. These correlations are useful for the study, analysis and design of any NCBWR. The uncertainties propagated as follows: for a 10% change in the core mass flow rate, the responses of neutron power, total heat flux, average fuel temperature and average void fraction changed by 8.74%, 7.77%, 2.74% and 0.58%, respectively.
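
    The 99% confidence interval of the mean used in this record is simple to reproduce. The sketch below is not the authors' reactor code; the response function is an invented stand-in for a performance variable such as neutron power, and only the interval construction is the point.

        import random
        import statistics

        Z99 = 2.576  # two-sided 99% normal quantile

        def response(mass_flow):
            # Hypothetical stand-in for a performance variable; a real
            # NCBWR thermal-hydraulics/kinetics code would go here.
            return 1.0 + 0.8 * mass_flow + random.gauss(0.0, 0.05)

        def mean_ci(n=25000, mass_flow=1.0, seed=2):
            random.seed(seed)
            samples = [response(mass_flow) for _ in range(n)]
            m = statistics.fmean(samples)
            half = Z99 * statistics.stdev(samples) / n ** 0.5
            return m, (m - half, m + half)

        if __name__ == "__main__":
            m, (lo, hi) = mean_ci()
            print(f"mean = {m:.4f}, 99% CI = ({lo:.4f}, {hi:.4f})")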

  4. Prediction and analysis of the time and energy resolution of scintillation-detectors by Monte-Carlo simulations

    A Monte-Carlo model for the formation of scintillation-detector signals is presented that allows the time and energy resolution of particular scintillator-photomultiplier combinations to be predicted, relying primarily on their basic data-sheet properties such as light yield, decay time, quantum efficiency, and transit time spread. At the same time, the model provides a deeper understanding of the performance-limiting factors and stimulates the development of improved methods for the analysis of detector output signals. The simulation results are compared with high-speed digitizer measurements of signals from a number of widely used scintillation materials such as LYSO, BaF2, LaBr3, NaI, and others.
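
    The data-sheet quantities named above map naturally onto a toy simulation. The sketch below is an assumption-laden illustration, not the published model: the light yield, decay time, quantum efficiency and transit time spread values are invented, and timing is taken from the first detected photoelectron only.

        import random
        import statistics

        def first_photon_time(light_yield=5000, decay_ns=40.0,
                              qe=0.25, tts_sigma_ns=0.3):
            """Arrival time (ns) of the earliest detected photoelectron."""
            times = []
            for _ in range(light_yield):
                if random.random() < qe:                     # quantum efficiency
                    t = random.expovariate(1.0 / decay_ns)   # scintillator decay
                    times.append(t + random.gauss(0.0, tts_sigma_ns))  # PMT TTS
            return min(times) if times else None

        def time_resolution(n_events=2000, seed=3):
            random.seed(seed)
            stamps = [first_photon_time() for _ in range(n_events)]
            return statistics.stdev(t for t in stamps if t is not None)

        if __name__ == "__main__":
            print(f"sigma_t ~ {time_resolution():.4f} ns")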

  5. Use of Monte Carlo simulation for computational analysis of critical systems on IPPE's facility addressing needs of nuclear safety

    The BFS-1 critical facility was built at the Institute of Physics and Power Engineering (Obninsk, Russia) for full-scale modeling of fast-reactor cores, blankets, in-vessel shielding, and storage. BFS-1 is a fast-reactor assembly; however, it is very flexible and can easily be reconfigured to represent numerous other types of reactor designs. This paper describes specific problems in the calculation of neutron-physics characteristics of integral experiments performed at the BFS facility. The available integral experiments performed on different critical configurations of the BFS facility were analysed. Calculations of criticality, central reaction rate ratios, and fission rate distributions were carried out with the MCNP5 Monte-Carlo code using different files of evaluated nuclear data. MCNP calculations with a multigroup library of 299 energy groups were also made for comparison with the pointwise library calculations. (authors)

  6. Spatio-temporal spike train analysis for large scale networks using the maximum entropy principle and Monte Carlo method

    Understanding the dynamics of neural networks is a major challenge in experimental neuroscience. For that purpose, a modelling of the recorded activity that reproduces the main statistics of the data is required. In the first part, we present a review on recent results dealing with spike train statistics analysis using maximum entropy models (MaxEnt). Most of these studies have focused on modelling synchronous spike patterns, leaving aside the temporal dynamics of the neural activity. However, the maximum entropy principle can be generalized to the temporal case, leading to Markovian models where memory effects and time correlations in the dynamics are properly taken into account. In the second part, we present a new method based on Monte Carlo sampling which is suited for the fitting of large-scale spatio-temporal MaxEnt models. The formalism and the tools presented here will be essential to fit MaxEnt spatio-temporal models to large neural ensembles. (paper)

  7. Development of synthetic velocity - depth damage curves using a Weighted Monte Carlo method and Logistic Regression analysis

    Vozinaki, Anthi Eirini K.; Karatzas, George P.; Sibetheros, Ioannis A.; Varouchakis, Emmanouil A.

    2014-05-01

    Damage curves are the most significant component of the flood loss estimation models. Their development is quite complex. Two types of damage curves exist, historical and synthetic curves. Historical curves are developed from historical loss data from actual flood events. However, due to the scarcity of historical data, synthetic damage curves can be alternatively developed. Synthetic curves rely on the analysis of expected damage under certain hypothetical flooding conditions. A synthetic approach was developed and presented in this work for the development of damage curves, which are subsequently used as the basic input to a flood loss estimation model. A questionnaire-based survey took place among practicing and research agronomists, in order to generate rural loss data based on the responders' loss estimates, for several flood condition scenarios. In addition, a similar questionnaire-based survey took place among building experts, i.e. civil engineers and architects, in order to generate loss data for the urban sector. By answering the questionnaire, the experts were in essence expressing their opinion on how damage to various crop types or building types is related to a range of values of flood inundation parameters, such as floodwater depth and velocity. However, the loss data compiled from the completed questionnaires were not sufficient for the construction of workable damage curves; to overcome this problem, a Weighted Monte Carlo method was implemented, in order to generate extra synthetic datasets with statistical properties identical to those of the questionnaire-based data. The data generated by the Weighted Monte Carlo method were processed via Logistic Regression techniques in order to develop accurate logistic damage curves for the rural and the urban sectors. A Python-based code was developed, which combines the Weighted Monte Carlo method and the Logistic Regression analysis into a single code (WMCLR Python code). Each WMCLR code execution

  8. PDF Weaving - Linking Inventory Data and Monte Carlo Uncertainty Analysis in the Study of how Disturbance Affects Forest Carbon Storage

    Healey, S. P.; Patterson, P.; Garrard, C.

    2014-12-01

    Altered disturbance regimes are likely a primary mechanism by which a changing climate will affect storage of carbon in forested ecosystems. Accordingly, the National Forest System (NFS) has been mandated to assess the role of disturbance (harvests, fires, insects, etc.) on carbon storage in each of its planning units. We have developed a process which combines 1990-era maps of forest structure and composition with high-quality maps of subsequent disturbance type and magnitude to track the impact of disturbance on carbon storage. This process, called the Forest Carbon Management Framework (ForCaMF), uses the maps to apply empirically calibrated carbon dynamics built into a widely used management tool, the Forest Vegetation Simulator (FVS). While ForCaMF offers locally specific insights into the effect of historical or hypothetical disturbance trends on carbon storage, its dependence upon the interaction of several maps and a carbon model poses a complex challenge in terms of tracking uncertainty. Monte Carlo analysis is an attractive option for tracking the combined effects of error in several constituent inputs as they impact overall uncertainty. Monte Carlo methods iteratively simulate alternative values for each input and quantify how much outputs vary as a result. Variation of each input is controlled by a Probability Density Function (PDF). We introduce a technique called "PDF Weaving," which constructs PDFs that ensure that simulated uncertainty precisely aligns with uncertainty estimates that can be derived from inventory data. This hard link with inventory data (derived in this case from FIA - the US Forest Service Forest Inventory and Analysis program) both provides empirical calibration and establishes consistency with other types of assessments (e.g., habitat and water) for which NFS depends upon FIA data. Results from the NFS Northern Region will be used to illustrate PDF weaving and insights gained from ForCaMF about the role of disturbance in carbon

  9. Analysis of the Tandem Calibration Method for Kerma Area Product Meters Via Monte Carlo Simulations

    The IAEA recommends that uncertainties of dosimetric measurements in diagnostic radiology for risk assessment and quality assurance should be less than 7% on the confidence level of 95%. This accuracy is difficult to achieve with kerma area product (KAP) meters currently used in clinics. The reasons range from the high energy dependence of KAP meters to the wide variety of configurations in which KAP meters are used and calibrated. The tandem calibration method introduced by Poeyry, Komppa and Kosunen in 2005 has the potential to make the calibration procedure simpler and more accurate compared to the traditional beam-area method. In this method, two positions of the reference KAP meter are of interest: (a) a position close to the field KAP meter and (b) a position 20 cm above the couch. In the close position, the distance between the two KAP meters should be at least 30 cm to reduce the effect of back scatter. For the other position, which is recommended for the beam-area calibration method, the distance of 70 cm between the KAP meters was used in this study. The aim of this work was to complement existing experimental data comparing the two configurations with Monte Carlo (MC) simulations. In a geometry consisting of a simplified model of the VacuTec 70157 type KAP meter, the MCNP code was used to simulate the kerma area product, PKA, for the two (close and distant) reference planes. It was found that PKA values for the tube voltage of 40 kV were about 2.5% lower for the distant plane than for the close one. For higher tube voltages, the difference was smaller. The difference was mainly caused by attenuation of the X ray beam in air. Since the problem with high uncertainties in PKA measurements is also caused by the current design of X ray machines, possible solutions are discussed. (author)

  10. Monte Carlo photon benchmark problems

    Photon benchmark calculations have been performed to validate the MCNP Monte Carlo computer code. These are compared to both the COG Monte Carlo computer code and either experimental or analytic results. The calculated solutions indicate that the Monte Carlo method, and MCNP and COG in particular, can accurately model a wide range of physical problems. 8 refs., 5 figs

  11. Analysis of Osiris In-Core Surveillance Dosimetry for Gondole Steel Irradiation Program by Using TRIPOLI-4 Monte Carlo Code

    Lee, Y. K.; Malouch, F.

    2009-08-01

    In order to assess the possibility of swelling of austenitic steels used for the core internals of pressurized water reactors (PWR), a multi-year irradiation program, called GONDOLE, is ongoing in the OSIRIS material testing reactor at the CEA-Saclay site. This experiment consists in the irradiation of several density specimens at high temperature (> 350 °C). The first phase of the GONDOLE irradiation run was completed in January 2006 after six reactor cycles of twenty days each, and the surveillance dosimetry results of the first phase were available by the end of 2006. The purpose of this paper is to present the neutron calculation methodology used for the GONDOLE program with the continuous-energy Monte Carlo 3D transport code TRIPOLI-4. For the specimens of virgin materials and the dosimeters located at the core mid-plane, the calculation and measurement results of the first phase of the irradiation run are presented. In addition, a prediction calculation of helium gas production in the virgin materials is introduced.

  12. Fundamentals of Monte Carlo

    Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a PowerPoint presentation which serves as lecture material for the Parallel Computing summer school. It covers the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background; a simple example: estimating π), Why does this even work? (the Law of Large Numbers, the Central Limit Theorem), How to sample (inverse transform sampling, rejection sampling), and An example from particle transport.
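
    For readers without access to the slides, the "estimating π" example named in the outline is conventionally done as below. This is a standard textbook sketch, not code taken from the presentation.

        import random

        def estimate_pi(n=1_000_000, seed=4):
            random.seed(seed)
            inside = 0
            for _ in range(n):
                x, y = random.random(), random.random()
                if x * x + y * y <= 1.0:   # point falls in the quarter circle
                    inside += 1
            return 4.0 * inside / n        # area ratio times 4

        if __name__ == "__main__":
            print(estimate_pi())  # approaches 3.14159... as n grows

    The Law of Large Numbers guarantees convergence of the hit fraction to π/4, and the Central Limit Theorem gives the familiar 1/sqrt(n) shrinkage of the statistical error.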

  13. Contributon Monte Carlo

    The contributon Monte Carlo method is based on a new recipe to calculate target responses by means of volume integral of the contributon current in a region between the source and the detector. A comprehensive description of the method, its implementation in the general-purpose MCNP code, and results of the method for realistic nonhomogeneous, energy-dependent problems are presented. 23 figures, 10 tables

  14. The Monte Carlo method for shielding calculations analysis by MORSE code of a streaming case in the CAORSO BWR power reactor shielding (Italy)

    In the field of shielding, the requirement for radiation transport calculations in severe conditions, characterized by irreducible three-dimensional geometries, has increased the use of the Monte Carlo method, which has proved to be the only rigorous and appropriate calculational method in such conditions. However, further optimization efforts are still necessary to render the technique practically efficient, despite recent improvements in the Monte Carlo codes, the progress made in the field of computers and the availability of accurate nuclear data. Moreover, personal experience acquired in the field and command of sophisticated calculation procedures are of the utmost importance. The aim of the work carried out here was the gathering of all the elements and features necessary for an efficient utilization of the Monte Carlo method in connection with shielding problems. The study of the general aspects of the method and of the exploitation techniques of the MORSE code, which has proved to be one of the most comprehensive Monte Carlo codes, led to a successful analysis of an actual case. In fact, the severe conditions and difficulties met were overcome using this stochastic simulation code. Finally, a critical comparison between calculated and high-accuracy experimental results allowed the final confirmation of the methodology used

  15. Monte Carlo-based interval transformation analysis for multi-criteria decision analysis of groundwater management strategies under uncertain naphthalene concentrations and health risks

    Ren, Lixia; He, Li; Lu, Hongwei; Chen, Yizhong

    2016-08-01

    A new Monte Carlo-based interval transformation analysis (MCITA) is used in this study for multi-criteria decision analysis (MCDA) of naphthalene-contaminated groundwater management strategies. The analysis can be conducted when input data such as total cost, contaminant concentration and health risk are represented as intervals. Compared to traditional MCDA methods, MCITA-MCDA has the advantages of (1) dealing with the inexactness of input data represented as intervals, (2) reducing computational time through the introduction of Monte Carlo sampling, and (3) identifying the most desirable management strategies under data uncertainty. A real-world case study is employed to demonstrate the performance of this method. A set of inexact management alternatives is considered for each duration on the basis of four criteria. Results indicated that the most desirable management strategy was action 15 for the 5-year, action 8 for the 10-year, action 12 for the 15-year, and action 2 for the 20-year management period.
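
    The interval-sampling idea behind MCITA can be sketched compactly. The alternatives, interval-valued criteria and weights below are invented placeholders (not the paper's data); the point is that Monte Carlo draws within the intervals turn interval inputs into a frequency of each alternative being best.

        import random

        # Hypothetical alternatives with interval-valued criteria
        # (e.g. cost, concentration, risk) -- lower is better for each.
        ALTERNATIVES = {
            "action_2":  [(1.0, 1.4), (0.3, 0.6), (0.2, 0.5)],
            "action_8":  [(0.8, 1.6), (0.4, 0.7), (0.1, 0.6)],
            "action_15": [(0.9, 1.2), (0.2, 0.8), (0.3, 0.4)],
        }
        WEIGHTS = [0.5, 0.3, 0.2]  # assumed criterion weights

        def best_alternative_frequencies(n=20000, seed=5):
            random.seed(seed)
            wins = {name: 0 for name in ALTERNATIVES}
            for _ in range(n):
                scores = {
                    name: sum(w * random.uniform(lo, hi)
                              for w, (lo, hi) in zip(WEIGHTS, crits))
                    for name, crits in ALTERNATIVES.items()
                }
                wins[min(scores, key=scores.get)] += 1
            return {name: wins[name] / n for name in wins}

        if __name__ == "__main__":
            print(best_alternative_frequencies())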

  16. 3D coupling of Monte Carlo neutronics and thermal-hydraulics/thermic calculations as a simulation tool for innovative reactor concepts

    Simulations of new reactor designs, such as generation IV concepts, require three dimensional modeling to ensure a sufficiently realistic description for safety analysis. If precise solutions of local physical phenomena (DNBR, cross flow, form factors,...) are to be found then the use of accurate 3D coupled neutronics/thermal-hydraulics codes becomes essential. Moreover, to describe this coupled field with a high level of accuracy requires successive iterations between neutronics and thermal-hydraulics at equilibrium until convergence (power deposits and temperatures must be finely discretized, ex: pin by pin and axial discretization). In this paper we present the development and simulation results of such coupling capabilities using our code MURE (MCNP Utility for Reactor Evolution), a precision code written in C++ which automates the preparation and computation of successive MCNP calculations either for precision burnup and/or thermal-hydraulics/thermic purposes. For the thermal-hydraulics part, the code COBRA is used. It is a sub-channel code that allows steady-state and transient analysis of reactor cores. The goal is a generic, non system-specific code, for both burn-up calculations and safety analysis at any point in the fuel cycle: the eventual trajectory of an accident scenario will be sensitive to the initial distribution of fissile material and neutron poisons in the reactor (axial and radial heterogeneity). The MURE code is open-source, portable and manages all the neutronics and the thermal-hydraulics/thermic calculations in background: control is provided by the MURE interface or the user can interact directly with the codes if desired. MURE automatically builds input files and other necessary data, launches the codes and manages the communication between them. Consequently accurate 3D simulations of power plants on both global and pin level of detail with thermal feedback can be easily performed (radial and axial meshing grids are managed by MURE). A
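
    The iterate-until-convergence scheme described above reduces to a simple fixed-point loop. In the sketch below, run_neutronics and run_thermal_hydraulics are hypothetical placeholders for the MCNP and COBRA runs that a driver such as MURE would launch; the feedback laws are invented toy relations.

        def run_neutronics(temperatures):
            """Placeholder for an MCNP run: returns a power map (W per node)."""
            return [1000.0 / (1.0 + 0.001 * t) for t in temperatures]

        def run_thermal_hydraulics(powers):
            """Placeholder for a COBRA run: returns node temperatures (K)."""
            return [300.0 + 0.2 * p for p in powers]

        def couple(n_nodes=10, tol=0.01, max_iter=50):
            temps = [600.0] * n_nodes  # initial temperature guess
            for it in range(max_iter):
                powers = run_neutronics(temps)
                new_temps = run_thermal_hydraulics(powers)
                change = max(abs(a - b) for a, b in zip(new_temps, temps))
                temps = new_temps
                if change < tol:   # power and temperature fields converged
                    return it + 1, temps
            raise RuntimeError("coupling did not converge")

        if __name__ == "__main__":
            iters, temps = couple()
            print(f"converged in {iters} iterations; T[0] = {temps[0]:.2f} K")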

  17. Monte Carlo simulation of parameter confidence intervals for non-linear regression analysis of biological data using Microsoft Excel.

    Lambert, Ronald J W; Mytilinaios, Ioannis; Maitland, Luke; Brown, Angus M

    2012-08-01

    This study describes a method for obtaining parameter confidence intervals from the fitting of non-linear functions to experimental data, using the SOLVER and Analysis ToolPak add-ins of the Microsoft Excel spreadsheet. Previously we have shown that Excel can fit complex multiple functions to biological data, obtaining values equivalent to those returned by more specialized statistical or mathematical software. However, a disadvantage of the Excel method was its inability to return confidence intervals for the computed parameters or the correlations between them. Using a simple Monte-Carlo procedure within the Excel spreadsheet (without recourse to programming), SOLVER can provide parameter estimates (up to 200 at a time) for multiple 'virtual' data sets, from which the required confidence intervals and correlation coefficients can be obtained. The general utility of the method is exemplified by applying it to the analysis of the growth of Listeria monocytogenes, the growth inhibition of Pseudomonas aeruginosa by chlorhexidine, and the further analysis of electrophysiological data from the compound action potential of the rodent optic nerve. PMID:21764476
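
    The "virtual data set" procedure is easy to reproduce outside Excel. Below is a Python analogue under assumed Gaussian noise, with an invented exponential model and data set (not the authors' spreadsheet): the model is refit to many simulated data sets and confidence limits are read from the parameter quantiles.

        import math
        import random

        def fit_loglinear(xs, ys):
            """Fit y = a*exp(b*x) by least squares on log(y)."""
            n = len(xs)
            ly = [math.log(y) for y in ys]
            sx, sy = sum(xs), sum(ly)
            sxx = sum(x * x for x in xs)
            sxy = sum(x * y for x, y in zip(xs, ly))
            b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
            a = math.exp((sy - b * sx) / n)
            return a, b

        def monte_carlo_ci(xs, ys, sigma=0.05, n_sets=200, seed=6):
            random.seed(seed)
            a0, b0 = fit_loglinear(xs, ys)
            bs = []
            for _ in range(n_sets):
                # 'Virtual' data set: model prediction plus simulated noise.
                y_sim = [max(1e-9, a0 * math.exp(b0 * x) + random.gauss(0, sigma))
                         for x in xs]
                bs.append(fit_loglinear(xs, y_sim)[1])
            bs.sort()
            return bs[int(0.025 * n_sets)], bs[int(0.975 * n_sets)]

        if __name__ == "__main__":
            xs = [0.0, 0.5, 1.0, 1.5, 2.0]
            ys = [1.02, 1.28, 1.63, 2.13, 2.70]  # roughly y = exp(0.5 x)
            print("95% CI for growth rate b:", monte_carlo_ci(xs, ys))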

  18. Monte Carlo modelling of TRIGA research reactor

    El Bakkari, B., E-mail: bakkari@gmail.co [Reactor Operating Unit (UCR), National Centre of Sciences, Energy and Nuclear Techniques (CNESTEN/CENM), POB 1382, Rabat (Morocco); ERSN-LMR, Department of Physics, Faculty of Sciences, POB 2121, Tetuan (Morocco); Nacir, B. [Reactor Operating Unit (UCR), National Centre of Sciences, Energy and Nuclear Techniques (CNESTEN/CENM), POB 1382, Rabat (Morocco); El Bardouni, T. [ERSN-LMR, Department of Physics, Faculty of Sciences, POB 2121, Tetuan (Morocco); El Younoussi, C. [Reactor Operating Unit (UCR), National Centre of Sciences, Energy and Nuclear Techniques (CNESTEN/CENM), POB 1382, Rabat (Morocco); ERSN-LMR, Department of Physics, Faculty of Sciences, POB 2121, Tetuan (Morocco); Merroun, O. [ERSN-LMR, Department of Physics, Faculty of Sciences, POB 2121, Tetuan (Morocco); Htet, A. [Reactor Technology Unit (UTR), National Centre of Sciences, Energy and Nuclear Techniques (CNESTEN/CENM), POB 1382, Rabat (Morocco); Boulaich, Y. [Reactor Operating Unit (UCR), National Centre of Sciences, Energy and Nuclear Techniques (CNESTEN/CENM), POB 1382, Rabat (Morocco); ERSN-LMR, Department of Physics, Faculty of Sciences, POB 2121, Tetuan (Morocco); Zoubair, M.; Boukhal, H. [ERSN-LMR, Department of Physics, Faculty of Sciences, POB 2121, Tetuan (Morocco); Chakir, M. [EPTN-LPMR, Faculty of Sciences, Kenitra (Morocco)

    2010-10-15

    The Moroccan 2 MW TRIGA MARK II research reactor at the Centre des Etudes Nucleaires de la Maamora (CENM) achieved initial criticality on May 2, 2007. The reactor is designed to effectively support the various fields of basic nuclear research, manpower training, and production of radioisotopes for use in agriculture, industry, and medicine. This study deals with the neutronic analysis of the 2-MW TRIGA MARK II research reactor at CENM and validation of the results by comparison with the experimental, operational, and available final safety analysis report (FSAR) values. The study was prepared in collaboration between the Laboratory of Radiation and Nuclear Systems (ERSN-LMR) of the Faculty of Sciences of Tetuan (Morocco) and CENM. The 3-D continuous-energy Monte Carlo code MCNP (version 5) was used to develop a versatile and accurate full model of the TRIGA core. The model represents in detail all components of the core with essentially no physical approximation. Continuous-energy cross-section data from recent nuclear data evaluations (ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3) as well as the S(α, β) thermal neutron scattering functions distributed with the MCNP code were used. The cross-section libraries were generated using the NJOY99 system updated to its most recent patch file 'up259'. The consistency and accuracy of both the Monte Carlo simulation and the neutron transport physics were established by benchmarking against the TRIGA experiments. Core excess reactivity, total and integral control rod worths, and power peaking factors were used in the validation process. The results of the calculations are analysed and discussed.

  19. Monte Carlo modelling of TRIGA research reactor

    The Moroccan 2 MW TRIGA MARK II research reactor at the Centre des Etudes Nucleaires de la Maamora (CENM) achieved initial criticality on May 2, 2007. The reactor is designed to effectively support the various fields of basic nuclear research, manpower training, and production of radioisotopes for use in agriculture, industry, and medicine. This study deals with the neutronic analysis of the 2-MW TRIGA MARK II research reactor at CENM and validation of the results by comparison with the experimental, operational, and available final safety analysis report (FSAR) values. The study was prepared in collaboration between the Laboratory of Radiation and Nuclear Systems (ERSN-LMR) of the Faculty of Sciences of Tetuan (Morocco) and CENM. The 3-D continuous-energy Monte Carlo code MCNP (version 5) was used to develop a versatile and accurate full model of the TRIGA core. The model represents in detail all components of the core with essentially no physical approximation. Continuous-energy cross-section data from recent nuclear data evaluations (ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3) as well as the S(α, β) thermal neutron scattering functions distributed with the MCNP code were used. The cross-section libraries were generated using the NJOY99 system updated to its most recent patch file 'up259'. The consistency and accuracy of both the Monte Carlo simulation and the neutron transport physics were established by benchmarking against the TRIGA experiments. Core excess reactivity, total and integral control rod worths, and power peaking factors were used in the validation process. The results of the calculations are analysed and discussed.

  20. Monte Carlo modelling of TRIGA research reactor

    El Bakkari, B.; Nacir, B.; El Bardouni, T.; El Younoussi, C.; Merroun, O.; Htet, A.; Boulaich, Y.; Zoubair, M.; Boukhal, H.; Chakir, M.

    2010-10-01

    The Moroccan 2 MW TRIGA MARK II research reactor at the Centre des Etudes Nucléaires de la Maâmora (CENM) achieved initial criticality on May 2, 2007. The reactor is designed to effectively support the various fields of basic nuclear research, manpower training, and production of radioisotopes for use in agriculture, industry, and medicine. This study deals with the neutronic analysis of the 2-MW TRIGA MARK II research reactor at CENM and validation of the results by comparison with the experimental, operational, and available final safety analysis report (FSAR) values. The study was prepared in collaboration between the Laboratory of Radiation and Nuclear Systems (ERSN-LMR) of the Faculty of Sciences of Tetuan (Morocco) and CENM. The 3-D continuous-energy Monte Carlo code MCNP (version 5) was used to develop a versatile and accurate full model of the TRIGA core. The model represents in detail all components of the core with essentially no physical approximation. Continuous-energy cross-section data from recent nuclear data evaluations (ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3) as well as the S(α, β) thermal neutron scattering functions distributed with the MCNP code were used. The cross-section libraries were generated using the NJOY99 system updated to its most recent patch file "up259". The consistency and accuracy of both the Monte Carlo simulation and the neutron transport physics were established by benchmarking against the TRIGA experiments. Core excess reactivity, total and integral control rod worths, and power peaking factors were used in the validation process. The results of the calculations are analysed and discussed.

  1. Optimization of Monte Carlo simulations

    Bryskhe, Henrik

    2009-01-01

    This thesis considers several different techniques for optimizing Monte Carlo simulations. The Monte Carlo system used is Penelope, but most of the techniques are applicable to other systems. The two major techniques are the usage of the graphics card to do geometry calculations, and ray tracing. Using the graphics card provides a very efficient way to do fast ray-triangle intersections. Ray tracing provides an approximation of the Monte Carlo simulation but is much faster to perform. A program was ...

  2. Quantum Gibbs ensemble Monte Carlo

    We present a path integral Monte Carlo method which is the full quantum analogue of the Gibbs ensemble Monte Carlo method of Panagiotopoulos to study the gas-liquid coexistence line of a classical fluid. Unlike previous extensions of Gibbs ensemble Monte Carlo to include quantum effects, our scheme is viable even for systems with strong quantum delocalization in the degenerate regime of temperature. This is demonstrated by an illustrative application to the gas-superfluid transition of 4He in two dimensions

  3. Monte Carlo analysis of a lateral IBIC experiment on a 4H-SiC Schottky diode

    Olivero, P.; Forneris, J.; Gamarra, P.; Jakšić, M.; Giudice, A. Lo; Manfredotti, C.; Pastuović, Ž.; Skukan, N.; Vittone, E.

    2011-10-01

    The transport properties of a 4H-SiC Schottky diode have been investigated by the ion beam induced charge (IBIC) technique in lateral geometry through analysis of the charge collection efficiency (CCE) profile at a fixed applied reverse bias voltage. The cross section of the sample orthogonal to the electrodes was irradiated with a rarefied 4 MeV proton microbeam, and the charge pulses were recorded as a function of incident proton position with a spatial resolution of 2 μm. The CCE profile shows a broad plateau with CCE values close to 100% occurring at the depletion layer, whereas in the neutral region the exponentially decreasing profile indicates the dominant role played by the diffusion transport mechanism. Mapping of the charge pulses was accomplished by a novel computational approach, which consists in mapping the Gunn weighting potential by solving the electrostatic problem with the finite element method and then evaluating the induced charge at the sensing electrode by a Monte Carlo method. The combination of these two computational methods enabled an exhaustive interpretation of the experimental profiles and allowed an accurate evaluation both of the electrical characteristics of the active region (e.g. electric field profiles) and of basic transport parameters (i.e. diffusion length and minority carrier lifetime).

  4. An Analysis on the Characteristic of Multi-response CADIS Method for the Monte Carlo Radiation Shielding Calculation

    The CADIS method uses a deterministic calculation of adjoint fluxes to decide the parameters used in the variance reduction; this is called a hybrid Monte Carlo method. The CADIS method, however, has a limitation: it cannot reduce the stochastic errors of all responses at once. Forward-Weighted CADIS (FW-CADIS) was introduced to solve this problem; to reduce the overall stochastic errors of the responses, the forward flux is used. In a previous study, the Multi-Response CADIS (MR-CADIS) method was derived to minimize the sum of the squared relative errors. In this study, the characteristics of the MR-CADIS method were evaluated and compared with the FW-CADIS method. We analyzed how the CADIS, FW-CADIS, and MR-CADIS methods are applied to optimize and decide the parameters used in the variance reduction techniques. The MR-CADIS method minimizes the sum of the squared relative errors over the tally regions to achieve uniform uncertainty. To compare the simulation efficiency of the methods, a simple shielding problem was evaluated. With the FW-CADIS method, the average of the relative errors was minimized; the MR-CADIS method, however, gives the lowest variance of the relative errors. The analysis shows that the MR-CADIS method can reduce the relative errors of a multi-response problem more efficiently and uniformly than the FW-CADIS method
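
    In our notation (not the papers'), with R_i the i-th tallied response and sigma_i its statistical uncertainty over N responses, the two objectives summarized in the abstract can be written as:

        % FW-CADIS: drive the relative errors down uniformly,
        % i.e. minimize their average
        \min \; \frac{1}{N}\sum_{i=1}^{N}\frac{\sigma_i}{R_i}

        % MR-CADIS: minimize the sum of the squared relative errors
        \min \; \sum_{i=1}^{N}\left(\frac{\sigma_i}{R_i}\right)^{2}

    The squared form penalizes the single worst tally more strongly, which is consistent with the reported observation that MR-CADIS yields the lowest variance among the relative errors.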

  5. Analysis of the neutrons dispersion in a semi-infinite medium based in transport theory and the Monte Carlo method

    In this work, a comparative analysis of the results for neutron dispersion in a non-multiplying semi-infinite medium is presented. One boundary of this medium is located at the origin of coordinates, where a neutron source in beam form (i.e., μ0 = 1) is placed. The neutron dispersion is studied with the statistical Monte Carlo method and through one-dimensional transport theory for one energy group. The application of transport theory gives a semi-analytic solution for this problem, while the statistical solution for the flux was obtained with the MCNPX code. Dispersion in light water and heavy water was studied. A first notable result is that both methods locate the maximum of the neutron distribution at less than two transport mean free paths for heavy water, while for light water it lies at less than ten transport mean free paths; the differences between the two methods are larger for the light water case. A second notable result is that the two distributions behave similarly at small numbers of mean free paths, while at large numbers of mean free paths the transport-theory solution tends to an asymptotic value and the statistical solution tends to zero. The existence of a low-energy neutron current flowing back toward the source is demonstrated, opposite in direction to the high-energy neutron current coming from the source itself. (Author)

  6. Markov Chain Monte Carlo Joint Analysis of Chandra X-Ray Imaging Spectroscopy and Sunyaev-Zel'dovich Effect Data

    Bonamente, Massimillano; Joy, Marshall K.; Carlstrom, John E.; Reese, Erik D.; LaRoque, Samuel J.

    2004-01-01

    X-ray and Sunyaev-Zel'dovich effect data can be combined to determine the distance to galaxy clusters. High-resolution X-ray data are now available from Chandra, which provides both spatial and spectral information, and Sunyaev-Zel'dovich effect data were obtained from the BIMA and Owens Valley Radio Observatory (OVRO) arrays. We introduce a Markov Chain Monte Carlo procedure for the joint analysis of X-ray and Sunyaev-Zel'dovich effect data. The advantages of this method are its high computational efficiency and the ability to measure simultaneously the probability distribution of all parameters of interest, such as the spatial and spectral properties of the cluster gas, as well as derivative quantities such as the distance to the cluster. We demonstrate this technique by applying it to the Chandra X-ray data and the OVRO radio data for the galaxy cluster A611. Comparisons with traditional likelihood ratio methods reveal the robustness of the method. This method will be used in a follow-up paper to determine the distances to a large sample of galaxy clusters.
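
    At its core such a procedure is a Metropolis-Hastings random walk over the cluster parameters. The sketch below is a generic two-parameter sampler on an invented toy Gaussian posterior, not the A611 likelihood (which combines X-ray and SZ terms); only the accept/reject mechanics are the point.

        import math
        import random

        def log_post(theta):
            # Toy stand-in posterior: two independent Gaussians for two
            # cluster parameters (e.g. a temperature and a distance scale).
            t, d = theta
            return -0.5 * ((t - 8.0) / 1.5) ** 2 - 0.5 * ((d - 1.0) / 0.2) ** 2

        def metropolis(n=50000, step=(0.5, 0.07), seed=7):
            random.seed(seed)
            theta, lp = (8.0, 1.0), log_post((8.0, 1.0))
            chain = []
            for _ in range(n):
                prop = tuple(x + random.gauss(0, s) for x, s in zip(theta, step))
                lp_prop = log_post(prop)
                # Accept with probability min(1, posterior ratio).
                if math.log(random.random()) < lp_prop - lp:
                    theta, lp = prop, lp_prop
                chain.append(theta)
            return chain

        if __name__ == "__main__":
            ds = [d for _, d in metropolis()[5000:]]  # drop burn-in
            mean = sum(ds) / len(ds)
            sd = (sum((d - mean) ** 2 for d in ds) / len(ds)) ** 0.5
            print(f"distance scale: {mean:.3f} +/- {sd:.3f}")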

  7. Monte Carlo analysis of a lateral IBIC experiment on a 4H-SiC Schottky diode

    Olivero, P; Gamarra, P; Jaksic, M; Giudice, A Lo; Manfredotti, C; Pastuovic, Z; Skukan, N; Vittone, E

    2016-01-01

    The transport properties of a 4H-SiC Schottky diode have been investigated by the Ion Beam Induced Charge (IBIC) technique in lateral geometry through analysis of the charge collection efficiency (CCE) profile at a fixed applied reverse bias voltage. The cross section of the sample orthogonal to the electrodes was irradiated with a rarefied 4 MeV proton microbeam, and the charge pulses were recorded as a function of incident proton position with a spatial resolution of 2 μm. The CCE profile shows a broad plateau with CCE values close to 100% occurring at the depletion layer, whereas in the neutral region the exponentially decreasing profile indicates the dominant role played by the diffusion transport mechanism. Mapping of the charge pulses was accomplished by a novel computational approach, which consists in mapping the Gunn weighting potential by solving the electrostatic problem with the finite element method and then evaluating the induced charge at the sensing electrode by a Monte Carlo method. The combina...

  8. A Monte Carlo (MC) based individual calibration method for in vivo x-ray fluorescence analysis (XRF)

    Hansson, Marie; Isaksson, Mats

    2007-04-01

    X-ray fluorescence analysis (XRF) is a non-invasive method that can be used for in vivo determination of thyroid iodine content. System calibrations with phantoms resembling the neck may give misleading results in cases where the measurement situation differs greatly from the calibration situation. In such cases, Monte Carlo (MC) simulations offer a possibility of improving the calibration by better accounting for individual features of the measured subjects. This study investigates the prospects of implementing MC simulations in a calibration procedure applicable to in vivo XRF measurements. Simulations were performed with Penelope 2005 to examine a procedure in which a parameter, independent of the iodine concentration, was used to estimate the expected detector signal if the thyroid had been measured outside the neck. An attempt to increase the simulation speed and reduce the variance by excluding electrons and implementing interaction forcing was made. Special attention was given to the geometry features: analysed volume, source-sample-detector distances, and thyroid lobe size and position in the neck. Implementation of interaction forcing and exclusion of electrons had no obvious adverse effect on the quotients, while the simulation time involved in an individual calibration was low enough to be clinically feasible.

  9. A Proposal on the Method of Real Uncertainty Estimation in Two Step Monte Carlo Simulation for Residual Radiation Analysis

    There are many problems related to multi-step Monte Carlo (MC) calculations. The Surface Source Reading (SSR) and Surface Source Writing (SSW) options in MCNP, MC depletion calculations, accelerator shielding analysis using secondary-particle source term calculations, and residual-particle transport calculations caused by activation are examples of such simulations. In these problems, the average values estimated from the MC results of the previous step are used as sources for the MC simulation in the next step. Hence, the uncertainties of the results from the previous step are usually not considered in the next-step MC simulation, even though they propagate through the stepwise progression. In this study, a new method using forward-adjoint calculation and a union tally is proposed for the estimation of the real uncertainty. For the activation benchmark problems, the responses and real uncertainties were estimated using the proposed method, and the results were compared with those estimated by the brute-force technique and the adjoint-based approach. The results show that the proposed approach gives accurate results compared with the reference results
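
    The "brute force" reference technique mentioned above can be illustrated generically: repeat the step-1 simulation many times, feed each realization into step 2, and take the spread of the final responses as the real uncertainty. The two-step chain below is an invented toy, not one of the benchmark problems.

        import random
        import statistics

        def step1(n=1000):
            """Step-1 MC: estimate a source strength with statistical noise."""
            hits = sum(random.random() < 0.3 for _ in range(n))
            return hits / n  # noisy estimate of the true value 0.3

        def step2(source, n=1000):
            """Step-2 MC: transport using the step-1 estimate as the source."""
            scores = [source * (0.8 + 0.4 * random.random()) for _ in range(n)]
            return statistics.fmean(scores)

        def brute_force(m=200, seed=8):
            # Repeat the full two-step chain m times; the spread of the final
            # responses includes the step-1 uncertainty that a single chained
            # run would silently ignore.
            random.seed(seed)
            responses = [step2(step1()) for _ in range(m)]
            return statistics.fmean(responses), statistics.stdev(responses)

        if __name__ == "__main__":
            mean, sigma = brute_force()
            print(f"response = {mean:.4f} +/- {sigma:.4f} (real uncertainty)")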

  10. A MARKOV CHAIN MONTE CARLO ALGORITHM FOR ANALYSIS OF LOW SIGNAL-TO-NOISE COSMIC MICROWAVE BACKGROUND DATA

    We present a new Markov Chain Monte Carlo (MCMC) algorithm for cosmic microwave background (CMB) analysis in the low signal-to-noise regime. The method builds on and complements the previously described CMB Gibbs sampler, and effectively solves the low signal-to-noise inefficiency problem of the direct Gibbs sampler. The new algorithm is a simple Metropolis-Hastings sampler with a general proposal rule for the power spectrum, C_l, followed by a particular deterministic rescaling operation of the sky signal, s. The acceptance probability for this joint move depends on the sky map only through the χ² difference between the original and proposed sky samples, and is close to unity in the low signal-to-noise regime. The algorithm is completed by alternating this move with a standard Gibbs move. Together, these two proposals constitute a computationally efficient algorithm for mapping out the full joint CMB posterior, in both the high and low signal-to-noise regimes.

  11. RunMC - an object-oriented analysis framework for Monte Carlo simulation of high-energy particle collisions

    Chekanov, S

    2005-01-01

    RunMC is an object-oriented framework for generating and analysing high-energy collisions of elementary particles using Monte Carlo simulations. The package, based on C++, which CERN has adopted as the main programming language for the LHC experiments, provides a common interface to different Monte Carlo models using modern physics libraries. Physics calculations (projects) can easily be loaded and saved as external modules. This simplifies the development of complicated calculations for high-energy physics in large collaborations. The desktop program is open-source licensed and is available on the Linux and Windows/Cygwin platforms.

  12. A primer on applying Monte Carlo simulation, real options analysis, knowledge value added, forecasting, and portfolio optimization / by Johnathan Mun, Thomas Housel.

    Mun, Johnathan; Housel, Thomas

    2010-01-01

    In this quick primer, advanced quantitative risk-based concepts will be introduced--namely, the hands-on applications of Monte Carlo simulation, real options analysis, stochastic forecasting, portfolio optimization, and knowledge value added. These methodologies rely on common metrics and existing techniques (e.g., return on investment, discounted cash flow, cost-based analysis, and so forth), and complement these traditional techniques by pushing the envelope of analytics, not replacing them...

  13. Epistasis Test in Meta-Analysis: A Multi-Parameter Markov Chain Monte Carlo Model for Consistency of Evidence

    Lin, Chin; Chu, Chi-Ming; Su, Sui-Lung

    2016-01-01

    Conventional genome-wide association studies (GWAS) have proven to be a successful strategy for identifying genetic variants associated with complex human traits. However, there is still a large heritability gap between GWAS and traditional family studies. The “missing heritability” has been suggested to be due to a lack of studies focused on epistasis, also called gene–gene interactions, because individual trials have often had insufficient sample size. Meta-analysis is a common method for increasing statistical power, but sufficiently detailed information is difficult to obtain. A previous study employed a meta-regression-based method to detect epistasis, but it faced the challenge of inconsistent estimates. Here, we describe a Markov chain Monte Carlo-based method, called “Epistasis Test in Meta-Analysis” (ETMA), which uses genotype summary data to obtain consistent estimates of epistasis effects in meta-analysis. We defined a series of conditions to generate simulation data and tested the power and type I error rates of ETMA, individual data analysis and the conventional meta-regression-based method. ETMA not only successfully facilitated consistency of evidence but also yielded an acceptable type I error rate and higher power than conventional meta-regression. We applied ETMA to three real meta-analysis data sets. We found significant gene–gene interactions in the renin–angiotensin system and the polycyclic aromatic hydrocarbon metabolism pathway, with strong supporting evidence. In addition, glutathione S-transferase (GST) mu 1 and theta 1 were confirmed to exert independent effects on cancer. We conclude that the application of ETMA to real meta-analysis data was successful. Finally, we developed an R package, etma, for the detection of epistasis in meta-analysis [etma is available via the Comprehensive R Archive Network (CRAN) at https://cran.r-project.org/web/packages/etma/index.html]. PMID:27045371

  14. Diffuse X-ray scattering from benzil, C14H10O2: analysis via automatic refinement of a Monte Carlo model

    Full text: A recently developed method for fitting a Monte Carlo computer simulation model to observed single crystal diffuse X-ray scattering data has been used to study the diffuse scattering in benzil. The analysis has shown that the diffuse lines, that feature so prominently in the observed diffraction patterns, are due to strong longitudinal displacement correlations transmitted from molecule to molecule via a network of contacts involving hydrogen bonding

  15. Degeneracies in sky localization determination from a spinning coalescing binary through gravitational wave observations: a Markov-chain Monte Carlo analysis for two detectors

    Gravitational-wave signals from inspirals of binary compact objects (black holes and neutron stars) are primary targets of the ongoing searches by ground-based gravitational-wave interferometers (LIGO, Virgo and GEO-600). We present parameter-estimation simulations for inspirals of black-hole-neutron-star binaries using Markov-chain Monte Carlo methods. As a specific example of the power of these methods, we consider source localization in the sky and analyze the degeneracy in it when data from only two detectors are used. We focus on the effect that the black-hole spin has on the localization estimation. We also report on a comparative Markov-chain Monte Carlo analysis with two different waveform families, at 1.5 and 3.5 post-Newtonian orders.

  16. Prompt γ-ray activation analysis of Martian analogues at the FRM II neutron reactor and the verification of a Monte Carlo planetary radiation environment model

    Planetary radiation environment modelling is important to assess the habitability of a planetary body. It is also useful when interpreting the γ-ray data produced by natural emissions from radioisotopes or prompt γ-ray activation analysis. γ-ray spectra acquired in orbit or in-situ by a suitable detector can be converted into meaningful estimates of the concentration of certain elements on the surface of a planet. This paper describes the verification of a Monte Carlo model developed using the MCNPX code at University of Leicester. The model predicts the performance of a geophysical package containing a γ-ray spectrometer operating at a depth of up to 5 m. The experimental verification of the Monte Carlo model was performed at the FRM II facility in Munich, Germany. The paper demonstrates that the model is in good agreement with the experimental data and can be used to model the performance of an in-situ γ-ray spectrometer.

  17. Monte Carlo simulations in medical technology- II. Application of Monte Carlo procedure to medical technology

    Methods for the Monte Carlo procedure in radiation measurement by SPECT (single photon emission computed tomography) and 3-D PET (3-dimensional positron emission tomography) are described, together with their application to the development and optimization of the scattering correction method in 201Tl-SPECT. In medical technology, Monte Carlo simulation makes it possible to quantify the behaviour of a photon, such as scattering and absorption, which can be done with the EGS4 simulation code (consisting of steps A-E). With this method, the data collection procedures of diagnostic equipment for nuclear medicine and the application to the development of a transmission radiation source for SPECT are described. The precision of the scattering correction method in SPECT is also evaluated by Monte Carlo simulation. The simulation is a useful tool for evaluating the behaviour of radiation in the human body, which cannot be actually measured. (K.H.)

  18. Monte Carlo simulation of neutron scattering instruments

    A library of Monte Carlo subroutines has been developed for the purpose of design of neutron scattering instruments. Using small-angle scattering as an example, the philosophy and structure of the library are described and the programs are used to compare instruments at continuous wave (CW) and long-pulse spallation source (LPSS) neutron facilities. The Monte Carlo results give a count-rate gain of a factor between 2 and 4 using time-of-flight analysis. This is comparable to scaling arguments based on the ratio of wavelength bandwidth to resolution width

  19. Monte Carlo techniques

    The course on "Monte Carlo Techniques" will try to give a general overview of how to build up a method, based on a given theory, that allows the outcome of an experiment to be compared with that theory. Concepts related to the construction of the method, such as random variables, distributions of random variables, generation of random variables and random-based numerical methods, will be introduced in this course. Examples of some of the current theories in High Energy Physics describing the e+e- annihilation processes (QED, Electro-Weak, QCD) will also be briefly introduced. A second step in the employment of this method relates to the detector. The interactions that a particle can undergo on its way through the detector, as well as the response of the different materials that make up the detector, will be covered in this course. An example of a detector from the LEP era, in which these techniques are applied, will close the course. (orig.)

  20. Comparative and sensitive analysis for parabolic trough solar collectors with a detailed Monte Carlo ray-tracing optical model

    Highlights: • Comparative and sensitivity analyses for PTCs are presented using the MCRT method. • A detailed PTC optical model was developed based on a novel unified MCRT model. • Reference data determined by the divergence effect are useful for designing a better PTC. • Different PTCs have different levels of sensitivity to different optical errors. • There are no contradictions between the accuracy requirements of different parameters. - Abstract: This paper presents the numerical results of comparative and sensitivity analyses for different parabolic trough solar collector (PTC) systems under different operating conditions, with the aim of optimizing a PTC system for better comprehensive characteristics and optical performance, or of evaluating the accuracy required for future constructions. A more detailed optical model was developed from a previously proposed unified Monte Carlo ray-tracing (MCRT) model. The numerical results were compared with reference data and good agreement was obtained, showing that the model and the numerical results are reliable. Comparative and sensitivity analyses for different PTC systems, and for different geometric parameters under different possible operating conditions, were then carried out with this detailed optical model. The numerical results reveal that the ideal comprehensive characteristics and optical performance of PTC systems change markedly around certain critical points determined by the divergence of the non-parallel solar beam, which can also be well explained by the theoretical analysis. Under different operating conditions, PTC systems of different geometric parameters have different levels of sensitivity to different optical errors, but the optical accuracy requirements arising from the different geometric parameters of the whole PTC system are always consistent
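
    The geometric heart of an MCRT optical model for a trough can be shown in two dimensions. The sketch below is our simplification, not the paper's model: a perfect parabola y = x²/4F, a finite sun half-angle, a lumped Gaussian optical error, and a circular absorber at the focus, with all dimensions assumed; the tilted ray is taken to strike the same mirror point as a vertical one (a small-angle shortcut).

        import math
        import random

        F = 1.71           # focal length (m), assumed
        APERTURE = 5.76    # aperture width (m), assumed
        R_ABS = 0.035      # absorber tube radius (m), assumed
        SUN = 4.65e-3      # solar disc half-angle (rad)
        OPT_ERR = 5.0e-3   # lumped Gaussian optical error (rad), assumed

        def intercept_factor(n=100000, seed=9):
            random.seed(seed)
            hits = 0
            for _ in range(n):
                x = random.uniform(-APERTURE / 2, APERTURE / 2)
                y = x * x / (4 * F)  # mirror point on the parabola
                th = random.uniform(-SUN, SUN) + random.gauss(0.0, OPT_ERR)
                d = (math.sin(th), -math.cos(th))  # incoming ray direction
                nx, ny = -x / (2 * F), 1.0         # mirror normal at (x, y)
                nn = math.hypot(nx, ny)
                nx, ny = nx / nn, ny / nn
                dot = d[0] * nx + d[1] * ny
                r = (d[0] - 2 * dot * nx, d[1] - 2 * dot * ny)  # reflected ray
                # Perpendicular miss distance of the ray from the focus (0, F).
                miss = abs((0.0 - x) * r[1] - (F - y) * r[0])
                if miss <= R_ABS:
                    hits += 1
            return hits / n

        if __name__ == "__main__":
            print(f"intercept factor ~ {intercept_factor():.3f}")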

  1. Monte Carlo Methods in Physics

    The Monte Carlo integration method is reviewed briefly and some of its applications in physics are explained. A numerical experiment on the random generators used in Monte Carlo techniques is carried out to show the randomness behaviour of the various generation methods. To account for the weight function involved in Monte Carlo integration, the Metropolis method is used. From the results of the experiment one can see that there are no regular patterns in the numbers generated, showing that the generators are reasonably good, while the experimental results show a statistical distribution obeying the expected statistical law. Further applications of the Monte Carlo method in physics are then given. The physical problems are chosen such that the models have available solutions, either exact or approximate, so that comparisons can be made with the Monte Carlo calculations. The comparisons show that, for the models considered, good agreement is obtained
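
    A minimal version of the generator experiment described above is a frequency test: bin many draws and compare the bin counts with the uniform expectation via a chi-square statistic. This is our illustrative test of Python's built-in generator, not the experiment from the paper.

        import random

        def chi_square_uniformity(n=100000, bins=20, seed=10):
            random.seed(seed)
            counts = [0] * bins
            for _ in range(n):
                counts[int(random.random() * bins)] += 1
            expected = n / bins
            # Large chi2 values signal non-uniform (patterned) output.
            chi2 = sum((c - expected) ** 2 / expected for c in counts)
            return chi2  # compare with chi-square quantiles for bins-1 dof

        if __name__ == "__main__":
            chi2 = chi_square_uniformity()
            print(f"chi2 = {chi2:.1f} (19 dof; ~30.1 is the 95% point)")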

  2. Sensitivity Analysis of the Sheet Metal Stamping Processes Based on Inverse Finite Element Modeling and Monte Carlo Simulation

    Sheet metal stamping is one of the most commonly used manufacturing processes, and hence much research has been carried out on it for economic gain. Searching through the literature, however, one finds that many problems remain unsolved. For example, it is well known that for the same press, the same workpiece material, and the same set of dies, the product quality may vary owing to a number of factors, such as the inhomogeneity of the workpiece material, loading errors, lubrication, etc. At present, few approaches can predict the quality variation, not to mention what contributes to it. As a result, trial-and-error is still needed on the shop floor, causing additional cost and time delay. This paper introduces a new approach to predict product quality variation and identify the sensitive design/process parameters. The approach is based on a combination of inverse Finite Element Modeling (FEM) and Monte Carlo simulation (more specifically, the Latin Hypercube Sampling (LHS) approach). With acceptable accuracy, the inverse FEM (also called one-step FEM) requires much less computation than the usual incremental FEM and hence can be used to predict the quality variations under various conditions. LHS is a statistical method through which the sensitivity analysis can be carried out. The result of the sensitivity analysis has a clear physical meaning and can be used to optimize the die design and/or the process design. Two simulation examples are presented, including drawing a rectangular box and drawing a two-step rectangular box
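
    Latin Hypercube Sampling itself fits in a few lines: each variable's range is split into n equal-probability strata, one draw is taken per stratum, and the strata are shuffled independently per dimension so that every one-dimensional margin is evenly covered. This is a generic sketch, not tied to the stamping model above.

        import random

        def latin_hypercube(n, dims, seed=11):
            """Return n points in [0,1)^dims, one per stratum in each dim."""
            random.seed(seed)
            samples = [[0.0] * dims for _ in range(n)]
            for d in range(dims):
                strata = list(range(n))
                random.shuffle(strata)  # decouple the strata across dimensions
                for i in range(n):
                    # One uniform draw inside stratum strata[i] of width 1/n.
                    samples[i][d] = (strata[i] + random.random()) / n
            return samples

        if __name__ == "__main__":
            for p in latin_hypercube(5, 2):
                print([round(c, 3) for c in p])

    Mapping each coordinate through the inverse CDF of the corresponding input distribution turns these unit-cube points into stratified samples of the physical parameters.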

  3. Assessment of parameter uncertainty in hydrological model using a Markov-Chain-Monte-Carlo-based multilevel-factorial-analysis method

    Zhang, Junlong; Li, Yongping; Huang, Guohe; Chen, Xi; Bao, Anming

    2016-07-01

    Without a realistic assessment of parameter uncertainty, decision makers may encounter difficulties in accurately describing hydrologic processes and assessing relationships between model parameters and watershed characteristics. In this study, a Markov-Chain-Monte-Carlo-based multilevel-factorial-analysis (MCMC-MFA) method is developed, which can not only generate samples of parameters from a well-constructed Markov chain and assess parameter uncertainties with straightforward Bayesian inference, but also investigate the individual and interactive effects of multiple parameters on model output by measuring the specific variations of hydrological responses. A case study is conducted to address parameter uncertainties in the Kaidu watershed of northwest China. The effects of multiple parameters and their interactions are quantitatively investigated using the MCMC-MFA with a three-level factorial experiment (81 runs in total). A variance-based sensitivity analysis method is used to validate the results of the parameters' effects. Results disclose that (i) the soil conservation service runoff curve number for moisture condition II (CN2) and the fraction of snow volume corresponding to 50% snow cover (SNO50COV) are the most significant factors for hydrological responses, implying that infiltration-excess overland flow and snow water equivalent represent important water inputs to the hydrological system of the Kaidu watershed; (ii) saturated hydraulic conductivity (SOL_K) and the soil evaporation compensation factor (ESCO) have obvious effects on hydrological responses, implying that the processes of percolation and evaporation influence the hydrological processes in this watershed; (iii) the interactions of ESCO and SNO50COV, as well as of CN2 and SNO50COV, have an obvious effect, implying that snow cover can influence the generation of runoff on the land surface and the extraction of soil evaporative demand in lower soil layers. These findings can help enhance the hydrological model

  4. ANALYSIS OF MONTE CARLO SIMULATION SAMPLING TECHNIQUES ON SMALL SIGNAL STABILITY OF WIND GENERATOR- CONNECTED POWER SYSTEM

    Temitope Raphael Ayodele

    2016-04-01

    Monte Carlo simulation using the Simple Random Sampling (SRS) technique is popularly known for its ability to handle complex uncertainty problems. However, to produce a reasonable result it requires a huge sample size, which makes it computationally expensive, time consuming and unfit for online power system applications. In this article, the performance of the Latin Hypercube Sampling (LHS) technique is explored and compared with SRS in terms of accuracy, robustness and speed for small-signal stability application in a wind-generator-connected power system. The analysis is performed using probabilistic techniques via eigenvalue analysis on two standard networks (the Single Machine Infinite Bus and the IEEE 16-machine 68-bus test system). The accuracy of the two sampling techniques is determined by comparing their different sample sizes with the IDEAL (conventional) result. The robustness is determined from the variance reduction observed when the experiment is repeated 100 times with different sample sizes using the two sampling techniques in turn. Some of the results show that sample sizes generated with LHS for small-signal stability application produce the same result as the IDEAL values starting from a sample size of 100. This shows that about 100 samples of a random variable generated using the LHS method are good enough to produce reasonable results for practical purposes in small-signal stability application. It is also revealed that LHS has the least variance when the experiment is repeated 100 times compared with the SRS technique, which signifies the robustness of LHS over SRS. A 100-sample LHS run produces the same result as the conventional method with a sample size of 50,000. The reduced sample size required by LHS gives it a computational speed advantage (about six times) over the conventional method.

  5. Radiation shielding analysis of a spent fuel transport cask with an actual configuration model using the Monte Carlo method - comparison with the discrete ordinates Sn method

    In order to demonstrate the features of the Monte Carlo method in comparison with the two-dimensional discrete ordinates Sn method, detailed modeling of the canister containing the fuel basket with 14 spent fuel assemblies, the supplement shields located around the lower nozzles of the fuels, and the cooling fins attached to the cask body of the NFT-14P cask is performed using the Monte Carlo code MCNP 4C. Furthermore, the water level in the canister is included in the present MCNP 4C calculations. For more precise modeling of the canister, the source points of gamma rays and neutrons are sampled accurately from the fuel assemblies installed in it. The supplement shields located around the lower nozzles of the fuels are designed to be effective especially for the activation 60Co gamma rays, and the cooling fins for gamma rays in particular. As predicted, compared with the DOT 3.5 calculations, the total dose-equivalent rates with the actual configurations are reduced to approximately 30% at 1 m from the upper side surface and 85% at 1 m from the lower side surface, respectively. Accordingly, the employment of detailed models in Monte Carlo calculations is essential to accomplish a more reasonable shielding design of a spent fuel transport cask and an interim storage cask. The quality of the actual configuration model of the canister containing the fuel basket with 12 spent fuel assemblies has already been demonstrated by a Monte Carlo analysis with MCNP 4B, in comparison with the measured dose-equivalent rates around the TN-12A cask

  6. BOOTSTRAPPING AND MONTE CARLO METHODS OF POWER ANALYSIS USED TO ESTABLISH CONDITION CATEGORIES FOR BIOTIC INDICES

    Biotic indices have been used to assess biological condition by dividing index scores into condition categories. Historically, the number of categories has been based on professional judgement. Alternatively, statistical methods such as power analysis can be used to determine the ...

  7. An Analysis of Spherical Particles Distribution Randomly Packed in a Medium for the Monte Carlo Implicit Modeling

    In this study, as a preliminary step towards developing an implicit method of high accuracy, the distribution characteristics of spherical particles were evaluated by using explicit modeling techniques at various volume packing fractions. The study was performed to evaluate the implicitly simulated distribution of randomly packed spheres in a medium. First, an explicit modeling method to simulate randomly packed spheres in a hexahedral medium was proposed. The distribution characteristics of lp and rp, which are used in the particle position sampling, were estimated. The analysis shows that the use of the direct exponential distribution, which is generally used in implicit modeling, can bias the distribution of the spheres. It is expected that the findings of this study can be utilized to improve the accuracy of the implicit method. Spherical particles randomly distributed in a medium are utilized in radiation shields, fusion reactor blankets, and the fuels of VHTR reactors. Due to the difficulty of simulating such stochastic distributions, the Monte Carlo (MC) method has mainly been considered as the tool for the analysis of particle transport. For the MC modeling of spherical particles, three methods are known: repeated structures, explicit modeling, and implicit modeling. The implicit method (also called the track length sampling method) samples each spherical geometry (or the track length through the sphere) during the MC simulation. The implicit modeling method has advantages in high computational efficiency and user convenience; however, it is noted that the implicit method has lower modeling accuracy in various finite media
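
    As an illustration of the explicit-modeling step described above, the sketch below performs random sequential addition of equal, non-overlapping spheres in a box; the radius, box size and target packing fraction are placeholders, and the nearest-neighbour distances printed at the end are only one simple ingredient of the kind of distribution statistics the study examines.

        import numpy as np

        rng = np.random.default_rng(1)

        def pack_spheres(box, r, target_pf, max_tries=100000):
            # Random sequential addition: propose a centre, accept it only
            # if the sphere stays inside the box and overlaps no other
            vol_sphere = 4.0 / 3.0 * np.pi * r ** 3
            n_target = int(target_pf * np.prod(box) / vol_sphere)
            centers = np.empty((n_target, 3))
            n = 0
            for _ in range(max_tries):
                if n == n_target:
                    break
                c = r + rng.random(3) * (box - 2.0 * r)
                if n == 0 or np.min(np.sum((centers[:n] - c) ** 2, axis=1)) >= (2.0 * r) ** 2:
                    centers[n] = c
                    n += 1
            return centers[:n]

        box = np.array([5.0, 5.0, 5.0])
        centers = pack_spheres(box, r=0.5, target_pf=0.20)
        pf = len(centers) * 4.0 / 3.0 * np.pi * 0.5 ** 3 / np.prod(box)
        print(f"packed {len(centers)} spheres, packing fraction {pf:.3f}")
        # Nearest-neighbour centre distances (arbitrary units)
        d2 = np.sum((centers[:, None, :] - centers[None, :, :]) ** 2, axis=-1)
        np.fill_diagonal(d2, np.inf)
        print(f"mean nearest-neighbour distance {np.sqrt(d2.min(axis=1)).mean():.3f}")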

  8. Factor Analysis with Ordinal Indicators: A Monte Carlo Study Comparing DWLS and ULS Estimation

    Forero, Carlos G.; Maydeu-Olivares, Alberto; Gallardo-Pujol, David

    2009-01-01

    Factor analysis models with ordinal indicators are often estimated using a 3-stage procedure where the last stage involves obtaining parameter estimates by least squares from the sample polychoric correlations. A simulation study involving 324 conditions (1,000 replications per condition) was performed to compare the performance of diagonally…

  9. Analysis of Unknown Radioactive Samples Spectra by Using (K0IAEA) Monte Carlo Code

    Studying and measuring the gamma-ray energies emitted from radionuclides is a very important field of radiation physics, with many applications in different fields of science, such as the study of nuclear structure, the identification of radioisotopes and their activities, the estimation of absorbed doses, and the determination of nuclear reaction cross sections. New developments in gamma-ray spectrometry have expanded into diverse fields such as astrophysics and medical therapy, for which highly accurate measurements of gamma-rays are needed. This has been achieved by tracing the interaction of gamma-rays in semiconductor and scintillation detectors and the energy deposited within them. This thesis is concerned with the detector Full Energy Peak Efficiency (FEPE). The peak efficiency considers only those interactions that deposit the full energy of the incident radiation and are counted in a differential pulse height distribution. These full-energy events normally show up as a peak at the highest end of the spectrum, while events that deposit only part of the incident radiation energy appear farther to the left in the spectrum. The number of full-energy events can be obtained by simply integrating the total area under the peak. In this work, two identical isotropic neutron sources of Am-Be type, each with an activity of about 175 GBq, were used for irradiating and analyzing unknown samples. Two types of foils, 115In and 197Au, were used for monitoring the thermal neutron flux by the foil activation method. A hyper-pure germanium (HPGe) detector was used in view of its good energy resolution and good signal-to-noise ratio. The γ-lines with the highest intensities were selected and the induced activity analysis was done using the Genie 2000 software. The analysis of the γ-spectra was carried out using the K0-IAEA and ETNA programs. The validity of the analysis was confirmed by neutron activation analysis of known and unknown samples.
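
    The "integrate the total area under the peak" step can be demonstrated on a synthetic spectrum. The sketch below puts an invented Gaussian full-energy peak on a linear background and extracts the net area with a simple flanking-region background subtraction; all window limits and intensities are arbitrary assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic spectrum: linear background plus a Gaussian full-energy
        # peak of 5000 total counts centred on channel 500 (sigma = 3)
        ch = np.arange(400, 600)
        background = 200.0 - 0.1 * ch
        peak = 5000.0 * np.exp(-0.5 * ((ch - 500) / 3.0) ** 2) / (3.0 * np.sqrt(2.0 * np.pi))
        counts = rng.poisson(background + peak)

        # Net peak area: counts in the peak window minus a per-channel
        # background estimated from the flanking regions
        win = (ch >= 490) & (ch <= 510)
        left = (ch >= 470) & (ch < 490)
        right = (ch > 510) & (ch <= 530)
        bkg_per_ch = 0.5 * (counts[left].mean() + counts[right].mean())
        net = counts[win].sum() - bkg_per_ch * win.sum()
        print(f"net full-energy-peak area ~ {net:.0f} counts (true value 5000)")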

  10. An Abstract Monte-Carlo Method for the Analysis of Probabilistic Programs

    Monniaux, David

    2007-01-01

    We introduce a new method, a combination of random testing and abstract interpretation, for the analysis of programs featuring both probabilistic and non-probabilistic nondeterminism. After introducing "ordinary" testing, we show how to combine testing and abstract interpretation, and give formulas linking the precision of the results to the number of iterations. We then discuss complexity and optimization issues and end with some experimental results.

  11. Monte Carlo Few-Group Constant Generation for CANDU 6 Core Analysis

    Seung Yeol Yoo; Hyung Jin Shim; Chang Hyo Kim

    2015-01-01

    The current neutronics design methodology for CANDU-PHWRs, based on two-step calculations, requires determining not only homogenized two-group constants for ordinary fuel bundle lattice cells, by the WIMS-AECL lattice cell code, but also incremental two-group constants arising from the penetration of control devices into the fuel bundle cells, by a supercell analysis code like MULTICELL or DRAGON. As an alternative way to generate the two-group constants necessary for the CANDU-PHWR core analys...

  12. Appraisal of Airport Alternatives in Greenland by the use of Risk Analysis and Monte Carlo Simulation

    Salling, Kim Bang; Leleur, Steen

    2007-01-01

    This paper presents an appraisal study of three different airport proposals in Greenland by the use of an adapted version of the Danish CBA-DK model. The assessment model is based on both a deterministic calculation by the use of conventional cost-benefit analysis and a stochastic calculation by the use of Monte Carlo simulation, applied to uncertain model parameters such as the construction cost and the travel time savings. The obtained model results aim to provide an input to informed decision-making based on an account of the level of desired risk as concerns feasibility risks. This level is presented as the probability of obtaining at least a benefit-cost ratio of a specified value.

  13. Monte Carlo analysis of the slightly enriched uranium-D2O critical experiment LTRIIA (AWBA Development Program)

    The Savannah River Laboratory LTRIIA slightly-enriched uranium-D2O critical experiment was analyzed with ENDF/B-IV data and the RCP01 Monte Carlo program, which modeled the entire assembly in explicit detail. The integral parameters delta25 and delta28 showed good agreement with experiment. However, the calculated Keff was 2 to 3% low, due primarily to an overprediction of U238 capture. This is consistent with results obtained in similar analyses of the H2O-moderated TRX critical experiments. In comparisons with the VIM and MCNP2 Monte Carlo programs, good agreement was observed for calculated reaction rates in the B2=0 cell

  14. Acoustic effects analysis utilizing speckle pattern with fixed-particle Monte Carlo

    Vakili, Ali; Hollmann, Joseph A.; Holt, R. Glynn; DiMarzio, Charles A.

    2016-03-01

    Optical imaging in a turbid medium is limited because of the multiple scattering a photon undergoes while traveling through the medium. Therefore, optical imaging is unable to provide high resolution information deep in the medium. In the case of soft tissue, acoustic waves, unlike light, can travel through the medium with negligible scattering. However, acoustic waves cannot provide contrast as medically relevant as light can. Hybrid solutions have been applied to combine the benefits of both imaging methods. A focused acoustic wave generates a force inside an acoustically absorbing medium known as the acoustic radiation force (ARF). The ARF induces particle displacement within the medium; the amount of displacement is a function of the mechanical properties of the medium and the applied force. To monitor the displacement induced by the ARF, speckle pattern analysis can be used. The speckle pattern is the result of interfering optical waves with different phases. As light travels through the medium it undergoes several scattering events, generating different scattering paths which depend on the locations of the particles. Light waves that travel along these paths have different phases (different optical path lengths). The ARF displaces scatterers within the acoustic focal volume and thereby changes the optical path lengths. In addition, the temperature rise due to the conversion of absorbed acoustic energy to heat changes the index of refraction and therefore also changes the optical path lengths of the scattering paths. The result is a change in the speckle pattern. Results suggest that the average change in the speckle pattern measures the displacement of particles and the temperature rise within the acoustic focal area, and hence can provide the mechanical and thermal properties of the medium.

  15. Parallelizing Monte Carlo with PMC

    Rathkopf, J.A.; Jones, T.R.; Nessett, D.M.; Stanberry, L.C.

    1994-11-01

    PMC (Parallel Monte Carlo) is a system of generic interface routines that allows easy porting of Monte Carlo packages of large-scale physics simulation codes to Massively Parallel Processor (MPP) computers. By loading various versions of PMC, simulation code developers can configure their codes to run in several modes: serial, Monte Carlo runs on the same processor as the rest of the code; parallel, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on other MPP processor(s); distributed, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on a different machine. This multi-mode approach allows maintenance of a single simulation code source regardless of the target machine. PMC handles passing of messages between nodes on the MPP, passing of messages between a different machine and the MPP, distributing work between nodes, and providing independent, reproducible sequences of random numbers. Several production codes have been parallelized under the PMC system. Excellent parallel efficiency in both the distributed and parallel modes results if sufficient workload is available per processor. Experiences with a Monte Carlo photonics demonstration code and a Monte Carlo neutronics package are described.

  16. Development and Application of MCNP5 and KENO-VI Monte Carlo Models for the Atucha-2 PHWR Analysis

    O. Mazzantini; F. D'Auria; M. Pecchia; Parisi, C

    2011-01-01

    The geometrical complexity and the peculiarities of Atucha-2 PHWR require the adoption of advanced Monte Carlo codes for performing realistic neutronic simulations. Core models of Atucha-2 PHWR were developed using both the MCNP5 and KENO-VI codes. The developed models were applied to calculating reactor criticality states at beginning of life, reactor cell constants, and control rod volumes. The last two applications were relevant for performing subsequent three-dimensional neutron kinetic ana...

  17. Monte Carlo Depletion Analysis of a TRU-Cermet Fuel Design for a Sodium Cooled Fast Reactor

    Monte Carlo depletion has generally not been considered practical for designing the equilibrium cycle of a reactor. One objective of the work here was to demonstrate that recent advances in high performance computing clusters are making Monte Carlo core depletion competitive with traditional deterministic depletion methods for some applications. The application here was to a sodium fast reactor core with an innovative TRU cermet fuel type. An equilibrium cycle search was performed for a multi-batch core loading using the Monte Carlo depletion code MONTEBURNS. A final fuel design of 38 w/o TRU with a pin radius of 0.32 cm was found to display operating characteristics similar to those of its metal-fueled counterparts. The TRU-cermet fueled core has a smaller sodium void worth and a less negative axial expansion coefficient. These effects result in a core with safety characteristics similar to the metal fuel design; however, the TRU consumption rate of the cermet fueled core is found to be higher than that of the metal fueled core. (authors)

  18. On applicability of the 3D nodal code DYN3D for the analysis of SFR cores

    DYN3D is an advanced multi-group nodal diffusion code originally developed for the 3D steady-state and transient analysis of Light Water Reactor (LWR) systems with square and hexagonal fuel assembly geometries. The main objective of this work is to demonstrate the feasibility of using DYN3D for the modeling of Sodium-cooled Fast Reactors (SFRs). In this study, a prototypic European Sodium Fast Reactor (ESFR) core is simulated by DYN3D using homogenized multi-group cross sections produced with the Monte Carlo (MC) reactor physics code Serpent. The results of the full core DYN3D calculations are in very good agreement with the reference full core Serpent MC solution. (author)

  19. Behavioral Analysis of Visitors to a Medical Institution’s Website Using Markov Chain Monte Carlo Methods

    Tani, Yuji

    2016-01-01

    Background Consistent with the "attention, interest, desire, memory, action" (AIDMA) model of consumer behavior, patients collect information about available medical institutions using the Internet and select the information relevant to their particular needs. Studies of consumer behavior may be found in areas other than medical institution websites; such research uses Web access logs for visitor search behavior. At this time, research applying the patient search behavior model to medical institution website visitors is lacking. Objective We have developed a hospital website search behavior model using a Bayesian approach to clarify the behavior of medical institution website visitors and determine the probability of their visits, classified by search keyword. Methods We used the website access log of a clinic of internal medicine and gastroenterology in the Sapporo suburbs, collecting data from January 1 through June 30, 2011. The contents of the 6 website pages were the following: home, news, introduction of the medical examinations offered, mammography screening, holiday duty information, and other. The search keywords we identified as best expressing website visitor needs were the top 4 headings from the access log: clinic name, clinic name + regional name, clinic name + medical examination, and mammography screening. Using the search keywords as the explanatory variable, we built a binomial probit model that allows inspection of the contents of each outcome variable. Using this model, we determined a beta value and generated a posterior distribution. We performed the simulation using Markov Chain Monte Carlo methods with a noninformative prior distribution for this model and determined the visit probability, classified by keyword, for each category. Results In the case of the keyword "clinic name," the visit probability to the website, a repeated visit to the website, and the contents page for medical examination was positive. In the case of the

  20. CMB quadrupole depression produced by early fast-roll inflation: Monte Carlo Markov chains analysis of WMAP and SDSS data

    Generically, the classical evolution of the inflaton has a brief fast-roll stage that precedes the slow-roll regime. The fast-roll stage leads to a purely attractive potential in the wave equations of curvature and tensor perturbations (while the potential is purely repulsive in the slow-roll stage). This attractive potential leads to a depression of the CMB quadrupole moment for the curvature and B-mode angular power spectra. A single new parameter emerges in this way in the early universe model: the comoving wave number k1, the characteristic scale of this attractive potential. This mode k1 happens to exit the horizon precisely at the transition from the fast-roll to the slow-roll stage. The fast-roll stage dynamically modifies the initial power spectrum by a transfer function D(k). We compute D(k) by solving the inflaton evolution equations. D(k) effectively suppresses the primordial power for k < k1 and possesses the scaling property D(k) = Ψ(k/k1), where Ψ(x) is a universal function. We perform a Monte Carlo Markov chain analysis of the WMAP and SDSS data including the fast-roll stage and find the value k1 = 0.266 Gpc-1. The quadrupole mode kQ = 0.242 Gpc-1 exits the horizon earlier than k1, about one-tenth of an e-fold before the end of fast roll. We compare the fast-roll fit with a fit without fast roll but including a sharp lower cutoff on the primordial power. Fast roll provides a slightly better fit than a sharp cutoff for the temperature-temperature, temperature-E-mode, and E-mode-E-mode spectra. Moreover, our fits provide nonzero lower bounds for r, while the values of the other cosmological parameters are essentially those of the pure ΛCDM model. We display the real space two-point CTT(θ) correlator. The fact that kQ exits the horizon before the slow-roll stage implies an upper bound on the total number of e-folds Ntot during inflation. Combining this with estimates during the radiation dominated era we obtain Ntot ∼ 66, with the bounds 62 < Ntot < 82. We repeated the same

  1. Neutron analysis of spent fuel storage installation using parallel computing and advanced discrete ordinates and Monte Carlo techniques.

    Shedlock, Daniel; Haghighat, Alireza

    2005-01-01

    In the United States, the Nuclear Waste Policy Act of 1982 mandated centralised storage of spent nuclear fuel by 1988. However, the Yucca Mountain project is currently scheduled to start accepting spent nuclear fuel in 2010. Since many nuclear power plants were only designed for ∼10 y of spent fuel pool storage, > 35 plants have been forced into alternate means of spent fuel storage. In order to continue operation and make room in spent fuel pools, nuclear generators are turning towards independent spent fuel storage installations (ISFSIs). Typical vertical concrete ISFSIs are ∼6.1 m high and 3.3 m in diameter. The inherently large system, and the presence of thick concrete shields result in difficulties for both Monte Carlo (MC) and discrete ordinates (SN) calculations. MC calculations require significant variance reduction and multiple runs to obtain a detailed dose distribution. SN models need a large number of spatial meshes to accurately model the geometry and high quadrature orders to reduce ray effects, therefore, requiring significant amounts of computer memory and time. The use of various differencing schemes is needed to account for radial heterogeneity in material cross sections and densities. Two P3, S12, discrete ordinate, PENTRAN (parallel environment neutral-particle TRANsport) models were analysed and different MC models compared. A multigroup MCNP model was developed for direct comparison to the SN models. The biased A3MCNP (automated adjoint accelerated MCNP) and unbiased (MCNP) continuous energy MC models were developed to assess the adequacy of the CASK multigroup (22 neutron, 18 gamma) cross sections. The PENTRAN SN results are in close agreement (5%) with the multigroup MC results; however, they differ by ∼20-30% from the continuous-energy MC predictions. This large difference can be attributed to the expected difference between multigroup and continuous energy cross sections, and the fact that the CASK library is based on the old ENDF

  2. Neutron analysis of spent fuel storage installation using parallel computing and advanced discrete ordinates and Monte Carlo techniques

    In the United States, the Nuclear Waste Policy Act of 1982 mandated centralised storage of spent nuclear fuel by 1988. However, the Yucca Mountain project is currently scheduled to start accepting spent nuclear fuel in 2010. Since many nuclear power plants were only designed for ∼10 y of spent fuel pool storage, >35 plants have been forced into alternate means of spent fuel storage. In order to continue operation and make room in spent fuel pools, nuclear generators are turning towards independent spent fuel storage installations (ISFSIs). Typical vertical concrete ISFSIs are ∼6.1 m high and 3.3 m in diameter. The inherently large system, and the presence of thick concrete shields result in difficulties for both Monte Carlo (MC) and discrete ordinates (SN) calculations. MC calculations require significant variance reduction and multiple runs to obtain a detailed dose distribution. SN models need a large number of spatial meshes to accurately model the geometry and high quadrature orders to reduce ray effects, therefore, requiring significant amounts of computer memory and time. The use of various differencing schemes is needed to account for radial heterogeneity in material cross sections and densities. Two P3, S12, discrete ordinate, PENTRAN (parallel environment neutral-particle Transport) models were analysed and different MC models compared. A multigroup MCNP model was developed for direct comparison to the SN models. The biased A3MCNP (automated adjoint accelerated MCNP) and unbiased (MCNP) continuous energy MC models were developed to assess the adequacy of the CASK multigroup (22 neutron, 18 gamma) cross sections. The PENTRAN SN results are in close agreement (5%) with the multigroup MC results; however, they differ by ∼20-30% from the continuous-energy MC predictions. This large difference can be attributed to the expected difference between multigroup and continuous energy cross sections, and the fact that the CASK library is based on the old ENDF

  3. Investigations on Monte Carlo based coupled core calculations

    The present trend in advanced and next generation nuclear reactor core designs is towards increased material heterogeneity and geometry complexity. The continuous energy Monte Carlo method has the capability of modeling such core environments with high accuracy. This paper presents results from feasibility studies being performed at the Pennsylvania State University (PSU) on both accelerating Monte Carlo criticality calculations by using hybrid nodal diffusion Monte Carlo schemes and thermal-hydraulic feedback modeling in Monte Carlo core calculations. The computation process is greatly accelerated by calculating the three-dimensional (3D) distributions of the fission source and thermal-hydraulics parameters with the coupled NEM/COBRA-TF code and then using the coupled MCNP5/COBRA-TF code to fine-tune the results to obtain increased accuracy. The PSU NEM code employs cross-sections generated by MCNP5 for pin-cell based nodal compositions. The implementation of the different code modifications facilitating coupled calculations is presented first. Then the coupled hybrid Monte Carlo based code system is applied to a 3D 2x2 pin array extracted from a Boiling Water Reactor (BWR) assembly with reflective radial boundary conditions. The obtained results are discussed and it is shown that performing Monte-Carlo based coupled core steady state calculations is feasible. (authors)

  4. Monte Carlo dose mapping on deforming anatomy

    Zhong, Hualiang; Siebers, Jeffrey V.

    2009-10-01

    This paper proposes a Monte Carlo-based energy and mass congruent mapping (EMCM) method to calculate the dose on deforming anatomy. Unlike dose interpolation methods, EMCM separately maps each voxel's deposited energy and mass from a source image to a reference image with a displacement vector field (DVF) generated by deformable image registration (DIR). EMCM was compared with two other dose mapping methods: energy-based dose interpolation (EBDI) and trilinear dose interpolation (TDI). These methods were implemented in EGSnrc/DOSXYZnrc, validated using a numerical deformable phantom, and compared on clinical CT images. On the numerical phantom with an analytically invertible deformation map, EMCM reproduced the analytic dose solution exactly, while EBDI and TDI had average dose errors of 2.5% and 6.0%. For a lung patient's IMRT treatment plan, EBDI and TDI differed from EMCM by 1.96% and 7.3%, respectively, over the patient's entire dose region. As a 4D Monte Carlo dose calculation technique, EMCM is accurate and its speed is comparable to 3D Monte Carlo simulation. This method may serve as a valuable tool for accurate dose accumulation as well as for 4D dosimetry QA.
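
    A toy one-dimensional rendition of the idea, assuming a voxel-resolution displacement map (all numbers invented): energy and mass are pushed to the reference grid separately and the dose is re-formed as their ratio, which keeps the total deposited energy conserved in a way plain dose interpolation does not.

        import numpy as np

        # Source-grid voxels carry deposited energy E and mass m; a toy DVF
        # sends source voxel i to reference voxel dest[i]
        E = np.array([1.0, 2.0, 4.0, 2.0, 1.0])   # deposited energy per voxel
        m = np.array([1.0, 1.0, 0.5, 1.0, 1.0])   # voxel mass
        dest = np.array([0, 1, 1, 3, 4])          # voxel-level displacement map

        E_map = np.zeros_like(E)
        m_map = np.zeros_like(m)
        np.add.at(E_map, dest, E)                 # energy is conserved by the push
        np.add.at(m_map, dest, m)                 # and so is mass
        dose = np.divide(E_map, m_map, out=np.zeros_like(E_map), where=m_map > 0)

        print("dose on reference grid:", dose)
        print("energy conserved:", np.isclose(E_map.sum(), E.sum()))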

  5. Analysis of low pressure electro-positive and electro-negative rf plasmas with Monte Carlo method

    M. ARDEHALI

    1998-01-01

    A particle-in-cell/Monte Carlo technique is used to simulate low pressure electro-negative and electro-positive plasmas at a frequency of 10 MHz. The potential, electric field, electron and ion densities, and currents flowing across the plasma are presented. To compare the physical properties of the electro-positive gas with those of an electro-negative gas, the input voltage was decreased from 1000 V to 350 V. The simulation results indicate that the introduction of negative ions induces...

  6. A combined Monte Carlo and experimental analysis of light emission phenomena in AlGaAs/GaAs HBTs

    Di Carlo, Aldo; Lugli, Paolo; Canali, Claudio; Malik, Roger; Manfredi, Manfredo; Neviani, Andrea; Zanoni, Enrico; Zandler, Günther

    1998-08-01

    We present a detailed investigation of light emission phenomena connected with the presence of hot carriers in AlGaAs/GaAs heterojunction bipolar transistors. Electrons heated by the strong electric field at the base-collector junction lead to both impact ionization and light emission. A new general-purpose weighted Monte Carlo procedure has been developed to study such effects. The measured hot electroluminescence is attributed to radiative recombinations within the valence and the conduction bands. Good agreement is found between theory and experiment.

  7. A Monte Carlo Method for the Analysis of Gamma Radiation Transport from Distributed Sources in Laminated Shields

    A description is given of a method for calculating the penetration and energy deposition of gamma radiation, based on Monte Carlo techniques. The essential feature is the application of the exponential transformation to promote the transport of penetrating quanta and to balance the steep spatial variations of the source distributions which appear in secondary gamma emission problems. The estimated statistical errors in a number of sample problems, involving concrete shields with thicknesses up to 500 cm, are shown to be quite favorable, even at relatively short computing times. A practical reactor shielding problem is also shown and the predictions compared with measurements
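
    The exponential transformation can be demonstrated on the simplest possible case: transmission through a thick, purely absorbing slab. In the sketch below, free flights are sampled from a stretched kernel with a smaller effective cross section and carry the likelihood-ratio weight, which leaves the estimate unbiased while cutting the variance dramatically; the cross sections, thickness and stretching parameter are arbitrary choices, not the paper's setup.

        import numpy as np

        rng = np.random.default_rng(7)

        sigma = 1.0    # total cross section, pure absorber (assumption)
        thick = 10.0   # slab thickness: exact transmission exp(-10) ~ 4.5e-5
        n = 100000

        # Analog game: sample free flights from sigma*exp(-sigma*s)
        s = rng.exponential(1.0 / sigma, n)
        analog = (s > thick).astype(float)

        # Exponential transform: sample from a stretched kernel and carry
        # the weight w = f(s)/g(s) so the estimator stays unbiased
        sigma_str = 0.3
        s2 = rng.exponential(1.0 / sigma_str, n)
        w = (sigma / sigma_str) * np.exp(-(sigma - sigma_str) * s2)
        stretched = np.where(s2 > thick, w, 0.0)

        for name, x in (("analog", analog), ("stretched", stretched)):
            err = x.std(ddof=1) / np.sqrt(n)
            print(f"{name:9s} mean = {x.mean():.3e} +/- {err:.1e}")
        print(f"exact     {np.exp(-sigma * thick):.3e}")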

  8. Monte Carlo perturbation analysis on isothermal temperature reactivity coefficient of light-water moderated and reflected critical assembly

    Experiments have been carried out on the isothermal temperature reactivity coefficient (ITRC) for the light-water moderated core at the Kyoto University Critical Assembly. The temperature effect on reactivity is analyzed by the Seoul National University Monte Carlo (MC) code, McCARD, which reproduces the experimental data well. The contributions to the ITRC of the density changes of each isotope in the core and reflector regions and of the microscopic cross section changes are quantified by sensitivity analyses based on the MC adjoint-weighted perturbation methods. (author)

  9. An analysis of the OI 1304 A dayglow using a Monte Carlo resonant scattering model with partial frequency redistribution

    Meier, R. R.; Lee, J.-S.

    1982-01-01

    The transport of resonance radiation under optically thick conditions is shown to be accurately described by a Monte Carlo model of the atomic oxygen 1304 A airglow triplet in which partial frequency redistribution, temperature gradients, pure absorption and multilevel scattering are accounted for. All features of the data can be explained by photoelectron impact excitation and the resonant scattering of sunlight, where the latter source dominates below 100 and above 500 km and is stronger at intermediate altitudes than previously thought. It is concluded that the OI 1304 A emission can be used in studies of excitation processes and atomic oxygen densities in planetary atmospheres.

  10. Variance analysis of the Monte Carlo perturbation source method in inhomogeneous linear particle transport problems. Derivation of formulae

    The perturbation source method is used in the Monte Carlo method for calculating small effects in a particle field. It offers promising possibilities for introducing positive correlation between subtracted estimates, even in cases where other methods fail, such as geometrical variations of a given arrangement. The perturbation source method is formulated on the basis of integral equations for the particle fields. The formulae for the second moment of the difference of events are derived. Explicitly, a certain class of transport games and different procedures for generating the so-called perturbation particles are considered

  11. Neutronic analysis for conversion of the Ghana Research Reactor-1 facility using Monte Carlo methods and UO2 LEU fuel

    Anim-Sampong, S.; Akaho, E.H.K.; Maakuu, B.T.; Gbadago, J.K. [Ghana Research Reactor-1 Centre, Dept. of Nuclear Engineering and Materials Science, National Nuclear Research Institute, Ghana Atomic Energy Commission, Legon, Accra (Ghana); Andam, A. [Kwame Nkrumah Univ. of Science and Technology, Dept. of Physics (Ghana); Liaw, J.J.R.; Matos, J.E. [Argonne National Lab., RERTR Programme, Div. of Nuclear Engineering (United States)

    2007-07-01

    Monte Carlo particle transport methods and software (MCNP) have been applied to the modelling, simulation and neutronic analysis for the conversion of the HEU-fuelled (high enrichment uranium) core of the Ghana Research Reactor-1 (GHARR-1) facility. The results show that the MCNP model of the GHARR-1 facility, a commercial version of the Miniature Neutron Source Reactor (MNSR), is good, as the simulated neutronic and other reactor physics parameters agree very well with experimental and zero power results. Three UO2 LEU (low enrichment uranium) fuels with different enrichments (12.6% and 19.75%), core configurations and core loadings were utilized in the conversion studies. The nuclear criticality and kinetic parameters obtained from the Monte Carlo simulation and neutronic analysis using the three UO2 LEU fuels are in close agreement with results obtained for the reference 90.2% U-Al HEU core. The neutron flux variations in the core, fission chamber and irradiation channels for the LEU UO2 fuels show the same trend as for the HEU core, as presented in the paper. The Monte Carlo model confirms a reduction (8% max) in the peak neutron fluxes simulated in the irradiation channels, which are utilized for experimental and commercial activities. However, the reductions or 'losses' in the flux levels affect neither the criticality safety, reactor operations and safety nor the utilization of the reactor. By employing careful core loading optimization techniques and appropriate fuel loadings and enrichments, it is possible to eliminate the apparent reductions or 'losses' in the neutron fluxes, as suggested in this paper. Concerning neutronics, it can be concluded that all 3 LEU fuels qualify as candidates for core conversion of the GHARR-1 facility.

  12. The Self-Gravitating Gas in the Presence of Dark Energy: Monte-Carlo Simulations and Stability Analysis

    De Vega, H J

    2004-01-01

    The self-gravitating gas in the presence of a positive cosmological constant Lambda is studied in thermal equilibrium by Monte Carlo simulations and by the mean field approach. We find excellent agreement between both approaches already for N = 1000 particles in a volume V [the mean field is exact in the infinite N limit]. The domain of stability of the gas is found to increase when the cosmological constant increases. The particle density is shown to be an increasing (decreasing) function of the distance when the dark energy dominates over self-gravity (and vice-versa). We confirm the validity of the thermodynamic limit: N, V -> infinity with N/V^{1/3} and Lambda V^{2/3} fixed. In this dilute limit, extensive thermodynamic quantities like energy, free energy and entropy turn out to be proportional to N. We find that the gas is stable until the isothermal compressibility diverges. Beyond this point the gas becomes an extremely dense object whose properties are studied by Monte Carlo.

  13. Monte Carlo simulation and Boltzmann equation analysis of non-conservative positron transport in H2

    Bankovic, A., E-mail: ana.bankovic@gmail.com [Institute of Physics, University of Belgrade, Pregrevica 118, 11080 Belgrade (Serbia); Dujko, S. [Institute of Physics, University of Belgrade, Pregrevica 118, 11080 Belgrade (Serbia); Centrum Wiskunde and Informatica (CWI), P.O. Box 94079, 1090 GB Amsterdam (Netherlands); ARC Centre for Antimatter-Matter Studies, School of Engineering and Physical Sciences, James Cook University, Townsville, QLD 4810 (Australia); White, R.D. [ARC Centre for Antimatter-Matter Studies, School of Engineering and Physical Sciences, James Cook University, Townsville, QLD 4810 (Australia); Buckman, S.J. [ARC Centre for Antimatter-Matter Studies, Australian National University, Canberra, ACT 0200 (Australia); Petrovic, Z.Lj. [Institute of Physics, University of Belgrade, Pregrevica 118, 11080 Belgrade (Serbia)

    2012-05-15

    This work reports on a new series of calculations of positron transport properties in molecular hydrogen under the influence of a spatially homogeneous electric field. Calculations are performed using a Monte Carlo simulation technique and a multi-term theory for solving the Boltzmann equation. Values and general trends of the mean energy, drift velocity and diffusion coefficients as a function of the reduced electric field E/n0 are reported here. Emphasis is placed on the explicit and implicit effects of positronium (Ps) formation on the drift velocity and diffusion coefficients. Two important phenomena arise: first, for certain regions of E/n0 the bulk and flux components of the drift velocity and longitudinal diffusion coefficient are markedly different, both qualitatively and quantitatively; second, and contrary to previous experience in electron swarm physics, there is a negative differential conductivity (NDC) effect in the bulk drift velocity component with no indication of any NDC for the flux component. In order to understand this atypical manifestation of the drift and diffusion of positrons in H2 under the influence of an electric field, the spatially dependent positron transport properties such as the number of positrons, the average energy and velocity, and the spatially resolved rate for Ps formation are calculated using a Monte Carlo simulation technique. The spatial variation of the positron average energy and the extreme skewing of the spatial profile of the positron swarm are shown to play a central role in understanding the phenomena.

  14. Predictive uncertainty analysis of a highly heterogeneous field-scale groundwater model using null-space Monte Carlo

    Hart, D.; Yoon, H.; McKenna, S. A.

    2011-12-01

    Quantification of the prediction uncertainty resulting from estimated parameters is critical for providing accurate predictive models in field-scale groundwater flow and transport problems. We examine and compare two approaches to defining predictive uncertainty, where both approaches utilize pilot points to parameterize spatially heterogeneous fields. The first approach is the independent calibration of multiple initial "seed" fields created through geostatistical simulation and conditioned to observation data, resulting in an ensemble of calibrated property fields that defines the uncertainty in the calibrated parameters. The second approach is the null-space Monte Carlo (NSMC) method, which employs a decomposition of the Jacobian matrix from a single calibration to define a minimum number of linear combinations of parameters that account for the majority of the sensitivity of the overall calibration to the observed data. Random vectors are applied to the remaining linear combinations of parameters, the null space, to create an ensemble of fields, each of which remains calibrated to the data. We compare these two approaches using a highly parameterized groundwater model of the Culebra dolomite in southeastern New Mexico. Observation data include two decades of steady-state head measurements and pumping test results. The predictive performance measure is the advective travel time from a point to a prescribed boundary. Calibrated parameters at a set of pilot points include the transmissivity, the horizontal hydraulic anisotropy, the storativity, and a section of recharge (> 1200 parameters in total). First, we calibrate 200 random seed fields generated through geostatistical simulation conditioned to observation data. The 11 fields that contain the best and worst scenarios, in terms of calibration and travel time analysis, among the best 100 calibrated results provide a basis for the NSMC method. The NSMC method is used to generate 200 calibration-constrained parameter fields
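
    A linear toy version of the null-space idea (a random stand-in Jacobian, not the Culebra model): perturbing the calibrated parameters only along the SVD null-space directions leaves the simulated observations unchanged, so every ensemble member remains calibrated by construction.

        import numpy as np

        rng = np.random.default_rng(3)

        # Toy linear model d = J @ p with more parameters than observations
        n_par, n_obs = 20, 8
        J = rng.normal(size=(n_obs, n_par))   # stand-in Jacobian
        p_cal = rng.normal(size=n_par)        # "calibrated" parameter vector

        U, s, Vt = np.linalg.svd(J)
        null = Vt[n_obs:]                     # rows spanning the null space of J

        ensemble = [p_cal + null.T @ rng.normal(size=n_par - n_obs)
                    for _ in range(200)]
        misfit = max(np.linalg.norm(J @ p - J @ p_cal) for p in ensemble)
        print(f"max data misfit over the ensemble: {misfit:.2e}")  # ~ 0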

  15. Monte Carlo Calculations Applied to NRU Reactor and Radiation Physics Analyses

    G.B. Wilkin; Nguyen, T. S.

    2012-01-01

    The statistical MCNP (Monte Carlo N-Particle) code has been satisfactorily used for reactor and radiation physics calculations to support NRU operation and analysis. MCNP enables 3D modeling of the reactor and its components in great detail, the transport calculation of photons (in addition to neutrons), and the capability to model all locations in space, which are beyond the capabilities of the deterministic neutronics methods used for NRU. While the simple single-cell model is efficient for...

  16. Using Monte Carlo transport to accurately predict isotope production and activation analysis rates at the University of Missouri research reactor

    A detailed Monte Carlo N-Particle Transport Code (MCNP5) model of the University of Missouri research reactor (MURR) has been developed. The ability of the model to accurately predict isotope production rates was verified by comparing measured and calculated neutron-capture reaction rates for numerous isotopes. In addition to thermal (1/v) monitors, the benchmarking included a number of isotopes whose (n, γ) reaction rates are very sensitive to the epithermal portion of the neutron spectrum. Using the most recent neutron libraries (ENDF/B-VII.0), the model was able to accurately predict the measured reaction rates in all cases. The model was then combined with ORIGEN 2.2, via MONTEBURNS 2.0, to calculate the production of 99Mo from fission of low-enriched uranium foils. The model was used to investigate both annular and plate LEU foil targets in a variety of arrangements in a graphite irradiation wedge to optimize the production of 99Mo. (author)

  17. Experimental and Monte Carlo analysis of near-breakdown phenomena in GaAs-based heterostructure FETs

    Sleiman, A.; Di Carlo, A.; Tocca, L.; Lugli, P.; Zandler, G.; Meneghesso, G.; Zanoni, E.; Canali, C.; Cetronio, A.; Lanzieri, M.; Peroni, M.

    2001-05-01

    We present experimental and theoretical data related to the impact ionization in the near-breakdown regime of AlGaAs/InGaAs pseudomorphic high-electron-mobility transistors (P-HEMTs) and AlGaAs/GaAs heterostructure field effect transistors (HFETs). Room-temperature electroluminescence spectra of P-HEMT exhibit a maximum around the InGaAs energy gap (1.3 eV). Two peaks have been observed for the HFETs. These experiments are interpreted by means of Monte Carlo simulations. The most important differences between the two devices are found in the hole distribution. While the holes in the P-HEMT are confined in the gate-source channel region and responsible for the breakdown, they are absent from the active part of the HFET. This absence reduces the feedback and improves the on-state breakdown voltage characteristics.

  18. Periodic structures in the Franck-Hertz experiment with neon: Boltzmann equation and Monte-Carlo analysis

    White, R. D.; Robson, R. E.; Nicoletopoulos, P.; Dujko, S.

    2012-05-01

    The Franck-Hertz experiment with neon gas is modelled as an idealised steady-state Townsend experiment and analysed theoretically using (a) a multi-term solution of the Boltzmann equation and (b) Monte-Carlo simulation. Theoretical periodic electron structures, together with the 'window' of reduced fields in which they occur, are compared with experiment, and it is explained why it is necessary to account for all competing scattering processes in order to explain the observed experimental 'wavelength'. The study highlights the fundamental flaws in trying to explain the observations in terms of a single, assumed dominant electronic excitation process, as is the case in textbooks and a myriad of misleading web sites.

  19. An Analysis on the Calculation Efficiency of the Responses Caused by the Biased Adjoint Fluxes in Hybrid Monte Carlo Simulation

    Khuat, Quang Huy; Kim, Song Hyun; Kim, Do Hyun; Shin, Chang Ho [Hanyang University, Seoul (Korea, Republic of)

    2015-05-15

    This technique is known as the Consistent Adjoint Driven Importance Sampling (CADIS) method and is implemented in the SCALE code system. In the CADIS method, an adjoint transport equation has to be solved to determine deterministic importance functions. When using the CADIS method, it has been noted that bias in the adjoint flux estimated by deterministic methods can affect the calculation efficiency and error. The biases of the adjoint function are caused by the methodology, the calculation strategy, the tolerance of the results calculated by the deterministic method, and inaccurate multi-group cross section libraries. In this paper, a study analyzing the influence of biased adjoint functions on Monte Carlo computational efficiency is pursued. A method to estimate the calculation efficiency was proposed for applying biased adjoint fluxes in the CADIS approach. For a benchmark problem, the responses and figures of merit (FOMs) were evaluated with the SCALE code system as the adjoint fluxes were applied. The results show that biased adjoint fluxes significantly affect the calculation efficiencies.

  20. On-board Dose Measurement and its Monte Carlo Analysis in a Low Level Waste Shipping Vessel

    On-board dose measurements were made in a shipping vessel for low level radioactive wastes, the Seiei Maru. The measured values are much smaller than the regulation values both on the hatch covers and in the accommodation area. The dose equivalent rates on the hatch covers are analysed using a continuous energy Monte Carlo code, MCNP 4B, with two kinds of calculational models. One is a detailed model with the geometry of the containers and LLW drums and an asymmetrical source distribution; the results of the detailed calculation reproduce the shape of the measured dose rate distributions. The other is a simplified model in which the source volume is mixed uniformly; the values calculated with the simplified model are twice as large as those calculated with the detailed model. (author)

  1. New Dynamic Monte Carlo Renormalization Group Method

    Lacasse, Martin-D.; Vinals, Jorge; Grant, Martin

    1992-01-01

    The dynamical critical exponent of the two-dimensional spin-flip Ising model is evaluated by a Monte Carlo renormalization group method involving a transformation in time. The results agree very well with a finite-size scaling analysis performed on the same data. The value of z = 2.13 ± 0.01 is obtained, which is consistent with most recent estimates.

  2. Autocorrelations in hybrid Monte Carlo simulations

    Simulations of QCD suffer from severe critical slowing down towards the continuum limit. This problem is known to be prominent in the topological charge; however, all observables are affected to various degrees by these slow modes in the Monte Carlo evolution. We investigate the slowing down in high statistics simulations and propose a new error analysis method, which gives a realistic estimate of the contribution of the slow modes to the errors. (orig.)

  3. Simulated Annealing using Hybrid Monte Carlo

    Salazar, Rafael; Toral, Raúl

    1997-01-01

    We propose a variant of the simulated annealing method for optimization in the multivariate analysis of differentiable functions. The method uses global updates via the hybrid Monte Carlo algorithm, in its generalized version, for the proposal of new configurations. We show how this choice can improve upon the performance of simulated annealing methods (mainly when the number of variables is large) by allowing a more effective searching scheme and a faster annealing schedule.
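
    A compact rendition of the idea, assuming a toy multimodal objective and a geometric cooling schedule (the step size, trajectory length and all constants are invented, not the authors' settings): each update is a standard leapfrog-based hybrid Monte Carlo step targeting exp(-f(x)/T).

        import numpy as np

        rng = np.random.default_rng(9)

        f = lambda x: np.sum(x ** 2 - np.cos(5.0 * x))    # toy objective, min -10 at x = 0
        grad = lambda x: 2.0 * x + 5.0 * np.sin(5.0 * x)

        def hmc_step(x, T, eps, L=20):
            # One hybrid Monte Carlo update targeting exp(-f(x)/T)
            p = rng.normal(size=x.size)
            x1 = x.copy()
            p1 = p - 0.5 * eps * grad(x1) / T             # leapfrog: half kick
            for _ in range(L):
                x1 = x1 + eps * p1                        # drift
                p1 = p1 - eps * grad(x1) / T              # full kick
            p1 = p1 + 0.5 * eps * grad(x1) / T            # trim last kick to a half
            dH = (f(x1) - f(x)) / T + 0.5 * (p1 @ p1 - p @ p)
            return x1 if np.log(rng.random()) < -dH else x

        x = rng.uniform(-2.0, 2.0, size=10)
        for T in np.geomspace(2.0, 0.01, 60):             # annealing schedule
            for _ in range(20):
                x = hmc_step(x, T, eps=0.05 * np.sqrt(T))
        print(f"final f(x) = {f(x):.2f} (global minimum is -10)")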

  4. Proton Upset Monte Carlo Simulation

    O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.

    2009-01-01

    The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment, based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled and the nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.

  5. Monte Carlo Simulation for Particle Detectors

    Pia, Maria Grazia

    2012-01-01

    Monte Carlo simulation is an essential component of experimental particle physics in all the phases of its life-cycle: the investigation of the physics reach of detector concepts, the design of facilities and detectors, the development and optimization of data reconstruction software, the data analysis for the production of physics results. This note briefly outlines some research topics related to Monte Carlo simulation, that are relevant to future experimental perspectives in particle physics. The focus is on physics aspects: conceptual progress beyond current particle transport schemes, the incorporation of materials science knowledge relevant to novel detection technologies, functionality to model radiation damage, the capability for multi-scale simulation, quantitative validation and uncertainty quantification to determine the predictive power of simulation. The R&D on simulation for future detectors would profit from cooperation within various components of the particle physics community, and synerg...

  6. A Monte Carlo solution to skyshine radiation

    A Monte Carlo method was used to calculate the skyshine doses from the 2-ft exposure cell ceiling of an accelerator. Modifications were made to the Monte Carlo program MORSE code to perform this analysis. Adjoint mode calculations provided optimum Russian roulette and splitting parameters which were later used in the forward mode calculations. Russian roulette and splitting were used at the collision sites and at boundary crossings. The exponential transform was used for particle pathlength stretching. The TIGER code was used to generate the anisotropic source term and a P5 Legendre expansion was used to compute the cross sections. Where negative fluxes occurred at detector locations due to large angle scatterings, a macroscopic cross section data bank was used to make Klein-Nishina and pair production flux estimates. With the above modifications, sixty detectors at locations ranging from 10 to 300 ft from the cell wall showed good statistical responses (5 to 10% fsd)
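
    The Russian roulette and splitting games mentioned above can be sketched in isolation; the weight-window bounds below are purely illustrative, and the final check verifies that the game is fair, i.e. that it preserves the expected particle weight.

        import numpy as np

        rng = np.random.default_rng(11)

        W_LOW, W_HIGH, W_SURV = 0.25, 2.0, 1.0   # illustrative weight window

        def roulette_or_split(weight):
            # Returns the list of post-game weights for one particle
            if weight < W_LOW:                   # Russian roulette
                if rng.random() < weight / W_SURV:
                    return [W_SURV]              # survives with restored weight
                return []                        # killed
            if weight > W_HIGH:                  # splitting
                n = int(weight / W_SURV)
                return [weight / n] * n          # n daughters share the weight
            return [weight]

        # Fairness check: expected total weight must equal the input weight
        w0 = 0.1
        total = sum(sum(roulette_or_split(w0)) for _ in range(100000)) / 100000
        print(f"mean surviving weight {total:.4f} vs original {w0}")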

  7. The use of Monte-Carlo simulation and order statistics for uncertainty analysis of a LBLOCA transient (LOFT-L2-5)

    Best estimate computer codes are increasingly used in the nuclear industry for accident management procedures and are planned to be used for licensing procedures. Contrary to conservative codes, which are supposed to give penalizing results, best estimate codes attempt to calculate accidental transients in a realistic way. It therefore becomes of prime importance, in particular for a technical organization such as IRSN, in charge of safety assessment, to know the uncertainty on the results of such codes. Thus, CSNI sponsored a few years ago (published in 1998) the Uncertainty Methods Study (UMS) program on uncertainty methodologies used for a SBLOCA transient (LSTF-CL-18) and is now supporting the BEMUSE program for a LBLOCA transient (LOFT-L2-5). The large majority of BEMUSE participants (9 out of 10) use uncertainty methodologies based on probabilistic modelling, and all of them use Monte-Carlo simulations to propagate the uncertainties through their computer codes. Also, all of the 'probabilistic participants' intend to use order statistics to determine the sampling size of the Monte-Carlo simulation and to derive the uncertainty ranges associated with their computer calculations. The first aim of this paper is to recall the advantages and also the assumptions of probabilistic modelling, and more specifically of order statistics (such as Wilks' formula), in uncertainty methodologies. Indeed, Monte-Carlo methods provide flexible and extremely powerful techniques for solving many of the uncertainty propagation problems encountered in nuclear safety analysis. However, it is important to keep in mind that probabilistic methods are data intensive; that means probabilistic methods cannot produce robust results unless a considerable body of information has been collected. A main interest of order statistics results is that they allow taking into account an unlimited number of uncertain parameters and, from a restricted number of code calculations, provide statistical
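
    For reference, the sample sizes implied by the one-sided Wilks formula are easy to compute; in the sketch below (the helper name is ours), order=1 means the largest of the n code runs is used as the tolerance bound, and the classic 95%/95% answer of 59 runs drops out.

        import math

        def wilks_n(coverage=0.95, confidence=0.95, order=1):
            # Smallest n such that the order-th largest of n i.i.d. runs
            # bounds the coverage quantile with the requested confidence
            n = order
            while True:
                miss = sum(math.comb(n, k) * (1.0 - coverage) ** k * coverage ** (n - k)
                           for k in range(order))
                if 1.0 - miss >= confidence:
                    return n
                n += 1

        print(wilks_n())          # 59 : first-order 95%/95% sample size
        print(wilks_n(order=2))   # 93 : second-order (use the 2nd largest run)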

  8. Monte Carlo and thermal-hydraulic coupling via PVMEXEC

    Successful high-fidelity coupling between a Monte Carlo neutron transport solver and a subchannel thermal-hydraulics solver has been achieved using PVMEXEC, a coupling framework developed for the analysis of transient phenomena in nuclear reactors. The PVMEXEC framework provides a generic program interface for exchanging data between solver kernels for different physical processes, such as radiation transport, heat conduction, and fluid flow. In this study, PVMEXEC was used to couple the in-house Monte Carlo radiation transport code, MC21, with a locally modified version of COBRA-TF. In this coupling scheme, MC21 is responsible for calculating three-dimensional power distributions and COBRA-TF for calculating local fluid temperatures and densities, as well as fuel temperatures. The coupled system was used to analyze 3D single-pin and assembly models based on the Calvert Cliffs commercial PWR. Convergence properties of the coupled simulations are examined and results are compared to simulations conducted using the existing integrated thermal feedback kernel in MC21. (author)

  9. Monte Carlo Particle Lists: MCPL

    Kittelmann, Thomas; Knudsen, Erik B; Willendrup, Peter; Cai, Xiao Xiao; Kanaki, Kalliopi

    2016-01-01

    A binary format with lists of particle state information, for interchanging particles between various Monte Carlo simulation applications, is presented. Portable C code for file manipulation is made available to the scientific community, along with converters and plugins for several popular simulation packages.

  10. SPECIAL ISSUE DEVOTED TO MULTIPLE RADIATION SCATTERING IN RANDOM MEDIA: Estimate of the melanin content in human hairs by the inverse Monte-Carlo method using a system for digital image analysis

    Bashkatov, A. N.; Genina, Elina A.; Kochubei, V. I.; Tuchin, Valerii V.

    2006-12-01

    Based on digital image analysis and the inverse Monte-Carlo method, a proximate analysis method is developed and the optical properties of hairs of different types are estimated in three spectral ranges corresponding to three colour components. The scattering and absorption properties of hairs are separated for the first time by using the inverse Monte-Carlo method. The content of different types of melanin in hairs is estimated from the absorption coefficient. It is shown that the dominating type of melanin in dark hairs is eumelanin, whereas in light hairs pheomelanin dominates.

  11. SKIRT: the design of a suite of input models for Monte Carlo radiative transfer simulations

    Baes, Maarten

    2015-01-01

    The Monte Carlo method is the most popular technique for performing radiative transfer simulations in a general 3D geometry. The algorithms behind, and acceleration techniques for, Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. By contrast, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can...
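
    The core requirement named above, efficiently generating random positions from a 3D density, can be met in its simplest form with plain rejection sampling. A hedged sketch assuming an unnormalised density on a cube; the density bound is estimated crudely from a probe grid, and the example density is an arbitrary choice, not one of SKIRT's components.

        import numpy as np

        rng = np.random.default_rng(5)

        def sample_positions(rho, n, box=1.0, rho_max=None, batch=10000):
            # Rejection-sample n positions from the (unnormalised) density
            # rho(x, y, z) defined on the cube [-box, box]^3
            if rho_max is None:   # crude bound from a probe grid
                g = np.linspace(-box, box, 25)
                X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
                rho_max = 1.1 * rho(X, Y, Z).max()
            out = []
            while len(out) < n:
                p = rng.uniform(-box, box, size=(batch, 3))
                keep = rng.random(batch) * rho_max < rho(p[:, 0], p[:, 1], p[:, 2])
                out.extend(p[keep])
            return np.array(out[:n])

        # Example: spherically symmetric, exponentially declining emissivity
        rho = lambda x, y, z: np.exp(-5.0 * np.sqrt(x * x + y * y + z * z))
        pts = sample_positions(rho, 5000)
        print(pts.mean(axis=0))   # ~ (0, 0, 0) by symmetry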

  12. Monte Carlo optimization of sample dimensions of an 241Am-Be source-based PGNAA setup for water rejects analysis

    Idiri, Z. [Centre de Recherche Nucleaire d' Alger (CRNA), 02 Boulevard Frantz-Fanon, B.P. 399, 16000 Alger (Algeria)]. E-mail: zmidiri@yahoo.fr; Mazrou, H. [Centre de Recherche Nucleaire d' Alger (CRNA), 02 Boulevard Frantz-Fanon, B.P. 399, 16000 Alger (Algeria); Beddek, S. [Centre de Recherche Nucleaire d' Alger (CRNA), 02 Boulevard Frantz-Fanon, B.P. 399, 16000 Alger (Algeria); Amokrane, A. [Faculte de Physique, Universite des Sciences et de la Technologie Houari-Boumediene (USTHB), Alger (Algeria); Azbouche, A. [Centre de Recherche Nucleaire d' Alger (CRNA), 02 Boulevard Frantz-Fanon, B.P. 399, 16000 Alger (Algeria)

    2007-07-21

    The present paper describes the optimization of the sample dimensions of a 241Am-Be neutron source-based prompt gamma neutron activation analysis (PGNAA) setup devoted to in situ environmental water rejects analysis. The optimal dimensions have been achieved following extensive Monte Carlo neutron flux calculations using the MCNP5 computer code. A validation process has been performed for the proposed preliminary setup with measurements of the thermal neutron flux by the activation technique with indium foils, bare and covered with a cadmium sheet. Sensitivity calculations were subsequently performed to simulate real conditions of in situ analysis by determining thermal neutron flux perturbations in samples according to changes in chlorine and organic matter concentrations. The desired optimal sample dimensions were finally achieved once the established constraints regarding neutron damage to the semiconductor gamma detector, pulse pile-up, dead time and radiation hazards were fully met.

  13. A Monte Carlo template-based analysis for very high definition imaging atmospheric Cherenkov telescopes as applied to the VERITAS telescope array


    2015-01-01

    We present a sophisticated likelihood reconstruction algorithm for shower-image analysis of imaging Cherenkov telescopes. The reconstruction algorithm is based on the comparison of the camera pixel amplitudes with the predictions from a Monte Carlo based model. Shower parameters are determined by maximisation of a likelihood function, performed using a numerical non-linear optimisation technique. A related reconstruction technique has already been developed by the CAT and the H.E.S.S. experiments, and provides a more precise direction and energy reconstruction of the photon-induced shower than analyses based on the second moments of the camera image. Examples are shown of the performance of the analysis on simulated gamma-ray data from the VERITAS array.
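
    A toy illustration of the template-likelihood idea (Python). The "template" here is a plain 2D Gaussian standing in for amplitudes predicted from a Monte Carlo shower-image library, and the Gaussian pixel likelihood ignores photoelectron statistics and night-sky background, so this is only a sketch of the fit machinery, not the VERITAS model.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(1)

        # Pixel grid of a toy "camera".
        xs, ys = np.meshgrid(np.linspace(-2, 2, 16), np.linspace(-2, 2, 16))

        def template(params):
            # 2D Gaussian stand-in for the Monte Carlo image template.
            x0, y0, amp = params
            return amp * np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * 0.3 ** 2))

        truth = (0.4, -0.2, 80.0)
        image = template(truth) + rng.normal(0.0, 2.0, xs.shape)  # pedestal noise

        def neg_log_like(params, noise_sigma=2.0):
            # Gaussian pixel likelihood; real analyses fold in photoelectron
            # statistics and night-sky background.
            resid = image - template(params)
            return 0.5 * np.sum((resid / noise_sigma) ** 2)

        fit = minimize(neg_log_like, x0=(0.0, 0.0, 50.0), method="Nelder-Mead")
        print("fitted (x0, y0, amplitude):", fit.x)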

  14. Energy dispersive X-ray fluorescence spectroscopy/Monte Carlo simulation approach for the non-destructive analysis of corrosion patina-bearing alloys in archaeological bronzes: The case of the bowl from the Fareleira 3 site (Vidigueira, South Portugal)

    Energy dispersive X-ray fluorescence (EDXRF) is a well-known technique for non-destructive and in situ analysis of archaeological artifacts, in terms of both qualitative and quantitative elemental composition, because of its rapidity and non-destructiveness. In this study EDXRF and realistic Monte Carlo simulation using the X-ray Monte Carlo (XRMC) code package have been combined to characterize a Cu-based bowl from the Iron Age burial of Fareleira 3 (Southern Portugal). The artifact displays a multilayered structure made up of three distinct layers: a) alloy substrate; b) green oxidized corrosion patina; and c) brownish carbonate soil-derived crust. To assess the reliability of Monte Carlo simulation in reproducing the composition of the bulk metal of the object without resorting to potentially damaging removal of the patina and crust, portable EDXRF analysis was performed on cleaned and patina/crust-coated areas of the artifact. The patina has been characterized by micro X-ray Diffractometry (μXRD) and Back-Scattered Scanning Electron Microscopy + Energy Dispersive Spectroscopy (BSEM + EDS). Results indicate that the EDXRF/Monte Carlo protocol is well suited when a two-layered model is considered, whereas in areas where the patina + crust surface coating is too thick, X-rays from the alloy substrate are not able to exit the sample. - Highlights: • EDXRF/Monte Carlo simulation is used to characterize an archaeological alloy. • EDXRF analysis was performed on cleaned and patina-coated areas of the artifact. • The EDXRF/Monte Carlo protocol is well suited when a two-layered model is considered. • When the patina is too thick, X-rays from the substrate are unable to exit the sample.

  15. Monte Carlo design of a system for the detection of explosive materials and analysis of the dose

    The problems associated with insecurity and terrorism have forced the design of systems for detecting nuclear materials, drugs and explosives, installed on roads and at ports and airports. Explosive materials, like other organic materials, are composed of C, H, O and N, and can be distinguished by the concentrations of these elements; their elemental composition, particularly the concentrations of hydrogen and oxygen, allows them to be distinguished from other organic substances. When these materials are irradiated with neutrons, (n, γ) nuclear reactions are produced, where the emitted photons are prompt gamma rays whose energy is characteristic of each element and whose abundance allows estimating its concentration. The aim of this study was to design, using Monte Carlo methods, a system with a neutron source, a gamma-ray detector and a moderator able to distinguish the presence of RDX and urea. Paraffin, light water, polyethylene and graphite were used as moderators in the design; HPGe and NaI(Tl) were used as detectors. The design that showed the best performance was the light water moderator with HPGe, with a 241AmBe source. For this design, the values of ambient dose equivalent around the system were calculated. (Author)

  16. Spent-fuel assay performance and Monte Carlo Analysis of the Rensselaer slowing-down-time spectrometer

    The slowing-down-time method for the nondestructive assay of light water reactor (LWR) spent fuel is under development at Rensselaer Polytechnic Institute. A series of assay measurements of an LWR fuel assembly replica were carried out at the Rensselaer lead slowing-down-time spectrometer facility by using 238U and 232Th threshold fission detectors and 235U and 239Pu probe chambers. An assay model relating the assay signal and the signals of the probe chambers to the unknown masses of the fissile isotopes in the fuel assembly was developed. The probe chamber data were used to provide individual fission counting spectra of 235U and 239Pu inside the fuel assembly and to simulate spent-fuel assay signals. The fissile isotopic contents of the fuel were determined to better than 1%. Monte Carlo analyses were performed to simulate the experimental measurements, determine certain parameters of the assay system, and investigate the effect of the fuel assembly and hydrogen impurities on the performance of the system. The broadened resolution of the system caused by the presence of the fuel was still found to be sufficient for the accurate and separate assay of the uranium and plutonium fissile isotopes in spent fuel.

  17. Analysis of the TRIGA Mark-II benchmark IEU-COMP-THERM-003 with Monte Carlo code MVP

    The benchmark experiments of the TRIGA Mark-II reactor in the ICSBEP handbook have been analyzed with the Monte Carlo code MVP using cross section libraries based on JENDL-3.3, JENDL-3.2 and ENDF/B-VI.8. MCNP calculations have also been performed with the ENDF/B-VI.6 library for comparison between the MVP and MCNP results. For both cores, labeled 132 and 133, which have different core configurations, the ratio of the calculated to the experimental results (C/E) for keff obtained by the MVP code is 0.999 for JENDL-3.3, 1.003 for JENDL-3.2, and 0.998 for ENDF/B-VI.8. For the MCNP code, the C/E values are 0.998 for both Core 132 and Core 133. All the calculated results agree with the reference values within the experimental uncertainties. The results obtained by MVP with ENDF/B-VI.8 and MCNP with ENDF/B-VI.6 differ by only 0.02% for Core 132, and by 0.01% for Core 133. (author)

  18. Monte Carlo model for the analysis and development of III-V Tunnel-FETs and Impact Ionization-MOSFETs

    Talbo, V.; Mateos, J.; González, T.; Lechaux, Y.; Wichmann, N.; Bollaert, S.; Vasallo, B. G.

    2015-10-01

    Impact-ionization metal-oxide-semiconductor FETs (I-MOSFETs) compete with tunnel FETs (TFETs) to achieve the best behaviour for low-power logic circuits. Specifically, III-V I-MOSFETs are being explored as promising devices due to their good reliability, since the impact ionization events happen away from the gate oxide, and their high cutoff frequency, due to the high electron mobility. To facilitate the design process from the physical point of view, a Monte Carlo (MC) model which includes both impact ionization and band-to-band tunneling is presented. Two ungated InGaAs and InAlAs/InGaAs 100 nm PIN diodes have been simulated. In both devices, the tunnel processes are more frequent than impact ionizations, so that they are found to be appropriate for TFET structures and not for I-MOSFETs. According to our simulations, other narrow-bandgap candidates for the III-V heterostructure, such as InAs or GaSb, and/or PININ structures must be considered for a correct I-MOSFET design.

  19. Benchmark analysis of reactivity experiment in the TRIGA Mark 2 reactor using a continuous energy Monte Carlo code MCNP

    A good set of experimental data (criticality, control rod worth, and fuel element worth distributions) is provided by the Musashi-TRIGA Mark 2 reactor. In a previous paper, keff values for different fuel loading patterns, ranging from the minimum core to the full one, were provided, and those data are a candidate for an ICSBEP evaluation. The evaluation of the control rod worth and fuel element worth distributions presented in this paper could be excellent benchmark data applicable to the validation of calculation techniques used in the field of modern research reactors. As a result of simulating the TRIGA-2 benchmark experiment with a three-dimensional continuous-energy Monte Carlo code (MCNP4A), it was found that the MCNP calculated values of control rod worth were consistent with the experimental data for both the rod-drop and period methods. For the fuel and graphite element worth distributions, the MCNP calculated values agreed well with the measured ones, although consideration of the real control rod positions was needed for calculating the reactivity of fuel elements positioned in the inner ring. (G.K.)

  20. Experimental and Monte-Carlo absolute efficiency calibration of HPGE γ-ray spectrometer for application in neutron activation analysis

    A High Purity Germanium (HPGe) detector is widely used to measure the γ-rays from neutron-activated foils used for neutron spectrum measurements, due to its good energy resolution and photopeak efficiency. To determine the neutron-induced activity in foils, it is very important to carry out an absolute calibration of the photopeak efficiency over a wide range of γ-ray energies. Neutron-activated foils are extended γ-ray sources, while the sources available for efficiency calibration are usually point sources; it is therefore difficult to determine the photopeak efficiency for extended sources using these point sources. A method has been developed to address this problem. The method combines experimental measurements with point sources and the development of a Monte-Carlo N-Particle Code (MCNP) model optimized with the help of these measurements. This MCNP model can then be used to find the photopeak efficiency for any kind of source at any energy. (author)
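
    The point-source part of such a calibration is often summarized by fitting a low-order polynomial to ln(efficiency) versus ln(energy). A minimal sketch with hypothetical calibration values (Python); the extended-source correction would then come from the validated MCNP model.

        import numpy as np

        # Hypothetical point-source calibration data: energy (keV) versus
        # measured photopeak efficiency.
        energy = np.array([122.0, 344.0, 662.0, 779.0, 964.0, 1173.0, 1332.0, 1408.0])
        eff = np.array([0.061, 0.032, 0.019, 0.017, 0.014, 0.012, 0.011, 0.010])

        # Fit a cubic in log-log space, a common HPGe practice.
        coeffs = np.polyfit(np.log(energy), np.log(eff), 3)

        def photopeak_eff(e_kev):
            """Interpolated point-source photopeak efficiency at e_kev."""
            return np.exp(np.polyval(coeffs, np.log(e_kev)))

        print("efficiency at 900 keV: %.4f" % photopeak_eff(900.0))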

  1. DS86 neutron dose: Monte Carlo analysis for depth profile of 152Eu activity in a large stone sample.

    Endo, S; Iwatani, K; Oka, T; Hoshi, M; Shizuma, K; Imanaka, T; Takada, J; Fujita, S; Hasai, H

    1999-06-01

    The depth profile of 152Eu activity induced in a large granite stone pillar by Hiroshima atomic bomb neutrons was calculated by a Monte Carlo N-Particle Transport Code (MCNP). The pillar was on the Motoyasu Bridge, located at a distance of 132 m (WSW) from the hypocenter. It was a square column with a horizontal sectional size of 82.5 cm x 82.5 cm and height of 179 cm. Twenty-one cells from the north to south surface at the central height of the column were specified for the calculation and 152Eu activities for each cell were calculated. The incident neutron spectrum was assumed to be the angular fluence data of the Dosimetry System 1986 (DS86). The angular dependence of the spectrum was taken into account by dividing the whole solid angle into twenty-six directions. The calculated depth profile of specific activity did not agree with the measured profile. A discrepancy was found in the absolute values at each depth with a mean multiplication factor of 0.58 and also in the shape of the relative profile. The results indicated that a reassessment of the neutron energy spectrum in DS86 is required for correct dose estimation. PMID:10494148

  2. Medium-range order in alkali metaphosphate glasses and melts investigated by reverse Monte Carlo simulations and diffraction analysis

    Reverse Monte Carlo simulations have been performed on the alkali metaphosphate glasses Na0.5Li0.5PO3 and LiPO3, using structural experimental data obtained by neutron and x-ray diffraction at 300 K for both systems and as a function of temperature up to the melting point for the mixed composition. It appears that the contrast effect due to the negative scattering length of Li is not the only reason for the difference in the intensity of the prepeak observed in the two systems. The main structural difference lies in the intermediate-range order, while the short-range order is quite similar in both systems. Moreover, it is shown that the intensity increase of the prepeak in the Na0.5Li0.5PO3 structure factor is due to the partial structure factors of the PO4 tetrahedron, sustaining the hypothesis of an ordering between several PO4 tetrahedra and voids with temperature.

  3. A Monte Carlo Analysis of Weight Data from UF6 Cylinder Feed and Withdrawal Stations

    Garner, James R. [ORNL]; Whitaker, J. Michael [ORNL]

    2015-01-01

    As nuclear facilities handling uranium hexafluoride (UF6) cylinders (e.g., UF6 production, enrichment, and fuel fabrication) increase in number and throughput, more automated safeguards measures will likely be needed to enable the International Atomic Energy Agency (IAEA) to achieve its safeguards objectives in a fiscally constrained environment. Monitoring the process data from the load cells built into the cylinder feed and withdrawal (F/W) stations (i.e., cylinder weight data) can significantly increase the IAEA's ability to efficiently achieve the fundamental safeguards task of confirming operations as declared (i.e., no undeclared activities). Researchers at Oak Ridge National Laboratory, Los Alamos National Laboratory, the Joint Research Centre (in Ispra, Italy), and the University of Glasgow are investigating how this weight data can be used for IAEA safeguards purposes while fully protecting the operator's proprietary and sensitive information related to operations. A key question that must be resolved is the necessary frequency of recording data from the process F/W stations to achieve safeguards objectives. This paper summarizes Monte Carlo simulations of typical feed, product, and tails withdrawal cycles and evaluates longer sampling intervals to determine the expected errors caused by low-frequency sampling and its impact on material balance calculations.
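
    A toy version of such a sampling-frequency study (Python). The cycle shape, masses and noise level are invented for illustration; the sketch only shows how sparse sampling biases a naive per-cycle mass estimate.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy feed-station cycle: cylinder weight ramps from 0 to 1000 kg over
        # 10 h, sits full for 2 h, then the cylinder is swapped. All values
        # are invented, not real plant parameters.
        def weight_at(t_hours):
            t = np.asarray(t_hours) % 12.0
            return np.minimum(t / 10.0, 1.0) * 1000.0

        true_mass = 1000.0                                # kg per cycle
        for dt in (0.1, 0.5, 2.0, 6.0):                   # sampling interval, h
            t = np.arange(0.0, 120.0, dt)                 # ten cycles
            w = weight_at(t) + rng.normal(0.0, 0.5, t.size)   # load-cell noise
            # Naive estimator: transferred mass ~ max - min of samples per cycle.
            est = [np.ptp(w[(t >= 12 * c) & (t < 12 * (c + 1))]) for c in range(10)]
            print("dt = %4.1f h  mean bias = %+8.2f kg" % (dt, np.mean(est) - true_mass))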

  4. Modelling cerebral blood oxygenation using Monte Carlo XYZ-PA

    Zam, Azhar; Jacques, Steven L.; Alexandrov, Sergey; Li, Youzhi; Leahy, Martin J.

    2013-02-01

    Continuous monitoring of cerebral blood oxygenation is critically important for the management of many life-threatening conditions. Non-invasive monitoring of cerebral blood oxygenation with a photoacoustic technique offers advantages over current invasive and non-invasive methods. We introduce Monte Carlo XYZ-PA to model the energy deposition in 3D and the time-resolved pressures and velocity potential based on the energy absorbed by the biological tissue. This paper outlines the benefits of using Monte Carlo XYZ-PA for optimization of photoacoustic measurement and imaging. To the best of our knowledge this is the first fully integrated tool for photoacoustic modelling.

  5. Use of Monte Carlo simulation for computational analysis of critical systems on IPPE's facility addressing needs of nuclear safety

    Pavlova, Olga; Tsibulya, Anatoly [FSUE 'SSC RF-IPPE', 249033, Bondarenko Square 1, Obninsk (Russian Federation)

    2008-07-01

    The BFS-1 critical facility was built at the Institute of Physics and Power Engineering (Obninsk, Russia) for full-scale modeling of fast-reactor cores, blankets, in-vessel shielding, and storage. Although BFS-1 is a fast-reactor assembly, it is very flexible and can easily be reconfigured to represent numerous other types of reactor designs. This paper describes specific problems in the calculation of neutron physics characteristics of integral experiments performed at the BFS facility. Analyses of the available integral experiments performed on different critical configurations of the BFS facility were carried out. Calculations of criticality, central reaction rate ratios, and fission rate distributions were performed with the MCNP5 Monte-Carlo code using different files of evaluated nuclear data. MCNP calculations with a multigroup library with 299 energy groups were also made for comparison with the pointwise library calculations. (authors)

  6. Overview of the MCU Monte Carlo software package

    Highlights: • MCU is the Monte Carlo code for particle transport in 3D systems with depletion. • Criticality and fixed source problems are solved using pure point-wise approximation. • MCU is parallelized with MPI in three different modes. • MCU has coolant, fuel and xenon feedback for VVER calculations. • MCU is verified for reactors with thermal, intermediate and fast neutron spectrum. - Abstract: MCU (Monte Carlo Universal) is a project on development and practical use of a universal computer code for simulation of particle transport (neutrons, photons, electrons, positrons) in three-dimensional systems by means of the Monte Carlo method. This paper provides the information on the current state of the project. The developed libraries of constants are briefly described, and the potentialities of the MCU-5 package modules and the executable codes compiled from them are characterized. Examples of important problems of reactor physics solved with the code are presented

  7. Shell model Monte Carlo methods

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such shell model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, the thermal behavior of γ-soft nuclei, and the calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs

  8. Kinematics of multigrid Monte Carlo

    We study the kinematics of multigrid Monte Carlo algorithms by means of acceptance rates for nonlocal Metropolis update proposals. An approximation formula for acceptance rates is derived. We present a comparison of different coarse-to-fine interpolation schemes in free field theory, where the formula is exact. The predictions of the approximation formula for several interacting models are well confirmed by Monte Carlo simulations. The following rule is found: for a critical model with fundamental Hamiltonian H(φ), absence of critical slowing down can only be expected if the expansion of (H(φ+ψ)) in terms of the shift ψ contains no relevant (mass) term. We also introduce a multigrid update procedure for nonabelian lattice gauge theory and study the acceptance rates for gauge group SU(2) in four dimensions. (orig.)

  9. Efficient Markov chain Monte Carlo implementation of Bayesian analysis of additive and dominance genetic variances in noninbred pedigrees.

    Waldmann, Patrik; Hallander, Jon; Hoti, Fabian; Sillanpää, Mikko J

    2008-06-01

    Accurate and fast computation of quantitative genetic variance parameters is of great importance in both natural and breeding populations. For experimental designs with complex relationship structures it can be important to include both additive and dominance variance components in the statistical model. In this study, we introduce a Bayesian Gibbs sampling approach for estimation of additive and dominance genetic variances in the traditional infinitesimal model. The method can handle general pedigrees without inbreeding. To optimize between computational time and good mixing of the Markov chain Monte Carlo (MCMC) chains, we used a hybrid Gibbs sampler that combines a single site and a blocked Gibbs sampler. The speed of the hybrid sampler and the mixing of the single-site sampler were further improved by the use of pretransformed variables. Two traits (height and trunk diameter) from a previously published diallel progeny test of Scots pine (Pinus sylvestris L.) and two large simulated data sets with different levels of dominance variance were analyzed. We also performed Bayesian model comparison on the basis of the posterior predictive loss approach. Results showed that models with both additive and dominance components had the best fit for both height and diameter and for the simulated data with high dominance. For the simulated data with low dominance, we needed an informative prior to avoid the dominance variance component becoming overestimated. The narrow-sense heritability estimates in the Scots pine data were lower compared to the earlier results, which is not surprising because the level of dominance variance was rather high, especially for diameter. In general, the hybrid sampler was considerably faster than the blocked sampler and displayed better mixing properties than the single-site sampler. PMID:18558655

  10. Efficient Markov Chain Monte Carlo Implementation of Bayesian Analysis of Additive and Dominance Genetic Variances in Noninbred Pedigrees

    Waldmann, Patrik; Hallander, Jon; Hoti, Fabian; Sillanpää, Mikko J.

    2008-01-01

    Accurate and fast computation of quantitative genetic variance parameters is of great importance in both natural and breeding populations. For experimental designs with complex relationship structures it can be important to include both additive and dominance variance components in the statistical model. In this study, we introduce a Bayesian Gibbs sampling approach for estimation of additive and dominance genetic variances in the traditional infinitesimal model. The method can handle general pedigrees without inbreeding. To optimize between computational time and good mixing of the Markov chain Monte Carlo (MCMC) chains, we used a hybrid Gibbs sampler that combines a single site and a blocked Gibbs sampler. The speed of the hybrid sampler and the mixing of the single-site sampler were further improved by the use of pretransformed variables. Two traits (height and trunk diameter) from a previously published diallel progeny test of Scots pine (Pinus sylvestris L.) and two large simulated data sets with different levels of dominance variance were analyzed. We also performed Bayesian model comparison on the basis of the posterior predictive loss approach. Results showed that models with both additive and dominance components had the best fit for both height and diameter and for the simulated data with high dominance. For the simulated data with low dominance, we needed an informative prior to avoid the dominance variance component becoming overestimated. The narrow-sense heritability estimates in the Scots pine data were lower compared to the earlier results, which is not surprising because the level of dominance variance was rather high, especially for diameter. In general, the hybrid sampler was considerably faster than the blocked sampler and displayed better mixing properties than the single-site sampler. PMID:18558655

  11. Component analysis of sodium void reactivity of step type FBR cores with group-wise Monte Carlo code 'GMVP'

    Reactivity components composing the sodium void reactivity of an FBR core are analyzed with the group-wise Monte Carlo code GMVP, which has been developed by JAEA. The typical way to analyze the reactivity components is to use the perturbation method based on diffusion calculations, but the diffusion approximation cannot be appropriately applied to some types of FBR cores containing large cavity regions. However, in order to find an optimized FBR core with negative sodium void reactivity, we need to know the components of the sodium void reactivity of cores that have a small void reactivity, and such cores are sometimes accompanied by adjacent large cavity regions or gas plenum zones. In this study, we have employed GMVP to simulate the cavity region exactly in geometry and to evaluate the neutron behavior rigorously. The cross section library used is the JFS-3-J3.3 70-group constant set, which is compiled from the JENDL-3.3 library. The objective core is a 'step type' two-zone core, which has a lower inner core height relative to the height of the outer core, and the upper axial blanket is eliminated to enhance the upward neutron leakage under voided conditions. The neutron-leakage component of the reactivity is derived from the difference in the k-effective values directly calculated by GMVP between the intact and voided cores, while the non-leakage components are evaluated using the real and adjoint fluxes calculated with GMVP. The paper presents how the contributions of both components, together with the void reactivity of the cores, change when the core height is varied. (author)

  12. A Comparison of Bayesian Monte Carlo Markov Chain and Maximum Likelihood Estimation Methods for the Statistical Analysis of Geodetic Time Series

    Olivares, G.; Teferle, F. N.

    2013-12-01

    Geodetic time series provide information which helps to constrain theoretical models of geophysical processes. It is well established that such time series, for example from GPS, superconducting gravity or mean sea level (MSL), contain time-correlated noise which is usually assumed to be a combination of a long-term stochastic process (characterized by a power-law spectrum) and random noise. Therefore, when fitting a model to geodetic time series it is essential to also estimate the stochastic parameters besides the deterministic ones. Often the stochastic parameters include the power amplitudes of both time-correlated and random noise, as well as the spectral index of the power-law process. To date, the most widely used method for obtaining these parameter estimates is based on maximum likelihood estimation (MLE). We present an integration method, the Bayesian Markov chain Monte Carlo (MCMC) method, which, by using Markov chains, provides a sample of the posterior distribution of all parameters and, thereby, using Monte Carlo integration, estimates all parameters and their uncertainties simultaneously. This algorithm automatically optimizes the Markov chain step size and estimates the convergence state by spectral analysis of the chain. We assess the MCMC method through comparison with MLE, using the recently released GPS position time series from JPL, and also apply it to the MSL time series from the Revised Local Reference database of the PSMSL. Although the parameter estimates for both methods are fairly equivalent, they suggest that the MCMC method has some advantages over MLE: for example, it provides the spectral index uncertainty without further computations, is computationally stable, and detects multimodality.
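
    A minimal Metropolis MCMC sketch in the spirit of the comparison above (Python). The model is deliberately reduced to a Gaussian with unknown mean and variance; a real geodetic analysis would include the power-law noise parameters, and the fixed proposal step stands in for the automatic step-size optimization described in the record.

        import numpy as np

        rng = np.random.default_rng(42)
        data = rng.normal(2.0, 1.5, 500)   # synthetic residual series (illustrative)

        def log_post(mu, log_sigma):
            # Flat priors and an i.i.d. Gaussian likelihood; a geodetic analysis
            # would add power-law noise amplitudes and the spectral index here.
            sigma = np.exp(log_sigma)
            return -data.size * log_sigma - 0.5 * np.sum((data - mu) ** 2) / sigma ** 2

        theta = np.array([0.0, 0.0])
        lp = log_post(*theta)
        chain = []
        for _ in range(20000):
            prop = theta + rng.normal(0.0, 0.05, 2)   # fixed proposal step
            lp_prop = log_post(*prop)
            if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
                theta, lp = prop, lp_prop
            chain.append(theta.copy())

        chain = np.array(chain)[5000:]                # discard burn-in
        mu_s, sigma_s = chain[:, 0], np.exp(chain[:, 1])
        print("posterior mu    = %.3f +/- %.3f" % (mu_s.mean(), mu_s.std()))
        print("posterior sigma = %.3f +/- %.3f" % (sigma_s.mean(), sigma_s.std()))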

  13. Converging Stereotactic Radiotherapy Using Kilovoltage X-Rays: Experimental Irradiation of Normal Rabbit Lung and Dose-Volume Analysis With Monte Carlo Simulation

    Purpose: To validate the feasibility of developing a radiotherapy unit with kilovoltage X-rays through actual irradiation of live rabbit lungs, and to explore the practical issues anticipated in future clinical application to humans through Monte Carlo dose simulation. Methods and Materials: A converging stereotactic irradiation unit was developed, consisting of a modified diagnostic computed tomography (CT) scanner. A tiny cylindrical volume in 13 normal rabbit lungs was individually irradiated with single fractional absorbed doses of 15, 30, 45, and 60 Gy. Observational CT scanning of the whole lung was performed every 2 weeks for 30 weeks after irradiation. After 30 weeks, histopathologic specimens of the lungs were examined. Dose distribution was simulated using the Monte Carlo method, and dose-volume histograms were calculated according to the data. A trial estimation of the effect of respiratory movement on dose distribution was made. Results: A localized hypodense change and subsequent reticular opacity around the planning target volume (PTV) were observed in CT images of rabbit lungs. Dose-volume histograms of the PTVs and organs at risk showed a focused dose distribution to the target and sufficient dose lowering in the organs at risk. Our estimate of the dose distribution, taking respiratory movement into account, revealed dose reduction in the PTV. Conclusions: A converging stereotactic irradiation unit using kilovoltage X-rays was able to generate a focused radiobiologic reaction in rabbit lungs. Dose-volume histogram analysis and estimated sagittal dose distribution, considering respiratory movement, clarified the characteristics of the irradiation received from this type of unit.

  14. Asynchronous Anytime Sequential Monte Carlo

    Paige, Brooks; Wood, Frank; Doucet, Arnaud; Teh, Yee Whye

    2014-01-01

    We introduce a new sequential Monte Carlo algorithm we call the particle cascade. The particle cascade is an asynchronous, anytime alternative to traditional particle filtering algorithms. It uses no barrier synchronizations which leads to improved particle throughput and memory efficiency. It is an anytime algorithm in the sense that it can be run forever to emit an unbounded number of particles while keeping within a fixed memory budget. We prove that the particle cascade is an unbiased mar...

  15. Neural Adaptive Sequential Monte Carlo

    Gu, Shixiang; Ghahramani, Zoubin; Turner, Richard E

    2015-01-01

    Sequential Monte Carlo (SMC), or particle filtering, is a popular class of methods for sampling from an intractable target distribution using a sequence of simpler intermediate distributions. Like other importance sampling-based methods, performance is critically dependent on the proposal distribution: a bad proposal can lead to arbitrarily inaccurate estimates of the target distribution. This paper presents a new method for automatically adapting the proposal using an approximation of the Ku...

  16. Parallel Monte Carlo reactor neutronics

    The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved

  17. Adaptive Multilevel Monte Carlo Simulation

    Hoel, H

    2011-08-23

    This work generalizes the multilevel forward Euler Monte Carlo method introduced by Michael B. Giles (Michael Giles. Oper. Res. 56(3):607–617, 2008) for the approximation of expected values depending on the solution to an Itô stochastic differential equation. That work proposed and analyzed a forward Euler multilevel Monte Carlo method based on a hierarchy of uniform time discretizations and control variates, to reduce the computational effort required by a standard, single-level, forward Euler Monte Carlo method. The present work introduces an adaptive hierarchy of non-uniform time discretizations, generated by an adaptive algorithm introduced in (Anna Dzougoutov et al. Adaptive Monte Carlo algorithms for stopped diffusion. In Multiscale methods in science and engineering, volume 44 of Lect. Notes Comput. Sci. Eng., pages 59–88. Springer, Berlin, 2005; Kyoung-Sook Moon et al. Stoch. Anal. Appl. 23(3):511–558, 2005; Kyoung-Sook Moon et al. An adaptive algorithm for ordinary, stochastic and partial differential equations. In Recent advances in adaptive computation, volume 383 of Contemp. Math., pages 325–343. Amer. Math. Soc., Providence, RI, 2005). This form of the adaptive algorithm generates stochastic, path-dependent time steps and is based on a posteriori error expansions first developed in (Anders Szepessy et al. Comm. Pure Appl. Math. 54(10):1169–1214, 2001). Our numerical results for a stopped diffusion problem exhibit savings in the computational cost to achieve an accuracy of O(TOL): from O(TOL^-3) for a single-level version of the adaptive algorithm to O((TOL^-1 log(TOL))^2).
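
    A compact multilevel Monte Carlo sketch (Python) for E[X_T] of a geometric Brownian motion, using uniform Euler grids per level; the adaptive, path-dependent time stepping of the work above is not reproduced, and all parameters are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)

        # dX = a*X dt + b*X dW, estimate E[X_T] by the telescoping sum
        # E[P_0] + sum_l E[P_l - P_{l-1}] over coupled fine/coarse paths.
        a, b, X0, T = 0.05, 0.2, 1.0, 1.0

        def euler_pair(n_paths, level):
            """Simulate coupled fine/coarse Euler paths; return both endpoints."""
            nf = 2 ** (level + 2)                  # fine-grid steps at this level
            dt = T / nf
            dW = rng.normal(0.0, np.sqrt(dt), (n_paths, nf))
            Xf = np.full(n_paths, X0)
            Xc = np.full(n_paths, X0)
            for k in range(nf):
                Xf += a * Xf * dt + b * Xf * dW[:, k]
                if k % 2 == 1:                     # coarse grid: half the steps,
                    Xc += a * Xc * 2 * dt + b * Xc * (dW[:, k - 1] + dW[:, k])
            return Xf, Xc

        est = np.mean(euler_pair(100000, 0)[0])    # level-0 term, E[P_0]
        for level in range(1, 5):                  # corrections E[P_l - P_{l-1}]
            fine, coarse = euler_pair(100000 // 2 ** level, level)
            est += np.mean(fine - coarse)

        print("MLMC estimate: %.4f (exact: %.4f)" % (est, X0 * np.exp(a * T)))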

  18. Monomial Gamma Monte Carlo Sampling

    Zhang, Yizhe; Wang, Xiangyu; Chen, Changyou; Fan, Kai; Carin, Lawrence

    2016-01-01

    We unify slice sampling and Hamiltonian Monte Carlo (HMC) sampling by demonstrating their connection under the canonical transformation from Hamiltonian mechanics. This insight enables us to extend HMC and slice sampling to a broader family of samplers, called monomial Gamma samplers (MGS). We analyze theoretically the mixing performance of such samplers by proving that the MGS draws samples from a target distribution with zero-autocorrelation, in the limit of a single parameter. This propert...

  19. TRIPOLI-4{sup ®} Monte Carlo code ITER A-lite neutronic model validation

    Jaboulay, Jean-Charles, E-mail: jean-charles.jaboulay@cea.fr [CEA, DEN, Saclay, DM2S, SERMA, F-91191 Gif-sur-Yvette (France); Cayla, Pierre-Yves; Fausser, Clement [MILLENNIUM, 16 Av du Québec Silic 628, F-91945 Villebon sur Yvette (France); Damian, Frederic; Lee, Yi-Kang; Puma, Antonella Li; Trama, Jean-Christophe [CEA, DEN, Saclay, DM2S, SERMA, F-91191 Gif-sur-Yvette (France)

    2014-10-15

    3D Monte Carlo transport codes are extensively used in neutronics analysis, especially in radiation protection and shielding analyses for fission and fusion reactors. TRIPOLI-4{sup ®} is a Monte Carlo code developed by CEA. The aim of this paper is to show its capability to model a large-scale fusion reactor with a complex neutron source and geometry. A benchmark between MCNP5 and TRIPOLI-4{sup ®} on the ITER A-lite model was carried out; the neutron flux, the nuclear heating in the blankets and the tritium production rate in the European TBMs were evaluated and compared. The methodology to build the TRIPOLI-4{sup ®} A-lite model is based on MCAM and the MCNP A-lite model. Simplified TBMs, from KIT, were integrated in the equatorial port. A good agreement between MCNP and TRIPOLI-4{sup ®} is shown; the discrepancies mainly lie within the statistical error.

  20. Polarization imaging of multiply-scattered radiation based on integral-vector Monte Carlo method

    A new integral-vector Monte Carlo method (IVMCM) is developed to analyze the transfer of polarized radiation in 3D multiple scattering particle-laden media. The method is based on a 'successive order of scattering series' expression of the integral formulation of the vector radiative transfer equation (VRTE) for application of efficient statistical tools to improve convergence of Monte Carlo calculations of integrals. After validation against reference results in plane-parallel layer backscattering configurations, the model is applied to a cubic container filled with uniformly distributed monodispersed particles and irradiated by a monochromatic narrow collimated beam. 2D lateral images of effective Mueller matrix elements are calculated in the case of spherical and fractal aggregate particles. Detailed analysis of multiple scattering regimes, which are very similar for unpolarized radiation transfer, allows identifying the sensitivity of polarization imaging to size and morphology.

  1. Monte Carlo design of a system for the detection of explosive materials and analysis of the dose; Diseno Monte Carlo de un sistema para la deteccion de materiales explosivos y analisis de la dosis

    Hernandez A, P. L.; Medina C, D.; Rodriguez I, J. L.; Salas L, M. A.; Vega C, H. R., E-mail: pabloyae_2@hotmail.com [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Cipres No. 10, Fracc. La Penuela, 98068 Zacatecas, Zac. (Mexico)

    2015-10-15

    The problems associated with insecurity and terrorism have forced the design of systems for detecting nuclear materials, drugs and explosives, installed on roads and at ports and airports. Explosive materials, like other organic materials, are composed of C, H, O and N, and can be distinguished by the concentrations of these elements; their elemental composition, particularly the concentrations of hydrogen and oxygen, allows them to be distinguished from other organic substances. When these materials are irradiated with neutrons, (n, γ) nuclear reactions are produced, where the emitted photons are prompt gamma rays whose energy is characteristic of each element and whose abundance allows estimating its concentration. The aim of this study was to design, using Monte Carlo methods, a system with a neutron source, a gamma-ray detector and a moderator able to distinguish the presence of RDX and urea. Paraffin, light water, polyethylene and graphite were used as moderators in the design; HPGe and NaI(Tl) were used as detectors. The design that showed the best performance was the light water moderator with HPGe, with a {sup 241}AmBe source. For this design, the values of ambient dose equivalent around the system were calculated. (Author)

  2. Extending canonical Monte Carlo methods

    Velazquez, L.; Curilef, S.

    2010-02-01

    In this paper, we discuss the implications of a recently obtained equilibrium fluctuation-dissipation relation for the extension of the available Monte Carlo methods based on the consideration of the Gibbs canonical ensemble, to account for the existence of an anomalous regime with negative heat capacities C < 0. The resulting framework appears to be a suitable generalization of the methodology associated with the so-called dynamical ensemble, which is applied to the extension of two well-known Monte Carlo methods: Metropolis importance sampling and the Swendsen-Wang cluster algorithm. These Monte Carlo algorithms are employed to study the anomalous thermodynamic behavior of Potts models with many spin states q, defined on a d-dimensional hypercubic lattice with periodic boundary conditions. They successfully reduce the exponential divergence of the decorrelation time τ with increasing system size N to a weak power-law divergence τ ∝ N^α with α ≈ 0.2 for the particular case of the 2D ten-state Potts model.
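
    For reference, a plain Metropolis importance-sampling sweep for the 2D q-state Potts model, the baseline that the extensions above generalize (Python; lattice size, q and temperature are illustrative).

        import numpy as np

        rng = np.random.default_rng(7)

        # 2D q-state Potts model on an L x L periodic lattice with energy
        # E = -sum over nearest-neighbour bonds of delta(s_i, s_j).
        L, q, beta = 32, 10, 1.0 / 0.7
        spins = rng.integers(0, q, (L, L))

        def site_energy(s, i, j, val):
            # Energy contribution of site (i, j) if it held state val.
            nb = (s[(i + 1) % L, j], s[(i - 1) % L, j],
                  s[i, (j + 1) % L], s[i, (j - 1) % L])
            return -sum(val == n for n in nb)

        for sweep in range(200):
            for _ in range(L * L):
                i, j = rng.integers(0, L, 2)
                new = rng.integers(0, q)
                dE = (site_energy(spins, i, j, new)
                      - site_energy(spins, i, j, spins[i, j]))
                if dE <= 0 or rng.uniform() < np.exp(-beta * dE):
                    spins[i, j] = new

        # Fraction of sites in the majority state (an order-parameter proxy).
        counts = np.bincount(spins.ravel(), minlength=q)
        print("majority-state fraction:", counts.max() / (L * L))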

  3. Non-statistical Monte-Carlo

    We have shown that the transport equation can be solved with particles, as in the Monte-Carlo method, but without random numbers. In the Monte-Carlo method, particles are created from the source and are followed from collision to collision until they are either absorbed or leave the spatial domain. In our method, particles are created from the original source with a variable weight taking into account both collision and absorption. These particles are followed until they leave the spatial domain, and we use them to determine a first-collision source. Another set of particles is then created from this first-collision source and tracked to determine a second-collision source, and so on. This process introduces an approximation which does not exist in the Monte-Carlo method. However, we have analyzed the effect of this approximation and shown that it can be limited. Our method is deterministic and gives reproducible results. Furthermore, when extra accuracy is needed in some region, it is easier to send more particles there. It has the same kinds of applications: problems where streaming is dominant rather than collision-dominated problems.
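
    A toy rendering of the successive-collision idea (Python), in a 1D rod model where particles move only left or right: each generation of weighted particles is transported deterministically to build the next collision source. The geometry and cross sections are invented for illustration; this is not the reference's actual scheme.

        import numpy as np

        n, dx = 60, 0.1                     # cells and cell width (sigma_t = 1 units)
        sigma_t, c = 1.0, 0.5               # total cross section, scattering ratio
        x = (np.arange(n) + 0.5) * dx       # cell centres

        emit = np.zeros(n); emit[n // 2] = 1.0   # unit source in the centre cell
        collision_rate = np.zeros(n)

        for generation in range(30):
            col = np.zeros(n)
            for i in range(n):
                if emit[i] == 0.0:
                    continue
                for j in range(n):
                    d = abs(x[j] - x[i])
                    if j == i:              # collides before leaving its own cell
                        p = 1.0 - np.exp(-sigma_t * dx / 2)
                    else:                   # emitted toward j (weight 1/2 per side)
                        p = 0.5 * (np.exp(-sigma_t * (d - dx / 2))
                                   - np.exp(-sigma_t * (d + dx / 2)))
                    col[j] += emit[i] * p
            collision_rate += col
            emit = c * col                  # scattered fraction re-emits next round

        print("total collision rate: %.4f" % collision_rate.sum())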

  4. Development of core design/analysis technology for integral reactor; verification of SMART nuclear design by Monte Carlo method

    Kim, Chang Hyo; Hong, In Seob; Han, Beom Seok; Jeong, Jong Seong [Seoul National University, Seoul (Korea)]

    2002-03-01

    The objective of this project is to verify the neutronics characteristics of the SMART core design by comparing computational results of the MCNAP code with those of the MASTER code. To achieve this goal, we analyze the neutronics characteristics of the SMART core using the MCNAP code and compare these results with those of the MASTER code. We improved the parallel computing module and developed an error analysis module for the MCNAP code. We analyzed the mechanism of error propagation through depletion computations and developed a calculation module for quantifying these errors. We performed depletion analyses for fuel pins and assemblies of the SMART core. We modeled the 3-D structure of the SMART core, considered the variation of material compositions caused by control rod operation, and performed a depletion analysis of the SMART core. We computed control-rod worths of assemblies and of the reactor core for operation of individual control-rod groups. We computed the core reactivity coefficients (MTC, FTC) and compared these results with computational results of the MASTER code. To verify the error analysis module of the MCNAP code, we analyzed error propagation through the depletion of the SMART B-type assembly. 18 refs., 102 figs., 36 tabs. (Author)

  5. Monte Carlo Simulation of an American Option

    Gikiri Thuo

    2007-04-01

    We implement gradient estimation techniques for sensitivity analysis of option pricing which can be efficiently employed in Monte Carlo simulation. Using these techniques we can simultaneously obtain an estimate of the option value together with estimates of the sensitivities of the option value to various parameters of the model. After deriving the gradient estimates we incorporate them in an iterative stochastic approximation algorithm for pricing an option with early-exercise features. We illustrate the procedure using the example of an American call option with a single dividend that is analytically tractable. In particular we incorporate estimates for the gradient with respect to the early-exercise threshold level.
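
    A minimal sketch of Monte Carlo gradient estimation (Python), simplified from the American-option setting above to a European call: the same simulated paths give both the price and a pathwise (IPA) estimate of delta. All parameters are illustrative.

        import numpy as np

        rng = np.random.default_rng(3)

        # Black-Scholes dynamics, terminal payoff of a European call.
        S0, K, r, sigma, T, n = 100.0, 105.0, 0.03, 0.2, 1.0, 200000

        Z = rng.standard_normal(n)
        ST = S0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * Z)
        disc = np.exp(-r * T)

        payoff = disc * np.maximum(ST - K, 0.0)
        # Pathwise derivative: d(payoff)/dS0 = disc * 1{ST > K} * (ST / S0),
        # since dST/dS0 = ST / S0 for geometric Brownian motion.
        delta_samples = disc * (ST > K) * ST / S0

        print("price %.4f +/- %.4f" % (payoff.mean(), payoff.std() / np.sqrt(n)))
        print("delta %.4f +/- %.4f" % (delta_samples.mean(),
                                       delta_samples.std() / np.sqrt(n)))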

  6. Monte Carlo 2000 Conference : Advanced Monte Carlo for Radiation Physics, Particle Transport Simulation and Applications

    Barão, Fernando; Nakagawa, Masayuki; Távora, Luis; Vaz, Pedro

    2001-01-01

    This book focuses on the state of the art of Monte Carlo methods in radiation physics and particle transport simulation and applications, the latter involving in particular the use and development of electron-gamma, neutron-gamma and hadronic codes. Besides the basic theory and the methods employed, special attention is paid to algorithm development for modeling, and to the analysis of experiments and measurements in a variety of fields ranging from particle physics to medical physics.

  7. Monte Carlo modelling for individual monitoring

    Monte Carlo modelling plays a fundamental role in characterizing the photon irradiation fields (analysis of the direct and backscattered components). Photon spectrum modification due to the presence of the calibration phantom, as a function of both the incident angle and the measurement position on the phantom face, can be studied in detail by numerical simulations. These results make it possible to optimize the dosemeter design and to test air kerma homogeneity regions on the calibration phantom surface (according to the ISO regulation). The logical flow diagram of the type-test procedure is explained, pointing out the role of each Monte Carlo computed parameter. Finally, Monte Carlo studies for the characterization of neutron irradiation halls are concisely outlined, demonstrating the importance of a reliable description of the irradiation experiments for the calibration of personal dosemeters. Numerical simulations coupled with experiment allow separating the direct and scattered contributions to the detector and determining their energy spectra. Furthermore, the simulations can provide suitable information on the true thermal neutron fluence outside a thermal neutron facility in the presence of a calibration phantom (calibrations in terms of Hp(10) for thermal neutrons). (author)

  8. Monte Carlo capabilities of the SCALE code system

    Highlights: • Foundational Monte Carlo capabilities of SCALE are described. • Improvements in continuous-energy treatments are detailed. • New methods for problem-dependent temperature corrections are described. • New methods for sensitivity analysis and depletion are described. • Nuclear data, users interfaces, and quality assurance activities are summarized. - Abstract: SCALE is a widely used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a “plug-and-play” framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE’s graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. An overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2

  9. Parallel implementation of the Monte Carlo transport code EGS4 on the hypercube

    Monte Carlo transport codes are commonly used in the study of particle interactions. The CALOR89 code system is a combination of several Monte Carlo transport and analysis programs. In order to produce good results, a typical Monte Carlo run will have to produce many particle histories. On a single processor computer, the transport calculation can take a huge amount of time. However, if the transport of particles were divided among several processors in a multiprocessor machine, the time can be drastically reduced

  10. Forward physics Monte Carlo (FPMC)

    Boonekamp, M.; Juránek, Vojtěch; Kepka, Oldřich; Royon, C.

    Hamburg : Verlag Deutsches Elektronen-Synchrotron, 2009 - (Jung, H.; De Roeck, A.), s. 758-762 ISBN N. [HERA and the LHC workshop series on the implications of HERA for LHC physics. Geneve (CH), 26.05.2008-30.05.2008] R&D Projects: GA MŠk LC527; GA MŠk LA08032 Institutional research plan: CEZ:AV0Z10100502 Keywords : forward physics * diffraction * two-photon * Monte Carlo Subject RIV: BF - Elementary Particles and High Energy Physics http://arxiv.org/PS_cache/arxiv/pdf/0903/0903.3861v2.pdf

  11. Monte Carlo techniques in radiation therapy

    Verhaegen, Frank

    2013-01-01

    Modern cancer treatment relies on Monte Carlo simulations to help radiotherapists and clinical physicists better understand and compute radiation dose from imaging devices, as well as exploit four-dimensional imaging data. With Monte Carlo-based treatment planning tools now available from commercial vendors, a complete transition to Monte Carlo-based dose calculation methods in radiotherapy could likely take place in the next decade. Monte Carlo Techniques in Radiation Therapy explores the use of Monte Carlo methods for modeling various features of internal and external radiation sources, including light ion beams. The book, the first of its kind, addresses applications of the Monte Carlo particle transport simulation technique in radiation therapy, mainly focusing on external beam radiotherapy and brachytherapy. It presents the mathematical and technical aspects of the methods in particle transport simulations. The book also discusses the modeling of medical linacs and other irradiation devices; issues specific...

  12. Monte Carlo primer for health physicists

    The basic ideas and principles of Monte Carlo calculations are presented in the form of a primer for health physicists. A simple integral with a known answer is evaluated by two different Monte Carlo approaches. Random numbers, which underlie Monte Carlo work, are discussed, and a sample table of random numbers generated by a hand calculator is presented. Monte Carlo calculations of dose and linear energy transfer (LET) from 100-keV neutrons incident on a tissue slab are discussed. The random-number table is used in a hand calculation of the initial sequence of events for a 100-keV neutron entering the slab. Some pitfalls in Monte Carlo work are described. While this primer addresses mainly the bare bones of Monte Carlo, a final section briefly describes some of the more sophisticated techniques used in practice to reduce variance and computing time.
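
    The primer's known-answer exercise can be reproduced in a few lines (Python). The integrand below is our own choice of simple integral, evaluated by the sample-mean and hit-or-miss approaches.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100000

        # Evaluate I = integral_0^1 x^2 dx = 1/3 by two Monte Carlo approaches.

        # 1) Sample-mean method: I ~ mean of f(U), U uniform on (0,1).
        u = rng.uniform(0.0, 1.0, n)
        i_mean = np.mean(u ** 2)

        # 2) Hit-or-miss method: fraction of random points under the curve.
        x = rng.uniform(0.0, 1.0, n)
        y = rng.uniform(0.0, 1.0, n)
        i_hit = np.mean(y < x ** 2)

        print("sample-mean %.4f, hit-or-miss %.4f, exact %.4f"
              % (i_mean, i_hit, 1 / 3))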

  13. Reactor physics simulations with coupled Monte Carlo calculation and computational fluid dynamics

    A computational code system based on coupling the Monte Carlo code MCNP5 and the Computational Fluid Dynamics (CFD) code STAR-CD was developed as an audit tool for lower-order nuclear reactor calculations. This paper presents the methodology of the developed computer program 'McSTAR'. McSTAR is written in the FORTRAN90 programming language and couples MCNP5 and the commercial CFD code STAR-CD. MCNP uses a continuous energy cross section library produced by the NJOY code system from the raw ENDF/B data. A major part of the work was to develop and implement methods to update the cross section library with the temperature distribution calculated by STAR-CD for every region. Three different methods were investigated and implemented in McSTAR. The user subroutines in STAR-CD are modified to read the power density data and assign them to the appropriate variables in the program, and to write an output data file containing the temperature, density and indexing information to perform the mapping between MCNP and STAR-CD cells. Preliminary testing of the code was performed using a 3x3 PWR pin-cell problem. The preliminary results are compared with those obtained from a STAR-CD calculation coupled with the deterministic transport code DeCART. Good agreement in the keff and the power profile was observed. Increased computational capabilities and improvements in computational methods have accelerated interest in high-fidelity modeling of nuclear reactor cores during the last several years. High fidelity has been achieved by utilizing full core neutron transport solutions for the neutronics calculation and computational fluid dynamics solutions for the thermal-hydraulics calculation. Previous researchers have reported the coupling of 3D deterministic neutron transport methods to CFD and their application to practical reactor analysis problems. One of the principal motivations of the work here was to utilize Monte Carlo methods to validate the coupled deterministic neutron transport

  14. Reactor physics simulations with coupled Monte Carlo calculation and computational fluid dynamics.

    Seker, V.; Thomas, J. W.; Downar, T. J.; Purdue Univ.

    2007-01-01

    A computational code system based on coupling the Monte Carlo code MCNP5 and the Computational Fluid Dynamics (CFD) code STAR-CD was developed as an audit tool for lower-order nuclear reactor calculations. This paper presents the methodology of the developed computer program 'McSTAR'. McSTAR is written in the FORTRAN90 programming language and couples MCNP5 and the commercial CFD code STAR-CD. MCNP uses a continuous energy cross section library produced by the NJOY code system from the raw ENDF/B data. A major part of the work was to develop and implement methods to update the cross section library with the temperature distribution calculated by STAR-CD for every region. Three different methods were investigated and implemented in McSTAR. The user subroutines in STAR-CD are modified to read the power density data and assign them to the appropriate variables in the program, and to write an output data file containing the temperature, density and indexing information to perform the mapping between MCNP and STAR-CD cells. Preliminary testing of the code was performed using a 3x3 PWR pin-cell problem. The preliminary results are compared with those obtained from a STAR-CD calculation coupled with the deterministic transport code DeCART. Good agreement in the keff and the power profile was observed. Increased computational capabilities and improvements in computational methods have accelerated interest in high-fidelity modeling of nuclear reactor cores during the last several years. High fidelity has been achieved by utilizing full core neutron transport solutions for the neutronics calculation and computational fluid dynamics solutions for the thermal-hydraulics calculation. Previous researchers have reported the coupling of 3D deterministic neutron transport methods to CFD and their application to practical reactor analysis problems. One of the principal motivations of the work here was to utilize Monte Carlo methods to validate the coupled deterministic

  15. A hybrid Monte Carlo and response matrix Monte Carlo method in criticality calculation

    Full core calculations are very useful and important in reactor physics analysis, especially for computing full core power distributions, optimizing refueling strategies and analyzing fuel depletion. To reduce the computing time and accelerate convergence, a method named the Response Matrix Monte Carlo (RMMC) method, based on analog Monte Carlo simulation, was used to calculate fixed-source neutron transport problems in repeated structures. To make the calculations more accurate, we put forward an RMMC method based on non-analog Monte Carlo simulation and investigate how to use the RMMC method in criticality calculations. A new hybrid RMMC and MC (RMMC+MC) method is then put forward to solve criticality problems with combined repeated and flexible geometries. This new RMMC+MC method, having the advantages of both the MC method and the RMMC method, can not only increase the efficiency of the calculations but also simulate more complex geometries than repeated structures. Several 1-D numerical problems are constructed to test the new RMMC and RMMC+MC methods. The results show that the RMMC method and the RMMC+MC method can efficiently reduce the computing time and the variance of the calculations. Finally, future research directions are mentioned and discussed at the end of this paper to make the RMMC method and the RMMC+MC method more powerful. (authors)

  16. Interacting Particle Markov Chain Monte Carlo

    Rainforth, Tom; Naesseth, Christian A.; Lindsten, Fredrik; Paige, Brooks; van de Meent, Jan-Willem; Doucet, Arnaud; Wood, Frank

    2016-01-01

    We introduce interacting particle Markov chain Monte Carlo (iPMCMC), a PMCMC method that introduces a coupling between multiple standard and conditional sequential Monte Carlo samplers. Like related methods, iPMCMC is a Markov chain Monte Carlo sampler on an extended space. We present empirical results that show significant improvements in mixing rates relative to both non-interacting PMCMC samplers and a single PMCMC sampler with an equivalent total computational budget. An additional advant...

  17. Mean field simulation for Monte Carlo integration

    Del Moral, Pierre

    2013-01-01

    In the last three decades, there has been a dramatic increase in the use of interacting particle methods as a powerful tool in real-world applications of Monte Carlo simulation in computational physics, population biology, computer sciences, and statistical machine learning. Ideally suited to parallel and distributed computation, these advanced particle algorithms include nonlinear interacting jump diffusions; quantum, diffusion, and resampled Monte Carlo methods; Feynman-Kac particle models; genetic and evolutionary algorithms; sequential Monte Carlo methods; adaptive and interacting Marko

  18. Benchmarking of proton transport in Super Monte Carlo simulation program

    The Monte Carlo (MC) method has been traditionally applied in nuclear design and analysis due to its capability of dealing with complicated geometries and multi-dimensional physics problems as well as obtaining accurate results. The Super Monte Carlo Simulation Program (SuperMC) is developed by the FDS Team in China for fusion, fission, and other nuclear applications. Simulations of radiation transport, isotope burn-up, material activation, radiation dose, and biological damage can be performed using SuperMC. Complicated geometries and the whole physical process of various types of particles over a broad energy scale can be well handled. Bi-directional automatic conversion between general CAD models and fully formed input files of SuperMC is supported by MCAM, which is a CAD/image-based automatic modeling program for neutronics and radiation transport simulation. Mixed visualization of dynamic 3D datasets and geometry models is supported by RVIS, which is a nuclear radiation virtual simulation and assessment system. Continuous-energy cross section data from the hybrid evaluated nuclear data library HENDL are utilized to support the simulations. Neutronics fixed-source and criticality calculations of design parameters for reactors of complex geometry and material distribution, based on the transport of neutrons and photons, were achieved in the former version of SuperMC. Recently, proton transport has also been integrated in SuperMC in the energy region up to 10 GeV. The physical processes considered for proton transport include electromagnetic processes and hadronic processes. The electromagnetic processes include ionization, multiple scattering, Bremsstrahlung, and pair production. Public evaluated data from HENDL are used in some electromagnetic processes. In hadronic physics, the Bertini intra-nuclear cascade model with excitons, a preequilibrium model, a nucleus explosion model, a fission model, and an evaporation model are incorporated to

  19. Application of equivalence methods on Monte Carlo method based homogenization multi-group constants

    The multi-group constants generated via the continuous-energy Monte Carlo method do not satisfy the equivalence between the reference calculation and the diffusion calculation applied in reactor core analysis. To satisfy the equivalence theory, the general equivalence theory (GET) and the superhomogenization method (SPH) were applied to the Monte Carlo based group constants, and a simplified reactor core and the C5G7 benchmark were examined with the Monte Carlo constants. The results show that the calculational accuracy of the group constants is improved, and that GET and SPH are good candidates for the equivalence treatment of Monte Carlo homogenization. (authors)
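
    For context, the group constants being corrected here come from a flux-weighted collapse of continuous-energy data; a minimal sketch of that condensation step (the cross section and flux spectrum below are invented placeholders) is:

        import numpy as np

        # Invented continuous-energy data on a fine energy grid (eV)
        e = np.logspace(-2, 7, 2_000)
        sigma = 5.0 + 50.0 / np.sqrt(e)                 # toy 1/v-like cross section (barns)
        phi = e * np.exp(-e / 1.0e6)                    # toy flux spectrum (arbitrary units)
        de = np.gradient(e)                             # energy bin widths

        def collapse(lo, hi):
            """Flux-weighted group constant: Sigma_g = <sigma*phi> / <phi> over the group."""
            m = (e >= lo) & (e < hi)
            return np.sum(sigma[m] * phi[m] * de[m]) / np.sum(phi[m] * de[m])

        sig_thermal = collapse(1.0e-2, 0.625)           # thermal group below 0.625 eV
        sig_fast = collapse(0.625, 1.0e7)
        print(f"thermal group: {sig_thermal:.2f} b, fast group: {sig_fast:.2f} b")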

  20. Introduction to Monte Carlo methods: sampling techniques and random numbers

    The Monte Carlo method describes a very broad area of science, in which many processes, physical systems and phenomena that are statistical in nature and are difficult to solve analytically are simulated by statistical methods employing random numbers. The general idea of Monte Carlo analysis is to create a model that is as similar as possible to the real physical system of interest, and to create interactions within that system based on known probabilities of occurrence, with random sampling of the probability density functions. As the number of individual events (called histories) is increased, the quality of the reported average behavior of the system improves, meaning that the statistical uncertainty decreases. Assuming that the behavior of the physical system can be described by probability density functions, the Monte Carlo simulation can proceed by sampling from these probability density functions, which necessitates a fast and effective way to generate random numbers uniformly distributed on the interval (0,1). Particles are generated within the source region and are transported by sampling from probability density functions through the scattering media until they are absorbed or escape the volume of interest. The outcomes of these random samplings, or trials, must be accumulated or tallied in an appropriate manner to produce the desired result, but the essential characteristic of Monte Carlo is the use of random sampling techniques to arrive at a solution of the physical problem. The major components of Monte Carlo methods for random sampling for a given event are described in the paper
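
    A minimal sketch of the sampling loop just described (slab geometry and material constants invented): path lengths are drawn by inverse-transform sampling of the exponential attenuation law from uniform random numbers on (0,1), and the tally's statistical uncertainty falls as 1/sqrt(N).

        import math, random

        random.seed(1)
        SIGMA_T, P_ABSORB, SLAB = 1.0, 0.5, 3.0   # total xs (1/cm), absorption prob., thickness (cm)

        def history():
            """One particle: returns 1 if it escapes the far side of the slab."""
            x, mu = 0.0, 1.0
            while True:
                xi = 1.0 - random.random()                 # uniform on (0,1]
                x += mu * (-math.log(xi) / SIGMA_T)        # inverse transform: s = -ln(xi)/Sigma_t
                if x >= SLAB:
                    return 1                               # transmitted (escaped)
                if x < 0.0:
                    return 0                               # leaked backwards
                if random.random() < P_ABSORB:
                    return 0                               # absorbed
                mu = random.choice((-1.0, 1.0))            # isotropic scatter (1-D)

        for n in (10**3, 10**4, 10**5):
            t = sum(history() for _ in range(n)) / n
            err = math.sqrt(t * (1 - t) / n)               # binomial standard error
            print(f"N={n:>6}: transmission = {t:.4f} +/- {err:.4f}")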

  1. Parallel MCNP Monte Carlo transport calculations with MPI

    The steady increase in computational performance has made Monte Carlo calculations for large/complex systems possible. However, in order to make these calculations practical, order of magnitude increases in performance are necessary. The Monte Carlo method is inherently parallel (particles are simulated independently) and thus has the potential for near-linear speedup with respect to the number of processors. Further, the ever-increasing accessibility of parallel computers, such as workstation clusters, facilitates the practical use of parallel Monte Carlo. Recognizing the nature of the Monte Carlo method and the trends in available computing, the code developers at Los Alamos National Laboratory implemented the message-passing general-purpose Monte Carlo radiation transport code MCNP (version 4A). The PVM package was chosen by the MCNP code developers because it supports a variety of communication networks, several UNIX platforms, and heterogeneous computer systems. This PVM version of MCNP has been shown to produce speedups that approach the number of processors and thus, is a very useful tool for transport analysis. Due to software incompatibilities on the local IBM SP2, PVM has not been available, and thus it is not possible to take advantage of this useful tool. Hence, it became necessary to implement an alternative message-passing library package into MCNP. Because the message-passing interface (MPI) is supported on the local system, takes advantage of the high-speed communication switches in the SP2, and is considered to be the emerging standard, it was selected
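
    The near-linear speedup follows directly from the independence of histories; the sketch below imitates the message-passing pattern with Python's multiprocessing purely for illustration (it is not the MCNP/PVM/MPI implementation): each worker simulates an independent batch with its own random stream, and the tallies are combined at the end.

        import math
        import random
        from multiprocessing import Pool

        def batch(args):
            """Simulate one independent batch of histories with its own seed."""
            seed, n_hist = args
            rng = random.Random(seed)
            hits = 0
            for _ in range(n_hist):
                # toy 'history': does an exponentially distributed flight exceed 2 mfp?
                if -math.log(1.0 - rng.random()) > 2.0:
                    hits += 1
            return hits

        if __name__ == "__main__":
            n_workers, n_hist = 4, 250_000
            with Pool(n_workers) as pool:
                results = pool.map(batch, [(seed, n_hist) for seed in range(n_workers)])
            n = n_workers * n_hist
            print(f"estimate = {sum(results) / n:.5f} (analytic: {math.exp(-2.0):.5f})")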

  2. Multidimensional stochastic approximation Monte Carlo.

    Zablotskiy, Sergey V; Ivanov, Victor A; Paul, Wolfgang

    2016-06-01

    Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present g(E_{1},E_{2}). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E_{1}+E_{2}) from g(E_{1},E_{2}). PMID:27415383
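
    A minimal flat-histogram sketch in the spirit of SAMC, applied to the density of states of a small periodic Ising chain (the model, gain schedule, and run length are illustrative choices, not the authors' setup): the running estimate of log g(E) is raised at the currently visited energy with a shrinking gain, which drives the walk toward rarely visited energies.

        import math, random
        from collections import defaultdict

        random.seed(3)
        N = 8                                            # spins in a periodic Ising chain
        spins = [1] * N

        def energy(s):
            return -sum(s[i] * s[(i + 1) % N] for i in range(N))

        theta = defaultdict(float)                       # running estimate of log g(E)
        E = energy(spins)
        t0 = 1_000.0
        for t in range(1, 200_000):
            i = random.randrange(N)                      # propose a single spin flip
            dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % N])
            E_new = E + dE
            # flat-histogram acceptance: favor rarely visited energies
            if math.log(1.0 - random.random()) < theta[E] - theta[E_new]:
                spins[i] = -spins[i]
                E = E_new
            theta[E] += t0 / max(t0, t)                  # shrinking SAMC-style gain

        base = theta[-N]                                 # normalize to the ground state
        for e in sorted(theta):
            print(f"E={e:+3d}  log g(E) ~ {theta[e] - base:6.2f}")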

  3. Single scatter electron Monte Carlo

    Svatos, M.M. [Lawrence Livermore National Lab., CA (United States); Wisconsin Univ., Madison, WI (United States)]

    1997-03-01

    A single scatter electron Monte Carlo code (SSMC), CREEP, has been written which bridges the gap between existing transport methods and modeling real physical processes. CREEP simulates ionization, elastic and bremsstrahlung events individually. Excitation events are treated with an excitation-only stopping power. The detailed nature of these simulations allows for calculation of backscatter and transmission coefficients, backscattered energy spectra, stopping powers, energy deposits, depth dose, and a variety of other associated quantities. Although computationally intense, the code relies on relatively few mathematical assumptions, unlike other charged particle Monte Carlo methods such as the commonly-used condensed history method. CREEP relies on sampling the Lawrence Livermore Evaluated Electron Data Library (EEDL) which has data for all elements with an atomic number between 1 and 100, over an energy range from approximately several eV (or the binding energy of the material) to 100 GeV. Compounds and mixtures may also be used by combining the appropriate element data via Bragg additivity.

  4. 1-D EQUILIBRIUM DISCRETE DIFFUSION MONTE CARLO

    T. EVANS; ET AL

    2000-08-01

    We present a new hybrid Monte Carlo method for 1-D equilibrium diffusion problems in which the radiation field coexists with matter in local thermodynamic equilibrium. This method, the Equilibrium Discrete Diffusion Monte Carlo (EqDDMC) method, combines Monte Carlo particles with spatially discrete diffusion solutions. We verify the EqDDMC method with computational results from three slab problems. The EqDDMC method represents an incremental step toward applying this hybrid methodology to non-equilibrium diffusion, where it could be simultaneously coupled to Monte Carlo transport.

  5. Monte Carlo Application ToolKit (MCATK)

    Highlights: • Component-based Monte Carlo radiation transport parallel software library. • Designed to build specialized software applications. • Provides new functionality for existing general purpose Monte Carlo transport codes. • Time-independent and time-dependent algorithms with population control. • Algorithm verification and validation results are provided. - Abstract: The Monte Carlo Application ToolKit (MCATK) is a component-based software library designed to build specialized applications and to provide new functionality for existing general purpose Monte Carlo radiation transport codes. We describe MCATK and its capabilities and present some verification and validation results

  6. Deterministic and Monte Carlo transport models with thermal-hydraulic feedback

    Seubert, A.; Langenbuch, S.; Velkov, K.; Zwermann, W. [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) mbH, Garching (Germany)

    2008-07-01

    This paper gives an overview of recent developments concerning deterministic transport and Monte Carlo methods with thermal-hydraulic feedback. The time-dependent 3D discrete ordinates transport code TORT-TD allows pin-by-pin analyses of transients using few energy groups and anisotropic scattering by solving the time-dependent transport equation using the unconditionally stable implicit method. To account for thermal-hydraulic feedback, TORT-TD has been coupled with the system code ATHLET. Applications to, e.g., a control rod ejection in a 2 x 2 PWR fuel assembly arrangement demonstrate the applicability of the coupled code TORT-TD/ATHLET for test cases. For Monte Carlo steady-state calculations with nuclear point data and thermal-hydraulic feedback, MCNP has been prepared to incorporate thermal-hydraulic parameters. The uncontrolled steady state of the 2 x 2 PWR fuel assembly arrangement, for which the thermal-hydraulic parameter distribution was obtained from a preceding coupled TORT-TD/ATHLET analysis, was chosen as the test case. The result demonstrates the applicability of MCNP to problems with spatial distributions of thermal-fluid-dynamic parameters. The comparison with MCNP results confirms that the accuracy of deterministic transport calculations with pin-wise homogenised few-group cross sections is comparable to Monte Carlo simulations. The presented cases are considered a pre-stage to calculations of larger configurations, such as a quarter core, which are in preparation. (orig.)

  8. Particle in cell/Monte Carlo collision analysis of the problem of identification of impurities in the gas by the plasma electron spectroscopy method

    Kusoglu Sarikaya, C.; Rafatov, I.; Kudryavtsev, A. A.

    2016-06-01

    The work deals with the Particle in Cell/Monte Carlo Collision (PIC/MCC) analysis of the problem of detection and identification of impurities in the nonlocal plasma of gas discharge using the Plasma Electron Spectroscopy (PLES) method. For this purpose, 1d3v PIC/MCC code for numerical simulation of glow discharge with nonlocal electron energy distribution function is developed. The elastic, excitation, and ionization collisions between electron-neutral pairs and isotropic scattering and charge exchange collisions between ion-neutral pairs and Penning ionizations are taken into account. Applicability of the numerical code is verified under the Radio-Frequency capacitively coupled discharge conditions. The efficiency of the code is increased by its parallelization using Open Message Passing Interface. As a demonstration of the PLES method, parallel PIC/MCC code is applied to the direct current glow discharge in helium doped with a small amount of argon. Numerical results are consistent with the theoretical analysis of formation of nonlocal EEDF and existing experimental data.

  9. Statistical weights as variance reduction method in back-scattered gamma radiation Monte Carlo spectrometry analysis of thickness gauge detector response

    The possibility of determining physical quantities (such as the number of particles behind shields of given thickness, energy spectra, detector responses, etc.) with a satisfactory statistical uncertainty, in a relatively short computing time, can be used as a measure of the efficiency of a Monte Carlo method. The numerical simulation of rare events with a straightforward Monte Carlo method is inefficient due to the great number of histories, without scores. In this paper, for the specific geometry of a gamma backscattered thickness gauge, with 60Co and 137Cs as gamma sources, the back-scattered gamma spectrum, probabilities for back-scattering and the spectral characteristics of the detector response were determined using a nonanalog Monte Carlo game with statistical weights applied. (author)
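
    The nonanalog weight treatment can be illustrated in miniature (slab model and constants invented): each collision multiplies the statistical weight by the survival probability instead of killing the particle outright, and Russian roulette terminates low-weight histories without biasing the mean.

        import math, random

        random.seed(7)
        P_SURVIVE, SLAB, W_CUT = 0.5, 5.0, 0.05

        def history_nonanalog():
            """Weight-based (implicit capture) transport through a 1-D slab."""
            x, mu, w = 0.0, 1.0, 1.0
            while True:
                x += mu * (-math.log(1.0 - random.random()))
                if x >= SLAB:
                    return w                         # transmitted weight is tallied
                if x < 0.0:
                    return 0.0
                w *= P_SURVIVE                       # implicit capture: reduce weight
                if w < W_CUT:                        # Russian roulette below the cutoff
                    if random.random() < 0.5:
                        return 0.0
                    w *= 2.0                         # survivors are boosted: unbiased
                mu = random.choice((-1.0, 1.0))

        n = 200_000
        t = sum(history_nonanalog() for _ in range(n)) / n
        print(f"transmitted weight per source particle: {t:.3e}")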

  10. Optimum and efficient sampling for variational quantum Monte Carlo

    Trail, John Robert; 10.1063/1.3488651

    2010-01-01

    Quantum mechanics for many-body systems may be reduced to the evaluation of integrals in 3N dimensions using Monte Carlo, providing the quantum Monte Carlo ab initio methods. Here we limit ourselves to expectation values for trial wavefunctions, that is, to variational quantum Monte Carlo. Almost all previous implementations employ samples distributed as the physical probability density of the trial wavefunction, and assume the Central Limit Theorem to be valid. In this paper we provide an analysis of random error in estimation and optimisation that leads naturally to new sampling strategies with improved computational and statistical properties. A rigorous lower limit to the random error is derived, and an efficient sampling strategy is presented that significantly increases computational efficiency. In addition, the infinite-variance heavy-tailed random errors of optimum parameters in conventional methods are replaced with a Normal random error, strengthening the theoretical basis of optimisation. The method is ...
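
    The variational Monte Carlo setting can be made concrete with the standard textbook example (a 1-D harmonic oscillator with a Gaussian trial wavefunction — not the systems treated in the paper): Metropolis sampling of |psi|^2 followed by averaging of the local energy.

        import math, random

        random.seed(5)

        def local_energy(x, alpha):
            # E_L = -(1/2) psi''/psi + x^2/2 for psi = exp(-alpha x^2)
            return alpha + x * x * (0.5 - 2.0 * alpha * alpha)

        def vmc(alpha, n_steps=200_000, step=1.0):
            x, e_sum, n_acc = 0.0, 0.0, 0
            for _ in range(n_steps):
                x_new = x + step * (2.0 * random.random() - 1.0)
                # Metropolis ratio |psi(x_new)|^2 / |psi(x)|^2
                if random.random() < math.exp(-2.0 * alpha * (x_new**2 - x**2)):
                    x, n_acc = x_new, n_acc + 1
                e_sum += local_energy(x, alpha)
            return e_sum / n_steps, n_acc / n_steps

        for alpha in (0.3, 0.5, 0.7):
            e, acc = vmc(alpha)
            print(f"alpha={alpha}: <E_L> = {e:.4f} (exact minimum 0.5 at alpha=0.5), acc={acc:.2f}")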

  11. Monte Carlo simulation in statistical physics an introduction

    Binder, Kurt

    1992-01-01

    The Monte Carlo method is a computer simulation method which uses random numbers to simulate statistical fluctuations. The method is used to model complex systems with many degrees of freedom. Probability distributions for these systems are generated numerically, and the method then yields numerically exact information on the models. Such simulations may be used to see how well a model system approximates a real one, or to see how valid the assumptions are in an analytical theory. A short and systematic theoretical introduction to the method forms the first part of this book. The second part is a practical guide with plenty of examples and exercises for the student. Problems treated by simple sampling (random and self-avoiding walks, percolation clusters, etc.) are included, along with such topics as finite-size effects and guidelines for the analysis of Monte Carlo simulations. The two parts together provide an excellent introduction to the theory and practice of Monte Carlo simulations

  12. Therapeutic Applications of Monte Carlo Calculations in Nuclear Medicine

    Coulot, J

    2003-08-07

    remarks to be made about the goal and general organization of the discussion. First, the book could not be considered to be strictly about the Monte Carlo method, but perhaps also about internal dosimetry and related Monte Carlo issues. Then, it must be noted that the discussion would sometimes have been clearer if SI units had been used instead of rad or mCi, especially for European readers. There are some confusing features, which could lead to misconceptions, since the authors sometimes refer to treatment planning software as Monte Carlo codes. While the precious contribution of software like MIRDOSE to the field of radiation protection dosimetry must be underlined, it should not be considered, strictly speaking, a Monte Carlo code. It would have been more interesting and relevant to provide a more exhaustive review of Monte Carlo codes (history of the code, transport algorithm, pros and cons), and to make a separate chapter for treatment planning and radiation protection software (3D-ID, MABDOS, MIRDOSE3) which are of clinical routine interest. However, this book is very interesting, of practical interest, and it should have its utility in all modern nuclear medicine departments interested in dosimetry, providing up-to-date data and references. It should be viewed as a good and well-documented handbook, or as a general introduction for beginners and students. (book review)

  13. Monte Carlo analysis of the long-lived fission product neutron capture rates at the Transmutation by Adiabatic Resonance Crossing (TARC) experiment

    Abanades, A., E-mail: abanades@etsii.upm.es [Grupo de Modelizacion de Sistemas Termoenergeticos, ETSII, Universidad Politecnica de Madrid, c/Ramiro de Maeztu, 7, 28040 Madrid (Spain); Alvarez-Velarde, F.; Gonzalez-Romero, E.M. [Centro de Investigaciones Medioambientales y Tecnologicas (CIEMAT), Avda. Complutense, 40, Ed. 17, 28040 Madrid (Spain); Ismailov, K. [Tokyo Institute of Technology, 2-12-1, O-okayama, Meguro-ku, Tokyo 152-8550 (Japan); Lafuente, A. [Grupo de Modelizacion de Sistemas Termoenergeticos, ETSII, Universidad Politecnica de Madrid, c/Ramiro de Maeztu, 7, 28040 Madrid (Spain); Nishihara, K. [Transmutation Section, J-PARC Center, JAEA, Tokai-mura, Ibaraki-ken 319-1195 (Japan); Saito, M. [Tokyo Institute of Technology, 2-12-1, O-okayama, Meguro-ku, Tokyo 152-8550 (Japan); Stanculescu, A. [International Atomic Energy Agency (IAEA), Vienna (Austria); Sugawara, T. [Transmutation Section, J-PARC Center, JAEA, Tokai-mura, Ibaraki-ken 319-1195 (Japan)

    2013-01-15

    Highlights: • TARC experiment benchmark capture rate results. • Utilization of updated databases, including ADSLib. • Self-shielding effect in reactor design for transmutation. • Effect of lead nuclear data. - Abstract: The design of Accelerator Driven Systems (ADS) requires the development of simulation tools that are able to describe in a realistic way their nuclear performance and transmutation rate capability. In this publication, we present an evaluation of state-of-the-art Monte Carlo design tools to assess their performance concerning transmutation of long-lived fission products. This work, performed under the umbrella of the International Atomic Energy Agency, analyses two important aspects for transmutation systems: moderation in lead and neutron captures of 99Tc, 127I and 129I. The analysis of the results shows how shielding effects due to the resonances at epithermal energies of these nuclides strongly affect their transmutation rate. The results suggest that some research effort should be undertaken to improve the quality of iodine nuclear data at epithermal and fast neutron energies to obtain a reliable transmutation estimation.

  14. Diffusion of oxygen interstitials in UO2+x using kinetic Monte Carlo simulations: Role of O/M ratio and sensitivity analysis

    Behera, Rakesh K.; Watanabe, Taku; Andersson, David A.; Uberuaga, Blas P.; Deo, Chaitanya S.

    2016-04-01

    Oxygen interstitials in UO2+x significantly affect the thermophysical properties and microstructural evolution of the oxide nuclear fuel. In hyperstoichiometric urania (UO2+x), these oxygen interstitials form different types of defect clusters, which have different migration behavior. In this study we have used kinetic Monte Carlo (kMC) to evaluate diffusivities of oxygen interstitials, accounting for mono- and di-interstitial clusters. Our results indicate that the predicted diffusivities increase significantly at higher non-stoichiometry (x > 0.01) for di-interstitial clusters compared to a mono-interstitial-only model. The diffusivities calculated at higher temperatures compare better with experimental values than those at lower temperatures. A sensitivity analysis was performed to estimate the effect of input di-interstitial binding energies on the predicted diffusivities and activation energies. While this article only discusses mono- and di-interstitials in evaluating the oxygen diffusion response in UO2+x, future improvements to the model will primarily focus on including energetic definitions of the larger stable interstitial clusters reported in the literature. The addition of larger clusters to the kMC model is expected to improve the comparison of oxygen transport in UO2+x with experiment.
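
    The residence-time kinetic Monte Carlo machinery itself is compact; the sketch below uses an invented one-barrier hop catalogue on a 1-D lattice (not the UO2+x parameterization): an event is selected with probability proportional to its rate and the clock advances by an exponentially distributed waiting time.

        import math, random

        random.seed(11)
        NU, EA, KB = 1.0e13, 0.8, 8.617e-5      # attempt freq (1/s), barrier (eV), eV/K

        def kmc_diffusivity(T, n_steps=200_000, a=3.0e-10):
            """Residence-time kMC for a single walker hopping on a 1-D lattice."""
            rate = NU * math.exp(-EA / (KB * T))  # one rate per direction
            rates = [rate, rate]                  # event catalogue: hop left, hop right
            r_tot = sum(rates)
            x, t = 0, 0.0
            for _ in range(n_steps):
                # select an event with probability proportional to its rate
                u = random.random() * r_tot
                x += -1 if u < rates[0] else +1
                t += -math.log(1.0 - random.random()) / r_tot   # residence time
            # crude single-walker estimate from <x^2> = 2 D t; average
            # over many walkers in practice
            return (x * a) ** 2 / (2.0 * t)

        for T in (800.0, 1200.0, 1600.0):
            print(f"T={T:6.0f} K: D ~ {kmc_diffusivity(T):.2e} m^2/s")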

  15. Analysis of the influence of germanium dead layer on detector calibration simulation for environmental radioactive samples using the Monte Carlo method

    Rodenas, J. E-mail: jrodenas@iqn.upv.es; Pascual, A.; Zarza, I.; Serradell, V.; Ortiz, J.; Ballesteros, L

    2003-01-11

    Germanium crystals have a dead layer that causes a decrease in efficiency, since the layer is not useful for detection, but strongly attenuates photons. The thickness of this inactive layer is not well known due to the existence of a transition zone where photons are increasingly absorbed. Therefore, using data provided by manufacturers in the detector simulation model, some strong discrepancies appear between calculated and measured efficiencies. The Monte Carlo method is applied to simulate the calibration of a HP Ge detector in order to determine the total inactive germanium layer thickness and the active volume that are needed in order to obtain the minimum discrepancy between estimated and experimental efficiency. Calculations and measurements were performed for all of the radionuclides included in a standard calibration gamma cocktail solution. A Marinelli beaker was considered for this analysis, as it is one of the most commonly used sample container for environmental radioactivity measurements. Results indicated that a good agreement between calculated and measured efficiencies is obtained using a value for the inactive germanium layer thickness equal to approximately twice the value provided by the detector manufacturer. For all energy peaks included in the calibration, the best agreement with experimental efficiency was found using a combination of a small thickness of the inactive germanium layer and a small detection active volume.

  17. The effects of LIGO detector noise on a 15-dimensional Markov-chain Monte Carlo analysis of gravitational-wave signals

    Gravitational-wave signals from inspirals of binary compact objects (black holes and neutron stars) are primary targets of the ongoing searches by ground-based gravitational-wave (GW) interferometers (LIGO, Virgo and GEO-600). We present parameter estimation results from our Markov-chain Monte Carlo code SPINspiral on signals from binaries with precessing spins. Two data sets are created by injecting simulated GW signals either into synthetic Gaussian noise or into LIGO detector data. We compute the 15-dimensional probability-density functions (PDFs) for both data sets, as well as for a data set containing LIGO data with a known, loud artefact ('glitch'). We show that the analysis of the signal in detector noise yields accuracies similar to those obtained using simulated Gaussian noise. We also find that while the Markov chains from the glitch do not converge, the PDFs would look consistent with a GW signal present in the data. While our parameter estimation results are encouraging, further investigations into how to differentiate an actual GW signal from noise are necessary.
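
    For orientation, the core of any such MCMC parameter-estimation code is the Metropolis-Hastings step; below is a deliberately tiny sketch on a two-parameter Gaussian toy problem (nothing here is specific to SPINspiral or gravitational-wave likelihoods).

        import math, random

        random.seed(2)
        data = [random.gauss(1.5, 0.8) for _ in range(50)]     # synthetic observations

        def log_post(mu, log_sigma):
            sigma = math.exp(log_sigma)                        # flat priors assumed
            return -sum(0.5 * ((d - mu) / sigma) ** 2 + math.log(sigma) for d in data)

        theta = [0.0, 0.0]
        lp = log_post(*theta)
        chain = []
        for _ in range(10_000):
            prop = [theta[0] + random.gauss(0, 0.1), theta[1] + random.gauss(0, 0.1)]
            lp_new = log_post(*prop)
            if math.log(1.0 - random.random()) < lp_new - lp:  # MH acceptance
                theta, lp = prop, lp_new
            chain.append(tuple(theta))

        burn = chain[2_500:]                                   # discard burn-in
        mu_hat = sum(t[0] for t in burn) / len(burn)
        sig_hat = sum(math.exp(t[1]) for t in burn) / len(burn)
        print(f"posterior means: mu ~ {mu_hat:.3f}, sigma ~ {sig_hat:.3f}")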

  18. Analysis of intervention strategies for inhalation exposure to polycyclic aromatic hydrocarbons and associated lung cancer risk based on a Monte Carlo population exposure assessment model.

    Bin Zhou

    It is difficult to evaluate and compare interventions for reducing exposure to air pollutants, including polycyclic aromatic hydrocarbons (PAHs), a widely found air pollutant in both indoor and outdoor air. This study presents the first application of the Monte Carlo population exposure assessment model to quantify the effects of different intervention strategies on inhalation exposure to PAHs and the associated lung cancer risk. The method was applied to the population in Beijing, China, in the year 2006. Several intervention strategies were designed and studied, including atmospheric cleaning, smoking prohibition indoors, use of clean fuel for cooking, enhancing ventilation while cooking and use of indoor cleaners. Their performances were quantified by the population attributable fraction (PAF) and the potential impact fraction (PIF) of lung cancer risk, and the changes in indoor PAH concentrations and annual inhalation doses were also calculated and compared. The results showed that atmospheric cleaning and use of indoor cleaners were the two most effective interventions. The sensitivity analysis showed that several input parameters had a major influence on the modeled PAH inhalation exposure and the rankings of different interventions; the ranking was reasonably robust for the remaining majority of parameters. The method itself can be extended to other pollutants and to different places. It enables the quantitative comparison of different intervention strategies and would benefit intervention design and relevant policy making.
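
    A sketch of the PAF/PIF bookkeeping (the exposure distribution and dose-response below are invented placeholders, not the Beijing inputs): exposures are drawn by Monte Carlo before and after a hypothetical intervention, and the standard attributable-fraction formulas are applied.

        import math, random

        random.seed(4)
        N, BETA = 50_000, 0.04                  # population draws; invented log-RR per unit exposure

        def draw_exposure(mean):
            return random.gammavariate(2.0, mean / 2.0)      # invented gamma exposure model

        baseline = [draw_exposure(10.0) for _ in range(N)]
        intervened = [draw_exposure(6.0) for _ in range(N)]  # hypothetical 40% reduction

        rr = lambda x: math.exp(BETA * x)                    # invented dose-response
        mean_rr0 = sum(map(rr, baseline)) / N
        mean_rr1 = sum(map(rr, intervened)) / N

        paf = (mean_rr0 - 1.0) / mean_rr0                    # fraction attributable to exposure
        pif = (mean_rr0 - mean_rr1) / mean_rr0               # impact of the intervention
        print(f"PAF = {paf:.3f}, PIF = {pif:.3f}")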

  19. Monte Carlo shielding analysis using deep penetration biasing schemes combined with point estimators and algorithms for the scoring of sensitivity profiles and finite perturbation effects

    The first part of the paper contains a review of two Monte Carlo perturbation and sensitivity methods and includes an error analysis of the estimators. By the use of the Neumann series it can be proved that both methods are based on closely related sampling schemes, in which the Taylor series approach is a first-order approximation of correlated tracking. They make possible, with equivalent programming and computing effort, the simultaneous calculation of different types of perturbations and sensitivity profiles. The fact that correlated tracking is not limited to first-order effects makes it attractive for all applications in which larger perturbations have to be considered, provided that certain restrictions caused by singularities of the variance are taken into account. New developments in combining the sensitivity and perturbation algorithm with a (deep penetration) biasing scheme and point estimators are discussed and illustrated by typical applications to design studies. In particular, neutron streaming through annular gaps around tubes in the sacrificial shield of a BWR was analysed and compared with measurements obtained from the Caorso (840 MW) Nuclear Power Plant. (author)
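
    The variance benefit of correlated tracking for small perturbations is easy to demonstrate in miniature (toy attenuation problem with invented constants): the perturbed and unperturbed cases consume the same random numbers, so most of the noise cancels in their difference.

        import math, random

        def transmit(sigma, u):
            """1 if a particle's sampled optical path survives a 1 cm slab."""
            return 1.0 if -math.log(1.0 - u) / sigma > 1.0 else 0.0

        random.seed(9)
        SIG0, SIG1, N = 1.00, 1.05, 100_000          # 5% cross-section perturbation

        # correlated: both cases consume the SAME random number per history
        us = [random.random() for _ in range(N)]
        d_corr = sum(transmit(SIG1, u) - transmit(SIG0, u) for u in us) / N

        # independent: each case gets its own random numbers
        d_ind = (sum(transmit(SIG1, random.random()) for _ in range(N))
                 - sum(transmit(SIG0, random.random()) for _ in range(N))) / N

        exact = math.exp(-SIG1) - math.exp(-SIG0)
        print(f"exact {exact:+.5f}, correlated {d_corr:+.5f}, independent {d_ind:+.5f}")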

  20. An Analysis on the Radioactivity Uncertainty Caused by Monte Carlo Stochastic Errors Using Sampling Based Method for the Accelerator Activation Problem

    It is common practice to couple a Monte Carlo (MC) transport code with an activation code for the activation analysis of accelerators. The MC method is a stochastic approach to particle transport; hence, stochastic errors are always present in MC transport results. In this study, to estimate the uncertainty caused by the MC stochastic error, a sampling-based sensitivity and uncertainty method is introduced, and a procedure and program to analyze the resulting activity uncertainty were developed. Using the flux and standard-deviation information in the MC transport output, 300 randomly sampled flux sets were produced to calculate the uncertainty. After the estimation procedure was constructed, activation analyses were performed for the βNMR (beta-radiation-detected Nuclear Magnetic Resonance) facility: the air activation calculation was carried out, and the major nuclides affected by the flux uncertainty were identified. A guideline on the number of particle histories is also proposed to obtain reliable activation results. The developed method and procedure will contribute to increasing the accuracy and reliability of activation calculations.
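
    The sampling-based propagation reduces to a short loop; in the sketch below (all fluxes, uncertainties, and cross sections are invented; only the 300-sample choice mirrors the record) each sampled flux set is pushed through the activation estimate and the spread of the outputs is reported.

        import random, statistics

        random.seed(6)
        # (mean flux, MC standard deviation) per energy group, plus a per-group
        # activation cross section -- all values invented for illustration
        flux = [(1.0e10, 3.0e8), (4.0e9, 2.5e8), (8.0e8, 8.0e7)]     # n/cm^2/s
        sigma_act = [1.0e-25, 3.0e-25, 9.0e-25]                      # cm^2
        n_atoms = 1.0e20

        activities = []
        for _ in range(300):                          # 300 sampled flux sets
            sampled = [random.gauss(m, s) for m, s in flux]
            rate = n_atoms * sum(p * s for p, s in zip(sampled, sigma_act))
            activities.append(rate)                   # saturation activity ~ reaction rate

        mean = statistics.fmean(activities)
        sd = statistics.stdev(activities)
        print(f"activity = {mean:.3e} +/- {sd:.3e} Bq ({100 * sd / mean:.2f}% rel.)")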

  1. A Monte Carlo Analysis of the Thrust Imbalance for the Space Launch System Booster During Both the Ignition Transient and Steady State Operation

    Foster, Winfred A., Jr.; Crowder, Winston; Steadman, Todd E.

    2014-01-01

    This paper presents the results of statistical analyses performed to predict the thrust imbalance between two solid rocket motor boosters to be used on the Space Launch System (SLS) vehicle. Two legacy internal ballistics codes developed for the Space Shuttle program were coupled with a Monte Carlo analysis code to determine a thrust imbalance envelope for the SLS vehicle based on the performance of 1000 motor pairs. Thirty-three variables which could impact the performance of the motors during the ignition transient and thirty-eight variables which could impact the performance of the motors during steady-state operation were identified and treated as statistical variables for the analyses. The effects of motor-to-motor variation as well as variations between motors of a single pair were included in the analyses. The statistical variations of the variables were defined based on data provided by NASA's Marshall Space Flight Center for the upgraded five-segment booster and from the Space Shuttle booster when appropriate. The results obtained for the statistical envelope are compared with the design specification thrust imbalance limits for the SLS launch vehicle.

  2. A Monte Carlo Analysis of the Thrust Imbalance for the RSRMV Booster During Both the Ignition Transient and Steady State Operation

    Foster, Winfred A., Jr.; Crowder, Winston; Steadman, Todd E.

    2014-01-01

    This paper presents the results of statistical analyses performed to predict the thrust imbalance between two solid rocket motor boosters to be used on the Space Launch System (SLS) vehicle. Two legacy internal ballistics codes developed for the Space Shuttle program were coupled with a Monte Carlo analysis code to determine a thrust imbalance envelope for the SLS vehicle based on the performance of 1000 motor pairs. Thirty-three variables which could impact the performance of the motors during the ignition transient and thirty-eight variables which could impact the performance of the motors during steady-state operation were identified and treated as statistical variables for the analyses. The effects of motor-to-motor variation as well as variations between motors of a single pair were included in the analyses. The statistical variations of the variables were defined based on data provided by NASA's Marshall Space Flight Center for the upgraded five-segment booster and from the Space Shuttle booster when appropriate. The results obtained for the statistical envelope are compared with the design specification thrust imbalance limits for the SLS launch vehicle.
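
    A drastically simplified sketch of the envelope construction (the thrust model and variable statistics are invented; the real analyses used 33 ignition-transient and 38 steady-state variables): parameter sets are sampled for 1000 motor pairs, with both motor-to-motor and within-pair variation, and percentiles of the imbalance are taken.

        import random, statistics

        random.seed(8)

        def motor_thrust(burn_rate, throat_area):
            return 1.6e7 * burn_rate * throat_area        # invented thrust model (N)

        imbalances = []
        for _ in range(1000):                             # 1000 motor pairs
            # motor-to-motor (lot-level) variation shared across the pair ...
            lot_burn = random.gauss(1.0, 0.01)
            thrusts = []
            for _ in range(2):
                # ... plus smaller within-pair variation per motor
                burn = lot_burn * random.gauss(1.0, 0.005)
                throat = random.gauss(1.0, 0.003)
                thrusts.append(motor_thrust(burn, throat))
            imbalances.append(abs(thrusts[0] - thrusts[1]))

        imbalances.sort()
        p997 = imbalances[int(0.997 * len(imbalances)) - 1]
        print(f"median imbalance {statistics.median(imbalances):.3e} N, "
              f"99.7th percentile {p997:.3e} N")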

  3. Research on GPU Acceleration for Monte Carlo Criticality Calculation

    Xu, Qi; Yu, Ganglin; Wang, Kan

    2014-06-01

    The Monte Carlo neutron transport method can be naturally parallelized on many-core architectures due to the independence of particles during the simulation. The GPU+CPU heterogeneous parallel mode has become an increasingly popular way of parallelism in the field of scientific supercomputing. Thus, this work focuses on the GPU acceleration method for the Monte Carlo criticality simulation, as well as the computational efficiency that GPUs can bring. The "neutron transport step" is introduced to increase the GPU thread occupancy. In order to test the sensitivity to the MC code's complexity, a 1D one-group code and a 3D multi-group general-purpose code are respectively ported to GPUs, and the acceleration effects are compared. The results of the numerical experiments show a considerable acceleration effect for the "neutron transport step" strategy. However, the performance comparison between the 1D code and the 3D code indicates the poor scalability of MC codes on GPUs.
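
    The "neutron transport step" idea — advancing many particles one step at a time so that threads stay busy — can be imitated on a CPU with array operations; the NumPy sketch below is only an analogy for the GPU strategy (geometry and physics invented).

        import numpy as np

        rng = np.random.default_rng(10)
        N, SLAB, P_ABS = 1_000_000, 3.0, 0.5

        x = np.zeros(N)
        mu = np.ones(N)
        alive = np.ones(N, dtype=bool)
        transmitted = 0

        while alive.any():
            idx = np.flatnonzero(alive)                  # one "transport step" for all live particles
            x[idx] += mu[idx] * rng.exponential(1.0, idx.size)
            out_fwd = alive & (x >= SLAB)
            transmitted += int(out_fwd.sum())
            alive &= (x > 0.0) & (x < SLAB)              # leaked or transmitted particles retire
            coll = np.flatnonzero(alive)
            absorbed = rng.random(coll.size) < P_ABS     # batched collision sampling
            alive[coll[absorbed]] = False
            scatter = coll[~absorbed]
            mu[scatter] = rng.choice([-1.0, 1.0], scatter.size)

        print("transmission probability:", transmitted / N)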

  4. KENO, Multigroup P1 Scattering Monte-Carlo Transport Calculation for Criticality, Keff, Flux in 3-D. KENO-5, SCALE-1 Module with Pn Scattering, Super-grouping, Diffusion Albedo Reflection

    1 - Description of problem or function: KENO is a multigroup, Monte Carlo criticality code containing a special geometry package which allows easy description of systems composed of cylinders, spheres, and cuboids (rectangular parallelepipeds) arranged in any order with only one restriction. They cannot be rotated or translated. Each geometrical region must be described as completely enclosing all regions interior to it. For systems not describable using this special geometry package, the program can use the generalized geometry package (GEOM) developed for the O5R Monte Carlo code. It allows any system that can be described by a collection of planes and/or quadratic surfaces, arbitrarily oriented and intersecting in arbitrary fashion. The entire problem can be mocked up in generalized geometry, or one generalized geometry unit or box type can be used alone or in combination with standard KENO units or box types. Rectangular arrays of fissile units are allowed with or without external reflector regions. Output from KENO consists of keff for the system plus an estimate of its standard deviation and the leakage, absorption, and fissions for each energy group plus the totals for all groups. Flux as a function of energy group and region and fission densities as a function of region are optional output. KENO-4: Added features include a neutron balance edit, PICTURE routines to check the input geometry, and a random number sequencing subroutine written in FORTRAN-4. 2 - Method of solution: The scattering treatment used in KENO assumes that the differential neutron scattering cross section can be represented by a P1 Legendre polynomial. Explicit absorption of neutrons is not simulated in KENO. Instead, at each collision point of a neutron tracking history the weight of the neutron is reduced by the absorption probability. When the neutron weight has been reduced below a specified point for the region in which the collision occurs, Russian roulette is played to determine if the

  5. Monte Carlo Form-Finding Method for Tensegrity Structures

    Li, Yue; Feng, Xi-Qiao; Cao, Yan-Ping

    2010-05-01

    In this paper, we propose a Monte Carlo-based approach to solve tensegrity form-finding problems. It uses a stochastic procedure to find the deterministic equilibrium configuration of a tensegrity structure. The suggested Monte Carlo form-finding (MCFF) method is highly efficient because it does not involve complicated matrix operations and symmetry analysis and it works for arbitrary initial configurations. Both regular and non-regular tensegrity problems of large scale can be solved. Some representative examples are presented to demonstrate the efficiency and accuracy of this versatile method.

  6. MONTE-CARLO SIMULATION OF ROAD TRANSPORT EMISSION

    Adam Torok

    2015-09-01

    There are microscopic, mesoscopic and macroscopic models in road traffic analysis and forecasting. From microscopic models one can calculate the macroscopic data by aggregation. The following paper describes the disaggregation of a macroscopic state, which can lead to the microscopic properties of traffic. In order to ensure the transformation between macroscopic and microscopic states, Monte-Carlo simulation was used. An MS Excel macro environment was built to run the Monte-Carlo simulation. With this method the macroscopic data can be disaggregated to microscopic data, and as a by-product mesoscopic, regional data can be obtained. These mesoscopic data can be used further for regional environmental or transport policy assessment.
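
    A minimal sketch of such Monte Carlo disaggregation (the speed distribution and emission curve are invented placeholders): individual vehicle speeds are drawn so that their mean reproduces the macroscopic average, and per-vehicle emissions are aggregated back — note how the fleet total differs from the emission evaluated at the mean speed alone.

        import random, statistics

        random.seed(12)
        MACRO_MEAN_SPEED, FLEET = 52.0, 10_000      # km/h, vehicles/h (invented)

        def emission_g_per_km(v):
            # invented U-shaped emission curve: high at crawl and at high speed
            return 180.0 / v + 0.02 * v

        # disaggregate: draw individual speeds around the macroscopic mean
        speeds = [max(5.0, random.gauss(MACRO_MEAN_SPEED, 12.0)) for _ in range(FLEET)]
        emis = [emission_g_per_km(v) for v in speeds]

        print(f"simulated mean speed: {statistics.fmean(speeds):.1f} km/h")
        print(f"fleet emission: {sum(emis) / 1000:.1f} kg/km "
              f"(vs {FLEET * emission_g_per_km(MACRO_MEAN_SPEED) / 1000:.1f} kg/km "
              "if computed from the mean speed alone)")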

  7. General Monte Carlo code MONK

    The Monte Carlo code MONK is a general program written to provide a high degree of flexibility to the user. MONK is distinguished by its detailed representation of nuclear data in point form, i.e. the cross-section is tabulated at specific energies instead of in the more usual group representation. The nuclear data are unadjusted in the point form, but recently the code has been modified to accept adjusted group data as used in fast and thermal reactor applications. The various geometrical handling capabilities and importance sampling techniques are described. In addition to the nuclear data aspects, the following features are also described: geometrical handling routines, tracking cycles, neutron source and output facilities. 12 references. (U.S.)

  8. Monte Carlo lattice program KIM

    The Monte Carlo program KIM solves the steady-state linear neutron transport equation for a fixed-source problem or, by successive fixed-source runs, for the eigenvalue problem, in a two-dimensional thermal reactor lattice. Fluxes and reaction rates are the main quantities computed by the program, from which power distribution and few-group averaged cross sections are derived. The simulation ranges from 10 MeV to zero and includes anisotropic and inelastic scattering in the fast energy region, the epithermal Doppler broadening of the resonances of some nuclides, and the thermalization phenomenon by taking into account the thermal velocity distribution of some molecules. Besides the well known combinatorial geometry, the program allows complex configurations to be represented by a discrete set of points, an approach greatly improving calculation speed

  9. Monte Carlo application tool-kit (MCATK)

    The Monte Carlo Application tool-kit (MCATK) is a C++ component-based software library designed to build specialized applications and to provide new functionality for existing general purpose Monte Carlo radiation transport codes such as MCNP. We will describe MCATK and its capabilities along with presenting some verification and validations results. (authors)

  10. Fission Matrix Capability for MCNP Monte Carlo

    Carney, Sean E. [Los Alamos National Laboratory; Brown, Forrest B. [Los Alamos National Laboratory; Kiedrowski, Brian C. [Los Alamos National Laboratory; Martin, William R. [Los Alamos National Laboratory

    2012-09-05

    In a Monte Carlo criticality calculation, before the tallying of quantities can begin, a converged fission source (the fundamental eigenvector of the fission kernel) is required. Tallies of interest may include powers, absorption rates, leakage rates, or the multiplication factor (the fundamental eigenvalue of the fission kernel, keff). Just as in the power iteration method of linear algebra, if the dominance ratio (the ratio of the first and zeroth eigenvalues) is high, many iterations of neutron history simulations are required to isolate the fundamental mode of the problem. Optically large systems have large dominance ratios, and systems containing poor neutron communication between regions are also slow to converge. The fission matrix method, implemented into MCNP [1], addresses these problems. When Monte Carlo random walk from a source is executed, the fission kernel is stochastically applied to the source. Random numbers are used for: distances to collision, reaction types, scattering physics, fission reactions, etc. This method is used because the fission kernel is a complex, 7-dimensional operator that is not explicitly known. Deterministic methods use approximations/discretization in energy, space, and direction to the kernel. Consequently, they are faster. Monte Carlo directly simulates the physics, which necessitates the use of random sampling. Because of this statistical noise, common convergence acceleration methods used in deterministic methods do not work. In the fission matrix method, we are using the random walk information not only to build the next-iteration fission source, but also a spatially-averaged fission kernel. Just like in deterministic methods, this involves approximation and discretization. The approximation is the tallying of the spatially-discretized fission kernel with an incorrect fission source. We address this by making the spatial mesh fine enough that this error is negligible. As a consequence of discretization we get a
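
    Once a spatially discretized fission matrix has been tallied, extracting keff and the fundamental source is ordinary power iteration; a small sketch with a synthetic matrix standing in for the MC-tallied kernel follows.

        import numpy as np

        # synthetic "fission matrix": F[i, j] is the expected number of
        # next-generation fission neutrons born in cell i per neutron born in
        # cell j (in MCNP this would be tallied during the random walk)
        n = 20
        dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
        F = 1.1 * np.exp(-dist / 2.0)
        F /= F.sum(axis=0).max()            # arbitrary normalization

        s = np.ones(n) / n                  # initial source guess
        for _ in range(200):
            s_new = F @ s
            k = s_new.sum() / s.sum()       # eigenvalue estimate
            s = s_new / s_new.sum()         # renormalize the source

        lam = np.linalg.eigvals(F)
        print(f"power-iteration k = {k:.6f}, direct eig = {np.max(np.abs(lam)):.6f}")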

  11. Common misconceptions in Monte Carlo particle transport

    Booth, Thomas E., E-mail: teb@lanl.gov [LANL, XCP-7, MS F663, Los Alamos, NM 87545 (United States)

    2012-07-15

    Monte Carlo particle transport is often introduced primarily as a method to solve linear integral equations such as the Boltzmann transport equation. This paper discusses some common misconceptions about Monte Carlo methods that are often associated with an equation-based focus. Many of the misconceptions apply directly to standard Monte Carlo codes such as MCNP and some are worth noting so that one does not unnecessarily restrict future methods. - Highlights: • Adjoint variety and use from a Monte Carlo perspective. • Misconceptions and preconceived notions about statistical weight. • Reasons that an adjoint-based weight window sometimes works well or does not. • Pulse height/probability of initiation tallies and 'the' transport equation. • Highlights unnecessary preconceived notions about Monte Carlo transport.

  12. Enhanced Monte Carlo Singular System Analysis and Detection of Period 7.8 Years Oscillatory Modes in the Monthly NAO Index and Temperature Records

    Paluš, Milan; Novotná, Dagmar

    2004-01-01

    Vol. 11 (2004), pp. 721-729. ISSN 1023-5809 R&D Projects: GA AV ČR IAA3042401 Keywords: signal detection * Monte Carlo SSA * period 7.8 years cycles * temperature * NAO Subject RIV: BA - General Mathematics Impact factor: 1.324, year: 2004

  13. Monte Carlo simulations of neoclassical transport in toroidal plasmas

    The FORTEC-3D code, which solves the drift-kinetic equation for toroidal plasmas and the radial electric field using the δf Monte Carlo method, has been developed to study a variety of issues relating to neoclassical transport phenomena in magnetic confinement plasmas. Here the numerical techniques used in FORTEC-3D are reviewed, and recent progress in the simulation method to simulate GAM oscillation is also explained. A band-limited white-noise term is introduced in the equation for the time evolution of the radial electric field to excite GAM oscillation, which enables us to analyze the GAM frequency using FORTEC-3D even in cases where the collisionless GAM damping is fast. (author)

  14. The structure of the muscle protein complex 4Ca2+·troponin C·troponin I: A Monte Carlo modeling analysis of small-angle X-ray and neutron scattering data

    Analysis of scattering data based on a Monte Carlo integration method was used to obtain a low-resolution model of the 4Ca2+·troponin C·troponin I complex. This modeling method allows rapid testing of plausible structures, where the best-fit model can be ascertained by a comparison between model structure scattering profiles and measured scattering data. In the best-fit model, troponin I appears as a spiral structure that wraps about 4Ca2+·troponin C, which adopts an extended dumbbell conformation similar to that observed in the crystal structures of troponin C. The Monte Carlo modeling method can be applied to other biological systems in which detailed structural information is lacking

  15. Monte Carlo shipping cask calculations using an automated biasing procedure

    This paper describes an automated biasing procedure for Monte Carlo shipping cask calculations within the SCALE system - a modular code system for Standardized Computer Analysis for Licensing Evaluation. The SCALE system was conceived and funded by the US Nuclear Regulatory Commission to satisfy a strong need for performing standardized criticality, shielding, and heat transfer analyses of nuclear systems

  16. Atomistic Monte Carlo simulation of lipid membranes

    Wüstner, Daniel; Sklenar, Heinz

    2014-01-01

    Biological membranes are complex assemblies of many different molecules, of which the analysis demands a variety of experimental and computational approaches. In this article, we explain challenges and advantages of atomistic Monte Carlo (MC) simulation of lipid membranes. We provide an introduction into the various move sets that are implemented in current MC methods for efficient conformational sampling of lipids and other molecules. In the second part, we demonstrate for a concrete example how an atomistic local-move set can be implemented for MC simulations of phospholipid monomers and of a whole lipid molecule, as assessed by calculation of molecular energies and entropies. We also show the transition from a crystalline-like to a fluid DPPC bilayer by the CBC local-move MC method, as indicated by the electron density profile, head group orientation, area per lipid, and whole-lipid displacements. We discuss ...

  17. MORSE Monte Carlo radiation transport code system

    This report is an addendum to the MORSE report, ORNL-4972, originally published in 1975. This addendum contains descriptions of several modifications to the MORSE Monte Carlo Code, replacement pages containing corrections, Part II of the report which was previously unpublished, and a new Table of Contents. The modifications include a Klein Nishina estimator for gamma rays. Use of such an estimator required changing the cross section routines to process pair production and Compton scattering cross sections directly from ENDF tapes and writing a new version of subroutine RELCOL. Another modification is the use of free form input for the SAMBO analysis data. This required changing subroutines SCORIN and adding new subroutine RFRE. References are updated, and errors in the original report have been corrected

  18. Monte Carlo and detector simulation in OOP

    Object-Oriented Programming techniques are explored with an eye towards applications in High Energy Physics codes. Two prototype examples are given: MCOOP (a particle Monte Carlo generator) and GISMO (a detector simulation/analysis package). The OOP programmer does no explicit or detailed memory management nor other bookkeeping chores; hence, the writing, modification, and extension of the code is considerably simplified. Inheritance can be used to simplify the class definitions as well as the instance variables and action methods of each class; thus the work required to add new classes, parameters, or new methods is minimal. The software industry is moving rapidly to OOP since it has been proven to improve programmer productivity, and promises even more for the future by providing truly reusable software. The High Energy Physics community clearly needs to follow this trend

  19. Response decomposition with Monte Carlo correlated coupling

    Particle histories that contribute to a detector response are categorized according to whether they are fully confined inside a source-detector enclosure or cross and recross the same enclosure. The contribution from the confined histories is expressed using a forward problem with the external boundary condition on the source-detector enclosure. The contribution from the crossing and recrossing histories is expressed as the surface integral at the same enclosure of the product of the directional cosine and the fluxes in the foregoing forward problem and the adjoint problem for the whole spatial domain. The former contribution can be calculated by a standard forward Monte Carlo. The latter contribution can be calculated by correlated coupling of forward and adjoint histories independently of the former contribution. We briefly describe the computational method and discuss its application to perturbation analysis for localized material changes. (orig.)

  1. An Overview of the Monte Carlo Application ToolKit (MCATK)

    Trahan, Travis John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-01-07

    MCATK is a C++ component-based Monte Carlo neutron-gamma transport software library designed to build specialized applications and designed to provide new functionality in existing general-purpose Monte Carlo codes like MCNP; it was developed with Agile software engineering methodologies under the motivation to reduce costs. The characteristics of MCATK can be summarized as follows: MCATK physics – continuous energy neutron-gamma transport with multi-temperature treatment, static eigenvalue (k and α) algorithms, time-dependent algorithm, fission chain algorithms; MCATK geometry – mesh geometries, solid body geometries. MCATK provides verified, unit-tested Monte Carlo components, flexibility in Monte Carlo applications development, and numerous tools such as geometry and cross section plotters. Recent work has involved deterministic and Monte Carlo analysis of stochastic systems. Static and dynamic analysis is discussed, and the results of a dynamic test problem are given.

  2. MCNP-POLIMI v1.0, Monte Carlo N-Particle Transport Code System To Simulate Time-Analysis Quantities

    1 - Description of program or function: MCNP is a general-purpose, continuous-energy, generalized geometry, time-dependent, coupled neutron-photon-electron Monte Carlo transport code system. Based on the Los Alamos National Laboratory code MCNP4C (formerly distributed as CCC-700), MCNP-PoliMi was developed to simulate time-analysis quantities. In particular, the code includes the correlation between neutron interaction and the corresponding photon production. In contrast to the technique adopted by standard MCNP, MCNP-PoliMi samples secondary photons according to the neutron collision type. A post-processing code, i.e. the Matlab script 'postmain', is included and can be tailored to model specific detector characteristics. These features make MCNP-PoliMi a versatile tool to simulate particle interactions and detection processes. 2 - Methods: MCNP treats an arbitrary three-dimensional configuration of materials in geometric cells bounded by first- and second-degree surfaces and some special fourth-degree surfaces. For neutrons, all reactions in a particular cross-section evaluation are accounted for. Both free gas and S(alpha, beta) thermal treatments are used. Criticality sources as well as fixed and surface sources are available. For photons, the code takes account of incoherent and coherent scattering with and without electron binding effects, the possibility of fluorescent emission following photoelectric absorption, and absorption in pair production with local emission of annihilation radiation. A very general source and tally structure is available. The tallies have extensive statistical analysis of convergence. Rapid convergence is enabled by a wide variety of variance reduction methods. Energy ranges are 0-60 MeV for neutrons (data generally only available up to 20 MeV) and 1 keV - 1 GeV for photons and electrons. The MCNP-PoliMi code was developed to simulate each neutron-nucleus interaction as closely as possible. In particular, neutron interaction and

  3. Quantitative Phylogenomics of Within-Species Mitogenome Variation: Monte Carlo and Non-Parametric Analysis of Phylogeographic Structure among Discrete Transatlantic Breeding Areas of Harp Seals (Pagophilus groenlandicus).

    Carr, Steven M; Duggan, Ana T; Stenson, Garry B; Marshall, H Dawn

    2015-01-01

    Phylogenomic analysis of highly-resolved intraspecific phylogenies obtained from complete mitochondrial DNA genomes has had great success in clarifying relationships within and among human populations, but has found limited application in other wild species. Analytical challenges include assessment of random versus non-random phylogeographic distributions, and quantification of differences in tree topologies among populations. Harp Seals (Pagophilus groenlandicus Erxleben, 1777) have a biogeographic distribution based on four discrete trans-Atlantic breeding and whelping populations located on "fast ice" attached to land in the White Sea, Greenland Sea, the Labrador Ice Front, and the Southern Gulf of St Lawrence. This East-to-West distribution provides a set of a priori phylogeographic hypotheses. Outstanding biogeographic questions include the degree of genetic distinctiveness among these populations, in particular between the Greenland Sea and White Sea grounds. We obtained complete coding-region DNA sequences (15,825 bp) for 53 seals. Each seal has a unique mtDNA genome sequence, and the sequences differ by 6-107 substitutions. Six major clades / groups are detectable by parsimony, neighbor-joining, and Bayesian methods, all of which are found in breeding populations on either side of the Atlantic. The species coalescent is at 180 KYA; the most recent clade, which accounts for 66% of the diversity, reflects an expansion during the mid-Wisconsinan glaciation 40-60 KYA. FST is significant only between the White Sea and Greenland Sea or Ice Front populations. Hierarchical AMOVA of 2-, 3-, or 4-island models identifies small but significant ΦSC among populations within groups, but not among groups. A novel Monte-Carlo simulation indicates that the observed distribution of individuals within breeding populations over the phylogenetic tree requires significantly fewer dispersal events than random expectation, consistent with island or a priori East-to-West 2- or 3-stepping-stone models.
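
    The spirit of such a Monte Carlo randomization test can be shown generically (toy data below; the actual statistic was a count of dispersal events implied by the tree): an observed statistic is compared against its distribution under random reassignment of population labels.

        import random, statistics

        random.seed(14)
        # toy 'trait' values for two breeding populations (invented numbers)
        pop_a = [4.1, 3.8, 4.4, 4.0, 3.9, 4.3]
        pop_b = [4.6, 4.9, 4.5, 4.8, 4.7, 5.0]

        observed = abs(statistics.fmean(pop_a) - statistics.fmean(pop_b))

        pooled = pop_a + pop_b
        n_extreme, n_perm = 0, 100_000
        for _ in range(n_perm):
            random.shuffle(pooled)                      # random relabeling
            diff = abs(statistics.fmean(pooled[:6]) - statistics.fmean(pooled[6:]))
            if diff >= observed:
                n_extreme += 1

        print(f"observed |mean difference| = {observed:.3f}, "
              f"Monte Carlo p = {(n_extreme + 1) / (n_perm + 1):.5f}")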

  4. Monte Carlo Capabilities of the SCALE Code System

    Rearden, B. T.; Petrie, L. M.; Peplow, D. E.; Bekar, K. B.; Wiarda, D.; Celik, C.; Perfetti, C. M.; Ibrahim, A. M.; Hart, S. W. D.; Dunn, M. E.

    2014-06-01

    SCALE is a widely used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a "plug-and-play" framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE's graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2, to be released in 2014, will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. An overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.

  5. JCOGIN. A parallel programming infrastructure for Monte Carlo particle transport

    The advantages of the Monte Carlo method for reactor analysis are well known, but full-core reactor analysis challenges both computational time and computer memory. Meanwhile, the exponential growth of computer power over the last 10 years is creating a great opportunity for large-scale parallel computing in Monte Carlo full-core reactor analysis. In this paper, a parallel programming infrastructure for Monte Carlo particle transport, named JCOGIN, is introduced; it aims at accelerating the development of Monte Carlo codes for large-scale parallel simulations of the full core. JCOGIN currently implements hybrid parallelism, combining spatial decomposition with traditional particle parallelism on MPI and OpenMP. Finally, the JMCT code was developed on JCOGIN; it reaches a parallel efficiency of 70% on 20,480 cores for a fixed-source problem. Using this hybrid parallelism, a full-core pin-by-pin simulation of the Dayawan reactor was performed, with up to 10 million cells and flux tallies occupying over 40 GB of memory. (author)
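
    JCOGIN itself is only described at a high level here; as a loose sketch of the particle-parallelism half of such a hybrid scheme (spatial decomposition and in-rank threading omitted for brevity), the following mpi4py fragment splits histories across MPI ranks and reduces the tallies. The toy slab "physics" is invented for illustration:

    ```python
    from mpi4py import MPI          # assumes an MPI environment; launch with mpirun
    import random

    comm = MPI.COMM_WORLD
    rank, nranks = comm.Get_rank(), comm.Get_size()

    N_TOTAL = 1_000_000                       # total particle histories
    n_local = N_TOTAL // nranks               # particle parallelism: split histories
    rng = random.Random(1234 + rank)          # decorrelated stream per rank

    def history():
        """Toy 1-D slab history: exponential free flights, 30% absorption."""
        x = 0.0
        while True:
            x += rng.expovariate(1.0)         # flight length in mean free paths
            if x > 5.0:
                return 1                      # transmitted through the slab
            if rng.random() < 0.3:
                return 0                      # absorbed

    local_tally = sum(history() for _ in range(n_local))
    total = comm.allreduce(local_tally, op=MPI.SUM)   # combine per-rank tallies
    if rank == 0:
        print("transmission ~", total / (n_local * nranks))
    ```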

  6. Monte Carlo simulation analysis of integral data measured in the SCK-CEN/ENEA experimental campaign on the TAPIRO fast reactor. Experimental and calculated data comparison

    Burgio, N., E-mail: nunzio.burgio@enea.it [ENEA (Italian National Agency for New Technologies, Energy and Sustainable Economic Development), C.R. Casaccia Via Anguillarese 301, 00123 Rome (Italy); Cretara, L., E-mail: luca.cretara@uniroma1.it [DIAEE – Sapienza University of Rome, Corso Vittorio Emanuele II 244, 00186 Rome (Italy); Frullini, M., E-mail: massimo.frullini@uniroma1.it [DIAEE – Sapienza University of Rome, Corso Vittorio Emanuele II 244, 00186 Rome (Italy); Gandini, A., E-mail: augusto.gandini@uniroma1.it [DIAEE – Sapienza University of Rome, Corso Vittorio Emanuele II 244, 00186 Rome (Italy); Peluso, V., E-mail: vincenzogiuseppe.peluso@enea.it [ENEA (Italian National Agency for New Technologies, Energy and Sustainable Economic Development), Via Martiri di monte Sole 4, 40129 Bologna (Italy); Santagata, A., E-mail: alfonso.santagata@enea.it [ENEA (Italian National Agency for New Technologies, Energy and Sustainable Economic Development), C.R. Casaccia, Via Anguillarese 301, 00123 Rome (Italy)

    2014-07-01

    Highlights: • We developed an MCNPX model of the TAPIRO fast research reactor. • The model has been tested against the results of an earlier experimental campaign, finding overall agreement. • The sources of uncertainty in the nuclear data and in the model assumptions are discussed. • The model is sufficiently accurate to design irradiation experiments in support of R and D activities on LFR and ADS systems. - Abstract: After the Fukushima events, the Italian nuclear program was redefined, leaving space only to activities related to Generation IV nuclear systems. In this renewed national scenario, the TAPIRO fast reactor facility is gaining a relatively major strategic role. A program is in fact being proposed to host in TAPIRO benchmark experimental activities related to the development of Lead fast reactors and Accelerator Driven Systems. A first step of this program would consist of the validation of the neutronic codes, cross-section data and reactor models to be adopted for its analysis. Along this line, this work presents the results of a simulation study of the measurements performed in the SCK-CEN/ENEA experimental campaign carried out in the 1980-1986 period. The calculations were made using the Monte Carlo MCNPX 2.7.0 code. The main results are presented and discussed, with particular emphasis on the uncertainties relevant both to the nuclear data and to the model layout. The results of this simulation study indicate in particular that TAPIRO's MCNPX model is adequate for the optimization of set-ups of prospective neutron irradiation experiments, allowing cuts in costs and development time.

  8. Monte Carlo model for analysis of thermal runaway electrons in streamer tips in transient luminous events and streamer zones of lightning leaders

    Moss, Gregory D.; Pasko, Victor P.; Liu, Ningyu; Veronis, Georgios

    2006-02-01

    Streamers are thin filamentary plasmas that can initiate spark discharges in relatively short (several centimeters) gaps at near ground pressures and are also known to act as the building blocks of streamer zones of lightning leaders. These streamers at ground pressure, after 1/N scaling with atmospheric air density N, appear to be fully analogous to those documented using telescopic imagers in transient luminous events (TLEs) termed sprites, which occur in the altitude range 40-90 km in the Earth's atmosphere above thunderstorms. It is also believed that the filamentary plasma structures observed in some other types of TLEs, which emanate from the tops of thunderclouds and are termed blue jets and gigantic jets, are directly linked to the processes in streamer zones of lightning leaders. Acceleration, expansion, and branching of streamers are commonly observed for a wide range of applied electric fields. Recent analysis of photoionization effects on the propagation of streamers indicates that very high electric field magnitudes, ~10 Ek, where Ek is the conventional breakdown threshold field defined by the equality of the ionization and dissociative attachment coefficients in air, are generated around the tips of streamers at the stage immediately preceding their branching. This paper describes the formulation of a Monte Carlo model, which is capable of describing electron dynamics in air, including the thermal runaway phenomena, under the influence of an external electric field of an arbitrary strength. Monte Carlo modeling results indicate that the ~10 Ek fields are able to accelerate a fraction of low-energy (several eV) streamer tip electrons to energies of ~2-8 keV. With total potential differences on the order of tens of MV available in streamer zones of lightning leaders, it is proposed that during a highly transient negative corona flash stage of the development of a negative stepped leader, electrons with energies 2-8 keV ejected from streamer tips near

  9. Use of Monte Carlo Methods in brachytherapy

    The Monte Carlo method has become a fundamental tool for brachytherapy dosimetry, mainly because it avoids the difficulties associated with experimental dosimetry. In brachytherapy, the main handicap of experimental dosimetry is the high dose gradient near the sources: small uncertainties in the positioning of the detectors lead to large uncertainties in the dose. This presentation reviews mainly the procedure for calculating dose distributions around a source using the Monte Carlo method, showing the difficulties inherent in these calculations. In addition, we briefly review other applications of the Monte Carlo method in brachytherapy dosimetry, such as its use in advanced calculation algorithms, the calculation of shielding barriers, and the calculation of dose distributions around applicators. (Author)
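
    A small numerical illustration of the dose-gradient argument (assuming a bare point source, where dose falls off roughly as 1/r^2; the numbers are only indicative):

    ```python
    # Near a point source the dose varies roughly as 1/r^2, so a fixed
    # positioning error matters far more close to the source (toy model).
    def relative_dose_error(r_mm, dr_mm):
        """Fractional dose error from misplacing a detector by dr_mm at radius r_mm."""
        return (r_mm / (r_mm + dr_mm)) ** 2 - 1.0

    for r in (5.0, 10.0, 50.0):
        print(f"r = {r:4.0f} mm, 0.5 mm offset -> {relative_dose_error(r, 0.5):+.1%}")
    # about -17% at 5 mm but only about -2% at 50 mm: the gradient drives the uncertainty
    ```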

  10. Fast quantum Monte Carlo on a GPU

    Lutsyshyn, Y

    2013-01-01

    We present a scheme for the parallelization of quantum Monte Carlo on graphical processing units, focusing on bosonic systems and variational Monte Carlo. We use asynchronous execution schemes with shared memory persistence and obtain an excellent acceleration: compared with single-core execution, the GPU-accelerated code runs over 100x faster. The CUDA code is provided along with the package necessary to execute variational Monte Carlo for a system representing liquid helium-4. The program was benchmarked on several models of Nvidia GPU, including the Fermi GTX560 and M2090, and the latest Kepler-architecture K20 GPU. Kepler-specific optimization is discussed.
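
    The paper's CUDA package is not reproduced here; as a minimal single-core sketch of the variational Monte Carlo kernel that such a GPU code parallelizes over many walkers, consider the 1-D harmonic oscillator with trial wavefunction psi(x) = exp(-a x^2), a stand-in for the liquid-helium system:

    ```python
    import math, random

    def local_energy(x, a):
        # E_L = -psi''/(2 psi) + x^2/2 for psi = exp(-a x^2)
        return a + x * x * (0.5 - 2.0 * a * a)

    def vmc(a, n_steps=100_000, step=1.0, rng=random):
        """Metropolis sampling of |psi|^2; returns (mean energy, acceptance)."""
        x, acc, e_sum = 0.0, 0, 0.0
        for _ in range(n_steps):
            x_new = x + rng.uniform(-step, step)
            # accept with min(1, |psi(x_new)|^2 / |psi(x)|^2)
            if rng.random() < math.exp(-2.0 * a * (x_new**2 - x**2)):
                x, acc = x_new, acc + 1
            e_sum += local_energy(x, a)
        return e_sum / n_steps, acc / n_steps

    print(vmc(0.5))   # a = 0.5 is the exact minimum: energy -> 0.5
    ```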

  11. Importance iteration in MORSE Monte Carlo calculations

    An expression to calculate point values (the expected detector response of a particle emerging from a collision or the source) is derived and implemented in the MORSE-SGC/S Monte Carlo code. It is outlined how these point values can be smoothed as a function of energy and as a function of the optical thickness between the detector and the source. The smoothed point values are subsequently used to calculate the biasing parameters of the Monte Carlo runs to follow. The method is illustrated by an example, which shows that the obtained biasing parameters lead to a more efficient Monte Carlo calculation. (orig.)
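
    The paper's expressions are not reproduced in this abstract; the sketch below only illustrates the overall iteration it describes: point values tallied in one run are smoothed (here by a simple moving average, an assumed stand-in for the paper's energy/optical-thickness smoothing) and converted into biasing parameters for the next run:

    ```python
    import numpy as np

    def smooth(point_values, window=3):
        """Moving-average smoothing of noisy per-energy-group point values."""
        kernel = np.ones(window) / window
        return np.convolve(point_values, kernel, mode="same")

    def biasing_weights(point_values):
        """Source-biasing probabilities proportional to smoothed importance."""
        imp = smooth(np.asarray(point_values, dtype=float))
        imp = np.clip(imp, imp[imp > 0].min(), None)   # guard near-empty groups
        return imp / imp.sum()

    noisy = [0.0, 0.9, 2.1, 1.2, 4.8, 3.9, 7.5]        # tallied point values
    print(biasing_weights(noisy))                       # weights for the next run
    ```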

  12. Fast Monte Carlo-assisted simulation of cloudy Earth backgrounds

    Adler-Golden, Steven; Richtsmeier, Steven C.; Berk, Alexander; Duff, James W.

    2012-11-01

    A calculation method has been developed for rapidly synthesizing radiometrically accurate ultraviolet through long-wavelength-infrared spectral imagery of the Earth for arbitrary locations and cloud fields. The method combines cloud-free surface reflectance imagery with cloud radiance images calculated from a first-principles 3-D radiation transport model. The MCScene Monte Carlo code [1-4] is used to build a cloud image library; a data fusion method is incorporated to speed convergence. The surface and cloud images are combined with an upper atmospheric description with the aid of solar and thermal radiation transport equations that account for atmospheric inhomogeneity. The method enables a wide variety of sensor and sun locations, cloud fields, and surfaces to be combined on-the-fly, and provides hyperspectral wavelength resolution with minimal computational effort. The simulations agree very well with much more time-consuming direct Monte Carlo calculations of the same scene.

  13. Advanced computers and Monte Carlo

    High-performance parallelism that is currently available is synchronous in nature. It is manifested in such architectures as the Burroughs ILLIAC-IV, CDC STAR-100, TI ASC, CRI CRAY-1, ICL DAP, and many special-purpose array processors designed for signal processing. This form of parallelism has apparently not been of significant value to many important Monte Carlo calculations. Nevertheless, there is much asynchronous parallelism in many of these calculations. A model of a production code that requires up to 20 hours per problem on a CDC 7600 is studied for suitability on some asynchronous architectures that are on the drawing board. The code is described, and some of its properties and resource requirements are identified for comparison with the corresponding properties and resources of some asynchronous multiprocessor architectures. Arguments are made for programmer aids and special syntax to identify and support important asynchronous parallelism. 2 figures, 5 tables

  14. Epistasis Test in Meta-Analysis: A Multi-Parameter Markov Chain Monte Carlo Model for Consistency of Evidence

    Lin, Chin; Chu, Chi-Ming; Su, Sui-Lung

    2016-01-01

    Conventional genome-wide association studies (GWAS) have been proven to be a successful strategy for identifying genetic variants associated with complex human traits. However, there is still a large heritability gap between GWAS and traditional family studies. The "missing heritability" has been suggested to be due to a lack of studies focused on epistasis, also called gene-gene interactions, because individual trials have often had insufficient sample size. Meta-analysis is a common method f...

  15. A Monte Carlo investigation of two distance measures between statistical populations and their application to cluster analysis

    Rossa, Agnieszka

    1997-01-01

    The paper deals with a simulation study of one of the well-known hierarchical cluster analysis methods applied to classifying statistical populations. In particular, the problem of clustering univariate normal populations is studied. Two measures of the distance between statistical populations are considered: the Mahalanobis distance, which is defined for normally distributed populations under the assumption that the covariance matrices are equal, and the Kullback-Leibler divergence.
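
    A minimal sketch of this setup, assuming invented population parameters: for univariate normal populations with a common variance, the Mahalanobis distance reduces to |mu_i - mu_j| / sigma, which can feed directly into an off-the-shelf hierarchical clustering routine:

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    means = np.array([0.0, 0.3, 5.0, 5.4, 10.0])   # invented population means
    sigma = 1.0                                    # common standard deviation

    # condensed pairwise distance vector, as expected by scipy's linkage()
    d = np.array([abs(mi - mj) / sigma
                  for i, mi in enumerate(means)
                  for mj in means[i + 1:]])

    tree = linkage(d, method="average")            # hierarchical clustering
    print(fcluster(tree, t=3, criterion="maxclust"))   # recovered group labels
    ```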

  16. Probabilistic Analysis for Capacity Planning in Smart Grid at Residential Low Voltage Level by Monte-Carlo Method

    Du, W

    2011-01-01

    Smart Grid integrates sustainable energy sources and allows mutual communication between electricity distribution operators and electricity consumers. Electricity demand and supply become more complex in the Smart Grid, and grid asset capacity planning becomes more challenging for distribution network operators (DNOs), especially at the low-voltage level. In this research, a probabilistic analysis is presented that aims at finding peak loads more accurately than deterministic methods; it also allows estimation of probabilities of overload...
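
    A hedged sketch of the kind of Monte-Carlo capacity analysis described (the load model, EV share, and asset rating below are invented for illustration only):

    ```python
    import random

    # Sample household demands, aggregate at a feeder, and estimate the
    # probability that the peak exceeds the asset rating.
    N_HOUSEHOLDS, RATING_KW, N_TRIALS = 100, 320.0, 20_000

    def household_peak_kw(rng):
        base = rng.lognormvariate(0.5, 0.45)        # diversified household peak
        ev = 7.2 if rng.random() < 0.15 else 0.0    # 15% chance an EV is charging
        return base + ev

    rng = random.Random(42)
    overloads = sum(
        sum(household_peak_kw(rng) for _ in range(N_HOUSEHOLDS)) > RATING_KW
        for _ in range(N_TRIALS)
    )
    print(f"P(overload) ~ {overloads / N_TRIALS:.3f}")
    ```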

  17. Method of tallying adjoint fluence and calculating kinetics parameters in Monte Carlo codes

    A method is introduced for using the iterated fission probability to estimate the adjoint fluence during particle simulation, and for using it as the weighting function to calculate the kinetics parameters βeff and Λ in Monte Carlo codes. Implementations of this method in the continuous-energy Monte Carlo code MCNP and the multigroup Monte Carlo code MCMG are both elaborated. Verification results show that, with negligible additional computing cost, the adjoint fluence tallied by MCMG matches well with the result computed by ANISN, and the kinetics parameters calculated by MCNP agree very well with benchmarks. The method is proved to be reliable, and the function of calculating kinetics parameters in Monte Carlo codes is carried out effectively, which could be the basis for the use of Monte Carlo codes in the analysis of the transient behavior of nuclear reactors. (authors)
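
    As a toy illustration of the iterated-fission-probability idea (the branching-process "physics" below is invented, and the estimate is statistically noisy; a real code accumulates over many cycles): a neutron's importance is estimated from the size of its fission-chain progeny after several latent generations, and βeff is then the importance-weighted delayed fraction:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    BETA, LATENT_GENS, N_SOURCE = 0.0065, 10, 200_000

    def importance(first_gen_mean):
        """IFP-style importance: progeny count after LATENT_GENS generations,
        from one realization of a just-critical branching process."""
        n = rng.poisson(first_gen_mean)
        for _ in range(LATENT_GENS - 1):
            n = rng.poisson(1.0 * n) if n else 0
        return n

    w_total = w_delayed = 0.0
    for _ in range(N_SOURCE):
        delayed = rng.random() < BETA
        # delayed neutrons are born at lower energy; mimicked here by a
        # slightly lower first-generation yield (an invented stand-in)
        w = importance(0.97 if delayed else 1.0)
        w_total += w
        w_delayed += w if delayed else 0.0

    print("beta_eff ~", w_delayed / w_total)   # importance-weighted fraction
    ```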

  18. Monte Carlo Calculation of Sensitivities to Secondary Angular Distributions. Theory and Validation

    The basic methods for solution of the transport equation that are in practical use today are the discrete ordinates (SN) method and the Monte Carlo method. While the SN method is typically less time-consuming, the Monte Carlo method is often preferred for a detailed and general description of three-dimensional geometries and for calculations using point-wise energy-dependent cross sections. For the analysis of experimental and calculated results, sensitivities are needed. Sensitivities to material parameters in general, and to the angular distribution of the secondary (scattered) neutrons in particular, can be calculated by well-known SN methods, using the fluxes obtained from the solution of the direct and adjoint transport equations. Algorithms to calculate cross-section sensitivities with Monte Carlo methods have been known for quite some time. However, only recently have we developed a general Monte Carlo algorithm for the calculation of sensitivities to the angular distribution of the secondary neutrons.

  19. Guideline for radiation transport simulation with the Monte Carlo method

    Today, photon and neutron transport calculations with the Monte Carlo method are carried out with advanced Monte Carlo codes and high-speed computers; 'Monte Carlo simulation' is a more suitable term than 'Monte Carlo calculation'. As Monte Carlo codes become friendlier and computer performance grows, most shielding problems will be solved with Monte Carlo codes and high-speed computers. Because those codes provide standard input data for some problems, the essential techniques of the Monte Carlo method and its variance-reduction techniques risk losing the interest of general Monte Carlo users. In this paper, the essential techniques of the Monte Carlo method and the variance-reduction techniques, such as the importance sampling method, the selection of estimators, and biasing techniques, are described to afford a better understanding of the Monte Carlo method and Monte Carlo codes. (author)
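
    A minimal example of the importance-sampling technique mentioned above: the probability that an exponentially distributed path length exceeds a thick-shield depth a (exactly exp(-a), about 4.5e-5 for a = 10) is estimated both analog and with a stretched sampling density plus likelihood-ratio weights:

    ```python
    import math, random

    a, N = 10.0, 100_000
    rng = random.Random(1)

    # analog: almost no histories reach the tally region -> huge variance
    analog = sum(rng.expovariate(1.0) > a for _ in range(N)) / N

    # importance sampling: draw from Exp(lam) with lam < 1 so deep penetrations
    # are common, and correct each score by the likelihood ratio (the "weight")
    lam, total = 1.0 / a, 0.0
    for _ in range(N):
        y = rng.expovariate(lam)
        if y > a:
            total += math.exp(-y) / (lam * math.exp(-lam * y))
    biased = total / N

    print(f"exact {math.exp(-a):.2e}  analog {analog:.2e}  biased {biased:.2e}")
    ```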

  20. Effects of Uncertainties in Lead Cross Section Data in Monte Carlo Analysis of lead Cooled and reflected Reactors

    This paper describes the problems encountered in the analysis of the Encapsulated Nuclear Heat Source (ENHS) core benchmark and the new cross-section libraries developed to overcome these problems. The ENHS is a novel lead-bismuth- or lead-cooled reactor concept that is fuelled with a metallic alloy of Pu, U and Zr, and is designed to operate for 20 effective full-power years without re-fuelling and with a very small burn-up reactivity swing. There are numerous uncertainties in the prediction of core parameters of this and other innovative reactor designs, arising from approximations used in the solution of the transport equation, in nuclear data processing, and in cross-section library generation. In this paper we analyze the effects of uncertainties in lead cross-section data from several versions of ENDF, JENDL and JEFF for lead-cooled and reflected computational benchmarks. (author)