WorldWideScience

Sample records for 3D Monte Carlo

  1. 3D Monte Carlo radiation transfer modelling of photodynamic therapy

    Campbell, C. Louise; Christison, Craig; Brown, C. Tom A.; Wood, Kenneth; Valentine, Ronan M.; Moseley, Harry

    2015-06-01

    The effects of ageing and skin type on Photodynamic Therapy (PDT) for different treatment methods have been theoretically investigated. A multilayered Monte Carlo Radiation Transfer model is presented where both daylight-activated PDT and conventional PDT are compared. It was found that light penetrates deeper through older skin with a lighter complexion, which translates into a deeper effective treatment depth. The effect of ageing was found to be larger for darker skin types. The investigation further supports the use of daylight as a potential light source for PDT, where effective treatment depths of about 2 mm can be achieved.
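
    For readers unfamiliar with the technique, the sketch below shows the core random walk of a multilayered Monte Carlo radiation transfer calculation of the kind this record describes. The layer thicknesses and optical coefficients are illustrative placeholders, not values from the paper, and isotropic scattering is used instead of a realistic skin phase function.

```python
# Minimal multilayer photon Monte Carlo sketch; coefficients are illustrative, not the paper's.
import numpy as np

rng = np.random.default_rng(0)

# Layers: (thickness [cm], absorption mu_a [1/cm], scattering mu_s [1/cm]) -- hypothetical values
layers = [(0.01, 0.2, 30.0),   # epidermis-like layer
          (0.20, 0.1, 20.0),   # dermis-like layer
          (1.00, 0.05, 15.0)]  # deeper tissue
boundaries = np.cumsum([0.0] + [t for t, _, _ in layers])

def layer_index(z):
    """Return the layer containing depth z, or -1 if the photon has left the slab."""
    if z < 0.0 or z >= boundaries[-1]:
        return -1
    return int(np.searchsorted(boundaries, z, side="right")) - 1

n_bins = 60
dz = boundaries[-1] / n_bins
absorbed = np.zeros(n_bins)            # absorbed weight per depth bin

for _ in range(2000):                  # launch photons straight down at the surface
    z, w, mu_z = 0.0, 1.0, 1.0
    while w > 1e-3:
        i = layer_index(z)
        if i < 0:
            break                      # photon escaped (boundary crossings handled crudely here)
        _, mu_a, mu_s = layers[i]
        mu_t = mu_a + mu_s
        z += mu_z * (-np.log(rng.random()) / mu_t)   # sample free path along the current direction
        if layer_index(z) < 0:
            break
        absorbed[min(int(z / dz), n_bins - 1)] += w * mu_a / mu_t   # deposit absorbed fraction
        w *= mu_s / mu_t                             # surviving weight after the collision
        mu_z = 2.0 * rng.random() - 1.0              # isotropic scatter (real skin models use Henyey-Greenstein)

print("relative absorbed dose, first 10 depth bins:", np.round(absorbed[:10] / absorbed.max(), 3))
```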

  2. Continuous-energy Monte Carlo methods for calculating generalized response sensitivities using TSUNAMI-3D

    This work introduces a new approach for calculating the sensitivity of generalized neutronic responses to nuclear data uncertainties using continuous-energy Monte Carlo methods. The GEneralized Adjoint Responses in Monte Carlo (GEAR-MC) method has enabled the calculation of high resolution sensitivity coefficients for multiple, generalized neutronic responses in a single Monte Carlo calculation with no nuclear data perturbations or knowledge of nuclear covariance data. The theory behind the GEAR-MC method is presented here and proof of principle is demonstrated by calculating sensitivity coefficients for responses in several 3D, continuous-energy Monte Carlo applications. (author)

  3. CONTINUOUS-ENERGY MONTE CARLO METHODS FOR CALCULATING GENERALIZED RESPONSE SENSITIVITIES USING TSUNAMI-3D

    Perfetti, Christopher M. [ORNL]; Rearden, Bradley T. [ORNL]

    2014-01-01

    This work introduces a new approach for calculating sensitivity coefficients for generalized neutronic responses to nuclear data uncertainties using continuous-energy Monte Carlo methods. The approach presented in this paper, known as the GEAR-MC method, allows for the calculation of generalized sensitivity coefficients for multiple responses in a single Monte Carlo calculation with no nuclear data perturbations or knowledge of nuclear covariance data. The theory behind the GEAR-MC method is presented here, and proof of principle is demonstrated by using the GEAR-MC method to calculate sensitivity coefficients for responses in several 3D, continuous-energy Monte Carlo applications.

  4. Feasibility and value of fully 3D Monte Carlo reconstruction in single photon emission computed tomography

    The accuracy of Single Photon Emission Computed Tomography (SPECT) images is degraded by physical effects, namely photon attenuation, Compton scatter and spatially varying collimator response. The 3D nature of these effects is usually neglected by the methods used to correct for these effects. To deal with the 3D nature of the problem, a 3D projector modeling the spread of photons in 3D can be used in iterative tomographic reconstruction. The 3D projector can be estimated analytically with some approximations, or using precise Monte Carlo simulations. This latter approach has not been applied to fully 3D reconstruction yet due to impractical storage and computation time. The goal of this paper was to determine the gain to be expected from fully 3D Monte Carlo (F3DMC) modeling of the projector in iterative reconstruction, compared to conventional 2D and 3D reconstruction methods. As a proof-of-concept, two small datasets were considered. The projections of the two phantoms were simulated using the Monte Carlo simulation code GATE, as well as the corresponding projector, by taking into account all physical effects (attenuation, scatter, camera point spread function) affecting the imaging process. F3DMC was implemented by using this 3D projector in a maximum likelihood expectation maximization (MLEM) iterative reconstruction. To assess the value of F3DMC, data were reconstructed using 4 methods: filtered backprojection (FBP), MLEM without attenuation correction (MLEM), MLEM with attenuation correction, Jaszczak scatter correction and 3D correction for depth-dependent spatial resolution using an analytical model (MLEMC) and F3DMC. Our results suggest that F3DMC improves mainly imaging sensitivity and signal-to-noise ratio (SNR): sensitivity is multiplied by about 10³ and SNR is increased by 20 to 70% compared to MLEMC. Computation of a more robust projector and application of the method on more realistic datasets are currently under investigation. (authors)
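
    As a concrete illustration of how a Monte Carlo-estimated projector enters iterative reconstruction, the toy MLEM loop below uses a random stand-in for the system matrix R; in F3DMC that matrix would come from GATE simulations, and the problem sizes here are arbitrary.

```python
# Toy MLEM reconstruction: in F3DMC the system matrix R would be Monte Carlo-estimated (e.g. with GATE);
# here R is a random stand-in and the problem sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_bins = 64, 96

R = rng.random((n_bins, n_voxels))               # R[j, i] ~ P(count in bin j | emission in voxel i)
R /= R.sum(axis=0, keepdims=True)

true_activity = rng.exponential(100.0, n_voxels)
measured = rng.poisson(R @ true_activity)        # noisy projection data

x = np.ones(n_voxels)                            # uniform initial estimate
sensitivity = R.sum(axis=0)
for _ in range(50):                              # multiplicative MLEM update
    expected = R @ x
    ratio = np.where(expected > 0, measured / expected, 0.0)
    x *= (R.T @ ratio) / sensitivity

print("relative error:", np.linalg.norm(x - true_activity) / np.linalg.norm(true_activity))
```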

  5. Combination of Monte Carlo and transfer matrix methods to study 2D and 3D percolation

    Saleur, H.; Derrida, B.

    1985-07-01

    In this paper we develop a method which combines the transfer matrix and the Monte Carlo methods to study the problem of site percolation in 2 and 3 dimensions. We use this method to calculate the properties of strips (2D) and bars (3D). Using a finite size scaling analysis, we obtain estimates of the threshold and of the exponents which confirm values already known. We discuss the advantages and the limitations of our method by comparing it with usual Monte Carlo calculations.
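
    The transfer matrix machinery of the paper is not reproduced here, but the short sketch below shows the "usual Monte Carlo calculation" the authors compare against: estimating the spanning probability of 2D site percolation on finite lattices. The lattice size and trial counts are arbitrary.

```python
# Plain Monte Carlo spanning-probability estimate for 2D site percolation
# (the paper's transfer-matrix/strip method is not reproduced here).
import numpy as np
from scipy.ndimage import label

rng = np.random.default_rng(2)

def spans(p, L):
    """Does an L x L lattice with site occupation probability p percolate top to bottom?"""
    occupied = rng.random((L, L)) < p
    labels, _ = label(occupied)                      # 4-connected cluster labelling
    return bool(set(labels[0][labels[0] > 0]) & set(labels[-1][labels[-1] > 0]))

L, trials = 64, 200
for p in (0.55, 0.59, 0.63):                         # known p_c for the square lattice is about 0.5927
    frac = sum(spans(p, L) for _ in range(trials)) / trials
    print(f"p = {p:.2f}: spanning fraction ~ {frac:.2f}")
```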

  6. Benchmark for a 3D Monte Carlo boiling water reactor fluence computational package - MF3D

    A detailed three dimensional model of a quadrant of an operating BWR has been developed using MCNP to calculate flux spectrum and fluence levels at various locations in the reactor system. The calculational package, MF3D, was benchmarked against test data obtained over a complete fuel cycle of the host BWR. The test package included activation wires sensitive in both the fast and thermal ranges. Comparisons between the calculational results and test data are good to within ten percent, making the MF3D package an accurate tool for neutron and gamma fluence computation in BWR pressure vessel internals. (orig.)

  7. MCMG: a 3-D multigroup P3 Monte Carlo code and its benchmarks

    In this paper a 3-D multigroup Monte Carlo neutron transport code, MCMG, has been developed from the coupled neutron-photon transport Monte Carlo code MCNP. The continuous-energy cross section library of the MCNP code is replaced by multigroup cross section data generated by a transport lattice code, such as the WIMS code. It retains the strong capabilities of MCNP for geometry treatment, tallying, variance reduction techniques and plotting. The multigroup neutron scattering cross sections adopt the Pn (n ≤ 3) approximation. The test results are in good agreement with the results of other methods and experiments. The number of energy groups can be varied from a few groups to many, and either macroscopic or microscopic cross sections can be used. (author)

  8. Implementation of 3D Lattice Monte Carlo Simulation on a Cluster of Symmetric Multiprocessors

    雷咏梅; 蒋英; et al.

    2002-01-01

    This paper presents a new approach to parallelizing 3D lattice Monte Carlo algorithms used in the numerical simulation of polymers on ZiQiang 2000, a cluster of symmetric multiprocessors (SMPs). The combined load for cell and energy calculations over the time step is balanced together to form a single spatial decomposition. Basic aspects and strategies of running Monte Carlo calculations on parallel computers are studied. Different steps involved in porting the software to a parallel architecture based on ZiQiang 2000 running under Linux and MPI are described briefly. It is found that parallelization becomes more advantageous when either the lattice is very large or the model contains many cells and chains.

  9. Adaptive Multi-GPU Exchange Monte Carlo for the 3D Random Field Ising Model

    Navarro, C A; Deng, Youjin

    2015-01-01

    The study of disordered spin systems through Monte Carlo simulations has proven to be a hard task due to the adverse energy landscape present at the low-temperature regime, making it difficult for the simulation to escape from a local minimum. Replica-based algorithms such as Exchange Monte Carlo (also known as parallel tempering) are effective at overcoming this problem, reaching equilibrium on disordered spin systems such as the Spin Glass or Random Field models, by exchanging information between replicas at neighboring temperatures. In this work we present a multi-GPU Exchange Monte Carlo method designed for the simulation of the 3D Random Field Model. The implementation is based on a two-level parallelization scheme that allows the method to scale its performance in the presence of faster GPUs as well as multiple GPUs. In addition, we modified the original algorithm by adapting the set of temperatures according to the exchange rate observed from short trial runs, leading to an increased exchange rate...
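
    A single-CPU sketch of the replica-exchange step described above is given below for a very small 3D random-field Ising model; the paper's multi-GPU decomposition and adaptive temperature selection are not reproduced, and the lattice size, field strength and temperature ladder are arbitrary.

```python
# Single-CPU Exchange Monte Carlo (parallel tempering) sketch for a tiny 3D random-field Ising model.
# The paper's multi-GPU scheme and adaptive temperature set are not reproduced; parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(3)
L = 6
temps = np.linspace(1.0, 4.0, 8)                       # fixed temperature ladder (illustrative)
fields = rng.normal(0.0, 1.0, (L, L, L))               # quenched random fields
spins = [rng.choice([-1, 1], (L, L, L)) for _ in temps]

def energy(s):
    """Nearest-neighbour Ising energy (periodic boundaries) plus random-field term."""
    e = -sum(np.sum(s * np.roll(s, 1, axis=a)) for a in range(3))
    return e - np.sum(fields * s)

def metropolis_sweep(s, T):
    for _ in range(s.size):
        i, j, k = rng.integers(0, L, 3)
        nn = (s[(i+1) % L, j, k] + s[(i-1) % L, j, k] + s[i, (j+1) % L, k]
              + s[i, (j-1) % L, k] + s[i, j, (k+1) % L] + s[i, j, (k-1) % L])
        dE = 2.0 * s[i, j, k] * (nn + fields[i, j, k])
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j, k] *= -1

for sweep in range(100):
    for s, T in zip(spins, temps):
        metropolis_sweep(s, T)
    for r in range(len(temps) - 1):                    # exchange configurations of neighbouring temperatures
        d_beta = 1.0 / temps[r] - 1.0 / temps[r + 1]
        d_energy = energy(spins[r]) - energy(spins[r + 1])
        if rng.random() < min(1.0, np.exp(d_beta * d_energy)):
            spins[r], spins[r + 1] = spins[r + 1], spins[r]

print("energy per spin at each temperature:", [round(energy(s) / L**3, 3) for s in spins])
```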

  10. OptogenSIM: a 3D Monte Carlo simulation platform for light delivery design in optogenetics.

    Liu, Yuming; Jacques, Steven L; Azimipour, Mehdi; Rogers, Jeremy D; Pashaie, Ramin; Eliceiri, Kevin W

    2015-12-01

    Optimizing light delivery for optogenetics is critical in order to accurately stimulate the neurons of interest while reducing nonspecific effects such as tissue heating or photodamage. Light distribution is typically predicted using the assumption of tissue homogeneity, which oversimplifies light transport in heterogeneous brain. Here, we present an open-source 3D simulation platform, OptogenSIM, which eliminates this assumption. This platform integrates a voxel-based 3D Monte Carlo model, generic optical property models of brain tissues, and a well-defined 3D mouse brain tissue atlas. The application of this platform in brain data models demonstrates that brain heterogeneity has moderate to significant impact depending on application conditions. Estimated light density contours can show the region of any specified power density in the 3D brain space and thus can help optimize the light delivery settings, such as the optical fiber position, fiber diameter, fiber numerical aperture, light wavelength and power. OptogenSIM is freely available and can be easily adapted to incorporate additional brain atlases. PMID:26713200

  11. A highly heterogeneous 3D PWR core benchmark: deterministic and Monte Carlo method comparison

    Physical analyses of potential LWR performance with regard to fuel utilization require that an important part of the work be dedicated to the validation of the deterministic models used for these analyses. Advances in both codes and computer technology give the opportunity to perform the validation of these models on complex 3D core configurations close to the physical situations encountered (both steady-state and transient configurations). In this paper, we used the Monte Carlo transport code TRIPOLI-4 to describe a whole 3D large-scale and highly heterogeneous LWR core. The aim of this study is to validate the deterministic CRONOS2 code against the Monte Carlo code TRIPOLI-4 in a relevant PWR core configuration. As a consequence, a 3D pin-by-pin model with a large number of volumes (4.3 million) and media (around 23,000) is established to precisely characterize the core at the equilibrium cycle, namely using refined burn-up and moderator density maps. The configuration selected for this analysis is a very heterogeneous PWR high-conversion core with fissile (MOX fuel) and fertile zones (depleted uranium). Furthermore, a tight-pitch lattice is selected (to increase conversion of 238U into 239Pu), which leads to a harder neutron spectrum compared to a standard PWR assembly. This benchmark shows two main points. First, independent replicas are an appropriate method to achieve a fair variance estimation when the dominance ratio is near 1. Secondly, the diffusion operator with 2 energy groups gives satisfactory results compared to TRIPOLI-4, even with a highly heterogeneous neutron flux map and a harder spectrum.

  12. Monte Carlo methods for direct calculation of 3D dose distributions for photon fields in radiotherapy

    Even with state-of-the-art treatment planning systems, the photon dose calculation can be erroneous under certain circumstances. In these cases Monte Carlo methods promise a higher accuracy. We have used the photon transport code CHILD of the GSF-Forschungszentrum, which was developed to calculate dose in diagnostic radiation protection matters. The code was refined for application in radiotherapy with high-energy photon irradiation and should serve for dose verification in individual cases. The irradiation phantom can be entered as any desired 3D matrix or be generated automatically from an individual CT database. The particle transport takes into account pair production and the photoelectric and Compton effects, with certain approximations. Efficiency is increased by the method of 'fractional photons'. The generated secondary electrons are followed in the continuous-slowing-down approximation (CSDA) without scattering. The developed Monte Carlo code Monaco Matrix was tested with simple homogeneous and heterogeneous phantoms through comparisons with simulations of the well-known but slower EGS4 code. The use of a point source with a direction-independent energy spectrum as the simplest model of the radiation field from the accelerator head is shown to be sufficient for simulation of actual accelerator depth dose curves. Good agreement (<2%) was found for depth dose curves in water and in bone. With complex test phantoms and comparisons with EGS4-calculated dose profiles, some drawbacks in the code were found. Thus, the implementation of electron multiple scattering should lead to a step-by-step improvement of the algorithm. (orig.)

  13. IMPROVEMENT OF 3D MONTE CARLO LOCALIZATION USING A DEPTH CAMERA AND TERRESTRIAL LASER SCANNER

    S. Kanai

    2015-05-01

    An effective and accurate localization method in three-dimensional indoor environments is a key requirement for indoor navigation and lifelong robotic assistance. So far, Monte Carlo Localization (MCL) has given one of the promising solutions for indoor localization. Previous work on MCL has been mostly limited to 2D motion estimation in a planar map, and a few 3D MCL approaches have been proposed recently. However, their localization accuracy and efficiency still remain at an unsatisfactory level (a few hundred millimetres of error at up to a few FPS), or they are not fully verified against precise ground truth. Therefore, the purpose of this study is to improve the accuracy and efficiency of 6DOF motion estimation in 3D MCL for indoor localization. Firstly, a terrestrial laser scanner is used for creating a precise 3D mesh model as an environment map, and a professional-level depth camera is installed as an outer sensor. GPU scene simulation is also introduced to upgrade the speed of the prediction phase in MCL. Moreover, for further improvement, GPGPU programming is implemented to realize further speed-up of the likelihood estimation phase, and anisotropic particle propagation is introduced into MCL based on the observations from an inertia sensor. Improvements in the localization accuracy and efficiency are verified by comparison with a previous MCL method. As a result, it was confirmed that the GPGPU-based algorithm was effective in increasing the computational efficiency to 10-50 FPS when the number of particles remains below a few hundred. On the other hand, the inertia-sensor-based algorithm reduced the localization error to a median of 47 mm even with a smaller number of particles. The results show that our proposed 3D MCL method outperforms the previous one in accuracy and efficiency.
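
    The predict / weight / resample cycle at the heart of MCL is easy to miss in the abstract, so a one-dimensional toy version is sketched below. The depth-camera likelihood, GPU scene simulation and anisotropic propagation of the paper are replaced by a simple Gaussian range measurement; all numbers are invented.

```python
# Toy 1D Monte Carlo Localization: predict / weight / resample. The paper's depth-camera likelihood,
# GPU scene simulation and anisotropic propagation are replaced by an invented Gaussian measurement.
import numpy as np

rng = np.random.default_rng(4)
n_particles = 500
true_pos, motion, meas_sigma = 0.0, 0.5, 0.2

particles = rng.uniform(-5.0, 5.0, n_particles)             # uniform prior over a 10 m corridor

for step in range(10):
    true_pos += motion
    measurement = true_pos + rng.normal(0.0, meas_sigma)     # simulated range-like observation

    particles += motion + rng.normal(0.0, 0.1, n_particles)  # prediction phase with motion noise
    weights = np.exp(-0.5 * ((measurement - particles) / meas_sigma) ** 2)
    weights /= weights.sum()                                 # likelihood estimation phase

    cdf = np.cumsum(weights)                                 # systematic resampling
    u = (rng.random() + np.arange(n_particles)) / n_particles
    particles = particles[np.minimum(np.searchsorted(cdf, u), n_particles - 1)]

    print(f"step {step}: true {true_pos:5.2f}  estimate {particles.mean():5.2f}")
```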

  14. ORPHEE research reactor: 3D core depletion calculation using Monte-Carlo code TRIPOLI-4®

    Damian, F.; Brun, E.

    2014-06-01

    ORPHEE is a research reactor located at CEA Saclay. It aims at producing neutron beams for experiments. This is a pool-type reactor (heavy water), and the core is cooled by light water. Its thermal power is 14 MW. The ORPHEE core is 90 cm high and has a cross section of 27x27 cm2. It is loaded with eight fuel assemblies characterized by varying numbers of fuel plates. The fuel plate is composed of aluminium and High Enriched Uranium (HEU). It is a once-through core with a fuel cycle length of approximately 100 Equivalent Full Power Days (EFPD) and with a maximum burnup of 40%. Various analyses in progress at CEA concern the determination of the core neutronic parameters during irradiation. Taking into consideration the geometrical complexity of the core and the quasi-absence of thermal feedback at nominal operation, the 3D core depletion calculations are performed using the Monte-Carlo code TRIPOLI-4® [1,2,3]. A preliminary validation of the depletion calculation was performed on a 2D core configuration by comparison with the deterministic transport code APOLLO2 [4]. The analysis showed the reliability of TRIPOLI-4® to calculate a complex core configuration using a large number of depleting regions with a high level of confidence.

  15. Feasibility and value of fully 3D Monte Carlo reconstruction in single-photon emission computed tomography

    The accuracy of Single-Photon Emission Computed Tomography images is degraded by physical effects, namely photon attenuation, Compton scatter and spatially varying collimator response. The 3D nature of these effects is usually neglected by the methods used to correct for these effects. To deal with the 3D nature of the problem, a 3D projector modeling the spread of photons in 3D can be used in iterative tomographic reconstruction. The 3D projector can be estimated analytically with some approximations, or using precise Monte Carlo simulations. This latter approach has not been applied to fully 3D reconstruction yet due to impractical storage and computation time. The goal of this paper was to determine the gain to be expected from fully 3D Monte Carlo (F3DMC) modeling of the projector in iterative reconstruction, compared to conventional 2D and 3D reconstruction methods. As a proof-of-concept, two small datasets were considered. The projections of the two phantoms were simulated using the Monte Carlo simulation code GATE, as well as the corresponding projector, by taking into account all physical effects (attenuation, scatter, camera point spread function) affecting the imaging process. F3DMC was implemented by using this 3D projector in a maximum likelihood expectation maximization (MLEM) iterative reconstruction. To assess the value of F3DMC, data were reconstructed using four methods: filtered backprojection, MLEM without attenuation correction (MLEM), MLEM with attenuation correction, Jaszczak scatter correction and 3D correction for depth-dependent spatial resolution using an analytical model (MLEMC) and F3DMC. Our results suggest that F3DMC improves mainly imaging sensitivity and signal-to-noise ratio (SNR): sensitivity is multiplied by about 10³ and SNR is increased by 20-70% compared to MLEMC. Computation of a more robust projector and application of the method on more realistic datasets are currently under investigation

  16. A combination of Monte Carlo and transfer matrix methods to study 2D and 3D percolation

    Saleur, H.; Derrida, B.

    1985-01-01

    In this paper we develop a method which combines the transfer matrix and the Monte Carlo methods to study the problem of site percolation in 2 and 3 dimensions. We use this method to calculate the properties of strips (2D) and bars (3D). Using a finite size scaling analysis, we obtain estimates of the threshold and of the exponents which confirm values already known. We discuss the advantages and the limitations of our method by comparing it with usual Monte Carlo calculations.

  17. A combination of Monte Carlo and transfer matrix methods to study 2D and 3D percolation

    In this paper we develop a method which combines the transfer matrix and the Monte Carlo methods to study the problem of site percolation in 2 and 3 dimensions. We use this method to calculate the properties of strips (2D) and bars (3D). Using a finite size scaling analysis, we obtain estimates of the threshold and of the exponents which confirm values already known. We discuss the advantages and the limitations of our method by comparing it with usual Monte Carlo calculations.

  18. Conceptual detector development and Monte Carlo simulation of a novel 3D breast computed tomography system

    Ziegle, Jens; Müller, Bernhard H.; Neumann, Bernd; Hoeschen, Christoph

    2016-03-01

    A new 3D breast computed tomography (CT) system is under development enabling imaging of microcalcifications in a fully uncompressed breast including posterior chest wall tissue. The system setup uses a steered electron beam impinging on small tungsten targets surrounding the breast to emit X-rays. A realization of the corresponding detector concept is presented in this work and it is modeled through Monte Carlo simulations in order to quantify first characteristics of transmission and secondary photons. The modeled system comprises a vertical alignment of linear detectors held by a case that also hosts the breast. Detectors are separated by gaps to allow the passage of X-rays towards the breast volume. The detectors located directly on the opposite side of the gaps detect incident X-rays. Mechanically moving parts in an imaging system increase the duration of image acquisition and thus can cause motion artifacts. So, a major advantage of the presented system design is the combination of the fixed detectors and the fast steering electron beam, which enables a greatly reduced scan time. Thereby potential motion artifacts are reduced so that the visualization of small structures such as microcalcifications is improved. The result of the simulation of a single projection shows high attenuation by parts of the detector electronics, causing low count levels at the opposing detectors which would require a flat-field correction, but it also shows a secondary-to-transmission ratio of all counted X-rays of less than 1 percent. Additionally, a single slice with details of various sizes was reconstructed using filtered backprojection. The smallest detail which was still visible in the reconstructed image has a size of 0.2 mm.

  19. Development of 3d reactor burnup code based on Monte Carlo method and exponential Euler method

    Burnup analysis plays a key role in fuel breeding, transmutation and post-processing in nuclear reactors. Burnup codes based on one-dimensional and two-dimensional transport methods have difficulties in meeting the accuracy requirements. A three-dimensional burnup analysis code based on the Monte Carlo method and the exponential Euler method has been developed. The coupling code combines the advantage of the Monte Carlo method in complex-geometry neutron transport calculation with FISPACT's fast and precise inventory calculation, while the resonance self-shielding effect in the inventory calculation can also be considered. The IAEA benchmark test problem has been adopted for code validation. Good agreement was shown in the comparison with other participants' results. (authors)
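
    The "exponential Euler" idea, advancing the nuclide field by a matrix exponential over each burnup step while the flux is held frozen, can be shown on a toy three-nuclide chain. The chain, cross section, half-life and flux below are invented for illustration and are not taken from the record.

```python
# Toy exponential-Euler depletion step: dN/dt = A N, so N(t + dt) = expm(A dt) N(t).
# The three-nuclide chain, cross section, half-life and flux are invented for illustration.
import numpy as np
from scipy.linalg import expm

phi = 3.0e14                    # neutron flux [n/cm^2/s]
sigma_c = 1.0e-21               # capture cross section of nuclide 1 [cm^2] (~1000 b)
lam2 = np.log(2.0) / 3600.0     # nuclide 2 decays with a 1 h half-life

# Burnup matrix for the chain: 1 --(capture)--> 2 --(decay)--> 3
A = np.array([[-sigma_c * phi,   0.0, 0.0],
              [ sigma_c * phi, -lam2, 0.0],
              [ 0.0,            lam2, 0.0]])

N = np.array([1.0e22, 0.0, 0.0])     # initial number densities [1/cm^3]
dt = 24 * 3600.0                     # one-day step; a coupled code would update phi (and A) each step
for day in range(5):
    N = expm(A * dt) @ N
    print(f"day {day + 1}: N = {N}")
```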

  20. Dose prediction and process optimization in a gamma sterilization facility using 3-D Monte Carlo code

    A model of a gamma sterilizer was built using the ITS/ACCEPT Monte Carlo code and verified through dosimetry. Individual dosimetry measurements in homogeneous material were pooled to represent larger bodies that could be simulated in a reasonable time. With the assumptions and simplifications described, dose predictions were within 2-5% of dosimetry. The model was used to simulate product movement through the sterilizer and to predict information useful for process optimization and facility design

  1. IM3D: A parallel Monte Carlo code for efficient simulations of primary radiation displacements and damage in 3D geometry

    Li, Yong Gang; Yang, Yang; Short, Michael P.; Ding, Ze Jun; Zeng, Zhi; Li, Ju

    2015-12-01

    SRIM-like codes have limitations in describing general 3D geometries, for modeling radiation displacements and damage in nanostructured materials. A universal, computationally efficient and massively parallel 3D Monte Carlo code, IM3D, has been developed with excellent parallel scaling performance. IM3D is based on fast indexing of scattering integrals and the SRIM stopping power database, and allows the user a choice of Constructive Solid Geometry (CSG) or Finite Element Triangle Mesh (FETM) method for constructing 3D shapes and microstructures. For 2D films and multilayers, IM3D perfectly reproduces SRIM results, and can be ∼10² times faster in serial execution and >10⁴ times faster using parallel computation. For 3D problems, it provides a fast approach for analyzing the spatial distributions of primary displacements and defect generation under ion irradiation. Herein we also provide a detailed discussion of our open-source collision cascade physics engine, revealing the true meaning and limitations of the “Quick Kinchin-Pease” and “Full Cascades” options. The issues of femtosecond to picosecond timescales in defining displacement versus damage, the limitation of the displacements per atom (DPA) unit in quantifying radiation damage (such as inadequacy in quantifying degree of chemical mixing), are discussed.

  2. 3-D Monte Carlo neutron-photon transport code JMCT and its algorithms

    The JMCT Monte Carlo neutron-photon transport code has been developed based on the JCOGIN toolbox. JCOGIN includes geometry operations, tallies, domain decomposition, and parallel computation over particles (MPI) and spatial domains (OpenMP). CAD view data are supported in the JMCT preprocessor. A full-core pin-by-pin model of the Chinese Qinshan-II nuclear power station is designed and simulated with JMCT. The detailed pin-power distribution and keff results are shown in this paper. (author)

  3. COLLI-PTB, Neutron Fluence Spectra for 3-D Collimator System by Monte-Carlo

    1 - Description of program or function: For optimizing collimator systems (shieldings) for fast neutrons with energies between 10 keV and 20 MeV. Only elastic and inelastic neutron scattering processes are involved. Isotropic angular distribution for inelastic scattering in the center of mass system is assumed. 2 - Method of solution: The Monte Carlo method with importance sampling technique, splitting and Russian roulette is used. The neutron attenuation and scattering kinematics are taken into account. 3 - Restrictions on the complexity of the problem: Energy range from 10 keV to 20 MeV. For the output spectra any bin width is possible. The output spectra are confined to 40 equidistant channels
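
    The splitting and Russian-roulette bookkeeping mentioned above can be summarised in a few lines; the weight thresholds below are illustrative, whereas a production code ties them to the importance map.

```python
# Splitting / Russian-roulette sketch. Thresholds are illustrative; a real code derives them
# from the importance (weight-window) map. Expected weight is preserved by both branches.
import numpy as np

rng = np.random.default_rng(5)
W_LOW, W_HIGH, W_SURVIVE = 0.25, 2.0, 1.0

def roulette_or_split(weight):
    """Return the list of statistical weights that replaces one particle of the given weight."""
    if weight > W_HIGH:                       # splitting: several lighter copies, same total weight
        n = int(np.ceil(weight / W_SURVIVE))
        return [weight / n] * n
    if weight < W_LOW:                        # Russian roulette: survive with probability w / W_SURVIVE
        return [W_SURVIVE] if rng.random() < weight / W_SURVIVE else []
    return [weight]

print([roulette_or_split(w) for w in (0.05, 0.4, 3.7)])
```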

  4. Full 3D Monte Carlo simulation of pit-type defect evolution during extreme ultraviolet lithography multilayer deposition

    To model key aspects of surface morphology evolution and to overcome one of the main barriers to the implementation of extreme ultraviolet lithography in semiconductor processing, a 3D Monte Carlo simulation of ion-beam deposition on pit-type defects was performed. Typical pit defects have depths in the 5–20 nm range and are about 10 times that wide. The aspect ratio of a defect cross section, defined as depth divided by the full width at half maximum, was used to measure the defect profile (decoration) as a function of film thickness. Previous attempts to model this system used 2D level set methods; 3D calculations using these methods were found to be too computationally intensive. In an effort to model the system in 3D, this study used the Solid-on-Solid aggregation model to deposit particles onto initial substrate defects. Surface diffusion was then simulated to relax the defect. Aspect ratio decay data were collected from the simulated defects and analyzed. The model was validated for defect evolution by comparing simulations to the experimental scanning transmission electron microscopy data. The statistics of effective activation energy were considered to show that observed defects have important geometric differences which define a unique aspect ratio decay path. Close fitting to the observed case was utilized to validate Monte Carlo physical models of thin film growth for use in predicting the multilayer profile of pit-type defects. - Highlights: • Pit-type defects in multilayers are modelled using Monte Carlo methods. • Simulation substrates are derived from Atomic Force Microscopy (AFM) defect scans. • AFM-scanned defect simulations fit the physical observations closely. • Activation energy statistics on the surface show unique aspect ratio decay paths. • A test of the fitting case applied to a different situation works accurately
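
    A stripped-down 1+1-dimensional version of the Solid-on-Solid deposition plus surface-diffusion loop is sketched below; the paper's full 3D model, AFM-derived substrates and activation-energy statistics are not reproduced, and the pit size and hop counts are invented.

```python
# 1+1-dimensional Solid-on-Solid sketch: deposit onto a pit-shaped substrate, relax by surface
# diffusion, and track a crude aspect ratio. Pit size and hop counts are invented for illustration.
import numpy as np

rng = np.random.default_rng(6)
L = 100
height = np.zeros(L, dtype=int)
height[40:60] -= 8                                   # idealized pit defect

def diffuse(h, hops):
    """Let randomly chosen surface columns hand an atom to a lower neighbour (downhill relaxation)."""
    for _ in range(hops):
        i = rng.integers(0, L)
        j = (i + rng.choice([-1, 1])) % L
        if h[j] < h[i]:
            h[i] -= 1
            h[j] += 1

def aspect_ratio(h):
    depth = h.max() - h.min()
    width = max(np.count_nonzero(h < h.min() + depth / 2.0), 1)   # crude full width at half maximum
    return depth / width

for layer in range(40):                              # deposit ~40 monolayers
    for _ in range(L):
        height[rng.integers(0, L)] += 1              # Solid-on-Solid: no overhangs, columns only grow
    diffuse(height, 5 * L)
    if (layer + 1) % 10 == 0:
        print(f"after {layer + 1} layers: aspect ratio ~ {aspect_ratio(height):.2f}")
```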

  5. Discretized mesh tools and related treatment for hybrid transport application with 3d discrete ordinates and Monte Carlo

    Hybrid methods of neutron transport have increased greatly in use, for example, in applications of using both Monte Carlo and deterministic transport methods to calculate quantities of interest, such as the flux and eigenvalue in a nuclear reactor. Many 3D parallel Sn codes apply a Cartesian mesh, and thus for nuclear reactors the representation of curved fuel shapes (cylinders, spheres, etc.) is impacted, resulting in deviations in both mass and exact geometry in the computer model representation. In addition, we discuss auto-conversion techniques with our 3D Cartesian mesh generation tools to allow for full generation of MCNP5 inputs (Cartesian mesh and multigroup XS) from a basis PENTRAN Sn model. For a PWR assembly eigenvalue problem, we explore the errors associated with this Cartesian discrete mesh representation, and perform an analysis to calculate a slope parameter that relates the pcm to the percent areal/volumetric deviation (areal for 2D problems, volumetric for 3D problems). This analysis demonstrates a linear relationship between pcm change and areal/volumetric deviation using multigroup MCNP on a PWR assembly compared to a reference exact combinatorial MCNP geometry calculation. For the same MCNP multigroup problems, we also characterize this linear relationship in discrete ordinates (3D PENTRAN). Finally, for 3D Sn models, we show an application of corner fractioning, a volume-weighted recovery of underrepresented target fuel mass that reduced the pcm error to < 100, compared to reference Monte Carlo, in the application to a PWR assembly. (author)

  6. OMEGA, Subcritical and Critical Neutron Transport in General 3-D Geometry by Monte-Carlo

    1 - Description of problem or function: OMEGA is a Monte Carlo code for the solution of the stationary neutron transport equation with k-eff as the Eigenvalue. A three-dimensional geometry is permitted consisting of a very general arrangement of three basic shapes (columns with circular, rectangular, or hexagonal cross section with a finite height and different material layers along their axes). The main restriction is that all the basic shapes must have parallel axes. Most real arrangements of fissile material inside and outside a reactor (e.g., in a fuel storage or transport container) can be described without approximation. The main field of application is the estimation of criticality safety. Many years of experience and comparison with reference cases have shown that the code together with the built-in cross section libraries gives reliable results. The following results can be calculated: - the effective multiplication factor k-eff; - the flux distribution; - reaction rates; - spatially and energetically condensed cross sections for later use in a subsequent OMEGA run. A running job may be interrupted and continued later, possibly with an increased number of batches for an improved statistical accuracy. The geometry as well as the k-eff results may be visualized. The use of the code is demonstrated by many illustrating examples. 2 - Method of solution: The Monte Carlo method is used with neutrons starting from an initial source distribution. The histories of a generation (or batch) of neutrons are followed from collision to collision until the histories are terminated by capture, fission, or leakage. For the solution of the Eigenvalue problem, the starting positions of the neutrons for a given generation are determined by the fission points of the preceding generation. The summation of the results starts only after some initial generations when the spatial part of the fission source has converged. At present the code uses the BNAB-78 subgroup library of the
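
    The generation-to-generation fission-source scheme the description refers to can be condensed into a one-group toy: neutrons of one batch are tracked to absorption or leakage, fission sites seed the next batch, and k-eff is the production per source neutron. Geometry, cross sections and nu below are invented and bear no relation to OMEGA's libraries.

```python
# One-group k-eff sketch using the generation (fission-source iteration) scheme.
# Bare homogeneous sphere; cross sections, nu and radius are invented for illustration.
import numpy as np

rng = np.random.default_rng(7)
R = 12.0                                   # sphere radius [cm]
SIG_F, SIG_C, SIG_S = 0.06, 0.04, 0.20     # macroscopic fission / capture / scatter [1/cm]
SIG_T = SIG_F + SIG_C + SIG_S
NU = 2.5

def random_direction():
    mu = 2.0 * rng.random() - 1.0
    phi = 2.0 * np.pi * rng.random()
    s = np.sqrt(1.0 - mu * mu)
    return np.array([s * np.cos(phi), s * np.sin(phi), mu])

source = [np.zeros(3)] * 2000                        # initial fission source at the centre
for generation in range(20):
    bank, produced = [], 0.0
    for pos in source:
        x, d = np.array(pos), random_direction()
        while True:
            x = x + d * (-np.log(rng.random()) / SIG_T)
            if np.linalg.norm(x) > R:                # leakage
                break
            xi = rng.random() * SIG_T
            if xi < SIG_F:                           # fission: bank sites for the next generation
                produced += NU
                for _ in range(int(NU + rng.random())):
                    bank.append(x.copy())
                break
            elif xi < SIG_F + SIG_C:                 # capture
                break
            d = random_direction()                   # isotropic scatter
    keff = produced / len(source)
    if bank:                                         # renormalize the bank to a fixed batch size
        source = [bank[i] for i in rng.integers(0, len(bank), size=len(source))]
    if generation >= 5:                              # quote k-eff only after some inactive generations
        print(f"generation {generation}: k-eff estimate ~ {keff:.3f}")
```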

  7. The Monte Carlo SRNA-VOX code for 3D proton dose distribution in voxelized geometry using CT data

    Ilic, Radovan D [Laboratory of Physics (010), Vinca Institute of Nuclear Sciences, PO Box 522, 11001 Belgrade (Serbia and Montenegro); Spasic-Jokic, Vesna [Laboratory of Physics (010), Vinca Institute of Nuclear Sciences, PO Box 522, 11001 Belgrade (Serbia and Montenegro); Belicev, Petar [Laboratory of Physics (010), Vinca Institute of Nuclear Sciences, PO Box 522, 11001 Belgrade (Serbia and Montenegro); Dragovic, Milos [Center for Nuclear Medicine MEDICA NUCLEARE, Bulevar Despota Stefana 69, 11000 Belgrade (Serbia and Montenegro)

    2005-03-07

    This paper describes the application of the SRNA Monte Carlo package for proton transport simulations in complex geometry and different material compositions. The SRNA package was developed for 3D dose distribution calculation in proton therapy and dosimetry, and it is based on the theory of multiple scattering. The decay of proton-induced compound nuclei was simulated by the Russian MSDM model and by our own model using ICRU 63 data. The developed package consists of two codes: SRNA-2KG, which simulates proton transport in combinatorial geometry, and SRNA-VOX, which uses a voxelized geometry built from CT data and a conversion of the Hounsfield numbers to tissue elemental composition. Transition probabilities for both codes are prepared by the SRNADAT code. The simulation of proton beam characterization by a multi-layer Faraday cup, the spatial distribution of positron emitters obtained by the SRNA-2KG code, and the intercomparison of computational codes in radiation dosimetry indicate immediate applicability of the Monte Carlo techniques in clinical practice. In this paper, we briefly present the physical model implemented in the SRNA package, the ISTAR proton dose planning software, as well as the results of the numerical experiments with proton beams to obtain 3D dose distributions in the eye and breast tumour.
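
    One step that the abstract mentions only in passing is the conversion of Hounsfield numbers into material data for the voxelized geometry. The piecewise-linear density ramp below is a generic stand-in for that step, not the calibration actually used by SRNA-VOX.

```python
# Generic Hounsfield-unit-to-mass-density ramp for building a voxelized model.
# The breakpoints and slopes are a common piecewise-linear stand-in, not SRNA-VOX's actual table.
import numpy as np

def hu_to_density(hu):
    """Mass density [g/cm^3] from CT numbers via an illustrative piecewise-linear calibration."""
    hu = np.asarray(hu, dtype=float)
    rho = np.where(hu <= 0.0,
                   1.0 + hu / 1000.0,        # air (-1000 HU) to water (0 HU)
                   1.0 + 0.0006 * hu)        # soft tissue to bone (illustrative slope)
    return np.clip(rho, 0.0012, None)        # never below the density of air

ct_values = np.array([-1000, -50, 0, 40, 300, 1200])
print(dict(zip(ct_values.tolist(), np.round(hu_to_density(ct_values), 3))))
```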

  8. The Monte Carlo SRNA-VOX code for 3D proton dose distribution in voxelized geometry using CT data

    Ilic, Radovan D.; Spasic-Jokic, Vesna; Belicev, Petar; Dragovic, Milos

    2005-03-01

    This paper describes the application of the SRNA Monte Carlo package for proton transport simulations in complex geometry and different material compositions. The SRNA package was developed for 3D dose distribution calculation in proton therapy and dosimetry, and it is based on the theory of multiple scattering. The decay of proton-induced compound nuclei was simulated by the Russian MSDM model and by our own model using ICRU 63 data. The developed package consists of two codes: SRNA-2KG, which simulates proton transport in combinatorial geometry, and SRNA-VOX, which uses a voxelized geometry built from CT data and a conversion of the Hounsfield numbers to tissue elemental composition. Transition probabilities for both codes are prepared by the SRNADAT code. The simulation of proton beam characterization by a multi-layer Faraday cup, the spatial distribution of positron emitters obtained by the SRNA-2KG code, and the intercomparison of computational codes in radiation dosimetry indicate immediate applicability of the Monte Carlo techniques in clinical practice. In this paper, we briefly present the physical model implemented in the SRNA package, the ISTAR proton dose planning software, as well as the results of the numerical experiments with proton beams to obtain 3D dose distributions in the eye and breast tumour.

  9. The Monte Carlo SRNA-VOX code for 3D proton dose distribution in voxelized geometry using CT data

    This paper describes the application of the SRNA Monte Carlo package for proton transport simulations in complex geometry and different material compositions. The SRNA package was developed for 3D dose distribution calculation in proton therapy and dosimetry, and it is based on the theory of multiple scattering. The decay of proton-induced compound nuclei was simulated by the Russian MSDM model and by our own model using ICRU 63 data. The developed package consists of two codes: SRNA-2KG, which simulates proton transport in combinatorial geometry, and SRNA-VOX, which uses a voxelized geometry built from CT data and a conversion of the Hounsfield numbers to tissue elemental composition. Transition probabilities for both codes are prepared by the SRNADAT code. The simulation of proton beam characterization by a multi-layer Faraday cup, the spatial distribution of positron emitters obtained by the SRNA-2KG code, and the intercomparison of computational codes in radiation dosimetry indicate immediate applicability of the Monte Carlo techniques in clinical practice. In this paper, we briefly present the physical model implemented in the SRNA package, the ISTAR proton dose planning software, as well as the results of the numerical experiments with proton beams to obtain 3D dose distributions in the eye and breast tumour.

  10. Spectral history model in DYN3D: Verification against coupled Monte-Carlo thermal-hydraulic code BGCore

    Highlights: • A Pu-239 based spectral history method was tested on a 3D BWR single assembly case. • Burnup of a BWR fuel assembly was performed with the nodal code DYN3D. • The reference solution was obtained with the coupled Monte-Carlo thermal-hydraulic code BGCore. • The proposed method accurately reproduces the moderator density history effect for the BWR test case. - Abstract: This research focuses on the verification of a recently developed methodology accounting for spectral history effects in 3D full core nodal simulations. The traditional deterministic core simulation procedure includes two stages: (1) generation of homogenized macroscopic cross section sets and (2) application of these sets to obtain a full 3D core solution with nodal codes. The standard approach adopts the branch methodology, in which the branches represent all expected combinations of operational conditions as a function of burnup (main branch). The main branch is produced for constant, usually averaged, operating conditions (e.g. coolant density). As a result, the spectral history effects that are associated with coolant density variation are not taken into account properly. A number of methods to solve this problem (such as micro-depletion and spectral indexes) were developed and implemented in modern nodal codes. Recently, we proposed a new and robust method to account for history effects. The methodology was implemented in DYN3D and involves modification of the few-group cross section sets. The method utilizes the local Pu-239 concentration as an indicator of spectral history. The method was verified for PWR and VVER applications. However, the spectrum variation in a BWR core is more pronounced due to the stronger coolant density change. The purpose of the current work is investigating the applicability of the method to BWR analysis. The proposed methodology was verified against the recently developed BGCore system, which couples Monte Carlo neutron transport with depletion and thermal-hydraulic solvers and

  11. TIMOC-72, 3-D Time-Dependent Homogeneous or Inhomogeneous Neutron Transport by Monte-Carlo

    1 - Nature of physical problem solved: TIMOC solves the energy and time dependent (or stationary) homogeneous or inhomogeneous neutron transport equation in three-dimensional geometries. The program can treat all commonly used scattering kernels, such as absorption, fission, isotropic and anisotropic elastic scattering, level excitation, the evaporation model, and the energy transfer matrix model, which includes (n,2n) reactions. The exchangeable geometry routines consist at present of (a) periodical multilayer slab, spherical and cylindrical lattices, (b) an elaborate three-dimensional cylindrical geometry which allows all kinds of subdivisions, (c) the very flexible O5R geometry routine which is able to describe any body combinations with surfaces of second order. The program samples the stationary or time-energy-region dependent fluxes as well as the transmission ratios between geometrical regions and the following integral quantities or eigenvalues, the leakage rate, the slowing down density, the production to source ratio, the multiplication factor based on flux and collision estimator, the mean production time, the mean destruction time, time distribution of production and destruction, the fission rates, the energy dependent absorption rates, the energy deposition due to elastic scattering for the different geometrical regions. 2 - Method of solution: TIMOC is a Monte Carlo program and uses several, partially optional variance reducing techniques, such as the method of expected values (weight factor), Russian roulette, the method of fractional generated neutrons, double sampling, semi-systematic sampling and the method of expected leakage probability. Within the neutron lifetime a discrete energy value is given after each collision process. The nuclear data input is however done by group averaged cross sections. The program can generate the neutron fluxes either resulting from an external source or in the form of fundamental mode distributions by a special

  12. The Development of WARP - A Framework for Continuous Energy Monte Carlo Neutron Transport in General 3D Geometries on GPUs

    Bergmann, Ryan

    Graphics processing units, or GPUs, have gradually increased in computational power from the small, job-specific boards of the early 1990s to the programmable powerhouses of today. Compared to more common central processing units, or CPUs, GPUs have a higher aggregate memory bandwidth, much higher floating-point operations per second (FLOPS), and lower energy consumption per FLOP. Because one of the main obstacles in exascale computing is power consumption, many new supercomputing platforms are gaining much of their computational capacity by incorporating GPUs into their compute nodes. Since CPU-optimized parallel algorithms are not directly portable to GPU architectures (or at least not without losing substantial performance), transport codes need to be rewritten to execute efficiently on GPUs. Unless this is done, reactor simulations cannot take full advantage of these new supercomputers. WARP, which can stand for ``Weaving All the Random Particles,'' is a three-dimensional (3D) continuous energy Monte Carlo neutron transport code developed in this work to efficiently implement a continuous energy Monte Carlo neutron transport algorithm on a GPU. WARP accelerates Monte Carlo simulations while preserving the benefits of using the Monte Carlo Method, namely, very few physical and geometrical simplifications. WARP is able to calculate multiplication factors, flux tallies, and fission source distributions for time-independent problems, and can run in either criticality or fixed source mode. WARP can transport neutrons in unrestricted arrangements of parallelepipeds, hexagonal prisms, cylinders, and spheres. WARP uses an event-based algorithm, but with some important differences. Moving data is expensive, so WARP uses a remapping vector of pointer/index pairs to direct GPU threads to the data they need to access. The remapping vector is sorted by reaction type after every transport iteration using a high-efficiency parallel radix sort, which serves to keep the
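
    The remapping-vector idea, sorting particle indices by reaction type so that threads handling the same reaction access contiguous work, is illustrated below with NumPy as a stand-in; the CUDA kernels and the parallel radix sort themselves are not reproduced.

```python
# NumPy stand-in for the remapping-vector idea: particle data stays in place, and a sorted index
# vector groups particles by sampled reaction so each reaction is processed as a contiguous block.
# The CUDA kernels and GPU radix sort are not reproduced here.
import numpy as np

rng = np.random.default_rng(8)
n = 16
positions = rng.random((n, 3))                       # particle data is never moved
energies = rng.random(n)
reaction = rng.integers(0, 4, n)                     # 0=scatter, 1=capture, 2=fission, 3=leak (illustrative)

remap = np.argsort(reaction, kind="stable")          # remapping vector, grouped by reaction type

for r, name in enumerate(("scatter", "capture", "fission", "leak")):
    block = remap[reaction[remap] == r]              # contiguous indices for this reaction
    # a per-reaction GPU kernel would now operate on positions[block] and energies[block]
    print(f"{name:8s}: particles {block.tolist()}")
```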

  13. A 3D photon superposition/convolution algorithm and its foundation on results of Monte Carlo calculations

    Ulmer, W.; Pyyry, J.; Kaissl, W.

    2005-04-01

    Based on previous publications on a triple Gaussian analytical pencil beam model and on Monte Carlo calculations using the Monte Carlo codes GEANT-Fluka, versions 95, 98, 2002, and BEAMnrc/EGSnrc, a three-dimensional (3D) superposition/convolution algorithm for photon beams (6 MV, 18 MV) is presented. Tissue heterogeneity is taken into account by electron density information of CT images. A clinical beam consists of a superposition of divergent pencil beams. A slab geometry was used as a phantom model to test computed results against measurements. An essential result is the existence of further dose build-up and build-down effects in the domain of density discontinuities. These effects increase in magnitude for small field sizes and densities ≤0.25 g cm⁻³, in particular with regard to field sizes considered in stereotaxy. They could be confirmed by measurements (mean standard deviation 2%). A practical impact is the dose distribution at transitions from bone to soft tissue, lung or cavities. This work has partially been presented at WC 2003, Sydney.
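
    The superposition/convolution step at a single depth amounts to convolving the incident fluence with the triple-Gaussian pencil-beam kernel; the sketch below shows that operation with invented weights and widths, not the fitted parameters of the paper.

```python
# Superposition/convolution at one depth: lateral dose = fluence map convolved with a
# triple-Gaussian pencil-beam kernel. Weights and widths are invented, not the paper's fit.
import numpy as np
from scipy.signal import fftconvolve

x = np.arange(-50, 51) * 0.1                           # lateral grid [cm], 1 mm spacing
X, Y = np.meshgrid(x, x)
R2 = X**2 + Y**2

def gauss2d(sigma):
    g = np.exp(-R2 / (2.0 * sigma**2))
    return g / g.sum()

weights, sigmas = (0.7, 0.25, 0.05), (0.3, 1.0, 3.0)   # illustrative triple-Gaussian parameters
kernel = sum(w * gauss2d(s) for w, s in zip(weights, sigmas))

fluence = np.zeros_like(R2)
fluence[(np.abs(X) < 2.5) & (np.abs(Y) < 2.5)] = 1.0   # open 5 cm x 5 cm field

dose = fftconvolve(fluence, kernel, mode="same")       # lateral dose distribution at this depth
print("central-axis dose:", round(dose[50, 50], 3), "  dose at the field edge:", round(dose[50, 75], 3))
```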

  14. Fully 3D tomographic reconstruction by Monte Carlo simulation of the system matrix in preclinical PET with iodine 124

    Immuno-PET imaging can be used to assess pharmacokinetics in radioimmunotherapy. When using iodine-124, quantitative PET imaging is limited by physics-based degrading factors within the detection system and the object, such as the long positron range in water and the complex spectrum of gamma photons. The objective of this thesis was to develop a fully 3D tomographic reconstruction method (S(MC)2PET) using Monte Carlo simulations for estimating the system matrix, in the context of preclinical imaging with iodine-124. The Monte Carlo simulation platform GATE was used for that purpose. System matrices of several levels of complexity were calculated, all including at least a model of the PET system response function. Physics processes in the object were either neglected or taken into account using a precise or a simplified object description. The impact of modelling refinement and of the statistical variance of the system matrix elements was evaluated on the final reconstructed images. These studies showed that a high level of complexity did not always improve qualitative and quantitative results, owing to the high variance of the associated system matrices. (author)

  15. Incorporation of electron tunnelling phenomenon into 3D Monte Carlo simulation of electrical percolation in graphite nanoplatelet composites

    The percolation threshold problem in insulating polymers filled with exfoliated conductive graphite nanoplatelets (GNPs) is re-examined in this 3D Monte Carlo simulation study. GNPs are modelled as solid discs wrapped by electrically conductive layers of certain thickness which represent half of the electron tunnelling distance. Two scenarios of 'impenetrable' and 'penetrable' GNPs are implemented in the simulations. The percolation thresholds for both scenarios are plotted versus the electron tunnelling distance for various GNP thicknesses. The assumption of successful dispersion and exfoliation, and the incorporation of the electron tunnelling phenomenon in the impenetrable simulations suggest that the simulated percolation thresholds are lower bounds for any experimental study. Finally, the simulation results are discussed and compared with other experimental studies.
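
    The core-shell connectivity rule, fillers count as connected when their conductive shells of half the tunnelling distance overlap, can be prototyped with a union-find spanning test. For brevity the sketch uses spheres rather than the paper's discs, and all sizes and counts are invented.

```python
# Core-shell connectivity sketch: fillers are spheres (not the paper's discs) wrapped in a shell of
# half the tunnelling distance; overlapping shells are connected. Union-find detects spanning.
# All sizes and particle counts are invented.
import numpy as np

rng = np.random.default_rng(9)

def percolates(n, box, r_core, tunnel):
    r_eff = r_core + tunnel / 2.0                      # core radius plus half the tunnelling distance
    centres = rng.random((n, 3)) * box
    parent = list(range(n + 2))                        # two extra nodes for the x = 0 and x = box faces

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for i in range(n):
        if centres[i, 0] < r_eff:
            union(i, n)
        if centres[i, 0] > box - r_eff:
            union(i, n + 1)
        d2 = np.sum((centres[i + 1:] - centres[i]) ** 2, axis=1)
        for j in np.nonzero(d2 < (2.0 * r_eff) ** 2)[0]:
            union(i, i + 1 + int(j))                   # shells overlap -> electrically connected
    return find(n) == find(n + 1)                      # a cluster bridges the two opposite faces

for n in (200, 400, 800):
    hits = sum(percolates(n, box=100.0, r_core=5.0, tunnel=2.0) for _ in range(20))
    print(f"n = {n}: spanning probability ~ {hits / 20:.2f}")
```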

  16. Benchmark of Atucha-2 PHWR RELAP5-3D control rod model by Monte Carlo MCNP5 core calculation

    Pecchia, M.; D'Auria, F. [San Piero A Grado Nuclear Research Group GRNSPG, Univ. of Pisa, via Diotisalvi, 2, 56122 - Pisa (Italy); Mazzantini, O. [Nucleoelectrica Argentina Sociedad Anonima (NA-SA), Buenos Aires (Argentina)]

    2012-07-01

    Atucha-2 is a Siemens-designed PHWR reactor under construction in the Republic of Argentina. Its geometrical complexity and peculiarities require the adoption of advanced Monte Carlo codes for performing realistic neutronic simulations. Therefore, core models of Atucha-2 PHWR were developed using MCNP5. In this work a methodology was set up to collect the flux in the hexagonal mesh by which the Atucha-2 core is represented. The scope of this activity is to evaluate the effect of an obliquely inserted control rod on the neutron flux in order to validate the RELAP5-3D©/NESTLE three-dimensional neutron kinetic coupled thermal-hydraulic model, applied by GRNSPG/UNIPI for performing selected transients of Chapter 15 of the Atucha-2 FSAR. (authors)

  17. Development of a 3D program for calculation of multigroup Dancoff factor based on Monte Carlo method in cylindrical geometry

    Highlights: • The code works based on Monte Carlo and escape probability methods. • Sensitivity of the Dancoff factor to the number of energy groups and to the type and arrangement of neighbouring fuel rods is considered. • Sensitivity of the Dancoff factor to control rod height is considered. • High efficiency is achieved by the method of sampling the neutron flight direction from the fuel surface. • Sensitivity of k to the Dancoff factor is considered. - Abstract: Evaluation of multigroup constants in reactor calculations depends on several parameters; among them, the Dancoff factor is used for calculation of the resonance integral as well as the flux depression in the resonance region in heterogeneous systems. This paper focuses on the computer program (MCDAN-3D) developed for calculation of the multigroup black and gray Dancoff factors in three-dimensional geometry based on Monte Carlo and escape probability methods. The developed program is capable of calculating the Dancoff factor for an arbitrary arrangement of fuel rods with different cylindrical fuel dimensions and control rods of various lengths inserted in the reactor core. The program calculates the black and gray Dancoff factors for neutron flux generated with cosine and constant shapes in the axial fuel direction. The effects of the cladding and moderator are examined by studying the sensitivity of the Dancoff factor to variations in fuel arrangement and neutron energy group for CANDU37 and VVER1000 fuel assemblies. The MCDAN-3D results show excellent agreement with the MCNPX code. The calculated Dancoff factors are then used for cell criticality calculations with the WIMS code.

  18. Stratospheric trace gases from SCIAMACHY limb measurements using 3D full spherical Monte Carlo radiative transfer model Tracy-II

    Pukite, Janis [Max-Planck-Institut fuer Chemie, Mainz (Germany); Institute of Atomic Physics and Spectroscopy, University of Latvia (Latvia); Kuehl, Sven; Wagner, Thomas [Max-Planck-Institut fuer Chemie, Mainz (Germany); Deutschmann, Tim; Platt, Ulrich [Institut fuer Umweltphysik, University of Heidelberg (Germany)]

    2007-07-01

    A two-step method for the retrieval of stratospheric trace gases (NO₂, BrO, OClO) from SCIAMACHY limb observations in the UV/VIS spectral region is presented: First, DOAS is applied to the spectra, yielding slant column densities (SCDs) of the respective trace gases. Second, the SCDs are converted into vertical concentration profiles applying radiative transfer modeling. The Monte Carlo method benefits from conceptual simplicity and allows realizing the concept of full spherical geometry of the atmosphere and also its 3D properties, which are important for a realistic description of the limb geometry. The implementation of a 3D box air mass factor concept allows accounting for horizontal gradients of trace gases. An important point is the effect of horizontal gradients on the profile inversion. This is of special interest in Polar Regions, where the Sun elevation is typically low and photochemistry can vary strongly along the long absorption paths. We investigate the influence of horizontal gradients by applying 3-dimensional radiative transfer modelling.
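
    The second retrieval step, turning slant column densities into a vertical profile through box air mass factors, reduces to a linear inversion. The sketch below uses an invented box-AMF matrix; in the retrieval described above those sensitivities come from the 3D Monte Carlo radiative transfer model.

```python
# Second retrieval step as a linear inversion: SCD = box_AMF @ profile, solved by least squares.
# The box-AMF matrix and profile below are invented; in practice they come from the 3D Monte Carlo RTM.
import numpy as np

true_profile = np.array([1.0, 3.0, 2.0, 0.5])          # trace-gas partial columns in 4 layers (arbitrary units)

# box_amf[i, j]: sensitivity of limb viewing geometry i to layer j (hypothetical numbers)
box_amf = np.array([[4.0, 1.5, 0.5, 0.1],
                    [1.0, 4.5, 1.5, 0.4],
                    [0.3, 1.2, 5.0, 1.5],
                    [0.1, 0.4, 1.5, 5.5],
                    [0.05, 0.2, 0.8, 4.0]])

rng = np.random.default_rng(10)
scd = box_amf @ true_profile + rng.normal(0.0, 0.05, box_amf.shape[0])   # simulated noisy SCDs

retrieved, *_ = np.linalg.lstsq(box_amf, scd, rcond=None)                # unregularized inversion
print("true     :", true_profile)
print("retrieved:", np.round(retrieved, 2))
```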

  19. Investigation of the Power Coefficient of Reactivity of 3D CANDU Reactor through Detailed Monte Carlo Analysis

    The heat is removed by the heavy water coolant, which is completely separated from the stationary moderator. Due to the good neutron economy of the CANDU reactor, natural uranium fuel is used without enrichment. Because of the unique core configuration characteristics, there is less resonance absorption of neutrons in the fuel, which leads to a relatively small fuel temperature coefficient (FTC). The value of the FTC can even be positive due to the 239Pu buildup during fuel depletion and also the neutron up-scattering by the oxygen atoms in the fuel. Unlike the pressurized light water reactor, it is well known that CANDU-6 has a positive coolant void reactivity (CVR) and coolant temperature coefficient (CTC). In traditional reactor analysis, the asymptotic scattering kernel has been used, which neglects the thermal motion of nuclides such as U-238. However, it is well accepted that the thermal movement of the target can affect the scattering reaction in the vicinity of a scattering resonance and enhance neutron capture by the capture resonances. Some recent works have revealed that the thermal motion of U-238 affects the scattering reaction and that the resulting Doppler broadening of the scattering resonances enhances the FTC of thermal reactors, including PWRs, by 10-15%. In order to observe the impact of the Doppler broadening of the scattering resonances on the criticality and FTC, a recent investigation was done for a clean and fresh CANDU fuel lattice using the Monte Carlo code MCNPX. In Ref. 3 the so-called DBRC (Doppler Broadened Rejection Correction) method was adopted to consider the thermal movement of U-238. In this study, the safety parameters of CANDU-6 are re-evaluated by using the continuous-energy Monte Carlo code SERPENT 2, which uses the DBRC method to simulate the thermal motion of U-238. The analysis is performed for a full 3-D CANDU-6 core and the PCR is evaluated near equilibrium burnup. For a high-fidelity Monte Carlo calculation

  20. The Development of WARP - A Framework for Continuous Energy Monte Carlo Neutron Transport in General 3D Geometries on GPUs

    Bergmann, Ryan

    Graphics processing units, or GPUs, have gradually increased in computational power from the small, job-specific boards of the early 1990s to the programmable powerhouses of today. Compared to more common central processing units, or CPUs, GPUs have a higher aggregate memory bandwidth, much higher floating-point operations per second (FLOPS), and lower energy consumption per FLOP. Because one of the main obstacles in exascale computing is power consumption, many new supercomputing platforms are gaining much of their computational capacity by incorporating GPUs into their compute nodes. Since CPU-optimized parallel algorithms are not directly portable to GPU architectures (or at least not without losing substantial performance), transport codes need to be rewritten to execute efficiently on GPUs. Unless this is done, reactor simulations cannot take full advantage of these new supercomputers. WARP, which can stand for ``Weaving All the Random Particles,'' is a three-dimensional (3D) continuous energy Monte Carlo neutron transport code developed in this work to efficiently implement a continuous energy Monte Carlo neutron transport algorithm on a GPU. WARP accelerates Monte Carlo simulations while preserving the benefits of using the Monte Carlo Method, namely, very few physical and geometrical simplifications. WARP is able to calculate multiplication factors, flux tallies, and fission source distributions for time-independent problems, and can run in either criticality or fixed source mode. WARP can transport neutrons in unrestricted arrangements of parallelepipeds, hexagonal prisms, cylinders, and spheres. WARP uses an event-based algorithm, but with some important differences. Moving data is expensive, so WARP uses a remapping vector of pointer/index pairs to direct GPU threads to the data they need to access. The remapping vector is sorted by reaction type after every transport iteration using a high-efficiency parallel radix sort, which serves to keep the

  1. 3D imaging using combined neutron-photon fan-beam tomography: A Monte Carlo study.

    Hartman, J; Yazdanpanah, A Pour; Barzilov, A; Regentova, E

    2016-05-01

    The application of combined neutron-photon tomography for 3D imaging is examined using MCNP5 simulations for objects of simple shapes and different materials. Two-dimensional transmission projections were simulated for fan-beam scans using 2.5 MeV deuterium-deuterium and 14 MeV deuterium-tritium neutron sources, and high-energy X-ray sources, such as 1 MeV, 6 MeV and 9 MeV. Photons enable assessment of electron density and related mass density; neutrons aid in estimating the product of density and material-specific microscopic cross section. The ratio between the two provides the composition, while CT allows shape evaluation. Using a developed imaging technique, objects and their material compositions have been visualized. PMID:26953978

  2. Hydrogen adsorption and desorption with 3D silicon nanotube-network and film-network structures: Monte Carlo simulations

    Li, Ming; Huang, Xiaobo; Kang, Zhan

    2015-08-01

    Hydrogen is clean, sustainable, and renewable, and is thus viewed as a promising energy carrier. However, its industrial utilization is greatly hampered by the lack of effective hydrogen storage and release methods. Carbon nanotubes (CNTs) have been viewed as potential hydrogen containers, but it has been shown that pure CNTs cannot attain the desired target capacity of hydrogen storage. In this paper, we present a numerical study on the material-driven and structure-driven hydrogen adsorption of 3D silicon networks and propose a deformation-driven hydrogen desorption approach based on molecular simulations. Two types of 3D nanostructures, silicon nanotube-network (Si-NN) and silicon film-network (Si-FN), are first investigated in terms of hydrogen adsorption and desorption capacity with grand canonical Monte Carlo simulations. It is revealed that the hydrogen storage capacity is determined by the lithium doping ratio and the geometrical parameters, and that the maximum hydrogen uptake can be achieved by a 3D nanostructure with an optimal configuration and doping ratio obtained through a design optimization technique. For hydrogen desorption, a mechanical-deformation-driven hydrogen-release approach is proposed. Compared with temperature- or pressure-change-induced hydrogen desorption methods, the proposed approach is so effective that nearly complete hydrogen desorption can be achieved by Si-FN nanostructures under sufficient compression without structural failure. The approach is also reversible, since the mechanical deformation in the Si-FN nanostructures can be elastically recovered, which suggests good reusability. This study may shed light on the mechanism of hydrogen adsorption and desorption and thus provide useful guidance toward the engineering design of microstructural hydrogen (or other gas) adsorption materials.
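
    As a rough illustration of the grand canonical Monte Carlo machinery behind such adsorption studies, the sketch below applies the standard insertion/deletion acceptance rules (Frenkel-and-Smit form) to an idealized, hypothetical adsorbent potential in reduced units; it is not the model used by the authors, and pair interactions between adsorbate molecules are omitted.

```python
import numpy as np

rng = np.random.default_rng(42)

kT = 1.0            # reduced temperature
mu = -3.0           # reduced chemical potential
V = 1000.0          # box volume (reduced units, 10 x 10 x 10)
Lambda3 = 1.0       # thermal de Broglie wavelength cubed (reduced)

def adsorption_energy(pos):
    """Hypothetical external potential of the adsorbent acting on one molecule."""
    return -2.0 * np.exp(-np.linalg.norm(pos - 5.0) ** 2 / 8.0)

positions = []      # adsorbate molecules; no adsorbate-adsorbate interactions in this sketch

for step in range(20000):
    if rng.uniform() < 0.5 or not positions:               # attempt an insertion
        trial = rng.uniform(0.0, 10.0, 3)
        dU = adsorption_energy(trial)                      # dU = U_new - U_old
        N = len(positions)
        acc = (V / (Lambda3 * (N + 1))) * np.exp((mu - dU) / kT)
        if rng.uniform() < min(1.0, acc):
            positions.append(trial)
    else:                                                  # attempt a deletion
        i = rng.integers(len(positions))
        dU = -adsorption_energy(positions[i])              # dU = U_new - U_old on removal
        N = len(positions)
        acc = (Lambda3 * N / V) * np.exp(-(mu + dU) / kT)
        if rng.uniform() < min(1.0, acc):
            positions.pop(i)

print("loading in final configuration:", len(positions))
```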

  3. Hydrogen adsorption and desorption with 3D silicon nanotube-network and film-network structures: Monte Carlo simulations

    Hydrogen is clean, sustainable, and renewable, and is thus viewed as a promising energy carrier. However, its industrial utilization is greatly hampered by the lack of effective hydrogen storage and release methods. Carbon nanotubes (CNTs) have been viewed as potential hydrogen containers, but it has been shown that pure CNTs cannot attain the desired target capacity of hydrogen storage. In this paper, we present a numerical study on the material-driven and structure-driven hydrogen adsorption of 3D silicon networks and propose a deformation-driven hydrogen desorption approach based on molecular simulations. Two types of 3D nanostructures, silicon nanotube-network (Si-NN) and silicon film-network (Si-FN), are first investigated in terms of hydrogen adsorption and desorption capacity with grand canonical Monte Carlo simulations. It is revealed that the hydrogen storage capacity is determined by the lithium doping ratio and the geometrical parameters, and that the maximum hydrogen uptake can be achieved by a 3D nanostructure with an optimal configuration and doping ratio obtained through a design optimization technique. For hydrogen desorption, a mechanical-deformation-driven hydrogen-release approach is proposed. Compared with temperature- or pressure-change-induced hydrogen desorption methods, the proposed approach is so effective that nearly complete hydrogen desorption can be achieved by Si-FN nanostructures under sufficient compression without structural failure. The approach is also reversible, since the mechanical deformation in the Si-FN nanostructures can be elastically recovered, which suggests good reusability. This study may shed light on the mechanism of hydrogen adsorption and desorption and thus provide useful guidance toward the engineering design of microstructural hydrogen (or other gas) adsorption materials.

  4. Hydrogen adsorption and desorption with 3D silicon nanotube-network and film-network structures: Monte Carlo simulations

    Li, Ming; Kang, Zhan, E-mail: zhankang@dlut.edu.cn [State Key Laboratory of Structural Analysis for Industrial Equipment, Dalian University of Technology, Dalian 116024 (China); Huang, Xiaobo [Suzhou Nuclear Power Research Institute, Suzhou 215000 (China)

    2015-08-28

    Hydrogen is clean, sustainable, and renewable, and is thus viewed as a promising energy carrier. However, its industrial utilization is greatly hampered by the lack of effective hydrogen storage and release methods. Carbon nanotubes (CNTs) have been viewed as potential hydrogen containers, but it has been shown that pure CNTs cannot attain the desired target capacity of hydrogen storage. In this paper, we present a numerical study on the material-driven and structure-driven hydrogen adsorption of 3D silicon networks and propose a deformation-driven hydrogen desorption approach based on molecular simulations. Two types of 3D nanostructures, silicon nanotube-network (Si-NN) and silicon film-network (Si-FN), are first investigated in terms of hydrogen adsorption and desorption capacity with grand canonical Monte Carlo simulations. It is revealed that the hydrogen storage capacity is determined by the lithium doping ratio and the geometrical parameters, and that the maximum hydrogen uptake can be achieved by a 3D nanostructure with an optimal configuration and doping ratio obtained through a design optimization technique. For hydrogen desorption, a mechanical-deformation-driven hydrogen-release approach is proposed. Compared with temperature- or pressure-change-induced hydrogen desorption methods, the proposed approach is so effective that nearly complete hydrogen desorption can be achieved by Si-FN nanostructures under sufficient compression without structural failure. The approach is also reversible, since the mechanical deformation in the Si-FN nanostructures can be elastically recovered, which suggests good reusability. This study may shed light on the mechanism of hydrogen adsorption and desorption and thus provide useful guidance toward the engineering design of microstructural hydrogen (or other gas) adsorption materials.

  5. Image quality assessment of LaBr3-based whole-body 3D PET scanners: a Monte Carlo evaluation

    The main thrust of this work is the investigation and design of a whole-body PET scanner based on new lanthanum bromide scintillators. We use Monte Carlo simulations to generate data for a 3D PET scanner based on LaBr3 detectors, and to assess the count-rate capability and the reconstructed image quality of phantoms with hot and cold spheres using contrast and noise parameters. Previously we have shown that LaBr3 has very high light output, excellent energy resolution and fast timing properties, which can lead to the design of a time-of-flight (TOF) whole-body PET camera. The data presented here illustrate the performance of LaBr3 without the additional benefit of TOF information, although our intention is to develop a scanner with TOF measurement capability. The only drawbacks of LaBr3 are its lower stopping power and photo-fraction, which affect both sensitivity and spatial resolution. However, in 3D PET imaging, where energy resolution is very important for reducing scattered coincidences in the reconstructed image, the image quality attained in a non-TOF LaBr3 scanner can potentially equal or surpass that achieved with other high-sensitivity scanners. Our results show that there is a gain in NEC arising from the reduced scatter and random fractions in a LaBr3 scanner. The reconstructed image resolution is slightly worse than that of a high-Z scintillator, but at increased count-rates, reduced pulse pileup leads to an image resolution similar to that of LSO. Image quality simulations predict reduced contrast for small hot spheres compared to an LSO scanner, but improved noise characteristics at similar clinical activity levels.
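
    The noise-equivalent count (NEC) gain mentioned above follows from the standard NEC expression; the count rates below are invented purely for illustration and are not results from the study.

```python
def nec(trues, scatter, randoms, k=1.0):
    """Noise-equivalent count rate: NEC = T^2 / (T + S + k*R).
    k = 1 for a noiseless randoms estimate, k = 2 for delayed-window subtraction."""
    return trues**2 / (trues + scatter + k * randoms)

# Hypothetical count rates (kcps) at the same activity, showing how lower scatter
# and randoms fractions translate into a higher NEC for the same trues rate.
print(nec(trues=100.0, scatter=45.0, randoms=60.0))   # higher scatter/randoms scanner
print(nec(trues=100.0, scatter=30.0, randoms=40.0))   # lower scatter/randoms -> higher NEC
```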

  6. The Monte Carlo atmospheric radiative transfer model McArtim: Introduction and validation of Jacobians and 3D features

    A new Monte Carlo atmospheric radiative transfer model is presented which is designed to support the interpretation of UV/vis/near-IR spectroscopic measurements of scattered sunlight in the atmosphere. The integro-differential equation describing the underlying transport process and its formal solution are discussed. A stochastic approach to solving the equation, the Monte Carlo method, is deduced and its application to the formal solution is demonstrated. It is shown how model photon trajectories of the resulting ray-tracing algorithm are used to estimate functionals of the radiation field such as radiances, actinic fluxes and light path integrals. In addition, Jacobians of the former quantities with respect to optical parameters of the atmosphere are analyzed. Model output quantities are validated against measurements, by self-consistency tests and through intercomparisons with other radiative transfer models.

  7. TART98 a coupled neutron-photon 3-D, combinatorial geometry time dependent Monte Carlo Transport code

    Cullen, D E

    1998-11-22

    TART98 is a coupled neutron-photon, 3 Dimensional, combinatorial geometry, time dependent Monte Carlo radiation transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART98 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART98 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART98 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART98 and its data files.

  8. TART 2000: A Coupled Neutron-Photon, 3-D, Combinatorial Geometry, Time Dependent, Monte Carlo Transport Code

    TART2000 is a coupled neutron-photon, 3 Dimensional, combinatorial geometry, time dependent Monte Carlo radiation transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART2000 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART2000 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART2000 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART2000 and its data files.

  9. Comparison of 3D and 4D Monte Carlo optimization in robotic tracking stereotactic body radiotherapy of lung cancer

    Chan, Mark K.H. [Tuen Mun Hospital, Department of Clinical Oncology, Hong Kong (S.A.R) (China); Werner, Rene [The University Medical Center Hamburg-Eppendorf, Department of Computational Neuroscience, Hamburg (Germany); Ayadi, Miriam [Leon Berard Cancer Center, Department of Radiation Oncology, Lyon (France); Blanck, Oliver [University Clinic of Schleswig-Holstein, Department of Radiation Oncology, Luebeck (Germany); CyberKnife Center Northern Germany, Guestrow (Germany)

    2014-09-20

    To investigate the adequacy of three-dimensional (3D) Monte Carlo (MC) optimization (3DMCO) and the potential of four-dimensional (4D) dose renormalization (4DMC_renorm) and optimization (4DMCO) for CyberKnife (Accuray Inc., Sunnyvale, CA) radiotherapy planning in lung cancer. For 20 lung tumors, 3DMCO and 4DMCO plans were generated with planning target volume (PTV_5mm) = gross tumor volume (GTV) plus 5 mm, assuming 3 mm for tracking errors (PTV_3mm) and 2 mm for residual organ deformations. Three fractions of 60 Gy were prescribed to ≥ 95% of the PTV_5mm. Each 3DMCO plan was recalculated by 4D MC dose calculation (4DMC_recal) to assess the dosimetric impact of organ deformations. The 4DMC_recal plans were renormalized (4DMC_renorm) to 95% dose coverage of the PTV_5mm for comparison with the 4DMCO plans. A 3DMCO plan was considered adequate if the 4DMC_recal plan showed ≥ 95% of the PTV_3mm receiving 60 Gy and doses to other organs at risk (OARs) were below the limits. In seven lesions, 3DMCO was inadequate, providing < 95% dose coverage to the PTV_3mm. Comparison of 4DMC_recal and 3DMCO plans showed that organ deformations resulted in lower OAR doses. Renormalizing the 4DMC_recal plans could produce OAR doses higher than the tolerances in some 4DMC_renorm plans. Dose conformity of the 4DMC_renorm plans was inferior to that of the 3DMCO and 4DMCO plans. The 4DMCO plans did not always achieve OAR dose reductions compared to 3DMCO and 4DMC_renorm plans. This study indicates that 3DMCO with 2 mm margins for organ deformations may be inadequate for CyberKnife-based lung stereotactic body radiotherapy (SBRT). Renormalizing the 4DMC_recal plans could produce degraded dose conformity and increased OAR doses; 4DMCO can resolve this problem. (orig.)

  10. MCMini: Monte Carlo on GPGPU

    Marcus, Ryan C. [Los Alamos National Laboratory

    2012-07-25

    MCMini is a proof of concept that demonstrates the possibility for Monte Carlo neutron transport using OpenCL with a focus on performance. This implementation, written in C, shows that tracing particles and calculating reactions on a 3D mesh can be done in a highly scalable fashion. These results demonstrate a potential path forward for MCNP or other Monte Carlo codes.

  11. Benchmark of Atucha-2 PHWR RELAP5-3D control rod model by Monte Carlo MCNP5 core calculation

    Atucha-2 is a Siemens-designed PHWR reactor under construction in the Republic of Argentina. Its geometrical complexity and peculiarities require the adoption of advanced Monte Carlo codes for performing realistic neutronic simulations. Therefore, core models of the Atucha-2 PHWR were developed using MCNP5. In this work a methodology was set up to collect the flux in the hexagonal mesh by which the Atucha-2 core is represented. The scope of this activity is to evaluate the effect of the obliquely inserted control rods on the neutron flux in order to validate the RELAP5-3D/NESTLE three-dimensional neutron kinetics coupled thermal-hydraulic model, applied by GRNSPG/UNIPI for performing selected transients of Chapter 15 of the Atucha-2 FSAR. (authors)

  12. 3D Direct Simulation Monte Carlo Modelling of the Inner Gas Coma of Comet 67P/Churyumov-Gerasimenko: A Parameter Study

    Liao, Y.; Su, C. C.; Marschall, R.; Wu, J. S.; Rubin, M.; Lai, I. L.; Ip, W. H.; Keller, H. U.; Knollenberg, J.; Kührt, E.; Skorov, Y. V.; Thomas, N.

    2016-03-01

    Direct Simulation Monte Carlo (DSMC) is a powerful numerical method for studying rarefied gas flows such as cometary comae and has been used by several authors over the past decade to study cometary outflow. However, the investigation of the parameter space in simulations can be time consuming since 3D DSMC is computationally highly intensive. For the target of ESA's Rosetta mission, comet 67P/Churyumov-Gerasimenko, we have identified the extent to which modification of several parameters influences the 3D flow and gas temperature fields, and have attempted to establish the reliability of inferences about the initial conditions from in situ and remote sensing measurements. A large number of DSMC runs have been completed with varying input parameters. In this work, we present the simulation results and draw conclusions on the sensitivity of the solutions to certain inputs. It is found that, among the cases of water outgassing, the surface production rate distribution is the most influential variable for the flow field.

  13. SU-C-201-06: Utility of Quantitative 3D SPECT/CT Imaging in Patient Specific Internal Dosimetry of 153-Samarium with GATE Monte Carlo Package

    Purpose: Patient-specific 3-dimensional (3D) internal dosimetry in targeted radionuclide therapy is essential for efficient treatment. Two major steps to achieve reliable results are: 1) generating quantitative 3D images of the radionuclide distribution and attenuation coefficients and 2) using a reliable method for dose calculation based on the activity and attenuation maps. In this research, internal dosimetry for 153-Samarium (153-Sm) was performed using SPECT/CT images coupled with the GATE Monte Carlo package. Methods: A 50-year-old woman with bone metastases from breast cancer was prescribed 153-Sm treatment (gamma: 103 keV and beta: 0.81 MeV). A SPECT/CT scan was performed with the Siemens Simbia-T scanner. SPECT and CT images were registered using the default registration software. SPECT quantification was achieved by compensating for all image degrading factors, including body attenuation, Compton scattering and the collimator-detector response (CDR). The triple energy window method was used to estimate and eliminate the scattered photons. Iterative ordered-subsets expectation maximization (OSEM) with correction for attenuation and the distance-dependent CDR was used for image reconstruction. Bilinear energy mapping was used to convert Hounsfield units in the CT image to an attenuation map. Organ borders were defined by itk-SNAP toolkit segmentation on the CT image. GATE was then used for the internal dose calculation. The Specific Absorbed Fractions (SAFs) and S-values were reported following the MIRD schema. Results: The results showed that the largest SAFs and S-values are in osseous organs, as expected. The S-value for the lung is the highest after the spine, which can be important in 153-Sm therapy. Conclusion: We presented the utility of SPECT/CT images and Monte Carlo for patient-specific dosimetry as a reliable and accurate method. It has several advantages over template-based methods or simplified dose estimation methods. With the advent of high-speed computers, Monte Carlo can be used for treatment planning
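
    The triple-energy-window (TEW) scatter estimate used in the quantification step has a simple closed form; the sketch below applies it to one projection bin, with window widths and counts chosen arbitrarily for illustration.

```python
def tew_scatter(c_lower, c_upper, w_lower, w_upper, w_main):
    """Triple-energy-window scatter estimate for the main photopeak window:
    S ~ (C_lower / w_lower + C_upper / w_upper) * w_main / 2  (trapezoidal approximation)."""
    return (c_lower / w_lower + c_upper / w_upper) * w_main / 2.0

# Hypothetical counts in the three windows for one projection bin around a 103 keV photopeak.
c_main, c_low, c_up = 1200.0, 150.0, 90.0
scatter = tew_scatter(c_low, c_up, w_lower=6.0, w_upper=6.0, w_main=20.0)   # widths in keV
primary = max(c_main - scatter, 0.0)
print(f"scatter estimate = {scatter:.1f}, primary counts = {primary:.1f}")
```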

  14. SU-C-201-06: Utility of Quantitative 3D SPECT/CT Imaging in Patient Specific Internal Dosimetry of 153-Samarium with GATE Monte Carlo Package

    Fallahpoor, M; Abbasi, M [Tehran University of Medical Sciences, Vali-Asr Hospital, Tehran, Tehran (Iran, Islamic Republic of); Sen, A [University of Houston, Houston, TX (United States); Parach, A [Shahid Sadoughi University of Medical Sciences, Yazd, Yazd (Iran, Islamic Republic of); Kalantari, F [UT Southwestern Medical Center, Dallas, TX (United States)

    2015-06-15

    Purpose: Patient-specific 3-dimensional (3D) internal dosimetry in targeted radionuclide therapy is essential for efficient treatment. Two major steps to achieve reliable results are: 1) generating quantitative 3D images of the radionuclide distribution and attenuation coefficients and 2) using a reliable method for dose calculation based on the activity and attenuation maps. In this research, internal dosimetry for 153-Samarium (153-Sm) was performed using SPECT/CT images coupled with the GATE Monte Carlo package. Methods: A 50-year-old woman with bone metastases from breast cancer was prescribed 153-Sm treatment (gamma: 103 keV and beta: 0.81 MeV). A SPECT/CT scan was performed with the Siemens Simbia-T scanner. SPECT and CT images were registered using the default registration software. SPECT quantification was achieved by compensating for all image degrading factors, including body attenuation, Compton scattering and the collimator-detector response (CDR). The triple energy window method was used to estimate and eliminate the scattered photons. Iterative ordered-subsets expectation maximization (OSEM) with correction for attenuation and the distance-dependent CDR was used for image reconstruction. Bilinear energy mapping was used to convert Hounsfield units in the CT image to an attenuation map. Organ borders were defined by itk-SNAP toolkit segmentation on the CT image. GATE was then used for the internal dose calculation. The Specific Absorbed Fractions (SAFs) and S-values were reported following the MIRD schema. Results: The results showed that the largest SAFs and S-values are in osseous organs, as expected. The S-value for the lung is the highest after the spine, which can be important in 153-Sm therapy. Conclusion: We presented the utility of SPECT/CT images and Monte Carlo for patient-specific dosimetry as a reliable and accurate method. It has several advantages over template-based methods or simplified dose estimation methods. With the advent of high-speed computers, Monte Carlo can be used for treatment planning

  15. Comparison between Monte Carlo simulation and measurement with a 3D polymer gel dosimeter for dose distributions in biological samples

    Furuta, T.; Maeyama, T.; Ishikawa, K. L.; Fukunishi, N.; Fukasaku, K.; Takagi, S.; Noda, S.; Himeno, R.; Hayashi, S.

    2015-08-01

    In this research, we used a 135 MeV/nucleon carbon-ion beam to irradiate a biological sample composed of fresh chicken meat and bones, which was placed in front of a PAGAT gel dosimeter, and compared the measured and simulated transverse-relaxation-rate (R2) distributions in the gel dosimeter. We experimentally measured the three-dimensional R2 distribution, which records the dose induced by particles penetrating the sample, by using magnetic resonance imaging. The obtained R2 distribution reflected the heterogeneity of the biological sample. We also conducted Monte Carlo simulations using the PHITS code by reconstructing the elemental composition of the biological sample from its computed tomography images while taking into account the dependence of the gel response on the linear energy transfer. The simulation reproduced the experimental distal edge structure of the R2 distribution with an accuracy under about 2 mm, which is approximately the same as the voxel size currently used in treatment planning.

  16. 3D Monte Carlo particle-in-cell simulations of critical ionization velocity experiments in the ionosphere

    Proper interpretation of space-based critical ionization velocity (CIV) experiments depends upon understanding the expected results from in-situ or remote sensors. A three-dimensional electromagnetic particle-in-cell code with Monte Carlo charged particle-neutral collisions has been developed to model CIV interactions in typical neutral gas release experiments. In the model, the released neutral gas is taken to be a spherical cloud traveling with a constant density and velocity v_n across the geomagnetic field B_0. The dynamics of the plasma ionized from the neutral cloud are then studied, and the induced instabilities are discussed. The simulations show that the newly ionized plasma evolves to form an "asymmetric sphere-sheet tail" structure: the ions mainly drift with the neutral cloud and expand in the v x B_0 direction; the electrons are trapped by the magnetic field and form a curved "sheet-like" tail which spreads along the B_0 direction. The ionization rate determines the structure shape. Significant ion density enhancement occurs only in the core region of the neutral gas cloud. It is shown that the detection of CIV in an ionospheric gas release experiment critically depends on the sensor location.

  17. Comparison between Monte Carlo simulation and measurement with a 3D polymer gel dosimeter for dose distributions in biological samples

    In this research, we used a 135 MeV/nucleon carbon-ion beam to irradiate a biological sample composed of fresh chicken meat and bones, which was placed in front of a PAGAT gel dosimeter, and compared the measured and simulated transverse-relaxation-rate (R2) distributions in the gel dosimeter. We experimentally measured the three-dimensional R2 distribution, which records the dose induced by particles penetrating the sample, by using magnetic resonance imaging. The obtained R2 distribution reflected the heterogeneity of the biological sample. We also conducted Monte Carlo simulations using the PHITS code by reconstructing the elemental composition of the biological sample from its computed tomography images while taking into account the dependence of the gel response on the linear energy transfer. The simulation reproduced the experimental distal edge structure of the R2 distribution with an accuracy under about 2 mm, which is approximately the same as the voxel size currently used in treatment planning. (paper)

  18. Grain size distribution and topology in 3D grain growth simulation with large-scale Monte Carlo method

    Hao Wang; Guo-quan Liu; Xiang-ge Qin

    2009-01-01

    Three-dimensional normal grain growth was appropriately simulated using a Potts model Monte Carlo algorithm. The quasi-stationary grain size distribution obtained from the simulation agreed well with the experimental result for pure iron. The Weibull function with a parameter β = 2.77 and the Yu-Liu function with a parameter v = 2.71 fit the quasi-stationary grain size distribution well. The grain volume distribution is a function that decreases exponentially with increasing grain volume. The distribution of the boundary area of grains has a peak at S/<S> = 0.5, where S is the boundary area of a grain and <S> is the mean boundary area of all grains in the system. The lognormal function fits the face number distribution well, and the peak of the face number distribution is at f = 10. The mean radius of f-faced grains is not proportional to the face number, but appears to be related to it by a curve that is convex upward. In the 2D cross-section, both the perimeter law and the Aboav-Weaire law are observed to hold.
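
    A minimal 3D Potts-model grain-growth step of the kind used in such simulations can be sketched as follows; the lattice size, number of orientations, and temperature are arbitrary illustrative choices, not those of the cited study.

```python
import numpy as np

rng = np.random.default_rng(0)
L, Q, kT = 32, 64, 0.5                      # lattice size, grain orientations, temperature
spins = rng.integers(0, Q, size=(L, L, L))  # initial random microstructure

def local_energy(spins, x, y, z, s):
    """Number of unlike nearest neighbours (J = 1): E = sum_j (1 - delta(s, s_j))."""
    nb = [spins[(x + 1) % L, y, z], spins[(x - 1) % L, y, z],
          spins[x, (y + 1) % L, z], spins[x, (y - 1) % L, z],
          spins[x, y, (z + 1) % L], spins[x, y, (z - 1) % L]]
    return sum(1 for sj in nb if sj != s)

def mc_step(spins):
    """One Potts Monte Carlo attempt: flip a site to a neighbour's orientation (Metropolis)."""
    x, y, z = rng.integers(0, L, 3)
    s_old = spins[x, y, z]
    s_new = spins[(x + rng.choice([-1, 1])) % L, y, z]   # propose an adjacent grain's orientation
    dE = local_energy(spins, x, y, z, s_new) - local_energy(spins, x, y, z, s_old)
    if dE <= 0 or rng.uniform() < np.exp(-dE / kT):
        spins[x, y, z] = s_new

for _ in range(10000):
    mc_step(spins)
print("distinct grain orientations remaining:", np.unique(spins).size)
```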

  19. 3D Monte-Carlo transport calculations of whole slab reactor cores: validation of deterministic neutronic calculation routes

    Palau, J.M. [CEA Cadarache, Service de Physique des Reacteurs et du Cycle, Lab. de Projets Nucleaires, 13 - Saint-Paul-lez-Durance (France)

    2005-07-01

    This paper presents how Monte Carlo calculations (the French TRIPOLI4 poly-kinetic code with appropriate pre-processing and post-processing software called OVNI) are used in the case of 3-dimensional heterogeneous benchmarks (slab reactor cores) to reduce model biases and enable a thorough and detailed analysis of the performance of deterministic methods and their associated data libraries with respect to key neutron parameters (reactivity, local power). Outstanding examples of the application of these tools are presented regarding the new numerical methods implemented in the French lattice code APOLLO2 (advanced self-shielding models, the new IDT characteristics method implemented within the discrete-ordinates flux solver) and the JEFF3.1 nuclear data library (checked against the previous JEF2.2 file). In particular we have pointed out, by performing multigroup/point-wise TRIPOLI4 (assembly and core) calculations, the efficiency (in terms of accuracy and computation time) of the new IDT method developed in APOLLO2. In addition, by performing 3-dimensional TRIPOLI4 calculations of the whole slab core (a few million elementary volumes), the high quality of the new JEFF3.1 nuclear data files and revised evaluations (U235, U238, Hf) for the reactivity prediction of slab core critical experiments has been stressed. As feedback from the whole validation process, improvements in terms of nuclear data (mainly Hf capture cross-sections) and numerical methods (advanced quadrature formulas accounting for validation results, validation of new self-shielding models, parallelization) are suggested to further improve the APOLLO2-CRONOS2 standard calculation route. (author)

  20. 3D Monte-Carlo transport calculations of whole slab reactor cores: validation of deterministic neutronic calculation routes

    This paper presents how Monte Carlo calculations (the French TRIPOLI4 poly-kinetic code with appropriate pre-processing and post-processing software called OVNI) are used in the case of 3-dimensional heterogeneous benchmarks (slab reactor cores) to reduce model biases and enable a thorough and detailed analysis of the performance of deterministic methods and their associated data libraries with respect to key neutron parameters (reactivity, local power). Outstanding examples of the application of these tools are presented regarding the new numerical methods implemented in the French lattice code APOLLO2 (advanced self-shielding models, the new IDT characteristics method implemented within the discrete-ordinates flux solver) and the JEFF3.1 nuclear data library (checked against the previous JEF2.2 file). In particular we have pointed out, by performing multigroup/point-wise TRIPOLI4 (assembly and core) calculations, the efficiency (in terms of accuracy and computation time) of the new IDT method developed in APOLLO2. In addition, by performing 3-dimensional TRIPOLI4 calculations of the whole slab core (a few million elementary volumes), the high quality of the new JEFF3.1 nuclear data files and revised evaluations (U235, U238, Hf) for the reactivity prediction of slab core critical experiments has been stressed. As feedback from the whole validation process, improvements in terms of nuclear data (mainly Hf capture cross-sections) and numerical methods (advanced quadrature formulas accounting for validation results, validation of new self-shielding models, parallelization) are suggested to further improve the APOLLO2-CRONOS2 standard calculation route. (author)

  1. AIRTRANS, Time-Dependent, Energy Dependent 3-D Neutron Transport, Gamma Transport in Air by Monte-Carlo

    1 - Nature of physical problem solved: The function of the AIRTRANS system is to calculate by Monte Carlo methods the radiation field produced by neutron and/or gamma-ray sources located in the atmosphere. The radiation field is expressed as the time- and energy-dependent flux at a maximum of 50 point detectors in the atmosphere. The system calculates uncollided fluxes analytically and collided fluxes by the 'once-more collided' flux-at-a-point technique. Energy-dependent response functions can be applied to the fluxes to obtain desired flux functionals, such as doses, at the detector point. AIRTRANS can also be employed to generate sources of secondary gamma radiation. 2 - Method of solution: Neutron interactions treated in the calculational scheme include elastic (isotropic and anisotropic) scattering, inelastic (discrete level and continuum) scattering, and absorption. Charged-particle reactions, e.g., (n,p), are treated as absorptions. A built-in kernel option can be employed to take neutrons from 150 keV down to thermal energy, thus eliminating the need for particle tracking in this energy range. Another option used in conjunction with the neutron transport problem creates an 'interaction tape' which describes all the collision events that can lead to the production of secondary gamma-rays. This interaction tape can subsequently be used to generate a source of secondary gamma rays. The gamma-ray interactions considered include Compton scattering, pair production, and the photoelectric effect; the latter two processes are treated as absorption events. Incorporated in the system is an option to use a simple importance sampling technique for detectors that are many mean free paths from the source. In essence, particles which fly far from the source are split into fragments, the degree of fragmentation being proportional to the penetration distance from the source. Each fragment is tracked separately, thus increasing the percentage of computer time spent

  2. Validation of Atucha-2 PHWR helios and Relap5-3D model by Monte Carlo cell and core calculations - 335

    Within the framework of the Second Agreement 'Nucleoelectrica Argentina-SA - University of Pisa', a complex three-dimensional (3D) neutron kinetics (NK) coupled thermal-hydraulic (TH) RELAP5-3D model of the Atucha-2 PHWR has been developed and validated. The homogenized cross-section database was produced by the lattice physics code HELIOS. In order to increase the level of confidence in the results of such sophisticated models, an independent Monte Carlo code model, based on the MONTEBURNS package (MCNP5 + ORIGEN), has been set up. The scope of this activity is to obtain a systematic check of the deterministic code results. This necessity is particularly felt in the case of Atucha-2 reactor modeling, given its peculiarities (e.g., oblique control rods, positive void coefficient) and because, if approved by the Argentinean Safety Authority, the RELAP5-3D 3D NK TH model will constitute the first application of coupled neutron-kinetics/thermal-hydraulics code techniques to a reactor licensing project. (authors)

  3. Correlating variability of modeling parameters with non-isothermal stack performance: Monte Carlo simulation of a portable 3D planar solid oxide fuel cell stack

    Highlights: • A Monte Carlo simulation of a SOFC stack model is conducted for sensitivity analysis. • The non-isothermal stack model allows fast computation for statistical modeling. • Modeling parameters are ranked in view of their correlations with stack performance. • Rankings are different when varying the parameters simultaneously and individually. • Rankings change with the variability of the parameters and positions in the stack. - Abstract: The development of fuel cells has progressed to portable applications recently. This paper conducts a Monte Carlo simulation (MCS) of a spatially-smoothed non-isothermal model to correlate the performance of a 3D 5-cell planar solid oxide fuel cell (P-SOFC) stack with the variability of modeling parameters regarding material and geometrical properties and operating conditions. The computationally cost-efficient P-SOFC model for the MCS captures the leading-order transport phenomena and electrochemical mechanics of the 3D stack. Sensitivity analysis is carried out in two scenarios: first, by varying modeling parameters individually, and second by varying them simultaneously. The stochastic parameters are ranked according to the strength of their correlations with global and local stack performances. As a result, different rankings are obtained for the two scenarios. Moreover, in the second scenario, the rankings change with the nominal values and variability of the stochastic parameters as well as local positions within the stack, because of compensating or reinforcing effects between the varying parameters. Apart from the P-SOFCs, the present MCS can be extended to other types of fuel cells equipped with parallel flow channels. The fast stack model allows statistical modeling of a large stack of hundreds of cells for high-power applications without a prohibitive computational cost
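
    The two sensitivity-analysis scenarios described above (varying parameters individually versus simultaneously) can be illustrated with a generic Monte Carlo correlation ranking; the toy surrogate below merely stands in for the actual P-SOFC stack model, and the parameter names and coefficients are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

def stack_model(porosity, thickness, temperature):
    """Toy surrogate for a stack performance metric; placeholder for the real stack model."""
    return 2.0 * porosity - 0.5 * thickness + 0.01 * temperature + rng.normal(0.0, 0.05)

nominal = {"porosity": 0.4, "thickness": 1.0, "temperature": 1023.0}
spread  = {"porosity": 0.05, "thickness": 0.1, "temperature": 20.0}
n = 2000

# Scenario with all parameters varied simultaneously; rank by |correlation| with the output.
samples = {k: rng.normal(nominal[k], spread[k], n) for k in nominal}
output = np.array([stack_model(samples["porosity"][i], samples["thickness"][i],
                               samples["temperature"][i]) for i in range(n)])

ranking = sorted(((k, abs(np.corrcoef(samples[k], output)[0, 1])) for k in nominal),
                 key=lambda kv: kv[1], reverse=True)
for name, corr in ranking:
    print(f"{name:12s} |r| = {corr:.2f}")
```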

  4. 3D-personalized Monte Carlo dosimetry in 90Y-microspheres therapies of primary and secondary hepatic cancers: absorbed dose and biological effective dose considerations

    Full text of publication follows. Purpose: a 3D-Personalized Monte Carlo Dosimetry (PMCD) method was developed for treatment planning in nuclear medicine. The method was applied to Selective Internal Radiation Therapy (SIRT) using 90Y-microspheres for unresectable hepatic cancers. Methods: the PMCD method was evaluated for 20 patients treated for hepatic metastases or hepatocellular carcinoma at the European Hospital Georges Pompidou (Paris). First, regions of interest were outlined on the patient CT images. Using the OEDIPE software, patient-specific voxel phantoms were created. 99mTc-MAA SPECT data were then used to generate 3D matrices of cumulated activity. Absorbed doses and the Biologically Effective Dose (BED) were calculated at the voxel scale using the MCNPX Monte Carlo transport code. Finally, OEDIPE was used to determine the maximum injectable activity (MIA) for tolerance criteria on the organs at risk (OARs), i.e. the lungs and the non-tumoral liver (NTL). Tolerance criteria based on mean absorbed doses, mean BED, Dose-Volume Histograms (DVHs) or BED-Volume Histograms (BVHs) were considered. These MIAs were compared to those of the Partition Model with tolerance criteria on mean absorbed doses, which is a conventional method applied in clinical practice. Results: compared to the Partition Model recommendations, performing dosimetry with the PMCD method makes it possible to increase the prescribed activity while ensuring the radiation protection of the OARs. Moreover, tolerance criteria based on DVHs allow treatment planning efficiency to be enhanced by taking advantage of the parallel nature of the liver and the lungs, whose functions are not impaired if the level of irradiation to a fraction of the organ is kept sufficiently low. Finally, multi-cycle treatments based on tolerance criteria on mean BED and BVHs were considered to go further in the dose optimization, taking into account biological considerations such as cell repair and radiosensitivity. Conclusion: besides its feasibility

  5. Implementation of 3D Lattice Monte Carlo Simulation on a Cluster of Symmetric Multiprocessors

    雷咏梅; 蒋英; 冯捷

    2002-01-01

    This paper presents a new approach to parallelizing the 3D lattice Monte Carlo algorithms used in the numerical simulation of polymers on ZiQiang 2000, a cluster of symmetric multiprocessors (SMPs). The combined load of the cell and energy calculations over the time step is balanced to form a single spatial decomposition. Basic aspects and strategies of running Monte Carlo calculations on parallel computers are studied. The different steps involved in porting the software to a parallel architecture based on ZiQiang 2000 running under Linux and MPI are described briefly. It is found that parallelization becomes more advantageous when either the lattice is very large or the model contains many cells and chains.

  6. Exploring Monte Carlo methods

    Dunn, William L

    2012-01-01

    Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle proble

  7. Transmutation efficiency in the prismatic deep burner HTR concept by a 3D Monte Carlo depletion analysis

    This paper summarizes studies performed on the Deep-Burner Modular Helium Reactor (DB-MHR) concept design. Feasibility and sensitivity studies as well as fuel-cycle studies with a probabilistic methodology are presented. Current investigations of design strategies in one-pass and two-pass scenarios, and the computational tools, are also presented. Computations on the prismatic concept design were performed on a full-core 3D model basis. The probabilistic MCNP-MONTEBURNS-ORIGEN chain, with either JEF2.2 or BVI libraries, was used. One or two independently depleting media per assembly were accounted for. Because of the calculation time necessary to perform MCNP5 calculations with sufficient accuracy, the different parameters of the depletion calculations have to be optimized according to the desired accuracy of the results. Three strategies were compared: the two-pass strategy with driver and transmuter fuel loading in three rings, the one-pass strategy with driver fuel only in a three-ring geometry, and finally the one-pass strategy in four rings. The 'two pass' scenario is the best deep burner, with about a 70% mass reduction of the actinides of the PWR discharged fuel. However, the small difference obtained for incineration (∼5%) raises the question of whether this scenario is worthwhile, given the difficulty of the process for TF fuel. Finally, the main advantage of the 'two pass' scenario is the reduction of actinide activity. (author)

  8. Three-dimensional polarized Monte Carlo atmospheric radiative transfer model (3DMCPOL): 3D effects on polarized visible reflectances of a cirrus cloud

    A polarized atmospheric radiative transfer model for the computation of radiative transfer inside three-dimensional inhomogeneous media is described. This code is based on Monte Carlo methods and takes into account the polarization state of the light. The specific features introduced by this consideration are presented. After validation of the model by comparison with adding-doubling computations, examples of reflectances simulated for a synthetic inhomogeneous cirrus cloud are analyzed and compared with reflectances obtained under the classical assumption of a plane-parallel homogeneous cloud (1D approximation). As polarized reflectance is known to saturate for optical thicknesses of about 3, one might expect it to be less sensitive to 3D effects than total reflectance. However, at high spatial resolution (80 m), values of polarized reflectance much higher than those predicted by the 1D theory can be reached. The study of the reflectances of a step cloud shows that these large values are the result of illumination and shadowing effects similar to those often observed in total reflectances. In addition, we show that for a larger spatial resolution (10 km), the so-called plane-parallel bias leads to a non-negligible overestimation of the polarized reflectances of about 7-8%.

  9. Monte Carlo Radiative Transfer

    Whitney, Barbara A

    2011-01-01

    I outline methods for calculating the solution of Monte Carlo Radiative Transfer (MCRT) in scattering, absorption and emission processes of dust and gas, including polarization. I provide a bibliography of relevant papers on methods with astrophysical applications.

  10. Monte Carlo simulations of GeoPET experiments: 3D images of tracer distributions (18F, 124I and 58Co) in Opalinus clay, anhydrite and quartz

    Zakhnini, Abdelhamid; Kulenkampff, Johannes; Sauerzapf, Sophie; Pietrzyk, Uwe; Lippmann-Pipke, Johanna

    2013-08-01

    Understanding conservative fluid flow and reactive tracer transport in soils and rock formations requires quantitative transport visualization methods in 3D+t. After a decade of research and development we have established GeoPET as a non-destructive method with unrivalled sensitivity and selectivity, and with adequate spatial and temporal resolution, by applying Positron Emission Tomography (PET), a nuclear medicine imaging method, to dense rock material. Requirements for reaching the physical limit of image resolution of nearly 1 mm are (a) a high-resolution PET camera, like our ClearPET scanner (Raytest), and (b) appropriate correction methods for the scatter and attenuation of 511 keV photons in the dense geological material. The latter are far more significant in dense geological material than in human and small-animal body tissue (water). Here we present data from Monte Carlo simulations (MCS) reflecting selected GeoPET experiments. The MCS consider all involved nuclear physical processes of the measurement with the ClearPET system and allow us to quantify the sensitivity of the method and the scatter fractions in geological media as a function of material (quartz, Opalinus clay and anhydrite compared to water), PET isotope (18F, 58Co and 124I), and geometric system parameters. The synthetic data sets obtained by MCS are the basis for detailed performance assessment studies allowing for image quality improvements. A scatter correction method is applied exemplarily by subtracting projections of simulated scattered coincidences from experimental data sets prior to image reconstruction with an iterative reconstruction process.

  11. Monte Carlo estimation of scatter effects on quantitative myocardial blood flow and perfusable tissue fraction using 3D-PET and 15O-water

    Hirano, Yoshiyuki; Koshino, Kazuhiro; Watabe, Hiroshi; Fukushima, Kazuhito; Iida, Hidehiro

    2012-11-01

    In clinical cardiac positron emission tomography using 15O-water, significant tracer accumulation is observed not only in the heart but also in the liver and lung, which are partially outside the field-of-view. In this work, we investigated the effects of scatter on quantitative myocardial blood flow (MBF) and perfusable tissue fraction (PTF) by a precise Monte Carlo simulation (Geant4) and a numerical human model. We assigned activities to the heart, liver, and lung of the human model with varying ratios of organ activities according to an experimental time-activity curve and created dynamic sinograms. The sinogram data were reconstructed by filtered backprojection. By comparing a scatter-corrected image (SC) with a true image (TRUE), we evaluated the accuracy of the scatter correction. TRUE was reconstructed using a scatter-eliminated sinogram, which can be obtained only in simulations. A scatter-uncorrected image (W/O SC) and an attenuation-uncorrected image (W/O AC) were also reconstructed. Finally, we calculated MBF and PTF with a single tissue-compartment model for the four types of images. As a result, scatter was corrected accurately, and the MBF values derived from all types of images were consistent with the MBF obtained from TRUE. Meanwhile, only the PTF of the SC image was in agreement with the PTF of TRUE. From the simulation results, we concluded that quantitative MBF is less affected by scatter and absorption in 3D-PET using 15O-water. However, scatter correction is essential for accurate PTF.
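
    The single tissue-compartment model used for such MBF/PTF fits can be sketched as below. The rate-constant parameterization (impulse response PTF * MBF * exp(-(MBF/p) t)) is a common convention for 15O-water and is an assumption here, not necessarily the exact operational equation of the paper; the arterial input function and parameter values are invented for illustration.

```python
import numpy as np

def tissue_curve(t, ca, mbf, ptf, p=0.91):
    """Single tissue-compartment model for 15O-water, evaluated by discrete convolution
    of the arterial input ca(t) with the impulse response PTF*MBF*exp(-(MBF/p)*t)."""
    dt = t[1] - t[0]
    irf = ptf * mbf * np.exp(-(mbf / p) * t)
    return np.convolve(ca, irf)[: len(t)] * dt

# Hypothetical arterial input function and parameters, for illustration only.
t = np.arange(0.0, 4.0, 1.0 / 60.0)                 # 4 min sampled at 1 s
ca = 100.0 * (t / 0.5) * np.exp(1.0 - t / 0.5)      # gamma-variate-like bolus
c_meas = tissue_curve(t, ca, mbf=0.9, ptf=0.7)      # synthetic "measured" tissue curve

# A coarse grid search stands in for the nonlinear least-squares fit used in practice.
grid = [(m, a) for m in np.arange(0.3, 1.6, 0.05) for a in np.arange(0.3, 1.0, 0.05)]
best = min(grid, key=lambda pa: np.sum((tissue_curve(t, ca, *pa) - c_meas) ** 2))
print("recovered (MBF, PTF):", best)
```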

  12. TH-C-12A-08: New Compact 10 MV S-Band Linear Accelerator: 3D Finite-Element Design and Monte Carlo Dose Simulations

    Purpose: To design a new compact S-band linac waveguide capable of producing a 10 MV x-ray beam, while maintaining the length (27.5 cm) of current 6 MV waveguides. This will allow higher x-ray energies to be used in our linac-MRI systems with the same footprint. Methods: The finite element software COMSOL Multiphysics was used to design an accelerator cavity matching one published in an experimental breakdown study, to ensure that our modeled cavities do not exceed the published threshold electric fields. This cavity was used as the basis for designing an accelerator waveguide, where each cavity of the full waveguide was tuned to resonate at 2.997 GHz by adjusting the cavity diameter. The RF field solution within the waveguide was calculated and, together with an electron-gun phase space generated using Opera3D/SCALA, was input into the electron tracking software PARMELA to compute the electron phase space striking the x-ray target. This target phase space was then used in BEAM Monte Carlo simulations to generate percent depth dose curves for this new linac, which were then used to re-optimize the waveguide geometry. Results: The shunt impedance, Q-factor, and peak-to-mean electric field ratio were matched to those published for the breakdown study to within 0.1% error. After tuning the full waveguide, the peak surface fields are calculated to be 207 MV/m, 13% below the breakdown threshold. The simulated beam has a d-max depth of 2.42 cm and a D10/20 value of 1.59, compared to 2.45 cm and 1.59, respectively, for a simulated Varian 10 MV linac, with a bremsstrahlung production efficiency 20% lower than the simulated Varian 10 MV linac. Conclusion: This work demonstrates the design of a functional 27.5 cm waveguide producing 10 MV photons with characteristics similar to a Varian 10 MV linac.

  13. Monte Carlo transition probabilities

    Lucy, L. B.

    2001-01-01

    Transition probabilities governing the interaction of energy packets and matter are derived that allow Monte Carlo NLTE transfer codes to be constructed without simplifying the treatment of line formation. These probabilities are such that the Monte Carlo calculation asymptotically recovers the local emissivity of a gas in statistical equilibrium. Numerical experiments with one-point statistical equilibrium problems for Fe II and Hydrogen confirm this asymptotic behaviour. In addition, the re...

  14. Source convergence diagnostics using Boltzmann entropy criterion application to different OECD/NEA criticality benchmarks with the 3-D Monte Carlo code Tripoli-4

    The measurement of the stationarity of Monte Carlo fission source distributions in keff calculations plays a central role in the ability to discriminate between false and 'true' convergence (in the case of a high dominance ratio or of loosely coupled systems). Recent theoretical developments have been made in the study of source convergence diagnostics using Shannon entropy. We first recall those results, and then generalize them using the expression of the Boltzmann entropy, highlighting the gain in terms of the variety of physical problems that can be treated. Finally, we present the results of several OECD/NEA benchmarks using the Tripoli-4 Monte Carlo code, enhanced with this new criterion. (authors)
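
    The Shannon-entropy diagnostic that the Boltzmann-entropy criterion generalizes is easy to state: bin the fission-source sites on a spatial mesh each cycle and track H = -sum_i p_i log2 p_i. A minimal sketch follows, using a synthetic drifting source rather than Tripoli-4 output.

```python
import numpy as np

def shannon_entropy(sites, bins=(8, 8, 8), box=((0, 1), (0, 1), (0, 1))):
    """Shannon entropy H = -sum_i p_i log2 p_i of fission-source sites binned on a mesh."""
    hist, _ = np.histogramdd(sites, bins=bins, range=box)
    p = hist.ravel() / hist.sum()
    p = p[p > 0.0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(3)
# Synthetic "cycles": the source spreads from one corner toward the whole box.
for cycle in range(0, 50, 10):
    spread = min(0.2 + 0.02 * cycle, 1.0)
    sites = rng.uniform(0.0, spread, size=(10000, 3))
    print(f"cycle {cycle:3d}: H = {shannon_entropy(sites):.3f}")
# H levelling off (to within statistical noise) is the usual indication that the
# source distribution has converged and that active cycles may begin.
```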

  15. Fundamentals of Monte Carlo

    Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a powerpoint which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of Monte Carlo. Welcome to Los Alamos, the birthplace of “Monte Carlo” for computational physics. Stanislaw Ulam, John von Neumann, and Nicholas Metropolis are credited as the founders of modern Monte Carlo methods. The name “Monte Carlo” was chosen in reference to the Monte Carlo Casino in Monaco (purportedly a place where Ulam’s uncle went to gamble). The central idea (for us) – to use computer-generated “random” numbers to determine expected values or estimate equation solutions – has since spread to many fields. "The first thoughts and attempts I made to practice [the Monte Carlo Method] were suggested by a question which occurred to me in 1946 as I was convalescing from an illness and playing solitaires. The question was what are the chances that a Canfield solitaire laid out with 52 cards will come out successfully? After spending a lot of time trying to estimate them by pure combinatorial calculations, I wondered whether a more practical method than “abstract thinking” might not be to lay it out say one hundred times and simply observe and count the number of successful plays... Later [in 1946], I described the idea to John von Neumann, and we began to plan actual calculations." - Stanislaw Ulam.

  16. Benchmarking of the 3-D CAD-based Discrete Ordinates code “ATTILA” for dose rate calculations against experiments and Monte Carlo calculations

    Shutdown dose rate (SDDR) assessment inside and around the diagnostics ports of ITER is performed at PPPL/UCLA using the 3-D, FEM, discrete ordinates code ATTILA, along with its updated FORNAX transmutation/decay gamma library. Other ITER partners assess SDDR using codes based on the Monte Carlo (MC) approach (e.g. the MCNP code) for the transport calculation and the radioactivity inventory code FISPACT or other equivalent decay data libraries for the dose rate assessment. To reveal the range of discrepancies in the results obtained by various analysts, an extensive experimental and calculational benchmarking effort has been undertaken to validate the capability of ATTILA for dose rate assessment. On the experimental validation front, the comparison was performed using the measured data from two SDDR experiments performed at the FNG facility, Italy. Comparison was made to the experimental data and to MC results obtained by other analysts. On the calculational validation front, ATTILA's predictions were compared to other results at key locations inside a calculation benchmark whose configuration duplicates an upper diagnostics port plug (UPP) in ITER. Both the serial and parallel versions of ATTILA-7.1.0 are used in the PPPL/UCLA analysis performed with the FENDL-2.1/FORNAX databases. In the first FNG experiment, it was shown that ATTILA's dose rates are largely overestimated (by ∼30–60%) with the ANSI/ANS-6.1.1 flux-to-dose factors, whereas the ICRP-74 factors give better agreement (10–20%) with the experimental data and with the MC results at all cooling times. In the second experiment, there is an underestimation in the SDDR calculated by both MCNP and ATTILA based on ANSI/ANS-6.1.1 for cooling times up to ∼4 days after irradiation. Thereafter, an overestimation is observed (∼5–10% with MCNP and ∼10–15% with ATTILA). As for the calculation benchmark, the agreement is much better based on the ICRP-74 1996 data. The divergence among all dose rate results at ∼11 days cooling time is no

  17. Algorithmic choices in WARP – A framework for continuous energy Monte Carlo neutron transport in general 3D geometries on GPUs

    Highlights: • WARP, a GPU-accelerated Monte Carlo neutron transport code, has been developed. • The NVIDIA OptiX high-performance ray tracing library is used to process geometric data. • The unionized cross section representation is modified for higher performance. • Reference remapping is used to keep the GPU busy as the neutron batch population reduces. • Reference remapping is done using a key-value radix sort on neutron reaction type. - Abstract: In recent supercomputers, general purpose graphics processing units (GPGPUs) are a significant fraction of the supercomputer's total computational power. GPGPUs have different architectures compared to central processing units (CPUs), and for the Monte Carlo neutron transport codes used in nuclear engineering to take advantage of these coprocessor cards, transport algorithms must be changed to execute efficiently on them. WARP is a continuous energy Monte Carlo neutron transport code that has been written to do this. The main thrust of WARP is to adapt previous event-based transport algorithms to the new GPU hardware; the algorithmic choices for all parts of the code are presented in this paper. It is found that remapping history data references increases the GPU processing rate when histories start to complete. The main reason for this is that completed data are eliminated from the address space, threads are kept busy, and memory bandwidth is not wasted on checking completed data. Remapping also allows the interaction kernels to be launched concurrently, improving efficiency. The OptiX ray tracing framework and the CUDPP library are used for geometry representation and parallel dataset-side operations, ensuring high performance and reliability.

  18. Monte Carlo photon benchmark problems

    Photon benchmark calculations have been performed to validate the MCNP Monte Carlo computer code. These are compared to both the COG Monte Carlo computer code and either experimental or analytic results. The calculated solutions indicate that the Monte Carlo method, and MCNP and COG in particular, can accurately model a wide range of physical problems. 8 refs., 5 figs

  19. SimulRad: a Java interface for a Monte-Carlo simulation code to visualize in 3D the early stages of water radiolysis

    Using a Fortran step-by-step Monte-Carlo simulation code of liquid water radiolysis and the Java programming language, we have developed a Java interface software, called SimulRad. This interface enables a user, in a three-dimensional environment, to either visualize the spatial distribution of all reactive species present in the track of an ionizing particle at a chosen simulation time, or present an animation of the chemical development of the particle track over a chosen time interval (between ∼10^-12 and 10^-6 s). It also allows one to select a particular radiation-induced cluster of species to view, in fine detail, the chemical reactions that occur between these species.

  20. Fundamentals of Monte Carlo

    Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a powerpoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
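
    The classic introductory example from the outline above, estimating π by sampling points in the unit square, fits in a few lines:

```python
import random

def estimate_pi(n, seed=0):
    """Estimate pi as 4 * (fraction of uniform random points in the unit square
    that fall inside the quarter circle of radius 1)."""
    rng = random.Random(seed)
    inside = sum(1 for _ in range(n) if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / n

for n in (10**3, 10**5, 10**7):
    print(n, estimate_pi(n))   # the statistical error shrinks roughly like 1/sqrt(n)
```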

  1. Contributon Monte Carlo

    The contributon Monte Carlo method is based on a new recipe to calculate target responses by means of volume integral of the contributon current in a region between the source and the detector. A comprehensive description of the method, its implementation in the general-purpose MCNP code, and results of the method for realistic nonhomogeneous, energy-dependent problems are presented. 23 figures, 10 tables

  2. Optimization of Monte Carlo simulations

    Bryskhe, Henrik

    2009-01-01

    This thesis considers several different techniques for optimizing Monte Carlo simulations. The Monte Carlo system used is Penelope but most of the techniques are applicable to other systems. The two major techniques are the usage of the graphics card to do geometry calculations, and raytracing. Using the graphics card provides a very efficient way to do fast ray-triangle intersections. Raytracing provides an approximation of Monte Carlo simulation but is much faster to perform. A program was ...

  3. Quantum Gibbs ensemble Monte Carlo

    We present a path integral Monte Carlo method which is the full quantum analogue of the Gibbs ensemble Monte Carlo method of Panagiotopoulos to study the gas-liquid coexistence line of a classical fluid. Unlike previous extensions of Gibbs ensemble Monte Carlo to include quantum effects, our scheme is viable even for systems with strong quantum delocalization in the degenerate regime of temperature. This is demonstrated by an illustrative application to the gas-superfluid transition of 4He in two dimensions

  4. Monte Carlo techniques

    The course ''Monte Carlo Techniques'' will try to give a general overview of how to build up a method based on a given theory, allowing you to compare the outcome of an experiment with that theory. Concepts related to the construction of the method, such as random variables, distributions of random variables, generation of random variables, and random-based numerical methods, will be introduced in this course. Examples of some of the current theories in High Energy Physics describing the e+e- annihilation processes (QED, Electro-Weak, QCD) will also be briefly introduced. A second step in the employment of this method is related to the detector. The interactions that a particle could have along its way through the detector, as well as the response of the different materials which compose the detector, will be covered in this course. An example of a detector of the LEP era, in which these techniques are being applied, will close the course. (orig.)

  5. Monte Carlo Methods in Physics

    The method of Monte Carlo integration is reviewed briefly and some of its applications in physics are explained. A numerical experiment on the random generators used in Monte Carlo techniques is carried out to show the behavior of the randomness of the various methods used to generate them. To account for the weight function involved in the Monte Carlo, the Metropolis method is used. From the results of the experiment, one can see that there are no regular patterns in the numbers generated, showing that the generators are reasonably good, while the experimental results show a distribution obeying the expected statistical law. Further, some applications of the Monte Carlo methods in physics are given. The physical problems are chosen such that the models have available solutions, either exact or approximate, with which the Monte Carlo calculations can be compared. The comparisons show that, for the models considered, good agreement has been obtained.
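
    The Metropolis weighting mentioned above can be illustrated by a minimal random-walk Metropolis sampler for a standard normal target; the target density, step size, and sample count are illustrative choices, not those of the cited experiment.

        import random, math

        random.seed(2)

        def log_weight(x):
            # Unnormalized target density: exp(-x^2/2) (standard normal up to a constant).
            return -0.5 * x * x

        x, step, samples = 0.0, 1.0, []
        for _ in range(50_000):
            proposal = x + random.uniform(-step, step)          # symmetric proposal
            if math.log(random.random()) < log_weight(proposal) - log_weight(x):
                x = proposal                                    # accept
            samples.append(x)                                   # (a reject keeps the old x)

        mean = sum(samples) / len(samples)
        var = sum((s - mean) ** 2 for s in samples) / len(samples)
        print(f"mean ~ 0: {mean:.3f}, variance ~ 1: {var:.3f}")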

  6. Parallelizing Monte Carlo with PMC

    Rathkopf, J.A.; Jones, T.R.; Nessett, D.M.; Stanberry, L.C.

    1994-11-01

    PMC (Parallel Monte Carlo) is a system of generic interface routines that allows easy porting of Monte Carlo packages of large-scale physics simulation codes to Massively Parallel Processor (MPP) computers. By loading various versions of PMC, simulation code developers can configure their codes to run in several modes: serial, Monte Carlo runs on the same processor as the rest of the code; parallel, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on other MPP processor(s); distributed, Monte Carlo runs in parallel across many processors of the MPP with the rest of the code running on a different machine. This multi-mode approach allows maintenance of a single simulation code source regardless of the target machine. PMC handles passing of messages between nodes on the MPP, passing of messages between a different machine and the MPP, distributing work between nodes, and providing independent, reproducible sequences of random numbers. Several production codes have been parallelized under the PMC system. Excellent parallel efficiency in both the distributed and parallel modes results if sufficient workload is available per processor. Experiences with a Monte Carlo photonics demonstration code and a Monte Carlo neutronics package are described.

  7. Successful vectorization - reactor physics Monte Carlo code

    Most particle transport Monte Carlo codes in use today are based on the ''history-based'' algorithm, wherein one particle history at a time is simulated. Unfortunately, the ''history-based'' approach (present in all Monte Carlo codes until recent years) is inherently scalar and cannot be vectorized. In particular, the history-based algorithm cannot take advantage of vector architectures, which characterize the largest and fastest computers at the current time, vector supercomputers such as the Cray X/MP or IBM 3090/600. However, substantial progress has been made in recent years in developing and implementing a vectorized Monte Carlo algorithm. This algorithm follows portions of many particle histories at the same time and forms the basis for all successful vectorized Monte Carlo codes that are in use today. This paper describes the basic vectorized algorithm along with descriptions of several variations that have been developed by different researchers for specific applications. These applications have been mainly in the areas of neutron transport in nuclear reactor and shielding analysis and photon transport in fusion plasmas. The relative merits of the various approach schemes will be discussed and the present status of known vectorization efforts will be summarized along with available timing results, including results from the successful vectorization of 3-D general geometry, continuous energy Monte Carlo. (orig.)
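
    The difference between the history-based loop and the vectorized (event-based) formulation can be sketched for a toy 1-D absorber; numpy array operations stand in for vector hardware instructions, and the physics (a single purely absorbing slab with exponential free paths) is deliberately trivial and not taken from any of the codes discussed above.

        import numpy as np

        rng = np.random.default_rng(3)
        sigma_t, thickness, n = 1.0, 2.0, 100_000

        # History-based: one particle at a time (inherently scalar).
        def history_based():
            transmitted = 0
            for _ in range(n):
                if rng.exponential(1.0 / sigma_t) > thickness:
                    transmitted += 1
            return transmitted / n

        # Event-based / vectorized: process the whole bank of histories at once.
        def event_based():
            free_paths = rng.exponential(1.0 / sigma_t, size=n)   # one "event" for all histories
            return np.count_nonzero(free_paths > thickness) / n

        print("analytic  :", np.exp(-sigma_t * thickness))
        print("history   :", history_based())
        print("event/vec :", event_based())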

  8. 3D coupling of Monte Carlo neutronics and thermal-hydraulics/thermic calculations as a simulation tool for innovative reactor concepts

    Simulations of new reactor designs, such as generation IV concepts, require three dimensional modeling to ensure a sufficiently realistic description for safety analysis. If precise solutions of local physical phenomena (DNBR, cross flow, form factors,...) are to be found then the use of accurate 3D coupled neutronics/thermal-hydraulics codes becomes essential. Moreover, to describe this coupled field with a high level of accuracy requires successive iterations between neutronics and thermal-hydraulics at equilibrium until convergence (power deposits and temperatures must be finely discretized, ex: pin by pin and axial discretization). In this paper we present the development and simulation results of such coupling capabilities using our code MURE (MCNP Utility for Reactor Evolution), a precision code written in C++ which automates the preparation and computation of successive MCNP calculations either for precision burnup and/or thermal-hydraulics/thermic purposes. For the thermal-hydraulics part, the code COBRA is used. It is a sub-channel code that allows steady-state and transient analysis of reactor cores. The goal is a generic, non system-specific code, for both burn-up calculations and safety analysis at any point in the fuel cycle: the eventual trajectory of an accident scenario will be sensitive to the initial distribution of fissile material and neutron poisons in the reactor (axial and radial heterogeneity). The MURE code is open-source, portable and manages all the neutronics and the thermal-hydraulics/thermic calculations in background: control is provided by the MURE interface or the user can interact directly with the codes if desired. MURE automatically builds input files and other necessary data, launches the codes and manages the communication between them. Consequently accurate 3D simulations of power plants on both global and pin level of detail with thermal feedback can be easily performed (radial and axial meshing grids are managed by MURE). A
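
    The successive neutronics/thermal-hydraulics iterations described above amount to a fixed-point (Picard) loop. The sketch below is a generic stand-in with made-up linear feedback functions; it is not the MURE/MCNP/COBRA interface, and the coefficients are arbitrary.

        # Generic Picard iteration between a "neutronics" solve and a
        # "thermal-hydraulics" solve (illustrative feedback models only).
        def neutronics(temperature):
            # Hypothetical: power decreases linearly with fuel temperature (Doppler-like feedback).
            return 100.0 * (1.0 - 2.0e-4 * (temperature - 600.0))

        def thermal_hydraulics(power):
            # Hypothetical: fuel temperature rises linearly with power.
            return 300.0 + 4.0 * power

        temperature, tol = 600.0, 1.0e-6
        for iteration in range(1, 100):
            power = neutronics(temperature)              # e.g. an MCNP run driven by MURE
            new_temperature = thermal_hydraulics(power)  # e.g. a COBRA run
            converged = abs(new_temperature - temperature) < tol
            temperature = new_temperature
            if converged:                                # power/temperature fields at equilibrium
                break

        print(f"converged after {iteration} iterations: P = {power:.3f}, T = {temperature:.3f}")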

  9. Use of Serpent Monte-Carlo code for development of 3D full-core models of Gen-IV fast spectrum reactors and preparation of safety parameters/cross-section data for transient analysis with FAST code system

    Current work presents a new methodology which uses Serpent Monte-Carlo (MC) code for generating multi-group beginning-of-life (BOL) cross section (XS) database file that is compatible with PARCS 3D reactor core simulator and allows simulation of transients with the FAST code system. The applicability of the methodology was tested on European Sodium-cooled Fast Reactor (ESFR) design with an oxide fuel proposed by CEA (France). The k-effective, power peaking factors and safety parameters (such as Doppler constant, coolant density coefficient, fuel axial expansion coefficient, diagrid expansion coefficients and control rod worth) calculated by PARCS/TRACE were compared with the results of the Serpent MC code. The comparison indicates overall reasonable agreement between conceptually different (deterministic and stochastic) codes. The new development makes it in principle possible to use the Serpent MC code for cross section generation for the PARCS code to perform transient analyses for fast reactors. The advantages and limitations of this methodology are discussed in the paper. (author)

  10. Proton Upset Monte Carlo Simulation

    O'Neill, Patrick M.; Kouba, Coy K.; Foster, Charles C.

    2009-01-01

    The Proton Upset Monte Carlo Simulation (PROPSET) program calculates the frequency of on-orbit upsets in computer chips (for given orbits such as Low Earth Orbit, Lunar Orbit, and the like) from proton bombardment based on the results of heavy ion testing alone. The software simulates the bombardment of modern microelectronic components (computer chips) with high-energy (>200 MeV) protons. The nuclear interaction of the proton with the silicon of the chip is modeled and nuclear fragments from this interaction are tracked using Monte Carlo techniques to produce statistically accurate predictions.

  11. Monte Carlo simulations in medical technology- II. Application of Monte Carlo procedure to medical technology

    Methods for the Monte Carlo procedure in radiation measurement by SPECT (single photon emission computed tomography) and 3-D PET (3-dimensional positron emission tomography) are described, together with their application to the development and optimization of the scattering correction method in 201Tl-SPECT. In medical technology, Monte Carlo simulation makes it possible to quantify the behavior of a photon, such as scattering and absorption, which can be performed by use of the EGS4 simulation code consisting of steps A to E. With this method, the data collection procedures of diagnostic equipment for nuclear medicine and the application to the development of a transmission radiation source for SPECT are described. The precision of the scattering correction method in SPECT is also evaluated by Monte Carlo simulation. The simulation is a useful tool for evaluating the behavior of radiation in the human body, which cannot be actually measured. (K.H.)

  12. Monte Carlo Particle Lists: MCPL

    Kittelmann, Thomas; Knudsen, Erik B; Willendrup, Peter; Cai, Xiao Xiao; Kanaki, Kalliopi

    2016-01-01

    A binary format with lists of particle state information, for interchanging particles between various Monte Carlo simulation applications, is presented. Portable C code for file manipulation is made available to the scientific community, along with converters and plugins for several popular simulation packages.
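
    The idea of a flat binary list of particle states can be sketched with Python's struct module. The record layout below (PDG code, position, direction, kinetic energy, time, weight) and the toy header are simplified illustrations only; they are not the actual MCPL binary format, which is defined by the MCPL specification and its C API.

        import struct

        # Hypothetical fixed-size record: PDG code + x,y,z + ux,uy,uz + E + t + weight.
        RECORD = struct.Struct("<i9d")   # little-endian: 1 int, 9 doubles

        particles = [
            # (pdg,  x,   y,   z,   ux,  uy,  uz,  ekin_MeV, time_ms, weight)
            (2112, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 2.0,      0.0,     1.0),   # neutron
            (22,   1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.662,    0.0,     0.5),   # photon
        ]

        with open("particles.bin", "wb") as f:
            f.write(b"TOYv1")                      # toy header, not an MCPL header
            f.write(struct.pack("<Q", len(particles)))
            for p in particles:
                f.write(RECORD.pack(*p))

        with open("particles.bin", "rb") as f:
            assert f.read(5) == b"TOYv1"
            (count,) = struct.unpack("<Q", f.read(8))
            for _ in range(count):
                pdg, *state = RECORD.unpack(f.read(RECORD.size))
                print(pdg, state)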

  13. Monte Carlo dose mapping on deforming anatomy

    Zhong, Hualiang; Siebers, Jeffrey V.

    2009-10-01

    This paper proposes a Monte Carlo-based energy and mass congruent mapping (EMCM) method to calculate the dose on deforming anatomy. Different from dose interpolation methods, EMCM separately maps each voxel's deposited energy and mass from a source image to a reference image with a displacement vector field (DVF) generated by deformable image registration (DIR). EMCM was compared with other dose mapping methods: energy-based dose interpolation (EBDI) and trilinear dose interpolation (TDI). These methods were implemented in EGSnrc/DOSXYZnrc, validated using a numerical deformable phantom and compared for clinical CT images. On the numerical phantom with an analytically invertible deformation map, EMCM mapped the dose exactly the same as its analytic solution, while EBDI and TDI had average dose errors of 2.5% and 6.0%. For a lung patient's IMRT treatment plan, EBDI and TDI differed from EMCM by 1.96% and 7.3% in the lung patient's entire dose region, respectively. As a 4D Monte Carlo dose calculation technique, EMCM is accurate and its speed is comparable to 3D Monte Carlo simulation. This method may serve as a valuable tool for accurate dose accumulation as well as for 4D dosimetry QA.
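
    The core idea of mapping energy and mass separately and only then forming dose can be sketched on a 1-D voxel array. The "displacement field" below is a hand-made accumulation map, not a DVF from deformable image registration, and the numbers are arbitrary.

        import numpy as np

        # Deposited energy (J) and voxel mass (kg) on the source grid.
        energy = np.array([1.0, 2.0, 6.0, 2.0])
        mass   = np.array([1.0, 1.0, 2.0, 1.0])

        # Toy "DVF": source voxel i maps to reference voxel target[i]
        # (two source voxels merge into reference voxel 1).
        target = np.array([0, 1, 1, 2])

        n_ref = 3
        energy_ref = np.zeros(n_ref)
        mass_ref   = np.zeros(n_ref)
        np.add.at(energy_ref, target, energy)   # congruent mapping: accumulate energy ...
        np.add.at(mass_ref,   target, mass)     # ... and mass separately

        dose_ref = energy_ref / mass_ref        # dose = mapped energy / mapped mass

        # Contrast with naive dose interpolation, which averages the source doses
        # instead of conserving energy and mass.
        dose_src = energy / mass
        naive = np.array([dose_src[0], 0.5 * (dose_src[1] + dose_src[2]), dose_src[3]])
        print("EMCM-style dose :", dose_ref)
        print("interpolated    :", naive)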

  14. Shell model Monte Carlo methods

    We review quantum Monte Carlo methods for dealing with large shell model problems. These methods reduce the imaginary-time many-body evolution operator to a coherent superposition of one-body evolutions in fluctuating one-body fields; the resultant path integral is evaluated stochastically. We first discuss the motivation, formalism, and implementation of such Shell Model Monte Carlo methods. There then follows a sampler of results and insights obtained from a number of applications. These include the ground state and thermal properties of pf-shell nuclei, thermal behavior of γ-soft nuclei, and calculation of double beta-decay matrix elements. Finally, prospects for further progress in such calculations are discussed. 87 refs

  15. Kinematics of multigrid Monte Carlo

    We study the kinematics of multigrid Monte Carlo algorithms by means of acceptance rates for nonlocal Metropolis update proposals. An approximation formula for acceptance rates is derived. We present a comparison of different coarse-to-fine interpolation schemes in free field theory, where the formula is exact. The predictions of the approximation formula for several interacting models are well confirmed by Monte Carlo simulations. The following rule is found: For a critical model with fundamental Hamiltonian H(φ), absence of critical slowing down can only be expected if the expansion of ⟨H(φ+ψ)⟩ in terms of the shift ψ contains no relevant (mass) term. We also introduce a multigrid update procedure for nonabelian lattice gauge theory and study the acceptance rates for gauge group SU(2) in four dimensions. (orig.)

  16. Asynchronous Anytime Sequential Monte Carlo

    Paige, Brooks; Wood, Frank; Doucet, Arnaud; Teh, Yee Whye

    2014-01-01

    We introduce a new sequential Monte Carlo algorithm we call the particle cascade. The particle cascade is an asynchronous, anytime alternative to traditional particle filtering algorithms. It uses no barrier synchronizations which leads to improved particle throughput and memory efficiency. It is an anytime algorithm in the sense that it can be run forever to emit an unbounded number of particles while keeping within a fixed memory budget. We prove that the particle cascade is an unbiased mar...

  17. Neural Adaptive Sequential Monte Carlo

    Gu, Shixiang; Ghahramani, Zoubin; Turner, Richard E

    2015-01-01

    Sequential Monte Carlo (SMC), or particle filtering, is a popular class of methods for sampling from an intractable target distribution using a sequence of simpler intermediate distributions. Like other importance sampling-based methods, performance is critically dependent on the proposal distribution: a bad proposal can lead to arbitrarily inaccurate estimates of the target distribution. This paper presents a new method for automatically adapting the proposal using an approximation of the Ku...

  18. Adaptive Multilevel Monte Carlo Simulation

    Hoel, H

    2011-08-23

    This work generalizes a multilevel forward Euler Monte Carlo method introduced in Michael B. Giles (Michael Giles. Oper. Res. 56(3):607–617, 2008) for the approximation of expected values depending on the solution to an Itô stochastic differential equation. The work (Michael Giles. Oper. Res. 56(3):607–617, 2008) proposed and analyzed a forward Euler multilevel Monte Carlo method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a standard, single level, forward Euler Monte Carlo method. This work introduces an adaptive hierarchy of non uniform time discretizations, generated by an adaptive algorithm introduced in (Anna Dzougoutov et al., Raúl Tempone. Adaptive Monte Carlo algorithms for stopped diffusion. In Multiscale methods in science and engineering, volume 44 of Lect. Notes Comput. Sci. Eng., pages 59–88. Springer, Berlin, 2005; Kyoung-Sook Moon et al. Stoch. Anal. Appl. 23(3):511–558, 2005; Kyoung-Sook Moon et al. An adaptive algorithm for ordinary, stochastic and partial differential equations. In Recent advances in adaptive computation, volume 383 of Contemp. Math., pages 325–343. Amer. Math. Soc., Providence, RI, 2005). This form of the adaptive algorithm generates stochastic, path dependent, time steps and is based on a posteriori error expansions first developed in (Anders Szepessy et al. Comm. Pure Appl. Math. 54(10):1169–1214, 2001). Our numerical results for a stopped diffusion problem exhibit savings in the computational cost to achieve an accuracy of O(TOL), from O(TOL^-3) using a single level version of the adaptive algorithm to O((TOL^-1 log(TOL))^2).
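
    To make the multilevel idea concrete, here is a minimal, non-adaptive multilevel Monte Carlo sketch for E[X_T] of geometric Brownian motion with forward Euler on uniform time grids, in the spirit of the Giles estimator; the adaptive, path-dependent time stepping that is the subject of the work above is not reproduced, and the parameters and sample sizes are arbitrary.

        import numpy as np

        rng = np.random.default_rng(4)
        mu, sigma, T, x0 = 0.05, 0.2, 1.0, 1.0   # GBM parameters (illustrative)

        def euler_pair(n_samples, level):
            """Forward Euler on the fine grid of this level and, coupled through the
            same Brownian increments, on the coarse grid of the previous level."""
            n_fine = 2 ** level
            dt = T / n_fine
            dW = rng.normal(0.0, np.sqrt(dt), size=(n_samples, n_fine))
            xf = np.full(n_samples, x0)
            for k in range(n_fine):
                xf = xf + mu * xf * dt + sigma * xf * dW[:, k]
            if level == 0:
                return xf, np.zeros(n_samples)
            xc = np.full(n_samples, x0)
            for k in range(n_fine // 2):
                dWc = dW[:, 2 * k] + dW[:, 2 * k + 1]         # coarse Brownian increment
                xc = xc + mu * xc * (2 * dt) + sigma * xc * dWc
            return xf, xc

        # MLMC telescoping estimator: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]
        levels, samples_per_level = 5, [40_000, 20_000, 10_000, 5_000, 2_500, 1_250]
        estimate = 0.0
        for level in range(levels + 1):
            fine, coarse = euler_pair(samples_per_level[level], level)
            estimate += np.mean(fine - coarse)

        print("MLMC estimate of E[X_T] :", estimate)
        print("exact value x0*exp(mu*T):", x0 * np.exp(mu * T))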

  19. Parallel Monte Carlo reactor neutronics

    The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved

  20. Monomial Gamma Monte Carlo Sampling

    Zhang, Yizhe; Wang, Xiangyu; Chen, Changyou; Fan, Kai; Carin, Lawrence

    2016-01-01

    We unify slice sampling and Hamiltonian Monte Carlo (HMC) sampling by demonstrating their connection under the canonical transformation from Hamiltonian mechanics. This insight enables us to extend HMC and slice sampling to a broader family of samplers, called monomial Gamma samplers (MGS). We analyze theoretically the mixing performance of such samplers by proving that the MGS draws samples from a target distribution with zero-autocorrelation, in the limit of a single parameter. This propert...

  1. Non statistical Monte-Carlo

    We have shown that the transport equation can be solved with particles, like the Monte-Carlo method, but without random numbers. In the Monte-Carlo method, particles are created from the source and are followed from collision to collision until either they are absorbed or they leave the spatial domain. In our method, particles are created from the original source with a variable weight taking into account both collision and absorption. These particles are followed until they leave the spatial domain, and we use them to determine a first collision source. Another set of particles is then created from this first collision source and tracked to determine a second collision source, and so on. This process introduces an approximation which does not exist in the Monte-Carlo method. However, we have analyzed the effect of this approximation and shown that it can be limited. Our method is deterministic and gives reproducible results. Furthermore, when extra accuracy is needed in some region, it is easier to get more particles to go there. Its applications are of the same kind: it is better suited to problems where streaming is dominant than to collision-dominated problems.

  2. Extending canonical Monte Carlo methods

    Velazquez, L.; Curilef, S.

    2010-02-01

    In this paper, we discuss the implications of a recently obtained equilibrium fluctuation-dissipation relation for the extension of the available Monte Carlo methods on the basis of the consideration of the Gibbs canonical ensemble to account for the existence of an anomalous regime with negative heat capacities C < 0. The resulting framework appears to be a suitable generalization of the methodology associated with the so-called dynamical ensemble, which is applied to the extension of two well-known Monte Carlo methods: the Metropolis importance sampling and the Swendsen-Wang cluster algorithm. These Monte Carlo algorithms are employed to study the anomalous thermodynamic behavior of the Potts models with many spin states q defined on a d-dimensional hypercubic lattice with periodic boundary conditions, which successfully reduce the exponential divergence of the decorrelation time τ with increase of the system size N to a weak power-law divergence τ ∝ N^α with α ≈ 0.2 for the particular case of the 2D ten-state Potts model.

  3. Investigations on Monte Carlo based coupled core calculations

    The present trend in advanced and next generation nuclear reactor core designs is towards increased material heterogeneity and geometry complexity. The continuous energy Monte Carlo method has the capability of modeling such core environments with high accuracy. This paper presents results from feasibility studies being performed at the Pennsylvania State University (PSU) on both accelerating Monte Carlo criticality calculations by using hybrid nodal diffusion Monte Carlo schemes and thermal-hydraulic feedback modeling in Monte Carlo core calculations. The computation process is greatly accelerated by calculating the three-dimensional (3D) distributions of fission source and thermal-hydraulics parameters with the coupled NEM/COBRA-TF code and then using the coupled MCNP5/COBRA-TF code to fine tune the results to obtain an increased accuracy. The PSU NEM code employs cross-sections generated by MCNP5 for pin-cell based nodal compositions. The implementation of the different code modifications facilitating coupled calculations is presented first. Then the coupled hybrid Monte Carlo based code system is applied to a 3D 2×2 pin array extracted from a Boiling Water Reactor (BWR) assembly with reflective radial boundary conditions. The obtained results are discussed and it is shown that performing Monte Carlo based coupled core steady-state calculations is feasible. (authors)

  4. Forward physics Monte Carlo (FPMC)

    Boonekamp, M.; Juránek, Vojtěch; Kepka, Oldřich; Royon, C.

    Hamburg : Verlag Deutsches Elektronen-Synchrotron, 2009 - (Jung, H.; De Roeck, A.), s. 758-762 ISBN N. [HERA and the LHC workshop series on the implications of HERA for LHC physics. Geneve (CH), 26.05.2008-30.05.2008] R&D Projects: GA MŠk LC527; GA MŠk LA08032 Institutional research plan: CEZ:AV0Z10100502 Keywords : forward physics * diffraction * two-photon * Monte Carlo Subject RIV: BF - Elementary Particles and High Energy Physics http://arxiv.org/PS_cache/arxiv/pdf/0903/0903.3861v2.pdf

  5. MontePython: Implementing Quantum Monte Carlo using Python

    J.K. Nilsen

    2006-01-01

    We present a cross-language C++/Python program for simulations of quantum mechanical systems with the use of Quantum Monte Carlo (QMC) methods. We describe a system for which to apply QMC, the algorithms of variational Monte Carlo and diffusion Monte Carlo, and we describe how to implement these methods in pure C++ and C++/Python. Furthermore we check the efficiency of the implementations in serial and parallel cases to show that the overhead of using Python can be negligible.
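
    A minimal variational Monte Carlo example in pure Python: the 1-D harmonic oscillator with trial wavefunction ψ_α(x) = exp(-αx²), whose local energy is E_L(x) = α + x²(1/2 - 2α²). This is a generic textbook sketch and not the MontePython implementation; step size and sample counts are arbitrary.

        import random, math

        random.seed(5)

        def local_energy(x, alpha):
            # H = -1/2 d^2/dx^2 + 1/2 x^2, psi = exp(-alpha x^2)
            return alpha + x * x * (0.5 - 2.0 * alpha * alpha)

        def vmc_energy(alpha, steps=100_000, step_size=1.0):
            x, e_sum, n = 0.0, 0.0, 0
            for i in range(steps):
                trial = x + random.uniform(-step_size, step_size)
                # Metropolis acceptance with |psi|^2 = exp(-2 alpha x^2)
                if random.random() < math.exp(-2.0 * alpha * (trial * trial - x * x)):
                    x = trial
                if i > steps // 10:          # discard equilibration steps
                    e_sum += local_energy(x, alpha)
                    n += 1
            return e_sum / n

        for alpha in (0.3, 0.4, 0.5, 0.6):
            print(f"alpha = {alpha:.1f}  <E> ~ {vmc_energy(alpha):.4f}")
        # The exact ground state corresponds to alpha = 0.5 with E = 0.5.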

  6. Monte Carlo techniques in radiation therapy

    Verhaegen, Frank

    2013-01-01

    Modern cancer treatment relies on Monte Carlo simulations to help radiotherapists and clinical physicists better understand and compute radiation dose from imaging devices as well as exploit four-dimensional imaging data. With Monte Carlo-based treatment planning tools now available from commercial vendors, a complete transition to Monte Carlo-based dose calculation methods in radiotherapy could likely take place in the next decade. Monte Carlo Techniques in Radiation Therapy explores the use of Monte Carlo methods for modeling various features of internal and external radiation sources, including light ion beams. The book-the first of its kind-addresses applications of the Monte Carlo particle transport simulation technique in radiation therapy, mainly focusing on external beam radiotherapy and brachytherapy. It presents the mathematical and technical aspects of the methods in particle transport simulations. The book also discusses the modeling of medical linacs and other irradiation devices; issues specific...

  7. Monte Carlo primer for health physicists

    The basic ideas and principles of Monte Carlo calculations are presented in the form of a primer for health physicists. A simple integral with a known answer is evaluated by two different Monte Carlo approaches. Random numbers, which underlie Monte Carlo work, are discussed, and a sample table of random numbers generated by a hand calculator is presented. Monte Carlo calculations of dose and linear energy transfer (LET) from 100-keV neutrons incident on a tissue slab are discussed. The random-number table is used in a hand calculation of the initial sequence of events for a 100-keV neutron entering the slab. Some pitfalls in Monte Carlo work are described. While this primer addresses mainly the bare bones of Monte Carlo, a final section briefly describes some of the more sophisticated techniques used in practice to reduce variance and computing time.
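
    In the spirit of the primer's opening example, the sketch below evaluates ∫₀¹ x² dx = 1/3 by two different Monte Carlo approaches, the sample-mean method and hit-or-miss sampling; the integrand and sample size are arbitrary illustrative choices.

        import random

        random.seed(6)
        n = 200_000
        f = lambda x: x * x          # known answer: the integral over [0,1] is 1/3

        # Approach 1: sample-mean (crude) Monte Carlo, I ~ (1/n) * sum f(U_i).
        mean_estimate = sum(f(random.random()) for _ in range(n)) / n

        # Approach 2: hit-or-miss, I ~ (fraction of points under the curve) * area of box.
        hits = sum(1 for _ in range(n) if random.random() < f(random.random()))
        hit_or_miss_estimate = hits / n          # bounding box [0,1]x[0,1] has area 1

        print("sample mean :", mean_estimate)
        print("hit-or-miss :", hit_or_miss_estimate)
        print("exact       :", 1.0 / 3.0)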

  8. Interacting Particle Markov Chain Monte Carlo

    Rainforth, Tom; Naesseth, Christian A.; Lindsten, Fredrik; Paige, Brooks; van de Meent, Jan-Willem; Doucet, Arnaud; Wood, Frank

    2016-01-01

    We introduce interacting particle Markov chain Monte Carlo (iPMCMC), a PMCMC method that introduces a coupling between multiple standard and conditional sequential Monte Carlo samplers. Like related methods, iPMCMC is a Markov chain Monte Carlo sampler on an extended space. We present empirical results that show significant improvements in mixing rates relative to both non-interacting PMCMC samplers and a single PMCMC sampler with an equivalent total computational budget. An additional advant...

  9. Mean field simulation for Monte Carlo integration

    Del Moral, Pierre

    2013-01-01

    In the last three decades, there has been a dramatic increase in the use of interacting particle methods as a powerful tool in real-world applications of Monte Carlo simulation in computational physics, population biology, computer sciences, and statistical machine learning. Ideally suited to parallel and distributed computation, these advanced particle algorithms include nonlinear interacting jump diffusions; quantum, diffusion, and resampled Monte Carlo methods; Feynman-Kac particle models; genetic and evolutionary algorithms; sequential Monte Carlo methods; adaptive and interacting Marko

  10. MONTE-4 for Monte Carlo simulations with high performance

    The Monte Carlo machine MONTE-4 has been developed based on the architecture of an existing supercomputer, with a design philosophy to realize high performance in vector-parallel processing of Monte Carlo codes for particle transport problems. The effective performance of this Monte Carlo machine is presented through practical applications of the multi-group criticality safety code KENO-IV and the continuous-energy neutron/photon transport code MCNP. A tenfold speedup has been obtained on MONTE-4 compared with the execution time in scalar processing. (K.A.)

  11. Multidimensional stochastic approximation Monte Carlo.

    Zablotskiy, Sergey V; Ivanov, Victor A; Paul, Wolfgang

    2016-06-01

    Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present g(E_{1},E_{2}). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E_{1}+E_{2}) from g(E_{1},E_{2}). PMID:27415383
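
    The flat-histogram idea behind SAMC can be illustrated with a closely related Wang-Landau-style sketch (not SAMC's gain-factor schedule) that estimates the density of states g(E) of a toy system: E = number of heads among 10 coins, whose exact g(E) is the binomial coefficient. All parameters are arbitrary.

        import math, random

        random.seed(7)
        n_coins = 10
        coins = [0] * n_coins                 # microstate: 0 = tails, 1 = heads
        energy = sum(coins)                   # "energy" = number of heads

        log_g = [0.0] * (n_coins + 1)         # running estimate of ln g(E)
        ln_f = 1.0                            # modification factor (reduced as we go)

        for sweep in range(200_000):
            i = random.randrange(n_coins)     # propose flipping one coin
            new_energy = energy + (1 - 2 * coins[i])
            # Flat-histogram acceptance: favour rarely visited energies.
            if random.random() < math.exp(min(0.0, log_g[energy] - log_g[new_energy])):
                coins[i] ^= 1
                energy = new_energy
            log_g[energy] += ln_f             # update the density-of-states estimate
            if sweep % 20_000 == 19_999:
                ln_f *= 0.5                   # crude stand-in for a flatness check

        # Compare against the exact density of states C(10, E), up to normalization.
        shift = log_g[0]
        for E in range(n_coins + 1):
            exact = math.log(math.comb(n_coins, E))
            print(f"E={E:2d}  ln g (est) = {log_g[E] - shift:6.2f}   ln C(10,E) = {exact:6.2f}")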

  12. Single scatter electron Monte Carlo

    Svatos, M.M. [Lawrence Livermore National Lab., CA (United States)|Wisconsin Univ., Madison, WI (United States)

    1997-03-01

    A single scatter electron Monte Carlo code (SSMC), CREEP, has been written which bridges the gap between existing transport methods and modeling real physical processes. CREEP simulates ionization, elastic and bremsstrahlung events individually. Excitation events are treated with an excitation-only stopping power. The detailed nature of these simulations allows for calculation of backscatter and transmission coefficients, backscattered energy spectra, stopping powers, energy deposits, depth dose, and a variety of other associated quantities. Although computationally intense, the code relies on relatively few mathematical assumptions, unlike other charged particle Monte Carlo methods such as the commonly-used condensed history method. CREEP relies on sampling the Lawrence Livermore Evaluated Electron Data Library (EEDL) which has data for all elements with an atomic number between 1 and 100, over an energy range from approximately several eV (or the binding energy of the material) to 100 GeV. Compounds and mixtures may also be used by combining the appropriate element data via Bragg additivity.

  13. 1-D EQUILIBRIUM DISCRETE DIFFUSION MONTE CARLO

    T. EVANS; ET AL

    2000-08-01

    We present a new hybrid Monte Carlo method for 1-D equilibrium diffusion problems in which the radiation field coexists with matter in local thermodynamic equilibrium. This method, the Equilibrium Discrete Diffusion Monte Carlo (EqDDMC) method, combines Monte Carlo particles with spatially discrete diffusion solutions. We verify the EqDDMC method with computational results from three slab problems. The EqDDMC method represents an incremental step toward applying this hybrid methodology to non-equilibrium diffusion, where it could be simultaneously coupled to Monte Carlo transport.

  14. Monte Carlo Application ToolKit (MCATK)

    Highlights: • Component-based Monte Carlo radiation transport parallel software library. • Designed to build specialized software applications. • Provides new functionality for existing general purpose Monte Carlo transport codes. • Time-independent and time-dependent algorithms with population control. • Algorithm verification and validation results are provided. - Abstract: The Monte Carlo Application ToolKit (MCATK) is a component-based software library designed to build specialized applications and to provide new functionality for existing general purpose Monte Carlo radiation transport codes. We will describe MCATK and its capabilities along with presenting some verification and validations results

  15. Monte Carlo lattice program KIM

    The Monte Carlo program KIM solves the steady-state linear neutron transport equation for a fixed-source problem or, by successive fixed-source runs, for the eigenvalue problem, in a two-dimensional thermal reactor lattice. Fluxes and reaction rates are the main quantities computed by the program, from which power distribution and few-group averaged cross sections are derived. The simulation ranges from 10 MeV to zero and includes anisotropic and inelastic scattering in the fast energy region, the epithermal Doppler broadening of the resonances of some nuclides, and the thermalization phenomenon by taking into account the thermal velocity distribution of some molecules. Besides the well known combinatorial geometry, the program allows complex configurations to be represented by a discrete set of points, an approach greatly improving calculation speed

  16. General Monte Carlo code MONK

    The Monte Carlo code MONK is a general program written to provide a high degree of flexibility to the user. MONK is distinguished by its detailed representation of nuclear data in point form i.e., the cross-section is tabulated at specific energies instead of the more usual group representation. The nuclear data are unadjusted in the point form but recently the code has been modified to accept adjusted group data as used in fast and thermal reactor applications. The various geometrical handling capabilities and importance sampling techniques are described. In addition to the nuclear data aspects, the following features are also described; geometrical handling routines, tracking cycles, neutron source and output facilities. 12 references. (U.S.)

  17. Monte Carlo application tool-kit (MCATK)

    The Monte Carlo Application tool-kit (MCATK) is a C++ component-based software library designed to build specialized applications and to provide new functionality for existing general purpose Monte Carlo radiation transport codes such as MCNP. We will describe MCATK and its capabilities along with presenting some verification and validations results. (authors)

  18. Common misconceptions in Monte Carlo particle transport

    Booth, Thomas E., E-mail: teb@lanl.gov [LANL, XCP-7, MS F663, Los Alamos, NM 87545 (United States)

    2012-07-15

    Monte Carlo particle transport is often introduced primarily as a method to solve linear integral equations such as the Boltzmann transport equation. This paper discusses some common misconceptions about Monte Carlo methods that are often associated with an equation-based focus. Many of the misconceptions apply directly to standard Monte Carlo codes such as MCNP and some are worth noting so that one does not unnecessarily restrict future methods. - Highlights: • Adjoint variety and use from a Monte Carlo perspective. • Misconceptions and preconceived notions about statistical weight. • Reasons that an adjoint based weight window sometimes works well or does not. • Pulse height/probability of initiation tallies and 'the' transport equation. • Highlights unnecessary preconceived notions about Monte Carlo transport.

  19. Modelling cerebral blood oxygenation using Monte Carlo XYZ-PA

    Zam, Azhar; Jacques, Steven L.; Alexandrov, Sergey; Li, Youzhi; Leahy, Martin J.

    2013-02-01

    Continuous monitoring of cerebral blood oxygenation is critically important for the management of many life-threatening conditions. Non-invasive monitoring of cerebral blood oxygenation with a photoacoustic technique offers advantages over current invasive and non-invasive methods. We introduce Monte Carlo XYZ-PA to model the energy deposition in 3D and the time-resolved pressures and velocity potential based on the energy absorbed by the biological tissue. This paper outlines the benefits of using Monte Carlo XYZ-PA for optimization of photoacoustic measurement and imaging. To the best of our knowledge this is the first fully integrated tool for photoacoustic modelling.

  20. SKIRT: the design of a suite of input models for Monte Carlo radiative transfer simulations

    Baes, Maarten

    2015-01-01

    The Monte Carlo method is the most popular technique to perform radiative transfer simulations in a general 3D geometry. The algorithms behind and acceleration techniques for Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. On the contrary, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can...
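
    Generating random positions from a 3-D density, as required by such input models, can be done in the simplest case by rejection sampling against the density's maximum. The Gaussian blob below is an arbitrary stand-in for a SKIRT geometry component, and the box size is an illustrative choice.

        import numpy as np

        rng = np.random.default_rng(8)

        def density(p):
            # Illustrative 3-D density: an isotropic Gaussian blob, rho_max = 1 at the origin.
            return np.exp(-0.5 * np.sum(p * p, axis=-1))

        def sample_positions(n, half_box=4.0, rho_max=1.0):
            """Rejection sampling: draw uniform points in a cube, accept with prob rho/rho_max."""
            out = []
            while len(out) < n:
                candidates = rng.uniform(-half_box, half_box, size=(4 * n, 3))
                accepted = candidates[rng.random(4 * n) < density(candidates) / rho_max]
                out.extend(accepted.tolist())
            return np.array(out[:n])

        positions = sample_positions(10_000)
        print("mean (expect ~0)        :", positions.mean(axis=0))
        print("per-axis std (expect ~1):", positions.std(axis=0))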

  1. Monte Carlo simulations of neoclassical transport in toroidal plasmas

    The FORTEC-3D code, which solves the drift-kinetic equation for torus plasmas and the radial electric field using the δf Monte Carlo method, has been developed to study a variety of issues relating to neoclassical transport phenomena in magnetic confinement plasmas. Here the numerical techniques used in FORTEC-3D are reviewed, and recent progress in the method used to simulate GAM oscillation is also explained. A band-limited white noise term is introduced in the equation for the time evolution of the radial electric field to excite GAM oscillation, which enables us to analyze the GAM frequency using FORTEC-3D even in cases where the collisionless GAM damping is fast. (author)

  2. Overview of the MCU Monte Carlo software package

    Highlights: • MCU is the Monte Carlo code for particle transport in 3D systems with depletion. • Criticality and fixed source problems are solved using pure point-wise approximation. • MCU is parallelized with MPI in three different modes. • MCU has coolant, fuel and xenon feedback for VVER calculations. • MCU is verified for reactors with thermal, intermediate and fast neutron spectrum. - Abstract: MCU (Monte Carlo Universal) is a project on development and practical use of a universal computer code for simulation of particle transport (neutrons, photons, electrons, positrons) in three-dimensional systems by means of the Monte Carlo method. This paper provides the information on the current state of the project. The developed libraries of constants are briefly described, and the potentialities of the MCU-5 package modules and the executable codes compiled from them are characterized. Examples of important problems of reactor physics solved with the code are presented

  3. Use of Monte Carlo Methods in brachytherapy

    The Monte Carlo method has become a fundamental tool for brachytherapy dosimetry, mainly because it avoids the difficulties associated with experimental dosimetry. In brachytherapy, the main handicap of experimental dosimetry is the high dose gradient near the sources, which makes small uncertainties in the positioning of the detectors lead to large uncertainties in the dose. This presentation will mainly review the procedure for calculating dose distributions around a source using the Monte Carlo method, showing the difficulties inherent in these calculations. In addition we will briefly review other applications of the Monte Carlo method in brachytherapy dosimetry, such as its use in advanced calculation algorithms, barrier (shielding) calculations, or obtaining the dose around applicators. (Author)

  4. Monte Carlo simulation for soot dynamics

    Zhou Kun

    2012-01-01

    A new Monte Carlo method termed Comb-like frame Monte Carlo is developed to simulate the soot dynamics. Detailed stochastic error analysis is provided. Comb-like frame Monte Carlo is coupled with the gas phase solver Chemkin II to simulate soot formation in a 1-D premixed burner stabilized flame. The simulated soot number density, volume fraction, and particle size distribution all agree well with the measurement available in literature. The origin of the bimodal distribution of particle size distribution is revealed with quantitative proof.

  5. Monte carlo simulation for soot dynamics

    Zhou, Kun

    2012-01-01

    A new Monte Carlo method termed Comb-like frame Monte Carlo is developed to simulate the soot dynamics. Detailed stochastic error analysis is provided. Comb-like frame Monte Carlo is coupled with the gas phase solver Chemkin II to simulate soot formation in a 1-D premixed burner stabilized flame. The simulated soot number density, volume fraction, and particle size distribution all agree well with the measurement available in literature. The origin of the bimodal distribution of particle size distribution is revealed with quantitative proof.

  6. Importance iteration in MORSE Monte Carlo calculations

    An expression to calculate point values (the expected detector response of a particle emerging from a collision or the source) is derived and implemented in the MORSE-SGC/S Monte Carlo code. It is outlined how these point values can be smoothed as a function of energy and as a function of the optical thickness between the detector and the source. The smoothed point values are subsequently used to calculate the biasing parameters of the Monte Carlo runs to follow. The method is illustrated by an example, which shows that the obtained biasing parameters lead to a more efficient Monte Carlo calculation. (orig.)

  7. Fast quantum Monte Carlo on a GPU

    Lutsyshyn, Y

    2013-01-01

    We present a scheme for the parallelization of quantum Monte Carlo on graphical processing units, focusing on bosonic systems and variational Monte Carlo. We use asynchronous execution schemes with shared memory persistence, and obtain an excellent acceleration. Comparing with single core execution, GPU-accelerated code runs over 100× faster. The CUDA code is provided along with the package that is necessary to execute variational Monte Carlo for a system representing liquid helium-4. The program was benchmarked on several models of Nvidia GPU, including Fermi GTX560 and M2090, and the latest Kepler architecture K20 GPU. Kepler-specific optimization is discussed.

  8. Advanced computers and Monte Carlo

    High-performance parallelism that is currently available is synchronous in nature. It is manifested in such architectures as the Burroughs ILLIAC-IV, CDC STAR-100, TI ASC, CRI CRAY-1, ICL DAP, and many special-purpose array processors designed for signal processing. This form of parallelism has apparently not been of significant value to many important Monte Carlo calculations. Nevertheless, there is much asynchronous parallelism in many of these calculations. A model of a production code that requires up to 20 hours per problem on a CDC 7600 is studied for suitability on some asynchronous architectures that are on the drawing board. The code is described and some of its properties and resource requirements are identified, to be compared with the corresponding properties and resources of some asynchronous multiprocessor architectures. Arguments are made for programmer aids and special syntax to identify and support important asynchronous parallelism. 2 figures, 5 tables

  9. Guideline for radiation transport simulation with the Monte Carlo method

    Today, photon and neutron transport calculations with the Monte Carlo method have progressed thanks to advanced Monte Carlo codes and high-speed computers; 'Monte Carlo simulation' is a more suitable expression than 'calculation'. As Monte Carlo codes become more user-friendly and computer performance increases, most shielding problems will be solved with Monte Carlo codes and high-speed computers. Because these codes provide standard input data for some problems, the essential techniques of the Monte Carlo method and the variance reduction techniques of Monte Carlo calculations might lose the interest of general Monte Carlo users. In this paper, essential techniques of the Monte Carlo method and the variance reduction techniques, such as the importance sampling method, selection of estimator, and biasing techniques, are described to afford a better understanding of the Monte Carlo method and Monte Carlo codes. (author)
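
    Of the techniques listed, importance sampling is the easiest to demonstrate in a few lines: below, the tail probability P(Z > 4) for a standard normal is estimated both by analog sampling and by sampling from a normal shifted to the region of interest and reweighting. The target and shift are arbitrary illustrative choices, not taken from the guideline itself.

        import math, random

        random.seed(9)
        n = 200_000
        a = 4.0
        exact = 0.5 * math.erfc(a / math.sqrt(2.0))      # P(Z > 4) for Z ~ N(0,1)

        # Analog (crude) Monte Carlo: almost no samples land in the tail.
        analog = sum(1 for _ in range(n) if random.gauss(0.0, 1.0) > a) / n

        # Importance sampling: draw from N(a, 1) and weight by phi(x)/phi(x - a)
        # = exp(-a*x + a^2/2), so most samples contribute to the estimate.
        def weight(x):
            return math.exp(-a * x + 0.5 * a * a)

        importance = sum(weight(x) for x in (random.gauss(a, 1.0) for _ in range(n)) if x > a) / n

        print(f"exact      : {exact:.3e}")
        print(f"analog MC  : {analog:.3e}")
        print(f"importance : {importance:.3e}")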

  10. KENO, Multigroup P1 Scattering Monte-Carlo Transport Calculation for Criticality, Keff, Flux in 3-D. KENO-5, SCALE-1 Module with Pn Scattering, Super-grouping, Diffusion Albedo Reflection

    1 - Description of problem or function: KENO is a multigroup, Monte Carlo criticality code containing a special geometry package which allows easy description of systems composed of cylinders, spheres, and cuboids (rectangular parallelepipeds) arranged in any order with only one restriction. They cannot be rotated or translated. Each geometrical region must be described as completely enclosing all regions interior to it. For systems not describable using this special geometry package, the program can use the generalized geometry package (GEOM) developed for the O5R Monte Carlo code. It allows any system that can be described by a collection of planes and/or quadratic surfaces, arbitrarily oriented and intersecting in arbitrary fashion. The entire problem can be mocked up in generalized geometry, or one generalized geometry unit or box type can be used alone or in combination with standard KENO units or box types. Rectangular arrays of fissile units are allowed with or without external reflector regions. Output from KENO consists of keff for the system plus an estimate of its standard deviation and the leakage, absorption, and fissions for each energy group plus the totals for all groups. Flux as a function of energy group and region and fission densities as a function of region are optional output. KENO-4: Added features include a neutron balance edit, PICTURE routines to check the input geometry, and a random number sequencing subroutine written in FORTRAN-4. 2 - Method of solution: The scattering treatment used in KENO assumes that the differential neutron scattering cross section can be represented by a P1 Legendre polynomial. Absorption of neutrons in KENO is not allowed. Instead, at each collision point of a neutron tracking history the weight of the neutron is reduced by the absorption probability. When the neutron weight has been reduced below a specified point for the region in which the collision occurs, Russian roulette is played to determine if the
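
    The weight-reduction and Russian roulette treatment described above (often called implicit capture or survival biasing) can be sketched as follows; the absorption probability, cutoff, and survival weight are arbitrary illustrative values rather than KENO parameters.

        import random

        random.seed(10)

        def collide(weight, p_absorb=0.3, cutoff=0.25, w_survive=0.5):
            """One collision with implicit capture followed by Russian roulette."""
            weight *= (1.0 - p_absorb)            # reduce weight instead of absorbing
            if weight < cutoff:                   # low-weight history: play Russian roulette
                if random.random() < weight / w_survive:
                    weight = w_survive            # survivor carries the representative weight
                else:
                    weight = 0.0                  # killed; history terminates
            return weight

        # Follow a few histories through successive collisions.
        for history in range(5):
            w, n_collisions = 1.0, 0
            while w > 0.0 and n_collisions < 20:
                w = collide(w)
                n_collisions += 1
            print(f"history {history}: terminated after {n_collisions} collisions")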

  11. Monte Carlo modelling of TRIGA research reactor

    The Moroccan 2 MW TRIGA MARK II research reactor at Centre des Etudes Nucleaires de la Maamora (CENM) achieved initial criticality on May 2, 2007. The reactor is designed to effectively implement the various fields of basic nuclear research, manpower training, and production of radioisotopes for their use in agriculture, industry, and medicine. This study deals with the neutronic analysis of the 2-MW TRIGA MARK II research reactor at CENM and validation of the results by comparisons with the experimental, operational, and available final safety analysis report (FSAR) values. The study was prepared in collaboration between the Laboratory of Radiation and Nuclear Systems (ERSN-LMR) from Faculty of Sciences of Tetuan (Morocco) and CENM. The 3-D continuous energy Monte Carlo code MCNP (version 5) was used to develop a versatile and accurate full model of the TRIGA core. The model represents in detail all components of the core with literally no physical approximation. Continuous energy cross-section data from the more recent nuclear data evaluations (ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3) as well as S(α, β) thermal neutron scattering functions distributed with the MCNP code were used. The cross-section libraries were generated by using the NJOY99 system updated to its more recent patch file 'up259'. The consistency and accuracy of both the Monte Carlo simulation and neutron transport physics were established by benchmarking the TRIGA experiments. Core excess reactivity, total and integral control rods worth as well as power peaking factors were used in the validation process. Results of calculations are analysed and discussed.

  12. Monte Carlo modelling of TRIGA research reactor

    El Bakkari, B., E-mail: bakkari@gmail.co [Reactor Operating Unit (UCR), National Centre of Sciences, Energy and Nuclear Techniques (CNESTEN/CENM), POB 1382, Rabat (Morocco); ERSN-LMR, Department of Physics, Faculty of Sciences, POB 2121, Tetuan (Morocco); Nacir, B. [Reactor Operating Unit (UCR), National Centre of Sciences, Energy and Nuclear Techniques (CNESTEN/CENM), POB 1382, Rabat (Morocco); El Bardouni, T. [ERSN-LMR, Department of Physics, Faculty of Sciences, POB 2121, Tetuan (Morocco); El Younoussi, C. [Reactor Operating Unit (UCR), National Centre of Sciences, Energy and Nuclear Techniques (CNESTEN/CENM), POB 1382, Rabat (Morocco); ERSN-LMR, Department of Physics, Faculty of Sciences, POB 2121, Tetuan (Morocco); Merroun, O. [ERSN-LMR, Department of Physics, Faculty of Sciences, POB 2121, Tetuan (Morocco); Htet, A. [Reactor Technology Unit (UTR), National Centre of Sciences, Energy and Nuclear Techniques (CNESTEN/CENM), POB 1382, Rabat (Morocco); Boulaich, Y. [Reactor Operating Unit (UCR), National Centre of Sciences, Energy and Nuclear Techniques (CNESTEN/CENM), POB 1382, Rabat (Morocco); ERSN-LMR, Department of Physics, Faculty of Sciences, POB 2121, Tetuan (Morocco); Zoubair, M.; Boukhal, H. [ERSN-LMR, Department of Physics, Faculty of Sciences, POB 2121, Tetuan (Morocco); Chakir, M. [EPTN-LPMR, Faculty of Sciences, Kenitra (Morocco)

    2010-10-15

    The Moroccan 2 MW TRIGA MARK II research reactor at Centre des Etudes Nucleaires de la Maamora (CENM) achieved initial criticality on May 2, 2007. The reactor is designed to effectively implement the various fields of basic nuclear research, manpower training, and production of radioisotopes for their use in agriculture, industry, and medicine. This study deals with the neutronic analysis of the 2-MW TRIGA MARK II research reactor at CENM and validation of the results by comparisons with the experimental, operational, and available final safety analysis report (FSAR) values. The study was prepared in collaboration between the Laboratory of Radiation and Nuclear Systems (ERSN-LMR) from Faculty of Sciences of Tetuan (Morocco) and CENM. The 3-D continuous energy Monte Carlo code MCNP (version 5) was used to develop a versatile and accurate full model of the TRIGA core. The model represents in detail all components of the core with literally no physical approximation. Continuous energy cross-section data from the more recent nuclear data evaluations (ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3) as well as S(α, β) thermal neutron scattering functions distributed with the MCNP code were used. The cross-section libraries were generated by using the NJOY99 system updated to its more recent patch file 'up259'. The consistency and accuracy of both the Monte Carlo simulation and neutron transport physics were established by benchmarking the TRIGA experiments. Core excess reactivity, total and integral control rods worth as well as power peaking factors were used in the validation process. Results of calculations are analysed and discussed.

  13. Monte Carlo modelling of TRIGA research reactor

    El Bakkari, B.; Nacir, B.; El Bardouni, T.; El Younoussi, C.; Merroun, O.; Htet, A.; Boulaich, Y.; Zoubair, M.; Boukhal, H.; Chakir, M.

    2010-10-01

    The Moroccan 2 MW TRIGA MARK II research reactor at Centre des Etudes Nucléaires de la Maâmora (CENM) achieved initial criticality on May 2, 2007. The reactor is designed to effectively implement the various fields of basic nuclear research, manpower training, and production of radioisotopes for their use in agriculture, industry, and medicine. This study deals with the neutronic analysis of the 2-MW TRIGA MARK II research reactor at CENM and validation of the results by comparisons with the experimental, operational, and available final safety analysis report (FSAR) values. The study was prepared in collaboration between the Laboratory of Radiation and Nuclear Systems (ERSN-LMR) from Faculty of Sciences of Tetuan (Morocco) and CENM. The 3-D continuous energy Monte Carlo code MCNP (version 5) was used to develop a versatile and accurate full model of the TRIGA core. The model represents in detail all components of the core with literally no physical approximation. Continuous energy cross-section data from the more recent nuclear data evaluations (ENDF/B-VI.8, ENDF/B-VII.0, JEFF-3.1, and JENDL-3.3) as well as S(α, β) thermal neutron scattering functions distributed with the MCNP code were used. The cross-section libraries were generated by using the NJOY99 system updated to its more recent patch file "up259". The consistency and accuracy of both the Monte Carlo simulation and neutron transport physics were established by benchmarking the TRIGA experiments. Core excess reactivity, total and integral control rods worth as well as power peaking factors were used in the validation process. Results of calculations are analysed and discussed.

  14. 11th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing

    Nuyens, Dirk

    2016-01-01

    This book presents the refereed proceedings of the Eleventh International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing that was held at the University of Leuven (Belgium) in April 2014. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems, arising, in particular, in finance, statistics and computer graphics.

  15. Monte Carlo simulations for plasma physics

    Okamoto, M.; Murakami, S.; Nakajima, N.; Wang, W.X. [National Inst. for Fusion Science, Toki, Gifu (Japan)

    2000-07-01

    Plasma behaviours are very complicated and the analyses are generally difficult. However, when collisional processes play an important role in the plasma behaviour, the Monte Carlo method is often employed as a useful tool. For example, in neutral particle injection heating (NBI heating), electron or ion cyclotron heating, and alpha heating, Coulomb collisions slow down highly energetic particles and pitch-angle scatter them. These processes are often studied by the Monte Carlo technique and good agreement can be obtained with experimental results. Recently, the Monte Carlo method has been developed to study fast particle transport associated with heating and the generation of the radial electric field. Further, it is applied to investigating neoclassical transport in plasmas with steep gradients of density and temperature, which is beyond the conventional neoclassical theory. In this report, we briefly summarize the research done by the present authors utilizing the Monte Carlo method. (author)

  16. Frontiers of quantum Monte Carlo workshop: preface

    The introductory remarks, table of contents, and list of attendees are presented from the proceedings of the conference, Frontiers of Quantum Monte Carlo, which appeared in the Journal of Statistical Physics

  17. Monte Carlo methods for particle transport

    Haghighat, Alireza

    2015-01-01

    The Monte Carlo method has become the de facto standard in radiation transport. Although powerful, if not understood and used appropriately, the method can give misleading results. Monte Carlo Methods for Particle Transport teaches appropriate use of the Monte Carlo method, explaining the method's fundamental concepts as well as its limitations. Concise yet comprehensive, this well-organized text: * Introduces the particle importance equation and its use for variance reduction * Describes general and particle-transport-specific variance reduction techniques * Presents particle transport eigenvalue issues and methodologies to address these issues * Explores advanced formulations based on the author's research activities * Discusses parallel processing concepts and factors affecting parallel performance Featuring illustrative examples, mathematical derivations, computer algorithms, and homework problems, Monte Carlo Methods for Particle Transport provides nuclear engineers and scientists with a practical guide ...

  18. Monte Carlo Treatment Planning for Advanced Radiotherapy

    Cronholm, Rickard

    validation of a Monte Carlo model of a medical linear accelerator (i), converting a CT scan of a patient to a Monte Carlo compliant phantom (ii) and translating the treatment plan parameters (including beam energy, angles of incidence, collimator settings etc) to a Monte Carlo input file (iii). A protocol...... more sophisticated than previous algorithms, since it uses delineations of structures in order to include and/or exclude certain media in various anatomical regions. This method has the potential to reduce anatomically irrelevant media assignment. In-house MATLAB scripts translating the treatment plan...... presented. Comparisons between dose distributions for clinical treatment plans generated by a commercial Treatment Planning System and by the implemented Monte Carlo Treatment Planning workflow were conducted. Good agreement was generally found, but for regions involving large density gradients differences of...

  19. Experience with the Monte Carlo Method

    Monte Carlo simulation of radiation transport provides a powerful research and design tool that resembles laboratory experiments in many respects. Moreover, Monte Carlo simulations can provide an insight not attainable in the laboratory. However, the Monte Carlo method has its limitations, which, if not taken into account, can result in misleading conclusions. This paper will present the experience of this author, over almost three decades, in the use of the Monte Carlo method for a variety of applications. Examples will be shown of how the method was used to explore new ideas, as a parametric study and design optimization tool, and to analyze experimental data. The consequences of not accounting in detail for detector response and the scattering of radiation by surrounding structures are two of the examples that will be presented to demonstrate the pitfall of condensed

  20. Monte Carlo simulation of granular fluids

    Montanero, J. M.

    2003-01-01

    An overview of recent work on Monte Carlo simulations of a granular binary mixture is presented. The results are obtained by numerically solving the Enskog equation for inelastic hard spheres by means of an extension of the well-known direct Monte Carlo simulation (DSMC) method. The homogeneous cooling state and the stationary state reached using the Gaussian thermostat are considered. The temperature ratio, the fourth velocity moments and the velocity distribution functions are obtained for both cases.

  1. Monte Carlo photon transport techniques

    The basis of Monte Carlo calculation of photon transport problems is the computer simulation of individual photon histories and their subsequent averaging to provide the quantities of interest. As the history of a photon is followed, values of variables are selected and decisions are made by sampling known distributions using random numbers. The transport of photons is simulated by the creation of particles from a defined source region, generally with a random initial orientation in space, with tracking of particles as they travel through the system, sampling the probability density functions for their interactions to evaluate their trajectories and energy deposition at different points in the system. The interactions determine the penetration and the motion of particles. The computational model for radiation transport problems includes geometry and material specifications. Every computer code contains a database of experimentally obtained quantities, known as cross-sections, that determine the probability of a particle interacting with the medium through which it is transported. Every cross-section is peculiar to the type and energy of the incident particle and to the kind of interaction it undergoes. These partial cross-sections are summed to form the total cross-section; the ratio of the partial cross-section to the total cross-section gives the probability of this particular interaction occurring. Cross-section data for the interaction types of interest must be supplied for each material present. The model also consists of algorithms used to compute the result of interactions (changes in particle energy, direction, etc.) based on the physical principles that describe the interaction of radiation with matter and the cross-section data provided.
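
    As a minimal sketch of the sampling loop described above (a generic illustration with invented cross-section values, not code from the record), the following Python snippet tracks photons through a 1-D homogeneous slab, sampling the free path from the total cross-section and choosing the interaction type from the ratio of partial to total cross-sections.

```python
import random
import math

# Hypothetical macroscopic cross-sections for a single material (units: 1/cm).
SIGMA_PHOTOELECTRIC = 0.02
SIGMA_COMPTON = 0.18
SIGMA_TOTAL = SIGMA_PHOTOELECTRIC + SIGMA_COMPTON

SLAB_THICKNESS = 10.0  # cm

def transmit_one_photon():
    """Follow one photon history through a 1-D slab; return True if it escapes."""
    x = 0.0
    while True:
        # Free path sampled from the exponential distribution with mean 1/SIGMA_TOTAL.
        x += -math.log(random.random()) / SIGMA_TOTAL
        if x >= SLAB_THICKNESS:
            return True                      # photon leaks out of the slab
        # Interaction type chosen from the ratio of partial to total cross-section.
        if random.random() < SIGMA_PHOTOELECTRIC / SIGMA_TOTAL:
            return False                     # absorbed (history terminated)
        # Otherwise a Compton scatter occurred; in this toy forward-only model we
        # simply continue the flight without changing direction or energy.

n = 100_000
transmitted = sum(transmit_one_photon() for _ in range(n))
print(f"transmission probability ~ {transmitted / n:.4f}")
```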

  2. Research on GPU Acceleration for Monte Carlo Criticality Calculation

    Xu, Qi; Yu, Ganglin; Wang, Kan

    2014-06-01

    The Monte Carlo neutron transport method can be naturally parallelized on multi-core architectures because particle histories are independent during the simulation. The GPU+CPU heterogeneous parallel mode has become an increasingly popular form of parallelism in the field of scientific supercomputing. Thus, this work focuses on the GPU acceleration method for Monte Carlo criticality simulation, as well as the computational efficiency that GPUs can bring. The "neutron transport step" is introduced to increase GPU thread occupancy. To test the sensitivity of the acceleration to MC code complexity, a 1D one-group code and a 3D multi-group general-purpose code were each ported to GPUs, and the acceleration effects are compared. The numerical experiments show a considerable acceleration effect from the "neutron transport step" strategy. However, the performance comparison between the 1D code and the 3D code indicates the poor scalability of MC codes on GPUs.

  3. Fission Matrix Capability for MCNP Monte Carlo

    Carney, Sean E. [Los Alamos National Laboratory; Brown, Forrest B. [Los Alamos National Laboratory; Kiedrowski, Brian C. [Los Alamos National Laboratory; Martin, William R. [Los Alamos National Laboratory

    2012-09-05

    In a Monte Carlo criticality calculation, before the tallying of quantities can begin, a converged fission source (the fundamental eigenvector of the fission kernel) is required. Tallies of interest may include powers, absorption rates, leakage rates, or the multiplication factor (the fundamental eigenvalue of the fission kernel, k_eff). Just as in the power iteration method of linear algebra, if the dominance ratio (the ratio of the first and zeroth eigenvalues) is high, many iterations of neutron history simulations are required to isolate the fundamental mode of the problem. Optically large systems have large dominance ratios, and systems with poor neutron communication between regions are also slow to converge. The fission matrix method, implemented in MCNP[1], addresses these problems. When a Monte Carlo random walk from a source is executed, the fission kernel is stochastically applied to the source. Random numbers are used for distances to collision, reaction types, scattering physics, fission reactions, etc. This method is used because the fission kernel is a complex, 7-dimensional operator that is not explicitly known. Deterministic methods use approximations and discretizations of the kernel in energy, space, and direction; consequently, they are faster. Monte Carlo directly simulates the physics, which necessitates the use of random sampling. Because of this statistical noise, common convergence acceleration methods used in deterministic methods do not work. In the fission matrix method, we use the random walk information not only to build the next-iteration fission source, but also to build a spatially averaged fission kernel. Just as in deterministic methods, this involves approximation and discretization. The approximation is the tallying of the spatially discretized fission kernel with an incorrect fission source. We address this by making the spatial mesh fine enough that this error is negligible. As a consequence of discretization we get a
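
    The spatially discretized fission kernel described above can be illustrated with a toy example. The sketch below is illustrative only: the matrix entries are invented rather than tallied by MCNP. It builds a small fission matrix F, where F[i, j] is the expected number of next-generation fission neutrons born in region i per fission neutron started in region j, and extracts the fundamental mode by power iteration.

```python
import numpy as np

# Toy 4-region fission matrix; in the real method these entries are tallied
# during the Monte Carlo random walk.
F = np.array([
    [0.60, 0.20, 0.02, 0.00],
    [0.20, 0.55, 0.20, 0.02],
    [0.02, 0.20, 0.55, 0.20],
    [0.00, 0.02, 0.20, 0.60],
])

def power_iteration(F, tol=1e-10, max_iter=1000):
    """Return (k_eff, fission source) as the dominant eigenpair of F."""
    s = np.ones(F.shape[0]) / F.shape[0]   # initial flat fission source
    k = 1.0
    for _ in range(max_iter):
        s_new = F @ s
        k_new = s_new.sum() / s.sum()      # eigenvalue estimate
        s_new /= s_new.sum()               # normalize the source
        if abs(k_new - k) < tol:
            return k_new, s_new
        s, k = s_new, k_new
    return k, s

k_eff, source = power_iteration(F)
print("k_eff ~", round(k_eff, 5))
print("fundamental fission source:", np.round(source, 4))
```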

  4. Pseudopotentials for quantum-Monte-Carlo-calculations; Pseudopotentiale fuer Quanten-Monte-Carlo-Rechnungen

    Burkatzki, Mark Thomas

    2008-07-01

    The author presents scalar-relativistic energy-consistent Hartree-Fock pseudopotentials for the main-group and 3d-transition-metal elements. The pseudopotentials do not exhibit a singularity at the nucleus and are therefore suitable for quantum Monte Carlo (QMC) calculations. The author demonstrates their transferability through extensive benchmark calculations of atomic excitation spectra as well as molecular properties. In particular, the author computes the vibrational frequencies and binding energies of 26 first- and second-row diatomic molecules using post-Hartree-Fock methods, finding excellent agreement with the corresponding all-electron values. The author shows that the presented pseudopotentials give higher accuracy than other existing pseudopotentials constructed specifically for QMC. The localization error and the efficiency in QMC are discussed. The author also presents QMC calculations for selected atomic and diatomic 3d-transition-metal systems. Finally, valence basis sets of different sizes (VnZ with n=D,T,Q,5 for 1st and 2nd row; with n=D,T for 3rd to 5th row; with n=D,T,Q for the 3d transition metals) optimized for the pseudopotentials are presented. (orig.)

  5. Monte Carlo strategies in scientific computing

    Liu, Jun S

    2008-01-01

    This paperback edition is a reprint of the 2001 Springer edition. This book provides a self-contained and up-to-date treatment of the Monte Carlo method and develops a common framework under which various Monte Carlo techniques can be "standardized" and compared. Given the interdisciplinary nature of the topics and a moderate prerequisite for the reader, this book should be of interest to a broad audience of quantitative researchers such as computational biologists, computer scientists, econometricians, engineers, probabilists, and statisticians. It can also be used as the textbook for a graduate-level course on Monte Carlo methods. Many problems discussed in the later chapters can be potential thesis topics for master's or PhD students in statistics or computer science departments. Jun Liu is Professor of Statistics at Harvard University, with a courtesy Professor appointment at the Harvard Biostatistics Department. Professor Liu was the recipient of the 2002 COPSS Presidents' Award, the most prestigious one for sta...

  6. CosmoPMC: Cosmology Population Monte Carlo

    Kilbinger, Martin; Cappe, Olivier; Cardoso, Jean-Francois; Fort, Gersende; Prunet, Simon; Robert, Christian P; Wraith, Darren

    2011-01-01

    We present the public release of the Bayesian sampling algorithm for cosmology, CosmoPMC (Cosmology Population Monte Carlo). CosmoPMC explores the parameter space of various cosmological probes, and also provides a robust estimate of the Bayesian evidence. CosmoPMC is based on an adaptive importance sampling method called Population Monte Carlo (PMC). Various cosmology likelihood modules are implemented, and new modules can be added easily. The importance-sampling algorithm is written in C and fully parallelised using the Message Passing Interface (MPI). Due to very little overhead, the wall-clock time required for sampling scales approximately with the number of CPUs. The CosmoPMC package contains post-processing and plotting programs, and in addition a Markov chain Monte Carlo (MCMC) algorithm. The sampling engine is implemented in the library pmclib and can be used independently. The software is available for download at http://www.cosmopmc.info.
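
    A minimal sketch of the Population Monte Carlo (adaptive importance sampling) idea underlying CosmoPMC follows. This is a generic one-dimensional illustration, not code from the package: draw samples from a Gaussian proposal, weight them by target/proposal, and adapt the proposal moments from the weighted sample at each iteration.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Stand-in "posterior": a Gaussian with mean 3 and standard deviation 0.5.
    return -0.5 * ((x - 3.0) / 0.5) ** 2

def pmc(n_samples=5000, n_iter=8):
    mu, sigma = 0.0, 5.0                         # deliberately poor initial proposal
    for _ in range(n_iter):
        x = rng.normal(mu, sigma, n_samples)     # draw from current proposal
        log_q = -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma)
        log_w = log_target(x) - log_q            # importance weights (unnormalized)
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        # Adapt the proposal to the weighted sample (moment matching).
        mu = np.sum(w * x)
        sigma = np.sqrt(np.sum(w * (x - mu) ** 2))
    return mu, sigma, w

mu, sigma, w = pmc()
print(f"adapted proposal: mean={mu:.3f}, std={sigma:.3f}")
# Effective sample size as a diagnostic of proposal quality.
print(f"effective sample size: {1.0 / np.sum(w**2):.0f}")
```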

  7. Quantum Monte Carlo calculations of light nuclei

    Quantum Monte Carlo calculations using realistic two- and three-nucleon interactions are presented for nuclei with up to eight nucleons. We have computed the ground state and a few excited states of all such nuclei with Green's function Monte Carlo (GFMC) and all of the experimentally known excited states using variational Monte Carlo (VMC). The GFMC calculations show that for a given Hamiltonian, the VMC calculations of excitation spectra are reliable, but the VMC ground-state energies are significantly above the exact values. We find that the Hamiltonian we are using (which was developed based on 3H, 4He, and nuclear matter calculations) underpredicts the binding energy of p-shell nuclei. However, our results for excitation spectra are very good, and one can see both shell-model and collective spectra resulting from fundamental many-nucleon calculations. Possible improvements in the three-nucleon potential are also discussed.

  8. SPQR: a Monte Carlo reactor kinetics code

    The SPQR Monte Carlo code has been developed to analyze fast reactor core accident problems where conventional methods are considered inadequate. The code is based on the adiabatic approximation of the quasi-static method. This initial version contains no automatic material motion or feedback. An existing Monte Carlo code is used to calculate the shape functions and the integral quantities needed in the kinetics module. Several sample problems have been devised and analyzed. Due to the large statistical uncertainty associated with the calculation of reactivity in accident simulations, the results, especially at later times, differ greatly from those of deterministic methods. It was also found that in large uncoupled systems, the Monte Carlo method has difficulty handling asymmetric perturbations.

  9. Quantum Monte Carlo with Variable Spins

    Melton, Cody A; Mitas, Lubos

    2016-01-01

    We investigate the inclusion of variable spins in electronic structure quantum Monte Carlo, with a focus on diffusion Monte Carlo with Hamiltonians that include spin-orbit interactions. Following our previous introduction of fixed-phase spin-orbit diffusion Monte Carlo (FPSODMC), we thoroughly discuss the details of the method and elaborate upon its technicalities. We present a proof for an upper-bound property for complex nonlocal operators, which allows for the implementation of T-moves to ensure the variational property. We discuss the time step biases associated with our particular choice of spin representation. Applications of the method are also presented for atomic and molecular systems. We calculate the binding energies and geometry of the PbH and Sn$_2$ molecules, as well as the electron affinities of the 6$p$ row elements in close agreement with experiments.

  10. Interaction picture density matrix quantum Monte Carlo

    The recently developed density matrix quantum Monte Carlo (DMQMC) algorithm stochastically samples the N-body thermal density matrix and hence provides access to exact properties of many-particle quantum systems at arbitrary temperatures. We demonstrate that moving to the interaction picture provides substantial benefits when applying DMQMC to interacting fermions. In this first study, we focus on a system of much recent interest: the uniform electron gas in the warm dense regime. The basis set incompleteness error at finite temperature is investigated and extrapolated via a simple Monte Carlo sampling procedure. Finally, we provide benchmark calculations for a four-electron system, comparing our results to previous work where possible

  11. Monte Carlo modeling of Tajoura reactor

    From the neutronics point of view, reactor modeling is concerned with the determination of the reactor neutronic parameters, which can be obtained through the solution of the neutron transport equation. The attractiveness of the Monte Carlo method lies in its capability of handling geometrically complicated problems; due to the nature of the method, a large number of particles must be tracked from birth to death before statistically significant results can be obtained. In this paper the MCNP Monte Carlo code is implemented in the modeling of the Tajoura reactor. (author)

  12. Monte Carlo dose computation for IMRT optimization*

    Laub, W.; Alber, M.; Birkner, M.; Nüsslin, F.

    2000-07-01

    A method which combines the accuracy of Monte Carlo dose calculation with a finite size pencil-beam based intensity modulation optimization is presented. The pencil-beam algorithm is employed to compute the fluence element updates for a converging sequence of Monte Carlo dose distributions. The combination is shown to improve results over the pencil-beam based optimization in a lung tumour case and a head and neck case. Inhomogeneity effects like a broader penumbra and dose build-up regions can be compensated for by intensity modulation.

  13. Monte Carlo electron/photon transport

    A review of nonplasma coupled electron/photon transport using the Monte Carlo method is presented. Remarks are mainly restricted to linearized formalisms at electron energies from 1 keV to 1000 MeV. Applications involving pulse-height estimation, transport in external magnetic fields, and optical Cerenkov production are discussed to underscore the importance of this branch of computational physics. Advances in electron multigroup cross-section generation are reported, and their impact on future code development is assessed. Progress toward the transformation of MCNP into a generalized neutral/charged-particle Monte Carlo code is described. 48 refs

  14. Geodesic Monte Carlo on Embedded Manifolds.

    Byrne, Simon; Girolami, Mark

    2013-12-01

    Markov chain Monte Carlo methods explicitly defined on the manifold of probability distributions have recently been established. These methods are constructed from diffusions across the manifold and the solution of the equations describing geodesic flows in the Hamilton-Jacobi representation. This paper takes the differential geometric basis of Markov chain Monte Carlo further by considering methods to simulate from probability distributions that themselves are defined on a manifold, with common examples being classes of distributions describing directional statistics. Proposal mechanisms are developed based on the geodesic flows over the manifolds of support for the distributions, and illustrative examples are provided for the hypersphere and Stiefel manifold of orthonormal matrices. PMID:25309024

  15. Monte Carlo simulation of granular fluids

    Montanero, J M

    2003-01-01

    An overview of recent work on Monte Carlo simulations of a granular binary mixture is presented. The results are obtained numerically solving the Enskog equation for inelastic hard-spheres by means of an extension of the well-known direct Monte Carlo simulation (DSMC) method. The homogeneous cooling state and the stationary state reached using the Gaussian thermostat are considered. The temperature ratio, the fourth velocity moments and the velocity distribution functions are obtained for both cases. The shear viscosity characterizing the momentum transport in the thermostatted case is calculated as well. The simulation results are compared with analytical predictions showing an excellent agreement.

  16. Monte Carlo applications to radiation shielding problems

    Monte Carlo methods are a class of computational algorithms that rely on repeated random sampling of physical and mathematical systems to compute their results. The basic concepts of MC are nonetheless simple and straightforward and can be learned using a personal computer. Monte Carlo methods require large quantities of random numbers, and it was their use that spurred the development of pseudorandom number generators, which are far quicker to use than the tables of random numbers previously used for statistical sampling. In Monte Carlo simulation of radiation transport, the history (track) of a particle is viewed as a random sequence of free flights that end with an interaction event where the particle changes its direction of movement, loses energy and, occasionally, produces secondary particles. The Monte Carlo simulation of a given experimental arrangement (e.g., an electron beam coming from an accelerator and impinging on a water phantom) consists of the numerical generation of random histories. To simulate these histories we need an interaction model, i.e., a set of differential cross sections (DCS) for the relevant interaction mechanisms. The DCSs determine the probability distribution functions (pdf) of the random variables that characterize a track: 1) the free path between successive interaction events, 2) the type of interaction taking place, and 3) the energy loss and angular deflection in a particular event (and the initial state of emitted secondary particles, if any). Once these pdfs are known, random histories can be generated by using appropriate sampling methods. If the number of generated histories is large enough, quantitative information on the transport process may be obtained by simply averaging over the simulated histories. The Monte Carlo method yields the same information as the solution of the Boltzmann transport equation, with the same interaction model, but is easier to implement. In particular, the simulation of radiation

  17. Monte Carlo dose distributions for radiosurgery

    The precision of radiosurgery treatment planning systems is limited by the approximations of their algorithms and by their dosimetric input data. This is especially important in small fields. The Monte Carlo method, however, is an accurate alternative, as it considers every aspect of particle transport. In this work an acoustic neurinoma is studied by comparing the dose distributions of a planning system and of Monte Carlo. Relative shifts have been measured and, furthermore, dose-volume histograms have been calculated for the target and adjacent organs at risk. (orig.)

  18. Monte Carlo simulation of neutron scattering instruments

    A library of Monte Carlo subroutines has been developed for the purpose of design of neutron scattering instruments. Using small-angle scattering as an example, the philosophy and structure of the library are described and the programs are used to compare instruments at continuous wave (CW) and long-pulse spallation source (LPSS) neutron facilities. The Monte Carlo results give a count-rate gain of a factor between 2 and 4 using time-of-flight analysis. This is comparable to scaling arguments based on the ratio of wavelength bandwidth to resolution width

  19. Fast sequential Monte Carlo methods for counting and optimization

    Rubinstein, Reuven Y; Vaisman, Radislav

    2013-01-01

    A comprehensive account of the theory and application of Monte Carlo methods Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the

  20. Use of Monte Carlo Methods in brachytherapy; Uso del metodo de Monte Carlo en braquiterapia

    Granero Cabanero, D.

    2015-07-01

    The Monte Carlo method has become a fundamental tool for brachytherapy dosimetry, mainly because it avoids the difficulties associated with experimental dosimetry. In brachytherapy the main handicap of experimental dosimetry is the high dose gradient near the sources, which means that small uncertainties in the positioning of the detectors lead to large uncertainties in the dose. This presentation will review mainly the procedure for calculating dose distributions around a source using the Monte Carlo method, showing the difficulties inherent in these calculations. In addition we will briefly review other applications of the Monte Carlo method in brachytherapy dosimetry, such as its use in advanced calculation algorithms, the calculation of barriers, or obtaining dose distributions around applicators. (Author)

  1. Monte Carlo methods in AB initio quantum chemistry quantum Monte Carlo for molecules

    Lester, William A; Reynolds, PJ

    1994-01-01

    This book presents the basic theory and application of the Monte Carlo method to the electronic structure of atoms and molecules. It assumes no previous knowledge of the subject, only a knowledge of molecular quantum mechanics at the first-year graduate level. A working knowledge of traditional ab initio quantum chemistry is helpful, but not essential.Some distinguishing features of this book are: Clear exposition of the basic theory at a level to facilitate independent study. Discussion of the various versions of the theory: diffusion Monte Carlo, Green's function Monte Carlo, and release n

  2. On the use of stochastic approximation Monte Carlo for Monte Carlo integration

    Liang, Faming

    2009-03-01

    The stochastic approximation Monte Carlo (SAMC) algorithm has recently been proposed as a dynamic optimization algorithm in the literature. In this paper, we show in theory that the samples generated by SAMC can be used for Monte Carlo integration via a dynamically weighted estimator by calling some results from the literature of nonhomogeneous Markov chains. Our numerical results indicate that SAMC can yield significant savings over conventional Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, for the problems for which the energy landscape is rugged. © 2008 Elsevier B.V. All rights reserved.
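
    The dynamically weighted estimator mentioned above relies on the same principle as ordinary weighted Monte Carlo integration. The sketch below is not the SAMC algorithm itself, only a self-normalized importance-sampling estimator that shows how weighted samples are turned into an integral estimate; the target, proposal, and integrand are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    return x ** 2                       # integrand whose expectation we want

def log_target(x):
    return -0.5 * x ** 2                # unnormalized standard normal density

# Samples from a broad proposal (a normal with std 3), each carrying a weight
# proportional to target density / proposal density.
x = rng.normal(0.0, 3.0, 200_000)
log_w = log_target(x) - (-0.5 * (x / 3.0) ** 2 - np.log(3.0))
w = np.exp(log_w - log_w.max())

estimate = np.sum(w * f(x)) / np.sum(w)     # self-normalized weighted estimator
print(f"E[x^2] under N(0,1): estimate = {estimate:.4f} (exact = 1)")
```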

  3. Fast Monte Carlo-assisted simulation of cloudy Earth backgrounds

    Adler-Golden, Steven; Richtsmeier, Steven C.; Berk, Alexander; Duff, James W.

    2012-11-01

    A calculation method has been developed for rapidly synthesizing radiometrically accurate ultraviolet through long-wavelength-infrared spectral imagery of the Earth for arbitrary locations and cloud fields. The method combines cloud-free surface reflectance imagery with cloud radiance images calculated from a first-principles 3-D radiation transport model. The MCScene Monte Carlo code [1-4] is used to build a cloud image library; a data fusion method is incorporated to speed convergence. The surface and cloud images are combined with an upper-atmospheric description with the aid of solar and thermal radiation transport equations that account for atmospheric inhomogeneity. The method enables a wide variety of sensor and sun locations, cloud fields, and surfaces to be combined on the fly, and provides hyperspectral wavelength resolution with minimal computational effort. The simulations agree very well with much more time-consuming direct Monte Carlo calculations of the same scene.

  4. Therapeutic Applications of Monte Carlo Calculations in Nuclear Medicine

    Coulot, J

    2003-08-07

    remarks to be made about the goal and general organization of the discussion. First, the book cannot be considered to be strictly about the Monte Carlo method; it also covers internal dosimetry and related Monte Carlo issues. It must also be noted that the discussion would sometimes have been clearer if SI units had been used instead of rad or mCi, especially for European readers. There are some confusing features that could lead to misconceptions, since the authors sometimes refer to treatment planning software as Monte Carlo codes. While the valuable contribution of software such as MIRDOSE to the field of radiation protection dosimetry must be underlined, it should not, strictly speaking, be considered a Monte Carlo code. It would have been more interesting and relevant to provide a more exhaustive review of Monte Carlo codes (history of the code, transport algorithm, pros and cons), and to devote a separate chapter to treatment planning and radiation protection software (3D-ID, MABDOS, MIRDOSE3), which are of routine clinical interest. Nevertheless, this book is interesting and of practical use, and it should find its place in all modern nuclear medicine departments interested in dosimetry, providing up-to-date data and references. It should be viewed as a good, well-documented handbook, or as a general introduction for beginners and students. (book review)

  5. A Monte Carlo simulation of photomultiplier resolution

    A Monte Carlo simulation of dynode statistics has been used to generate multiphotoelectron distributions to compare with actual photomultiplier resolution results. In place of Poisson or Polya statistics, in this novel approach the basis for the simulation is an experimentally determined single-electron response. The relevance of this method to the study of intrinsic line widths of scintillators is discussed.

  6. Using CIPSI nodes in diffusion Monte Carlo

    Caffarel, Michel; Giner, Emmanuel; Scemama, Anthony

    2016-01-01

    Several aspects of the recently proposed DMC-CIPSI approach, which consists in using selected Configuration Interaction (SCI) approaches such as CIPSI (Configuration Interaction using a Perturbative Selection done Iteratively) to build accurate nodes for diffusion Monte Carlo (DMC) calculations, are presented and discussed. The main ideas are illustrated with a number of calculations for diatomic molecules and for the benchmark G1 set.

  7. Coded aperture optimization using Monte Carlo simulations

    Coded apertures using Uniformly Redundant Arrays (URA) have been evaluated, with limited success, for two-dimensional and three-dimensional imaging in nuclear medicine. The images reconstructed from coded projections contain artifacts and suffer from poor spatial resolution in the longitudinal direction. We introduce a Maximum-Likelihood Expectation-Maximization (MLEM) algorithm for three-dimensional coded aperture imaging which uses a projection matrix calculated by Monte Carlo simulations. The aim of the algorithm is to reduce artifacts and improve the three-dimensional spatial resolution in the reconstructed images. Firstly, we present the validation of GATE (Geant4 Application for Emission Tomography) for Monte Carlo simulations of a coded mask installed on a clinical gamma camera. The coded mask modelling was validated by comparison between experimental and simulated data in terms of energy spectra, sensitivity and spatial resolution. In the second part of the study, we use the validated model to calculate the projection matrix with Monte Carlo simulations. A three-dimensional thyroid phantom study was performed to compare the performance of the three-dimensional MLEM reconstruction with the conventional correlation method. The results indicate that the artifacts are reduced and the three-dimensional spatial resolution is improved with the Monte Carlo-based MLEM reconstruction.
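
    The MLEM update used in reconstructions of this kind has a compact form, f <- f * (A^T (p / A f)) / (A^T 1). The following sketch uses a toy random system matrix, not the simulated coded-aperture projection matrix from the study, purely to show one way of iterating the standard formula.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy projection (system) matrix A: A[i, j] = probability that a photon emitted
# in image voxel j is detected in projection bin i.  In the study this matrix is
# estimated by Monte Carlo simulation of the coded mask and camera.
n_bins, n_voxels = 40, 20
A = rng.random((n_bins, n_voxels))
A /= A.sum(axis=0, keepdims=True)

true_image = rng.poisson(50.0, n_voxels).astype(float)
projections = rng.poisson(A @ true_image).astype(float)   # noisy measured data

def mlem(A, p, n_iter=100):
    f = np.ones(A.shape[1])                     # uniform initial estimate
    sensitivity = A.sum(axis=0)                 # A^T 1
    for _ in range(n_iter):
        forward = A @ f                         # expected projections
        ratio = p / np.maximum(forward, 1e-12)  # measured / expected
        f *= (A.T @ ratio) / sensitivity        # multiplicative MLEM update
    return f

recon = mlem(A, projections)
print("relative error:", np.linalg.norm(recon - true_image) / np.linalg.norm(true_image))
```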

  8. Monte Carlo methods beyond detailed balance

    Schram, Raoul D.; Barkema, Gerard T.

    2015-01-01

    Monte Carlo algorithms are nearly always based on the concept of detailed balance and ergodicity. In this paper we focus on algorithms that do not satisfy detailed balance. We introduce a general method for designing non-detailed balance algorithms, starting from a conventional algorithm satisfying

  9. A note on simultaneous Monte Carlo tests

    Hahn, Ute

    In this short note, Monte Carlo tests of goodness of fit are considered for data of the form X(t), t ∈ I, that reject the null hypothesis if X(t) leaves an acceptance region bounded by an upper and a lower curve for some t in I. A construction of the acceptance region is proposed that complies with a...

  10. Accelerating Hasenbusch's acceleration of hybrid Monte Carlo

    Hasenbusch has proposed splitting the pseudo-fermionic action into two parts, in order to speed-up Hybrid Monte Carlo simulations of QCD. We have tested a different splitting, also using clover-improved Wilson fermions. An additional speed-up between 5 and 20% over the original proposal was achieved in production runs. (orig.)

  11. A comparison of Monte Carlo generators

    Golan, Tomasz

    2014-01-01

    A comparison of the GENIE, NEUT, NUANCE, and NuWro Monte Carlo neutrino event generators is presented using a set of four observables: proton multiplicity, total visible energy, most energetic proton momentum, and the π+ two-dimensional energy vs. cosine distribution.

  12. Scalable Domain Decomposed Monte Carlo Particle Transport

    O' Brien, Matthew Joseph [Univ. of California, Davis, CA (United States)

    2013-12-05

    In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.

  13. Parallel processing Monte Carlo radiation transport codes

    Issues related to distributed-memory multiprocessing as applied to Monte Carlo radiation transport are discussed. Measurements of communication overhead are presented for the radiation transport code MCNP which employs the communication software package PVM, and average efficiency curves are provided for a homogeneous virtual machine

  14. Monte Carlo simulation of the microcanonical ensemble

    We consider simulating statistical systems with a random walk on a constant energy surface. This combines features of deterministic molecular dynamics techniques and conventional Monte Carlo simulations. For discrete systems the method can be programmed to run an order of magnitude faster than other approaches. It does not require high quality random numbers and may also be useful for nonequilibrium studies. 10 references
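
    One widely used concrete realization of such a constant-energy random walk for discrete systems is the "demon" algorithm (naming it so is an assumption; the record does not identify the specific scheme). The sketch below applies it to a 1-D Ising chain: a demon degree of freedom absorbs or supplies the energy of each attempted spin flip so that the total energy stays fixed.

```python
import random

random.seed(0)
N = 1000
spins = [1] * N                       # start in the ground state
demon_energy = 20                     # extra energy given to the demon initially

def bond_energy(spins, i):
    """Energy of the two bonds touching site i (J = 1, periodic boundaries)."""
    left, right = spins[i - 1], spins[(i + 1) % N]
    return -spins[i] * (left + right)

demon_trace = []
for sweep in range(500):
    for _ in range(N):
        i = random.randrange(N)
        delta_e = -2 * bond_energy(spins, i)   # energy change if spin i flips
        # Accept only if the demon can pay for (or absorb) the change.
        if demon_energy - delta_e >= 0:
            spins[i] = -spins[i]
            demon_energy -= delta_e
    demon_trace.append(demon_energy)

# The demon's energy fluctuates exponentially; its mean serves as a temperature proxy.
tail = demon_trace[100:]
print(f"mean demon energy ~ {sum(tail) / len(tail):.2f} (proxy for temperature)")
```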

  15. Extending canonical Monte Carlo methods: II

    We have previously presented a methodology for extending canonical Monte Carlo methods inspired by a suitable extension of the canonical fluctuation relation C = β²⟨δE²⟩ compatible with negative heat capacities, C < 0, as is shown in the particular case of the 2D seven-state Potts model, where the exponent α = 0.14–0.18

  16. Monte Carlo Renormalization Group: a review

    The logic and the methods of Monte Carlo Renormalization Group (MCRG) are reviewed. A status report of results for 4-dimensional lattice gauge theories derived using MCRG is presented. Existing methods for calculating the improved action are reviewed and evaluated. The Gupta-Cordery improved MCRG method is described and compared with the standard one. 71 refs., 8 figs

  17. Burnup calculation methodology in the serpent 2 Monte Carlo code

    This paper presents two topics related to the burnup calculation capabilities in the Serpent 2 Monte Carlo code: advanced time-integration methods and improved memory management, accomplished by the use of different optimization modes. The development of the introduced methods is an important part of re-writing the Serpent source code, carried out for the purpose of extending the burnup calculation capabilities from 2D assembly-level calculations to large 3D reactor-scale problems. The progress is demonstrated by repeating a PWR test case, originally carried out in 2009 for the validation of the newly-implemented burnup calculation routines in Serpent 1. (authors)

  18. Studying the information content of TMDs using Monte Carlo generators

    Avakian, H. [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); Matevosyan, H. [The Univ. of Adelaide, Adelaide (Australia); Pasquini, B. [Univ. of Pavia, Pavia (Italy); Schweitzer, P. [Univ. of Connecticut, Storrs, CT (United States)

    2015-02-05

    Theoretical advances in studies of the nucleon structure have been spurred by recent measurements of spin and/or azimuthal asymmetries worldwide. One of the main challenges still remaining is the extraction of the parton distribution functions, generalized to describe transverse momentum and spatial distributions of partons from these observables with no or minimal model dependence. In this topical review we present the latest developments in the field with emphasis on requirements for Monte Carlo event generators, indispensable for studies of the complex 3D nucleon structure, and discuss examples of possible applications.

  19. Studying the information content of TMDs using Monte Carlo generators

    Theoretical advances in studies of the nucleon structure have been spurred by recent measurements of spin and/or azimuthal asymmetries worldwide. One of the main challenges still remaining is the extraction of the parton distribution functions, generalized to describe transverse momentum and spatial distributions of partons from these observables with no or minimal model dependence. In this review we present the latest developments in the field with emphasis on requirements for Monte Carlo event generators, indispensable for studies of the complex 3D nucleon structure, and discuss examples of possible applications. (paper)

  20. Optimization of Monte Carlo algorithms and ray tracing on GPUs

    To take advantage of the computational power of GPUs (Graphical Processing Units), algorithms that work well on CPUs must be modified to conform to the GPU execution model. In this study, typical task-parallel Monte Carlo algorithms have been reformulated in a data-parallel way, and the benefits of doing so are examined. We were able to show that the data-parallel approach greatly improves thread coherency and keeps thread blocks busy, improving GPU utilization compared to the task-parallel approach. The data-parallel approach does not, however, outperform the task-parallel approach with regard to speedup over the CPU. Regarding ray-tracing acceleration, OptiX shows promise for providing enough ray-tracing speed to be used in a full 3D Monte Carlo neutron transport code for reactor calculations. It is important to note that it is necessary to operate on large datasets of particle histories in order to achieve good performance in both OptiX and the data-parallel algorithm, since this reduces the impact of latency. Our paper also shows the need to rewrite standard Monte Carlo algorithms in order to take full advantage of these new, powerful processor architectures.
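
    The task-parallel versus data-parallel distinction can be illustrated on the CPU with array operations; the NumPy sketch below stands in for GPU kernels and is illustrative only, not code from the study. Instead of following one particle at a time, all live particles advance together through the same transport step, which is the structure that keeps GPU threads coherent.

```python
import numpy as np

rng = np.random.default_rng(3)

SIGMA_TOTAL = 0.3          # toy total cross-section (1/cm)
ABSORB_FRACTION = 0.4      # fraction of collisions that absorb the particle
SLAB = 15.0                # slab thickness (cm)

# Data-parallel state: one array entry per particle instead of one loop per history.
n = 1_000_000
position = np.zeros(n)
alive = np.ones(n, dtype=bool)
leaked = np.zeros(n, dtype=bool)

while alive.any():
    idx = np.flatnonzero(alive)
    # One "transport step" applied to every live particle at once.
    position[idx] += rng.exponential(1.0 / SIGMA_TOTAL, idx.size)
    escaped = position[idx] >= SLAB
    leaked[idx[escaped]] = True
    alive[idx[escaped]] = False
    # Collision outcome for the particles that stayed inside the slab; survivors
    # scatter (forward-only in this toy model) and take another step next pass.
    collided = idx[~escaped]
    absorbed = rng.random(collided.size) < ABSORB_FRACTION
    alive[collided[absorbed]] = False

print("leakage probability ~", leaked.mean())
```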

  1. Hybrid Monte Carlo with Chaotic Mixing

    Kadakia, Nirag

    2016-01-01

    We propose a hybrid Monte Carlo (HMC) technique applicable to high-dimensional multivariate normal distributions that effectively samples along chaotic trajectories. The method is predicated on the freedom of choice of the HMC momentum distribution, and due to its mixing properties, exhibits sample-to-sample autocorrelations that decay far faster than those in the traditional hybrid Monte Carlo algorithm. We test the methods on distributions of varying correlation structure, finding that the proposed technique produces superior covariance estimates, is less reliant on step-size tuning, and can even function with sparse or no momentum re-sampling. The method presented here is promising for more general distributions, such as those that arise in Bayesian learning of artificial neural networks and in the state and parameter estimation of dynamical systems.
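
    For reference, here is a minimal version of the standard leapfrog HMC baseline that the chaotic-mixing variant builds on (this is not the authors' method; the bivariate normal target and tuning values are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)

# Target: zero-mean bivariate normal with the given covariance.
cov = np.array([[1.0, 0.8], [0.8, 1.0]])
prec = np.linalg.inv(cov)

def grad_neg_log_p(q):
    return prec @ q                    # gradient of 0.5 * q^T prec q

def hmc_step(q, step=0.2, n_leapfrog=20):
    p = rng.standard_normal(q.size)    # resample Gaussian momentum
    q_new, p_new = q.copy(), p.copy()
    # Leapfrog integration of the Hamiltonian dynamics.
    p_new -= 0.5 * step * grad_neg_log_p(q_new)
    for _ in range(n_leapfrog - 1):
        q_new += step * p_new
        p_new -= step * grad_neg_log_p(q_new)
    q_new += step * p_new
    p_new -= 0.5 * step * grad_neg_log_p(q_new)
    # Metropolis accept/reject on the total energy.
    h_old = 0.5 * q @ prec @ q + 0.5 * p @ p
    h_new = 0.5 * q_new @ prec @ q_new + 0.5 * p_new @ p_new
    return q_new if np.log(rng.random()) < h_old - h_new else q

q = np.zeros(2)
samples = []
for _ in range(5000):
    q = hmc_step(q)
    samples.append(q)
print("sample covariance:\n", np.round(np.cov(np.array(samples).T), 2))
```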

  2. Monte Carlo simulation of gas Cerenkov detectors

    Theoretical study of selected gamma-ray and electron diagnostics necessitates coupling Cerenkov radiation to electron/photon cascades. A Cerenkov production model and its incorporation into a general-geometry Monte Carlo coupled electron/photon transport code are discussed. A special optical photon ray-trace is implemented using bulk optical properties assigned to each Monte Carlo zone. Good agreement exists between experimental and calculated Cerenkov data for a carbon-dioxide gas Cerenkov detector experiment. Cerenkov production and threshold data are presented for a typical carbon-dioxide gas detector that converts a 16.7 MeV photon source to Cerenkov light, which is collected by optics and detected by a photomultiplier.

  3. No-compromise reptation quantum Monte Carlo

    Since its publication, the reptation quantum Monte Carlo algorithm of Baroni and Moroni (1999 Phys. Rev. Lett. 82 4745) has been applied to several important problems in physics, but its mathematical foundations are not well understood. We show that their algorithm is not of typical Metropolis-Hastings type, and we specify conditions required for the generated Markov chain to be stationary and to converge to the intended distribution. The time-step bias may add up, and in many applications it is only the middle of a reptile that is the most important. Therefore, we propose an alternative, 'no-compromise reptation quantum Monte Carlo' to stabilize the middle of the reptile. (fast track communication)

  4. Fast Lattice Monte Carlo Simulations of Polymers

    Wang, Qiang; Zhang, Pengfei

    2014-03-01

    The recently proposed fast lattice Monte Carlo (FLMC) simulations (with multiple occupancy of lattice sites (MOLS) and Kronecker δ-function interactions) give much faster/better sampling of configuration space than both off-lattice molecular simulations (with pair-potential calculations) and conventional lattice Monte Carlo simulations (with self- and mutual-avoiding walk and nearest-neighbor interactions) of polymers.[1] Quantitative coarse-graining of polymeric systems can also be performed using lattice models with MOLS.[2] Here we use several model systems, including polymer melts, solutions, blends, as well as confined and/or grafted polymers, to demonstrate the great advantages of FLMC simulations in the study of equilibrium properties of polymers.

  5. Status of Monte Carlo at Los Alamos

    Four papers were presented by Group X-6 on April 22, 1980, at the Oak Ridge Radiation Shielding Information Center (RSIC) Seminar-Workshop on Theory and Applications of Monte Carlo Methods. These papers are combined into one report for convenience and because they are related to each other. The first paper (by Thompson and Cashwell) is a general survey about X-6 and MCNP and is an introduction to the other three papers. It can also serve as a resume of X-6. The second paper (by Godfrey) explains some of the details of geometry specification in MCNP. The third paper (by Cashwell and Schrandt) illustrates calculating flux at a point with MCNP; in particular, the once-more-collided flux estimator is demonstrated. Finally, the fourth paper (by Thompson, Deutsch, and Booth) is a tutorial on some variance-reduction techniques. It should be required reading for a fledgling Monte Carlo practitioner.

  6. Monte Carlo study of real time dynamics

    Alexandru, Andrei; Bedaque, Paulo F; Vartak, Sohan; Warrington, Neill C

    2016-01-01

    Monte Carlo studies involving real time dynamics are severely restricted by the sign problem that emerges from highly oscillatory phase of the path integral. In this letter, we present a new method to compute real time quantities on the lattice using the Schwinger-Keldysh formalism via Monte Carlo simulations. The key idea is to deform the path integration domain to a complex manifold where the phase oscillations are mild and the sign problem is manageable. We use the previously introduced "contraction algorithm" to create a Markov chain on this alternative manifold. We substantiate our approach by analyzing the quantum mechanical anharmonic oscillator. Our results are in agreement with the exact ones obtained by diagonalization of the Hamiltonian. The method we introduce is generic and in principle applicable to quantum field theory albeit very slow. We discuss some possible improvements that should speed up the algorithm.

  7. Monte Carlo Simulation for Particle Detectors

    Pia, Maria Grazia

    2012-01-01

    Monte Carlo simulation is an essential component of experimental particle physics in all the phases of its life-cycle: the investigation of the physics reach of detector concepts, the design of facilities and detectors, the development and optimization of data reconstruction software, the data analysis for the production of physics results. This note briefly outlines some research topics related to Monte Carlo simulation, that are relevant to future experimental perspectives in particle physics. The focus is on physics aspects: conceptual progress beyond current particle transport schemes, the incorporation of materials science knowledge relevant to novel detection technologies, functionality to model radiation damage, the capability for multi-scale simulation, quantitative validation and uncertainty quantification to determine the predictive power of simulation. The R&D on simulation for future detectors would profit from cooperation within various components of the particle physics community, and synerg...

  8. Multilevel Monte Carlo Approaches for Numerical Homogenization

    Efendiev, Yalchin R.

    2015-10-01

    In this article, we study the application of multilevel Monte Carlo (MLMC) approaches to numerical random homogenization. Our objective is to compute the expectation of some functionals of the homogenized coefficients, or of the homogenized solutions. This is accomplished within MLMC by considering different sizes of representative volumes (RVEs). Many inexpensive computations with the smallest RVE size are combined with fewer expensive computations performed on larger RVEs. Likewise, when it comes to homogenized solutions, different levels of coarse-grid meshes are used to solve the homogenized equation. We show that, by carefully selecting the number of realizations at each level, we can achieve a speed-up in the computations in comparison to a standard Monte Carlo method. Numerical results are presented for both one-dimensional and two-dimensional test-cases that illustrate the efficiency of the approach.
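
    A generic multilevel Monte Carlo estimator has a simple structure that is independent of the homogenization application: a plain mean at the coarsest level plus corrections between successive levels, with many cheap samples at the coarse levels and few expensive samples at the fine levels. The sketch below uses an illustrative toy "solver", not the RVE computation from the article; its estimate targets the finest-level approximation, whose exact limit is 2/π.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate(level, omega):
    """Toy level-dependent approximation P_level(omega).

    Stands in for an expensive solver on a mesh/RVE whose size grows with `level`;
    the approximation error shrinks geometrically as the level increases.
    """
    return np.sin(omega) * (1.0 + 0.5 ** (level + 1))

def mlmc_estimate(n_levels=5, n0=100_000):
    total = 0.0
    for level in range(n_levels):
        n_samples = max(n0 // 4 ** level, 10)       # fewer samples on finer levels
        omega = rng.uniform(0.0, np.pi, n_samples)  # shared randomness per sample
        fine = simulate(level, omega)
        if level == 0:
            correction = fine                       # plain mean at the coarsest level
        else:
            coarse = simulate(level - 1, omega)     # same omega => small variance
            correction = fine - coarse
        total += correction.mean()
    return total

print("MLMC estimate (targets the finest level):", round(mlmc_estimate(), 4))
print("exact limit 2/pi:", round(2 / np.pi, 4))
```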

  9. Coevolution Based Adaptive Monte Carlo Localization (CEAMCL)

    Luo Ronghua

    2008-11-01

    An adaptive Monte Carlo localization algorithm based on the coevolution mechanism of ecological species is proposed. Samples are clustered into species, each of which represents a hypothesis of the robot's pose. Since the coevolution between the species ensures that the multiple distinct hypotheses can be tracked stably, the problem of premature convergence when using MCL in highly symmetric environments can be solved. The sample size can also be adjusted adaptively over time according to the uncertainty of the robot's pose by using the population growth model. In addition, by using the crossover and mutation operators of evolutionary computation, intra-species evolution can drive the samples towards the regions where the desired posterior density is large, so a small set of samples can represent the desired density well enough for precise localization. The new algorithm is termed coevolution-based adaptive Monte Carlo localization (CEAMCL). Experiments have been carried out to prove the efficiency of the new localization algorithm.

  10. Monte Carlo Shell Model Mass Predictions

    The nuclear mass calculation is discussed in terms of large-scale shell model calculations. First, the development and limitations of the conventional shell model calculations are mentioned. In order to overcome the limitations, the Quantum Monte Carlo Diagonalization (QMCD) method has been proposed. The basic formulation and features of the QMCD method are presented as well as its application to the nuclear shell model, referred to as Monte Carlo Shell Model (MCSM). The MCSM provides us with a breakthrough in shell model calculations: the structure of low-lying states can be studied with realistic interactions for a nearly unlimited variety of nuclei. Thus, the MCSM can contribute significantly to the study of nuclear masses. An application to N∼20 unstable nuclei far from the β-stability line is mentioned

  11. A Monte Carlo solution to skyshine radiation

    A Monte Carlo method was used to calculate the skyshine doses from the 2-ft exposure cell ceiling of an accelerator. Modifications were made to the Monte Carlo program MORSE to perform this analysis. Adjoint-mode calculations provided optimum Russian roulette and splitting parameters, which were later used in the forward-mode calculations. Russian roulette and splitting were used at the collision sites and at boundary crossings. The exponential transform was used for particle path-length stretching. The TIGER code was used to generate the anisotropic source term, and a P5 Legendre expansion was used to compute the cross sections. Where negative fluxes occurred at detector locations due to large-angle scattering, a macroscopic cross-section data bank was used to make Klein-Nishina and pair-production flux estimates. With the above modifications, sixty detectors at locations ranging from 10 to 300 ft from the cell wall showed good statistical responses (5 to 10% fsd).
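
    Russian roulette and splitting, mentioned above, act on the statistical weight of a particle: low-weight particles are killed with a probability that preserves the mean weight, and high-weight particles are split into several lower-weight copies. A minimal sketch of the two games (a generic illustration with arbitrary thresholds, not the MORSE implementation):

```python
import random

def russian_roulette(weight, threshold=0.1, survival_weight=0.5):
    """Kill low-weight particles without biasing the mean weight."""
    if weight >= threshold:
        return weight                      # nothing to do
    if random.random() < weight / survival_weight:
        return survival_weight             # survivor carries the boosted weight
    return 0.0                             # particle terminated

def split(weight, threshold=2.0):
    """Split a heavy particle into several copies of equal weight."""
    if weight <= threshold:
        return [weight]
    n_copies = int(weight / threshold) + 1
    return [weight / n_copies] * n_copies

# Sanity check: both games preserve the expected total weight.
random.seed(0)
n = 1_000_000
mean_after_rr = sum(russian_roulette(0.05) for _ in range(n)) / n
print("mean weight after roulette (should be ~0.05):", round(mean_after_rr, 4))
print("split of a weight-5.0 particle:", split(5.0))
```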

  12. Composite biasing in Monte Carlo radiative transfer

    Baes, Maarten; Lunttila, Tuomas; Bianchi, Simone; Camps, Peter; Juvela, Mika; Kuiper, Rolf

    2016-01-01

    Biasing or importance sampling is a powerful technique in Monte Carlo radiative transfer, and can be applied in different forms to increase the accuracy and efficiency of simulations. One of the drawbacks of the use of biasing is the potential introduction of large weight factors. We discuss a general strategy, composite biasing, to suppress the appearance of large weight factors. We use this composite biasing approach for two different problems faced by current state-of-the-art Monte Carlo radiative transfer codes: the generation of photon packages from multiple components, and the penetration of radiation through high optical depth barriers. In both cases, the implementation of the relevant algorithms is trivial and does not interfere with any other optimisation techniques. Through simple test models, we demonstrate the general applicability, accuracy and efficiency of the composite biasing approach. In particular, for the penetration of high optical depths, the gain in efficiency is spectacular for the spe...

  13. Quantum Monte Carlo calculations for carbon nanotubes

    Luu, Thomas; Lähde, Timo A.

    2016-04-01

    We show how lattice quantum Monte Carlo can be applied to the electronic properties of carbon nanotubes in the presence of strong electron-electron correlations. We employ the path-integral formalism and use methods developed within the lattice QCD community for our numerical work. Our lattice Hamiltonian is closely related to the hexagonal Hubbard model augmented by a long-range electron-electron interaction. We apply our method to the single-quasiparticle spectrum of the (3,3) armchair nanotube configuration, and consider the effects of strong electron-electron correlations. Our approach is equally applicable to other nanotubes, as well as to other carbon nanostructures. We benchmark our Monte Carlo calculations against the two- and four-site Hubbard models, where a direct numerical solution is feasible.

  14. Status of Monte Carlo at Los Alamos

    At Los Alamos the early work of Fermi, von Neumann, and Ulam has been developed and supplemented by many followers, notably Cashwell and Everett, and the main product today is the continuous-energy, general-purpose, generalized-geometry, time-dependent, coupled neutron-photon transport code called MCNP. The Los Alamos Monte Carlo research and development effort is concentrated in Group X-6. MCNP treats an arbitrary three-dimensional configuration of arbitrary materials in geometric cells bounded by first- and second-degree surfaces and some fourth-degree surfaces (elliptical tori). Monte Carlo has evolved into perhaps the main method for radiation transport calculations at Los Alamos. MCNP is used in every technical division at the Laboratory by over 130 users about 600 times a month accounting for nearly 200 hours of CDC-7600 time

  15. Monte Carlo modeling of liquid scintillation spectra

    Šimek, Ondřej; Šídlová, V.; Světlík, Ivo; Tomášková, Lenka

    Prague: ČVUT v Praze, 2007, pp. 90-93. ISBN 978-80-01-03901-4. [Days of Radiation Protection (Dny radiační ochrany) /29./, Kouty nad Desnou, Hrubý Jeseník (CZ), 05.11.2007-09.11.2007] Institutional research plan: CEZ:AV0Z10480505 Keywords: Monte Carlo modelling * liquid scintillation spectra * energy deposition Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders

  16. Monte Carlo Simulations of Star Clusters

    Giersz, M

    2000-01-01

    A revision of Stodółkiewicz's Monte Carlo code is used to simulate the evolution of large star clusters. The survey on the evolution of multi-mass N-body systems influenced by the tidal field of a parent galaxy and by stellar evolution is discussed. For the first time, a simulation on a "star-by-star" basis of the evolution of a 1,000,000-body star cluster is presented.

  17. Replica Exchange for Reactive Monte Carlo Simulations

    Turner, C.H.; Brennan, J.K.; Lísal, Martin

    2007-01-01

    Vol. 111, No. 43 (2007), pp. 15706-15715. ISSN 1932-7447. R&D Projects: GA ČR GA203/05/0725; GA AV ČR 1ET400720409; GA AV ČR 1ET400720507 Institutional research plan: CEZ:AV0Z40720504 Keywords: Monte Carlo * simulation * reactive system Subject RIV: CF - Physical; Theoretical Chemistry

  18. A Ballistic Monte Carlo Approximation of π

    Dumoulin, Vincent

    2014-01-01

    We compute a Monte Carlo approximation of π using importance sampling with shots coming out of a Mossberg 500 pump-action shotgun as the proposal distribution. An approximate value of 3.136 is obtained, corresponding to a 0.17% error on the exact value of π. To our knowledge, this represents the first attempt at estimating π using such a method, thus opening up new perspectives towards computing mathematical constants using everyday tools.
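
    For comparison with the importance-sampling estimate described above, the textbook Monte Carlo estimate of π (uniform sampling over the unit square rather than shotgun pellets) looks as follows:

```python
import random

random.seed(42)

def estimate_pi(n):
    """Fraction of uniform points in the unit square inside the quarter circle, times 4."""
    inside = sum(1 for _ in range(n)
                 if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4.0 * inside / n

PI = 3.141592653589793
for n in (1_000, 100_000, 10_000_000):
    est = estimate_pi(n)
    print(f"n = {n:>10}: pi ~ {est:.5f}  (error {abs(est - PI) / PI:.3%})")
```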

  19. Topological zero modes in Monte Carlo simulations

    We present an improvement of global Metropolis updating steps, the instanton hits, used in a hybrid Monte Carlo simulation of the two-flavor Schwinger model with staggered fermions. These hits are designed to change the topological sector of the gauge field. In order to match these hits to an unquenched simulation with pseudofermions, the approximate zero mode structure of the lattice Dirac operator has to be considered explicitly. (orig.)

  20. The lund Monte Carlo for jet fragmentation

    We present a Monte Carlo program based on the Lund model for jet fragmentation. Quark, gluon, diquark and hadron jets are considered. Special emphasis is put on the fragmentation of colour singlet jet systems, for which energy, momentum and flavour are conserved explicitly. The model for decays of unstable particles, in particular the weak decay of heavy hadrons, is described. The central part of the paper is a detailed description on how to use the FORTRAN 77 program. (Author)

  1. New Dynamic Monte Carlo Renormalization Group Method

    Lacasse, Martin-D.; Vinals, Jorge; Grant, Martin

    1992-01-01

    The dynamical critical exponent of the two-dimensional spin-flip Ising model is evaluated by a Monte Carlo renormalization group method involving a transformation in time. The results agree very well with a finite-size scaling analysis performed on the same data. The value of z = 2.13 ± 0.01 is obtained, which is consistent with most recent estimates.

  2. Autocorrelations in hybrid Monte Carlo simulations

    Simulations of QCD suffer from severe critical slowing down towards the continuum limit. This problem is known to be prominent in the topological charge, however, all observables are affected to various degree by these slow modes in the Monte Carlo evolution. We investigate the slowing down in high statistics simulations and propose a new error analysis method, which gives a realistic estimate of the contribution of the slow modes to the errors. (orig.)

  3. Monte Carlo methods for preference learning

    Viappiani, P.

    2012-01-01

    Utility elicitation is an important component of many applications, such as decision support systems and recommender systems. Such systems query users about their preferences and give recommendations based on the system's belief about the utility function. Critical to these applications is the acquisition of a prior distribution over the utility parameters and the possibility of real-time Bayesian inference. In this paper we consider Monte Carlo methods for these problems.

  4. The Moment Guided Monte Carlo Method

    Degond, Pierre; Dimarco, Giacomo; Pareschi, Lorenzo

    2009-01-01

    In this work we propose a new approach for the numerical simulation of kinetic equations through Monte Carlo schemes. We introduce a new technique which permits to reduce the variance of particle methods through a matching with a set of suitable macroscopic moment equations. In order to guarantee that the moment equations provide the correct solutions, they are coupled to the kinetic equation through a non equilibrium term. The basic idea, on which the method relies, consists in guiding the p...

  5. Handbook of Markov chain Monte Carlo

    Brooks, Steve

    2011-01-01

    ""Handbook of Markov Chain Monte Carlo"" brings together the major advances that have occurred in recent years while incorporating enough introductory material for new users of MCMC. Along with thorough coverage of the theoretical foundations and algorithmic and computational methodology, this comprehensive handbook includes substantial realistic case studies from a variety of disciplines. These case studies demonstrate the application of MCMC methods and serve as a series of templates for the construction, implementation, and choice of MCMC methodology.

  6. Simulated Annealing using Hybrid Monte Carlo

    Salazar, Rafael; Toral, Raúl

    1997-01-01

    We propose a variant of the simulated annealing method for the optimization of multivariate differentiable functions. The method uses global updates via the hybrid Monte Carlo algorithm, in its generalized version, for the proposal of new configurations. We show how this choice can improve upon the performance of simulated annealing methods (mainly when the number of variables is large) by allowing a more effective searching scheme and a faster annealing schedule.
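
    A minimal sketch, assuming a Rastrigin-like toy cost function and a geometric cooling schedule, of simulated annealing in which every move is a global hybrid Monte Carlo (leapfrog) update of the whole configuration. The step size is scaled with the square root of the temperature purely as a stabilizing heuristic; nothing here is taken from the authors' implementation.

```python
import numpy as np

def leapfrog(x, p, grad, eps, n_leap):
    """Leapfrog integration of Hamiltonian dynamics (volume preserving,
    time reversible), used as the global HMC proposal."""
    x, p = x.copy(), p.copy()
    p -= 0.5 * eps * grad(x)
    for i in range(n_leap):
        x += eps * p
        if i < n_leap - 1:
            p -= eps * grad(x)
    p -= 0.5 * eps * grad(x)
    return x, p

def anneal_hmc(u, grad_u, x0, t0=5.0, t_min=1e-3, cool=0.98,
               eps=0.05, n_leap=20, sweeps=50, seed=0):
    """Simulated annealing whose moves are global hybrid Monte Carlo
    updates of the whole configuration at the current temperature."""
    rng = np.random.default_rng(seed)
    x, t = np.array(x0, dtype=float), t0
    while t > t_min:
        pot = lambda z: u(z) / t
        gpot = lambda z: grad_u(z) / t
        eps_t = eps * np.sqrt(t)          # heuristic step-size scaling
        for _ in range(sweeps):
            p = rng.standard_normal(x.size)
            x_new, p_new = leapfrog(x, p, gpot, eps_t, n_leap)
            dh = (pot(x_new) + 0.5 * p_new @ p_new) - (pot(x) + 0.5 * p @ p)
            if rng.random() < np.exp(min(0.0, -dh)):
                x = x_new
        t *= cool                          # geometric annealing schedule
    return x

if __name__ == "__main__":
    # Toy multimodal cost function (Rastrigin-like) in 5 dimensions.
    u = lambda z: np.sum(z ** 2 + 2.0 * (1.0 - np.cos(2 * np.pi * z)))
    grad_u = lambda z: 2.0 * z + 4.0 * np.pi * np.sin(2 * np.pi * z)
    print("minimum found near:", anneal_hmc(u, grad_u, x0=np.full(5, 3.0)))
```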

  7. Coevolution Based Adaptive Monte Carlo Localization (CEAMCL)

    Luo Ronghua; Hong Bingrong

    2004-01-01

    An adaptive Monte Carlo localization algorithm based on coevolution mechanism of ecological species is proposed. Samples are clustered into species, each of which represents a hypothesis of the robot's pose. Since the coevolution between the species ensures that the multiple distinct hypotheses can be tracked stably, the problem of premature convergence when using MCL in highly symmetric environments can be solved. And the sample size can be adjusted adaptively over time according to the unce...

  8. Lookahead Strategies for Sequential Monte Carlo

    Lin, Ming; Chen, Rong; Liu, Jun

    2013-01-01

    Based on the principles of importance sampling and resampling, sequential Monte Carlo (SMC) encompasses a large set of powerful techniques dealing with complex stochastic dynamic systems. Many of these systems possess strong memory, with which future information can help sharpen the inference about the current state. By providing theoretical justification of several existing algorithms and introducing several new ones, we study systematically how to construct efficient SMC algorithms to take ...

  9. Archimedes, the Free Monte Carlo simulator

    Sellier, Jean Michel D.

    2012-01-01

    Archimedes is the GNU package for Monte Carlo simulations of electron transport in semiconductor devices. The first release appeared in 2004 and since then it has been improved with many new features like quantum corrections, magnetic fields, new materials, GUI, etc. This document represents the first attempt to have a complete manual. Many of the Physics models implemented are described and a detailed description is presented to make the user able to write his/her own input deck. Please, feel free to contact the author if you want to contribute to the project.

  10. A Monte Carlo for BFKL Physics

    Orr, Lynne H.; Stirling, W. J.

    2000-01-01

    Virtual photon scattering in e^+e^- collisions can result in events with the electron-positron pair at large rapidity separation with hadronic activity in between. The BFKL equation resums large logarithms that dominate the cross section for this process. We report here on a Monte Carlo method for solving the BFKL equation that allows kinematic constraints to be taken into account. The application to e^+e^- collisions is in progress.

  11. On adaptive Markov chain Monte Carlo algorithms

    Atchadé, Yves F.; Rosenthal, Jeffrey S.

    2005-01-01

    We look at adaptive Markov chain Monte Carlo algorithms that generate stochastic processes based on sequences of transition kernels, where each transition kernel is allowed to depend on the history of the process. We show under certain conditions that the stochastic process generated is ergodic, with appropriate stationary distribution. We use this result to analyse an adaptive version of the random walk Metropolis algorithm where the scale parameter σ is sequentially adapted using a Robbins-Monro type of algorithm.
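
    A minimal sketch (not the authors' construction) of a one-dimensional random walk Metropolis sampler whose proposal scale is adapted with a Robbins-Monro recursion toward a target acceptance rate; the target rate of 0.44 and the gain sequence are conventional assumptions.

```python
import numpy as np

def adaptive_rwm(logpi, x0, n_iter=50_000, target_acc=0.44, seed=0):
    """Random walk Metropolis with Robbins-Monro adaptation of log(sigma)."""
    rng = np.random.default_rng(seed)
    x = float(x0)
    log_sigma = 0.0
    samples = np.empty(n_iter)
    for i in range(n_iter):
        prop = x + np.exp(log_sigma) * rng.standard_normal()
        acc_prob = np.exp(min(0.0, logpi(prop) - logpi(x)))
        if rng.random() < acc_prob:
            x = prop
        # Robbins-Monro step with diminishing gains: push the acceptance
        # probability toward the target rate.
        gamma = 1.0 / (i + 1) ** 0.6
        log_sigma += gamma * (acc_prob - target_acc)
        samples[i] = x
    return samples, np.exp(log_sigma)

if __name__ == "__main__":
    logpi = lambda x: -0.5 * x * x          # standard normal target
    draws, sigma = adaptive_rwm(logpi, x0=3.0)
    print("adapted sigma:", sigma, " sample variance:", draws[10_000:].var())
```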

  12. Tracklength biassing in Monte Carlo radiation transport

    Tracklength stretching is employed in deep penetration Monte Carlo studies for variance reduction. Incorporating a dependence of the biassing on the angular disposition of the track improves the procedure. Linear and exponential forms for this dependence are investigated here, using Spanier's self-learning technique. Suitable biassing parameters are worked out for representative shield systems, for use in practical simulations. Of the two, we find that the exponential scheme performs better. (orig.)
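
    A hedged one-dimensional illustration of tracklength stretching: free-flight distances through a purely absorbing slab are sampled from a stretched (reduced cross section) exponential, and the bias is removed by an importance weight so that the deep-penetration transmission estimate stays unbiased. The slab thickness, cross section and stretching parameter are illustrative assumptions, and the angular dependence discussed above is not modelled.

```python
import numpy as np

def transmission(sigma_t, thickness, n_hist, stretch=0.0, seed=0):
    """Estimate exp(-sigma_t * thickness) for a purely absorbing slab.

    stretch = 0 reproduces analog sampling; 0 < stretch < 1 samples path
    lengths from a reduced cross section sigma_t * (1 - stretch) and
    corrects with importance weights (biasing toward deep penetration)."""
    rng = np.random.default_rng(seed)
    sigma_b = sigma_t * (1.0 - stretch)           # biased cross section
    score = 0.0
    for _ in range(n_hist):
        s = -np.log(rng.random()) / sigma_b       # biased free flight
        if s > thickness:
            # weight = true pdf / biased pdf, evaluated at the sampled s
            w = (sigma_t / sigma_b) * np.exp(-(sigma_t - sigma_b) * s)
            score += w
    return score / n_hist

if __name__ == "__main__":
    sigma_t, d = 1.0, 10.0
    print("analytic  :", np.exp(-sigma_t * d))
    print("analog    :", transmission(sigma_t, d, 200_000, stretch=0.0))
    print("stretched :", transmission(sigma_t, d, 200_000, stretch=0.7))
```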

  13. Introduction to the Monte Carlo methods

    Codes illustrating the use of Monte Carlo methods in high energy physics such as the inverse transformation method, the ejection method, the particle propagation through the nucleus, the particle interaction with the nucleus, etc. are presented. A set of useful algorithms of random number generators is given (the binomial distribution, the Poisson distribution, β-distribution, γ-distribution and normal distribution). 5 figs., 1 tab
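
    To make two of the sampling techniques named above concrete, here is a hedged sketch (not taken from the lecture notes themselves) of the inverse transformation method for an exponential distribution and of the rejection ("ejection") method for a bounded density; the example distributions are arbitrary choices.

```python
import numpy as np

def sample_exponential(lam, n, rng):
    """Inverse transformation method: solve F(x) = u for u ~ U(0,1),
    giving x = -ln(1 - u) / lam."""
    u = rng.random(n)
    return -np.log1p(-u) / lam

def sample_rejection(pdf, x_lo, x_hi, pdf_max, n, rng):
    """Rejection method for a bounded pdf on [x_lo, x_hi]:
    accept x ~ U(x_lo, x_hi) when u * pdf_max < pdf(x)."""
    out = []
    while len(out) < n:
        x = rng.uniform(x_lo, x_hi)
        if rng.random() * pdf_max < pdf(x):
            out.append(x)
    return np.array(out)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    e = sample_exponential(2.0, 100_000, rng)
    print("exponential mean (expect 0.5):", e.mean())
    beta_pdf = lambda x: 6.0 * x * (1.0 - x)        # Beta(2, 2) density
    b = sample_rejection(beta_pdf, 0.0, 1.0, 1.5, 100_000, rng)
    print("Beta(2,2) mean (expect 0.5)  :", b.mean())
```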

  14. Monte Carlo simulation and numerical integration

    Geweke, John F.

    1995-01-01

    This is a survey of simulation methods in economics, with a specific focus on integration problems. It describes acceptance methods, importance sampling procedures, and Markov chain Monte Carlo methods for simulation from univariate and multivariate distributions and their application to the approximation of integrals. The exposition gives emphasis to combinations of different approaches and assessment of the accuracy of numerical approximations to integrals and expectations. The survey illus...
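
    As a hedged complement to the survey's themes, the following sketch approximates a one-dimensional integral both by crude Monte Carlo and by importance sampling and reports the empirical standard errors; the integrand and the proposal density are arbitrary choices.

```python
import numpy as np

def crude_mc(f, n, rng):
    """Crude Monte Carlo estimate of the integral of f over [0, 1]."""
    y = f(rng.random(n))
    return y.mean(), y.std(ddof=1) / np.sqrt(n)

def importance_mc(f, n, rng, alpha=2.0):
    """Importance sampling with proposal g(x) = alpha * x**(alpha - 1)
    on [0, 1], sampled by inverse transform x = u**(1/alpha)."""
    u = rng.random(n)
    x = u ** (1.0 / alpha)
    w = f(x) / (alpha * x ** (alpha - 1.0))
    return w.mean(), w.std(ddof=1) / np.sqrt(n)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f = lambda x: 3.0 * x ** 2          # exact integral on [0, 1] is 1
    print("crude MC   (estimate, std err):", crude_mc(f, 100_000, rng))
    print("importance (estimate, std err):", importance_mc(f, 100_000, rng))
```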

  15. Quantum Monte Carlo for vibrating molecules

    Quantum Monte Carlo (QMC) has successfully computed the total electronic energies of atoms and molecules. The main goal of this work is to use correlation function quantum Monte Carlo (CFQMC) to compute the vibrational state energies of molecules given a potential energy surface (PES). In CFQMC, an ensemble of random walkers simulate the diffusion and branching processes of the imaginary-time time dependent Schroedinger equation in order to evaluate the matrix elements. The program QMCVIB was written to perform multi-state VMC and CFQMC calculations and employed for several calculations of the H2O and C3 vibrational states, using 7 PES's, 3 trial wavefunction forms, two methods of non-linear basis function parameter optimization, and on both serial and parallel computers. In order to construct accurate trial wavefunctions different wavefunction forms were required for H2O and C3. In order to construct accurate trial wavefunctions for C3, the non-linear parameters were optimized with respect to the sum of the energies of several low-lying vibrational states. In order to stabilize the statistical error estimates for C3 the Monte Carlo data was collected into blocks. Accurate vibrational state energies were computed using both serial and parallel QMCVIB programs. Comparison of vibrational state energies computed from the three C3 PES's suggested that a non-linear equilibrium geometry PES is the most accurate and that discrete potential representations may be used to conveniently determine vibrational state energies

  16. jTracker and Monte Carlo Comparison

    Selensky, Lauren; SeaQuest/E906 Collaboration

    2015-10-01

    SeaQuest is designed to observe the characteristics and behavior of 'sea-quarks' in a proton by reconstructing them from the subatomic particles produced in a collision. The 120 GeV beam from the main injector collides with a fixed target and then passes through a series of detectors which record information about the particles produced in the collision. However, this data becomes meaningful only after it has been processed, stored, analyzed, and interpreted. Several programs are involved in this process. jTracker (sqerp) reads wire or hodoscope hits and reconstructs the tracks of potential dimuon pairs from a run, and Geant4 Monte Carlo simulates dimuon production and background noise from the beam. During track reconstruction, an event must meet the criteria set by the tracker to be considered a viable dimuon pair; this ensures that relevant data is retained. As a check, a comparison between a new version of jTracker and Monte Carlo was made in order to see how accurately jTracker could reconstruct the events created by Monte Carlo. In this presentation, the results of this comparison and their potential effects on the programming will be shown. This work is supported by U.S. DOE MENP Grant DE-FG02-03ER41243.

  17. Monte Carlo small-sample perturbation calculations

    Two different Monte Carlo methods have been developed for benchmark computations of small-sample worths in simplified geometries. The first is basically a standard Monte Carlo perturbation method in which neutrons are steered towards the sample by roulette and splitting. One finds, however, that two variance reduction methods are required to make this sort of perturbation calculation feasible. First, neutrons that have passed through the sample must be exempted from roulette. Second, neutrons must be forced to undergo scattering collisions in the sample. Even when such methods are invoked, however, it is still necessary to exaggerate the volume fraction of the sample by drastically reducing the size of the core. The benchmark calculations are then used to test more approximate methods, and not directly to analyze experiments. In the second method the flux at the surface of the sample is assumed to be known. Neutrons entering the sample are drawn from this known flux and tracked by Monte Carlo. The effect of the sample on the fission rate is then inferred from the histories of these neutrons. The characteristics of both of these methods are explored empirically.

  18. A new method for commissioning Monte Carlo treatment planning systems

    Aljarrah, Khaled Mohammed

    2005-11-01

    The Monte Carlo method is an accurate method for solving numerical problems in different fields. It has been used for accurate radiation dose calculation for radiation treatment of cancer. However, the modeling of an individual radiation beam produced by a medical linear accelerator for Monte Carlo dose calculation, i.e., the commissioning of a Monte Carlo treatment planning system, has been the bottleneck for the clinical implementation of Monte Carlo treatment planning. In this study a new method has been developed to determine the parameters of the initial electron beam incident on the target for a clinical linear accelerator. The interaction of the initial electron beam with the accelerator target produces x-rays and secondary charged particles. After successive interactions in the linac head components, the x-ray photons and the secondary charged particles interact with the patient's anatomy and deliver dose to the region of interest. The determination of the initial electron beam parameters is important for estimating the dose delivered to the patients. These parameters, such as beam energy and radial intensity distribution, are usually estimated through a trial and error process. In this work an easy and efficient method was developed to determine these parameters. This was accomplished by comparing calculated 3D dose distributions, for a grid of assumed beam energies and radii, with measured data in a water phantom. Different cost functions were studied to choose the appropriate function for the data comparison. The beam parameters were then determined with this method. Because linacs of the same type are assumed to be identical in their geometry and to differ only in their initial phase space parameters, the results of this method can be used as source data to commission other machines of the same type.
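
    The grid-search idea described above can be illustrated with a toy sketch: for each assumed (energy, radius) pair a modelled dose curve is compared with measurement using a root-mean-square cost function, and the best pair is retained. The function names, the toy dose model and the grids are hypothetical placeholders, not the thesis' code or beam model.

```python
import numpy as np

def rms_cost(calculated, measured):
    """Root-mean-square difference between a calculated and a measured
    depth-dose curve (both sampled on the same depth grid)."""
    return np.sqrt(np.mean((calculated - measured) ** 2))

def commission_beam(measured, dose_model, energies, radii):
    """Pick the (energy, radius) pair whose modelled dose best matches the
    measurement.  `dose_model(E, r)` stands in for a library of precomputed
    Monte Carlo dose distributions (hypothetical placeholder)."""
    best = (None, None, np.inf)
    for E in energies:
        for r in radii:
            cost = rms_cost(dose_model(E, r), measured)
            if cost < best[2]:
                best = (E, r, cost)
    return best

if __name__ == "__main__":
    depths = np.linspace(0.0, 30.0, 61)                 # depth grid in cm
    # Hypothetical toy dose model: exponential falloff shaped by E and r.
    toy = lambda E, r: np.exp(-depths / (0.8 * E)) * (1.0 + 0.05 * r)
    measured = toy(6.2, 1.4) + np.random.default_rng(0).normal(0, 0.002, depths.size)
    print(commission_beam(measured, toy,
                          energies=np.arange(5.0, 7.6, 0.2),
                          radii=np.arange(0.5, 2.1, 0.1)))
```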

  19. Guideline of Monte Carlo calculation. Neutron/gamma ray transport simulation by Monte Carlo method

    2002-01-01

    This report condenses basic theories and advanced applications of neutron/gamma ray transport calculations in many fields of nuclear energy research. Chapters 1 through 5 treat the historical progress of Monte Carlo methods, general issues of variance reduction techniques, and the cross section libraries used in continuous energy Monte Carlo codes. In chapter 6, the following issues are discussed: fusion benchmark experiments, design of ITER, experiment analyses of the fast critical assembly, core analyses of JMTR, simulation of a pulsed neutron experiment, core analyses of HTTR, duct streaming calculations, bulk shielding calculations, and neutron/gamma ray transport calculations of the Hiroshima atomic bomb. Chapters 8 and 9 treat function enhancements of the MCNP and MVP codes, and the parallel processing of Monte Carlo calculations, respectively. Important references are attached at the end of this report.

  20. Monte Carlo modelling for individual monitoring

    Full text: Individual monitoring techniques provide suitable tools for the estimate of the personal dose equivalent Hp(d), representative of the effective dose in case of external irradiation, or for the evaluation of the committed effective dose by inference from activity measurements in case of internal contamination. In both these fields Monte Carlo techniques play a crucial role: they can provide a series of parameters that are usually difficult, sometimes impossible, to assess experimentally. The aim of this paper is to give a panoramic view of Monte Carlo studies in the field of individual monitoring for external exposures; internal dosimetry applications are briefly summarized in another paper. The operative practice in the field of occupational exposure relies on the employment of personal dosemeters to be worn appropriately on the body in order to guarantee a reliable estimate of the radiation protection quantities (i.e. effective dose or equivalent dose). Personal dosemeters are calibrated in terms of the ICRU operational quantity personal dose equivalent, Hp(d), which should, in principle, represent a reasonably conservative approximation of the radiation protection quantity (this condition is not fulfilled in a specific neutron energy range). All the theoretical and practical implementation of photon individual monitoring relies on two main aspects: the definition of the operational quantities and the calculation of the corresponding conversion coefficients for the field quantities (fluence and air kerma); and the characterization of individual dosemeters in terms of these operational quantities, with the associated energy and angular type test evaluations carried out on suitable calibration phantoms. For the first aspect (evaluation of conversion coefficients) rather exhaustive tabulations of Monte Carlo evaluated conversion coefficients have been published in ICRP and ICRU reports as well as in the open literature. For the second aspect (type test and calibration

  1. Quantum Monte Carlo for vibrating molecules

    Brown, W.R. [Univ. of California, Berkeley, CA (United States), Chemistry Dept.; Lawrence Berkeley National Lab., CA (United States), Chemical Sciences Div.]

    1996-08-01

    Quantum Monte Carlo (QMC) has successfully computed the total electronic energies of atoms and molecules. The main goal of this work is to use correlation function quantum Monte Carlo (CFQMC) to compute the vibrational state energies of molecules given a potential energy surface (PES). In CFQMC, an ensemble of random walkers simulate the diffusion and branching processes of the imaginary-time time dependent Schroedinger equation in order to evaluate the matrix elements. The program QMCVIB was written to perform multi-state VMC and CFQMC calculations and employed for several calculations of the H2O and C3 vibrational states, using 7 PES's, 3 trial wavefunction forms, two methods of non-linear basis function parameter optimization, and on both serial and parallel computers. In order to construct accurate trial wavefunctions different wavefunction forms were required for H2O and C3. In order to construct accurate trial wavefunctions for C3, the non-linear parameters were optimized with respect to the sum of the energies of several low-lying vibrational states. In order to stabilize the statistical error estimates for C3 the Monte Carlo data was collected into blocks. Accurate vibrational state energies were computed using both serial and parallel QMCVIB programs. Comparison of vibrational state energies computed from the three C3 PES's suggested that a non-linear equilibrium geometry PES is the most accurate and that discrete potential representations may be used to conveniently determine vibrational state energies.

  2. A Monte Carlo approach to water management

    Koutsoyiannis, D.

    2012-04-01

    Common methods for making optimal decisions in water management problems are insufficient. Linear programming methods are inappropriate because hydrosystems are nonlinear with respect to their dynamics, operation constraints and objectives. Dynamic programming methods are inappropriate because water management problems cannot be divided into sequential stages. Also, these deterministic methods cannot properly deal with the uncertainty of future conditions (inflows, demands, etc.). Even stochastic extensions of these methods (e.g. linear-quadratic-Gaussian control) necessitate such drastic oversimplifications of hydrosystems that the obtained results may be irrelevant to real-world problems. However, a Monte Carlo approach is feasible and can form a general methodology applicable to any type of hydrosystem. This methodology uses stochastic simulation to generate system inputs, either unconditional or conditioned on a prediction, if available, and represents the operation of the entire system through a simulation model as faithful as possible, without demanding a specific mathematical form that would imply oversimplifications. Such representation fully respects the physical constraints, while at the same time it evaluates the system operation constraints and objectives in probabilistic terms, and derives their distribution functions and statistics through Monte Carlo simulation. As the performance criteria of a hydrosystem operation will generally be highly nonlinear and highly nonconvex functions of the control variables, a second Monte Carlo procedure, implementing stochastic optimization, is necessary to optimize system performance and evaluate the control variables of the system. The latter is facilitated if the entire representation is parsimonious, i.e. if the number of control variables is kept at a minimum by involving a suitable system parameterization. The approach is illustrated through three examples for (a) a hypothetical system of two reservoirs
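
    A toy sketch in the spirit of the methodology described above (not the author's model): synthetic inflows are generated by a lognormal AR(1) process, a single reservoir with a simple release rule is simulated, reliability is evaluated in probabilistic terms, and the release target is then tuned by a second Monte Carlo loop implemented here as plain random search. All numbers are illustrative assumptions.

```python
import numpy as np

def synthetic_inflows(n_years, rng, mean=100.0, cv=0.5, rho=0.3):
    """Lognormal AR(1) synthetic annual inflows (a simple stochastic generator)."""
    s = np.sqrt(np.log(1.0 + cv ** 2))
    m = np.log(mean) - 0.5 * s ** 2
    z = np.empty(n_years)
    z[0] = rng.standard_normal()
    for t in range(1, n_years):
        z[t] = rho * z[t - 1] + np.sqrt(1 - rho ** 2) * rng.standard_normal()
    return np.exp(m + s * z)

def reliability(release_target, inflows, capacity=300.0):
    """Fraction of years in which a single reservoir meets the demand."""
    storage, met = capacity / 2.0, 0
    for q in inflows:
        storage = min(storage + q, capacity)
        release = min(release_target, storage)
        storage -= release
        met += release >= release_target
    return met / len(inflows)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    flows = synthetic_inflows(2_000, rng)
    # Stochastic optimization by random search: the largest release target
    # that still achieves 95 % reliability in the Monte Carlo simulation.
    best = 0.0
    for _ in range(200):
        target = rng.uniform(50.0, 150.0)
        if reliability(target, flows) >= 0.95 and target > best:
            best = target
    print("best release target:", best)
```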

  3. Monte Carlo simulation for Kaonic deuterium studies

    Full text: The SIDDHARTA experiment at the DAFNE collider measured the shift and width of the ground level in kaonic hydrogen caused by the strong interaction between the kaons and protons. The measurement of the X-ray transitions to the 1s level in kaonic deuterium will allow, together with the available results from kaonic hydrogen, the extraction of the isospin-dependent antikaon-nucleon scattering lengths. I will present the Monte Carlo simulation of the SIDDHARTA-2 setup in the framework of GEANT4. The program is used to optimize the critical parameters of the setup in order to perform the kaonic deuterium measurement. (author)

  4. Monte Carlo simulations for heavy ion dosimetry

    Geithner, Oksana

    2006-01-01

    Water-to-air stopping power ratio calculations for the ionization chamber dosimetry of clinically relevant ion beams with initial energies from 50 to 450 MeV/u have been performed using the Monte Carlo technique. To simulate the transport of a particle in water, the computer code SHIELD-HIT v2 was used, which is a substantially modified version of its predecessor SHIELD-HIT v1. The code was partially rewritten, replacing formerly used single precision variables with double precision variables...

  5. Monte Carlo methods for applied scientists

    Dimov, Ivan T

    2007-01-01

    The Monte Carlo method is inherently parallel and the extensive and rapid development in parallel computers, computational clusters and grids has resulted in renewed and increasing interest in this method. At the same time there has been an expansion in the application areas and the method is now widely used in many important areas of science including nuclear and semiconductor physics, statistical mechanics and heat and mass transfer. This book attempts to bridge the gap between theory and practice concentrating on modern algorithmic implementation on parallel architecture machines. Although

  6. Variation After Response in Quantum Monte Carlo

    Neuscamman, Eric

    2016-01-01

    We present a new method for modeling electronically excited states that overcomes a key failing of linear response theory by allowing the underlying ground state ansatz to relax in the presence of an excitation. The method is variational, has a cost similar to ground state variational Monte Carlo, and admits both open and periodic boundary conditions. We present preliminary numerical results showing that, when paired with the Jastrow antisymmetric geminal power ansatz, the variation-after-response formalism delivers accuracies for valence and charge transfer single excitations on par with equation of motion coupled cluster, while surpassing even this very high-level method's accuracy for excitations with significant doubly excited character.

  7. Monte Carlo method in radiation transport problems

    In neutral radiation transport problems (neutrons, photons), two values are important: the flux in the phase space and the density of particles. Solving the problem with the Monte Carlo method involves, among other things, building a statistical process (called the play) and assigning a numerical value to a variable x (this assignment is called the score). Sampling techniques are presented. The necessity of biasing the play is demonstrated, and a biased simulation is carried out. Finally, current developments (for instance, the rewriting of programs) are presented; two of the reasons for them are the advent of vector computation and the treatment of photon and neutron transport in media containing voids.

  8. Introduction to Monte-Carlo method

    We first recall some well known facts about random variables and sampling. Then we define the Monte-Carlo method in the case where one wants to compute a given integral. Afterwards, we move on to discrete Markov chains, for which we define random walks, and apply them to finite difference approximations of diffusion equations. Finally we consider Markov chains with continuous state (but discrete time), transition probabilities and random walks, which are the main subject of this work. The applications are: diffusion and advection equations, and the linear transport equation with scattering.
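
    As a hedged illustration of the random-walk connection mentioned above, the sketch below estimates the solution of the discrete Laplace (steady diffusion) equation at a single interior grid point: the five-point finite-difference stencil is read as a Markov chain that walks to the boundary, and the boundary values reached are averaged. The grid, boundary data and number of walks are arbitrary assumptions.

```python
import numpy as np

def laplace_at_point(i0, j0, boundary, n_grid=20, n_walks=20_000, seed=0):
    """Estimate u(i0, j0) for the 5-point discrete Laplace equation on an
    n_grid x n_grid square with Dirichlet data given by `boundary(i, j)`.
    Each walker steps to one of the four neighbours with probability 1/4
    until it hits the boundary; the solution is the mean boundary value."""
    rng = np.random.default_rng(seed)
    steps = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])
    total = 0.0
    for _ in range(n_walks):
        i, j = i0, j0
        while 0 < i < n_grid - 1 and 0 < j < n_grid - 1:
            di, dj = steps[rng.integers(4)]
            i, j = i + di, j + dj
        total += boundary(i, j)
    return total / n_walks

if __name__ == "__main__":
    n = 20
    # Boundary condition: u = 1 on one edge, 0 on the other three edges,
    # so the exact value at the centre is 1/4 by symmetry.
    bc = lambda i, j: 1.0 if i == n - 1 else 0.0
    print("u at centre ~", laplace_at_point(n // 2, n // 2, bc, n_grid=n))
```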

  9. Monte Carlo simulation of block copolymer brushes

    We studied a simplified model of a polymer brush formed by linear chains, which were restricted to a simple cubic lattice. The chain macromolecules consisted of a sequence of two kinds of segment, arranged in a specific sequence. The chains were grafted to an impenetrable surface, i.e. they were terminally attached to the surface at one end. The number of chains was varied from low to high grafting density. The model system was studied under different solvent quality, from good to poor solvent. The properties of this model system were studied by means of Monte Carlo simulations. The sampling algorithm was based on local changes of the chain's conformations

  10. by means of FLUKA Monte Carlo method

    Ermis Elif Ebru

    2015-01-01

    Full Text Available Calculations of gamma-ray mass attenuation coefficients of various detector materials (crystals) were carried out by means of the FLUKA Monte Carlo (MC) method at different gamma-ray energies. NaI, PVT, GSO, GaAs and CdWO4 detector materials were chosen for the calculations. The calculated coefficients were also compared with the National Institute of Standards and Technology (NIST) values. The results obtained with this method were in close agreement with the NIST values. It was concluded from the study that the FLUKA MC method can be an alternative way to calculate the gamma-ray mass attenuation coefficients of detector materials.

  11. Modulated pulse bathymetric lidar Monte Carlo simulation

    Luo, Tao; Wang, Yabo; Wang, Rong; Du, Peng; Min, Xia

    2015-10-01

    A typical modulated pulse bathymetric lidar system is investigated by simulation using a modulated pulse lidar simulation system. In the simulation, the return signal is generated by the Monte Carlo method with a modulated pulse propagation model and processed by mathematical tools like cross-correlation and digital filtering. Computer simulation results incorporating the modulation detection scheme reveal a significant suppression of the water backscattering signal and a corresponding target contrast enhancement. More simulation experiments are performed with various modulation and reception variables to investigate their effect on the bathymetric system performance.

  12. Monte Carlo Simulation of an American Option

    Gikiri Thuo

    2007-04-01

    Full Text Available We implement gradient estimation techniques for sensitivity analysis of option pricing which can be efficiently employed in Monte Carlo simulation. Using these techniques we can simultaneously obtain an estimate of the option value together with the estimates of sensitivities of the option value to various parameters of the model. After deriving the gradient estimates we incorporate them in an iterative stochastic approximation algorithm for pricing an option with early exercise features. We illustrate the procedure using an example of an American call option with a single dividend that is analytically tractable. In particular we incorporate estimates for the gradient with respect to the early exercise threshold level.
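
    The combination of Monte Carlo valuation with a stochastic-approximation update of the early-exercise rule can be sketched as follows for a plain American-style put under geometric Brownian motion, with exercise checked on a discrete time grid and governed by a constant threshold. This is an illustrative toy, not the paper's estimator: the dividend feature is dropped and the gradient estimate is a simple Kiefer-Wolfowitz finite difference rather than the pathwise estimators discussed in the article.

```python
import numpy as np

def put_value_for_threshold(b, n_paths, rng, s0=100.0, k=100.0, r=0.05,
                            sigma=0.2, t=1.0, n_steps=50):
    """Monte Carlo value of an American-style put exercised the first time
    the asset falls below the threshold b (checked on a discrete time grid)."""
    dt = t / n_steps
    disc = np.exp(-r * dt)
    payoff = np.zeros(n_paths)
    s = np.full(n_paths, s0)
    alive = np.ones(n_paths, dtype=bool)
    df = 1.0
    for _ in range(n_steps):
        z = rng.standard_normal(n_paths)
        s = s * np.exp((r - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z)
        df *= disc
        hit = alive & (s <= b)
        payoff[hit] = df * np.maximum(k - s[hit], 0.0)
        alive &= ~hit
    # At maturity, exercise whatever is still alive and in the money.
    payoff[alive] = df * np.maximum(k - s[alive], 0.0)
    return payoff.mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Kiefer-Wolfowitz stochastic approximation of the best constant threshold.
    b = 85.0
    for n in range(1, 201):
        a_n, c_n = 20.0 / n, 2.0 / n ** 0.25
        grad = (put_value_for_threshold(b + c_n, 5_000, rng)
                - put_value_for_threshold(b - c_n, 5_000, rng)) / (2.0 * c_n)
        b = float(np.clip(b + a_n * grad, 60.0, 100.0))  # keep b in a sane range
    print("approximate exercise threshold:", b,
          " option value:", put_value_for_threshold(b, 50_000, rng))
```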

  13. Monte-Carlo simulations: FLUKA vs. MCNPX

    Oden, M.; Krása, Antonín; Majerle, Mitja; Svoboda, Ondřej; Wagner, Vladimír

    Melville: AMER INST PHYSICS, 2007 - (Granja, C.; Leroy, C.; Štekl, I.), s. 219-221. ISBN 978-0-7354-0472-4. ISSN 0094-243X. - (AIP Conference Proceedings. 958). [4th International Summer School on Nuclear Physics Methods and Accelerators in Biology and Medicine. Praha (CZ), 08.07.2007-19.07.2007] R&D Projects: GA MŠk(CZ) LC07050 Institutional research plan: CEZ:AV0Z10480505 Keywords: neutron production * spallation reaction * Monte-Carlo simulation Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders

  14. The Moment Guided Monte Carlo Method

    Degond, Pierre; Pareschi, Lorenzo

    2009-01-01

    In this work we propose a new approach for the numerical simulation of kinetic equations through Monte Carlo schemes. We introduce a new technique which permits to reduce the variance of particle methods through a matching with a set of suitable macroscopic moment equations. In order to guarantee that the moment equations provide the correct solutions, they are coupled to the kinetic equation through a non equilibrium term. The basic idea, on which the method relies, consists in guiding the particle positions and velocities through moment equations so that the concurrent solution of the moment and kinetic models furnishes the same macroscopic quantities.

  15. Markov chains analytic and Monte Carlo computations

    Graham, Carl

    2014-01-01

    Markov Chains: Analytic and Monte Carlo Computations introduces the main notions related to Markov chains and provides explanations on how to characterize, simulate, and recognize them. Starting with basic notions, this book leads progressively to advanced and recent topics in the field, allowing the reader to master the main aspects of the classical theory. This book also features: numerous exercises with solutions as well as extended case studies; a detailed and rigorous presentation of Markov chains with discrete time and state space; an appendix presenting probabilistic notions that are nec

  16. Discovering correlated fermions using quantum Monte Carlo.

    Wagner, Lucas K; Ceperley, David M

    2016-09-01

    It has become increasingly feasible to use quantum Monte Carlo (QMC) methods to study correlated fermion systems for realistic Hamiltonians. We give a summary of these techniques targeted at researchers in the field of correlated electrons, focusing on the fundamentals, capabilities, and current status of this technique. The QMC methods often offer the highest accuracy solutions available for systems in the continuum, and, since they address the many-body problem directly, the simulations can be analyzed to obtain insight into the nature of correlated quantum behavior. PMID:27518859

  17. Exascale Monte Carlo R&D

    Marcus, Ryan C. [Los Alamos National Laboratory]

    2012-07-24

    Overview of this presentation is (1) Exascale computing - different technologies, getting there; (2) high-performance proof-of-concept MCMini - features and results; and (3) OpenCL toolkit - Oatmeal (OpenCL Automatic Memory Allocation Library) - purpose and features. Despite driver issues, OpenCL seems like a good, hardware agnostic tool. MCMini demonstrates the possibility for GPGPU-based Monte Carlo methods - it shows great scaling for HPC application and algorithmic equivalence. Oatmeal provides a flexible framework to aid in the development of scientific OpenCL codes.

  18. Kinetic Monte Carlo simulation of dislocation dynamics

    A kinetic Monte Carlo simulation of dislocation motion is introduced. The dislocations are assumed to be composed of pure edge and screw segments confined to a fixed lattice. The stress and temperature dependence of the dislocation velocity is studied, and finite-size effects are discussed. It is argued that surfaces and boundaries may play a significant role in the velocity of dislocations. The simulated dislocations are shown to display kinetic roughening according to the exponents predicted by the Kardar-Parisi-Zhang equation. copyright 1999 The American Physical Society
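
    Not the authors' lattice dislocation model, but a minimal sketch of the kinetic Monte Carlo (residence-time) event-selection loop on which such simulations rest: an event is chosen with probability proportional to its rate and the clock is advanced by an exponentially distributed waiting time. The two thermally activated event types and their barriers are arbitrary assumptions.

```python
import numpy as np

def kmc_run(rate_fn, n_types, n_events, rng):
    """Generic kinetic Monte Carlo loop.

    `rate_fn(state)` returns an array of n_types event rates for the current
    state; here the state is simply a counter per event type, as a stand-in
    for segment positions in a real dislocation model."""
    state = np.zeros(n_types, dtype=int)
    t = 0.0
    for _ in range(n_events):
        r = rate_fn(state)
        total = r.sum()
        # Choose an event with probability proportional to its rate ...
        k = np.searchsorted(np.cumsum(r), rng.random() * total)
        state[k] += 1
        # ... and advance the clock by an exponential waiting time.
        t += -np.log(rng.random()) / total
    return t, state

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two thermally activated event types with different barriers (arbitrary units).
    barriers, nu0, kT = np.array([0.5, 0.7]), 1e13, 0.025
    rate_fn = lambda state: nu0 * np.exp(-barriers / kT)
    t, counts = kmc_run(rate_fn, 2, 100_000, rng)
    print("simulated time:", t, " event counts:", counts)
```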

  19. Monte Carlo Simulation of Quantum Computation

    Cerf, N. J.; Koonin, S. E.

    1997-01-01

    The many-body dynamics of a quantum computer can be reduced to the time evolution of non-interacting quantum bits in auxiliary fields by use of the Hubbard-Stratonovich representation of two-bit quantum gates in terms of one-bit gates. This makes it possible to perform the stochastic simulation of a quantum algorithm, based on the Monte Carlo evaluation of an integral of dimension polynomial in the number of quantum bits. As an example, the simulation of the quantum circuit for the Fast Fouri...

  20. Archimedes, the Free Monte Carlo simulator

    Sellier, Jean Michel D

    2012-01-01

    Archimedes is the GNU package for Monte Carlo simulations of electron transport in semiconductor devices. The first release appeared in 2004 and since then it has been improved with many new features like quantum corrections, magnetic fields, new materials, GUI, etc. This document represents the first attempt to have a complete manual. Many of the Physics models implemented are described and a detailed description is presented to make the user able to write his/her own input deck. Please, feel free to contact the author if you want to contribute to the project.

  1. Monte Carlo modelling for neutron guide losses

    In modern research reactors, neutron guides are commonly used for beam conducting. The neutron guide is a well polished or equivalently smooth glass tube covered inside by sputtered or evaporated film of natural Ni or 58Ni isotope where the neutrons are totally reflected. A Monte Carlo calculation was carried out to establish the real efficiency and the spectral as well as spatial distribution of the neutron beam at the end of a glass mirror guide. The losses caused by mechanical inaccuracy and mirror quality were considered and the effects due to the geometrical arrangement were analyzed. (author) 2 refs.; 2 figs
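
    Not the paper's model, but a minimal two-dimensional ray-tracing sketch of a straight mirror guide: neutrons with random divergence are reflected whenever the grazing angle is below the critical angle (with a reflectivity below unity standing in for mirror quality), are lost otherwise, and the transmitted fraction is tallied. Geometry, divergence and reflectivity numbers are illustrative assumptions.

```python
import numpy as np

def guide_transmission(n_neutrons, length=30.0, width=0.03, div_max=0.01,
                       theta_c=0.002, reflectivity=0.985, seed=0):
    """Fraction of neutrons transmitted through a straight 2D mirror guide."""
    rng = np.random.default_rng(seed)
    transmitted = 0
    for _ in range(n_neutrons):
        y = rng.uniform(0.0, width)             # entry position (m)
        theta = rng.uniform(-div_max, div_max)  # divergence (rad)
        x, alive = 0.0, True
        while alive and x < length:
            if abs(theta) < 1e-12:
                x = length                       # flies straight out
                break
            # Distance along the guide axis to the next wall (y = 0 or width).
            wall = width if theta > 0 else 0.0
            dx = (wall - y) / np.tan(theta)
            if x + dx >= length:
                x = length                       # exits before the next bounce
            else:
                x, y = x + dx, wall
                if abs(theta) > theta_c or rng.random() > reflectivity:
                    alive = False                # not totally reflected: lost
                theta = -theta                   # specular reflection
        transmitted += alive and x >= length
    return transmitted / n_neutrons

if __name__ == "__main__":
    print("transmitted fraction:", guide_transmission(50_000))
```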

  2. Discovering correlated fermions using quantum Monte Carlo

    Wagner, Lucas K.; Ceperley, David M.

    2016-09-01

    It has become increasingly feasible to use quantum Monte Carlo (QMC) methods to study correlated fermion systems for realistic Hamiltonians. We give a summary of these techniques targeted at researchers in the field of correlated electrons, focusing on the fundamentals, capabilities, and current status of this technique. The QMC methods often offer the highest accuracy solutions available for systems in the continuum, and, since they address the many-body problem directly, the simulations can be analyzed to obtain insight into the nature of correlated quantum behavior.

  3. Development of Monte Carlo machine for particle transport problem

    The Monte Carlo machine Monte-4 has been developed to realize high performance computing of Monte Carlo codes for particle transport. The calculation for particle tracking in a complex geometry requires (1) classification of particles by region type using multi-way conditional branches, and (2) determination of whether intersections of particle paths with surfaces of the regions are on the boundaries of the regions or not, using nests of conditional branches. However, these procedures require scalar operations or unusual vector operations. Thus the speedup ratios have been low, i.e. nearly a factor of two, in vector processing of Monte Carlo codes for particle transport on conventional vector processors. The Monte Carlo machine Monte-4 has been equipped with special hardware called Monte Carlo pipelines to process these procedures with high performance. Additionally, Monte-4 has been equipped with enhanced load/store pipelines to realize fast transfer of indirectly addressed data, for the purpose of resolving imbalances between the performance of data transfers and arithmetic operations in vector processing of Monte Carlo codes on conventional vector processors. Finally, Monte-4 has a parallel processing capability with four processors to multiply the performance of vector processing. We have evaluated the effective performance of Monte-4 using production-level Monte Carlo codes such as vectorized KENO-IV and MCNP. In the performance evaluation, speedup ratios of nearly ten have been obtained, compared with scalar processing of the original codes. (author)

  4. Unbiased combinations of nonanalog Monte Carlo techniques and fair games

    Historically, Monte Carlo variance reduction techniques have developed one at a time in response to calculational needs. This paper provides the theoretical basis for obtaining unbiased Monte Carlo estimates from all possible combinations of variance reduction techniques. Hitherto, the techniques have not been proven to be unbiased in arbitrary combinations. The authors are unaware of any Monte Carlo techniques (in any linear process) that are not treated by the theorem herein. (author)

  5. Temperature variance study in Monte-Carlo photon transport theory

    We study different Monte-Carlo methods for solving radiative transfer problems, and particularly Fleck's Monte-Carlo method. We first give the different time-discretization schemes and the corresponding stability criteria. Then we write the temperature variance as a function of the variances of temperature and absorbed energy at the previous time step. Finally we obtain some stability criteria for the Monte-Carlo method in the stationary case

  6. Monte Carlo likelihood inference for missing data models

    Sung, Yun Ju; Geyer, Charles J.

    2007-01-01

    We describe a Monte Carlo method to approximate the maximum likelihood estimate (MLE), when there are missing data and the observed data likelihood is not available in closed form. This method uses simulated missing data that are independent and identically distributed and independent of the observed data. Our Monte Carlo approximation to the MLE is a consistent and asymptotically normal estimate of the minimizer θ* of the Kullback–Leibler information, as both the Monte Carlo and observed data sample sizes go to infinity...
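
    A hedged toy version of the idea (not the authors' general framework): when the observed-data likelihood is an integral over missing data, it can be approximated by averaging the complete-data density over missing values simulated independently of the observed data, and the resulting Monte Carlo log-likelihood is maximized numerically. Here the missing data are latent normal random effects with known spread, which is a deliberate simplification.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def mc_neg_loglik(theta, y, b_sim, sigma=1.0):
    """Monte Carlo approximation of the negative observed-data log-likelihood
    for y_i = theta + b_i + e_i, with latent b_i integrated out by averaging
    the complete-data density over simulated b's (reused for every theta)."""
    # Density of y_i given each simulated b_j, shape (n_obs, n_sim).
    diff = y[:, None] - theta - b_sim[None, :]
    dens = np.exp(-0.5 * (diff / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
    return -np.sum(np.log(dens.mean(axis=1)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    theta_true, tau = 2.0, 1.5
    y = theta_true + tau * rng.standard_normal(500) + rng.standard_normal(500)
    b_sim = tau * rng.standard_normal(2_000)          # simulated missing data
    fit = minimize_scalar(mc_neg_loglik, bounds=(-10, 10), args=(y, b_sim),
                          method="bounded")
    print("Monte Carlo MLE of theta:", fit.x, "(true value 2.0)")
```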

  7. Monte Carlo Hamiltonian: Generalization to Quantum Field Theory

    Luo, Xiang-Qian; Jirari, H.; Kroger, H; Moriarty, K.

    2001-01-01

    Monte Carlo techniques with importance sampling have been extensively applied to lattice gauge theory in the Lagrangian formulation. Unfortunately, it is extremely difficult to compute the excited states using the conventional Monte Carlo algorithm. Our recently developed approach, the Monte Carlo Hamiltonian method, has been designed to overcome the difficulties of the conventional approach. In this paper, we extend the method to many body systems and quantum field theory. The Klein-Gordon f...

  8. Alternative Monte Carlo Approach for General Global Illumination

    徐庆; 李朋; 徐源; 孙济洲

    2004-01-01

    An alternative Monte Carlo strategy for the computation of the global illumination problem is presented. The proposed approach provides a new and optimal way of solving Monte Carlo global illumination based on the zero variance importance sampling procedure. A new importance driven Monte Carlo global illumination algorithm was developed and implemented in the framework of the new computing scheme. Results obtained by rendering test scenes show that this new framework and the newly derived algorithm are effective and promising.

  9. Discrete diffusion Monte Carlo for frequency-dependent radiative transfer

    Densmore, Jeffrey D. [Los Alamos National Laboratory]; Kelly, Thompson G. [Los Alamos National Laboratory]; Urbatish, Todd J. [Los Alamos National Laboratory]

    2010-11-17

    Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations. In this paper, we develop an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold. Above this threshold we employ standard Monte Carlo. With a frequency-dependent test problem, we confirm the increased efficiency of our new DDMC technique.

  10. Neutron transport calculations using Quasi-Monte Carlo methods

    Moskowitz, B.S.

    1997-07-01

    This paper examines the use of quasirandom sequences of points in place of pseudorandom points in Monte Carlo neutron transport calculations. For two simple demonstration problems, the root mean square error, computed over a set of repeated runs, is found to be significantly less when quasirandom sequences are used ("Quasi-Monte Carlo Method") than when a standard Monte Carlo calculation is performed using only pseudorandom points.
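
    In the same spirit, the sketch below compares the root-mean-square error, over repeated runs, of a pseudorandom and a scrambled Sobol quasirandom estimate of a smooth test integral on the unit cube; the transport problems of the paper are replaced by an analytically known integrand, which is an assumption of this example (scrambling makes the Sobol runs independent).

```python
import numpy as np
from scipy.stats import qmc

def integrand(x):
    """Smooth test function on the unit cube; its exact integral is 1."""
    return np.prod(1.5 * np.sqrt(x), axis=1)

def rms_error(sampler, n_runs):
    """RMS error of the sample-mean estimate over repeated runs."""
    errs = [integrand(sampler()).mean() - 1.0 for _ in range(n_runs)]
    return np.sqrt(np.mean(np.square(errs)))

if __name__ == "__main__":
    dim, n, runs = 5, 2 ** 12, 30
    rng = np.random.default_rng(0)
    pseudo = lambda: rng.random((n, dim))
    sobol = lambda: qmc.Sobol(d=dim, scramble=True,
                              seed=rng.integers(1 << 31)).random(n)
    print("pseudorandom RMS error    :", rms_error(pseudo, runs))
    print("scrambled Sobol RMS error :", rms_error(sobol, runs))
```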