Monte Carlo methods for particle transport
Haghighat, Alireza
2015-01-01
The Monte Carlo method has become the de facto standard in radiation transport. Although powerful, if not understood and used appropriately, the method can give misleading results. Monte Carlo Methods for Particle Transport teaches appropriate use of the Monte Carlo method, explaining the method's fundamental concepts as well as its limitations. Concise yet comprehensive, this well-organized text:
* Introduces the particle importance equation and its use for variance reduction
* Describes general and particle-transport-specific variance reduction techniques
* Presents particle transport eigenvalue issues and methodologies to address these issues
* Explores advanced formulations based on the author's research activities
* Discusses parallel processing concepts and factors affecting parallel performance
Featuring illustrative examples, mathematical derivations, computer algorithms, and homework problems, Monte Carlo Methods for Particle Transport provides nuclear engineers and scientists with a practical guide ...
Scalable Domain Decomposed Monte Carlo Particle Transport
Energy Technology Data Exchange (ETDEWEB)
O'Brien, Matthew Joseph [Univ. of California, Davis, CA (United States)]
2013-12-05
In this dissertation, we present the parallel algorithms necessary to run domain decomposed Monte Carlo particle transport on large numbers of processors (millions of processors). Previous algorithms were not scalable, and the parallel overhead became more computationally costly than the numerical simulation.
Present status of vectorization for particle transport Monte Carlo
International Nuclear Information System (INIS)
Martin, W.R.
1987-01-01
The conventional particle transport Monte Carlo algorithm is ill-suited for modern vector supercomputers. This history-based algorithm is not amenable to vectorization because the random nature of the particle transport process inhibits the construction of the vectors necessary for efficient utilization of a vector (pipelined) processor. An alternative, the event-based algorithm, is suitable for vectorization and has been used by several researchers in recent years to achieve impressive performance gains (factors of 5-20) on modern vector supercomputers. This paper describes the event-based algorithm in some detail and discusses several implementations of this algorithm for specific applications in particle transport, including photon transport in a nuclear fusion plasma and neutron transport in a nuclear reactor. A discussion of the relative merits of these alternative approaches is included, along with a short discussion of the implementation of Monte Carlo methods on parallel processors, in particular multiple vector processors such as the Cray X-MP/48 and the IBM 3090/400. The paper concludes with some thoughts regarding the potential of massively parallel processors (vector and scalar) for Monte Carlo simulation.
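The contrast between the history-based and event-based organizations can be sketched in a few lines (an illustrative toy with an absorption-only collision model; the function names and NumPy vectorization stand in for true vector hardware and are assumptions, not taken from the paper):

```python
import numpy as np

def history_based(n, p_absorb, rng):
    """History-based: follow one particle at a time to completion
    (a scalar inner loop that defeats vector hardware)."""
    collisions = np.empty(n, dtype=int)
    for i in range(n):
        count = 1
        while rng.random() >= p_absorb:   # survived this collision, keep going
            count += 1
        collisions[i] = count
    return collisions

def event_based(n, p_absorb, rng):
    """Event-based: apply the same event (a collision) to every live
    particle at once, so each step operates on whole vectors -- the
    structure a pipelined vector processor can exploit."""
    alive = np.ones(n, dtype=bool)
    collisions = np.zeros(n, dtype=int)
    while alive.any():
        idx = np.flatnonzero(alive)
        collisions[idx] += 1                      # every live particle collides
        absorbed = rng.random(idx.size) < p_absorb
        alive[idx[absorbed]] = False              # retire absorbed particles
    return collisions

rng = np.random.default_rng(0)
mean = event_based(100_000, 0.25, rng).mean()    # expectation: 1/0.25 = 4 collisions
h = history_based(20_000, 0.25, rng).mean()
```

The event-based version replaces the scalar per-history inner loop with whole-array operations, which is precisely the restructuring that made vector (and later SIMD/GPU) execution profitable.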
Barão, Fernando; Nakagawa, Masayuki; Távora, Luis; Vaz, Pedro
2001-01-01
This book focuses on the state of the art of Monte Carlo methods in radiation physics and particle transport simulation and applications, the latter involving, in particular, the use and development of electron-gamma, neutron-gamma and hadronic codes. Besides the basic theory and the methods employed, special attention is paid to algorithm development for modeling and to the analysis of experiments and measurements in a variety of fields ranging from particle to medical physics.
Monte Carlo Particle Transport: Algorithm and Performance Overview
International Nuclear Information System (INIS)
Gentile, N.; Procassini, R.; Scott, H.
2005-01-01
Monte Carlo methods are frequently used for neutron and radiation transport. These methods have several advantages, such as relative ease of programming and of dealing with complex meshes. Disadvantages include long run times and statistical noise. Monte Carlo photon transport calculations also often suffer from inaccuracies in matter temperature due to the lack of implicitness. In this paper we discuss the Monte Carlo algorithm as it is applied to neutron and photon transport, detail the differences between neutron and photon Monte Carlo, and give an overview of the ways the numerical method has been modified to deal with issues that arise in photon Monte Carlo simulations.
Parallelization of a Monte Carlo particle transport simulation code
Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.
2010-05-01
We have developed a high-performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators have also been integrated and studied. The new MC4 version was then parallelized for shared- and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures, including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors, and a 200-node dual-processor HP cluster. For large problem sizes, which are limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow for the study of higher particle energies with more accurate physical models, and improve statistics, since more particle tracks can be simulated within a low response time.
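The pattern the abstract describes — independent histories distributed across workers, each drawing from its own parallel random number stream, with partial tallies reduced at the end — can be sketched as follows (a minimal stand-in: NumPy's `SeedSequence.spawn` plays the role of a parallel RNG library such as SPRNG or DCMT, a plain loop stands in for the MPI processes, and the toy path-length tally is an assumption):

```python
import numpy as np

def simulate_rank(seed_seq, n_particles):
    """One rank's share of the histories, with its own independent stream."""
    rng = np.random.default_rng(seed_seq)
    # Toy tally: sum of exponentially distributed flight-path lengths.
    return rng.exponential(scale=1.0, size=n_particles).sum()

def parallel_run(n_ranks, n_total, master_seed=1234):
    # SeedSequence.spawn yields statistically independent child streams,
    # the same guarantee a parallel RNG library provides to each MPI rank.
    children = np.random.SeedSequence(master_seed).spawn(n_ranks)
    per_rank = n_total // n_ranks
    partial = [simulate_rank(s, per_rank) for s in children]  # one per process
    return sum(partial) / (per_rank * n_ranks)                # the MPI_Allreduce step

mean_path = parallel_run(n_ranks=8, n_total=80_000)
```

Because every rank's stream is independent, the reduced tally is statistically equivalent to a single serial run, which is what makes validation against the serial code meaningful.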
New features of the mercury Monte Carlo particle transport code
International Nuclear Information System (INIS)
Procassini, Richard; Brantley, Patrick; Dawson, Shawn
2010-01-01
Several new capabilities have been added to the Mercury Monte Carlo transport code over the past four years. The most important algorithmic enhancement is a general, extensible infrastructure to support source, tally and variance reduction actions. For each action, the user defines a phase space, as well as any number of responses that are applied to a specified event. Tallies are accumulated into a correlated, multi-dimensional, Cartesian-product result phase space. Our approach employs a common user interface to specify the data sets and distributions that define the phase, response and result for each action. Modifications to the particle trackers include the use of facet halos (instead of extrapolative fuzz) for robust tracking, and material interface reconstruction for use in shape-overlaid meshes. Support for expected-value criticality eigenvalue calculations has also been implemented. Computer science enhancements include an in-line Python interface for user customization of problem setup and output. (author)
Energy Technology Data Exchange (ETDEWEB)
O'Brien, M. J.; Brantley, P. S.
2015-01-20
In order to run Monte Carlo particle transport calculations on new supercomputers with hundreds of thousands or millions of processors, care must be taken to implement scalable algorithms. This means that the algorithms must continue to perform well as the processor count increases. In this paper, we examine the scalability of: (1) globally resolving the particle locations on the correct processor, (2) deciding that particle streaming communication has finished, and (3) efficiently coupling neighbor domains together with different replication levels. We have run domain decomposed Monte Carlo particle transport on up to 2^{21} = 2,097,152 MPI processes on the IBM BG/Q Sequoia supercomputer and observed scalable results that agree with our theoretical predictions. These calculations were carefully constructed to have the same amount of work on every processor, i.e., the calculation is already load balanced. We also examine load imbalanced calculations where each domain's replication level is proportional to its particle workload. In this case we show how to efficiently couple together adjacent domains to maintain within-workgroup load balance and minimize memory usage.
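The second item above, deciding when particle streaming has finished, can be illustrated with a serial emulation of domain-decomposed tracking (a toy 1-D slab with assumed absorption and scattering probabilities; a real code replaces the outer loop with a distributed termination-detection test):

```python
import random

def run_domain_decomposed(n_particles, n_domains=4, length=4.0,
                          p_absorb=0.5, seed=7):
    """Serial emulation of domain-decomposed tracking: each domain advances
    its own particle bank and hands off particles that cross into another
    domain; the outer while-loop plays the role of the global test that
    particle streaming communication has finished."""
    random.seed(seed)
    width = length / n_domains
    buffers = [[] for _ in range(n_domains)]
    buffers[0] = [(0.0, +1) for _ in range(n_particles)]  # all born on the left face
    leaked = 0
    while any(buffers):                        # terminate when nothing is in flight
        nxt = [[] for _ in range(n_domains)]
        for d, bank in enumerate(buffers):
            for x, mu in bank:
                x += mu * random.expovariate(1.0)     # exponential flight path
                if x >= length:
                    leaked += 1                       # escaped the right face
                elif x < 0.0:
                    pass                              # escaped the left face
                else:
                    target = int(x // width)
                    if target != d:
                        nxt[target].append((x, mu))   # hand off to the owning domain
                    elif random.random() < p_absorb:
                        pass                          # absorbed at the collision
                    else:
                        nxt[d].append((x, random.choice((-1, +1))))  # scattered
        buffers = nxt
    return leaked / n_particles

leakage = run_domain_decomposed(20_000)
```

The subtlety the paper addresses is that in a distributed setting no single process can see `any(buffers)`; deciding that all banks and in-flight messages are empty requires a scalable collective test.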
Development of general-purpose particle and heavy ion transport monte carlo code
International Nuclear Information System (INIS)
Iwase, Hiroshi; Nakamura, Takashi; Niita, Koji
2002-01-01
The high-energy particle transport code NMTC/JAM, which has been developed at JAERI, was improved for high-energy heavy ion transport calculations by incorporating the JQMD code, the SPAR code and the Shen formula. The new NMTC/JAM, named PHITS (Particle and Heavy-Ion Transport code System), is the first general-purpose heavy ion transport Monte Carlo code covering incident energies from several MeV/nucleon to several GeV/nucleon. (author)
International Nuclear Information System (INIS)
Wang Guozhong; Zhang Junjun; Xiong Jian
2010-01-01
MCAM (Monte Carlo Automatic Modeling program for particle transport simulation) was developed by FDS Team as a CAD-based bi-directional interface program between general CAD systems and Monte Carlo particle transport simulation codes. The physics and material modeling and void space modeling functions were improved, and a free-form surface processing function was developed recently. Applications to the ITER (International Thermonuclear Experimental Reactor) building model and the FFHR (Force Free Helical Reactor) model have demonstrated the feasibility, effectiveness and maturity of the latest MCAM version for nuclear applications with complex geometry. (author)
PyMercury: Interactive Python for the Mercury Monte Carlo Particle Transport Code
International Nuclear Information System (INIS)
Iandola, F.N.; O'Brien, M.J.; Procassini, R.J.
2010-01-01
Monte Carlo particle transport applications are often written in low-level languages (C/C++) for optimal performance on clusters and supercomputers. However, this development approach often sacrifices straightforward usability and testing in the interest of fast application performance. To improve usability, some high-performance computing applications employ mixed-language programming with high-level and low-level languages. In this study, we consider the benefits of incorporating an interactive Python interface into a Monte Carlo application. With PyMercury, a new Python extension to the Mercury general-purpose Monte Carlo particle transport code, we improve application usability without diminishing performance. In two case studies, we illustrate how PyMercury improves usability and simplifies testing and validation in a Monte Carlo application. In short, PyMercury demonstrates the value of interactive Python for Monte Carlo particle transport applications. In the future, we expect interactive Python to play an increasingly significant role in Monte Carlo usage and testing.
International Nuclear Information System (INIS)
Noack, K.
1982-01-01
The perturbation source method may be a powerful Monte Carlo means to calculate small effects in a particle field. In a preceding paper we formulated this method for inhomogeneous linear particle transport problems, describing the particle fields by solutions of Fredholm integral equations, and derived formulae for the second moment of the difference event-point estimator. In the present paper we analyse the general structure of its variance, point out the variance peculiarities, discuss the dependence on certain transport games and on the generation procedures of the auxiliary particles, and draw conclusions on how to improve this method.
Monte Carlo particle simulation and finite-element techniques for tandem mirror transport
International Nuclear Information System (INIS)
Rognlien, T.D.; Cohen, B.I.; Matsuda, Y.; Stewart, J.J. Jr.
1985-12-01
A description is given of numerical methods used in the study of axial transport in tandem mirrors owing to Coulomb collisions and rf diffusion. The methods are Monte Carlo particle simulations and direct solution of the Fokker-Planck equations by finite-element expansion. 11 refs.
Energy Technology Data Exchange (ETDEWEB)
Walsh, Jonathan A., E-mail: walshjon@mit.edu [Department of Nuclear Science and Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, 24-107, Cambridge, MA 02139 (United States); Palmer, Todd S. [Department of Nuclear Engineering and Radiation Health Physics, Oregon State University, 116 Radiation Center, Corvallis, OR 97331 (United States); Urbatsch, Todd J. [XTD-IDA: Theoretical Design, Integrated Design and Assessment, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)
2015-12-15
Highlights:
• Generation of discrete differential scattering angle and energy loss cross sections.
• Gauss–Radau quadrature utilizing numerically computed cross section moments.
• Development of a charged particle transport capability in the Milagro IMC code.
• Integration of cross section generation and charged particle transport capabilities.
Abstract: We investigate a method for numerically generating discrete scattering cross sections for use in charged particle transport simulations. We describe the cross section generation procedure and compare it to existing methods used to obtain discrete cross sections. The numerical approach presented here is generalized to allow greater flexibility in choosing a cross section model from which to derive discrete values. Cross section data computed with this method compare favorably with discrete data generated with an existing method. Additionally, a charged particle transport capability is demonstrated in the time-dependent Implicit Monte Carlo radiative transfer code, Milagro. We verify the implementation of charged particle transport in Milagro with analytic test problems and we compare calculated electron depth–dose profiles with another particle transport code that has a validated electron transport capability. Finally, we investigate the integration of the new discrete cross section generation method with the charged particle transport capability in Milagro.
Chain segmentation for the Monte Carlo solution of particle transport problems
International Nuclear Information System (INIS)
Ragheb, M.M.H.
1984-01-01
A Monte Carlo approach is proposed in which the random walk chains generated in particle transport simulations are segmented. Forward and adjoint-mode estimators are then used, in conjunction with the first-event source density on the segmented chains, to obtain multiple estimates of the individual terms of the Neumann series solution at each collision point. The solution is then constructed by summation of the series. The approach is compared to exact analytical results and to results of the Monte Carlo nonabsorption weighting method for two representative slowing-down and deep-penetration problems. Application of the proposed approach leads to unbiased estimates for limited numbers of particle simulations and is useful in suppressing an effective bias problem observed in some cases of deep-penetration particle transport problems.
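The Neumann-series construction underlying this approach can be illustrated on a scalar toy equation (an assumption for illustration, not one of the paper's test problems): x = a·x + s has the series solution x = s + a·s + a²·s + … = s/(1−a), and an analog walk that survives each "collision" with probability a scores s at every point it reaches:

```python
import random

def neumann_series_mc(a, s, n_histories, seed=1):
    """Estimate x = s / (1 - a) by sampling the Neumann series
    x = s + a*s + a^2*s + ...: each history scores s at the source point
    and again at every collision point it survives to reach."""
    random.seed(seed)
    total = 0.0
    for _ in range(n_histories):
        score = s                  # zeroth term: the source itself
        while random.random() < a: # survive to the next collision point...
            score += s             # ...and score the next term of the series
        total += score
    return total / n_histories

estimate = neumann_series_mc(a=0.5, s=1.0, n_histories=200_000)  # exact: 2.0
```

The estimator is unbiased because the expected number of visited points beyond the source is a/(1−a), so the expected score is s/(1−a), term-by-term the Neumann sum.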
International Nuclear Information System (INIS)
Chen, Zhenping; Song, Jing; Zheng, Huaqing; Wu, Bin; Hu, Liqin
2015-01-01
Highlights:
• The subdivision combines the advantages of both uniform and non-uniform schemes.
• The grid models were proved to be more efficient than traditional CSG models.
• Monte Carlo simulation performance was enhanced by Optimal Spatial Subdivision.
• Efficiency gains were obtained for realistic whole reactor core models.
Abstract: Geometry navigation is one of the key aspects dominating Monte Carlo particle transport simulation performance for large-scale whole-reactor models. In such cases, spatial subdivision is an easily established and high-potential method to improve run-time performance. In this study, a dedicated method, named Optimal Spatial Subdivision, is proposed for generating numerically optimal spatial grid models, which are demonstrated to be more efficient for geometry navigation than traditional Constructive Solid Geometry (CSG) models. The method uses a recursive subdivision algorithm to subdivide a CSG model into non-overlapping grids, which are labeled as totally occupied, partially occupied, or not occupied at all by CSG objects. The key point is that, at each stage of subdivision, a quality factor based on a cost estimation function is derived to evaluate the candidate subdivision schemes. Only the scheme with the optimal quality factor is chosen as the final subdivision strategy for generating the grid model, and the resulting grid model is therefore efficient for Monte Carlo particle transport simulation. The method has been implemented and integrated into the Super Monte Carlo program SuperMC developed by FDS Team. Test cases were used to highlight the achievable performance gains. Results showed that Monte Carlo simulation runtime could be reduced significantly with the new method, even for cases reaching whole reactor core model size.
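The idea of scoring candidate subdivision schemes with a cost-estimation quality factor can be sketched in one dimension (the cost model, constants, and toy geometry below are assumptions for illustration, not the SuperMC implementation):

```python
def quality_factor(intervals, n_cells, n_queries=100, c_cell=1.0, c_test=1.0):
    """Cost-estimation 'quality factor' for a candidate uniform grid on the
    unit segment (lower is better): storing and traversing more cells costs
    more, while a random point query only tests objects sharing its cell."""
    counts = [sum(1 for a, b in intervals
                  if a < (i + 1) / n_cells and b > i / n_cells)
              for i in range(n_cells)]
    avg_tests = sum(counts) / n_cells          # expected object tests per query
    return c_cell * n_cells + c_test * n_queries * avg_tests

def choose_subdivision(intervals, candidates=range(1, 129)):
    """Keep only the candidate scheme with the optimal (minimal) factor."""
    return min(candidates, key=lambda n: quality_factor(intervals, n))

# Toy geometry: 20 small objects (intervals) clustered near the origin.
objects = [(k * 0.01, k * 0.01 + 0.005) for k in range(20)]
best = choose_subdivision(objects)   # a finer grid beats the single-cell case
```

The single cell (n_cells = 1) corresponds to testing every CSG object on every query; the optimal factor balances that against grid overhead, mirroring the trade-off between uniform and non-uniform schemes.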
Creating and using a type of free-form geometry in Monte Carlo particle transport
International Nuclear Information System (INIS)
Wessol, D.E.; Wheeler, F.J.
1993-01-01
While the reactor physicists were fine-tuning the Monte Carlo paradigm for particle transport in regular geometries, the computer scientists were developing rendering algorithms to display extremely realistic renditions of irregular objects, ranging from the ubiquitous teakettle to dynamic Jell-O. Even though the modeling methods share a common basis, the initial strategies each discipline developed for variance reduction were remarkably different. Initially, the reactor physicists used Russian roulette, importance sampling, particle splitting, and rejection techniques. In the early stages of development, the computer scientists relied primarily on rejection techniques, including a very elegant hierarchical construction and sampling method. This sampling method allowed the computer scientists to viably track particles through irregular geometries in three-dimensional space, while the initial methods developed by the reactor physicists only allowed for efficient searches through analytical surfaces or objects. Over time, there appears to have been some merging of the variance reduction strategies of the two disciplines. This work presents an early (possibly first) incorporation of geometric hierarchical construction and sampling into the reactor physicists' Monte Carlo transport model, permitting efficient tracking through nonuniform rational B-spline surfaces in three-dimensional space. After some discussion, the results from this model are compared with experiments and with the model employing implicit (analytical) geometric representation.
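The hierarchical construction-and-search idea borrowed from rendering can be sketched with a one-dimensional bounding-box tree (the class name and the interval-slab toy geometry are assumptions for illustration, not the paper's NURBS machinery):

```python
class Node:
    """Bounding-box hierarchy over a set of (lo, hi) interval 'objects':
    a query only descends into a child whose box actually contains it,
    pruning most of the per-object tests."""
    def __init__(self, objects):
        ends = [b for o in objects for b in o]
        self.lo, self.hi = min(ends), max(ends)     # box enclosing all children
        if len(objects) <= 2:
            self.leaf, self.children = objects, None
        else:
            objects = sorted(objects)
            mid = len(objects) // 2
            self.leaf = None
            self.children = (Node(objects[:mid]), Node(objects[mid:]))

    def hits(self, x):
        """All objects containing point x, visiting only overlapping boxes."""
        if not (self.lo <= x <= self.hi):
            return []                                # prune this whole subtree
        if self.leaf is not None:
            return [o for o in self.leaf if o[0] <= x <= o[1]]
        return self.children[0].hits(x) + self.children[1].hits(x)

tree = Node([(i, i + 0.5) for i in range(100)])
found = tree.hits(10.25)
```

A flat search would test all 100 objects per query; the hierarchy visits O(log n) boxes instead, which is the efficiency the rendering community brought to irregular-geometry tracking.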
AlfaMC: A fast alpha particle transport Monte Carlo code
Energy Technology Data Exchange (ETDEWEB)
Peralta, Luis, E-mail: luis@lip.pt [Faculdade de Ciências da Universidade de Lisboa (Portugal); Laboratório de Instrumentação e Física Experimental de Partículas (Portugal); Louro, Alina [Laboratório de Instrumentação e Física Experimental de Partículas (Portugal)
2014-02-11
AlfaMC is a Monte Carlo simulation code for the transport of alpha particles. This code is based on the Continuous Slowing Down Approximation and uses the NIST/ASTAR stopping-power database. The code uses a powerful geometrical package, which allows coding of complex geometries. A flexible histogramming package is used as well, which greatly eases the scoring of results. The code is tailored for microdosimetric applications in which speed is a key factor. Comparison with the SRIM code is made for deposited energy in thin layers and range for air, mylar, aluminum and gold. The general agreement between the two codes is good for beam energies between 1 and 12 MeV.
Highlights:
• AlfaMC is a Monte Carlo program for fast alpha particle transport in matter.
• The model is accurate within a few percent in the energy range of 1–12 MeV.
• AlfaMC uses a combinatorial geometry package allowing the modeling of complex bodies.
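The Continuous Slowing Down Approximation at the heart of the code can be sketched in a few lines (the power-law stopping power below is an assumption standing in for the NIST/ASTAR tables, chosen because its range integral has a closed form to check against):

```python
def csda_range(e0, stopping_power, de=1e-4):
    """March the particle down in energy: each step loses a fixed energy de
    and advances dx = de / S(E) -- the Continuous Slowing Down Approximation.
    The accumulated path length is the CSDA range."""
    e, x = e0, 0.0
    while e > de:
        x += de / stopping_power(e)
        e -= de
    return x

# Assumed toy stopping power S(E) = k * sqrt(E), for which the analytic
# CSDA range is R = integral_0^E0 dE / S(E) = 2 * sqrt(E0) / k.
k = 0.5
r = csda_range(4.0, lambda e: k * e ** 0.5)   # analytic value: 8.0
```

In the real code the lambda would be replaced by an interpolation of the tabulated ASTAR stopping power for the material at hand.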
Modelling of a general purpose irradiation chamber using a Monte Carlo particle transport code
International Nuclear Information System (INIS)
Dhiyauddin Ahmad Fauzi; Sheik, F.O.A.; Nurul Fadzlin Hasbullah
2013-01-01
Full-text: The aim of this research is to simulate the effective use of a general-purpose irradiation chamber to contain pure neutron particles obtained from a research reactor. The secondary neutron and gamma dose discharged from the chamber layers will be used as a platform to estimate the safe dimensions of the chamber. The chamber, made up of layers of lead (Pb) shielding, polyethylene (PE) moderator and commercial-grade aluminium (Al) cladding, is proposed for interacting samples with pure neutron particles in a nuclear reactor environment. The estimation was accomplished through simulation based on the general Monte Carlo N-Particle transport code using Los Alamos MCNPX software. Simulations were performed on a model of the chamber subjected to high neutron flux radiation and its gamma radiation product. The neutron source model is based on the neutron source found in the PUSPATI TRIGA MARK II research reactor, which holds a maximum flux value of 1 x 10^12 neutrons/cm^2·s. The expected outcomes of this research are zero gamma dose in the core of the chamber and a neutron dose rate of less than 10 μSv/day discharged from the chamber system. (author)
Chi, Yujie; Tian, Zhen; Jia, Xun
2016-08-07
Monte Carlo (MC) particle transport simulation on a graphics processing unit (GPU) platform has been extensively studied recently due to the efficiency advantage achieved via massive parallelization. Almost all of the existing GPU-based MC packages were developed for voxelized geometry, which limits the application scope of these packages. The purpose of this paper is to develop a module to model parametric geometry and integrate it into GPU-based MC simulations. In our module, each continuous region is defined by its bounding surfaces, which are parameterized by quadratic functions. Particle navigation functions in this geometry were developed. The module was incorporated into two previously developed GPU-based MC packages and was tested in two example problems: (1) low-energy photon transport simulation in a brachytherapy case with a shielded cylinder applicator and (2) MeV coupled photon/electron transport simulation in a phantom containing several inserts of different shapes. In both cases, the calculated dose distributions agreed well with those calculated in the corresponding voxelized geometry. The averaged dose differences were 1.03% and 0.29%, respectively. We also used the developed package to perform simulations of a Varian VS 2000 brachytherapy source and generated a phase-space file. The computation time under the parameterized geometry depended on the memory location storing the geometry data; the highest computational speed was achieved when the data was stored in the GPU's shared memory. Incorporation of parameterized geometry yielded a computation time ~3 times that of the corresponding voxelized geometry. We also developed a strategy that uses an auxiliary index array to reduce the frequency of geometry calculations and hence improve efficiency. With this strategy, the computational time ranged from 1.75 to 2.03 times that of the voxelized geometry for coupled photon/electron transport, depending on the voxel dimension of the auxiliary index array, and in 0
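Particle navigation against quadratically parameterized bounding surfaces reduces to finding the smallest positive root of a quadratic along the flight direction. A sketch for the simplest quadric, a sphere, is below (illustrative only; names are assumptions and this is not the paper's GPU implementation):

```python
import math

def distance_to_sphere(p, u, center, radius):
    """Smallest positive flight distance t with |p + t*u - center| = radius,
    i.e. the positive root of a quadratic in t (with a = 1 for a unit
    direction vector u) -- the core surface-crossing query of quadric-based
    particle navigation. Returns None if the line misses the sphere."""
    d = [pi - ci for pi, ci in zip(p, center)]
    b = 2.0 * sum(ui * di for ui, di in zip(u, d))
    c = sum(di * di for di in d) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None                          # no real intersection
    roots = [(-b - math.sqrt(disc)) / 2.0, (-b + math.sqrt(disc)) / 2.0]
    hits = [t for t in roots if t > 1e-12]   # only surfaces ahead of the particle
    return min(hits) if hits else None

t = distance_to_sphere((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (5.0, 0.0, 0.0), 1.0)
```

At each flight, the navigator evaluates this distance for every bounding surface of the current region and steps to the nearest one, which is what makes memory placement of the surface coefficients performance-critical on a GPU.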
International Nuclear Information System (INIS)
Kling, A.; Barão, F.J.C.; Nakagawa, M.; Távora, L.
2001-01-01
The following topics were dealt with: electron and photon interactions and transport mechanisms, random number generation, applications in medical physics, microdosimetry, track structure, radiobiological modeling, the Monte Carlo method in radiotherapy, dosimetry, medical accelerator simulation, neutron transport, and high-energy hadron transport. (HSI)
International Nuclear Information System (INIS)
Choi, Sung Hoon; Kwark, Min Su; Shim, Hyung Jin
2012-01-01
The Monte Carlo (MC) particle transport analysis of a complex system such as a research reactor, an accelerator, or a fusion facility may require accurate modeling of complicated geometry. Manual modeling through the text interface of a MC code, defining the geometrical objects by hand, is tedious, lengthy and error-prone. This problem can be overcome by taking advantage of the modeling capability of computer aided design (CAD) systems. There have been two kinds of approaches to developing MC code systems that utilize CAD data: external format conversion and CAD-kernel-embedded MC simulation. The first approach includes several interfacing programs, such as McCAD, MCAM and GEOMIT, which were developed to automatically convert CAD data into MCNP geometry input data. This approach makes the most of existing MC codes without any modification, but implies latent data inconsistency due to the difference between the geometry modeling systems. In the second approach, a MC code uses the CAD data for direct particle tracking, or for conversion to an internal data structure of constructive solid geometry (CSG) and/or boundary representation (B-rep) modeling, with the help of a CAD kernel. MCNP-BRL and OiNC have demonstrated their capabilities for CAD-based MC simulation. Recently we have developed a CAD-based geometry processing module for MC particle simulation using the OpenCASCADE (OCC) library. In the developed module, CAD data can be used for particle tracking through primitive CAD surfaces (hereafter, CAD-based tracking) or for internal conversion to the CSG data structure. In this paper, the performances of the text-based model, the CAD-based tracking, and the internal CSG conversion are compared using an in-house MC code, McSIM, equipped with the developed CAD-based geometry processing module.
Tripoli-3: monte Carlo transport code for neutral particles - version 3.5 - users manual
International Nuclear Information System (INIS)
Vergnaud, Th.; Nimal, J.C.; Chiron, M.
2001-01-01
The TRIPOLI-3 code applies the Monte Carlo method to neutron, gamma-ray and coupled neutron and gamma-ray transport calculations in three-dimensional geometries, either in steady-state conditions or with a time dependence. It can be used to study problems with a high flux attenuation between the source zone and the result zone (studies of shielding configurations or source-driven sub-critical systems, with fission taken into account), as well as problems with a low flux attenuation (neutronic calculations -- in a fuel lattice cell, for example -- where fission is taken into account, usually with the calculation of the effective multiplication factor, fine structure studies, numerical experiments to investigate method approximations, etc.). TRIPOLI-3 has been operational since 1995 and is the version of the TRIPOLI code that follows on from TRIPOLI-2; it can be used on SUN, RISC600 and HP workstations and on PCs running the Linux or Windows/NT operating systems. The code uses nuclear data libraries generated with the THEMIS/NJOY system. The current libraries were derived from ENDF/B6 and JEF2. There is also a response function library based on a number of evaluations, notably the dosimetry libraries IRDF/85 and IRDF/90 as well as evaluations from JEF2. The treatment of particle transport is the same in version 3.5 as in version 3.4 of the TRIPOLI code, but version 3.5 is more convenient for preparing the input data and for reading the output. A French version of the user's manual also exists. (authors)
Monte-Carlo treatment of nonlinear collisional effects in charged-particle transport
International Nuclear Information System (INIS)
Weiss, D.L.; Witte, K.H.; Sheppard, M.G.; Oliphant, T.A.
1985-01-01
The effects of two-body Coulomb collisions of the simulation particles against a background material are often treated by a Monte Carlo collisional process in which the collision probability is determined by a Fokker-Planck treatment. This procedure is nonlinear if the properties of the background material are allowed to change as a result of the scattering of the simulation particles. A more completely nonlinear problem is obtained if the simulation particles themselves form all or part of the background distribution. A new method for doing this is presented here, with examples that illustrate the power of the technique.
Energy Technology Data Exchange (ETDEWEB)
Walsh, J. A. [Department of Nuclear Science and Engineering, Massachusetts Institute of Technology, NW12-312 Albany, St. Cambridge, MA 02139 (United States); Palmer, T. S. [Department of Nuclear Engineering and Radiation Health Physics, Oregon State University, 116 Radiation Center, Corvallis, OR 97331 (United States); Urbatsch, T. J. [XTD-5: Air Force Systems, Los Alamos National Laboratory, Los Alamos, NM 87545 (United States)
2013-07-01
A new method for generating discrete scattering cross sections to be used in charged particle transport calculations is investigated. The method of data generation is presented and compared to current methods for obtaining discrete cross sections. The new, more generalized approach allows greater flexibility in choosing a cross section model from which to derive discrete values. Cross section data generated with the new method is verified through a comparison with discrete data obtained with an existing method. Additionally, a charged particle transport capability is demonstrated in the time-dependent Implicit Monte Carlo radiative transfer code package, Milagro. The implementation of this capability is verified using test problems with analytic solutions as well as a comparison of electron dose-depth profiles calculated with Milagro and an already-established electron transport code. An initial investigation of a preliminary integration of the discrete cross section generation method with the new charged particle transport capability in Milagro is also presented. (authors)
Querlioz, Damien
2013-01-01
This book gives an overview of the quantum transport approaches for nanodevices and focuses on the Wigner formalism. It details the implementation of a particle-based Monte Carlo solution of the Wigner transport equation and how the technique is applied to typical devices exhibiting quantum phenomena, such as the resonant tunnelling diode, the ultra-short silicon MOSFET and the carbon nanotube transistor. In the final part, decoherence theory is used to explain the emergence of the semi-classical transport in nanodevices.
International Nuclear Information System (INIS)
Brenner, D.J.; Prael, R.E.; Little, R.C.
1987-01-01
Realistic simulations of the passage of fast neutrons through tissue require a large quantity of cross-section data. What are needed are differential (in particle type, energy and angle) cross sections. A computer code is described which produces such spectra for neutrons above ∼14 MeV incident on light nuclei such as carbon and oxygen. Comparisons have been made with experimental measurements of double-differential secondary charged-particle production on carbon and oxygen at energies from 27 to 60 MeV; they indicate that the model is adequate in this energy range. In order to utilize fully the results of these calculations, they should be incorporated into a neutron transport code. This requires defining a generalized format for describing charged-particle production, putting the calculated results into this format, interfacing the neutron transport code with these data, and performing the charged-particle transport. The design and development of such a program is described. 13 refs., 3 figs.
International Nuclear Information System (INIS)
Procassini, R J; Beck, B R
2004-01-01
It might be assumed that use of a "high-quality" random number generator (RNG), producing a sequence of "pseudo-random" numbers with a "long" repetition period, is crucial for producing unbiased results in Monte Carlo particle transport simulations. While several theoretical and empirical tests have been devised to check the quality (randomness and period) of an RNG, for many applications it is not clear what level of RNG quality is required to produce unbiased results. This paper explores the issue of RNG quality in the context of parallel Monte Carlo transport simulations in order to determine how "good" is "good enough". This study employs the MERCURY Monte Carlo code, which incorporates the CNPRNG library for the generation of pseudo-random numbers via linear congruential generator (LCG) algorithms. The paper outlines the usage of random numbers during parallel MERCURY simulations, and then describes the source and criticality transport simulations which comprise the empirical basis of this study. A series of calculations for each test problem, in which the quality of the RNG (period of the LCG) is varied, provides the empirical basis for determining the minimum repetition period which may be employed without producing a bias in the mean integrated results.
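The linear congruential generators under study can be illustrated with a minimal sketch; the multiplier, increment and modulus below are common textbook constants, not the actual CNPRNG parameters:

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator: x_{n+1} = (a*x_n + c) mod m.
    Yields pseudo-random deviates uniform on [0, 1)."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

# The period is at most m: a short-period generator revisits the same
# sample sites, which is the suspected source of bias the study probes.
stream = lcg(seed=12345)
samples = [next(stream) for _ in range(1000)]
```

Shrinking the modulus m shortens the repetition period, which is the knob such a study can turn to look for bias in the mean integrated results.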
Estimation of coincidence and correlation in non-analogous Monte Carlo particle transport - 159
International Nuclear Information System (INIS)
Szieberth, M.; Leen Kloosterman, J.
2010-01-01
The conventional non-analogous Monte Carlo methods are optimized to preserve the mean value of the distributions and are therefore not suited for non-Boltzmann problems like the estimation of coincidences or correlations. This paper presents a general method called history splitting for the non-analogous estimation of such quantities. The basic principle of the method is that a non-analogous particle history can be interpreted as a collection of analogous histories with different weights according to the probability of their realization. Calculations with a simple Monte Carlo program for a pulse-height-type estimator prove that the method is feasible and provides unbiased estimation. Different variance reduction techniques have been tried with the method, and Russian roulette turned out to be ineffective in high-multiplicity systems. An alternative history control method is applied instead. Simulation results of a Feynman-α measurement show that even the reconstruction of the higher moments is possible with the history splitting method, which makes the simulation of neutron noise measurements feasible. (authors)
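For contrast with history splitting, the standard Russian roulette step that the authors report as ineffective in high-multiplicity systems can be sketched as follows; the threshold and survival weight are illustrative choices, not values from the paper:

```python
import random

def russian_roulette(weight, threshold=0.1, survival_weight=0.5, rng=random.random):
    """Kill low-weight histories probabilistically while preserving the
    mean: survivors continue with their weight boosted to survival_weight."""
    if weight >= threshold:
        return weight                      # heavy enough: continue unchanged
    if rng() < weight / survival_weight:
        return survival_weight             # survives with boosted weight
    return 0.0                             # history terminated

# Unbiasedness check: the mean post-roulette weight equals the input weight.
random.seed(1)
w_in = 0.05
w_mean = sum(russian_roulette(w_in) for _ in range(200_000)) / 200_000
```

The step is unbiased for the mean, which is exactly why it does nothing for higher moments; it preserves E[w] but reshuffles the weight distribution that coincidence estimators depend on.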
Simulation of the transport of neutrons, photons and charged particles with the Monte Carlo method
International Nuclear Information System (INIS)
Androsenko, A.A.; Androsenko, P.A.; Artamonov, S.N.; Bolonkina, G.V.; Lomtev, V.L.; Pupko, S.V.
1991-01-01
A description is given of the program system BRAND, designed for the accurate solution of the non-stationary transport equation for neutrons, photons and charged particles in real three-dimensional geometry. An extensive set of local and non-local estimators makes it possible to calculate a large set of linear functionals normally of interest in reactor calculations, radiation protection and experiment simulation. The interaction of particles with matter is simulated on the basis of individual, non-group data for each isotope of the composition. 24 refs
SHIELD-HIT12A - a Monte Carlo particle transport program for ion therapy research
DEFF Research Database (Denmark)
Bassler, Niels; Hansen, David Christoffer; Lühr, Armin
2014-01-01
. We experienced that new users quickly learn to use SHIELD-HIT12A and set up new geometries. Contrary to previous versions of SHIELD-HIT, the 12A distribution comes along with easy-to-use example files and an English manual. A new implementation of Vavilov straggling resulted in a massive reduction...... of computation time. Scheduled for later release are CT import and photon-electron transport. Conclusions: SHIELD-HIT12A is an interesting alternative ion transport engine. Apart from being a flexible particle therapy research tool, it can also serve as a back end for a MC ion treatment planning system. More...
Maria Jose, Gonzalez Torres; Jürgen, Henniger
2018-01-01
In order to expand the Monte Carlo transport program AMOS to particle therapy applications, the ion module is being developed in the radiation physics group (ASP) at the TU Dresden. This module simulates the three main interactions of ions in matter for the therapy energy range: elastic scattering, inelastic collisions and nuclear reactions. The simulation of the elastic scattering is based on the Binary Collision Approximation and the inelastic collisions on the Bethe-Bloch theory. The nuclear reactions, which are the focus of the module, are implemented according to a probabilistic-based model developed in the group. The developed model uses probability density functions to sample the occurrence of a nuclear reaction given the initial energy of the projectile particle as well as the energy at which this reaction will take place. The particle is transported until the reaction energy is reached and then the nuclear reaction is simulated. This approach allows a fast evaluation of the nuclear reactions. The theory and application of the proposed model will be addressed in this presentation. The results of the simulation of a proton beam colliding with tissue will also be presented.
Energy Technology Data Exchange (ETDEWEB)
Hunt, J.G. [Institute of Radiation Protection and Dosimetry, Av. Salvador Allende s/n, Recreio, Rio de Janeiro, CEP 22780-160 (Brazil); Watchman, C.J. [Department of Radiation Oncology, University of Arizona, Tucson, AZ, 85721 (United States); Bolch, W.E. [Department of Nuclear and Radiological Engineering, University of Florida, Gainesville, FL, 32611 (United States); Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611 (United States)
2007-07-01
Absorbed fraction (AF) calculations to the human skeletal tissues due to alpha particles are of interest to the internal dosimetry of occupationally exposed workers and members of the public. The transport of alpha particles through the skeletal tissue is complicated by the detailed and complex microscopic histology of the skeleton. In this study, both Monte Carlo and chord-based techniques were applied to the transport of alpha particles through 3-D micro-CT images of the skeletal microstructure of trabecular spongiosa. The Monte Carlo program used was 'Visual Monte Carlo-VMC'. VMC simulates the emission of the alpha particles and their subsequent energy deposition track. The second method applied to alpha transport is the chord-based technique, which randomly generates chord lengths across bone trabeculae and the marrow cavities via alternate and uniform sampling of their cumulative density functions. This paper compares the AF of energy to two radiosensitive skeletal tissues, active marrow and shallow active marrow, obtained with these two techniques. (authors)
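The chord-based technique draws chord lengths by inverting a cumulative distribution function. A minimal sketch, assuming, purely for illustration, exponentially distributed chords with an invented mean (the real spongiosa chord distributions are tabulated from the micro-CT images):

```python
import math
import random

def sample_chord(mean_chord, u):
    """Invert the chord-length CDF F(x) = 1 - exp(-x/mean_chord) at u."""
    return -mean_chord * math.log(1.0 - u)

random.seed(0)
mean = 0.3   # hypothetical mean chord length (mm); not a value from the paper
chords = [sample_chord(mean, random.random()) for _ in range(100_000)]
estimate = sum(chords) / len(chords)   # sample mean should recover `mean`
```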
International Nuclear Information System (INIS)
Grieshemer, D.P.; Gill, D.F.; Nease, B.R.; Carpenter, D.C.; Joo, H.; Millman, D.L.; Sutton, T.M.; Stedry, M.H.; Dobreff, P.S.; Trumbull, T.H.; Caro, E.
2013-01-01
MC21 is a continuous-energy Monte Carlo radiation transport code for the calculation of the steady-state spatial distributions of reaction rates in three-dimensional models. The code supports neutron and photon transport in fixed source problems, as well as iterated-fission-source (eigenvalue) neutron transport problems. MC21 has been designed and optimized to support large-scale problems in reactor physics, shielding, and criticality analysis applications. The code also supports many in-line reactor feedback effects, including depletion, thermal feedback, xenon feedback, eigenvalue search, and neutron and photon heating. MC21 uses continuous-energy neutron/nucleus interaction physics over the range from 10⁻⁵ eV to 20 MeV. The code treats all common neutron scattering mechanisms, including fast-range elastic and non-elastic scattering, and thermal- and epithermal-range scattering from molecules and crystalline materials. For photon transport, MC21 uses continuous-energy interaction physics over the energy range from 1 keV to 100 GeV. The code treats all common photon interaction mechanisms, including Compton scattering, pair production, and photoelectric interactions. All of the nuclear data required by MC21 is provided by the NDEX system of codes, which extracts and processes data from EPDL-, ENDF-, and ACE-formatted source files. For geometry representation, MC21 employs a flexible constructive solid geometry system that allows users to create spatial cells from first- and second-order surfaces. The system also allows models to be built up as hierarchical collections of previously defined spatial cells, with interior detail provided by grids and template overlays. Results are collected by a generalized tally capability which allows users to edit integral flux and reaction rate information. Results can be collected over the entire problem or within specific regions of interest through the use of phase filters that control which particles are allowed to score each
Cooper, M A
2000-01-01
We present various approximations for the angular distribution of particles emerging from an optically thick, purely isotropically scattering region into a vacuum. Our motivation is to use such a distribution for the Fleck-Canfield random walk method [1] for implicit Monte Carlo (IMC) [2] radiation transport problems. We demonstrate that the cosine distribution recommended in the original random walk paper [1] is a poor approximation to the angular distribution predicted by transport theory. Then we examine other approximations that more closely match the transport angular distribution.
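The cosine (Lambertian) distribution that the paper finds inadequate is easy to write down: the emergent direction cosine follows p(μ) = 2μ on [0, 1], sampled by inverting its CDF. A minimal sketch:

```python
import math
import random

def sample_mu_cosine(u):
    """Cosine (Lambertian) emergence law p(mu) = 2*mu on [0, 1]:
    inverting the CDF mu**2 = u gives mu = sqrt(u)."""
    return math.sqrt(u)

random.seed(2)
mus = [sample_mu_cosine(random.random()) for _ in range(200_000)]
mean_mu = sum(mus) / len(mus)   # analytic mean of p(mu) = 2*mu is 2/3
```

The transport-theory angular distribution for emergence from a thick isotropically scattering region differs from this law, which is the discrepancy the paper's better approximations address.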
International Nuclear Information System (INIS)
Thiagu Supramaniam
2007-01-01
The aim of this research was to propose a new neutron collimator design for a thermal neutron radiography facility using the tangential beam port of the PUSPATI TRIGA Mark II reactor, Malaysia Institute of Nuclear Technology Research (MINT). The best geometry and materials for the neutron collimator were chosen in order to obtain a uniform beam with maximum thermal neutron flux, a high L/D ratio, a high neutron-to-gamma ratio and low beam divergence with high resolution. The Monte Carlo N-Particle Transport Code version 5 (MCNP5) was used to optimize six neutron collimator components: the beam port medium, neutron scatterer, neutron moderator, gamma filter, aperture and collimator wall. The reactor and tangential beam port were modelled in MCNP5 according to their actual sizes. A homogeneous reactor core was assumed, and the population control method of variance reduction was applied by using cell importances. The comparison between the experimental and simulated thermal neutron flux measurements for the bare tangential beam port shows that both curves had similar patterns, which supports the reliability of MCNP5 for obtaining optimal neutron collimator parameters. The simulated results for the beam port medium show that vacuum was the best medium to transport neutrons, followed by helium gas and air. The optimized aperture component was boral of 3 cm thickness. The optimal aperture centre hole diameter was 2 cm, which produces an L/D ratio of 88. Simulation also shows that a graphite neutron scatterer improves the thermal neutron flux while reducing the fast neutron flux. A neutron moderator was used to moderate fast and epithermal neutrons in the beam port. Paraffin wax 90 cm thick was found to be the best neutron moderator material, producing the highest thermal neutron flux at the image plane. A cylindrical high-density polyethylene neutron collimator produces a higher thermal neutron flux at the image plane than a divergent
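The collimation ratio quoted above is simple arithmetic: with a 2 cm aperture, L/D = 88 implies a collimator length of about 176 cm. A one-line sketch:

```python
def l_over_d(length_cm, aperture_diameter_cm):
    """Collimation ratio L/D: collimator length over aperture diameter.
    A higher L/D gives sharper radiographs at the cost of neutron flux."""
    return length_cm / aperture_diameter_cm

# The abstract's optimum, a 2 cm aperture at L/D = 88, implies L = 176 cm.
assert l_over_d(176.0, 2.0) == 88.0
```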
Simulation of transport equations with Monte Carlo
International Nuclear Information System (INIS)
Matthes, W.
1975-09-01
The main purpose of the report is to explain the relation between the transport equation and the Monte Carlo game used for its solution. The introduction of artificial particles carrying a weight provides one with high flexibility in constructing many different games for the solution of the same equation. This flexibility opens a way to construct a Monte Carlo game for the solution of the adjoint transport equation. Emphasis is laid mostly on giving a clear understanding of what to do and not on the details of how to do a specific game
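The role of the artificial weight can be illustrated with a toy slab-transmission game using implicit capture; the geometry, cross sections and weight cutoff below are invented for illustration and are not taken from the report:

```python
import math
import random

def transmission(sigma_t, sigma_s, thickness, histories, rng):
    """Weighted Monte Carlo game for transmission through a 1-D slab:
    instead of killing absorbed particles, each collision multiplies the
    weight by the scattering ratio sigma_s/sigma_t (implicit capture)."""
    score = 0.0
    for _ in range(histories):
        x, mu, w = 0.0, 1.0, 1.0       # start at the left face, moving right
        while True:
            x += mu * (-math.log(rng()) / sigma_t)   # flight to next collision
            if x >= thickness:
                score += w             # transmitted: tally the carried weight
                break
            if x < 0.0:
                break                  # escaped backward
            w *= sigma_s / sigma_t     # implicit capture at the collision
            mu = 2.0 * rng() - 1.0     # isotropic scattering
            if w < 1e-3:
                break                  # crude weight cutoff, sketch only
    return score / histories

# Pure-absorber check: transmission should be exp(-sigma_t * thickness).
random.seed(3)
t_est = transmission(sigma_t=1.0, sigma_s=0.0, thickness=2.0,
                     histories=200_000, rng=random.random)
```

Many different games of this kind estimate the same functional of the transport equation, which is the flexibility the report emphasizes.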
Monte Carlo electron/photon transport
International Nuclear Information System (INIS)
Mack, J.M.; Morel, J.E.; Hughes, H.G.
1985-01-01
A review of nonplasma coupled electron/photon transport using the Monte Carlo method is presented. Remarks are mainly restricted to linearized formalisms at electron energies from 1 keV to 1000 MeV. Applications involving pulse-height estimation, transport in external magnetic fields, and optical Cerenkov production are discussed to underscore the importance of this branch of computational physics. Advances in electron multigroup cross-section generation are reported, and their impact on future code development is assessed. Progress toward the transformation of MCNP into a generalized neutral/charged-particle Monte Carlo code is described. 48 refs
International Nuclear Information System (INIS)
White, Morgan C.
2000-01-01
The fundamental motivation for the research presented in this dissertation was the need to develop a more accurate prediction method for characterization of mixed radiation fields around medical electron accelerators (MEAs). Specifically, a model is developed for simulation of neutron and other particle production from photonuclear reactions and incorporated in the Monte Carlo N-Particle (MCNP) radiation transport code. This extension of the capability within the MCNP code provides for the more accurate assessment of the mixed radiation fields. The Nuclear Theory and Applications group of the Los Alamos National Laboratory has recently provided first-of-a-kind evaluated photonuclear data for a select group of isotopes. These data provide the reaction probabilities as functions of incident photon energy with angular and energy distribution information for all reaction products. The availability of these data is the cornerstone of the new methodology for state-of-the-art mutually coupled photon-neutron transport simulations. The dissertation includes details of the model development and implementation necessary to use the new photonuclear data within MCNP simulations. A new data format has been developed to include tabular photonuclear data. Data are processed from the Evaluated Nuclear Data Format (ENDF) to the new class "u" A Compact ENDF (ACE) format using a standalone processing code. MCNP modifications have been completed to enable Monte Carlo sampling of photonuclear reactions. Note that both neutron and gamma production are included in the present model. The new capability has been subjected to extensive verification and validation (V and V) testing. Verification testing has established the expected basic functionality. Two validation projects were undertaken. First, comparisons were made to benchmark data from literature. These calculations demonstrate the accuracy of the new data and transport routines to better than 25 percent. Second, the ability to
Energy Technology Data Exchange (ETDEWEB)
Vergnaud, Th.; Nimal, J.C.; Chiron, M
2001-07-01
The TRIPOLI-3 code applies the Monte Carlo method to neutron, gamma-ray and coupled neutron/gamma-ray transport calculations in three-dimensional geometries, either in steady-state conditions or with a time dependence. It can be used to study problems with a high flux attenuation between the source zone and the result zone (shielding configurations or source-driven sub-critical systems, with fission taken into account), as well as problems with a low flux attenuation (neutronic calculations -- in a fuel lattice cell, for example -- where fission is taken into account, usually with the calculation of the effective multiplication factor, fine-structure studies, numerical experiments to investigate method approximations, etc.). TRIPOLI-3 has been operational since 1995 and is the version of the TRIPOLI code that follows on from TRIPOLI-2; it can be used on SUN, RISC600 and HP workstations and on PCs running the Linux or Windows/NT operating systems. The code uses nuclear data libraries generated with the THEMIS/NJOY system. The current libraries were derived from ENDF/B6 and JEF2. There is also a response function library based on a number of evaluations, notably the dosimetry libraries IRDF/85 and IRDF/90 as well as evaluations from JEF2. The treatment of particle transport is the same in version 3.5 as in version 3.4 of the TRIPOLI code, but version 3.5 is more convenient for preparing the input data and for reading the output. A French version of the user's manual exists. (authors)
Monte Carlo method in radiation transport problems
International Nuclear Information System (INIS)
Dejonghe, G.; Nimal, J.C.; Vergnaud, T.
1986-11-01
In neutral-particle radiation transport problems (neutrons, photons), two quantities are important: the flux in phase space and the particle density. Solving the problem with the Monte Carlo method involves, among other things, building a statistical process (called the game) and assigning a numerical value to a variable x (this assignment is called the score). Sampling techniques are presented, and the necessity of biasing the game is demonstrated; a biased simulation is carried out. Finally, current developments (the rewriting of programs, for instance) are presented, driven by several factors, two of which are the advent of vector computation and photon and neutron transport in void media [fr]
Parallel implementation of the Monte Carlo transport code EGS4 on the hypercube
International Nuclear Information System (INIS)
Kirk, B.L.; Azmy, Y.Y.; Gabriel, T.A.; Fu, C.Y.
1991-01-01
Monte Carlo transport codes are commonly used in the study of particle interactions. The CALOR89 code system is a combination of several Monte Carlo transport and analysis programs. In order to produce good results, a typical Monte Carlo run will have to produce many particle histories. On a single processor computer, the transport calculation can take a huge amount of time. However, if the transport of particles were divided among several processors in a multiprocessor machine, the time can be drastically reduced
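The speedup argument, independent histories divided among processors, can be sketched in a few lines; threads stand in for the hypercube's processors purely to keep the example self-contained:

```python
import math
import random
from concurrent.futures import ThreadPoolExecutor

def run_batch(seed, histories):
    """One worker's share of the histories, with its own independent stream."""
    rng = random.Random(seed)
    # Toy 'transport': count first flights that cross a 2-mean-free-path slab.
    return sum(1 for _ in range(histories) if -math.log(rng.random()) > 2.0)

def parallel_estimate(total_histories, workers=4):
    """Split independent histories among workers and combine the tallies.
    (A production code would use processes or message passing; threads
    merely keep the sketch runnable anywhere.)"""
    per = total_histories // workers
    with ThreadPoolExecutor(max_workers=workers) as pool:
        tallies = pool.map(run_batch, range(workers), [per] * workers)
    return sum(tallies) / (per * workers)

# Transmission through a purely absorbing 2-mfp slab is exp(-2), about 0.135.
estimate = parallel_estimate(200_000)
```

Because each batch uses its own seeded stream, the combined tally is statistically equivalent to one long serial run, which is why the history-level split parallelizes so cleanly.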
Spot: a new Monte Carlo solver for fast alpha particles
International Nuclear Information System (INIS)
Schneider, M.; Eriksson, L.G.; Basiuk, V.; Imbeaux, F.
2004-01-01
The predictive transport code CRONOS has been augmented by an orbit following Monte Carlo code, SPOT (Simulation of Particle Orbits in a Tokamak). The SPOT code simulates the dynamics of nonthermal particles, and takes into account effects of finite orbit width and collisional transport of fast ions. Recent developments indicate that it might be difficult to avoid, at least transiently, current holes in a reactor. They occur already on existing tokamaks during advanced tokamak scenarios. The SPOT code has been used to study the alpha particle behaviour in the presence of current holes for both JET and ITER relevant parameters. (authors)
Wareing, Todd A.; Failla, Gregory; Horton, John L.; Eifel, Patricia J.; Mourtada, Firas
2009-01-01
A patient dose distribution was calculated by a 3D multi‐group SN particle transport code for intracavitary brachytherapy of the cervix uteri and compared to previously published Monte Carlo results. A Cs‐137 LDR intracavitary brachytherapy CT data set was chosen from our clinical database. MCNPX version 2.5.c, was used to calculate the dose distribution. A 3D multi‐group SN particle transport code, Attila version 6.1.1 was used to simulate the same patient. Each patient applicator was built in SolidWorks, a mechanical design package, and then assembled with a coordinate transformation and rotation for the patient. The SolidWorks exported applicator geometry was imported into Attila for calculation. Dose matrices were overlaid on the patient CT data set. Dose volume histograms and point doses were compared. The MCNPX calculation required 14.8 hours, whereas the Attila calculation required 22.2 minutes on a 1.8 GHz AMD Opteron CPU. Agreement between Attila and MCNPX dose calculations at the ICRU 38 points was within ±3%. Calculated doses to the 2 cc and 5 cc volumes of highest dose differed by not more than ±1.1% between the two codes. Dose and DVH overlays agreed well qualitatively. Attila can calculate dose accurately and efficiently for this Cs‐137 CT‐based patient geometry. Our data showed that a three‐group cross‐section set is adequate for Cs‐137 computations. Future work is aimed at implementing an optimized version of Attila for radiotherapy calculations. PACS number: 87.53.Jw
Patel, Darshana; Bronk, Lawrence; Guan, Fada; Peeler, Christopher R; Brons, Stephan; Dokic, Ivana; Abdollahi, Amir; Rittmüller, Claudia; Jäkel, Oliver; Grosshans, David; Mohan, Radhe; Titt, Uwe
2017-11-01
Accurate modeling of the relative biological effectiveness (RBE) of particle beams requires increased systematic in vitro studies with human cell lines with care towards minimizing uncertainties in biologic assays as well as physical parameters. In this study, we describe a novel high-throughput experimental setup and an optimized parameterization of the Monte Carlo (MC) simulation technique that is universally applicable for accurate determination of the RBE of clinical ion beams. Clonogenic cell-survival measurements on a human lung cancer cell line (H460) are presented using proton irradiation. Experiments were performed at the Heidelberg Ion Therapy Center (HIT) with support from the Deutsches Krebsforschungszentrum (DKFZ) in Heidelberg, Germany, using a mono-energetic horizontal proton beam. A custom-made variable range selector was designed for the horizontal beam line using the Geant4 MC toolkit. This unique setup enabled a high-throughput clonogenic assay investigation of multiple, well-defined doses and linear energy transfers (LETs) per irradiation for human lung cancer cells (H460) cultured in a 96-well plate. Sensitivity studies based on application of different physics lists in conjunction with different electromagnetic constructors and production threshold values to the MC simulations were undertaken for accurate assessment of the calculated dose and the dose-averaged LET (LET_d). These studies were extended to helium and carbon ion beams. Sensitivity analysis of the MC parameterization revealed substantial dependence of the dose and LET_d values on both the choice of physics list and the production threshold values. While the dose and LET_d calculations using the FTFP_BERT_LIV, FTFP_BERT_EMZ, FTFP_BERT_PEN and QGSP_BIC_EMY physics lists agree well with each other for all three ions, they show large differences when compared to the FTFP_BERT physics list with the default electromagnetic constructor. For carbon ions, the dose corresponding to the largest LET_d
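The dose-averaged LET at the center of the sensitivity study is the dose-weighted mean of the component LETs. A minimal sketch with invented numbers:

```python
def dose_averaged_let(doses, lets):
    """Dose-averaged LET: LET_d = sum(d_i * LET_i) / sum(d_i), i.e. the
    mean LET weighted by the dose each beam component deposits."""
    assert len(doses) == len(lets)
    return sum(d * l for d, l in zip(doses, lets)) / sum(doses)

# Illustrative two-component beam: equal doses at 1 and 3 keV/um.
let_d = dose_averaged_let([1.0, 1.0], [1.0, 3.0])
```

Because the weighting is by dose, a small high-LET component can shift LET_d noticeably, which is why the physics-list and production-threshold choices matter.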
The MC21 Monte Carlo Transport Code
International Nuclear Information System (INIS)
Sutton TM; Donovan TJ; Trumbull TH; Dobreff PS; Caro E; Griesheimer DP; Tyburski LJ; Carpenter DC; Joo H
2007-01-01
MC21 is a new Monte Carlo neutron and photon transport code currently under joint development at the Knolls Atomic Power Laboratory and the Bettis Atomic Power Laboratory. MC21 is the Monte Carlo transport kernel of the broader Common Monte Carlo Design Tool (CMCDT), which is also currently under development. The vision for CMCDT is to provide an automated, computer-aided modeling and post-processing environment integrated with a Monte Carlo solver that is optimized for reactor analysis. CMCDT represents a strategy to push the Monte Carlo method beyond its traditional role as a benchmarking tool or "tool of last resort" and into a dominant design role. This paper describes various aspects of the code, including the neutron physics and nuclear data treatments, the geometry representation, and the tally and depletion capabilities
Parallel processing Monte Carlo radiation transport codes
International Nuclear Information System (INIS)
McKinney, G.W.
1994-01-01
Issues related to distributed-memory multiprocessing as applied to Monte Carlo radiation transport are discussed. Measurements of communication overhead are presented for the radiation transport code MCNP which employs the communication software package PVM, and average efficiency curves are provided for a homogeneous virtual machine
Parallel MCNP Monte Carlo transport calculations with MPI
International Nuclear Information System (INIS)
Wagner, J.C.; Haghighat, A.
1996-01-01
The steady increase in computational performance has made Monte Carlo calculations for large/complex systems possible. However, in order to make these calculations practical, order of magnitude increases in performance are necessary. The Monte Carlo method is inherently parallel (particles are simulated independently) and thus has the potential for near-linear speedup with respect to the number of processors. Further, the ever-increasing accessibility of parallel computers, such as workstation clusters, facilitates the practical use of parallel Monte Carlo. Recognizing the nature of the Monte Carlo method and the trends in available computing, the code developers at Los Alamos National Laboratory implemented the message-passing general-purpose Monte Carlo radiation transport code MCNP (version 4A). The PVM package was chosen by the MCNP code developers because it supports a variety of communication networks, several UNIX platforms, and heterogeneous computer systems. This PVM version of MCNP has been shown to produce speedups that approach the number of processors and thus, is a very useful tool for transport analysis. Due to software incompatibilities on the local IBM SP2, PVM has not been available, and thus it is not possible to take advantage of this useful tool. Hence, it became necessary to implement an alternative message-passing library package into MCNP. Because the message-passing interface (MPI) is supported on the local system, takes advantage of the high-speed communication switches in the SP2, and is considered to be the emerging standard, it was selected
Ripple enhanced transport of suprathermal alpha particles
International Nuclear Information System (INIS)
Tani, K.; Takizuka, T.; Azumi, M.
1986-01-01
The ripple enhanced transport of suprathermal alpha particles has been studied by the newly developed Monte-Carlo code in which the motion of banana orbit in a toroidal field ripple is described by a mapping method. The existence of ripple-resonance diffusion has been confirmed numerically. We have developed another new code in which the radial displacement of banana orbit is given by the diffusion coefficients from the mapping code or the orbit following Monte-Carlo code. The ripple loss of α particles during slowing down has been estimated by the mapping model code as well as the diffusion model code. From the comparison of the results with those from the orbit-following Monte-Carlo code, it has been found that all of them agree very well. (author)
Monte Carlo methods in electron transport problems. Pt. 1
International Nuclear Information System (INIS)
Cleri, F.
1989-01-01
The condensed-history Monte Carlo method for charged-particle transport is reviewed and discussed, starting from a general form of the Boltzmann equation (Part I). The physics of the electronic interactions, together with some pedagogic examples, will be introduced in Part II. The lecture is directed at potential users of the method, for whom it can serve as a useful introduction to the subject, and aims to establish the basis for work on the computer code RECORD, which is at present in a development stage
Monte Carlo radiation transport: A revolution in science
International Nuclear Information System (INIS)
Hendricks, J.
1993-01-01
When Enrico Fermi, Stan Ulam, Nicholas Metropolis, John von Neumann, and Robert Richtmyer invented the Monte Carlo method fifty years ago, little could they imagine the far-flung consequences, the international applications, and the revolution in science epitomized by their abstract mathematical method. The Monte Carlo method is used in a wide variety of fields to solve exact computational models approximately by statistical sampling. It is an alternative to traditional physics modeling methods which solve approximate computational models exactly by deterministic methods. Modern computers and improved methods, such as variance reduction, have enhanced the method to the point of enabling a true predictive capability in areas such as radiation or particle transport. This predictive capability has contributed to a radical change in the way science is done: design and understanding come from computations built upon experiments rather than being limited to experiments, and the computer codes doing the computations have become the repository for physics knowledge. The MCNP Monte Carlo computer code effort at Los Alamos is an example of this revolution. Physicians unfamiliar with physics details can design cancer treatments using physics buried in the MCNP computer code. Hazardous environments and hypothetical accidents can be explored. Many other fields, from underground oil well exploration to aerospace, from physics research to energy production, from safety to bulk materials processing, benefit from MCNP, the Monte Carlo method, and the revolution in science
Morse Monte Carlo Radiation Transport Code System
Energy Technology Data Exchange (ETDEWEB)
Emmett, M.B.
1975-02-01
The report contains descriptions of the MORSE and PICTURE codes, input descriptions, sample problems, derivations of the physical equations, and explanations of the various error messages. The MORSE code is a multipurpose neutron and gamma-ray transport Monte Carlo code. Time dependence for both shielding and criticality problems is provided. General three-dimensional geometry may be used, with an albedo option available at any material surface. The PICTURE code provides aid in preparing correct input data for the combinatorial geometry package CG. It provides a printed view of arbitrary two-dimensional slices through the geometry. By inspecting these pictures one may determine whether the geometry specified by the input cards is indeed the desired geometry. 23 refs. (WRF)
Energy Technology Data Exchange (ETDEWEB)
Iwamoto, Yosuke, E-mail: iwamoto.yosuke@jaea.go.jp; Ogawa, Tatsuhiko
2017-04-01
Because primary knock-on atoms (PKAs) create point defects and clusters in materials that are irradiated with neutrons, it is important to validate the calculations of recoil cross section spectra that are used to estimate radiation damage in materials. Here, the recoil cross section spectra of fission- and fusion-relevant materials were calculated using the Event Generator Mode (EGM) of the Particle and Heavy Ion Transport code System (PHITS) and also using the data processing code NJOY2012 with the nuclear data libraries TENDL2015, ENDF/B-VII.1, and JEFF3.2. The heating number, which is the integral of the recoil cross section spectra, was also calculated using PHITS-EGM and compared with data extracted from the ACE files of TENDL2015, ENDF/B-VII.1, and JENDL4.0. In general, only a small difference was found between the PKA spectra of PHITS + TENDL2015 and NJOY + TENDL2015. From analyzing the recoil cross section spectra extracted from the nuclear data libraries using NJOY2012, we found that the recoil cross section spectra were incorrect for {sup 72}Ge, {sup 75}As, {sup 89}Y, and {sup 109}Ag in the ENDF/B-VII.1 library, and for {sup 90}Zr and {sup 55}Mn in the JEFF3.2 library. From analyzing the heating number, we found that the data extracted from the ACE file of TENDL2015 for all nuclides were problematic in the neutron capture region because of incorrect data regarding the emitted gamma energy. However, PHITS + TENDL2015 can calculate PKA spectra and heating numbers correctly.
International Nuclear Information System (INIS)
Iwamoto, Y.; Ogawa, T.
2016-01-01
The modelling of the damage in materials irradiated by neutrons is needed for understanding the mechanism of radiation damage in fission and fusion reactor facilities. Molecular dynamics simulations of damage cascades with full atomic interactions require information about the energy distribution of the primary knock-on atoms (PKAs). The most common procedure for calculating PKA energy spectra under low-energy neutron irradiation is to use the nuclear data processing code NJOY2012. It calculates group-to-group recoil cross section matrices, using nuclear data libraries in ENDF format that contain energy and angular recoil distributions for many reactions. After the NJOY2012 processing, SPKA6C is employed to produce PKA energy spectra by combining the recoil cross section matrices with an incident neutron energy spectrum. However, an intercomparison of different processing routes and nuclear data libraries has not yet been carried out. In particular, incident neutron energies higher than those typical of fission (~5 MeV and above) open many reaction channels, which produces a complex distribution of PKAs in energy and type. Recently, we have developed the event generator mode (EGM) in the Particle and Heavy Ion Transport code System PHITS for neutron-induced reactions in the energy region below 20 MeV. The main feature of EGM is that it produces PKAs while conserving energy and momentum in each reaction. It is used for event-by-event analysis in application fields such as soft-error analysis in semiconductors, microdosimetry in the human body, and estimation of displacements per atom (DPA) values in metals. The purpose of this work is to identify differences in PKA spectra and kerma-related heating numbers between calculations using PHITS-EGM and NJOY2012+SPKA6C with the TENDL-2015, ENDF/B-VII.1, and JENDL-4.0 libraries for fusion-relevant materials.
Exponential convergence on a continuous Monte Carlo transport problem
International Nuclear Information System (INIS)
Booth, T.E.
1997-01-01
For more than a decade, it has been known that exponential convergence on discrete transport problems was possible using adaptive Monte Carlo techniques. An adaptive Monte Carlo method that empirically produces exponential convergence on a simple continuous transport problem is described
General particle transport equation. Final report
International Nuclear Information System (INIS)
Lafi, A.Y.; Reyes, J.N. Jr.
1994-12-01
The general objectives of this research are as follows: (1) To develop fundamental models for fluid particle coalescence and breakage rates for incorporation into statistically based (Population Balance Approach or Monte Carlo Approach) two-phase thermal hydraulics codes. (2) To develop fundamental models for flow structure transitions based on stability theory and fluid particle interaction rates. This report details the derivation of the mass, momentum and energy conservation equations for a distribution of spherical, chemically non-reacting fluid particles of variable size and velocity. Studying the effects of fluid particle interactions on interfacial transfer and flow structure requires detailed particulate flow conservation equations. The equations are derived using a particle continuity equation analogous to Boltzmann's transport equation. When coupled with the appropriate closure equations, the conservation equations can be used to model nonequilibrium, two-phase, dispersed, fluid flow behavior. Unlike the Eulerian volume- and time-averaged conservation equations, the statistically averaged conservation equations contain additional terms that take into account the change due to fluid particle interfacial acceleration and fluid particle dynamics. Two types of particle dynamics are considered: coalescence and breakage. Therefore, the rate of change due to particle dynamics considers the gain and loss involved in these processes and implements phenomenological models for fluid particle breakage and coalescence.
Advances in Monte Carlo electron transport
International Nuclear Information System (INIS)
Bielajew, Alex F.
1995-01-01
Notwithstanding the success of Monte Carlo (MC) calculations for determining ion chamber correction factors for air-kerma standards and radiotherapy applications, a great challenge remains. MC is unable to calculate ion chamber response to better than 1% for low-Z and 3% for high-Z wall materials. Moreover, the two major MC code systems employed in radiation dosimetry, the EGS and ITS codes, differ in opposite directions from ion chamber experiments. The discrepancy with experiment is due to inadequacies in the underlying e⁻ condensed-history algorithms. As modeled by MC calculations, the e⁻ step-lengths in the chamber walls and the ionisation cavity differ in terms of material traversed by about three orders of magnitude. This demands that the underlying e⁻ transport algorithms be very stable over a great dynamic range. Otherwise a spurious e⁻ disequilibrium may be generated. The multiple-scattering (MS) algorithms, Molière in the case of EGS and Goudsmit-Saunderson (GS) in the case of ITS, are either mathematically or numerically unstable in the plural-scattering environment of the ionisation cavity. Recently, a new MS theory has been developed that is an exact solution of the Wentzel small-angle formalism using a screened Rutherford cross section. This new MS theory is mathematically, physically and numerically stable from the no-scattering to the MS regimes. This theory is the small-angle equivalent of the GS equation for a Rutherford cross section. Large-angle corrections connecting this theory to GS theory have been derived by Bethe. The Molière theory is the large-pathlength limit of this theory. The strategy for employing this new theory for ion chamber and radiotherapy calculations is described.
A Fano cavity test for Monte Carlo proton transport algorithms
International Nuclear Information System (INIS)
Sterpin, Edmond; Sorriaux, Jefferson; Souris, Kevin; Vynckier, Stefaan; Bouchard, Hugo
2014-01-01
Purpose: In the scope of reference dosimetry of radiotherapy beams, Monte Carlo (MC) simulations are widely used to compute ionization chamber dose response accurately. Uncertainties related to the transport algorithm can be verified performing self-consistency tests, i.e., the so-called “Fano cavity test.” The Fano cavity test is based on the Fano theorem, which states that under charged particle equilibrium conditions, the charged particle fluence is independent of the mass density of the media as long as the cross-sections are uniform. Such tests have not been performed yet for MC codes simulating proton transport. The objectives of this study are to design a new Fano cavity test for proton MC and to implement the methodology in two MC codes: Geant4 and PENELOPE extended to protons (PENH). Methods: The new Fano test is designed to evaluate the accuracy of proton transport. Virtual particles with an energy of E₀ and a mass macroscopic cross section of Σ/ρ are transported, having the ability to generate protons with kinetic energy E₀ and to be restored after each interaction, thus providing proton equilibrium. To perform the test, the authors use a simplified simulation model and rigorously demonstrate that the computed cavity dose per incident fluence must equal ΣE₀/ρ, as expected in classic Fano tests. The implementation of the test is performed in Geant4 and PENH. The geometry used for testing is a 10 × 10 cm² parallel virtual field and a cavity (2 × 2 × 0.2 cm³ in size) in a water phantom with dimensions large enough to ensure proton equilibrium. Results: For conservative user-defined simulation parameters (leading to small step sizes), both Geant4 and PENH pass the Fano cavity test within 0.1%. However, differences of 0.6% and 0.7% were observed for PENH and Geant4, respectively, using larger step sizes. For PENH, the difference is attributed to the random-hinge method that introduces an artificial energy straggling if step size is not
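A toy illustration of the Fano condition described above (entirely hypothetical, not the authors' Geant4/PENH implementation): when the mass cross section Σ/ρ is held fixed, the mass thickness traversed per collision is independent of the material density, which is the one-dimensional essence of why the cavity dose per unit fluence is density-independent.

```python
import random

def mean_mass_path(sigma_over_rho, rho, n=200_000, seed=1):
    """Mean mass thickness (rho * path length) traversed per collision
    when the macroscopic cross section is sigma = (sigma/rho) * rho.
    With the mass cross section fixed, the result is independent of
    the density rho -- a 1-D caricature of the Fano condition."""
    rng = random.Random(seed)
    sigma = sigma_over_rho * rho              # macroscopic cross section, 1/cm
    total_path = sum(rng.expovariate(sigma) for _ in range(n))
    return rho * total_path / n               # g/cm^2; expected 1/(sigma/rho)

if __name__ == "__main__":
    for rho in (1.0, 1e-3):                   # wall-like vs cavity-like density
        print(rho, mean_mass_path(0.5, rho))  # both near 1/0.5 = 2 g/cm^2
```

The three-orders-of-magnitude density contrast between wall and gas cavity mentioned in the previous record is exactly the regime in which this invariance stresses a transport algorithm.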
Monte Carlo calculations of elementary particle properties
Guralnik, G. S.; Warnock, T.; Zemach, C.
1984-01-01
The object of this project is to calculate the masses of the elementary particles. This ambitious goal apparently is not possible using analytic methods or known approximation methods. However, it is probable that the power of a modern supercomputer will make at least part of the low-lying mass spectrum accessible through direct numerical computation. Initial attempts by several groups at calculating this spectrum on small lattices of space-time points have been very promising. Using new methods and supercomputers, considerable progress has been made towards evaluating the mass spectrum on comparatively large lattices. Only more time and faster machines with increased storage will allow calculations of systems with guaranteed minimal boundary effects. The ideas that currently go into this calculation are outlined.
Towards a Revised Monte Carlo Neutral Particle Surface Interaction Model
International Nuclear Information System (INIS)
Stotler, D.P.
2005-01-01
The components of the neutral- and plasma-surface interaction model used in the Monte Carlo neutral transport code DEGAS 2 are reviewed. The idealized surfaces and processes handled by that model are inadequate for accurately simulating neutral transport behavior in present day and future fusion devices. We identify some of the physical processes missing from the model, such as mixed materials and implanted hydrogen, and make some suggestions for improving the model
Energy Technology Data Exchange (ETDEWEB)
Kawano, Toshihiko [Los Alamos National Laboratory; Talou, Patrick [Los Alamos National Laboratory; Watanabe, Takehito [Los Alamos National Laboratory; Chadwick, Mark [Los Alamos National Laboratory
2010-01-01
Monte Carlo simulations for particle and {gamma}-ray emissions from an excited nucleus based on the Hauser-Feshbach statistical theory are performed to obtain correlated information between emitted particles and {gamma}-rays. We calculate neutron induced reactions on {sup 51}V to demonstrate unique advantages of the Monte Carlo method, which are the correlated {gamma}-rays in the neutron radiative capture reaction, the neutron and {gamma}-ray correlation, and the particle-particle correlations at higher energies. It is shown that properties of nuclear reactions that are difficult to study with a deterministic method can be obtained with Monte Carlo simulations.
Monte Carlo methods for flux expansion solutions of transport problems
International Nuclear Information System (INIS)
Spanier, J.
1999-01-01
Adaptive Monte Carlo methods, based on the use of either correlated sampling or importance sampling, to obtain global solutions to certain transport problems have recently been described. The resulting learning algorithms are capable of achieving geometric convergence when applied to the estimation of a finite number of coefficients in a flux expansion representation of the global solution. However, because of the nonphysical nature of the random walk simulations needed to perform importance sampling, conventional transport estimators and source sampling techniques require modification to be used successfully in conjunction with such flux expansion methods. It is shown how these problems can be overcome. First, the traditional path length estimators in wide use in particle transport simulations are generalized to include rather general detector functions (which, in this application, are the individual basis functions chosen for the flux expansion). Second, it is shown how to sample from the signed probabilities that arise as source density functions in these applications, without destroying the zero variance property needed to ensure geometric convergence to zero error
Summary of Alpha Particle Transport
Energy Technology Data Exchange (ETDEWEB)
Medley, S.S.; White, R.B.; Zweben, S.J.
1998-08-19
This paper summarizes the talks on alpha particle transport which were presented at the 5th International Atomic Energy Agency's Technical Committee Meeting on "Alpha Particles in Fusion Research" held at the Joint European Torus, England in September 1997.
Speedup of Particle Transport Problems with a Beowulf Cluster
Zhongxiang Zhao; G. I. Maldonado
2006-01-01
The MCNP code is a general Monte Carlo N-Particle transport program that is widely used in health physics, medical physics, and nuclear engineering for problems involving neutron, photon, and electron transport [1]. However, due to the stochastic nature of the algorithms employed to solve the Boltzmann transport equation, MCNP generally exhibits a slow rate of convergence. In fact, engineers and scientists can quickly identify intractable versions of their most challenging and CPU-intensive prob...
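The slow convergence mentioned above can be demonstrated with a small stand-alone sketch (not MCNP itself): an analog tally of uncollided transmission through a purely absorbing slab, whose batch-to-batch standard deviation halves only when the number of histories quadruples, i.e., the error decreases as 1/√N.

```python
import math
import random

def slab_transmission(n, mfps=3.0, rng=None):
    """Analog tally: fraction of histories whose exponential free path
    exceeds a purely absorbing slab `mfps` mean free paths thick."""
    rng = rng or random.Random(0)
    return sum(rng.expovariate(1.0) > mfps for _ in range(n)) / n

def tally_std(n_per_batch, batches=200, seed=0):
    """Batch-to-batch sample standard deviation of the transmission tally."""
    rng = random.Random(seed)
    ests = [slab_transmission(n_per_batch, rng=rng) for _ in range(batches)]
    mean = sum(ests) / batches
    return math.sqrt(sum((e - mean) ** 2 for e in ests) / (batches - 1))

if __name__ == "__main__":
    print("estimate:", slab_transmission(100_000), " exact:", math.exp(-3.0))
    # Quadrupling the histories roughly halves the statistical error:
    print("std @ 1k:", tally_std(1_000), " std @ 4k:", tally_std(4_000))
```

This 1/√N behavior is why both the parallel speedup discussed in this record and the variance reduction techniques in the surrounding records are pursued.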
Monte Carlo transport of electrons and positrons through thin foils
International Nuclear Information System (INIS)
Legarda, F.; Idoeta, R.
2000-01-01
In measurements made with electrons traversing matter, knowledge of their transmission through the medium, their paths, and their angular distribution is useful for extracting information about the traversed medium and for improving the techniques that employ electrons, such as medical applications or materials irradiation. This work presents a simulation of the transport of beams of electrons and positrons through thin foils using an analog Monte Carlo code that simulates in a detailed way every electron movement and interaction in matter. Because these particles penetrate only thin absorbers, it is assumed that they interact with matter solely through elastic scattering, with negligible energy loss. This type of interaction has been described quite precisely, because its angular form strongly influences the angular distribution of electrons and positrons in matter. With this code, the number of particles with energies between 100 and 3000 keV transmitted through different media of various thicknesses, as well as their angular distributions, has been calculated, showing good agreement with experimental data. The discrepancies are less than 5% for thicknesses lower than about 30% of the corresponding range in the tested material. As elastic scattering is very anisotropic, the angular distributions resemble a collimated incident beam for very thin foils, becoming slowly more isotropic as absorber thickness is increased. (author)
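A minimal sketch of this kind of analog elastic-scattering simulation (an illustrative reconstruction, not the authors' code): polar angles are sampled from a screened Rutherford cross section by inverse CDF, the direction is rotated at each collision, and the beam becomes progressively more isotropic with the number of collisions. The screening parameter η and the collision counts below are assumed for illustration only.

```python
import math
import random

def sample_mu(eta, rng):
    """Inverse-CDF sample of the scattering-angle cosine for a screened
    Rutherford cross section, pdf(mu) ~ 1 / (1 - mu + 2*eta)**2."""
    xi = rng.random()
    return 1.0 - 2.0 * eta * xi / (1.0 + eta - xi)

def scatter(u, v, w, mu, phi):
    """Rotate the unit direction (u, v, w) by polar cosine mu, azimuth phi."""
    s = math.sqrt(max(0.0, 1.0 - mu * mu))
    if abs(w) > 0.99999:                       # near the polar axis
        return s * math.cos(phi), s * math.sin(phi), mu * math.copysign(1.0, w)
    a = math.sqrt(1.0 - w * w)
    un = mu * u + s * (u * w * math.cos(phi) - v * math.sin(phi)) / a
    vn = mu * v + s * (v * w * math.cos(phi) + u * math.sin(phi)) / a
    wn = mu * w - s * a * math.cos(phi)
    return un, vn, wn

def final_cosines(n_particles, n_collisions, eta=0.01, seed=0):
    """Direction cosine along the beam axis after a fixed number of
    elastic collisions (energy loss neglected, as in the abstract)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_particles):
        u, v, w = 0.0, 0.0, 1.0               # incident along +z
        for _ in range(n_collisions):
            u, v, w = scatter(u, v, w, sample_mu(eta, rng),
                              2.0 * math.pi * rng.random())
        out.append(w)
    return out

if __name__ == "__main__":
    for n_coll in (2, 20, 200):               # thin foil -> thicker foil
        cz = final_cosines(5_000, n_coll)
        print(n_coll, sum(cz) / len(cz))      # beam-like -> nearly isotropic
```

The mean axial cosine decays geometrically with the number of collisions, reproducing the qualitative trend reported in the abstract: collimated for very thin foils, slowly isotropic as thickness grows.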
International Nuclear Information System (INIS)
Randolph Schwarz; Leland L. Carter; Alysia Schwarz
2005-01-01
Monte Carlo N-Particle Transport Code (MCNP) is the code of choice for doing complex neutron/photon/electron transport calculations for the nuclear industry and research institutions. The Visual Editor for Monte Carlo N-Particle is internationally recognized as the best code for visually creating and graphically displaying input files for MCNP. The work performed in this grant was used to enhance the capabilities of the MCNP Visual Editor to allow it to read in both 2D and 3D Computer Aided Design (CAD) files, allowing the user to electronically generate a valid MCNP input geometry
SPHERE: a spherical-geometry multimaterial electron/photon Monte Carlo transport code
International Nuclear Information System (INIS)
Halbleib, J.A. Sr.
1977-06-01
SPHERE provides experimenters and theorists with a method for the routine solution of coupled electron/photon transport through multimaterial configurations possessing spherical symmetry. Emphasis is placed upon operational simplicity without sacrificing the rigor of the model. SPHERE combines condensed-history electron Monte Carlo with conventional single-scattering photon Monte Carlo in order to describe the transport of all generations of particles from several MeV down to 1.0 and 10.0 keV for electrons and photons, respectively. The model is more accurate at the higher energies, with a less rigorous description of the particle cascade at energies where the shell structure of the transport media becomes important. Flexibility of construction permits the user to tailor the model to specific applications and to extend the capabilities of the model to more sophisticated applications through relatively simple update procedures. 8 figs., 3 tables
A hybrid transport-diffusion method for Monte Carlo radiative-transfer simulations
International Nuclear Information System (INIS)
Densmore, Jeffery D.; Urbatsch, Todd J.; Evans, Thomas M.; Buksas, Michael W.
2007-01-01
Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Monte Carlo particle-transport simulations in diffusive media. If standard Monte Carlo is used in such media, particle histories will consist of many small steps, resulting in a computationally expensive calculation. In DDMC, particles take discrete steps between spatial cells according to a discretized diffusion equation. Each discrete step replaces many small Monte Carlo steps, thus increasing the efficiency of the simulation. In addition, given that DDMC is based on a diffusion equation, it should produce accurate solutions if used judiciously. In practice, DDMC is combined with standard Monte Carlo to form a hybrid transport-diffusion method that can accurately simulate problems with both diffusive and non-diffusive regions. In this paper, we extend previously developed DDMC techniques in several ways that improve the accuracy and utility of DDMC for nonlinear, time-dependent, radiative-transfer calculations. The use of DDMC in these types of problems is advantageous since, due to the underlying linearizations, optically thick regions appear to be diffusive. First, we employ a diffusion equation that is discretized in space but is continuous in time. Not only is this methodology theoretically more accurate than temporally discretized DDMC techniques, but it also has the benefit that a particle's time is always known. Thus, there is no ambiguity regarding what time to assign a particle that leaves an optically thick region (where DDMC is used) and begins transporting by standard Monte Carlo in an optically thin region. Also, we treat the interface between optically thick and optically thin regions with an improved method, based on the asymptotic diffusion-limit boundary condition, that can produce accurate results regardless of the angular distribution of the incident Monte Carlo particles. Finally, we develop a technique for estimating radiation momentum deposition during the
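The motivation for DDMC — many small analog steps in diffusive media — can be seen in a toy sketch (illustrative only, not the authors' implementation): the mean number of free flights before escape from a purely scattering slab grows roughly quadratically with its optical thickness, which is the cost that replacing many small steps with one discrete diffusion step avoids.

```python
import random

def analog_steps(thickness, n=2_000, seed=0):
    """Mean number of free flights before escape from a purely
    scattering 1-D slab (thickness in mean free paths), with an
    isotropic source at the slab centre."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n):
        x, steps = thickness / 2.0, 0
        while 0.0 <= x <= thickness:
            mu = 2.0 * rng.random() - 1.0     # isotropic direction cosine
            x += mu * rng.expovariate(1.0)    # exponential free path
            steps += 1
        total += steps
    return total / n

if __name__ == "__main__":
    for t in (4.0, 16.0):
        print(t, analog_steps(t))             # grows roughly like t**2
```

In an optically thick (diffusive) cell this step count explodes, which is exactly the regime where the hybrid method hands the particle to the discretized diffusion equation.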
Review of Monte Carlo methods for particle multiplicity evaluation
International Nuclear Information System (INIS)
Armesto, Nestor
2005-01-01
I present a brief review of the existing models for particle multiplicity evaluation in heavy ion collisions which are at our disposal in the form of Monte Carlo simulators. Models are classified according to the physical mechanisms with which they try to describe the different stages of a high-energy collision between heavy nuclei. A comparison of predictions, as available at the beginning of year 2000, for multiplicities in central AuAu collisions at the BNL Relativistic Heavy Ion Collider (RHIC) and PbPb collisions at the CERN Large Hadron Collider (LHC) is provided
Minimizing the cost of splitting in Monte Carlo radiation transport simulation
Energy Technology Data Exchange (ETDEWEB)
Juzaitis, R.J.
1980-10-01
A deterministic analysis of the computational cost associated with geometric splitting/Russian roulette in Monte Carlo radiation transport calculations is presented. Appropriate integro-differential equations are developed for the first and second moments of the Monte Carlo tally as well as the time per particle history, given that splitting with Russian roulette takes place at one (or several) internal surfaces of the geometry. The equations are solved using a standard S_N (discrete ordinates) solution technique, allowing for the prediction of computer cost (formulated as the product of sample variance and time per particle history, σ_s²·τ_p) associated with a given set of splitting parameters. Optimum splitting surface locations and splitting ratios are determined. Benefits of such an analysis are particularly noteworthy for transport problems in which splitting is apt to be extensively employed (e.g., deep penetration calculations).
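A minimal, self-contained sketch of geometric splitting with Russian roulette (a rod-geometry toy model, not the S_N cost analysis of this paper): particles split into k copies when crossing an importance surface going deeper into the slab, and play roulette with survival probability 1/k when crossing back, leaving the transmission tally unbiased. All parameters here are illustrative.

```python
import math
import random

def rod_transmission(thickness, c=0.5, n=20_000, k=1, seed=0):
    """Rod-geometry transport: exponential free flights along the +1/-1
    directions, scattering probability c at each collision (otherwise
    absorption).  Importance surfaces sit at integer depths: a particle
    crossing forward is split into k copies of weight w/k; one crossing
    backward plays Russian roulette with survival probability 1/k.
    k=1 recovers the analog game, and the tally stays unbiased."""
    rng = random.Random(seed)
    score = 0.0
    for _ in range(n):
        stack = [(0.0, 1, 1.0)]            # (position, direction, weight)
        while stack:
            x, d, w = stack.pop()
            alive = True
            while alive:
                s = rng.expovariate(1.0)   # flight length in mean free paths
                if x == math.floor(x):     # sitting exactly on a surface
                    surf = x + d
                elif d > 0:
                    surf = math.floor(x) + 1.0
                else:
                    surf = math.floor(x)
                if s < abs(surf - x):      # collision before the next surface
                    x += d * s
                    if rng.random() < c:
                        d = 1 if rng.random() < 0.5 else -1  # scatter
                    else:
                        alive = False                        # absorbed
                elif surf >= thickness and d > 0:
                    score += w             # transmitted through the slab
                    alive = False
                elif surf <= 0.0 and d < 0:
                    alive = False          # leaked back out of the front face
                elif d > 0:                # forward crossing: split
                    x = surf
                    for _ in range(k - 1):
                        stack.append((x, d, w / k))
                    w /= k
                else:                      # backward crossing: roulette
                    x = surf
                    if rng.random() < 1.0 / k:
                        w *= k
                    else:
                        alive = False
    return score / n

if __name__ == "__main__":
    print("analog    (k=1):", rod_transmission(5.0, k=1))
    print("splitting (k=2):", rod_transmission(5.0, k=2))
```

Both games estimate the same transmission; choosing the surface locations and the ratio k to minimize the cost product σ_s²·τ_p is precisely the optimization problem the paper solves deterministically.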
Crevillén-García, D; Power, H
2017-08-01
In this study, we apply four Monte Carlo simulation methods, namely, Monte Carlo, quasi-Monte Carlo, multilevel Monte Carlo and multilevel quasi-Monte Carlo to the problem of uncertainty quantification in the estimation of the average travel time during the transport of particles through random heterogeneous porous media. We apply the four methodologies to a model problem where the only input parameter, the hydraulic conductivity, is modelled as a log-Gaussian random field by using direct Karhunen-Loève decompositions. The random terms in such expansions represent the coefficients in the equations. Numerical calculations demonstrating the effectiveness of each of the methods are presented. A comparison of the computational cost incurred by each of the methods for three different tolerances is provided. The accuracy of the approaches is quantified via the mean square error.
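The multilevel idea can be sketched on a generic toy problem (Euler-Maruyama paths of geometric Brownian motion rather than this paper's porous-media flow; all parameters are illustrative): the expectation at the finest discretization is written as a telescoping sum of a coarse estimate plus coupled fine-minus-coarse corrections, so most samples can be spent on the cheap coarse levels.

```python
import math
import random

def coupled_level(level, n_samples, rng, T=1.0, mu=0.05, sig=0.2, s0=1.0):
    """One MLMC level for Euler-Maruyama geometric Brownian motion:
    a fine path with 2**level steps is coupled to a coarse path with
    half as many steps by summing pairs of Brownian increments.
    Returns the sample mean of (fine - coarse), or of the fine value
    alone on level 0."""
    nf = 2 ** level
    dt = T / nf
    total = 0.0
    for _ in range(n_samples):
        sf = sc = s0
        dw_pair = 0.0
        for step in range(nf):
            dw = rng.gauss(0.0, math.sqrt(dt))
            sf += mu * sf * dt + sig * sf * dw
            dw_pair += dw
            if level > 0 and step % 2 == 1:   # coarse step uses both increments
                sc += mu * sc * (2.0 * dt) + sig * sc * dw_pair
                dw_pair = 0.0
        total += sf if level == 0 else sf - sc
    return total / n_samples

def mlmc_estimate(max_level, n_per_level, seed=0):
    """Telescoping estimator: E[P_L] = E[P_0] + sum_l E[P_l - P_(l-1)]."""
    rng = random.Random(seed)
    return sum(coupled_level(l, n_per_level[l], rng)
               for l in range(max_level + 1))

if __name__ == "__main__":
    est = mlmc_estimate(4, [4_000, 2_000, 1_000, 500, 250])
    print("MLMC estimate:", est, " exact E[S_T]:", math.exp(0.05))
```

Because the coupled fine-minus-coarse corrections have small variance, the sample counts can decay with level, which is the source of the cost savings the paper quantifies for its three tolerances.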
MC++: A parallel, portable, Monte Carlo neutron transport code in C++
International Nuclear Information System (INIS)
Lee, S.R.; Cummings, J.C.; Nolen, S.D.
1997-01-01
MC++ is an implicit multi-group Monte Carlo neutron transport code written in C++ and based on the Parallel Object-Oriented Methods and Applications (POOMA) class library. MC++ runs in parallel on and is portable to a wide variety of platforms, including MPPs, SMPs, and clusters of UNIX workstations. MC++ is being developed to provide transport capabilities to the Accelerated Strategic Computing Initiative (ASCI). It is also intended to form the basis of the first transport physics framework (TPF), which is a C++ class library containing appropriate abstractions, objects, and methods for the particle transport problem. The transport problem is briefly described, as well as the current status and algorithms in MC++ for solving the transport equation. The alpha version of the POOMA class library is also discussed, along with the implementation of the transport solution algorithms using POOMA. Finally, a simple test problem is defined and performance and physics results from this problem are discussed on a variety of platforms
Importance estimation in Monte Carlo modelling of neutron and photon transport
International Nuclear Information System (INIS)
Mickael, M.W.
1992-01-01
The estimation of neutron and photon importance in a three-dimensional geometry is achieved using a coupled Monte Carlo and diffusion theory calculation. The parameters required for the solution of the multigroup adjoint diffusion equation are estimated from an analog Monte Carlo simulation of the system under investigation. The solution of the adjoint diffusion equation is then used as an estimate of the particle importance in the actual simulation. This approach provides an automated and efficient variance reduction method for Monte Carlo simulations. The technique has been successfully applied to Monte Carlo simulation of neutron and coupled neutron-photon transport in the nuclear well-logging field. The results show that the importance maps obtained in a few minutes of computer time using this technique are in good agreement with Monte Carlo generated importance maps that require prohibitive computing times. The application of this method to Monte Carlo modelling of the response of neutron porosity and pulsed neutron instruments has resulted in major reductions in computation time. (Author)
Energy Technology Data Exchange (ETDEWEB)
Both, J.P.; Mazzolo, A.; Peneliau, Y.; Petit, O.; Roesslinger, B
2003-07-01
This manual relates to version 4.3 of the TRIPOLI-4 code. TRIPOLI-4 is a computer code simulating the transport of neutrons, photons, electrons and positrons. It can be used for radiation shielding calculations (long-distance propagation with flux attenuation in non-multiplying media) and neutronics calculations (fissile media, on a criticality or sub-criticality basis). This makes it possible to calculate k{sub eff} (for criticality), fluxes, currents, reaction rates and multi-group cross-sections. TRIPOLI-4 is a three-dimensional code that uses the Monte Carlo method. It allows for a point-wise description of cross-sections in energy as well as multi-group homogenized cross-sections, and features two modes of geometrical representation: surface-based and combinatorial. The code uses cross-section libraries in ENDF/B format (such as JEF2-2, ENDF/B-VI and JENDL) for the point-wise description, and cross-sections in APOTRIM format (from the APOLLO2 code) or a format specific to TRIPOLI-4 for the multi-group description. (authors)
Monte Carlo Simulation for particle behavior of recycling neutrals in a tokamak divertor region
International Nuclear Information System (INIS)
Kim, D. K.; Hong, S. H.
1997-01-01
The steady-state behavior of recycling neutral atoms in a tokamak edge region has been analyzed through a two-dimensional Monte Carlo simulation. A particle tracking algorithm used in earlier research on neutral particle transport is applied to this Monte Carlo simulation in order to perform more accurate calculations with the EDGETRAN code, which was previously developed for two-dimensional edge plasma transport in the authors' laboratory. The physical model of neutral recycling includes charge-exchange and ionization interactions between plasmas and neutral atoms. The reflection processes of incident particles on the device wall are described by empirical formulas. Calculations for density, energy, and velocity distributions of neutral deuterium-tritium atoms have been carried out for a medium-sized tokamak with a double-null configuration based on the KT-2 conceptual design. The input plasma parameters, such as plasma density, ion and electron temperatures, and ion fluid velocity, are provided from the EDGETRAN calculations. As a result of the present numerical analysis, it is noticed that a significant drop in the neutral atom density appears in the region of high plasma density, and that the distribution of neutral energy is similar to that of the plasma ions, as frequently reported in other studies. Relations between edge plasma conditions and the neutral recycling behavior are discussed from the numerical results obtained herein. (author)
Monte-Carlo simulation and microdosimetry analysis of an α-particle source for cell irradiation
International Nuclear Information System (INIS)
Belchior, A.; Teles, P.; Vaz, P.; Peralta, L.; Almeida, P.
2010-01-01
The application of Monte Carlo methods to microdosimetry is an open issue. We used the MCNPX Monte Carlo code for the assessment of several physical parameters of relevance in microdosimetry. These parameters, such as dose distribution and linear energy transfer (LET), are evaluated through the irradiation of a cell monolayer. In this work, we report on the computational results obtained for energy and LET spectra in a monolayer. These results were obtained using MCNPX and compared to the results obtained with the Stopping and Range of Ions in Matter (SRIM) code, a computational tool that solves the transport equation of alpha particles using analytical methods. The simulation results were also compared to experimental data. To do this, we used an experimental setup consisting of an α-particle irradiator with a ²¹⁰Po radioactive source, calibrated using a Si(Li) surface-barrier detector under the specific conditions used for cell irradiation. A Monte Carlo model of the experimental setup was implemented using MCNPX. In order to perform a detailed and realistic simulation, all the experimental conditions were taken into account. The main challenges of this simulation arise from the geometry of the experimental setup, which involves different layers of materials with micrometric thickness, imposing stringent requirements on the tracking of the α-particles at the micrometer level. Also, the use of biological material means that many additional parameters, such as tissue non-homogeneity, must be taken into account. Monte Carlo results are in good agreement with experimental data. Sources of discrepancy between the computational results and measurements are analyzed. (author)
The use of Monte Carlo radiation transport codes in radiation physics and dosimetry
CERN. Geneva; Ferrari, Alfredo; Silari, Marco
2006-01-01
Transport and interaction of electromagnetic radiation. Interaction models and simulation schemes implemented in modern Monte Carlo codes for the simulation of coupled electron-photon transport will be briefly reviewed. In these codes, photon transport is simulated by using the detailed scheme, i.e., interaction by interaction. Detailed simulation is easy to implement, and the reliability of the results is limited only by the accuracy of the adopted cross sections. Simulations of electron and positron transport are more difficult, because these particles undergo a large number of interactions in the course of their slowing down. Different schemes for simulating electron transport will be discussed. Condensed algorithms, which rely on multiple-scattering theories, are comparatively fast, but less accurate than mixed algorithms, in which hard interactions (with energy loss or angular deflection larger than certain cut-off values) are simulated individually. The reliability, and limitations, of electron-interacti...
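The "detailed" scheme described above can be illustrated with a toy slab-transmission problem, sampling every interaction individually. The isotropic, energy-independent scattering used here is a simplifying assumption for the sketch, not what production codes implement.

```python
import math
import random

def slab_transmission(thickness, sigma_t, scatter_prob, n_hist, seed=0):
    """Detailed (interaction-by-interaction) photon Monte Carlo in a 1-D slab.

    Returns the fraction of source photons leaking through the far face.
    Scattering is isotropic and energy-independent -- a toy assumption.
    """
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_hist):
        x, mu = 0.0, 1.0                  # normally incident on the near face
        while True:
            # exponential flight to the next interaction site
            x += mu * (-math.log(1.0 - rng.random()) / sigma_t)
            if x >= thickness:
                transmitted += 1          # leaked through the far face
                break
            if x < 0.0:
                break                     # backscattered out of the slab
            if rng.random() >= scatter_prob:
                break                     # absorbed at this collision
            mu = 2.0 * rng.random() - 1.0  # isotropic re-emission (simplified)
    return transmitted / n_hist

t = slab_transmission(thickness=2.0, sigma_t=1.0, scatter_prob=0.5, n_hist=50000)
# t exceeds the uncollided transmission exp(-2), since scattered photons
# also contribute to leakage
```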
Novotny, M.A.
2010-02-01
The efficiency of dynamic Monte Carlo algorithms for off-lattice systems composed of particles is studied for the case of a single impurity particle. The theoretical efficiencies of the rejection-free method and of the Monte Carlo with Absorbing Markov Chains method are given. Simulation results are presented to confirm the theoretical efficiencies. © 2010.
Bouchard, Hugo; Bielajew, Alex
2015-07-07
To establish a theoretical framework for generalizing Monte Carlo transport algorithms by adding external electromagnetic fields to the Boltzmann radiation transport equation in a rigorous and consistent fashion. Using first principles, the Boltzmann radiation transport equation is modified by adding a term describing the variation of the particle distribution due to the Lorentz force. The implications of this new equation are evaluated by investigating the validity of Fano's theorem. Additionally, Lewis' approach to multiple scattering theory in infinite homogeneous media is redefined to account for the presence of external electromagnetic fields. The modified equation yields a description consistent with the deterministic laws of motion as well as with probabilistic methods of solution. The time-independent Boltzmann radiation transport equation is generalized to account for the electromagnetic forces through an additional operator similar to the interaction term. Fano's and Lewis' approaches are restated in terms of this new equation. Fano's theorem is found not to apply in the presence of electromagnetic fields. A Lewis-type theory for electron multiple scattering and its moments, accounting for the coupling between the Lorentz force and multiple elastic scattering, is derived. However, further investigation is required to develop useful algorithms for Monte Carlo and deterministic transport methods. To test the accuracy of Monte Carlo transport algorithms in the presence of electromagnetic fields, the Fano cavity test, as currently defined, cannot be applied; therefore, new tests must be designed for this specific application. A multiple scattering theory that accurately couples the Lorentz force with elastic scattering could improve Monte Carlo efficiency. The present study proposes a new theoretical framework for developing such algorithms.
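Along a particle track, the Lorentz-force term corresponds to the deterministic equation of motion dv/dt = (q/m)(E + v × B). One standard way to integrate it in transport and particle-in-cell codes (shown here as an illustration, not as the method of the paper) is the Boris push, which preserves the speed exactly in a pure magnetic field.

```python
def boris_push(v, e_field, b_field, qm_dt):
    """One Boris step for velocity v (3-tuple), fields E and B (3-tuples).

    qm_dt = (q/m) * dt. Returns the updated velocity: a half electric
    kick, an exact magnetic rotation, then a second half electric kick.
    """
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0])
    half = 0.5 * qm_dt
    # half electric kick
    vm = tuple(v[i] + half * e_field[i] for i in range(3))
    # magnetic rotation vectors t and s = 2t / (1 + |t|^2)
    t = tuple(half * b for b in b_field)
    s_den = 1.0 + sum(ti * ti for ti in t)
    s = tuple(2.0 * ti / s_den for ti in t)
    vp = tuple(vm[i] + cross(vm, t)[i] for i in range(3))
    vr = tuple(vm[i] + cross(vp, s)[i] for i in range(3))
    # second half electric kick
    return tuple(vr[i] + half * e_field[i] for i in range(3))

# Uniform B along z, no E: the velocity rotates in the x-y plane
v = (1.0, 0.0, 0.0)
for _ in range(1000):
    v = boris_push(v, (0.0, 0.0, 0.0), (0.0, 0.0, 1.0), 0.1)
speed2 = sum(c * c for c in v)  # |v|^2 is conserved by the rotation
```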
Parallelism in continuous energy Monte Carlo method for neutron transport
Energy Technology Data Exchange (ETDEWEB)
Uenohara, Yuji (Nuclear Engineering Lab., Toshiba Corp. (Japan))
1993-04-01
The continuous energy Monte Carlo code VIM was implemented on PRODIGY, a prototype highly parallel computer developed by TOSHIBA Corporation. The author distributed nuclear data among the processing elements (PEs) in order to study domain decomposition of the velocity space. Eigenvalue problems for a 1-D plate-cell infinite lattice mockup of ZPR-6-7 were examined. For the geometrical space, the PEs were assigned to domains corresponding to nuclear fuel bundles in a typical boiling water reactor. The author estimated the parallelization efficiencies for both a highly parallel and a massively parallel computer. The communication overhead arising from neutron transport was negligible owing to the heavy computational load of the Monte Carlo simulation. In the case of highly parallel computers, the communication overhead scarcely affected the parallelization efficiency. In the case of massively parallel computers, the control of the PEs resulted in considerable communication overhead. (orig.)
Parallelism in continuous energy Monte Carlo method for neutron transport
International Nuclear Information System (INIS)
Uenohara, Yuji
1993-01-01
The continuous energy Monte Carlo code VIM was implemented on PRODIGY, a prototype highly parallel computer developed by TOSHIBA Corporation. The author distributed nuclear data among the processing elements (PEs) in order to study domain decomposition of the velocity space. Eigenvalue problems for a 1-D plate-cell infinite lattice mockup of ZPR-6-7 were examined. For the geometrical space, the PEs were assigned to domains corresponding to nuclear fuel bundles in a typical boiling water reactor. The author estimated the parallelization efficiencies for both a highly parallel and a massively parallel computer. The communication overhead arising from neutron transport was negligible owing to the heavy computational load of the Monte Carlo simulation. In the case of highly parallel computers, the communication overhead scarcely affected the parallelization efficiency. In the case of massively parallel computers, the control of the PEs resulted in considerable communication overhead. (orig.)
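The qualitative finding above (overhead negligible at modest PE counts, PE control dominating at very large ones) can be reproduced with a simple fixed-work performance model; the cost constants below are illustrative assumptions, not measurements from the study.

```python
def efficiency(n_pe, t_compute, t_comm_per_pe):
    """Parallel efficiency for a fixed amount of Monte Carlo work.

    t_compute: total single-PE compute time (arbitrary units).
    t_comm_per_pe: communication/PE-control cost added per PE, so the
    overhead grows linearly with the number of PEs (an assumed model).
    """
    t_parallel = t_compute / n_pe + t_comm_per_pe * n_pe
    return (t_compute / n_pe) / t_parallel

e_high = efficiency(64, 3600.0, 0.01)        # "highly parallel" regime
e_massive = efficiency(16384, 3600.0, 0.01)  # "massively parallel" regime
# The same per-PE overhead that is negligible at 64 PEs dominates at 16384.
```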
MCNP: a general Monte Carlo code for neutron and photon transport
Energy Technology Data Exchange (ETDEWEB)
Forster, R.A.; Godfrey, T.N.K.
1985-01-01
MCNP is a very general Monte Carlo neutron-photon transport code system with approximately 250 person-years of Group X-6 code development invested. It is extremely portable, user-oriented, and a true production code, consuming about 60 Cray hours per month among about 150 Los Alamos users. Its data base comprises the best cross-section evaluations available. MCNP contains state-of-the-art traditional and adaptive Monte Carlo techniques that can be applied to the solution of an ever-increasing number of problems. Excellent user-oriented documentation is available for all facets of the MCNP code system. Many useful and important variants of MCNP exist for special applications. The Radiation Shielding Information Center (RSIC) in Oak Ridge, Tennessee, is the contact point for worldwide MCNP code and documentation distribution. A much improved MCNP Version 3A will be available in the fall of 1985, along with new and improved documentation. Future directions in MCNP development will change the meaning of MCNP to Monte Carlo N-Particle, where N particle varieties will be transported.
Condensed history Monte Carlo methods for photon transport problems
International Nuclear Information System (INIS)
Bhan, Katherine; Spanier, Jerome
2007-01-01
We study methods for accelerating Monte Carlo simulations that retain most of the accuracy of conventional Monte Carlo algorithms. These methods - called Condensed History (CH) methods - have been very successfully used to model the transport of ionizing radiation in turbid systems. Our primary objective is to determine whether or not such methods might apply equally well to the transport of photons in biological tissue. In an attempt to unify the derivations, we invoke results obtained first by Lewis, Goudsmit and Saunderson and later improved by Larsen and Tolar. We outline how two of the most promising of the CH models - one based on satisfying certain similarity relations and the second making use of a scattering phase function that permits only discrete directional changes - can be developed using these approaches. The main idea is to exploit the connection between the space-angle moments of the radiance and the angular moments of the scattering phase function. We compare the results obtained when the two CH models studied are used to simulate an idealized tissue transport problem. The numerical results support our findings based on the theoretical derivations and suggest that CH models should play a useful role in modeling light-tissue interactions
A portable, parallel, object-oriented Monte Carlo neutron transport code in C++
International Nuclear Information System (INIS)
Lee, S.R.; Cummings, J.C.; Nolen, S.D.
1997-01-01
We have developed a multi-group Monte Carlo neutron transport code using C++ and the Parallel Object-Oriented Methods and Applications (POOMA) class library. This transport code, called MC++, currently computes k and α-eigenvalues and is portable to and runs parallel on a wide variety of platforms, including MPPs, clustered SMPs, and individual workstations. It contains appropriate classes and abstractions for particle transport and, through the use of POOMA, for portable parallelism. Current capabilities of MC++ are discussed, along with physics and performance results on a variety of hardware, including all Accelerated Strategic Computing Initiative (ASCI) hardware. Current parallel performance indicates the ability to compute α-eigenvalues in seconds to minutes rather than hours to days. Future plans and the implementation of a general transport physics framework are also discussed
Adaptively Learning an Importance Function Using Transport Constrained Monte Carlo
International Nuclear Information System (INIS)
Booth, T.E.
1998-01-01
It is well known that a Monte Carlo estimate can be obtained with zero variance if an exact importance function for the estimate is known. There are many ways that one might iteratively seek an ever more exact importance function. This paper describes a method that has obtained such functions, empirically producing an error that drops exponentially with computer time. The method described herein constrains the importance function to satisfy the (adjoint) Boltzmann transport equation. This constraint is imposed by using the known form of the solution, usually referred to as the Case eigenfunction solution
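The zero-variance property invoked above is easy to demonstrate on a toy integral: if samples are drawn from a pdf exactly proportional to the integrand, every score equals the integral and the sample variance vanishes. A minimal sketch, with an invented integrand unrelated to the transport problem:

```python
import math
import random

def estimate(f, sampler, pdf, n, seed=0):
    """Monte Carlo estimate of the integral of f over [0, 1], with sample variance."""
    rng = random.Random(seed)
    scores = []
    for _ in range(n):
        x = sampler(rng)
        scores.append(f(x) / pdf(x))  # importance-sampling score
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)
    return mean, var

f = lambda x: 2.0 * x  # integral over [0, 1] is exactly 1

# Uniform sampling: pdf(x) = 1
m_u, v_u = estimate(f, lambda r: r.random(), lambda x: 1.0, 10000)

# Exact importance sampling: pdf(x) = 2x (proportional to f), sampled by
# inversion as x = sqrt(u). Every score is f(x)/p(x) = 1, the exact answer.
m_i, v_i = estimate(f, lambda r: math.sqrt(1.0 - r.random()),
                    lambda x: 2.0 * x, 10000)
```

With the exact importance pdf the estimator returns the answer with zero variance; the iterative scheme of the paper can be viewed as successively approaching such a pdf.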
KAMCCO, a reactor physics Monte Carlo neutron transport code
International Nuclear Information System (INIS)
Arnecke, G.; Borgwaldt, H.; Brandl, V.; Lalovic, M.
1976-06-01
KAMCCO is a 3-dimensional reactor Monte Carlo code for fast neutron physics problems. Two options are available for the solution of 1) the inhomogeneous time-dependent neutron transport equation (census time scheme) and 2) the homogeneous static neutron transport equation (generation cycle scheme). The user defines the desired output, e.g. estimates of reaction rates or neutron flux integrated over specified volumes in phase space and time intervals. Such primary quantities can be arbitrarily combined, and ratios of these quantities can be estimated with their errors. The Monte Carlo techniques are mostly analogue (exceptions: importance sampling for collision processes, ELP/MELP, Russian roulette and splitting). Estimates are obtained from the collision and track length estimators. Elastic scattering takes into account first-order anisotropy in the center-of-mass system. Inelastic scattering is processed via the evaporation model or via the excitation of discrete levels. For the calculation of cross sections, the energy is treated as a continuous variable. Cross sections are computed by a) linear interpolation, b) from optionally Doppler-broadened single-level Breit-Wigner resonances, or c) from probability tables (in the region of statistically distributed resonances). (orig.) [de]
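The collision and track-length estimators mentioned above can be contrasted in a toy infinite-medium problem, where both are unbiased estimators of the volume-integrated flux per source particle, 1/Σa. This is a hypothetical sketch, not KAMCCO's implementation.

```python
import math
import random

def flux_estimates(sigma_t, scatter_prob, n_hist, seed=0):
    """Volume-integrated flux per source particle in an infinite medium,
    tallied with both the track-length and the collision estimator."""
    rng = random.Random(seed)
    track_sum, collisions = 0.0, 0
    for _ in range(n_hist):
        while True:
            # flight to the next collision
            track_sum += -math.log(1.0 - rng.random()) / sigma_t
            collisions += 1
            if rng.random() >= scatter_prob:
                break  # absorbed: history ends
    track_length_est = track_sum / n_hist          # sum of path lengths
    collision_est = collisions / (sigma_t * n_hist)  # collisions / sigma_t
    return track_length_est, collision_est

tl, col = flux_estimates(sigma_t=2.0, scatter_prob=0.6, n_hist=40000)
# Analytic value: 1/sigma_a = 1/(sigma_t*(1 - scatter_prob)) = 1.25
```

The two tallies agree within statistics; in real geometries their relative variances differ, which is why codes such as KAMCCO carry both.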
Kinetic Monte Carlo simulation of single-electron multiple-trapping transport in disordered media
Javadi, Mohammad; Abdi, Yaser
2017-12-01
The conventional single-particle Monte Carlo simulation of charge transport in disordered media is based on a truncated density of localized states (DOLS), which benefits from very short execution times. Although this model successfully clarifies the properties of electron transport in moderately disordered media, it overestimates the electron diffusion coefficient for strongly disordered media. The origin of this deviation is discussed in terms of the zero-temperature approximation in the truncated DOLS and the neglect of the spatial occupation of localized states. Here, based on the multiple-trapping regime, we introduce a modified single-particle kinetic Monte Carlo model that can be used to investigate electron transport in any disordered medium, independently of the value of the disorder parameter. In the proposed model, instead of using a truncated DOLS, we employ the raw DOLS. In addition, we introduce an occupation index for localized states to account for the spatial occupation of trap sites. The proposed model is validated in a simple cubic lattice of trap sites for a broad interval of disorder parameters, Fermi levels, and temperatures.
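The multiple-trapping picture above (trap depths drawn from an exponential density of localized states, thermally activated release) can be sketched as a minimal kinetic Monte Carlo step; all parameter values are illustrative, not those of the paper.

```python
import math
import random

def mt_dwell_times(n_events, kT, e0, nu0, seed=1):
    """One multiple-trapping event per sample: draw a trap depth E from an
    exponential density of localized states g(E) ~ exp(-E/e0), then sample
    the thermally activated dwell time from the release rate nu0*exp(-E/kT).

    kT, e0 in eV; nu0 is an attempt frequency in 1/s (all illustrative).
    """
    rng = random.Random(seed)
    times = []
    for _ in range(n_events):
        e_trap = -e0 * math.log(1.0 - rng.random())   # trap depth
        rate = nu0 * math.exp(-e_trap / kT)           # release rate
        times.append(-math.log(1.0 - rng.random()) / rate)  # dwell time
    return times

# Same seed => identical trap depths; only the temperature differs,
# so every dwell time is longer at the lower temperature.
cold = mt_dwell_times(5000, kT=0.0125, e0=0.05, nu0=1e12)
hot = mt_dwell_times(5000, kT=0.050, e0=0.05, nu0=1e12)
```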
ACCEPT: three-dimensional electron/photon Monte Carlo transport code using combinatorial geometry
Energy Technology Data Exchange (ETDEWEB)
Halbleib, J.A. Sr.
1979-05-01
The ACCEPT code provides experimenters and theorists with a method for the routine solution of coupled electron/photon transport through three-dimensional multimaterial geometries described by the combinatorial method. Emphasis is placed upon operational simplicity without sacrificing the rigor of the model. ACCEPT combines condensed-history electron Monte Carlo with conventional single-scattering photon Monte Carlo in order to describe the transport of all generations of particles from several MeV down to 1.0 and 10.0 keV for electrons and photons, respectively. The model is more accurate at the higher energies, with a less rigorous description of the particle cascade at energies where the shell structure of the transport media becomes important. Flexibility of construction permits the user to tailor the model to specific applications and to extend the capabilities of the model to more sophisticated applications through relatively simple update procedures. The ACCEPT code is currently running on the CDC-7600 (66000) where the bulk of the cross-section data and the statistical variables are stored in Large Core Memory (Extended Core Storage).
ACCEPT: three-dimensional electron/photon Monte Carlo transport code using combinatorial geometry
International Nuclear Information System (INIS)
Halbleib, J.A. Sr.
1979-05-01
The ACCEPT code provides experimenters and theorists with a method for the routine solution of coupled electron/photon transport through three-dimensional multimaterial geometries described by the combinatorial method. Emphasis is placed upon operational simplicity without sacrificing the rigor of the model. ACCEPT combines condensed-history electron Monte Carlo with conventional single-scattering photon Monte Carlo in order to describe the transport of all generations of particles from several MeV down to 1.0 and 10.0 keV for electrons and photons, respectively. The model is more accurate at the higher energies, with a less rigorous description of the particle cascade at energies where the shell structure of the transport media becomes important. Flexibility of construction permits the user to tailor the model to specific applications and to extend the capabilities of the model to more sophisticated applications through relatively simple update procedures. The ACCEPT code is currently running on the CDC-7600 (66000) where the bulk of the cross-section data and the statistical variables are stored in Large Core Memory (Extended Core Storage)
Monte-Carlo study of energy deposition by heavy charged particles in sub-cellular volumes
International Nuclear Information System (INIS)
Emfietzoglou, D.; Papamichael, G.; Pathak, A.; Fotopoulos, A.; Nikjoo, H.
2007-01-01
A detailed-history Monte-Carlo code is used to study the energy deposition from proton and alpha particle tracks at the sub-cellular level. Inelastic cross sections for both the vapour and liquid phases of water have been implemented in the code in order to explore the influence of non-linear density effects associated with the condensed-phase cellular environment. Results for the energy deposition and its straggling for 0.5 to 5 MeV amu^-1 protons and alpha particles traversing or passing near spherical volumes of 2-200 nm in diameter, relevant to DNA- and chromosome-size targets, are presented. It is shown that the explicit account of δ-ray transport reduces the dose by as much as 10-60%, whereas stochastic fluctuations lead to a relative uncertainty ranging from 20% to more than 100%. Protons and alpha particles of the same velocity exhibit a similar δ-ray effect, whereas the relative uncertainty of the alphas is almost half that of the protons. The effect of the phase is noticeable (10-15%), mainly through differences in the transport of δ-rays, which in liquid water have higher penetration distances. It is expected that the implementation of such results into multi-scale biophysical models of radiation effects will lead to more realistic predictions of the efficacy of new radiotherapeutic modalities that employ either external proton beam irradiation or internal alpha-emitting radionuclides. (authors)
ITS - The integrated TIGER series of coupled electron/photon Monte Carlo transport codes
International Nuclear Information System (INIS)
Halbleib, J.A.; Mehlhorn, T.A.
1985-01-01
The TIGER series of time-independent coupled electron/photon Monte Carlo transport codes is a group of multimaterial, multidimensional codes designed to provide a state-of-the-art description of the production and transport of the electron/photon cascade. The codes follow both electrons and photons from 1.0 GeV down to 1.0 keV, and the user has the option of combining the collisional transport with transport in macroscopic electric and magnetic fields of arbitrary spatial dependence. Source particles can be either electrons or photons. The most important output data are (a) charge and energy deposition profiles, (b) integral and differential escape coefficients for both electrons and photons, (c) differential electron and photon flux, and (d) pulse-height distributions for selected regions of the problem geometry. The base codes of the series differ from one another primarily in their dimensionality and geometric modeling. They include (a) a one-dimensional multilayer code, (b) a code that describes the transport in two-dimensional axisymmetric cylindrical material geometries with a fully three-dimensional description of particle trajectories, and (c) a general three-dimensional transport code which employs a combinatorial geometry scheme. These base codes were designed primarily for describing radiation transport for those situations in which the detailed atomic structure of the transport medium is not important. For some applications, it is desirable to have a more detailed model of the low energy transport. The system includes three additional codes that contain a more elaborate ionization/relaxation model than the base codes. Finally, the system includes two codes that combine the collisional transport of the multidimensional base codes with transport in macroscopic electric and magnetic fields of arbitrary spatial dependence
Computer codes in particle transport physics
International Nuclear Information System (INIS)
Pesic, M.
2004-01-01
Simulation of the transport and interaction of various particles in complex media over a wide energy range (from 1 MeV up to 1 TeV) is a very complicated problem that requires a valid model of the real process in nature and an appropriate solving tool - a computer code and data library. A brief overview of computer codes based on Monte Carlo techniques for the simulation of transport and interaction of hadrons and ions over a wide energy range in three-dimensional (3D) geometry is given. First, attention is paid to the approach to the solution of the problem - a process in nature - through selection of an appropriate 3D model and the corresponding tools: computer codes and cross-section data libraries. The collection and evaluation of data from experimental measurements, together with the theoretical work needed to establish reliable libraries of evaluated cross-section data, is a long, difficult, and not straightforward activity. For this reason, world reference data centers and specialized ones are acknowledged, together with the currently available, state-of-the-art evaluated nuclear data libraries, such as ENDF/B-VI, JEF, JENDL, CENDL, BROND, etc. Codes for experimental and theoretical data evaluation (e.g., SAMMY and GNASH), together with codes for data processing (e.g., NJOY, PREPRO and GRUCON), are briefly described. Examples of data evaluation and data processing to generate computer-usable data libraries are shown. Among the numerous and various computer codes developed in particle transport physics, only the most general ones are described: MCNPX, FLUKA and SHIELD. A short overview of the basic applications of these codes, the physical models implemented with their limitations, and the energy ranges of particles and types of interactions covered, is given. General information about the codes also covers the programming language, operating system, calculation speed and code availability. An example of the increase in computation speed obtained by running the MCNPX code on an MPI cluster, compared to the sequential option of the code, is presented
Suspended particles, colloids and radionuclide transport
International Nuclear Information System (INIS)
Chapman, N.; McKinley, I.; Shea, M.; Smellie, J.
1993-01-01
Radionuclides can be transported either in true solution or associated with suspended particles and colloids. The definitions of colloids and suspended particles are introduced, and the mechanisms by which they can influence radionuclide transport are discussed. The aim of the Pocos de Caldas investigations was to characterise the natural particulate material in the groundwater, to investigate the association of trace elements with this material, and to obtain information on the stability and mobility of the particles. The concentrations of suspended particles measured in the groundwater samples were low; the particles also appear to be immobile. (author) 4 figs
Acceleration of a Monte Carlo radiation transport code
International Nuclear Information System (INIS)
Hochstedler, R.D.; Smith, L.M.
1996-01-01
Execution time for the Integrated TIGER Series (ITS) Monte Carlo radiation transport code has been reduced by careful re-coding of computationally intensive subroutines. Three test cases for the TIGER (1-D slab geometry), CYLTRAN (2-D cylindrical geometry), and ACCEPT (3-D arbitrary geometry) codes were identified and used to benchmark and profile program execution. Based upon these results, sixteen top time-consuming subroutines were examined and nine of them modified to accelerate computations with equivalent numerical output to the original. The results obtained via this study indicate that speedup factors of 1.90 for the TIGER code, 1.67 for the CYLTRAN code, and 1.11 for the ACCEPT code are achievable. copyright 1996 American Institute of Physics
Error reduction techniques for Monte Carlo neutron transport calculations
International Nuclear Information System (INIS)
Ju, J.H.W.
1981-01-01
Monte Carlo methods have been widely applied to problems in nuclear physics, mathematical reliability, communication theory, and other areas. The work in this thesis is developed mainly with neutron transport applications in mind. For nuclear reactor and many other applications, random walk processes have been used to estimate multi-dimensional integrals and obtain information about the solution of integral equations. When the analysis is statistically based such calculations are often costly, and the development of efficient estimation techniques plays a critical role in these applications. All of the error reduction techniques developed in this work are applied to model problems. It is found that the nearly optimal parameters selected by the analytic method for use with GWAN estimator are nearly identical to parameters selected by the multistage method. Modified path length estimation (based on the path length importance measure) leads to excellent error reduction in all model problems examined. Finally, it should be pointed out that techniques used for neutron transport problems may be transferred easily to other application areas which are based on random walk processes. The transport problems studied in this dissertation provide exceptionally severe tests of the error reduction potential of any sampling procedure. It is therefore expected that the methods of this dissertation will prove useful in many other application areas
OGRE, Monte-Carlo System for Gamma Transport Problems
International Nuclear Information System (INIS)
1984-01-01
1 - Nature of physical problem solved: The OGRE programme system was designed to calculate, by Monte Carlo methods, any quantity related to gamma-ray transport. The system is represented by two examples - OGRE-P1 and OGRE-G. The OGRE-P1 programme is a simple prototype which calculates dose rate on one side of a slab due to a plane source on the other side. The OGRE-G programme, a prototype of a programme utilizing a general-geometry routine, calculates dose rate at arbitrary points. A very general source description in OGRE-G may be employed by reading a tape prepared by the user. 2 - Method of solution: Case histories of gamma rays in the prescribed geometry are generated and analyzed to produce averages of any desired quantity which, in the case of the prototypes, are gamma-ray dose rates. The system is designed to achieve generality by ease of modification. No importance sampling is built into the prototypes, a very general geometry subroutine permits the treatment of complicated geometries. This is essentially the same routine used in the O5R neutron transport system. Boundaries may be either planes or quadratic surfaces, arbitrarily oriented and intersecting in arbitrary fashion. Cross section data is prepared by the auxiliary master cross section programme XSECT which may be used to originate, update, or edit the master cross section tape. The master cross section tape is utilized in the OGRE programmes to produce detailed tables of macroscopic cross sections which are used during the Monte Carlo calculations. 3 - Restrictions on the complexity of the problem: Maximum cross-section array information may be estimated by a given formula for a specific problem. The number of regions must be less than or equal to 50
Žukauskaite, A; Plukiene, R; Plukis, A
2007-01-01
Particle accelerators and other high-energy facilities produce penetrating ionizing radiation (neutrons and γ-rays) that must be shielded. The objective of this work was to model photon and neutron transport in various materials commonly used as shielding, such as concrete, iron or graphite. The Monte Carlo method allows one to obtain answers by simulating individual particles and recording some aspects of their average behavior. In this work several nuclear experiments were modeled: AVF 65 – γ-ray beams (1-10 MeV), and HIMAC and ISIS-800 – transport of high-energy neutrons (20-800 MeV) in iron and concrete. The results were then compared with experimental data.
Non deterministic methods for charged particle transport
International Nuclear Information System (INIS)
Besnard, D.C.; Buresi, E.; Hermeline, F.; Wagon, F.
1985-04-01
The coupling of Monte-Carlo methods for solving the Fokker-Planck equation (FPE) with inertial confinement fusion (ICF) codes requires them to be economical and to preserve gross conservation properties. Moreover, the presence in the FPE of diffusion terms, due to collisions between test particles and the background plasma, challenges standard Monte-Carlo (M.C.) techniques when this phenomenon is dominant. We address these problems through the use of a fixed mesh in phase space, which allows us to handle highly variable sources while avoiding any Russian roulette for reducing the sample size. On this mesh, diffusion equations obtained from a splitting of the FPE are also solved. Any non-linear diffusion terms of the FPE can be handled in this manner. Another method, also presented here, is to use a direct particle method for solving the full FPE
MCPT: A Monte Carlo code for simulation of photon transport in tomographic scanners
International Nuclear Information System (INIS)
Prettyman, T.H.; Gardner, R.P.; Verghese, K.
1990-01-01
MCPT is a special-purpose Monte Carlo code designed to simulate photon transport in tomographic scanners. The variance reduction schemes and sampling games present in MCPT were selected to characterize features common to most tomographic scanners. Combined splitting and biasing (CSB) games are used to systematically sample important detection pathways. An efficient splitting game is used to tally particle energy deposition in detection zones. The pulse-height distribution of each detector can be found by convolving the calculated energy deposition distribution with the detector's resolution function. A general geometric modelling package, HERMETOR, is used to describe the geometry of the tomographic scanners and to provide MCPT with the information needed for particle tracking. MCPT's modelling capabilities are described and preliminary experimental validation is presented. (orig.)
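The splitting and biasing games mentioned above rest on weight conservation: splitting conserves statistical weight exactly, while Russian roulette conserves it in expectation. A minimal sketch of the two primitives (not MCPT's actual CSB implementation):

```python
import random

def split(weight, n):
    """Split one particle into n copies; total statistical weight is conserved."""
    return [weight / n] * n

def russian_roulette(weight, w_survive, rng):
    """Play roulette on a low-weight particle: it survives with probability
    weight/w_survive, carrying weight w_survive, and is killed otherwise.
    The expected surviving weight equals the input weight."""
    if rng.random() < weight / w_survive:
        return w_survive
    return None  # killed

rng = random.Random(0)
survivors = [russian_roulette(0.2, 1.0, rng) for _ in range(100000)]
avg = sum(w for w in survivors if w is not None) / len(survivors)
# avg is close to the input weight 0.2: weight is conserved in expectation
```

Splitting is played in important regions (toward the detectors), roulette in unimportant ones; combining the two with source/collision biasing gives the CSB games described in the abstract.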
Optimal transport of particle beams
International Nuclear Information System (INIS)
Allen, C.K.; Reiser, M.
1997-01-01
The transport and matching problem for a low energy transport system is approached from a control theoretical viewpoint. We develop a model for a beam transport and matching section based on a multistage control network. To this model we apply the principles of optimal control to formulate techniques aiding in the design of the transport and matching section. Both nonlinear programming and dynamic programming techniques are used in the optimization. These techniques are implemented in a computer-aided design program called SPOT. Examples are presented to demonstrate the procedure and outline the results. (orig.)
Particle and heat transport in Tokamaks
International Nuclear Information System (INIS)
Chatelier, M.
1984-01-01
A limitation on the performance of tokamaks is heat transport across magnetic surfaces. The principles of ''classical'' or ''neoclassical'' transport - i.e., transport due to particle and heat fluxes arising from Coulomb scattering of charged particles in a magnetic field - are presented. It is shown that, besides this classical effect, ''anomalous'' transport occurs; it is associated with the existence of fluctuating electric or magnetic fields which can appear in the plasma as a result of charge and current perturbations. Tearing modes and drift-wave instabilities are taken as typical examples. Experimental results are presented which show that ions behave approximately in a classical way whereas electrons are strongly anomalous [fr]
Particle Transport in Parallel-Plate Reactors
Energy Technology Data Exchange (ETDEWEB)
Rader, D.J.; Geller, A.S.
1999-08-01
A major cause of semiconductor yield degradation is contaminant particles that deposit on wafers while they reside in processing tools during integrated circuit manufacturing. This report presents numerical models for assessing particle transport and deposition in a parallel-plate geometry characteristic of a wide range of single-wafer processing tools: uniform downward flow exiting a perforated-plate showerhead separated by a gap from a circular wafer resting on a parallel susceptor. Particles are assumed to originate either upstream of the showerhead or from a specified position between the plates. The physical mechanisms controlling particle deposition and transport (inertia, diffusion, fluid drag, and external forces) are reviewed, with an emphasis on conditions encountered in semiconductor process tools (i.e., sub-atmospheric pressures and submicron particles). Isothermal flow is assumed, although small temperature differences are allowed to drive particle thermophoresis. Numerical solutions of the flow field are presented which agree with an analytic, creeping-flow expression for Re < 4. Deposition is quantified by use of a particle collection efficiency, which is defined as the fraction of particles in the reactor that deposit on the wafer. Analytic expressions for collection efficiency are presented for the limiting case where external forces control deposition (i.e., neglecting particle diffusion and inertia). Deposition from simultaneous particle diffusion and external forces is analyzed by an Eulerian formulation; for creeping flow and particles released from a planar trap, the analysis yields an analytic, integral expression for particle deposition based on process and particle properties. Deposition from simultaneous particle inertia and external forces is analyzed by a Lagrangian formulation, which can describe inertia-enhanced deposition resulting from particle acceleration in the showerhead. An approximate analytic expression is derived for particle
Particle Markov Chain Monte Carlo Techniques of Unobserved Component Time Series Models Using Ox
DEFF Research Database (Denmark)
Nonejad, Nima
This paper details particle Markov chain Monte Carlo (PMCMC) techniques for the analysis of unobserved component time series models using several economic data sets. PMCMC combines the particle filter with the Metropolis-Hastings algorithm. Overall, PMCMC provides a very compelling, computationally fast...
Monte Carlo simulation of particle interactions at high dynamic range : Advancing beyond the googol
Ormel, C. W.; Spaans, M.
2008-01-01
We present a method which extends Monte Carlo studies to situations that require a large dynamic range in particle number. The underlying idea is that, in order to calculate the collisional evolution of a system, some particle interactions are more important than others and require more resolution,
Stochastic transport of particles across single barriers
International Nuclear Information System (INIS)
Kreuter, Christian; Siems, Ullrich; Henseler, Peter; Nielaba, Peter; Leiderer, Paul; Erbe, Artur
2012-01-01
Transport phenomena of interacting particles are of high interest for many applications in biology and mesoscopic systems. Here we present measurements on colloidal particles that are confined in narrow channels on a substrate and interact with a barrier that impedes motion along the channel. The substrate is tilted so that the particles are driven towards the barrier and, if the energy gained from the tilt is large enough, surpass it by thermal activation. We therefore study the influence of this barrier, as well as of particle interactions, on particle transport through such systems. All experiments are supported by Brownian dynamics simulations, which complement the experiments by covering a large range of parameter space that cannot be accessed experimentally.
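A minimal Brownian dynamics sketch of this setup: an overdamped particle driven by a constant tilt force toward a Gaussian barrier, with thermal noise permitting activated crossing. Units with kT = mobility = 1 and all parameter values are illustrative assumptions, not the authors' simulation.

```python
import math, random

def brownian_step(x, tilt, barrier, width, D, dt, rng):
    # force = tilt - dU/dx with barrier potential U(x) = barrier*exp(-x^2/(2w^2))
    f = tilt + barrier * x / width**2 * math.exp(-x**2 / (2.0 * width**2))
    return x + D * f * dt + math.sqrt(2.0 * D * dt) * rng.gauss(0.0, 1.0)

def run(tilt=1.0, barrier=3.0, width=0.5, D=1.0, dt=1e-3,
        steps=20000, seed=1):
    rng = random.Random(seed)
    x = -5.0                       # start well before the barrier at x = 0
    for _ in range(steps):
        x = brownian_step(x, tilt, barrier, width, D, dt, rng)
    return x                       # x > 0 means the barrier was surpassed

final_x = run()
```

Repeating the run over many seeds and counting crossings gives the barrier-passage statistics studied in the experiment.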
International Nuclear Information System (INIS)
Karriem, Z.; Ivanov, K.; Zamonsky, O.
2011-01-01
This paper presents work performed to develop an integrated Monte Carlo-deterministic transport methodology in which the two methods make use of exactly the same general geometry and multigroup nuclear data. The envisioned application of this methodology is in reactor lattice physics methods development and shielding calculations. The methodology will be based on the Method of Long Characteristics (MOC) and the Monte Carlo N-Particle transport code MCNP5. Important initial developments pertaining to ray tracing and the development of an MOC flux solver for the proposed methodology are described. Results showing the viability of the methodology are presented for two 2-D general geometry transport problems. The essential developments presented are the use of MCNP as a geometry construction and ray-tracing tool for the MOC, verification of the ray-tracing indexing scheme developed to represent the MCNP geometry in the MOC, and verification of the prototype 2-D MOC flux solver. (author)
Particle transport and deposition: basic physics of particle kinetics.
Tsuda, Akira; Henry, Frank S; Butler, James P
2013-10-01
The human body interacts with the environment in many different ways. The lungs interact with the external environment through breathing. The enormously large surface area of the lung with its extremely thin air-blood barrier is exposed to particles suspended in the inhaled air. The particle-lung interaction may cause deleterious effects on health if the inhaled pollutant aerosols are toxic. Conversely, this interaction can be beneficial for disease treatment if the inhaled particles are therapeutic aerosolized drugs. In either case, an accurate estimation of dose and sites of deposition in the respiratory tract is fundamental to understanding subsequent biological response, and the basic physics of particle motion and engineering knowledge needed to understand these subjects is the topic of this article. A large portion of this article deals with three fundamental areas necessary to the understanding of particle transport and deposition in the respiratory tract. These are: (i) the physical characteristics of particles, (ii) particle behavior in gas flow, and (iii) gas-flow patterns in the respiratory tract. Other areas, such as particle transport in the developing lung and in the diseased lung are also considered. The article concludes with a summary and a brief discussion of areas of future research. © 2013 American Physiological Society. Compr Physiol 3:1437-1471, 2013.
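Item (ii) above, particle behavior in gas flow, is conventionally summarized by the particle relaxation time and Stokes number. The sketch below uses textbook formulas with assumed air viscosity and unit-density particles; the thresholds in the comments are rules of thumb, not values from this article.

```python
def relaxation_time(d_p, rho_p=1000.0, mu=1.8e-5):
    # momentum relaxation time tau = rho_p * d^2 / (18 mu), in seconds
    return rho_p * d_p**2 / (18.0 * mu)

def stokes_number(d_p, U, L):
    # Stk = tau*U/L; Stk << 1: particle follows streamlines (diffusion and
    # sedimentation dominate); Stk ~ 1: inertial impaction matters
    return relaxation_time(d_p) * U / L

# 1 um vs 10 um particles, 1 m/s airway flow, 1 mm characteristic length
stk_small = stokes_number(1e-6, 1.0, 1e-3)
stk_large = stokes_number(10e-6, 1.0, 1e-3)
```

Because Stk scales with diameter squared, a tenfold size increase shifts a particle from flow-following to impaction-dominated deposition.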
An improved Monte Carlo (MC) dose simulation for charged particle cancer therapy
International Nuclear Information System (INIS)
Ying, C. K.; Kamil, W. A.; Shuaib, I. L.; Matsufuji, Naruhiro
2014-01-01
Heavy-particle therapies such as carbon ion therapy are increasingly popular because of the physical characteristics of charged particles and the minimal side effects to patients. Effective treatment requires high-precision dose calculation; in this research work, a Geant4-based Monte Carlo simulation method has been used to calculate the radiation transport and dose distribution. The simulation has the same setting as the treatment room of the Heavy Ion Medical Accelerator in Chiba (HIMAC). The carbon ion beam at the isocentric gantry nozzle at the therapeutic energy of 290 MeV/u was simulated; experimental work was carried out at the National Institute of Radiological Sciences (NIRS), Chiba, Japan, using HIMAC to confirm the accuracy of the dose distributions obtained by the MC method. The Geant4-simulated dose distributions were verified against measurements for the Bragg peak and spread-out Bragg peak (SOBP), respectively. The verification shows that the simulated Bragg peak depth-dose and SOBP distributions are in good agreement with measurements. Overall, the study showed that Geant4 can be fully applied to simulation in the heavy-ion therapy field; further work is needed to refine and improve the Geant4 MC simulations.
Monte Carlo impurity transport modeling in the DIII-D transport
International Nuclear Information System (INIS)
Evans, T.E.; Finkenthal, D.F.
1998-04-01
A description of the carbon transport and sputtering physics contained in the Monte Carlo Impurity (MCI) transport code is given. Examples of statistically significant carbon transport pathways are examined using MCI's unique tracking visualizer, and a mechanism for enhanced carbon accumulation on the high-field side of the divertor chamber is discussed. Comparisons between carbon emissions calculated with MCI and those measured in the DIII-D tokamak are described. Good qualitative agreement is found between 2D carbon emission patterns calculated with MCI and experimentally measured carbon patterns. While uncertainties in the sputtering physics, atomic data, and transport models have made quantitative comparisons with experiments more difficult, recent results using a physics-based model for physical and chemical sputtering have yielded simulations with about 50% of the total carbon radiation measured in the divertor. These results and plans for future improvement in the physics models and atomic data are discussed.
Capturing inertial particle transport in turbulent flows
Stott, Harry; Lawrie, Andrew; Szalai, Robert
2017-11-01
The natural world is replete with examples of particle advection; mankind is both a beneficiary and a sufferer of the consequences. As such, the study of inertial particle dynamics, both aerosol and bubble, is vitally important. In many interesting examples such as cloud microphysics, sedimentation, or sewage transport, many millions of particles are advected in a relatively small volume of fluid. It is impossible to model these processes computationally by simulating every particle. Instead, we advect the probability density field of particle positions, allowing unbiased sampling of particle behaviour across the domain. Given a 3-dimensional space discretised into cubes, we construct a transport operator that encodes the flow of particles through the faces of the cubes. By assuming that the dynamics of the particles lie close to an inertial manifold, it is possible to preserve the majority of the inertial properties of the particles between time steps. We demonstrate the practical use of this method in a pair of instances: the first is an analogue to cloud microphysics, the turbulent breakdown of Taylor-Green vortices; the second is the case of a turbulent jet, which has application both in sewage pipe outflow and in pesticide spray dynamics. Supported by EPSRC.
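The cube-face transport operator described in this abstract can be caricatured in one dimension: a mass-conserving upwind operator that moves probability mass between grid cells. This is an illustrative discretization under assumed parameters, not the authors' inertial-manifold operator.

```python
def transport_step(rho, velocity, dt, dx):
    # One application of a mass-conserving upwind transport operator on a
    # 1-D periodic grid: each cell keeps (1-c) of its mass and sends c to
    # the downwind neighbour, a toy analogue of the cube-face flux operator.
    n = len(rho)
    c = abs(velocity) * dt / dx            # Courant number, must be <= 1
    assert c <= 1.0
    shift = 1 if velocity > 0 else -1
    out = [0.0] * n
    for j in range(n):
        out[j] += (1.0 - c) * rho[j]           # mass remaining in cell j
        out[(j + shift) % n] += c * rho[j]     # mass advected downwind
    return out

rho = [0.0] * 32
rho[0] = 1.0                 # all probability mass initially in one cell
for _ in range(10):
    rho = transport_step(rho, velocity=1.0, dt=0.5, dx=1.0)
```

After ten steps with Courant number 0.5 the density is a binomial spread over the downwind cells, and the total mass is conserved exactly, the key property such an operator must preserve.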
Approximate models for neutral particle transport calculations in ducts
International Nuclear Information System (INIS)
Ono, Shizuca
2000-01-01
The problem of neutral particle transport in evacuated ducts of arbitrary, but axially uniform, cross-sectional geometry and isotropic reflection at the wall is studied. The model makes use of basis functions to represent the transverse and azimuthal dependences of the particle angular flux in the duct. For the approximation in terms of two basis functions, an improvement in the method is implemented by decomposing the problem into uncollided and collided components. A new quadrature set, more suitable to the problem, is developed and generated by one of the techniques of the constructive theory of orthogonal polynomials. The approximation in terms of three basis functions is developed and implemented to improve the precision of the results. For both models of two and three basis functions, the energy dependence of the problem is introduced through the multigroup formalism. The results of sample problems are compared to literature results and to results of the Monte Carlo code, MCNP. (author)
Charged-particle transport in one-dimensional systems
International Nuclear Information System (INIS)
Muthukrishnan, G.; Gopinath, D.V.
1983-01-01
A semianalytical technique to study the charged-particle transport in one-dimensional finite media is developed. For this purpose, the transport equation is written in the form of coupled integral equations, separating the spatial and energy-angle transmissions. Legendre polynomial representations for the source, flux, and scattering kernel are used to solve the equations. For evaluation of the spatial transmission, discrete ordinate representation in space, energy, and direction cosine is used for the particle and source flux. The integral equations are then solved by the fast iteration technique. The computer code CHASFIT, written on the basis of the above formulation, is described. The fast convergence of the iteration process which is characteristic of charged-particle transport is demonstrated. Convergence studies are carried out with a number of mesh points and polynomial approximations. The method is applied to study the depth-dose distributions due to 140-, 200-, 300-, 400-, 600-, and 740-MeV protons incident normally on a 30-cm-thick tissue slab. The values of the quality factor at the surface and at 5 cm depth, as well as the total average quality factor, are calculated. The results thus obtained are compared with those predicted by the Monte Carlo method. This method can also be applied to multienergy, multiregion systems with arbitrary degree of anisotropy
Limits on the Efficiency of Event-Based Algorithms for Monte Carlo Neutron Transport
Energy Technology Data Exchange (ETDEWEB)
Romano, Paul K.; Siegel, Andrew R.
2017-04-16
The traditional form of parallelism in Monte Carlo particle transport simulations, wherein each individual particle history is considered a unit of work, does not lend itself well to data-level parallelism. Event-based algorithms, which were originally used for simulations on vector processors, may offer a path toward better utilizing data-level parallelism in modern computer architectures. In this study, a simple model is developed for estimating the efficiency of the event-based particle transport algorithm under two sets of assumptions. Data collected from simulations of four reactor problems using OpenMC were then used in conjunction with the models to calculate the speedup due to vectorization as a function of two parameters: the size of the particle bank and the vector width. When each event type is assumed to have constant execution time, the achievable speedup is directly related to the particle bank size. We observed that the bank size generally needs to be at least 20 times greater than the vector size in order to achieve vector efficiency greater than 90%. When the execution times for events are allowed to vary, however, the vector speedup is also limited by differences in execution time for events being carried out in a single event-iteration. For some problems, this implies that vector efficiencies over 50% may not be attainable. While there are many factors impacting performance of an event-based algorithm that are not captured by our model, it nevertheless provides insights into factors that may be limiting in a real implementation.
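The bank-size effect in the constant-execution-time case can be sketched with a toy model: particles need a random number of events, each event-iteration processes the surviving bank in fixed-width vector passes, and partially filled passes waste lanes. The geometric event count and all parameters are assumptions for illustration, not the paper's model.

```python
import math, random

def vector_efficiency(bank_size, width, mean_events=10.0, seed=0):
    # Each particle needs a geometrically distributed number of events;
    # every event-iteration processes the surviving bank in vector passes
    # of `width` lanes.  Efficiency = useful lane-slots / occupied slots.
    rng = random.Random(seed)
    p = 1.0 / mean_events
    remaining = [1 + int(math.log(1.0 - rng.random()) / math.log(1.0 - p))
                 for _ in range(bank_size)]
    useful = slots = 0
    while any(remaining):
        alive = sum(1 for r in remaining if r > 0)
        useful += alive                              # lanes doing real work
        slots += width * math.ceil(alive / width)    # lanes occupied
        remaining = [max(0, r - 1) for r in remaining]
    return useful / slots

e_small = vector_efficiency(16, 8)      # bank only 2x the vector width
e_big = vector_efficiency(2000, 8)      # bank 250x the vector width
```

Even this toy model reproduces the qualitative finding: a bank much larger than the vector width keeps the lanes full, while a small bank drains into inefficient partial passes.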
Improved cache performance in Monte Carlo transport calculations using energy banding
Siegel, A.; Smith, K.; Felker, K.; Romano, P.; Forget, B.; Beckman, P.
2014-04-01
We present an energy banding algorithm for Monte Carlo (MC) neutral particle transport simulations which depend on large cross section lookup tables. In MC codes, read-only cross section data tables are accessed frequently, exhibit poor locality, and are typically too large to fit in fast memory. Thus, performance is often limited by long latencies to RAM, or by off-node communication latencies when the data footprint is very large and must be decomposed on a distributed memory machine. The proposed energy banding algorithm allows maximal temporal reuse of data in band sizes that can flexibly accommodate different architectural features. The energy banding algorithm is general and has a number of benefits compared to the traditional approach. In the present analysis we explore its potential to achieve improvements in time-to-solution on modern cache-based architectures.
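The core idea, grouping cross-section lookups by energy band so each slice of the table is loaded into fast memory once and reused, can be sketched as follows. The band edge, the particle energies, and the one-load-per-band cost model are illustrative assumptions, not the authors' implementation.

```python
from collections import defaultdict

def banded_order(energies, band_edges):
    # Group pending cross-section lookups by energy band so that each
    # band's slice of the table is brought into cache once and reused.
    bands = defaultdict(list)
    for i, e in enumerate(energies):
        band = sum(1 for edge in band_edges if e >= edge)   # band index
        bands[band].append(i)
    order, slice_loads = [], 0
    for band in sorted(bands):
        slice_loads += 1          # one table-slice load per band touched
        order.extend(bands[band])
    return order, slice_loads

# four lookups whose energies ping-pong across a band edge at 1.0 MeV
energies = [0.1, 5.0, 0.2, 6.0]
order, loads = banded_order(energies, band_edges=[1.0])
# arrival-order processing would reload the slice at every band switch
naive = 1 + sum(1 for a, b in zip(energies, energies[1:])
                if (a >= 1.0) != (b >= 1.0))
```

Here banding needs two slice loads where arrival-order processing needs four; the gap widens as the particle population and band count grow.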
A Monte Carlo transport code study of the space radiation environment using FLUKA and ROOT
Wilson, T; Carminati, F; Brun, R; Ferrari, A; Sala, P; Empl, A; MacGibbon, J
2001-01-01
We report on the progress of a current study aimed at developing a state-of-the-art Monte-Carlo computer simulation of the space radiation environment using advanced computer software techniques recently available at CERN, the European Laboratory for Particle Physics in Geneva, Switzerland. By taking the next-generation computer software appearing at CERN and adapting it to known problems in the implementation of space exploration strategies, this research is identifying changes necessary to bring these two advanced technologies together. The radiation transport tool being developed is tailored to the problem of taking measured space radiation fluxes impinging on the geometry of any particular spacecraft or planetary habitat and simulating the evolution of that flux through an accurate model of the spacecraft material. The simulation uses the latest known results in low-energy and high-energy physics. The output is a prediction of the detailed nature of the radiation environment experienced in space as well a...
International Nuclear Information System (INIS)
Weinhorst, Bastian; Fischer, Ulrich; Lu, Lei; Qiu, Yuefeng; Wilson, Paul
2015-01-01
Highlights: • Comparison of different approaches for the use of CAD geometry for Monte Carlo transport calculations. • Comparison with regard to user-friendliness and computation performance. • Three approaches, namely conversion with McCad, unstructured mesh feature of MCNP6 and DAGMC. • Installation most complex for DAGMC, model preparation worst for McCad, computation performance worst for MCNP6. • Installation easiest for McCad, model preparation best for MCNP6, computation speed fastest for McCad. - Abstract: Computer aided design (CAD) is an important industrial way to produce high quality designs. Therefore, CAD geometries are in general used for engineering and the design of complex facilities like the ITER tokamak. Although Monte Carlo codes like MCNP are well suited to handle the complex 3D geometry of ITER for transport calculations, they rely on their own geometry description and are in general not able to directly use the CAD geometry. In this paper, three different approaches for the use of CAD geometries with MCNP calculations are investigated and assessed with regard to calculation performance and user-friendliness. The first method is the conversion of the CAD geometry into MCNP geometry employing the conversion software McCad developed by KIT. The second approach utilizes the MCNP6 mesh geometry feature for the particle tracking and relies on the conversion of the CAD geometry into a mesh model. The third method employs DAGMC, developed by the University of Wisconsin-Madison, for the direct particle tracking on the CAD geometry using a patched version of MCNP. The obtained results show that each method has its advantages depending on the complexity and size of the model, the calculation problem considered, and the expertise of the user.
Transport of Charged Particles in Turbulent Magnetic Fields
Parashar, T.; Subedi, P.; Sonsrettee, W.; Blasi, P.; Ruffolo, D. J.; Matthaeus, W. H.; Montgomery, D.; Chuychai, P.; Dmitruk, P.; Wan, M.; Chhiber, R.
2017-12-01
Magnetic fields permeate the Universe. They are found in planets, stars, galaxies, and the intergalactic medium. The magnetic fields found in these astrophysical systems are usually chaotic, disordered, and turbulent. The investigation of the transport of cosmic rays in magnetic turbulence is a subject of considerable interest. One of the important aspects of cosmic ray transport is to understand their diffusive behavior and to calculate the diffusion coefficient in the presence of these turbulent fields. Research has most frequently concentrated on determining the diffusion coefficient in the presence of a mean magnetic field. Here, we will particularly focus on calculating diffusion coefficients of charged particles and magnetic field lines in a fully three-dimensional isotropic turbulent magnetic field with no mean field, which may be pertinent to many astrophysical situations. For charged particles in isotropic turbulence we identify different ranges of particle energy depending upon the ratio of the Larmor radius of the charged particle to the characteristic outer length scale of the turbulence. Different theoretical models are proposed to calculate the diffusion coefficient, each applicable to a distinct range of particle energies. The theoretical ideas are tested against results of detailed numerical experiments using Monte-Carlo simulations of particle propagation in stochastic magnetic fields. We also discuss two different methods of generating random magnetic field to study charged particle propagation using numerical simulation. One method is the usual way of generating random fields with a specified power law in wavenumber space, using Gaussian random variables. Turbulence, however, is non-Gaussian, with variability that comes in bursts called intermittency. We therefore devise a way to generate synthetic intermittent fields which have many properties of realistic turbulence. Possible applications of such synthetically generated intermittent fields are
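The basic diagnostic behind such test-particle studies is extracting a diffusion coefficient from the mean-square displacement of an ensemble of trajectories. The sketch below applies it to unbiased 1-D random walkers (an assumed toy process with known answer D = 1/2, not propagation in real turbulent fields).

```python
import random

def diffusion_coefficient(n_particles=500, steps=2000, dt=1.0, seed=3):
    # Estimate D from the mean-square displacement of unbiased 1-D random
    # walkers: D = <x^2> / (2 t), the standard test-particle diagnostic.
    rng = random.Random(seed)
    msd = 0.0
    for _ in range(n_particles):
        x = 0.0
        for _ in range(steps):
            x += rng.choice((-1.0, 1.0))    # unit kick each step
        msd += x * x
    msd /= n_particles
    return msd / (2.0 * steps * dt)

D_est = diffusion_coefficient()             # true value is 0.5 here
```

In the Monte Carlo simulations described above the kicks come from integrating particle orbits in the synthetic magnetic field, but the estimator is the same.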
(U) Introduction to Monte Carlo Methods
Energy Technology Data Exchange (ETDEWEB)
Hungerford, Aimee L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-03-20
Monte Carlo methods are very valuable for representing solutions to particle transport problems. Here we describe a “cook book” approach to handling the terms in a transport equation using Monte Carlo methods. Focus is on the mechanics of a numerical Monte Carlo code, rather than the mathematical foundations of the method.
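The mechanics such a "cook book" covers can be condensed into a textbook analogue Monte Carlo loop for a 1-D slab: sample a free path from the total cross section, then absorb or scatter, and tally leakage. Cross sections, slab thickness, and the monodirectional source are illustrative assumptions.

```python
import math, random

def slab_transmission(sigma_t=1.0, absorb_frac=0.5, thickness=3.0,
                      histories=20000, seed=42):
    # Analogue MC for a 1-D slab with isotropic scattering: tally the
    # fraction of source neutrons transmitted through the far face.
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(histories):
        x, mu = 0.0, 1.0                  # start at left face, forward
        while True:
            s = -math.log(1.0 - rng.random()) / sigma_t   # free path
            x += mu * s
            if x >= thickness:
                transmitted += 1          # leaked through the far face
                break
            if x < 0.0:
                break                     # leaked back out the near face
            if rng.random() < absorb_frac:
                break                     # absorbed at the collision site
            mu = 2.0 * rng.random() - 1.0 # isotropic re-scatter
    return transmitted / histories

t = slab_transmission()
```

The uncollided component alone would be exp(-3) of the source; scattering adds a little, absorption removes half of each collision, so the tally lands somewhat above that floor.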
Energy Technology Data Exchange (ETDEWEB)
Clouet, J.F.; Samba, G. [CEA Bruyeres-le-Chatel, 91 (France)
2005-07-01
We use asymptotic analysis to study the diffusion limit of the Symbolic Implicit Monte-Carlo (SIMC) method for the transport equation. For standard SIMC with piecewise constant basis functions, we demonstrate mathematically that the solution converges to the solution of a wrong diffusion equation. Nevertheless, a simple extension to piecewise linear basis functions makes it possible to obtain the correct solution. This improvement allows calculation in an opaque medium on a mesh resolving the diffusion scale, which is much larger than the transport scale. However, the huge number of particles necessary to get a correct answer makes this computation time-consuming. We have therefore derived from this asymptotic study a hybrid method coupling deterministic calculation in the opaque medium and Monte-Carlo calculation in the transparent medium. This method gives exactly the same results as the previous one but at a much lower cost. We present numerical examples which illustrate the analysis. (authors)
Monte Carlo analysis of radiative transport in oceanographic lidar measurements
Energy Technology Data Exchange (ETDEWEB)
Cupini, E.; Ferro, G. [ENEA, Divisione Fisica Applicata, Centro Ricerche Ezio Clementel, Bologna (Italy); Ferrari, N. [Bologna Univ., Bologna (Italy). Dipt. Ingegneria Energetica, Nucleare e del Controllo Ambientale
2001-07-01
The analysis of oceanographic lidar systems measurements is often carried out with semi-empirical methods, since there is only a rough understanding of the effects of many environmental variables. The development of techniques for interpreting the accuracy of lidar measurements is needed to evaluate the effects of various environmental situations, as well as of different experimental geometric configurations and boundary conditions. A Monte Carlo simulation model represents a tool that is particularly well suited for answering these important questions. The PREMAR-2F Monte Carlo code has been developed taking into account the main molecular and non-molecular components of the marine environment. The laser radiation interaction processes of diffusion, re-emission, refraction and absorption are treated. In particular are considered: the Rayleigh elastic scattering, produced by atoms and molecules with small dimensions with respect to the laser emission wavelength (i.e. water molecules), the Mie elastic scattering, arising from atoms or molecules with dimensions comparable to the laser wavelength (hydrosols), the Raman inelastic scattering, typical of water, the absorption of water, inorganic (sediments) and organic (phytoplankton and CDOM) hydrosols, the fluorescence re-emission of chlorophyll and yellow substances. PREMAR-2F is an extension of a code for the simulation of the radiative transport in atmospheric environments (PREMAR-2). The approach followed in PREMAR-2 was to combine conventional Monte Carlo techniques with analytical estimates of the probability of the receiver to have a contribution from photons coming back after an interaction in the field of view of the lidar fluorosensor collecting apparatus. This offers an effective means of modelling a lidar system with realistic geometric constraints. The retrieved semianalytic Monte Carlo radiative transfer model has been developed in the frame of the Italian Research Program for Antarctica (PNRA) and it is
The electron transport problem sampling by Monte Carlo individual collision technique
International Nuclear Information System (INIS)
Androsenko, P.A.; Belousov, V.I.
2005-01-01
The problem of electron transport is of great interest in all fields of modern science, and Monte Carlo sampling must be used to solve it. Electron transport is characterized by a very large number of individual interactions. To simulate it, the 'condensed history' technique may be used, in which a large number of collisions are grouped into a single step to be sampled randomly. Another kind of Monte Carlo sampling is the individual collision technique, which has incontestable advantages over condensed history. For example, one does not need to supply the parameters required by the condensed-history technique, such as an upper limit for electron energy, resolution, or the number of sub-steps. The condensed-history technique may also lose some very important electron tracks, because particle movement is limited by step parameters and because of weaknesses in its algorithms, for example the energy-indexing algorithm. The individual collision technique has none of these disadvantages. This report presents sampling algorithms of the new version of the BRAND code, where the above-mentioned technique is used. All information on electrons was taken from ENDF-6 files, which are an important part of BRAND; these files have not been processed but are taken directly from the electron information source. Four kinds of interaction were considered: elastic interaction, bremsstrahlung, atomic excitation, and atomic electro-ionization. Some sampling results are presented in comparison with analogs; for example, the endovascular radiotherapy problem (P2) of QUADOS2002 is compared with other commonly used techniques. (authors)
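In the individual collision technique, each collision's type is drawn with probability proportional to its partial cross section. A minimal sketch over the four channels named above, using made-up relative cross sections (not ENDF-6 data):

```python
import random

# illustrative (NOT ENDF-6) relative partial cross sections per channel
CHANNELS = [("elastic", 6.0), ("bremsstrahlung", 0.5),
            ("excitation", 2.0), ("ionization", 1.5)]

def sample_interaction(rng):
    # pick one individual collision type with probability proportional
    # to its partial cross section (cumulative-sum inversion)
    total = sum(sigma for _, sigma in CHANNELS)
    u = rng.random() * total
    for name, sigma in CHANNELS:
        u -= sigma
        if u <= 0.0:
            return name
    return CHANNELS[-1][0]          # guard against float round-off

rng = random.Random(7)
counts = {name: 0 for name, _ in CHANNELS}
for _ in range(10000):
    counts[sample_interaction(rng)] += 1
```

With these assumed cross sections the sampled frequencies reproduce the 60/20/15/5 percent split, which is the basic correctness check for any such sampler.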
Simulation of neutron transport equation using parallel Monte Carlo for deep penetration problems
International Nuclear Information System (INIS)
Bekar, K. K.; Tombakoglu, M.; Soekmen, C. N.
2001-01-01
The neutron transport equation is simulated using a parallel Monte Carlo method for a deep penetration neutron transport problem. The Monte Carlo simulation is parallelized by using three different techniques: direct parallelization, domain decomposition, and domain decomposition with load balancing, which are used with PVM (Parallel Virtual Machine) software on a LAN (Local Area Network). The results of parallel simulation are given for various model problems. The performances of the parallelization techniques are compared with each other. Moreover, the effects of variance reduction techniques on parallelization are discussed
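The first of those techniques, direct parallelization, simply splits the histories across independent random-number streams and merges the per-stream tallies. A sequentially emulated sketch (the exponential free-path tally and all parameters are illustrative assumptions, not the paper's model problems):

```python
import math, random

def worker(histories, seed):
    # one "processor": an independent RNG stream tallying its own estimate
    # (here the mean of an exponential free-path sample, true value 1.0)
    rng = random.Random(seed)
    total = sum(-math.log(1.0 - rng.random()) for _ in range(histories))
    return total / histories

# direct parallelization: N histories split evenly across P streams,
# then the per-stream tallies are merged into a single estimate
P, N = 8, 80000
estimates = [worker(N // P, seed=1000 + p) for p in range(P)]
combined = sum(estimates) / P
```

Because the streams are independent, the merged estimate has the same variance as a single run of N histories, which is why direct parallelization scales so cleanly until load imbalance or communication costs intrude.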
FLUKA: A Multi-Particle Transport Code
Energy Technology Data Exchange (ETDEWEB)
Ferrari, A.; Sala, P.R.; /CERN /INFN, Milan; Fasso, A.; /SLAC; Ranft, J.; /Siegen U.
2005-12-14
This report describes the 2005 version of the Fluka particle transport code. The first part introduces the basic notions, describes the modular structure of the system, and contains an installation and beginner's guide. The second part complements this initial information with details about the various components of Fluka and how to use them. It concludes with a detailed history and bibliography.
SU-E-T-558: Monte Carlo Photon Transport Simulations On GPU with Quadric Geometry
International Nuclear Information System (INIS)
Chi, Y; Tian, Z; Jiang, S; Jia, X
2015-01-01
Purpose: Monte Carlo simulation on GPU has experienced rapid advancements over the past few years and tremendous accelerations have been achieved. Yet existing packages were developed only for voxelized geometry. In some applications, e.g. radioactive seed modeling, simulations in more complicated geometry are needed. This abstract reports our initial efforts towards developing a quadric geometry module aiming at expanding the application scope of GPU-based MC simulations. Methods: We defined the simulation geometry as consisting of a number of homogeneous bodies, each specified by its material composition and limiting surfaces characterized by quadric functions. A tree data structure was utilized to define the geometric relationship between different bodies. We modified our GPU-based photon MC transport package to incorporate this geometry. Specifically, geometry parameters were loaded into the GPU’s shared memory for fast access. Geometry functions were rewritten to enable the identification of the body that contains the current particle location via a fast searching algorithm based on the tree data structure. Results: We tested our package in an example problem of HDR-brachytherapy dose calculation for a shielded cylinder. The dose under the quadric geometry and that under the voxelized geometry agreed in 94.2% of total voxels within the 20% isodose line based on a statistical t-test (95% confidence level), where the reference dose was defined to be the one at 0.5 cm away from the cylinder surface. It took 243 sec to transport 100 million source photons under this quadric geometry on an NVidia Titan GPU card. Compared with the simulation time of 99.6 sec in the voxelized geometry, including quadric geometry reduced efficiency due to the complicated geometry-related computations. Conclusion: Our GPU-based MC package has been extended to support photon transport simulation in quadric geometry. Satisfactory accuracy was observed with a reduced efficiency. Developments for charged
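The quadric-surface body test underlying such a module can be sketched in a few lines: evaluate the general quadric f(x,y,z) for each limiting surface and intersect the sign conditions. The coefficient layout, the "inside means f < 0" convention, and the example body are assumptions for illustration, not the abstract's implementation.

```python
def quadric(coeff, p):
    # evaluate f(x,y,z) = Ax^2+By^2+Cz^2+Dxy+Eyz+Fzx+Gx+Hy+Iz+J;
    # with our sign convention, f < 0 means "inside" the surface
    A, B, C, D, E, F, G, H, I, J = coeff
    x, y, z = p
    return (A*x*x + B*y*y + C*z*z + D*x*y + E*y*z + F*z*x
            + G*x + H*y + I*z + J)

# body = region where every limiting quadric is negative; here a unit
# sphere (x^2 + y^2 + z^2 - 1 < 0) cut by the plane z < 0.5 (z - 0.5 < 0)
SPHERE = (1, 1, 1, 0, 0, 0, 0, 0, 0, -1)
PLANE = (0, 0, 0, 0, 0, 0, 0, 0, 1, -0.5)

def inside_body(p):
    return quadric(SPHERE, p) < 0 and quadric(PLANE, p) < 0
```

A full tracker would walk the body tree, testing only the surfaces of candidate bodies, so that locating the particle after each flight stays cheap even for many bodies.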
Monte Carlo simulation of beta particle-induced bremsstrahlung doses.
Mrdja, D; Bikit, K; Bikit, I; Slivka, J; Forkapic, S; Knezevic, J
2018-03-01
It is well known that protection from the external irradiation produced by beta emitters is simpler than the corresponding shielding of radioactive sources that emit gamma radiation. This is caused by the relatively strong absorption (i.e. short range) of electrons in different materials. However, for strong beta sources specific attention should be paid to the bremsstrahlung radiation induced in the source encapsulation (matrix), especially for emitters with relatively high beta-endpoint energy (1 MeV) that are frequently used in nuclear medicine. In the present work, the bremsstrahlung spectra produced in various materials by the following beta emitters, Sr-90 (together with its daughter Y-90), P-32 and Bi-210, were investigated by Monte Carlo simulations using Geant4 software. In these simulations, it is supposed that the point radioactive sources are surrounded by cylindrically shaped capsules made from different materials: Pb, Cu, Al, glass and plastic. For the case of Y-90(Sr-90) in cylindrical lead and aluminum capsules, the dimensions of these capsules have also been varied. The absorbed dose rates from bremsstrahlung radiation were calculated for cases where the encapsulated point source is placed at a distance of 30 mm from the surface of a water cylinder with a mass of 75 kg (approximately representing the human body). The bremsstrahlung dose rate and bremsstrahlung spectrum from the Y-90(Sr-90) point source encapsulated in an Al capsule were also measured experimentally and compared with the corresponding simulation results. In addition, the bremsstrahlung radiation risk for medical staff in therapies using Y-90 was considered in simulations, relating to finger dose as well as whole-body dose during preparation and injection of this radioisotope. The corresponding annual doses were obtained for medical workers for specified numbers of Y-90 applications to patients.
Overview of Particle and Heavy Ion Transport Code System PHITS
Sato, Tatsuhiko; Niita, Koji; Matsuda, Norihiro; Hashimoto, Shintaro; Iwamoto, Yosuke; Furuta, Takuya; Noda, Shusaku; Ogawa, Tatsuhiko; Iwase, Hiroshi; Nakashima, Hiroshi; Fukahori, Tokio; Okumura, Keisuke; Kai, Tetsuya; Chiba, Satoshi; Sihver, Lembit
2014-06-01
A general purpose Monte Carlo Particle and Heavy Ion Transport code System, PHITS, is being developed through the collaboration of several institutes in Japan and Europe. The Japan Atomic Energy Agency is responsible for managing the entire project. PHITS can deal with the transport of nearly all particles, including neutrons, protons, heavy ions, photons, and electrons, over wide energy ranges using various nuclear reaction models and data libraries. It is written in Fortran language and can be executed on almost all computers. All components of PHITS such as its source, executable and data-library files are assembled in one package and then distributed to many countries via the Research organization for Information Science and Technology, the Data Bank of the Organization for Economic Co-operation and Development's Nuclear Energy Agency, and the Radiation Safety Information Computational Center. More than 1,000 researchers have been registered as PHITS users, and they apply the code to various research and development fields such as nuclear technology, accelerator design, medical physics, and cosmic-ray research. This paper briefly summarizes the physics models implemented in PHITS, and introduces some important functions useful for specific applications, such as an event generator mode and beam transport functions.
ETRAN, Electron Transport and Gamma Transport with Secondary Radiation in Slab by Monte-Carlo
International Nuclear Information System (INIS)
1992-01-01
A - Nature of physical problem solved: ETRAN computes the transport of electrons and photons through plane-parallel slab targets that have a finite thickness in one dimension and are unbound in the other two dimensions. The incident radiation can consist of a beam of either electrons or photons with specified spectral and directional distribution. Options are available by which all orders of the electron-photon cascade can be included in the calculation. Thus electrons are allowed to give rise to secondary knock-on electrons, continuous Bremsstrahlung and characteristic x-rays; and photons are allowed to produce photo-electrons, Compton electrons, and electron-positron pairs. Annihilation quanta, fluorescence radiation, and Auger electrons are also taken into account. If desired, the Monte Carlo histories of all generations of secondary radiations are followed. The information produced by ETRAN includes the following items: 1) reflection and transmission of electrons or photons, differential in energy and direction; 2) the production of continuous Bremsstrahlung and characteristic x-rays by electrons and the emergence of such radiations from the target (differential in photon energy and direction); 3) the spectrum of the amounts of energy left behind in a thick target by an incident electron beam; 4) the deposition of energy and charge by an electron beam as a function of depth in the target; 5) the flux of electrons, differential in energy, as a function of depth in the target. B - Method of solution: A programme called DATAPAC-4 takes data for a particular material from a library tape and further processes them. The function of DATAPAC-4 is to produce single-scattering and multiple-scattering data in the form of tabular arrays (again stored on magnetic tape) which facilitate the rapid sampling of electron and photon Monte Carlo histories in ETRAN. The photon component of the electron-photon cascade is calculated by conventional random sampling that imitates
Particle Transport in ECRH Plasmas of the TJ-II
International Nuclear Information System (INIS)
Vargas, V. I.; Lopez-Bruna, D.; Estrada, T.; Guasp, J.; Reynolds, J. M.; Velasco, J. L.; Herranz, J.
2007-01-01
We present a systematic study of particle transport in ECRH plasmas of TJ-II with different densities. The goal is to find the dependence of the particle confinement time and the electron diffusivity on line-averaged density. The experimental information consists of electron temperature profiles, T_e (Thomson Scattering, TS), electron density profiles, n_e (TS and reflectometry), and measured puffing data in stationary discharges. The profile of the electron source, S_e, was obtained with the 3D Monte Carlo code EIRENE. The analysis of particle balance has been done by linking the results of EIRENE with those of a model that reproduces ECRH plasmas in stationary conditions. In the range of densities studied (0.58 ≤ ⟨n_e⟩ ≤ 0.80, in units of 10^19 m^-3) there are two regions of confinement separated by a threshold density, ⟨n_e⟩ ∼ 0.65 x 10^19 m^-3. Below this threshold density the particle confinement time is low, and vice versa. This is reflected in the effective diffusivity, D_e, whose profiles, within the range of validity of this study, are flat for ⟨n_e⟩ ≥ 0.63 x 10^19 m^-3. (Author) 35 refs
Radiation protection studies for medical particle accelerators using FLUKA Monte Carlo code
International Nuclear Information System (INIS)
Infantino, Angelo; Mostacci, Domiziano; Cicoria, Gianfranco; Lucconi, Giulia; Pancaldi, Davide; Vichi, Sara; Zagni, Federico; Marengo, Mario
2017-01-01
Radiation protection (RP) in the use of medical cyclotrons involves many aspects, both in routine use and in the decommissioning of a site. Guidelines for site planning and installation, as well as for RP assessment, are given in international documents; however, the latter typically offer analytic methods of calculation of shielding and materials activation, in approximate or idealised geometry set-ups. The availability of Monte Carlo (MC) codes with accurate up-to-date libraries for transport and interaction of neutrons and charged particles at energies below 250 MeV, together with the continuously increasing power of modern computers, makes the systematic use of simulations with realistic geometries possible, yielding equipment- and site-specific evaluation of the source terms, shielding requirements and all quantities relevant to RP at the same time. In this work, the well-known FLUKA MC code was used to simulate different aspects of RP in the use of biomedical accelerators, particularly for the production of medical radioisotopes. In the context of the Young Professionals Award, held at the IRPA 14 conference, only a part of the complete work is presented. In particular, the simulation of the GE PETtrace cyclotron (16.5 MeV) installed at S. Orsola-Malpighi University Hospital evaluated: the effective dose distribution around the equipment; the effective number of neutrons produced per incident proton and their spectral distribution; the activation of the structure of the cyclotron and the vault walls; and the activation of the ambient air, in particular the production of 41Ar. The simulations were validated, in terms of the physical and transport parameters to be used in the energy range of interest, through an extensive measurement campaign of the neutron environmental dose equivalent using a rem-counter and TLD dosemeters. The validated model was then used in the design and the licensing request of a new Positron Emission Tomography facility. (authors)
MCNP: a general Monte Carlo code for neutron and photon transport
International Nuclear Information System (INIS)
1979-11-01
The general-purpose Monte Carlo code MCNP can be used for neutron, photon, or coupled neutron-photon transport, including the capability to calculate eigenvalues for critical systems. The code treats an arbitrary three-dimensional configuration of materials in geometric cells bounded by first- and second-degree surfaces and some special fourth-degree surfaces (elliptical tori). Pointwise cross-section data are used. For neutrons, all reactions given in a particular cross-section evaluation are accounted for. Thermal neutrons are described by both the free-gas and S(α,β) models. For photons, the code takes account of incoherent and coherent scattering, the possibility of fluorescent emission following photoelectric absorption, and absorption in pair production with local emission of annihilation radiation. MCNP includes an elaborate, interactive plotting capability that allows the user to view his input geometry to help check for setup errors. Standard features which are available to improve computational efficiency include geometry splitting and Russian roulette, weight cutoff with Russian roulette, correlated sampling, analog capture or capture by weight reduction, the exponential transformation, energy splitting, forced collisions in designated cells, flux estimates at point or ring detectors, deterministically transporting pseudo-particles to designated regions, track-length estimators, source biasing, and several parameter cutoffs. Extensive summary information is provided to help the user better understand the physics and Monte Carlo simulation of his problem. The standard, user-defined output of MCNP includes two-way current as a function of direction across any set of surfaces or surface segments in the problem. Flux across any set of surfaces or surface segments is available. 58 figures, 28 tables
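The splitting and Russian-roulette machinery mentioned in the feature list works on particle statistical weights. The following is a minimal sketch of the idea only, not MCNP's actual weight-window implementation; the threshold values and the function name are assumptions.

```python
import random

def roulette_and_split(weight, w_low=0.25, w_high=2.0, w_survive=1.0,
                       rng=random.random):
    """Return the list of particle weights after a weight-window check.

    Below w_low: Russian roulette. The particle survives with probability
    weight/w_survive, carrying weight w_survive; otherwise it is killed.
    Above w_high: split into n copies of weight/n.
    In both branches the expected total weight is conserved."""
    if weight < w_low:
        return [w_survive] if rng() < weight / w_survive else []
    if weight > w_high:
        n = int(weight / w_high) + 1
        return [weight / n] * n
    return [weight]

print(roulette_and_split(5.0))   # split into three equal pieces
print(roulette_and_split(1.0))   # inside the window: unchanged
```

Roulette saves time by terminating low-weight histories; splitting reduces variance in important regions by tracking more, lighter histories.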
Bahadori, Amir Alexander
Astronauts are exposed to a unique radiation environment in space. United States terrestrial radiation worker limits, derived from guidelines produced by scientific panels, do not apply to astronauts. Limits for astronauts have changed throughout the Space Age, eventually reaching the current National Aeronautics and Space Administration limit of 3% risk of exposure induced death, with an administrative stipulation that the risk be assured to the upper 95% confidence limit. Much effort has been spent on reducing the uncertainty associated with evaluating astronaut risk for radiogenic cancer mortality, while tools that affect the accuracy of the calculations have largely remained unchanged. In the present study, the impacts of using more realistic computational phantoms with size variability to represent astronauts with simplified deterministic radiation transport were evaluated. Next, the impacts of microgravity-induced body changes on space radiation dosimetry using the same transport method were investigated. Finally, dosimetry and risk calculations resulting from Monte Carlo radiation transport were compared with results obtained using simplified deterministic radiation transport. The results of the present study indicated that the use of phantoms that more accurately represent human anatomy can substantially improve space radiation dose estimates, most notably for exposures from solar particle events under light shielding conditions. Microgravity-induced changes were less important, but results showed that flexible phantoms could assist in optimizing astronaut body position for reducing exposures during solar particle events. Finally, little overall differences in risk calculations using simplified deterministic radiation transport and 3D Monte Carlo radiation transport were found; however, for the galactic cosmic ray ion spectra, compensating errors were observed for the constituent ions, thus exhibiting the need to perform evaluations on a particle
Žukauskaitėa, A; Plukienė, R; Ridikas, D
2007-01-01
Particle accelerators and other high-energy facilities produce penetrating ionizing radiation (neutrons and γ-rays) that must be shielded. The objective of this work was to model photon and neutron transport in various materials usually used as shielding, such as concrete, iron or graphite. The Monte Carlo method allows obtaining answers by simulating individual particles and recording some aspects of their average behavior. In this work several nuclear experiments were modeled: AVF 65 (AVF cyclotron of the Research Center for Nuclear Physics, Osaka University, Japan) – γ-ray beams (1-10 MeV), HIMAC (heavy-ion synchrotron of the National Institute of Radiological Sciences in Chiba, Japan) and ISIS-800 (ISIS intense spallation neutron source facility of the Rutherford Appleton Laboratory, UK) – high-energy neutron (20-800 MeV) transport in iron and concrete. The calculation results were then compared with experimental data.
Solar energetic particle anisotropies and insights into particle transport
Energy Technology Data Exchange (ETDEWEB)
Leske, R. A., E-mail: ral@srl.caltech.edu; Cummings, A. C.; Cohen, C. M. S.; Mewaldt, R. A.; Labrador, A. W.; Stone, E. C. [California Institute of Technology, Pasadena, CA 91125 (United States); Wiedenbeck, M. E. [Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109 (United States); Christian, E. R.; Rosenvinge, T. T. von [NASA/Goddard Space Flight Center, Greenbelt, MD 20771 (United States)
2016-03-25
As solar energetic particles (SEPs) travel through interplanetary space, their pitch-angle distributions are shaped by the competing effects of magnetic focusing and scattering. Measurements of SEP anisotropies can therefore reveal information about interplanetary conditions such as magnetic field strength, topology, and turbulence levels at remote locations from the observer. Onboard each of the two STEREO spacecraft, the Low Energy Telescope (LET) measures pitch-angle distributions for protons and heavier ions up to iron at energies of about 2-12 MeV/nucleon. Anisotropies observed using LET include bidirectional flows within interplanetary coronal mass ejections, sunward-flowing particles when STEREO was magnetically connected to the back side of a shock, and loss-cone distributions in which particles with large pitch angles underwent magnetic mirroring at an interplanetary field enhancement that was too weak to reflect particles with the smallest pitch angles. Unusual oscillations in the width of a beamed distribution at the onset of the 23 July 2012 SEP event were also observed and remain puzzling. We report LET anisotropy observations at both STEREO spacecraft and discuss their implications for SEP transport, focusing exclusively on the extreme event of 23 July 2012 in which a large variety of anisotropies were present at various times during the event.
A user's manual for the three-dimensional Monte Carlo transport code SPARTAN
International Nuclear Information System (INIS)
Bending, R.C.; Heffer, P.J.H.
1975-09-01
SPARTAN is a general-purpose Monte Carlo particle transport code intended for neutron or gamma transport problems in reactor physics, health physics, shielding, and safety studies. The code uses a very general geometry system enabling a complex layout to be described, and allows the user to obtain physics data from a number of different types of source library. Special tracking and scoring techniques are used to improve the quality of the results obtained. To enable users to run SPARTAN, brief descriptions of the facilities available in the code are given, and full details of data input and job control language, as well as examples of complete calculations, are included. It is anticipated that changes may be made to SPARTAN from time to time, particularly in those parts of the code which deal with physics data processing. The load module is identified by a version number and implementation date, and updates of sections of this manual will be issued when significant changes are made to the code. (author)
Monte Carlo calculations of electron transport on microcomputers
International Nuclear Information System (INIS)
Chung, Manho; Jester, W.A.; Levine, S.H.; Foderaro, A.H.
1990-01-01
In the work described in this paper, the Monte Carlo program ZEBRA, developed by Berber and Buxton, was converted to run on the Macintosh computer using Microsoft BASIC to reduce the cost of Monte Carlo calculations on microcomputers. Then the Eltran2 program was transferred to an IBM-compatible computer. Turbo BASIC and Microsoft Quick BASIC have been used on the IBM-compatible Tandy 4000SX computer. The paper shows the running speed of the Monte Carlo programs on the different computers, normalized to one for Eltran2 on the Macintosh-SE or Macintosh-Plus computer; higher values indicate proportionally faster running times. Since Eltran2 is a one-dimensional program, it calculates energy deposited in a semi-infinite multilayer slab. Eltran2 has been modified into a two-dimensional program called Eltran3 to compute more accurately the case with a point source, a small detector, and a short source-to-detector distance. The running time of Eltran3 is about twice that of Eltran2 for a similar case
Microwave transport in EBT distribution manifolds using Monte Carlo ray-tracing techniques
International Nuclear Information System (INIS)
Lillie, R.A.; White, T.L.; Gabriel, T.A.; Alsmiller, R.G. Jr.
1983-01-01
Ray-tracing Monte Carlo calculations have been carried out using an existing Monte Carlo radiation transport code to obtain estimates of the microwave power exiting the torus coupling links in EBT microwave manifolds. The microwave power loss and polarization at surface reflections were accounted for by treating the microwaves as plane waves reflecting off plane surfaces. Agreement on the order of 10% was obtained between the measured and calculated output power distributions for an existing EBT-S toroidal manifold. A cost-effective iterative procedure utilizing the Monte Carlo history data was implemented to predict design changes which could produce increased manifold efficiency and improved output power uniformity
Adaptive sampling method in deep-penetration particle transport problem
International Nuclear Information System (INIS)
Wang Ruihong; Ji Zhicheng; Pei Lucheng
2012-01-01
Deep-penetration problems have been among the most difficult problems in shielding calculations with the Monte Carlo method for several decades. In this paper, a particle-transport random-walk system that treats the emission point as a sampling station is built. Then, an adaptive sampling scheme is derived to obtain a better solution from the accumulated information. The main advantage of the adaptive scheme is to choose the most suitable sampling number at the emission-point station so as to minimize the total cost of the random walk. Further, a related importance sampling method is introduced. Its main principle is to define an importance function based on the particle state and to ensure that the number of particles sampled from the emission point is proportional to the importance function. The numerical results show that the adaptive scheme with the emission point as a station can overcome, to some degree, the difficulty of underestimating the result, and the adaptive importance sampling method gives satisfactory results as well. (authors)
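Why analog Monte Carlo underestimates deep-penetration results is easy to see in one dimension: almost no analog histories reach the far side of a thick shield. A standard remedy, closely related to the importance sampling discussed above, is to sample stretched path lengths and carry a compensating weight. This toy sketch assumes a purely absorbing slab of thickness T mean free paths; the function names and the stretching parameter are mine, not the paper's.

```python
import math
import random

def transmission_analog(T, n, rng):
    """Analog estimate of P(path length > T) for an Exp(1) path length."""
    return sum(1 for _ in range(n) if rng.expovariate(1.0) > T) / n

def transmission_biased(T, n, rng, lam=0.2):
    """Importance-sampled estimate: draw from the stretched density
    g(x) = lam * exp(-lam * x) and weight each hit by
    f(x)/g(x) = exp(-(1 - lam) * x) / lam, which keeps the estimator
    unbiased while producing many more deep-penetration samples."""
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(lam)
        if x > T:
            total += math.exp(-(1.0 - lam) * x) / lam
    return total / n

rng = random.Random(1)
T = 10.0                      # ten mean free paths
exact = math.exp(-T)          # analytic transmission probability
est = transmission_biased(T, 20000, rng)
```

With T = 10, an analog run of 20,000 histories typically scores zero or one hit (the true probability is about 4.5e-5), while the biased estimator scores thousands of weighted hits and converges rapidly.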
Particle modeling of transport of α-ray generated ion clusters in air
International Nuclear Information System (INIS)
Tong, Lizhu; Nanbu, Kenichi; Hirata, Yosuke; Izumi, Mikio; Miyamoto, Yasuaki; Yamaguchi, Hiromi
2006-01-01
A particle model is developed using the test-particle Monte Carlo method to study the transport properties of α-ray generated ion clusters in a flow of air. An efficient ion-molecule collision model is proposed to simulate collisions between ions and air molecules. The simulations are performed for a steady state of ion transport in a circular pipe. In the steady state, the generation of ions is balanced by losses such as absorption at the measuring sensor or pipe wall and disappearance by positive-negative ion recombination. The calculated ion current to the measuring sensor agrees well with previously measured data. (author)
Particle-In-Cell/Monte Carlo Simulation of Ion Back Bombardment in Photoinjectors
International Nuclear Information System (INIS)
Qiang, Ji; Corlett, John; Staples, John
2009-01-01
In this paper, we report on studies of ion back bombardment in high-average-current dc and rf photoinjectors using a particle-in-cell/Monte Carlo method. Using the H2 ion as an example, we observed that the ion density and energy deposition on the photocathode in rf guns are an order of magnitude lower than those in a dc gun. A higher rf frequency helps mitigate ion back bombardment of the cathode in rf guns
Monte Carlo collision operator for δF gyrokinetic particle simulation codes
International Nuclear Information System (INIS)
Tessarotto, M.; Zheng, L.J.; White, R.B.
1994-01-01
A δf-weighting scheme is proposed for investigating the gyrokinetic Fokker-Planck equation describing the dynamics of electromagnetic perturbations in a multi-species toroidal magnetoplasma. It is shown that Monte Carlo collision operators can be consistently defined to describe Coulomb binary collisions in such a way as to ensure conservation of collisional invariants as well as to take into account the full nonlinear particle characteristics
Energy Technology Data Exchange (ETDEWEB)
Baker, Randal Scott [Univ. of Arizona, Tucson, AZ (United States)
1990-01-01
The neutron transport equation is solved by a hybrid method that iteratively couples regions where deterministic (S_{N}) and stochastic (Monte Carlo) methods are applied. Unlike previous hybrid methods, the Monte Carlo and S_{N} regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The hybrid method provides a new means of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor S_{N} alone is well suited for. The fully coupled Monte Carlo/S_{N} technique consists of defining spatial and/or energy regions of a problem in which either a Monte Carlo calculation or an S_{N} calculation is to be performed. The Monte Carlo region may comprise the entire spatial region for selected energy groups, or may consist of a rectangular area that is either completely or partially embedded in an arbitrary S_{N} region. The Monte Carlo and S_{N} regions are then connected through the common angular boundary fluxes, which are determined iteratively using the response matrix technique, and volumetric sources. The hybrid method has been implemented in the S_{N} code TWODANT by adding special-purpose Monte Carlo subroutines to calculate the response matrices and volumetric sources, and linkage subroutines to carry out the interface flux iterations. The common angular boundary fluxes are included in the S_{N} code as interior boundary sources, leaving the logic for the solution of the transport flux unchanged, while, with minor modifications, the diffusion synthetic accelerator remains effective in accelerating S_{N} calculations. The special-purpose Monte Carlo routines used are essentially analog, with few variance reduction techniques employed. However, the routines have been successfully vectorized, with approximately a factor of five increase in speed over the non-vectorized version.
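The interface-flux iteration described above can be caricatured in one dimension: a Monte Carlo region returns an albedo (reflection fraction) for the current entering it, while the deterministic region is reduced to a fixed return fraction r1 plus a source contribution q, and the two are iterated to a consistent interface current. Everything below, including the rod-model scattering and the parameter values, is an assumed toy, not TWODANT's actual response-matrix interface.

```python
import random

def rod_albedo(c, tau, n, rng):
    """Monte Carlo estimate of the reflection probability (albedo) of a
    1D 'rod model' slab of optical thickness tau with scattering ratio c.
    Particles enter at the left face moving right; reflection means
    escaping back out the left face."""
    reflected = 0
    for _ in range(n):
        x, mu = 0.0, +1
        while True:
            x += mu * rng.expovariate(1.0)   # flight to the next collision
            if x < 0.0:
                reflected += 1               # escaped back out: reflected
                break
            if x > tau:
                break                        # transmitted out the far side
            if rng.random() > c:
                break                        # absorbed at the collision
            mu = rng.choice((-1, 1))         # rod-model isotropic scattering
    return reflected / n

rng = random.Random(5)
albedo = rod_albedo(c=0.8, tau=2.0, n=50000, rng=rng)

# Interface-current iteration: the "deterministic" side returns a fraction
# r1 of whatever current it receives, plus a fixed source contribution q.
q, r1, j = 1.0, 0.5, 0.0
for _ in range(50):
    j = q + r1 * albedo * j   # current entering the Monte Carlo region
```

The fixed point is j = q / (1 - r1 * albedo); the iteration converges geometrically because the return fraction r1 * albedo is below one.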
New electron multiple scattering distributions for Monte Carlo transport simulation
Energy Technology Data Exchange (ETDEWEB)
Chibani, Omar (Haut Commissariat a la Recherche (C.R.S.), 2 Boulevard Franz Fanon, Alger B.P. 1017, Alger-Gare (Algeria)); Patau, Jean Paul (Laboratoire de Biophysique et Biomathematiques, Faculte des Sciences Pharmaceutiques, Universite Paul Sabatier, 35 Chemin des Maraichers, 31062 Toulouse cedex (France))
1994-10-01
New forms of electron (positron) multiple scattering distributions are proposed. The first is intended for use within the conditions of validity of the Moliere theory. The second distribution applies when the electron path is so short that only a few elastic collisions occur. These distributions are adjustable formulas: the introduction of some parameters allows imposing the correct value of the first moment. Only positive and analytic functions were used in constructing the present expressions, which makes sampling procedures easier. Systematic tests are presented and some Monte Carlo simulations, serving as benchmarks, are carried out. ((orig.))
Liu, Baoshun; Li, Ziqiang; Zhao, Xiujian
2015-02-21
In this research, a Monte-Carlo Continuity Random Walking (MC-RW) model was used to study the relation between electron transport and photocatalysis of nano-crystalline (nc) clusters. The effects of defect energy disorder, spatial disorder of the material structure, electron density, and interfacial transfer/recombination on electron transport and photocatalysis were studied. Photocatalytic activity is defined as 1/τ from a statistical viewpoint, with τ being the average electron lifetime. Based on the MC-RW simulation, a clear physical and chemical "picture" was given for the photocatalytic kinetic analysis of nc-clusters. It is shown that reducing defect energy disorder and material spatial structural disorder (decreasing the number of defect traps, increasing crystallinity, increasing particle size, and improving inter-particle connection) can enhance photocatalytic activity by increasing electron transport ability. Increasing the electron density raises the electron Fermi level, which decreases the activation energy for electron de-trapping from traps to extended states, and correspondingly increases electron transport ability and photocatalytic activity. Reducing recombination of electrons and holes increases electron transport through the increase of electron density, and thus increases photocatalytic activity. In addition to electron transport, increasing the probability for electrons to undergo photocatalysis can increase photocatalytic activity through an increased electron interfacial transfer speed.
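The multiple-trapping picture underlying such random-walk models can be sketched as follows: an electron repeatedly falls into traps of exponentially distributed depth and escapes at an Arrhenius rate, until it transfers at the interface; the activity is then 1/τ. This is a generic multiple-trapping toy, not the authors' MC-RW code, and the attempt frequency, trap-depth scale, and transfer probability are all assumed values.

```python
import math
import random

def mean_lifetime(depth_scale_eV, kT_eV, p_transfer, n_walkers, seed=0):
    """Average time for a walker to transfer at the interface while
    hopping through traps with exponentially distributed depths."""
    rng = random.Random(seed)
    nu0 = 1e12  # attempt-to-escape frequency, 1/s (assumed)
    total = 0.0
    for _ in range(n_walkers):
        t = 0.0
        while True:
            depth = rng.expovariate(1.0 / depth_scale_eV)   # trap depth, eV
            release_rate = nu0 * math.exp(-depth / kT_eV)   # Arrhenius de-trapping
            t += rng.expovariate(release_rate)              # dwell time in trap
            if rng.random() < p_transfer:                   # interfacial transfer
                break
        total += t
    return total / n_walkers

# shallower trap distribution => shorter lifetime => higher activity 1/tau
shallow = mean_lifetime(0.02, 0.025, 0.1, 400, seed=1)
deep = mean_lifetime(0.10, 0.025, 0.1, 400, seed=1)
```

The comparison reproduces the qualitative trend stated in the abstract: less energy disorder (shallower traps) means faster de-trapping, better transport, and higher photocatalytic activity.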
Vrugt, Jasper A.; ter Braak, Cajo J. F.; Diks, Cees G. H.; Schoups, Gerrit
2013-01-01
During the past decades much progress has been made in the development of computer-based methods for parameter and predictive uncertainty estimation of hydrologic models. The goal of this paper is twofold. As part of this special anniversary issue we first briefly review the most important historical developments in hydrologic model calibration and uncertainty analysis that have led to current perspectives. Then, we introduce theory, concepts and simulation results of a novel data assimilation scheme for joint inference of model parameters and state variables. This Particle-DREAM method combines the strengths of sequential Monte Carlo sampling and Markov chain Monte Carlo simulation and is especially designed for treatment of forcing, parameter, model structural and calibration data error. Two different variants of Particle-DREAM are presented to satisfy assumptions regarding the temporal behavior of the model parameters. Simulation results using a 40-dimensional atmospheric “toy” model, the Lorenz attractor and a rainfall-runoff model show that Particle-DREAM, P-DREAM(VP) and P-DREAM(IP) require far fewer particles than current state-of-the-art filters to closely track the evolving target distribution of interest, and provide important insights into the information content of discharge data and non-stationarity of model parameters. Our development follows formal Bayes, yet Particle-DREAM and its variants readily accommodate hydrologic signatures, informal likelihood functions or other (in)sufficient statistics if those better represent the salient features of the calibration data and simulation model used.
Directory of Open Access Journals (Sweden)
S. J. Noh
2011-10-01
Data assimilation techniques have received growing attention due to their capability to improve prediction. Among various data assimilation techniques, sequential Monte Carlo (SMC) methods, known as "particle filters", are a Bayesian learning process that has the capability to handle non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach to consider different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate model response until the uncertainty of each hydrologic process is propagated. Regularization with an additional move step based on Markov chain Monte Carlo (MCMC) methods is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, water and energy transfer processes (WEP), is implemented for the sequential data assimilation through the updating of state variables. The lagged regularized particle filter (LRPF) and the sequential importance resampling (SIR) particle filter are implemented for hindcasting of streamflow at the Katsura catchment, Japan. Control state variables for filtering are soil moisture content and overland flow. Streamflow measurements are used for data assimilation. LRPF shows consistent forecasts regardless of the process noise assumption, while SIR has different values of optimal process noise and shows sensitive variation of confidence intervals, depending on the process noise. Improvement of LRPF forecasts compared to SIR is particularly found for rapidly varied high flows, due to preservation of sample diversity from the kernel, even if particle impoverishment takes place.
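The SIR baseline that the abstract compares against can be sketched in a few lines. This is a toy scalar state-space example, not the WEP model: the AR(1) dynamics, noise levels, and particle count are all assumptions.

```python
import math
import random

def sir_step(particles, y_obs, f, h, sigma_w, sigma_v, rng):
    """One sequential-importance-resampling step: propagate, weight, resample.

    Assumes incoming particles carry equal weights, which holds here
    because we resample at every step."""
    # propagate each particle through the state model with process noise
    particles = [f(x) + rng.gauss(0.0, sigma_w) for x in particles]
    # Gaussian observation likelihood as (unnormalized) importance weight
    weights = [math.exp(-0.5 * ((y_obs - h(x)) / sigma_v) ** 2)
               for x in particles]
    # multinomial resampling restores equal weights
    return rng.choices(particles, weights=weights, k=len(particles))

rng = random.Random(7)
f = lambda x: 0.9 * x          # toy linear "storage" dynamics (assumed)
h = lambda x: x                # direct observation of the state
true_x = 5.0
particles = [rng.gauss(0.0, 2.0) for _ in range(500)]
for _ in range(20):
    true_x = f(true_x) + rng.gauss(0.0, 0.1)
    y = h(true_x) + rng.gauss(0.0, 0.2)
    particles = sir_step(particles, y, f, h, 0.1, 0.2, rng)
est = sum(particles) / len(particles)
```

Resampling at every step duplicates high-weight particles, which is exactly the sample-impoverishment problem that the regularization and MCMC move step of the LRPF are designed to counteract.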
Sawtooth driven particle transport in tokamak plasmas
International Nuclear Information System (INIS)
Nicolas, T.
2013-01-01
The radial transport of particles in tokamaks is one of the most stringent issues faced by the magnetic confinement fusion community, because the fusion power is proportional to the square of the pressure, and also because accumulation of heavy impurities in the core leads to important power losses which can lead to a 'radiative collapse'. Sawteeth and the associated periodic redistribution of the core quantities can significantly impact the radial transport of electrons and impurities. In this thesis, we perform numerical simulations of sawteeth using a nonlinear three-dimensional magnetohydrodynamic code called XTOR-2F to study the particle transport induced by sawtooth crashes. We show that the code recovers, after the crash, the fine structures of electron density that are observed with fast-sweeping reflectometry on the JET and TS tokamaks. The presence of these structures may indicate a low efficiency of the sawtooth in expelling the impurities from the core. However, applying the same code to impurity profiles, we show that the redistribution is quantitatively similar to that predicted by Kadomtsev's model, which could not be predicted a priori. Hence, overall, sawtooth flushing is efficient in expelling impurities from the core. (author) [fr]
Haji Ali, Abdul Lateef
2016-01-08
I discuss using single level and multilevel Monte Carlo methods to compute quantities of interests of a stochastic particle system in the mean-field. In this context, the stochastic particles follow a coupled system of Ito stochastic differential equations (SDEs). Moreover, this stochastic particle system converges to a stochastic mean-field limit as the number of particles tends to infinity. I start by recalling the results of applying different versions of Multilevel Monte Carlo (MLMC) for particle systems, both with respect to time steps and the number of particles and using a partitioning estimator. Next, I expand on these results by proposing the use of our recent Multi-index Monte Carlo method to obtain improved convergence rates.
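The coupling idea behind MLMC, a fine and a coarse discretization driven by the same Brownian increments so their difference has small variance, can be illustrated on a single SDE. This sketch assumes geometric Brownian motion and hand-picked sample counts; it is not the interacting particle system, partitioning estimator, or Multi-index method of the talk:

```python
import numpy as np

rng = np.random.default_rng(1)

def level_difference(l, n_samples, T=1.0, x0=1.0, mu=0.05, sigma=0.2):
    """Coupled fine/coarse Euler-Maruyama estimate of E[P_l - P_{l-1}] for
    geometric Brownian motion dX = mu X dt + sigma X dW, with 2**l fine steps."""
    nf = 2 ** l
    dt_f = T / nf
    dW = rng.normal(0.0, np.sqrt(dt_f), (n_samples, nf))
    xf = np.full(n_samples, x0)
    for k in range(nf):
        xf = xf + mu * xf * dt_f + sigma * xf * dW[:, k]
    if l == 0:
        return xf.mean()
    # The coarse path reuses the same noise, summed over pairs of fine steps.
    dt_c = 2.0 * dt_f
    xc = np.full(n_samples, x0)
    for k in range(nf // 2):
        xc = xc + mu * xc * dt_c + sigma * xc * (dW[:, 2 * k] + dW[:, 2 * k + 1])
    return (xf - xc).mean()

# Telescoping sum: many samples on the cheap coarse level,
# geometrically fewer on the finer (more expensive) levels.
estimate = sum(level_difference(l, 40000 // 2 ** l) for l in range(5))
```

The estimate targets E[X_T] = exp(mu*T); the point is that most of the work is done at the coarsest level while the level differences need few samples.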
Nguyen, Ngoc Minh; Corff, Sylvain Le; Moulines, Éric
2017-12-01
This paper focuses on sequential Monte Carlo approximations of smoothing distributions in conditionally linear and Gaussian state spaces. To reduce Monte Carlo variance of smoothers, it is typical in these models to use Rao-Blackwellization: particle approximation is used to sample sequences of hidden regimes while the Gaussian states are explicitly integrated conditional on the sequence of regimes and observations, using variants of the Kalman filter/smoother. The first successful attempt to use Rao-Blackwellization for smoothing extends the Bryson-Frazier smoother for Gaussian linear state space models using the generalized two-filter formula together with Kalman filters/smoothers. More recently, a forward-backward decomposition of smoothing distributions mimicking the Rauch-Tung-Striebel smoother for the regimes combined with backward Kalman updates has been introduced. This paper investigates the benefit of introducing additional rejuvenation steps in all these algorithms to sample at each time instant new regimes conditional on the forward and backward particles. This defines particle-based approximations of the smoothing distributions whose support is not restricted to the set of particles sampled in the forward or backward filter. These procedures are applied to commodity markets which are described using a two-factor model based on the spot price and a convenience yield for crude oil data.
Energy Technology Data Exchange (ETDEWEB)
Millman, D. L. [Dept. of Computer Science, Univ. of North Carolina at Chapel Hill (United States); Griesheimer, D. P.; Nease, B. R. [Bechtel Marine Propulsion Corporation, Bettis Atomic Power Laboratory (United States); Snoeyink, J. [Dept. of Computer Science, Univ. of North Carolina at Chapel Hill (United States)
2012-07-01
In this paper we consider a new generalized algorithm for the efficient calculation of component object volumes given their equivalent constructive solid geometry (CSG) definition. The new method relies on domain decomposition to recursively subdivide the original component into smaller pieces with volumes that can be computed analytically or stochastically, if needed. Unlike simpler brute-force approaches, the proposed decomposition scheme is guaranteed to be robust and accurate to within a user-defined tolerance. The new algorithm is also fully general and can handle any valid CSG component definition, without the need for additional input from the user. The new technique has been specifically optimized to calculate volumes of component definitions commonly found in models used for Monte Carlo particle transport simulations for criticality safety and reactor analysis applications. However, the algorithm can be easily extended to any application which uses CSG representations for component objects. The paper provides a complete description of the novel volume calculation algorithm, along with a discussion of the conjectured error bounds on volumes calculated within the method. In addition, numerical results comparing the new algorithm with a standard stochastic volume calculation algorithm are presented for a series of problems spanning a range of representative component sizes and complexities. (authors)
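A minimal sketch of the recursive decompose-then-integrate idea, assuming a single sphere as the "component" (the paper's algorithm handles arbitrary CSG trees and has proper error bounds): boxes that are provably inside or outside are resolved analytically, and thin ambiguous slivers fall back to stochastic point sampling, exactly the analytic-or-stochastic split the abstract describes.

```python
import math, random

random.seed(9)

def classify(lo, hi, r):
    """Conservative box-vs-sphere test for an axis-aligned box against a
    sphere of radius r centred at the origin."""
    near = sum((0.0 if l <= 0.0 <= h else min(abs(l), abs(h))) ** 2
               for l, h in zip(lo, hi))
    far = sum(max(abs(l), abs(h)) ** 2 for l, h in zip(lo, hi))
    if far <= r * r:
        return "inside"
    if near >= r * r:
        return "outside"
    return "mixed"

def volume(lo, hi, r, tol):
    """Recursive domain decomposition: resolve unambiguous boxes analytically,
    split ambiguous ones until smaller than tol, then sample stochastically."""
    box_vol = math.prod(h - l for l, h in zip(lo, hi))
    c = classify(lo, hi, r)
    if c == "inside":
        return box_vol
    if c == "outside":
        return 0.0
    if box_vol < tol:
        # Stochastic fallback for the thin ambiguous sliver (unbiased).
        m = 16
        hits = 0
        for _ in range(m):
            p2 = sum((l + (h - l) * random.random()) ** 2 for l, h in zip(lo, hi))
            hits += p2 <= r * r
        return box_vol * hits / m
    # Split the longest axis and recurse on both halves.
    ax = max(range(3), key=lambda i: hi[i] - lo[i])
    mid = 0.5 * (lo[ax] + hi[ax])
    hi1 = list(hi); hi1[ax] = mid
    lo2 = list(lo); lo2[ax] = mid
    return volume(lo, tuple(hi1), r, tol) + volume(tuple(lo2), hi, r, tol)

v = volume((-1.0, -1.0, -1.0), (1.0, 1.0, 1.0), 1.0, 1e-4)
```

For the unit sphere the answer should approach 4π/3; in a real CSG setting the inside/outside classification would query the CSG tree rather than a single quadric.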
Monte Carlo method for neutron transport calculations in graphics processing units (GPUs)
International Nuclear Information System (INIS)
Pellegrino, Esteban
2011-01-01
Monte Carlo simulation is well suited for solving the Boltzmann neutron transport equation in inhomogeneous media and complicated geometries. However, routine applications require the computation time to be reduced to hours and even minutes on a desktop PC. Interest in adopting Graphics Processing Units (GPUs) for Monte Carlo acceleration is rapidly growing, owing to the massive parallelism provided by the latest GPU technologies, which is the most promising route to performing full-size reactor core analysis on a routine basis. In this study, Monte Carlo codes for a fixed-source neutron transport problem were developed for GPU environments in order to evaluate issues associated with computational speedup using GPUs. Results obtained in this work suggest that a speedup of several orders of magnitude is possible using state-of-the-art GPU technologies. (author) [es]
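The data-parallel flavor of GPU Monte Carlo can be imitated on a CPU with array operations: all active histories advance one event per pass, in lock step, which is the access pattern that maps onto GPU threads. This toy sketch (a 1-D slab, one-speed transport, isotropic scattering with survival probability c; all parameters invented) is not the code developed in the study:

```python
import numpy as np

rng = np.random.default_rng(2)

def slab_transmission(sigma_t=1.0, c=0.5, L=2.0, n=100000):
    """Event-based, array-at-a-time Monte Carlo for a normally incident beam
    on a slab [0, L]: free flight, leakage tally, then collision, applied to
    every active history simultaneously."""
    x = np.zeros(n)
    mu_dir = np.ones(n)             # normally incident beam
    alive = np.ones(n, dtype=bool)
    transmitted = 0
    while alive.any():
        idx = np.where(alive)[0]
        # Free flight to the next collision site.
        x[idx] += mu_dir[idx] * rng.exponential(1.0 / sigma_t, idx.size)
        # Tally leakage through the far face; drop all leaked histories.
        leaked_r = x[idx] > L
        leaked_l = x[idx] < 0.0
        transmitted += int(leaked_r.sum())
        inside = idx[~(leaked_r | leaked_l)]
        alive[idx] = False
        # Collision for those still inside: absorb with prob 1-c,
        # otherwise rescatter isotropically.
        survive = rng.random(inside.size) < c
        keep = inside[survive]
        alive[keep] = True
        mu_dir[keep] = rng.uniform(-1.0, 1.0, keep.size)
    return transmitted / n

t = slab_transmission()
```

The transmission must exceed the uncollided value exp(-sigma_t*L) because scattered neutrons also leak; the same loop structure, with arrays living in device memory, is what a GPU kernel would execute.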
Okamoto, Takashi; Kumagawa, Tatsuya; Motoda, Masafumi; Igarashi, Takanori; Nakao, Keisuke
2013-06-01
The reflection and scattering properties of light incident on human skin covered with powder particles have been investigated. A three-layer skin structure with a pigmented area is modeled, and the propagation of light in the skin's layers and in a layer of particles near the skin's surface is simulated using the Monte Carlo method. Assuming that only single scattering of light occurs in the powder layer, the simulation results show that the reflection spectra of light from the skin change with the size of powder particles. The color difference between normal and discolored skin is found to decrease considerably when powder particles with a diameter of approximately 0.25 μm are present near the skin's surface. The effects of the medium surrounding the particles, and the influence of the distribution of particle size (polydispersity), are also examined. It is shown that a surrounding medium with a refractive index close to that of the skin substantially suppresses the extreme spectral changes caused by the powder particles covering the skin surface.
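A stripped-down version of the weighted photon random walk underlying such simulations, assuming a semi-infinite homogeneous turbid medium with isotropic scattering rather than the paper's three-layer skin and powder model; the optical coefficients here are arbitrary:

```python
import random

random.seed(3)

def diffuse_reflectance(mu_a=0.1, mu_s=10.0, n=5000):
    """Weighted photon packets in a semi-infinite medium (z >= 0): absorption
    is handled by scaling the packet weight by the single-scattering albedo,
    and weight crossing z < 0 is tallied as diffuse reflectance."""
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    refl = 0.0
    for _ in range(n):
        z, cz, w = 0.0, 1.0, 1.0     # launched straight down into the medium
        while w > 1e-3:              # crude cutoff (production codes roulette)
            z += cz * random.expovariate(mu_t)
            if z < 0.0:
                refl += w            # escaped back through the surface
                break
            w *= albedo              # survive the collision, reduced weight
            cz = random.uniform(-1.0, 1.0)   # isotropic rescatter
    return refl / n

r = diffuse_reflectance()
```

Repeating the estimate per wavelength (with wavelength-dependent coefficients, and an anisotropic phase function for real tissue) is how reflection spectra like those in the paper are built up.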
Françoise Benz
2006-01-01
2005-2006 ACADEMIC TRAINING PROGRAMME LECTURE SERIES 27, 28, 29 June 11:00-12:00 - TH Conference Room, bldg. 4 The use of Monte Carlo radiation transport codes in radiation physics and dosimetry F. Salvat Gavalda, Univ. de Barcelona, A. FERRARI, CERN-AB, M. SILARI, CERN-SC Lecture 1. Transport and interaction of electromagnetic radiation F. Salvat Gavalda, Univ. de Barcelona Interaction models and simulation schemes implemented in modern Monte Carlo codes for the simulation of coupled electron-photon transport will be briefly reviewed. Different schemes for simulating electron transport will be discussed. Condensed algorithms, which rely on multiple-scattering theories, are comparatively fast, but less accurate than mixed algorithms, in which hard interactions (with energy loss or angular deflection larger than certain cut-off values) are simulated individually. The reliability, and limitations, of electron-interaction models and multiple-scattering theories will be analyzed. Benchmark comparisons of simu...
International Nuclear Information System (INIS)
Maconald, J.L.; Cashwell, E.D.
1978-09-01
The techniques of learning theory and pattern recognition are used to learn splitting surface locations for the Monte Carlo neutron transport code MCN. A study is performed to determine default values for several pattern recognition and learning parameters. The modified MCN code is used to reduce computer cost for several nontrivial example problems.
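Splitting at a surface, the technique whose placement the modified MCN learns, can be demonstrated on a purely absorbing slab. The geometry and the fixed splitting surface here are illustrative, not learned; the point is that the estimate stays unbiased while deep penetrations are sampled more often:

```python
import math, random

random.seed(4)

def split_transmission(sigma=1.0, L=4.0, surface=2.0, k=4, n=20000):
    """Uncollided transmission exp(-sigma*L) through an absorbing slab,
    with particle splitting at an interior surface: each history reaching
    the surface is replaced by k copies of weight 1/k."""
    tally = 0.0
    for _ in range(n):
        if random.expovariate(sigma) < surface:
            continue                  # absorbed before the splitting surface
        for _ in range(k):
            # Memorylessness of the exponential lets each copy
            # resample its remaining flight independently.
            if surface + random.expovariate(sigma) > L:
                tally += 1.0 / k
    return tally / n

t = split_transmission()
```

The expected score is unchanged, P(reach surface) * k * (1/k) * P(finish) = exp(-sigma*L), but the variance of the deep-penetration tally drops; choosing where to put such surfaces automatically is the learning problem the abstract addresses.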
FMCEIR: a Monte Carlo program for solving the stationary neutron and gamma transport equation
International Nuclear Information System (INIS)
Taormina, A.
1978-05-01
FMCEIR is a three-dimensional Monte Carlo program for solving the stationary neutron and gamma transport equation. It is used to study the problem of neutron and gamma streaming in the GCFR and HHT reactor channels. (G.T.H.)
International Nuclear Information System (INIS)
Zazula, J.M.
1983-01-01
The general-purpose code BALTORO was written for coupling three-dimensional Monte Carlo (MC) with one-dimensional Discrete Ordinates (DO) radiation transport calculations. The quantity of a radiation-induced (neutron or gamma-ray) nuclear effect, or the score from a radiation-yielding nuclear effect, can be analysed in this way. (author)
International Nuclear Information System (INIS)
Picton, D.J.; Harris, R.G.; Randle, K.; Weaver, D.R.
1995-01-01
This paper describes a simple, accurate and efficient technique for the calculation of materials perturbation effects in Monte Carlo photon transport calculations. It is particularly suited to the application for which it was developed, namely the modelling of a dual detector density tool as used in borehole logging. However, the method would be appropriate to any photon transport calculation in the energy range 0.1 to 2 MeV, in which the predominant processes are Compton scattering and photoelectric absorption. The method enables a single set of particle histories to provide results for an array of configurations in which material densities or compositions vary. It can calculate the effects of small perturbations very accurately, but is by no means restricted to such cases. For the borehole logging application described here the method has been found to be efficient for a moderate range of variation in the bulk density (of the order of ±30% from a reference value) or even larger changes to a limited portion of the system (e.g. a low density mudcake of the order of a few tens of mm in thickness). The effective speed enhancement over an equivalent set of individual calculations is in the region of an order of magnitude or more. Examples of calculations on a dual detector density tool are given. It is demonstrated that the method predicts, to a high degree of accuracy, the variation of detector count rates with formation density, and that good results are also obtained for the effects of mudcake layers. An interesting feature of the results is that relative count rates (the ratios of count rates obtained with different configurations) can usually be determined more accurately than the absolute values of the count rates. (orig.)
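The heart of the technique, one set of histories scoring an array of configurations through likelihood-ratio weights, can be shown for uncollided transmission with a perturbed total cross section. The borehole-tool geometry is replaced by a bare slab and all numbers are invented; only the correlated-sampling mechanism is the same in spirit:

```python
import math, random

random.seed(5)

def correlated_transmission(sigma=1.0, perturbed=(0.7, 1.0, 1.3), L=2.0, n=100000):
    """Materials perturbation via correlated sampling: histories are run at
    the reference cross section sigma, and every transmitted history also
    scores the likelihood ratio exp(-(sigma' - sigma) * L) for each perturbed
    cross section sigma', so all configurations share the same random numbers."""
    tallies = [0.0] * len(perturbed)
    for _ in range(n):
        if random.expovariate(sigma) > L:   # transmitted in the reference case
            for i, sp in enumerate(perturbed):
                tallies[i] += math.exp(-(sp - sigma) * L)
    return [t / n for t in tallies]

est = correlated_transmission()
```

Each tally has expectation exp(-sigma'*L), and because all configurations are driven by the same histories, ratios between configurations (like the relative count rates the paper highlights) come out far more precisely than independent runs would give.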
Response matrix Monte Carlo based on a general geometry local calculation for electron transport
International Nuclear Information System (INIS)
Ballinger, C.T.; Rathkopf, J.A.; Martin, W.R.
1991-01-01
A Response Matrix Monte Carlo (RMMC) method has been developed for solving electron transport problems. This method was born of the need to have a reliable, computationally efficient transport method for low energy electrons (below a few hundred keV) in all materials. Today, condensed history methods are used which reduce the computation time by modeling the combined effect of many collisions, but fail at low energy because of the assumptions required to characterize the electron scattering. Analog Monte Carlo simulations are prohibitively expensive since electrons undergo Coulomb scattering with little state change after a collision. The RMMC method attempts to combine the accuracy of an analog Monte Carlo simulation with the speed of the condensed history methods. Like condensed history, the RMMC method uses probability distribution functions (PDFs) to describe the energy and direction of the electron after several collisions. However, unlike the condensed history method, the PDFs are based on an analog Monte Carlo simulation over a small region. Condensed history theories require assumptions about the electron scattering to derive the PDFs for direction and energy. Thus the RMMC method samples from PDFs which more accurately represent the electron random walk. Results show good agreement between the RMMC method and analog Monte Carlo. 13 refs., 8 figs.
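The two-stage structure (a local analog calculation that tabulates exit probabilities, then a global walk that samples the table instead of simulating collisions) can be reproduced exactly in the 1-D "rod" model, where the only state at a segment interface is the travel direction. This is an illustration of the RMMC idea under toy assumptions, not the electron-transport implementation:

```python
import random

random.seed(6)

def analog_rod(sigma, c, length, n):
    """Analog MC in the 1-D rod model: exponential flights; each collision
    absorbs with probability 1-c, otherwise the direction is re-chosen
    (+1 or -1 with equal probability). Returns transmission."""
    trans = 0
    for _ in range(n):
        x, d = 0.0, 1
        while True:
            x += d * random.expovariate(sigma)
            if x >= length:
                trans += 1
                break
            if x <= 0.0:
                break
            if random.random() > c:
                break                           # absorbed
            d = 1 if random.random() < 0.5 else -1
    return trans / n

def build_response(sigma, c, h, n):
    """Local stage: analog simulation over ONE small segment tabulates the
    probabilities of exiting the far face (same direction) and the near face
    (reversed direction); absorption is the remainder."""
    far = near = 0
    for _ in range(n):
        x, d = 0.0, 1
        while True:
            x += d * random.expovariate(sigma)
            if x >= h:
                far += 1
                break
            if x <= 0.0:
                near += 1
                break
            if random.random() > c:
                break
            d = 1 if random.random() < 0.5 else -1
    return far / n, near / n

def response_matrix_rod(p_far, p_near, segments, n):
    """Global stage: hop segment to segment by sampling the stored response
    instead of re-simulating individual collisions."""
    trans = 0
    for _ in range(n):
        seg, d = 0, 1
        while True:
            u = random.random()
            if u < p_far:
                seg += d                        # far face, same direction
            elif u < p_far + p_near:
                d = -d                          # near face, reversed
                seg += d
            else:
                break                           # absorbed in the segment
            if seg >= segments:
                trans += 1
                break
            if seg < 0:
                break                           # back out of the entry face
    return trans / n

p_far, p_near = build_response(1.0, 0.9, 0.5, 200000)
t_global = response_matrix_rod(p_far, p_near, 4, 100000)
t_analog = analog_rod(1.0, 0.9, 2.0, 100000)
```

Here the chaining is exact because the exponential flight is memoryless, so the two estimates agree to within statistics; for electrons the local PDFs must also carry energy and angle, which is the hard part of the real method.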
Bayesian parameter estimation in dynamic population model via particle Markov chain Monte Carlo
Directory of Open Access Journals (Sweden)
Meng Gao
2012-12-01
Full Text Available In nature, population dynamics are subject to multiple sources of stochasticity. State-space models (SSMs) provide an ideal framework for incorporating both environmental noises and measurement errors into dynamic population models. In this paper, we present a recently developed method, Particle Markov Chain Monte Carlo (Particle MCMC), for parameter estimation in nonlinear SSMs. We use one effective Particle MCMC algorithm, the Particle Gibbs sampling algorithm, to estimate the parameters of a state-space model of population dynamics. The posterior distributions of the parameters are derived given the conjugate prior distribution. Numerical simulations showed that the model parameters can be accurately estimated whether the deterministic model is stable, periodic or chaotic. Moreover, we fit the model to 16 representative time series from the Global Population Dynamics Database (GPDD). It is verified that the results of parameter and state estimation using the Particle Gibbs sampling algorithm are satisfactory for a majority of the time series. For the other time series, the quality of parameter estimation can also be improved if the priors are constrained by additional knowledge. In conclusion, the Particle Gibbs sampling algorithm provides a new Bayesian parameter inference method for studying population dynamics.
Monte Carlo transport simulation of velocity undershoot in zinc blende and wurtzite InN
Energy Technology Data Exchange (ETDEWEB)
Wang, Shulong; Liu, Hongxia; Gao, Bo; Zhuo, Qingqing [School of Microelectronics, Key Laboratory of Wide Band-gap Semiconductor Materials and Device, Xidian University, Xi'an, 710071 (China)
2012-09-15
Velocity undershoot in zinc blende (ZB) and wurtzite (WZ) InN is investigated by ensemble Monte Carlo (EMC) calculation. The results show that velocity undershoot arises from the relatively long energy relaxation time compared with the momentum relaxation time. Monte Carlo transport simulations over a wide range of electric fields are presented in the paper. The results show that velocity undershoot strongly affects electron transport, compared with velocity overshoot, when the electric field changes quickly in time and space. A comparison between WZ and ZB InN shows that WZ InN has more advantages in device applications due to its excellent electron transport properties. (Copyright 2012 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
Monte Carlo calculations of the depth-dose distribution in skin contaminated by hot particles
Energy Technology Data Exchange (ETDEWEB)
Patau, J.-P. (Toulouse-3 Univ., 31 (France))
1991-01-01
Accurate computer programs were developed in order to calculate the spatial distribution of absorbed radiation doses in the skin, near high-activity particles ('hot particles'). With a view to ascertaining the reliability of the codes, the transport of beta particles was simulated in a complex configuration used for dosimetric measurements: spherical ⁶⁰Co sources of 10-1000 μm fastened to an aluminium support with a tissue-equivalent adhesive overlaid with 10 μm thick aluminium foil. Behind it an infinite polystyrene medium including an extrapolation chamber was assumed. The exact energy spectrum of beta emission was sampled. Production and transport of secondary knock-on electrons were also simulated. Energy depositions in polystyrene were calculated with a high spatial resolution. Finally, depth-dose distributions were calculated for hot particles placed on the skin. The calculations will be continued for other radionuclides and for a configuration suited to TLD measurements. (author)
Enhancing hydrologic data assimilation by evolutionary Particle Filter and Markov Chain Monte Carlo
Abbaszadeh, Peyman; Moradkhani, Hamid; Yan, Hongxiang
2018-01-01
Particle Filters (PFs) have received increasing attention by researchers from different disciplines, including the hydro-geosciences, as an effective tool to improve model predictions in nonlinear and non-Gaussian dynamical systems. The application of dual state and parameter estimation using the PFs in hydrology has evolved since 2005 from the PF-SIR (sampling importance resampling) to PF-MCMC (Markov Chain Monte Carlo), and now to the most effective and robust framework through the evolutionary PF approach based on Genetic Algorithm (GA) and MCMC, the so-called EPFM. In this framework, the prior distribution undergoes an evolutionary process based on the designed mutation and crossover operators of GA. The merit of this approach is that the particles move to an appropriate position by using the GA optimization and then the number of effective particles is increased by means of MCMC, whereby particle degeneracy is avoided and particle diversity is improved. In this study, the usefulness and effectiveness of the proposed EPFM is investigated by applying the technique on a conceptual and highly nonlinear hydrologic model over four river basins located in different climate and geographical regions of the United States. Both synthetic and real case studies demonstrate that the EPFM improves both the state and parameter estimation more effectively and reliably as compared with the PF-MCMC.
Electron transport in radiotherapy using local-to-global Monte Carlo
International Nuclear Information System (INIS)
Svatos, M.M.; Chandler, W.P.; Siantar, C.L.H.; Rathkopf, J.A.; Ballinger, C.T.
1994-09-01
Local-to-Global (L-G) Monte Carlo methods are a way to make three-dimensional electron transport both fast and accurate relative to other Monte Carlo methods. This is achieved by breaking the simulation into two stages: a local calculation done over small geometries having the size and shape of the 'steps' to be taken through the mesh; and a global calculation which relies on a stepping code that samples the stored results of the local calculation. The increase in speed results from taking fewer steps in the global calculation than required by ordinary Monte Carlo codes and by speeding up the calculation per step. The potential for accuracy comes from the ability to use long runs of detailed codes to compile probability distribution functions (PDFs) in the local calculation. Specific examples of successful Local-to-Global algorithms are given.
International Nuclear Information System (INIS)
Garcia-Pareja, S.; Galan, P.; Manzano, F.; Brualla, L.; Lallena, A. M.
2010-01-01
Purpose: In this work, the authors describe an approach which has been developed to drive the application of different variance-reduction techniques to the Monte Carlo simulation of photon and electron transport in clinical accelerators. Methods: The new approach considers the following techniques: Russian roulette, splitting, a modified version of the directional bremsstrahlung splitting, and the azimuthal particle redistribution. Their application is controlled by an ant colony algorithm based on an importance map. Results: The procedure has been applied to radiosurgery beams. Specifically, the authors have calculated depth-dose profiles, off-axis ratios, and output factors, quantities usually considered in the commissioning of these beams. The agreement between Monte Carlo results and the corresponding measurements is within ∼3%/0.3 mm for the central axis percentage depth dose and the dose profiles. The importance map generated in the calculation can be used to discuss simulation details in the different parts of the geometry in a simple way. The simulation CPU times are comparable to those needed within other approaches common in this field. Conclusions: The new approach is competitive with those previously used in this kind of problem (PSF generation or source models) and has some practical advantages that make it a good tool to simulate radiation transport in problems where the quantities of interest are difficult to obtain because of low statistics.
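Russian roulette and splitting, two of the techniques driven by the importance map, both hinge on preserving expected weight. A minimal sketch follows; the target-weight rule used here is a common convention, not necessarily the authors' implementation:

```python
import random

random.seed(7)

def apply_weight_window(weight, target):
    """Splitting / Russian roulette against a target weight: returns the
    list of surviving copies. The expected total weight always equals the
    input weight, which is what keeps the tally unbiased."""
    ratio = weight / target
    if ratio >= 1.0:
        n = int(ratio)                   # split into n or n+1 copies...
        if random.random() < ratio - n:  # ...so the expected count is ratio
            n += 1
        return [target] * n
    # Below the window: roulette. Survive with probability `ratio`
    # and promote the survivor to the target weight.
    return [target] if random.random() < ratio else []

# Empirical check of unbiasedness for a low- and a high-weight particle.
mean_low = sum(sum(apply_weight_window(0.3, 1.0)) for _ in range(100000)) / 100000
mean_high = sum(sum(apply_weight_window(2.5, 1.0)) for _ in range(100000)) / 100000
```

An importance map (here it would set `target` per region) only redistributes where histories are spent; this invariant is what lets any such map, ant-colony-derived or not, leave the answer unbiased.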
Range Verification Methods in Particle Therapy: Underlying Physics and Monte Carlo Modeling
Kraan, Aafke Christine
2015-01-01
Hadron therapy allows for highly conformal dose distributions and better sparing of organs-at-risk, thanks to the characteristic dose deposition as function of depth. However, the quality of hadron therapy treatments is closely connected with the ability to predict and achieve a given beam range in the patient. Currently, uncertainties in particle range lead to the employment of safety margins, at the expense of treatment quality. Much research in particle therapy is therefore aimed at developing methods to verify the particle range in patients. Non-invasive in vivo monitoring of the particle range can be performed by detecting secondary radiation, emitted from the patient as a result of nuclear interactions of charged hadrons with tissue, including β+ emitters, prompt photons, and charged fragments. The correctness of the dose delivery can be verified by comparing measured and pre-calculated distributions of the secondary particles. The reliability of Monte Carlo (MC) predictions is a key issue. Correctly modeling the production of secondaries is a non-trivial task, because it involves nuclear physics interactions at energies, where no rigorous theories exist to describe them. The goal of this review is to provide a comprehensive overview of various aspects in modeling the physics processes for range verification with secondary particles produced in proton, carbon, and heavier ion irradiation. We discuss electromagnetic and nuclear interactions of charged hadrons in matter, which is followed by a summary of some widely used MC codes in hadron therapy. Then, we describe selected examples of how these codes have been validated and used in three range verification techniques: PET, prompt gamma, and charged particle detection. We include research studies and clinically applied methods. For each of the techniques, we point out advantages and disadvantages, as well as clinical challenges still to be addressed, focusing on MC simulation aspects. PMID:26217586
Mechanism of travelling-wave transport of particles
International Nuclear Information System (INIS)
Kawamoto, Hiroyuki; Seki, Kyogo; Kuromiya, Naoyuki
2006-01-01
Numerical and experimental investigations have been carried out on the transport of particles in an electrostatic travelling field. A three-dimensional hard-sphere model of the distinct element method was developed to simulate the dynamics of particles. Forces applied to particles in the model were the Coulomb force, the dielectrophoresis force on polarized dipole particles in a non-uniform field, the image force, gravity and the air drag. Friction and repulsion between particle-particle and particle-conveyer contacts were included in the model to reset conditions after mechanical contacts. Two kinds of experiments were performed to confirm the model. One was the measurement of particle charge, which is indispensable for determining the Coulomb force. The charge distribution was measured from the loci of free-falling particles in a parallel electrostatic field, and the averaged charge of the bulk particles was confirmed by measurement with a Faraday cage. The other experiment measured the dynamics of particles on a conveyer consisting of parallel electrodes to which a four-phase travelling electrostatic wave was applied. Calculated results agreed with measurements, and the following characteristics were clarified: (1) the Coulomb force is the predominant driving force on particles compared with the other kinds of forces; (2) the direction of particle transport did not always coincide with that of the travelling wave, depending on the frequency of the travelling wave, the particle diameter and the electric field; (3) although some particles overtook the travelling wave at very low frequency, the motion of particles was almost synchronized with the wave at low frequency; and (4) the transport of some particles lagged the wave at medium frequency, the majority of particles were transported backwards at high frequency, and particles were not transported but only vibrated at very high frequency.
Peristaltic particle transport using the Lattice Boltzmann method
Energy Technology Data Exchange (ETDEWEB)
Connington, Kevin William [Los Alamos National Laboratory; Kang, Qinjun [Los Alamos National Laboratory; Viswanathan, Hari S [Los Alamos National Laboratory; Abdel-fattah, Amr [Los Alamos National Laboratory; Chen, Shiyi [JOHNS HOPKINS UNIV.
2009-01-01
Peristaltic transport refers to a class of internal fluid flows where the periodic deformation of flexible containing walls elicits a non-negligible fluid motion. It is a mechanism used to transport fluid and immersed solid particles in a tube or channel when it is ineffective or impossible to impose a favorable pressure gradient or desirous to avoid contact between the transported mixture and mechanical moving parts. Peristaltic transport occurs in many physiological situations and has myriad industrial applications. We focus our study on the peristaltic transport of a macroscopic particle in a two-dimensional channel using the lattice Boltzmann method. We systematically investigate the effect of variation of the relevant dimensionless parameters of the system on the particle transport. We find, among other results, a case where an increase in Reynolds number can actually lead to a slight increase in particle transport, and a case where, as the wall deformation increases, the motion of the particle becomes non-negative only. We examine the particle behavior when the system exhibits the peculiar phenomenon of fluid trapping. Under these circumstances, the particle may itself become trapped where it is subsequently transported at the wave speed, which is the maximum possible transport in the absence of a favorable pressure gradient. Finally, we analyze how the particle presence affects stress, pressure, and dissipation in the fluid in hopes of determining preferred working conditions for peristaltic transport of shear-sensitive particles. We find that the levels of shear stress are most hazardous near the throat of the channel. We advise that shear-sensitive particles should be transported under conditions where trapping occurs as the particle is typically situated in a region of innocuous shear stress levels.
Implicit Monte Carlo methods and non-equilibrium Marshak wave radiative transport
International Nuclear Information System (INIS)
Lynch, J.E.
1985-01-01
Two enhancements to the Fleck implicit Monte Carlo method for radiative transport are described, for use in transparent and opaque media respectively. The first introduces a spectral mean cross section, which applies to pseudoscattering in transparent regions with a high frequency incident spectrum. The second provides a simple Monte Carlo random walk method for opaque regions, without the need for a supplementary diffusion equation formulation. A time-dependent transport Marshak wave problem of radiative transfer, in which a non-equilibrium condition exists between the radiation and material energy fields, is then solved. The results are compared to published benchmark solutions and to new discrete ordinates S-N results, both for spatially integrated radiation-material energies versus time and for spatially dependent temperature profiles. Multigroup opacities, which are independent of both temperature and frequency, are used in addition to a material specific heat which is proportional to the cube of the temperature. 7 refs., 4 figs.
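The random-walk enhancement replaces many short flights inside an opaque region with one jump to the surface of a sphere, trusting the diffusion limit there. Its flavor can be shown with the classical walk-on-spheres method for a Laplace (diffusion-limit) problem on the unit disk; this is an analogue under invented boundary data, not Fleck's algorithm:

```python
import math, random

random.seed(8)

def walk_on_spheres(x, y, g, eps=1e-3, n=20000):
    """Estimate u(x, y) for Laplace's equation on the unit disk with boundary
    data g, by repeatedly jumping to a uniformly random point on the largest
    circle contained in the domain -- one jump replaces the many tiny steps
    a collision-by-collision walk would take."""
    total = 0.0
    for _ in range(n):
        px, py = x, y
        while True:
            r = 1.0 - math.hypot(px, py)    # distance to the boundary
            if r < eps:
                break
            theta = random.uniform(0.0, 2.0 * math.pi)
            px += r * math.cos(theta)
            py += r * math.sin(theta)
        norm = math.hypot(px, py)           # project onto the boundary
        total += g(px / norm, py / norm)
    return total / n

# g(x, y) = x on the boundary; the harmonic solution is u = x everywhere.
u = walk_on_spheres(0.3, 0.2, lambda bx, by: bx)
```

Each jump is exact for a harmonic field by the mean-value property, so only O(log 1/eps) jumps are needed per history, which is precisely the cost saving a random-walk shortcut buys in an opaque region.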
Monte Carlo simulation of nonlinear reactive contaminant transport in unsaturated porous media
International Nuclear Information System (INIS)
Giacobbo, F.; Patelli, E.
2007-01-01
In the currently proposed designs of radioactive waste repositories, the protective function against the water-driven transport of radionuclides back to the biosphere is provided by an integrated system of engineered and natural geologic barriers. The occurrence of several nonlinear interactions during the radionuclide migration process may render the classical analytical-numerical approaches burdensome. Moreover, the heterogeneity of the barrier media forces approximations onto the classical analytical-numerical models, thus reducing their fidelity to reality. In an attempt to overcome these difficulties, in the present paper we adopt a Monte Carlo simulation approach, previously developed on the basis of the Kolmogorov-Dmitriev theory of branching stochastic processes. The approach is here extended to describe transport through unsaturated porous media under transient flow conditions and in the presence of nonlinear interchange phenomena between the liquid and solid phases. This generalization entails determining the functional dependence of the parameters of the proposed transport model on the water content and on the contaminant concentration, both of which change in space and time during the water infiltration process. The corresponding Monte Carlo simulation approach is verified against a case of nonreactive transport under transient unsaturated flow and a case of nonlinear reactive transport under stationary saturated flow. Numerical applications regarding linear and nonlinear reactive transport under transient unsaturated flow are reported
Modular, object-oriented redesign of a large-scale Monte Carlo neutron transport program
International Nuclear Information System (INIS)
Moskowitz, B.S.
2000-01-01
This paper describes the modular, object-oriented redesign of a large-scale Monte Carlo neutron transport program. This effort represents a complete 'white sheet of paper' rewrite of the code. In this paper, the motivation driving this project, the design objectives for the new version of the program, and the design choices and their consequences will be discussed. The design itself will also be described, including the important subsystems as well as the key classes within those subsystems
Monte Carlo model of light transport in scintillating fibers and large scintillators
International Nuclear Information System (INIS)
Chakarova, R.
1995-01-01
A Monte Carlo model is developed which simulates the light transport in a scintillator surrounded by a transparent layer with different surface properties. The model is applied to analyse the light collection properties of scintillating fibers and a large scintillator wrapped in aluminium foil. The influence of the fiber interface characteristics on the light yield is investigated in detail. Light output results as well as time distributions are obtained for the large scintillator case. 15 refs, 16 figs
Boltzmann equation and Monte Carlo studies of electron transport in resistive plate chambers
International Nuclear Information System (INIS)
Bošnjaković, D; Petrović, Z Lj; Dujko, S; White, R D
2014-01-01
A multi-term theory for solving the Boltzmann equation and a Monte Carlo simulation technique are used to investigate electron transport in Resistive Plate Chambers (RPCs) that are used for timing and triggering purposes in many high energy physics experiments at CERN and elsewhere. Using cross sections for electron scattering in C2H2F4, iso-C4H10 and SF6 as input to our Boltzmann and Monte Carlo codes, we have calculated data for electron transport as a function of reduced electric field E/N in various C2H2F4/iso-C4H10/SF6 gas mixtures used in RPCs in the ALICE, CMS and ATLAS experiments. Emphasis is placed upon the explicit and implicit effects of non-conservative collisions (e.g. electron attachment and/or ionization) on the drift and diffusion. Among many interesting and atypical phenomena induced by the explicit effects of non-conservative collisions, we note the existence of negative differential conductivity (NDC) in the bulk drift velocity component, with no indication of any NDC for the flux component, in the ALICE timing RPC system. We systematically study the origin and mechanisms of such phenomena as well as the possible physical implications which arise from their explicit inclusion in models of RPCs. Spatially resolved electron transport properties are calculated using a Monte Carlo simulation technique in order to understand these phenomena. (paper)
Entropic Ratchet transport of interacting active Brownian particles
International Nuclear Information System (INIS)
Ai, Bao-Quan; He, Ya-Feng; Zhong, Wei-Rong
2014-01-01
Directed transport of interacting active (self-propelled) Brownian particles is numerically investigated in confined geometries (entropic barriers). The self-propelled velocity can break thermodynamic equilibrium and induce directed transport. It is found that the interaction between active particles can greatly affect the ratchet transport. For attractive particles, on increasing the interaction strength, the average velocity first decreases to a minimum, then increases, and finally decreases to zero. For repulsive particles, when the interaction is weak there exists a critical interaction strength at which the average velocity is minimal, nearly tending to zero; for strong interactions, however, the average velocity is independent of the interaction strength
Guerra, Marta L.
2009-02-23
We calculate the efficiency of a rejection-free dynamic Monte Carlo method for d-dimensional off-lattice homogeneous particles interacting through a repulsive power-law potential r^(-p). Theoretically we find the algorithmic efficiency in the limit of low temperatures and/or high densities is asymptotically proportional to ρ^((p+2)/2) T^(-d/2), with the particle density ρ and the temperature T. Dynamic Monte Carlo simulations are performed in one-, two-, and three-dimensional systems with different powers p, and the results agree with the theoretical predictions. © 2009 The American Physical Society.
Massively parallel Monte Carlo for many-particle simulations on GPUs
International Nuclear Information System (INIS)
Anderson, Joshua A.; Jankowski, Eric; Grubb, Thomas L.; Engel, Michael; Glotzer, Sharon C.
2013-01-01
Current trends in parallel processors call for the design of efficient massively parallel algorithms for scientific computing. Parallel algorithms for Monte Carlo simulations of thermodynamic ensembles of particles have received little attention because of the inherent serial nature of the statistical sampling. In this paper, we present a massively parallel method that obeys detailed balance and implement it for a system of hard disks on the GPU. We reproduce results of serial high-precision Monte Carlo runs to verify the method. This is a good test case because the hard disk equation of state over the range where the liquid transforms into the solid is particularly sensitive to small deviations away from the balance conditions. On a Tesla K20, our GPU implementation executes over one billion trial moves per second, which is 148 times faster than on a single Intel Xeon E5540 CPU core, enables 27 times better performance per dollar, and cuts energy usage by a factor of 13. With this improved performance we are able to calculate the equation of state for systems of up to one million hard disks. These large system sizes are required in order to probe the nature of the melting transition, which has been debated for the last forty years. In this paper we present the details of our computational method, and discuss the thermodynamics of hard disks separately in a companion paper
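The decomposition at the heart of such schemes can be illustrated with a small serial sketch (a toy model with hypothetical names, not the paper's GPU kernel; in particular, the extra care the authors take to strictly satisfy detailed balance, such as randomizing the cell-grid offset between sweeps, is omitted). The domain is divided into a checkerboard of cells; cells of one colour share no boundary, so their disks could be updated concurrently, and moves that would leave a cell are rejected to keep the cells independent:

```python
import random

def overlaps(p, q, sigma, box):
    # Minimum-image squared-distance check for two disks of diameter sigma.
    dx = (p[0] - q[0] + box / 2) % box - box / 2
    dy = (p[1] - q[1] + box / 2) % box - box / 2
    return dx * dx + dy * dy < sigma * sigma

def checkerboard_sweep(disks, box, n_cells, sigma, delta, rng):
    """One hard-disk Monte Carlo sweep over a 2x2 checkerboard of cell
    colours.  Within one colour no two cells are adjacent, so on a GPU
    all cells of that colour could be processed in parallel."""
    cell_w = box / n_cells
    for colour in ((0, 0), (0, 1), (1, 0), (1, 1)):
        for i, (x, y) in enumerate(disks):
            cx, cy = int(x / cell_w), int(y / cell_w)
            if (cx % 2, cy % 2) != colour:
                continue
            nx = (x + rng.uniform(-delta, delta)) % box
            ny = (y + rng.uniform(-delta, delta)) % box
            # Reject moves that leave the cell: cells stay independent.
            if int(nx / cell_w) != cx or int(ny / cell_w) != cy:
                continue
            # Reject moves that create a hard-core overlap.
            if any(j != i and overlaps((nx, ny), q, sigma, box)
                   for j, q in enumerate(disks)):
                continue
            disks[i] = (nx, ny)
```

The O(N^2) overlap check here is only for brevity; a production code would use cell lists so that each trial move checks a handful of neighbours.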
International Nuclear Information System (INIS)
Li, Zeguang; Wang, Kan; Zhang, Xisi
2011-01-01
In the traditional Monte Carlo method, the material properties in a given cell are assumed to be constant, but this assumption no longer holds for continuously varying materials, where the material's nuclear cross sections vary over the particle's flight path. Three Monte Carlo methods, namely the substepping method, the delta-tracking method and the direct sampling method, are therefore discussed in this paper for solving problems with continuously varying materials. After verification and comparison of these methods in 1-D models, their basic characteristics are discussed, and the delta-tracking method is chosen as the main method for solving problems with continuously varying materials, especially 3-D problems. To overcome the drawbacks of the original delta-tracking method, an improved delta-tracking method is proposed in this paper that is more efficient for problems where the material's cross sections vary sharply over the particle's flight path. For use in practical calculations, the improved delta-tracking method was implemented in the 3-D Monte Carlo code RMC developed by the Department of Engineering Physics, Tsinghua University. Two problems based on the Godiva system were constructed and calculated using both the improved delta-tracking method and the substepping method, and the results demonstrate the effectiveness of the improved delta-tracking method. (author)
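The basic delta-tracking (Woodcock) loop referred to above can be sketched in a few lines. This is an illustrative toy version for a purely absorbing 1D slab, not the RMC implementation; the function names and the linear cross-section profile are hypothetical. Flight distances are sampled from a constant majorant cross section, and each tentative collision is accepted as real with probability sigma(x)/sigma_maj, otherwise it is a virtual event and the flight simply continues:

```python
import math
import random

def delta_track(sigma, sigma_maj, x_max, rng):
    """Track one particle through the slab [0, x_max] with a spatially
    varying total cross section sigma(x) <= sigma_maj.  Returns the
    position of the first real collision, or None if the particle
    escapes the slab."""
    x = 0.0
    while True:
        # Flight distance sampled from the constant majorant.
        x += -math.log(1.0 - rng.random()) / sigma_maj
        if x >= x_max:
            return None                      # escaped without colliding
        # Real collision with probability sigma(x)/sigma_maj,
        # otherwise a virtual (delta) collision: keep flying.
        if rng.random() < sigma(x) / sigma_maj:
            return x

def transmission(sigma, sigma_maj, x_max, n_particles, seed=1):
    """Fraction of particles crossing the slab without a real collision."""
    rng = random.Random(seed)
    escapes = sum(delta_track(sigma, sigma_maj, x_max, rng) is None
                  for _ in range(n_particles))
    return escapes / n_particles
```

For sigma(x) = 0.5 + 0.5x on a slab of thickness 2 the optical depth is 2, so the estimated transmission converges to exp(-2) regardless of which valid majorant is chosen, which is precisely why the method needs no cross-section integration along the flight path.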
Model of electronic energy relaxation in the test-particle Monte Carlo method
Energy Technology Data Exchange (ETDEWEB)
Roblin, P.; Rosengard, A. [CEA Centre d'Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. des Procedes d'Enrichissement; Nguyen, T.T. [Compagnie Internationale de Services en Informatique (CISI) - Centre d'Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)
1994-12-31
We previously presented a new test-particle Monte Carlo method (1) (which we call PTMC), an iterative method for solving the Boltzmann equation, now improved and very well suited to collisional steady gas flows. Here, we apply a statistical method, described by Anderson (2), to treat electronic translational energy transfer by a collisional process in atomic uranium vapor. For our study, only three of its multiple energy states are considered: 0, 620 cm^-1, and an average level grouping the upper levels. After presenting two-dimensional results, we apply this model to the evaporation of uranium by electron bombardment and show that the PTMC results, for given initial electronic temperatures, are in good agreement with experimental radial velocity measurements. (author). 12 refs., 1 fig.
Monte Carlo simulations of ultra high vacuum and synchrotron radiation for particle accelerators
AUTHOR|(CDS)2082330; Rivkin, Leonid
With preparation of Hi-Lumi LHC fully underway, and the FCC machines under study, accelerators will reach unprecedented energies and with them very large amounts of synchrotron radiation (SR). SR desorbs photoelectrons and molecules from accelerator walls, which contribute to electron-cloud buildup and increase the residual pressure, both effects reducing the beam lifetime. In current accelerators these two effects are among the principal limiting factors; therefore precise calculation of synchrotron radiation and pressure properties is very important, desirably in the early design phase. This PhD project presents the modernization and a major upgrade of two codes, Molflow and Synrad, originally written by R. Kersevan in the 1990s, which are based on the test-particle Monte Carlo method and allow ultra-high-vacuum and synchrotron radiation calculations. The new versions contain new physics and are built as an all-in-one package available to the public. Existing vacuum calculation methods are overvi...
Monte Carlo study in the mechanisms of transport of fast neutrons in various media
International Nuclear Information System (INIS)
Ku, L.
1976-01-01
The life histories of fast neutrons created by the straight Monte Carlo method in various attenuating media were examined. The media studied range from one with simple, featureless properties (Na) to iron, with its very complicated cross-section structure. The life histories of exceptional neutrons, i.e. those staying very close to the source or those traveling very far from it, were compared with those of the general population. When the exceptional neutrons exploited a particular collision property in a narrow energy band in order to reach a given detector, the method of analyzing Monte Carlo histories was able to provide a clear physical picture and single out the influence of that property on the macroscopic behavior of the neutrons. Two such phenomena were demonstrated using this technique. In one, transport in a cross-section minimum dominates the deep penetration of the neutrons. In such a circumstance most of the spatial transport is accomplished by travel at energies in and near the minimum, while little transport occurs at any other energy. The second example involves the effect of inelastic scattering on the low-energy leakage spectra of small bare assemblies. It is shown that, for a small bare iron sphere and for a fission source, the exit current spectrum below 100 keV is extremely sensitive to the details of the inelastic scattering near threshold. It often happened that in some exceptional situations the number of histories available for the analysis was too small to give statistically significant results. The most important conclusion to be drawn here is that the analysis of Monte Carlo histories can provide information on the details of transport mechanisms that is not available through forward or even adjoint deterministic transport calculations. 47 figures, 21 tables
Progress on RMC: a Monte Carlo neutron transport code for reactor analysis
International Nuclear Information System (INIS)
Wang, Kan; Li, Zeguang; She, Ding; Liu, Yuxuan; Xu, Qi; Shen, Huayun; Yu, Ganglin
2011-01-01
This paper presents a new 3-D Monte Carlo neutron transport code named RMC (Reactor Monte Carlo code), specifically intended for reactor physics analysis. This code is being developed by the Department of Engineering Physics at Tsinghua University and is written in C++ and Fortran 90, the latest version being RMC 2.5.0. The RMC code uses the delta-tracking method to simulate neutron transport, the advantages of which include fast simulation in complex geometries and relatively simple handling of complicated geometrical objects. Other techniques such as a computational-expense-oriented method and a hash-table method have been developed and implemented in RMC to speed up the calculation. To meet the requirements of reactor analysis, the RMC code provides calculational functions including criticality calculation, burnup calculation and kinetics simulation. In this paper, comparison calculations of criticality problems, burnup problems and transient problems are carried out using the RMC code and other Monte Carlo codes, and the results show that RMC performs quite well on these kinds of problems. Based on MPI, RMC supports parallel computation and achieves a high speed-up. This code is still under intensive development and further work directions are mentioned at the end of this paper. (author)
Directory of Open Access Journals (Sweden)
Chapoutier Nicolas
2017-01-01
Full Text Available In the context of the rise of Monte Carlo transport calculations for all kinds of applications, AREVA recently improved its suite of engineering tools in order to produce an efficient Monte Carlo workflow. Monte Carlo codes, such as MCNP or TRIPOLI, are recognized as reference codes for dealing with a large range of radiation transport problems. However, the inherent drawbacks of these codes - laborious input file creation and long computation times - contrast with the maturity of their treatment of the physical phenomena. The goal of the recent AREVA developments was to reach efficiency similar to that of other mature engineering disciplines such as finite element analysis (e.g. structural or fluid dynamics). Among the main objectives, the creation of a graphical user interface offering CAD tools for geometry creation and other graphical features dedicated to the radiation field (source definition, tally definition) has been reached. Computation times are drastically reduced compared to a few years ago thanks to the use of massively parallel runs and, above all, the implementation of hybrid variance reduction techniques. From now on, engineering teams are able to deliver much more prompt support to any nuclear project dealing with reactors or fuel cycle facilities, from the conceptual phase to decommissioning.
Chapoutier, Nicolas; Mollier, François; Nolin, Guillaume; Culioli, Matthieu; Mace, Jean-Reynald
2017-09-01
In the context of the rise of Monte Carlo transport calculations for all kinds of applications, AREVA recently improved its suite of engineering tools in order to produce an efficient Monte Carlo workflow. Monte Carlo codes, such as MCNP or TRIPOLI, are recognized as reference codes for dealing with a large range of radiation transport problems. However, the inherent drawbacks of these codes - laborious input file creation and long computation times - contrast with the maturity of their treatment of the physical phenomena. The goal of the recent AREVA developments was to reach efficiency similar to that of other mature engineering disciplines such as finite element analysis (e.g. structural or fluid dynamics). Among the main objectives, the creation of a graphical user interface offering CAD tools for geometry creation and other graphical features dedicated to the radiation field (source definition, tally definition) has been reached. Computation times are drastically reduced compared to a few years ago thanks to the use of massively parallel runs and, above all, the implementation of hybrid variance reduction techniques. From now on, engineering teams are able to deliver much more prompt support to any nuclear project dealing with reactors or fuel cycle facilities, from the conceptual phase to decommissioning.
Energy and particle core transport in tokamaks and stellarators compared
Energy Technology Data Exchange (ETDEWEB)
Beurskens, Marc; Angioni, Clemente; Beidler, Craig; Dinklage, Andreas; Fuchert, Golo; Hirsch, Matthias; Puetterich, Thomas; Wolf, Robert [Max-Planck-Institut fuer Plasmaphysik, Greifswald/Garching (Germany)
2016-07-01
The paper discusses expectations for core transport in the Wendelstein 7-X stellarator (W7-X) and presents a comparison to tokamaks. In tokamaks, the neoclassical trapped-particle-driven losses are small and turbulence dominates the energy and particle transport. At reactor-relevant low collisionality, the heat transport is dominated by ion-temperature-gradient turbulence, which clamps the temperature gradient. The particle transport is set by an anomalous inward pinch, yielding peaked profiles. A strong edge pedestal adds to the good confinement properties. In traditional stellarators the 3D geometry causes increased trapped-orbit losses. At reactor-relevant low collisionality and high temperatures, these neoclassical losses would be well above the turbulent transport losses. The W7-X design minimizes neoclassical losses, and turbulent transport can become dominant. Moreover, the separation of the regions of bad curvature from those of trapped particle orbits in W7-X may have favourable implications for the turbulent electron heat transport. The neoclassical particle thermodiffusion is outward; without core particle sources the density profile is flat or even hollow. The presence of a turbulence-driven anomalous inward particle pinch in W7-X (as in tokamaks) is an open topic of research.
High-speed evaluation of track-structure Monte Carlo electron transport simulations.
Pasciak, A S; Ford, J R
2008-10-07
There are many instances where Monte Carlo simulation using the track-structure method for electron transport is necessary for the accurate analytical computation and estimation of dose and other tally data. Because of the large electron interaction cross-sections and highly anisotropic scattering behavior, the track-structure method requires an enormous amount of computation time. For microdosimetry, radiation biology and other applications involving small site and tally sizes, low electron energies or high-Z/low-Z material interfaces where the track-structure method is preferred, a computational device called a field-programmable gate array (FPGA) is capable of executing track-structure Monte Carlo electron-transport simulations as fast as or faster than a standard computer can complete an identical simulation using the condensed history (CH) technique. In this paper, data from FPGA-based track-structure electron-transport computations are presented for five test cases, from simple slab-style geometries to radiation biology applications involving electrons incident on endosteal bone surface cells. For the most complex test case presented, an FPGA is capable of evaluating track-structure electron-transport problems more than 500 times faster than a standard computer can perform the same track-structure simulation and with comparable accuracy.
International Nuclear Information System (INIS)
Allam, Kh. A.
2017-01-01
In this work, a new methodology is developed based on Monte Carlo simulation for external dose calculation in tunnels and mines. The tunnel is modeled as a cylinder of finite thickness with an entrance and with or without an exit. A photon transport model was applied for exposure dose calculations. New software based on the Monte Carlo solution was designed and programmed using the Delphi programming language. The deviation between the calculated external dose due to radioactive nuclei in a mine tunnel and the corresponding experimental data lies in the range 7.3-19.9%. The variation of the specific external dose rate with position in the tunnel, and with building material density and composition, was studied. The new model offers more flexibility for calculating the real external dose in any cylindrical tunnel structure. (authors)
First-passage kinetic Monte Carlo on lattices: Hydrogen transport in lattices with traps
von Toussaint, U.; Schwarz-Selinger, T.; Schmid, K.
2015-08-01
A new algorithm for diffusion in 2D and 3D discrete simple cubic lattices, based on a recently proposed technique, Green's-function or first-passage kinetic Monte Carlo, has been developed. It is based on the solutions of appropriately chosen Green's functions, which propagate the diffusing atoms over long distances in one step (superhops). The speed-up of the new approach over standard kinetic Monte Carlo techniques can be orders of magnitude, depending on the problem. Using this new algorithm we simulated recent hydrogen isotope exchange experiments in recrystallized tungsten at 320 K, initially loaded with deuterium. It was found that the observed depth profiles can only be explained with 'active' traps, i.e. traps capable of exchanging atoms with activation energies significantly lower than the actual trap energy. Such a mechanism has so far not been considered in the modeling of hydrogen transport.
Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.
2016-03-01
This work discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package authored at Oak Ridge National Laboratory. Shift has been developed to scale well from laptops to small computing clusters to advanced supercomputers and includes features such as support for multiple geometry and physics engines, hybrid capabilities for variance reduction methods such as the Consistent Adjoint-Driven Importance Sampling methodology, advanced parallel decompositions, and tally methods optimized for scalability on supercomputing architectures. The scaling studies presented in this paper demonstrate good weak and strong scaling behavior for the implemented algorithms. Shift has also been validated and verified against various reactor physics benchmarks, including the Consortium for Advanced Simulation of Light Water Reactors' Virtual Environment for Reactor Analysis criticality test suite and several Westinghouse AP1000® problems presented in this paper. These benchmark results compare well to those from other contemporary Monte Carlo codes such as MCNP5 and KENO.
Energy Technology Data Exchange (ETDEWEB)
Zychor, I. [Soltan Inst. for Nuclear Studies, Otwock-Swierk (Poland)
1994-12-31
The application of a Monte Carlo method to the study of electron and photon beam transport in matter is presented, especially for electrons with energies up to 18 MeV. The SHOWME Monte Carlo code, a modified version of the GEANT3 code, was used on the CONVEX C3210 computer at Swierk. It was assumed that the electron beam is monodirectional and monoenergetic. Arbitrary user-defined, complex geometries made of any element or material can be used in the calculation. All principal phenomena occurring when an electron beam penetrates matter are taken into account. The use of the calculation for therapeutic electron beam collimation is presented. (author). 20 refs, 29 figs.
The effects of realistic pancake solenoids on particle transport
Gu, X.; Okamura, M.; Pikin, A.; Fischer, W.; Luo, Y.
2011-05-01
Solenoids are widely used to transport or focus particle beams. Usually they are assumed to be ideal solenoids with a highly axisymmetric magnetic field. Using the Vector Fields Opera program, we modeled asymmetric solenoids with realistic geometry defects caused by the finite conductor and current jumpers. Their multipole magnetic components were analyzed with a Fourier fit method, and we present some possible optimization approaches for them. We also discuss the effects of "realistic" solenoids on low-energy particle transport. The findings in this paper may be applicable to the design of low-energy particle transport systems.
Microstripes for transport and separation of magnetic particles
DEFF Research Database (Denmark)
Donolato, Marco; Dalslet, Bjarke Thomas; Hansen, Mikkel Fougt
2012-01-01
We present a simple technique for creating an on-chip magnetic particle conveyor based on exchange-biased permalloy microstripes. The particle transportation relies on an array of stripes with a spacing smaller than their width in conjunction with a periodic sequence of four different externally applied magnetic fields. We demonstrate the controlled transportation of a large population of particles over several millimeters of distance as well as the spatial separation of two populations of magnetic particles with different magnetophoretic mobilities. The technique can be used for the controlled...
Turbulent transport of large particles in the atmospheric boundary layer
Richter, D. H.; Chamecki, M.
2017-12-01
To describe the transport of heavy dust particles in the atmosphere, assumptions must typically be made in order to connect the micro-scale emission processes with the larger-scale atmospheric motions. In the context of numerical models, this can be thought of as the transport process occurring between the domain bottom and the first vertical grid point. For example, in the limit of small particles (both low inertia and low settling velocity), theory built upon Monin-Obukhov similarity has proven effective in relating mean dust concentration profiles to surface emission fluxes. For increasing particle mass, however, it becomes more difficult to represent dust transport as a simple extension of the transport of a passive scalar, due to issues such as the crossing-trajectories effect. This study focuses specifically on the problem of large-particle transport and dispersion in the turbulent boundary layer by utilizing direct numerical simulations with Lagrangian point-particle tracking to determine under what conditions, if any, large dust particles (larger than 10 microns in diameter) can be accurately described in a simplified Eulerian framework. In particular, results are presented detailing the independent contributions of both particle inertia and particle settling velocity relative to the strength of the surrounding turbulent flow, and the consequences of overestimating surface fluxes via traditional parameterizations are demonstrated.
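A minimal Lagrangian point-particle model of the kind used in such studies (a generic sketch assumed here; the abstract does not write out its governing equations, and the actual simulations may include more physics) combines Stokes drag with gravitational settling:

```latex
\frac{d\mathbf{v}_p}{dt} = \frac{\mathbf{u}(\mathbf{x}_p,t) - \mathbf{v}_p}{\tau_p} - g\,\hat{\mathbf{z}},
\qquad
\frac{d\mathbf{x}_p}{dt} = \mathbf{v}_p ,
```

where \tau_p is the particle response time. The two independent contributions discussed above then enter through the Stokes number St = \tau_p/\tau_f (inertia relative to a turbulence time scale \tau_f) and through the still-fluid settling velocity w_s = \tau_p g compared with a turbulent velocity scale such as the friction velocity.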
A kinetic Monte Carlo approach to study fluid transport in pore networks
Apostolopoulou, M.; Day, R.; Hull, R.; Stamatakis, M.; Striolo, A.
2017-10-01
The mechanism of fluid migration in porous networks continues to attract great interest. Darcy's law (a phenomenological continuum theory), which is often used to describe fluid flow through a porous material macroscopically, is thought to fail in nano-channels. Transport through heterogeneous and anisotropic systems, characterized by a broad distribution of pores, occurs via a combination of different transport mechanisms, all of which need to be accounted for. The situation is likely more complicated when immiscible fluid mixtures are present. To generalize the study of fluid transport through a porous network, we developed a stochastic kinetic Monte Carlo (KMC) model. In our lattice model, the pore network is represented as a set of connected finite volumes (voxels), and transport is simulated as a random walk of molecules, which "hop" from voxel to voxel. We simulated fluid transport along an effectively 1D pore and compared the results to the analytical solution of the diffusion equation. The KMC model was then implemented to quantify the transport of methane through hydrated micropores, in which case atomistic molecular dynamics simulation results were reproduced. The model was then used to study flow through pore networks, where it was able to quantify the effect of the pore length and the effect of the network's connectivity. The results are consistent with experiments but also provide additional physical insights. Extensions of the model will be useful for better understanding fluid transport in shale rocks.
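The voxel-to-voxel random walk described above can be checked against the diffusion equation with a minimal 1D sketch (hypothetical names; not the authors' code). A molecule on a lattice of unit spacing hops to either neighbour with rate k, so the model should reproduce the diffusive mean-square displacement <x^2> = 2Dt with D = k a^2 = k:

```python
import random

def kmc_walk(k_hop, t_end, rng):
    """One molecule on a 1D voxel lattice (spacing a = 1): it hops left or
    right, each with rate k_hop, so waiting times between hops are
    exponential with total rate 2*k_hop.  Returns the final position."""
    x, t = 0, 0.0
    while True:
        t += rng.expovariate(2.0 * k_hop)    # time until the next hop
        if t > t_end:
            return x
        x += 1 if rng.random() < 0.5 else -1

def mean_square_displacement(k_hop, t_end, n_molecules, seed=2):
    """Monte Carlo estimate of <x^2> at time t_end over many molecules."""
    rng = random.Random(seed)
    return sum(kmc_walk(k_hop, t_end, rng) ** 2
               for _ in range(n_molecules)) / n_molecules
```

With k = 1 and t = 20 the estimate approaches 2Dt = 40; the same bookkeeping generalizes to 3D networks by letting the hop rate depend on the local pore (voxel) properties.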
ASYMPTOTICS OF a PARTICLES TRANSPORT PROBLEM
Directory of Open Access Journals (Sweden)
Kuzmina Ludmila Ivanovna
2017-11-01
Full Text Available Subject: groundwater filtration affects the strength and stability of underground and hydro-technical constructions. Research objectives: the study of the one-dimensional problem of displacement of a suspension by a flow of pure water in a porous medium. Materials and methods: when filtering a suspension, some particles pass through the porous medium and some of them get stuck in the pores. It is assumed that the size distributions of the solid particles and the pores overlap. In this case, the main mechanism of particle retention is size exclusion: the particles pass freely through the large pores and get stuck at the inlet of the tiny pores that are smaller than the particle diameter. The concentrations of suspended and retained particles satisfy two quasi-linear differential equations of the first order. To solve the filtration problem, methods of nonlinear asymptotic analysis are used. Results: in a mathematical model of filtration of suspensions, which takes into account the dependence of the porosity and permeability of the porous medium on the concentration of retained particles, the boundary between the two phases moves with variable velocity. The asymptotic solution to the problem is constructed for a small filtration coefficient. A theorem on the existence of the asymptotics is proved. Analytical expressions for the principal asymptotic terms are presented for the case of linear coefficients and initial conditions. The asymptotics of the boundary of the two phases is given in explicit form. Conclusions: the filtration problem under study can be solved analytically.
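The two quasi-linear first-order equations mentioned are not written out in the abstract; in the standard deep-bed filtration model they would take the form (an assumption here) of a mass balance for the suspended concentration c(x,t) and capture kinetics for the retained concentration s(x,t):

```latex
\frac{\partial}{\partial t}\bigl(c + s\bigr) + \frac{\partial c}{\partial x} = 0,
\qquad
\frac{\partial s}{\partial t} = \Lambda(s)\, c ,
```

where \Lambda(s) is the filtration coefficient; the asymptotic solution mentioned in the results would then be an expansion in the limit of small \Lambda.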
International Nuclear Information System (INIS)
Lee, Choonsik; Nagaoka, Tomoaki; Lee, Jai-Ki
2006-01-01
Japanese male and female tomographic phantoms, which were developed for radio-frequency electromagnetic-field dosimetry, were implemented into a multi-particle Monte Carlo transport code to evaluate realistic dose distributions in a human body exposed to a radiation field. The Japanese tomographic phantoms, developed from whole-body magnetic resonance images of an average adult Japanese male and female, were processed as follows to be implemented into the general-purpose multi-particle Monte Carlo code MCNPX2.5. The original array sizes of the Japanese male and female phantoms, 320 x 160 x 866 voxels and 320 x 160 x 804 voxels, respectively, were reduced to 320 x 160 x 433 voxels and 320 x 160 x 402 voxels due to the limitation on memory use in MCNPX2.5. The 3D voxel arrays of the phantoms were processed using the built-in repeated-structure algorithm, where the human anatomy is described by a repeated lattice of tiny cubes containing the information on material composition and organ index number. The original phantom data were converted into an ASCII file, which can be directly ported into the lattice card of the MCNPX2.5 input deck by using an in-house code. A total of 30 material compositions obtained from International Commission on Radiation Units and Measurements (ICRU) report 46 were assigned to the 54 and 55 organs and tissues in the male and female phantoms, respectively, and imported into the material card of MCNPX2.5 along with the corresponding cross section data. Illustrative calculations of absorbed doses for 26 internal organs and of the effective dose were performed for idealized broad parallel photon and neutron beams in anterior-posterior irradiation geometry, which is typical for workers at a nuclear power plant. The results were compared with the data from other Japanese and Caucasian tomographic phantoms, and International Commission on Radiological Protection (ICRP) report 74. The further investigation of the difference in organ dose and effective dose among tomographic
Srna-Monte Carlo codes for proton transport simulation in combined and voxelized geometries
International Nuclear Information System (INIS)
Ilic, R.D.; Lalic, D.; Stankovic, S.J.
2002-01-01
This paper describes new Monte Carlo codes for proton transport simulations in complex geometrical forms and in materials of different composition. The SRNA codes were developed for three-dimensional (3D) dose distribution calculations in proton therapy and dosimetry. The model underlying these codes is based on the theory of proton multiple scattering and a simple model of compound nucleus decay. The package consists of two codes: SRNA-2KG and SRNA-VOX. The first code simulates proton transport in combined geometry that can be described by planes and second-order surfaces. The second uses the voxelized geometry of material zones and is specifically adapted to the use of patient computed tomography data. Transition probabilities for both codes are provided by the SRNADAT program. In this paper, we present the models and algorithms of our programs, as well as the results of the numerical experiments we have carried out applying them, along with the results of proton transport simulations obtained with the PETRA and GEANT programs. The simulation of proton beam characterization by means of the Multi-Layer Faraday Cup and of the spatial distribution of positron emitters obtained with our program indicates the imminent application of Monte Carlo techniques in clinical practice. (author)
Foucart, Francois
2018-04-01
General relativistic radiation hydrodynamic simulations are necessary to accurately model a number of astrophysical systems involving black holes and neutron stars. Photon transport plays a crucial role in radiatively dominated accretion discs, while neutrino transport is critical to core-collapse supernovae and to the modelling of electromagnetic transients and nucleosynthesis in neutron star mergers. However, evolving the full Boltzmann equations of radiative transport is extremely expensive. Here, we describe the implementation in the general relativistic SPEC code of a cheaper radiation hydrodynamic method that theoretically converges to a solution of Boltzmann's equation in the limit of infinite numerical resources. The algorithm is based on a grey two-moment scheme, in which we evolve the energy density and momentum density of the radiation. Two-moment schemes require a closure that fills in missing information about the energy spectrum and higher order moments of the radiation. Instead of the approximate analytical closure currently used in core-collapse and merger simulations, we complement the two-moment scheme with a low-accuracy Monte Carlo evolution. The Monte Carlo results can provide any or all of the missing information in the evolution of the moments, as desired by the user. As a first test of our methods, we study a set of idealized problems demonstrating that our algorithm performs significantly better than existing analytical closures. We also discuss the current limitations of our method, in particular open questions regarding the stability of the fully coupled scheme.
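The division of labour in such a hybrid scheme can be illustrated with the Eddington factor χ = P/E, the lowest piece of closure information: an analytic M1-style closure guesses it from the evolved moments, while a low-accuracy Monte Carlo evolution can estimate it directly from packet angles. A minimal sketch under standard assumptions (the Levermore interpolation form and the isotropic test are textbook material; nothing here is taken from the SpEC implementation):

```python
import numpy as np

def m1_eddington(f):
    """Analytic M1 closure: Eddington factor chi as a function of the
    reduced flux f = |F|/(c E) (Levermore interpolation form)."""
    return (3.0 + 4.0 * f**2) / (5.0 + 2.0 * np.sqrt(4.0 - 3.0 * f**2))

def mc_eddington(mu, weights):
    """Monte Carlo estimate of chi = P/E = <mu^2> from the propagation
    angle cosines mu of weighted radiation packets (illustrative only)."""
    mu, w = np.asarray(mu, float), np.asarray(weights, float)
    return np.sum(w * mu**2) / np.sum(w)

# Isotropic packets: both routes should give chi -> 1/3
rng = np.random.default_rng(0)
mu = rng.uniform(-1.0, 1.0, 200_000)
print(m1_eddington(0.0))                   # exactly 1/3 for zero net flux
print(mc_eddington(mu, np.ones_like(mu)))  # ~1/3, with Monte Carlo noise
```

In the hybrid scheme described in the abstract, the Monte Carlo estimate (or any higher moment it supplies) would replace the analytic guess wherever the user requests it.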
International Nuclear Information System (INIS)
Bellezzo, Murillo
2014-01-01
As the most accurate method for estimating absorbed dose in radiotherapy, the Monte Carlo Method (MCM) has been widely used in radiotherapy treatment planning. Nevertheless, its efficiency can be improved for clinical routine applications. In this thesis, the CUBMC code is presented, a GPU-based MC photon transport algorithm for dose calculation under the Compute Unified Device Architecture (CUDA) platform. The simulation of physical events is based on the algorithm used in PENELOPE, and the cross section table used is the one generated by the MATERIAL routine, also present in the PENELOPE code. Photons are transported in voxel-based geometries with different compositions. Two distinct approaches are used for transport simulation. The first forces the photon to stop at every voxel frontier; the second is the Woodcock method, in which the photon ignores the existence of borders and travels in a homogeneous fictitious medium. The CUBMC code aims to be an alternative Monte Carlo simulation code that, by using the parallel processing capability of graphics processing units (GPU), provides high-performance simulations on low-cost compact machines, and thus can be applied in clinical cases and incorporated in treatment planning systems for radiotherapy. (author)
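The Woodcock (delta-tracking) approach mentioned above is easy to state: stream with a majorant cross section that covers every voxel, ignore all borders, and accept a sampled collision as real with probability μ(x)/μ_max. A generic one-dimensional sketch of the technique (not CUBMC or PENELOPE code; all names are illustrative):

```python
import math
import random

def woodcock_collision_site(x0, direction, mu_of, mu_max, rng):
    """Sample the next *real* interaction site by delta tracking: fly with
    the majorant mu_max across all voxel borders, and accept each tentative
    collision with probability mu(x)/mu_max (otherwise it is 'virtual')."""
    x = x0
    while True:
        x += direction * (-math.log(rng.random()) / mu_max)
        if rng.random() < mu_of(x) / mu_max:
            return x

# Homogeneous sanity check: with mu = 0.5 everywhere and majorant 1.0,
# the sampled free paths must still average the mean free path 1/mu = 2.
rng = random.Random(1)
paths = [woodcock_collision_site(0.0, 1.0, lambda x: 0.5, 1.0, rng)
         for _ in range(20_000)]
print(sum(paths) / len(paths))  # ~2.0
```

The appeal on a GPU is that no thread ever has to compute voxel-boundary intersections; the price is wasted virtual collisions where the local cross section is far below the majorant.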
Bergmann, Ryan
Graphics processing units, or GPUs, have gradually increased in computational power from the small, job-specific boards of the early 1990s to the programmable powerhouses of today. Compared to more common central processing units, or CPUs, GPUs have a higher aggregate memory bandwidth, much higher floating-point operations per second (FLOPS), and lower energy consumption per FLOP. Because one of the main obstacles in exascale computing is power consumption, many new supercomputing platforms are gaining much of their computational capacity by incorporating GPUs into their compute nodes. Since CPU-optimized parallel algorithms are not directly portable to GPU architectures (or at least not without losing substantial performance), transport codes need to be rewritten to execute efficiently on GPUs. Unless this is done, reactor simulations cannot take full advantage of these new supercomputers. WARP, which can stand for ``Weaving All the Random Particles,'' is a three-dimensional (3D) continuous energy Monte Carlo neutron transport code developed in this work to efficiently implement a continuous energy Monte Carlo neutron transport algorithm on a GPU. WARP accelerates Monte Carlo simulations while preserving the benefits of using the Monte Carlo method, namely, very few physical and geometrical simplifications. WARP is able to calculate multiplication factors, flux tallies, and fission source distributions for time-independent problems, and can run in either criticality or fixed-source mode. WARP can transport neutrons in unrestricted arrangements of parallelepipeds, hexagonal prisms, cylinders, and spheres. WARP uses an event-based algorithm, but with some important differences. Moving data is expensive, so WARP uses a remapping vector of pointer/index pairs to direct GPU threads to the data they need to access. The remapping vector is sorted by reaction type after every transport iteration using a high-efficiency parallel radix sort, which serves to keep the
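The remapping idea in the last two sentences can be sketched: rather than shuffling heavy particle records in GPU memory, sort a lightweight vector of keys and indices so that the threads running one reaction kernel read contiguous entries. A toy sketch using NumPy's stable sort in place of WARP's parallel radix sort (the key encoding is made up for illustration):

```python
import numpy as np

def build_remap(reaction_keys):
    """Sort a (key, particle-index) remapping vector by reaction type so
    that each physics kernel gets a contiguous block of particle indices
    to process. Stand-in for WARP's high-efficiency parallel radix sort
    (radix sort is likewise stable)."""
    keys = np.asarray(reaction_keys)
    order = np.argsort(keys, kind="stable")
    return keys[order], order

# Toy keys per particle: 0 = capture, 1 = scatter, 2 = fission (hypothetical)
keys, remap = build_remap([2, 0, 1, 0, 2, 1])
print(keys.tolist())   # [0, 0, 1, 1, 2, 2] -- contiguous per reaction
print(remap.tolist())  # [1, 3, 2, 5, 0, 4] -- where threads fetch data
```

Thread i then processes the particle at index `remap[i]`, so memory reads within one kernel are coalesced without ever moving the particle records themselves.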
Kim, Hyun Suk; Choi, Hong Yeop; Lee, Gyemin; Ye, Sung-Joon; Smith, Martin B; Kim, Geehyun
2017-12-22
This work aims to develop a gamma-ray/neutron dual-particle imager, based on rotating modulation collimators (RMC) and pulse shape discrimination (PSD)-capable scintillators, for possible applications in radioactivity monitoring as well as nuclear security and safeguards. A Monte Carlo simulation study was performed to design an RMC system for dual-particle imaging, and modulation patterns were obtained for gamma-ray and neutron sources in various configurations. We applied an image reconstruction algorithm utilizing the maximum-likelihood expectation maximization (MLEM) method, based on analytical modeling of the source-detector configurations, to the Monte Carlo simulation results. Both gamma-ray and neutron source distributions were reconstructed and evaluated in terms of signal-to-noise ratio (SNR), showing the viability of developing an RMC-based gamma-ray/neutron dual-particle imager using PSD-capable scintillators. © 2017 IOP Publishing Ltd.
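The MLEM update used for such modulation-pattern reconstructions has a standard multiplicative form: with measured counts y ~ Poisson(A x) and system matrix A from the analytical detector model, each iteration applies x ← x · Aᵀ(y / Ax) / Aᵀ1. A generic sketch of that update (not the authors' code; the toy system matrix is invented):

```python
import numpy as np

def mlem(counts, sysmat, n_iter=200):
    """Standard MLEM reconstruction for y ~ Poisson(A @ x):
    x <- x * (A^T (y / (A x))) / (A^T 1). Generic sketch."""
    A = np.asarray(sysmat, float)
    y = np.asarray(counts, float)
    x = np.ones(A.shape[1])          # flat nonnegative initial image
    sens = A.sum(axis=0)             # sensitivity term A^T 1
    for _ in range(n_iter):
        proj = A @ x
        proj[proj == 0] = 1e-12      # guard against division by zero
        x *= (A.T @ (y / proj)) / sens
    return x

# Toy 2-pixel source seen through a made-up 3-measurement modulation pattern
A = np.array([[0.9, 0.1],
              [0.2, 0.8],
              [0.5, 0.5]])
x_true = np.array([4.0, 1.0])
x_rec = mlem(A @ x_true, A)          # noise-free, consistent data
print(np.round(x_rec, 2))            # -> [4. 1.]
```

For noise-free consistent data the iteration recovers the true source; with Poisson noise one would stop early or regularize, since late iterations amplify noise.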
Monte Carlo techniques in radiation therapy
Verhaegen, Frank
2013-01-01
Modern cancer treatment relies on Monte Carlo simulations to help radiotherapists and clinical physicists better understand and compute radiation dose from imaging devices as well as exploit four-dimensional imaging data. With Monte Carlo-based treatment planning tools now available from commercial vendors, a complete transition to Monte Carlo-based dose calculation methods in radiotherapy could likely take place in the next decade. Monte Carlo Techniques in Radiation Therapy explores the use of Monte Carlo methods for modeling various features of internal and external radiation sources, including light ion beams. The book-the first of its kind-addresses applications of the Monte Carlo particle transport simulation technique in radiation therapy, mainly focusing on external beam radiotherapy and brachytherapy. It presents the mathematical and technical aspects of the methods in particle transport simulations. The book also discusses the modeling of medical linacs and other irradiation devices; issues specific...
International Nuclear Information System (INIS)
Pazianotto, Mauricio T.; Carlson, Brett V.; Federico, Claudio A.; Gonzalez, Odair L.
2011-01-01
Neutrons generated by the interaction of cosmic rays with the atmosphere make an important contribution to the dose accumulated in electronic circuits and in aircraft crew members at flight altitude. High-energy neutrons are produced in spallation reactions and intranuclear cascade processes by primary cosmic-ray particle interactions with atoms in the atmosphere. These neutrons can produce secondary neutrons and also undergo moderation due to atmospheric interactions, resulting in a wide energy spectrum, ranging from thermal energies (0.025 eV) to several hundreds of MeV. The Long Counter (LC) is a widely used neutron detector designed to measure the directional flux of neutrons with an approximately constant response over a wide energy range (thermal to 20 MeV). Its calibration and the determination of its response over the wide energy range of the cosmic-ray-induced neutron spectrum are very difficult because of the lack of installations with these capabilities. The goal of this study is to assess the behavior of the response of a Long Counter using the Monte Carlo (MC) computational code MCNPX (Monte Carlo N-Particle eXtended). The dependence of the Long Counter response on the angle of incidence, as well as on the neutron energy, will be carefully investigated, compared with experimental data previously obtained with 241Am-Be and 252Cf neutron sources, and extended to the neutron spectrum produced by cosmic rays. (Author)
Spatiotemporal Structure of Aeolian Particle Transport on Flat Surface
Niiya, Hirofumi; Nishimura, Kouichi
2017-05-01
We conduct numerical simulations based on a model of blowing snow to reveal the long-term properties and equilibrium state of aeolian particle transport from 10-5 to 10 m above the flat surface. The numerical results are as follows. (i) Time-series data of particle transport are divided into development, relaxation, and equilibrium phases, which are formed by rapid wind response below 10 cm and gradual wind response above 10 cm. (ii) The particle transport rate at equilibrium is expressed as a power function of friction velocity, and the index of 2.35 implies that most particles are transported by saltation. (iii) The friction velocity below 100 µm remains roughly constant and lower than the fluid threshold at equilibrium. (iv) The mean particle speed above 300 µm is less than the wind speed, whereas that below 300 µm exceeds the wind speed because of descending particles. (v) The particle diameter increases with height in the saltation layer, and the relationship is expressed as a power function. Through comparisons with the previously reported random-flight model, we find a crucial problem that empirical splash functions cannot reproduce particle dynamics at a relatively high wind speed.
Transport of suspended particles in turbulent open channel flows
Breugem, W.A.
2012-01-01
Two experiments are performed in order to investigate suspended sediment transport in a turbulent open channel flow. The first experiment used particle image velocimetry (PIV) to measure the fluid velocity with a high spatial resolution, while particle tracking velocimetry (PTV) was used to measure
Modelling of an industrial environment, part 1.: Monte Carlo simulations of photon transport
International Nuclear Information System (INIS)
Kis, Z.; Eged, K.; Meckbach, R.; Voigt, G.
2002-01-01
After a nuclear accident releasing radioactive material into the environment, external exposures may contribute significantly to the radiation exposure of the population (UNSCEAR 1988, 2000). For urban populations, the external gamma exposure from radionuclides deposited on the surfaces of the urban-industrial environment yields the dominant contribution to the total dose to the public (Kelly 1987; Jacob and Meckbach 1990). The radiation field is naturally influenced by the environment around the sources. For calculations of the shielding effect of the structures in complex and realistic urban environments, Monte Carlo methods turned out to be useful tools (Jacob and Meckbach 1987; Meckbach et al. 1988). Using these methods, a complex environment can be set up in which the photon transport can be solved in a reliable way. The accuracy of the methods is in principle limited only by the knowledge of the atomic cross sections and the computational time. Several papers using Monte Carlo results for calculating doses from external gamma exposures have been published (Jacob and Meckbach 1987, 1990; Meckbach et al. 1988; Rochedo et al. 1996). In these papers the Monte Carlo simulations were run in urban environments and for different photon energies. An industrial environment can be defined as an area where productive and/or commercial activity is carried out; good examples are a factory or a supermarket. An industrial environment can be rather different from urban ones in the types, structures and dimensions of its buildings. These variations will affect the radiation field of this environment. Hence there is a need to run new Monte Carlo simulations designed specifically for industrial environments
Time-dependent 2-stream particle transport
International Nuclear Information System (INIS)
Corngold, Noel
2015-01-01
Highlights: • We consider time-dependent transport in the 2-stream or “rod” model via an attractive matrix formalism. • After reviewing some classical problems in homogeneous media we discuss transport in materials whose density may vary. • There we achieve a significant contraction of the underlying Telegrapher’s equation. • We conclude with a discussion of stochastics, treated by the “first-order smoothing approximation.” - Abstract: We consider time-dependent transport in the 2-stream or “rod” model via an attractive matrix formalism. After reviewing some classical problems in homogeneous media we discuss transport in materials whose density may vary. There we achieve a significant contraction of the underlying Telegrapher’s equation. We conclude with a discussion of stochastics, treated by the “first-order smoothing approximation.”
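For reference, the homogeneous-medium equations behind the rod model can be written out (standard textbook notation, not taken from the paper: ψ± are the right- and left-moving densities, v the particle speed, σ the total cross section, c the mean number of secondaries per collision):

```latex
\partial_t \psi_\pm \pm v\,\partial_x \psi_\pm
  = -\sigma v\,\psi_\pm + \frac{c\,\sigma v}{2}\left(\psi_+ + \psi_-\right).
```

Eliminating the current J = ψ₊ − ψ₋ from the sum and difference of these two equations yields the Telegrapher's equation for the density ρ = ψ₊ + ψ₋ referred to in the highlights:

```latex
\partial_{tt}\rho + (2-c)\,\sigma v\,\partial_t \rho
  + (1-c)\,\sigma^2 v^2\,\rho = v^2\,\partial_{xx}\rho .
```

It is this equation whose form the paper contracts when σ is allowed to vary with position.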
International Nuclear Information System (INIS)
Zazula, J.M.
1984-01-01
This work concerns the calculation of a neutron response in a neutron field perturbed by materials surrounding the source or the detector. The solution is obtained by coupling a Monte Carlo radiation transport computation for the perturbed region with a discrete ordinates transport computation for the unperturbed system. (author). 62 refs
Fueling profile sensitivities of trapped particle mode transport to TNS
International Nuclear Information System (INIS)
Mense, A.T.; Attenberger, S.E.; Houlberg, W.A.
1977-01-01
A key factor in the plasma thermal behavior is the anticipated existence of dissipative trapped particle modes. A possible scheme for controlling the strength of these modes was found. The scheme involves varying the cold fueling profile. A one dimensional multifluid transport code was used to simulate plasma behavior. A multiregime model for particle and energy transport was incorporated based on pseudoclassical, trapped electron, and trapped ion regimes used elsewhere in simulation of large tokamaks. Fueling profiles peaked toward the plasma edge may provide a means for reducing density-gradient-driven trapped particle modes, thus reducing diffusion and conduction losses
Monte Carlo N-particle simulation of neutron-based sterilisation of anthrax contamination.
Liu, B; Xu, J; Liu, T; Ouyang, X
2012-10-01
To simulate the neutron-based sterilisation of anthrax contamination by the Monte Carlo N-particle (MCNP) 4C code. Neutrons are elementary particles that have no charge. They are 20 times more effective than electrons or γ-rays in killing anthrax spores on surfaces and inside closed containers. Neutrons emitted from a ²⁵²Cf neutron source are in the 100 keV to 2 MeV energy range. A 2.5 MeV D-D neutron generator can create neutrons at up to 10¹³ n s⁻¹ with current technology. All these enable an effective and low-cost method of killing anthrax spores. There is no effect on neutron energy deposition in the anthrax sample when using a reflector that is thicker than its saturation thickness. Among all three reflecting materials tested in the MCNP simulation, paraffin is the best because it has the thinnest saturation thickness and is easy to machine. The MCNP radiation dose and fluence simulation calculation also showed that the MCNP-simulated neutron fluence needed to kill the anthrax spores agrees very well with previous analytical estimations. The MCNP simulation indicates that a 10 min neutron irradiation from a 0.5 g ²⁵²Cf neutron source or a 1 min neutron irradiation from a 2.5 MeV D-D neutron generator may kill all anthrax spores in a sample. This is a promising result because a 2.5 MeV D-D neutron generator output >10¹³ n s⁻¹ should be attainable in the near future. This indicates that we could use a D-D neutron generator to sterilise anthrax contamination within several seconds.
Premar-2: a Monte Carlo code for radiative transport simulation in atmospheric environments
International Nuclear Information System (INIS)
Cupini, E.
1999-01-01
The peculiarities of the PREMAR-2 code, aimed at Monte Carlo simulation of radiation transport in atmospheric environments in the infrared-ultraviolet frequency range, are described. With respect to the previously developed PREMAR code, besides plane multilayers, the new code foresees spherical multilayers and finite sequences of vertical layers, each with its own atmospheric behaviour, together with the refraction phenomenon, so that long-range, highly slanted paths can now be taken into account more faithfully. A zenithal angular dependence of the albedo coefficient has moreover been introduced. Lidar systems, with spatially independent source and telescope, can again be simulated and, in this latest version of the code, sensitivity analyses performed. With this last capability, the consequences for radiation transport of small perturbations in physical components of the atmospheric environment may be analyzed and the related effects on the sought results estimated. The code requires a library of physical data (reaction coefficients, phase functions and refraction indexes) providing the essential features of the environment of interest needed for the Monte Carlo simulation. Variance reduction techniques have been enhanced in the PREMAR-2 code, for instance by introducing a local forced-collision technique, especially apt for use in lidar system simulations. Encouraging comparisons between code and experimental results, carried out at the Brasimone Centre of ENEA, have so far been obtained, even if further checks of the code are to be performed
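The forced-collision technique named above has a compact generic form: split the particle into an uncollided part that crosses the segment with weight w·e^(−τ) and a collided part of weight w·(1 − e^(−τ)) whose collision site is drawn from the exponential distribution truncated to the segment. A sketch of the standard technique (not PREMAR-2 source; names are illustrative):

```python
import math
import random

def forced_collision(w, mu, seg_len, rng):
    """Forced-collision variance reduction over one segment of optical
    thickness tau = mu * seg_len. Returns the sampled collision distance
    (guaranteed inside the segment) and the collided-part weight; the
    uncollided part continues past the segment with weight w*exp(-tau)."""
    tau = mu * seg_len
    p_col = 1.0 - math.exp(-tau)                   # analog collision prob.
    s = -math.log(1.0 - rng.random() * p_col) / mu # truncated exponential
    return s, w * p_col

rng = random.Random(2)
s, w_col = forced_collision(1.0, 0.1, 5.0, rng)    # optically thin segment
print(0.0 <= s <= 5.0)  # True: a collision is forced inside the segment
```

Since w·(1 − e^(−τ)) + w·e^(−τ) = w, the splitting is unbiased; its value in lidar-like geometries is that even optically thin segments always contribute a scattering event toward the detector.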
Penelope-2006: a code system for Monte Carlo simulation of electron and photon transport
International Nuclear Information System (INIS)
2006-01-01
The computer code system PENELOPE (version 2006) performs Monte Carlo simulation of coupled electron-photon transport in arbitrary materials for a wide energy range, from a few hundred eV to about 1 GeV. Photon transport is simulated by means of the standard, detailed simulation scheme. Electron and positron histories are generated on the basis of a mixed procedure, which combines detailed simulation of hard events with condensed simulation of soft interactions. A geometry package called PENGEOM permits the generation of random electron-photon showers in material systems consisting of homogeneous bodies limited by quadric surfaces, i.e. planes, spheres, cylinders, etc. This report is intended not only to serve as a manual of the PENELOPE code system, but also to provide the user with the necessary information to understand the details of the Monte Carlo algorithm. These proceedings contain the corresponding manual and teaching notes of the PENELOPE-2006 workshop and training course, held on 4-7 July 2006 in Barcelona, Spain. (author)
Status of Monte Carlo dose planning
International Nuclear Information System (INIS)
Mackie, T.R.
1995-01-01
Monte Carlo simulation will become increasingly important for treatment planning in radiotherapy. The EGS4 Monte Carlo system, a general particle transport system, has been used most often for simulation tasks in radiotherapy, although ETRAN/ITS and MCNP have also been used. Monte Carlo treatment planning requires that the beam characteristics, such as the energy spectrum and angular distribution of particles emerging from clinical accelerators, be accurately represented. An EGS4 Monte Carlo code, called BEAM, was developed by the OMEGA Project (a collaboration between the University of Wisconsin and the National Research Council of Canada) to transport particles through linear accelerator heads. This information was used as input to simulate the passage of particles through CT-based representations of phantoms or patients using both an EGS4 code (DOSXYZ) and the macro Monte Carlo (MMC) method. Monte Carlo computed 3-D electron beam dose distributions compare well to measurements obtained in simple and complex heterogeneous phantoms. The present drawback with most Monte Carlo codes is that simulation times are longer than those of most non-stochastic dose computation algorithms. This is especially true for photon dose planning. In the future, dedicated Monte Carlo treatment planning systems like Peregrine (from Lawrence Livermore National Laboratory), which will be capable of computing the dose from all beam types, or the Macro Monte Carlo (MMC) system, which is an order of magnitude faster than other algorithms, may dominate the field
Present status of transport code development based on Monte Carlo method
International Nuclear Information System (INIS)
Nakagawa, Masayuki
1985-01-01
The present status of development of Monte Carlo codes is briefly reviewed. The main items are the following: application fields; methods used in Monte Carlo codes (geometry specification, nuclear data, estimators and variance reduction techniques) and unfinished work; typical Monte Carlo codes; and the merits of continuous energy Monte Carlo codes. (author)
Inward particle transport by plasma collective modes
International Nuclear Information System (INIS)
Antonsen, T.; Coppi, B.; Englade, R.
1979-01-01
A model for the rate of density rise observed when neutral gas is fed into a plasma-containing chamber is presented for regimes where known collisional transport processes do not provide an adequate explanation. A dense layer of cold plasma produced at the edge of the plasma column and the resulting relatively sharp ion temperature gradient, as compared with the local density gradient, can lead to the excitation of electron temperature fluctuations driven by ion drift modes. The net inflow of electrons and ions that is produced by these modes has been included in a one-dimensional transport code used to simulate experiments performed by the Alcator device. The linear and quasi-linear theories of these modes are given for the regimes of interest. The cold-plasma-layer model is also consistent with the presence of an outflow of impurity ions, due to impurity driven modes, that balance the inflow produced by discrete collisions. (author)
Gyrokinetic theory for particle and energy transport in fusion plasmas
Falessi, Matteo Valerio; Zonca, Fulvio
2018-03-01
A set of equations is derived describing the macroscopic transport of particles and energy in a thermonuclear plasma on the energy confinement time scale. The equations thus derived allow studying collisional and turbulent transport self-consistently, retaining the effect of magnetic field geometry without postulating any scale separation between the reference state and fluctuations. Previously, assuming scale separation, transport equations had been derived from kinetic equations by means of multiple-scale perturbation analysis and spatio-temporal averaging. In this work, the evolution equations for the moments of the distribution function are obtained following the standard approach; meanwhile, gyrokinetic theory is used to explicitly express the fluctuation-induced fluxes. In this way, equations for the transport of particles and energy up to the transport time scale can be derived using standard first-order gyrokinetics.
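The flux-surface-averaged moment equations at the heart of such a formulation take the familiar schematic form (standard notation, with V' the flux-surface volume element; this generic form is textbook material, and the precise definitions of the fluxes are where the gyrokinetic derivation enters):

```latex
\frac{\partial \langle n \rangle}{\partial t}
  + \frac{1}{V'}\,\frac{\partial}{\partial r}\!\left( V'\,\Gamma_r \right)
  = \langle S_n \rangle ,
\qquad
\Gamma_r = \Gamma_r^{\mathrm{coll}}
  + \left\langle \widetilde{n}\,\widetilde{v}_{E\times B,\,r} \right\rangle ,
```

where the second term is the fluctuation-induced (turbulent) particle flux, here expressed through gyrokinetic theory rather than postulated; an analogous equation governs the energy moment.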
Optimization of FIBMOS Through 2D Silvaco ATLAS and 2D Monte Carlo Particle-based Device Simulations
Kang, J.; He, X.; Vasileska, D.; Schroder, D. K.
2001-01-01
Focused Ion Beam MOSFETs (FIBMOS) demonstrate large enhancements in core device performance areas such as output resistance, hot electron reliability and voltage stability upon channel length or drain voltage variation. In this work, we describe an optimization technique for FIBMOS threshold voltage characterization using the 2D Silvaco ATLAS simulator. Both ATLAS and 2D Monte Carlo particle-based simulations were used to show that FIBMOS devices exhibit enhanced current drive ...
Particle transport in DIII-D plasmas
Kress, Peter; Mordijck, Saskia
2017-10-01
By analyzing the plasma opacity and density evolution during the ELM cycle in DIII-D H-mode plasmas in which the amount of gas fueling was altered, we find evidence for an inward particle pinch at the plasma edge which seems to become more pronounced at higher density. Furthermore, at the plasma edge we find a correlation between the pedestal density and opacity, which measures neutral penetration depth. The changes in edge opacity during an ELM cycle were calculated by using a detailed time history of measured plasma profiles. At the same time, the density evolution during an ELM cycle was investigated. We find that if the edge density increases through an increase in gas fueling, then opacity increases and neutral fueling penetration depth decreases. We also find that density at the top of the pedestal recovers faster following an ELM when the overall density level is higher, leading to a hollow profile inside of the pedestal top. All these results indicate that there must be an inward particle pinch in the pedestal which will be crucial in the fueling of future burning plasma devices. Supported by US DOE DE-SC0007880, DIII-D Grant Number DE-FC02-04ER54698.
Transport effects due to particle erosion mechanisms. [in planetary rings
Durisen, R. H.
1984-01-01
Various processes can erode the surfaces of planetary ring particles. Recent estimates for Saturn's rings suggest that a centimeter-thick surface layer could be eroded from an isolated ring particle in less than 1000 yr by meteoroid impacts alone. The atoms, molecules, and chips ejected from ring particles by erosion will arc across the rings along elliptical orbits. For moderate ring optical depths, ejecta will be absorbed or inelastically scattered upon reintersecting the ring plane. Continuous exchange of ejecta between different ring regions can lead to net radial transport of mass and angular momentum, to changes in particle sizes, and to the buildup of chip regoliths several centimeters deep on the surfaces of ring particles. Because most of the erosional ejecta are not lost but merely exchanged over short distances, the net erosion rate of the surfaces of these ring particles will be much less than that estimated for an isolated particle. Numerical solutions for time-dependent ballistic transport under various assumptions suggest pile-up and spillover effects especially near regions of preexisting high optical depth contrast, such as the inner edges of A and B rings. Global redistribution could be significant over billions of years. Other features in planetary ring systems may be influenced by ballistic transport.
Particle Transport in ECRH Plasmas of the TJ-II; Transporte de Particulas en Plasmas ECRH del TJ-II
Energy Technology Data Exchange (ETDEWEB)
Vargas, V. I.; Lopez-Bruna, D.; Estrada, T.; Guasp, J.; Reynolds, J. M.; Velasco, J. L.; Herranz, J.
2007-07-01
We present a systematic study of particle transport in ECRH plasmas of TJ-II with different densities. The goal is to find the particle confinement time and the dependence of the electron diffusivity on line-averaged density. The experimental information consists of electron temperature profiles T_e (Thomson scattering, TS) and electron density profiles n_e (TS and reflectometry), together with measured puffing data in stationary discharges. The profile of the electron source, S_e, was obtained with the 3D Monte Carlo code EIRENE. The analysis of particle balance has been done by linking the results of the EIRENE code with those of a model that reproduces ECRH plasmas in stationary conditions. In the range of densities studied (0.58 ≤ ⟨n_e⟩ (10¹⁹ m⁻³) ≤ 0.80) there are two confinement regions separated by a threshold density,
Modeling airflow and particle transport/deposition in pulmonary airways.
Kleinstreuer, Clement; Zhang, Zhe; Li, Zheng
2008-11-30
A review of research papers is presented, pertinent to computer modeling of airflow as well as nano- and micron-size particle deposition in pulmonary airway replicas. The key modeling steps are outlined, including construction of suitable airway geometries, mathematical description of the air-particle transport phenomena and computer simulation of micron and nanoparticle depositions. Specifically, diffusion-dominated nanomaterial deposits on airway surfaces much more uniformly than micron particles of the same material. This may imply different toxicity effects. Due to impaction and secondary flows, micron particles tend to accumulate around the carinal ridges and to form "hot spots", i.e., locally high concentrations which may lead to tumor developments. Inhaled particles in the size range 20 nm ≤ d_p ≤ 3 µm may readily reach the deeper lung region. Concerning inhaled therapeutic particles, optimal parameters for mechanical drug-aerosol targeting of predetermined lung areas can be computed, given representative pulmonary airways.
3D electro-thermal Monte Carlo study of transport in confined silicon devices
Mohamed, Mohamed Y.
The explosive growth of portable microelectronic devices and the rapid shrinking of microprocessor size have provided tremendous motivation for scientists and engineers to continue the down-scaling of these devices. For several decades, innovations have allowed components such as transistors to be physically reduced in size, allowing the famous Moore's law to hold true. As these transistors approach the atomic scale, however, further reduction becomes less feasible and less practical. As new technologies overcome these limitations, they face new, unexpected problems, including the ability to accurately simulate and predict the behavior of these devices and to manage the heat they generate. This work uses a 3D Monte Carlo (MC) simulator to investigate the electro-thermal behavior of quasi-one-dimensional electron gas (1DEG) multigate MOSFETs. In order to study these highly confined architectures, the inclusion of quantum corrections becomes essential. To better capture the influence of carrier confinement, the electrostatically quantum-corrected full-band MC model has the added feature of being able to incorporate subband scattering. The scattering rate selection introduces quantum correction into carrier movement. In addition to the quantum effects, scaling introduces thermal management issues due to the surge in power dissipation. Solving these problems will continue to bring improvements in battery life, performance, and size constraints of future devices. We have coupled our electron transport Monte Carlo simulation to Aksamija's phonon transport so that we may accurately and efficiently study carrier transport, heat generation, and other effects at the transistor level. This coupling utilizes anharmonic phonon decay and temperature-dependent scattering rates. One immediate advantage of our coupled electro-thermal Monte Carlo simulator is its ability to provide an accurate description of the spatial variation of self-heating and its effect on non
Simeonov, Yuri; Weber, Uli; Penchev, Petar; Ringbæk, Toke Printz; Schuy, Christoph; Brons, Stephan; Engenhart-Cabillic, Rita; Bliedtner, Jens; Zink, Klemens
2017-08-11
The purpose of this work was to design and manufacture a 3D range-modulator for scanned particle therapy. The modulator is intended to create a highly conformal dose distribution with only one fixed energy, simultaneously reducing considerably the treatment time. As a proof of concept, a 3D range-modulator was developed for a spherical target volume with a diameter of 5 cm, placed at a depth of 25 cm in a water phantom. It consists of a large number of thin pins with a well-defined shape and different lengths to modulate the necessary shift of the Bragg peak. The 3D range-modulator was manufactured with a rapid prototyping technique. The FLUKA Monte Carlo package was used to simulate the modulating effect of the 3D range-modulator and the resulting dose distribution. For that purpose, a special user routine was implemented to handle its complex geometrical contour. Additionally, FLUKA was extended with the capability of intensity modulated scanning. To validate the simulation results, dose measurements were carried out at the Heidelberg Ion Beam Therapy Center with a 400.41 MeV/u {sup 12}C beam. The high resolution dosimetric measurements show a good agreement between simulated and measured dose distributions. Irradiation of the monoenergetic raster plan took 3 s, which is approximately 20 times shorter than a comparable plan with 16 different energies. The combination of only one energy and a 3D range-modulator leads to a tremendous decrease in irradiation time. 'Interplay effects', typical for moving targets and pencil beam scanning, can be immensely reduced or disappear completely, making the delivery of a homogeneous dose to moving targets more reliable. Combining high dose conformity, very good homogeneity and extremely short irradiation times, the 3D range-modulator is considered to become a clinically applicable method for very fast treatment of lung tumours.
SAM-CE, Time-Dependent 3-D Neutron Transport, Gamma Transport in Complex Geometry by Monte-Carlo
International Nuclear Information System (INIS)
2003-01-01
1 - Nature of physical problem solved: The SAM-CE system comprises two Monte Carlo codes, SAM-F and SAM-A. SAM-F supersedes the forward Monte Carlo code, SAM-C. SAM-A is an adjoint Monte Carlo code designed to calculate the response due to fields of primary and secondary gamma radiation. The SAM-CE system is a FORTRAN Monte Carlo computer code designed to solve the time-dependent neutron and gamma-ray transport equations in complex three-dimensional geometries. SAM-CE is applicable for forward neutron calculations and for forward as well as adjoint primary gamma-ray calculations. In addition, SAM-CE is applicable for the gamma-ray stage of the coupled neutron-secondary gamma ray problem, which may be solved in either the forward or the adjoint mode. Time-dependent fluxes, and flux functionals such as dose, heating, count rates, etc., are calculated as functions of energy, time and position. Multiple scoring regions are permitted and these may be either finite volume regions or point detectors or both. Other scores of interest, e.g., collision and absorption densities, etc., are also made. 2 - Method of solution: A special feature of SAM-CE is its use of the 'combinatorial geometry' technique which affords the user geometric capabilities exceeding those available with other commonly used geometric packages. All nuclear interaction cross section data (derived from the ENDF for neutrons and from the UNC-format library for gamma-rays) are tabulated in point energy meshes. The energy meshes for neutrons are internally derived, based on built-in convergence criteria and user-supplied tolerances. Tabulated neutron data for each distinct nuclide are in unique and appropriate energy meshes. Both resolved and unresolved resonance parameters from ENDF data files are treated automatically, and extremely precise and detailed descriptions of cross section behaviour are permitted. Such treatment avoids the ambiguities usually associated with multi-group codes, which use flux
Drift Wave Test Particle Transport in Reversed Shear Profile
International Nuclear Information System (INIS)
Horton, W.; Park, H.B.; Kwon, J.M.; Stronzzi, D.; Morrison, P.J.; Choi, D.I.
1998-01-01
Drift wave maps, area-preserving maps that describe the motion of charged particles in drift waves, are derived. The maps allow the integration of particle orbits on the long time scale needed to describe transport. Calculations using the drift wave maps show that dramatic improvement in particle confinement, in the presence of a given level and spectrum of E x B turbulence, can occur for q(r)-profiles with reversed shear. A similar reduction in the transport, i.e., one that is independent of the turbulence, is observed in the presence of an equilibrium radial electric field with shear. The transport reduction, caused by the combined effects of radial electric field shear and both monotonic and reversed-shear magnetic q-profiles, is also investigated.
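An area-preserving map of this family can be iterated cheaply over the many turns needed for transport studies. The sketch below uses the Chirikov standard map as a generic stand-in for the drift wave maps derived in the paper (it is not the paper's specific map); K plays the role of the turbulence amplitude:

```python
import math

# Iterate a generic area-preserving (standard-map-like) model of particle
# motion in a wave field. For K = 0 the momentum-like variable is conserved,
# mimicking perfect confinement; increasing K introduces chaotic transport.

def standard_map(theta, p, K):
    p_new = p + K * math.sin(theta)
    theta_new = (theta + p_new) % (2.0 * math.pi)
    return theta_new, p_new

def orbit(theta0, p0, K, n_steps):
    theta, p = theta0, p0
    traj = [(theta, p)]
    for _ in range(n_steps):
        theta, p = standard_map(theta, p, K)
        traj.append((theta, p))
    return traj

traj = orbit(1.0, 0.5, K=0.9, n_steps=1000)
```

Because the map is explicit, integrating 10^6 map iterations costs far less than integrating the underlying equations of motion over the same time span, which is the point of the map formulation.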
Rectified transport of a ring containing self-propelled particles
Huang, Xiao-Qun; Liao, Jing-Jing; Ai, Bao-Quan
2018-02-01
Rectified transport of a ring containing self-propelled particles is numerically investigated in a two-dimensional herringbone potential. It is found that the ring powered by active particles can be rectified in the asymmetric potential, and the direction of the transport is determined by the asymmetry of the potential. The ring radius can strongly affect the transport, and the role of the radius in the average velocity depends on the profile of the potential and the self-propulsion speed. There exist optimal values of the self-propulsion speed and the modulation parameter of the potential at which the average velocity attains its maximum. The average velocity decreases monotonically with increasing translational diffusion, rotational diffusion, and particle number.
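The kind of rectification measurement described above can be sketched with an overdamped Langevin simulation. The sketch below uses a single self-propelled particle in a 1D ratchet potential as a stand-in for the ring in the 2D herringbone potential; the potential form and all parameter values are illustrative assumptions, not those of the paper:

```python
import math, random

# One overdamped active Brownian particle in an asymmetric periodic potential
# U(x) = U0*[sin(2*pi*x) + 0.25*sin(4*pi*x)]; the average drift velocity is
# the rectification observable. v0: self-propulsion speed, Dr/Dt: rotational
# and translational diffusion coefficients (all illustrative).

def force(x, U0=1.0):
    return -U0 * (2 * math.pi * math.cos(2 * math.pi * x)
                  + math.pi * math.cos(4 * math.pi * x))

def average_velocity(v0, Dr, Dt, dt=1e-3, n_steps=200_000, seed=1):
    rng = random.Random(seed)
    x, phi = 0.0, 0.0
    x0 = x
    for _ in range(n_steps):
        x += (v0 * math.cos(phi) + force(x)) * dt \
             + math.sqrt(2 * Dt * dt) * rng.gauss(0.0, 1.0)
        phi += math.sqrt(2 * Dr * dt) * rng.gauss(0.0, 1.0)
    return (x - x0) / (n_steps * dt)

v_avg = average_velocity(v0=3.0, Dr=0.5, Dt=0.01)
```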
International Nuclear Information System (INIS)
Yang, M.; Liu, F.; Smallwood, G.J.
2004-01-01
The Laser-Induced Incandescence (LII) technique has been widely used to measure soot volume fraction and primary particle size in flames and engine exhaust. Currently there is a lack of quantitative understanding of the shielding effect of aggregated soot particles on their conduction heat loss rate to the surrounding gas. The conventional approach to this problem is the Monte Carlo (MC) method, based on simulating the trajectories of individual molecules and calculating the heat transfer at each molecule/molecule and molecule/particle collision. As the first step toward calculating the heat transfer between a soot aggregate and the surrounding gas, the Direct Simulation Monte Carlo (DSMC) method was used in this study to calculate the heat transfer rate between a single spherical aerosol particle and its cooler surrounding gas under different conditions of temperature, pressure, and accommodation coefficient. A well-defined and simple hard-sphere model was adopted to describe molecule/molecule elastic collisions. A combination of the specular reflection and completely diffuse reflection models was used for molecule/particle collisions. The results obtained by DSMC are in good agreement with the known analytical solution for the heat transfer rate of an isolated, motionless sphere in the free-molecular regime. Further, the DSMC method was applied to calculate the heat transfer in the transition regime. Our present DSMC results agree very well with published DSMC data. (author)
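The free-molecular benchmark against which such DSMC results are checked can be written down in closed form. The sketch below uses the McCoy-Cha form commonly quoted in LII work; whether this exact form is the one used in the paper is an assumption, and the input values are illustrative:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

# Free-molecular conduction heat loss of a sphere to the surrounding gas:
# Q = (alpha/8) * pi * d^2 * p * c_bar * (gamma+1)/(gamma-1) * (Tp/Tg - 1),
# with c_bar the mean thermal speed of the gas molecules.

def q_free_molecular(d, p, Tp, Tg, alpha=1.0, gamma=1.4, m_gas=4.65e-26):
    """Heat loss rate [W] of a sphere of diameter d [m] at temperature Tp [K]
    in a gas at pressure p [Pa] and temperature Tg [K]; m_gas defaults to N2."""
    c_bar = math.sqrt(8.0 * k_B * Tg / (math.pi * m_gas))
    return (alpha / 8.0) * math.pi * d**2 * p * c_bar \
           * (gamma + 1.0) / (gamma - 1.0) * (Tp / Tg - 1.0)

# Illustrative soot primary particle in a flame environment:
q = q_free_molecular(d=30e-9, p=101325.0, Tp=3400.0, Tg=1700.0)
```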
Automating methods to improve precision in Monte-Carlo event generation for particle colliders
Energy Technology Data Exchange (ETDEWEB)
Gleisberg, Tanju
2008-07-01
The subject of this thesis was the development of tools for the automated calculation of exact matrix elements, which are key to the systematic improvement of precision and confidence of theoretical predictions. Part I of this thesis concentrates on the calculation of cross sections at tree level. A number of extensions have been implemented in the matrix element generator AMEGIC++, namely new interaction models such as effective loop-induced couplings of the Higgs boson with massless gauge bosons, required for a number of channels in the Higgs boson search at the LHC, and anomalous gauge couplings, parameterizing a number of models beyond the SM. Furthermore, a special treatment for complicated decay chains of heavy particles has been constructed. A significant effort went into the implementation of methods to push the limits on particle multiplicities. Two recursive methods have been implemented, the Cachazo-Svrcek-Witten recursion and the colour-dressed Berends-Giele recursion. For the latter the new module COMIX has been added to the SHERPA framework. The Monte-Carlo phase space integration techniques have been completely revised, which led to significantly reduced statistical error estimates when calculating cross sections and a greatly improved unweighting efficiency for the event generation. Special integration methods have been developed to cope with the newly accessible final states. The event generation framework SHERPA directly benefits from these new developments, improving both precision and efficiency. Part II addressed the automation of QCD calculations at next-to-leading order. A code has been developed that, for the first time, fully automates the real correction part of an NLO calculation. To calculate the correction for an m-parton process obeying the Catani-Seymour dipole subtraction method the following components are provided: 1. the corresponding m+1-parton tree level matrix elements, 2. a number of dipole subtraction terms to remove
Monte Carlo simulations of elemental imaging using the neutron-associated particle technique.
Abel, Michael R; Nie, Linda H
2018-02-06
The purpose of this study is to develop and employ a Monte Carlo (MC) simulation model of associated particle neutron elemental imaging (APNEI) in order to determine the three-dimensional (3D) imaging resolution of such a system by examining relevant physical and technological parameters, and to thereby begin to explore the range of clinical applicability of APNEI to fields such as medical diagnostics, intervention, and etiological research. The presented APNEI model was defined in MCNP by a Gaussian-distributed and isotropic surface source emitting deuterium + deuterium (DD) neutrons, iron as the target element, nine iron-containing voxels (1 cm{sup 3} volume each) arranged in a 3-by-3 array as the interrogated volume of interest, and finally, by high-purity germanium (HPGe) gamma-ray detectors anterior and posterior to the 9-voxel array. The MCNP f8 pulse height tally was employed in conjunction with the PTRAC particle tracking function to not only determine the signal acquired from iron inelastic scatter gamma-rays but also to quantitate each of the nine target voxels' contribution to the overall iron signal, each detected iron inelastic scatter gamma-ray being traced to the source neutron which incited its emission. With the spatial, vector, and timing information of the series of events for each relevant neutron history as collected by PTRAC, realistic grayscale images of the distribution of iron concentration in the 9-voxel array were simulated in both the projective and depth dimensions. With an overall 225 ps timing resolution, 6.25 mm{sup 2} imaging plate pixels assumed to have well-localized scintillation, and a Gaussian-distributed DD neutron source spot with a diameter of 2 mm, projective and depth imaging resolutions were characterized. The imaging resolution offered by APNEI for target elements such as iron lends itself to potential applications in disease diagnosis and treatment planning (high resolution) as well as to ordnance and contraband detection (low resolution). However
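The depth dimension in APNEI comes from neutron time-of-flight, so the quoted 225 ps timing resolution maps directly onto a depth resolution of order v{sub n}Δt. A back-of-envelope check (non-relativistic kinematics, which is adequate for a 2.45 MeV DD neutron; this is an illustrative estimate, not a figure from the paper):

```python
import math

M_N = 1.674927e-27     # neutron mass, kg
MEV = 1.602176634e-13  # J per MeV

def neutron_speed(E_mev):
    """Non-relativistic speed [m/s] of a neutron of kinetic energy E [MeV]."""
    return math.sqrt(2.0 * E_mev * MEV / M_N)

def depth_resolution(E_mev, dt_s):
    """Depth uncertainty [m] from timing resolution dt along the flight path."""
    return neutron_speed(E_mev) * dt_s

dz = depth_resolution(2.45, 225e-12)  # on the order of 5 mm
```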
Collective transport of Lennard–Jones particles through one-dimensional periodic potentials
International Nuclear Information System (INIS)
He Jian-hui; Wen Jia-le; Chen Pei-rong; Zheng Dong-qin; Zhong Wei-rong
2017-01-01
The surrounding media in which transport occurs contain various kinds of fields, such as particle potentials and external potentials. One of the important questions is how such elements act, and how position and momentum are redistributed, during diffusion under these conditions. To enrich Fick's law, ordinary non-equilibrium statistical physics can be used to understand this complex process. This study discusses particle transport in a one-dimensional channel under external potential fields. Two kinds of potentials, a well and a barrier, neither of which changes the total potential, are built into the diffusion process. The two one-dimensional periodic potentials produce quite distinct phenomena. By combining a Monte Carlo method with molecular dynamics, we carefully explore why an external potential field impacts transport, using a subsection and statistical method. In addition, the Maxwell velocity distribution is confirmed under the assumption of local equilibrium. The simple model is based on the key concept that relates the flux to sectional statistics of position and momentum, and could be referenced in similar transport problems. (rapid communication)
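The Monte Carlo half of such a combined scheme can be sketched with a Metropolis sampler for Lennard-Jones particles in a periodic external potential. The pair potential is the standard LJ form; the sinusoidal external potential, the 1D restriction, and all parameter values are illustrative assumptions, not the paper's setup:

```python
import math, random

# Metropolis Monte Carlo for Lennard-Jones particles on a 1D line with a
# periodic external potential A*sin(2*pi*x/L) (a well/barrier pattern whose
# spatial average is zero, echoing "do not change the potential in total").

def lj(r, eps=1.0, sigma=1.0):
    s6 = (sigma / r) ** 6
    return 4.0 * eps * (s6 * s6 - s6)

def external(x, A=0.5, L=2.0):
    return A * math.sin(2.0 * math.pi * x / L)

def total_energy(xs):
    e = sum(external(x) for x in xs)
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            e += lj(abs(xs[i] - xs[j]))
    return e

def metropolis(xs, beta=1.0, step=0.1, n_sweeps=200, seed=0):
    rng = random.Random(seed)
    e = total_energy(xs)
    for _ in range(n_sweeps * len(xs)):
        i = rng.randrange(len(xs))
        old = xs[i]
        xs[i] = old + rng.uniform(-step, step)
        e_new = total_energy(xs)
        if e_new < e or rng.random() < math.exp(-beta * (e_new - e)):
            e = e_new          # accept the trial move
        else:
            xs[i] = old        # reject and restore
    return xs, e

xs, e = metropolis([1.0 * i for i in range(1, 6)])
```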
Computational methods for two-phase flow and particle transport
Lee, Wen Ho
2013-01-01
This book describes mathematical formulations and computational methods for solving two-phase flow problems with a computer code that calculates thermal hydraulic problems related to light water and fast breeder reactors. The physical model also handles the particle and gas flow problems that arise from coal gasification and fluidized beds. The second part of this book deals with the computational methods for particle transport.
Interactive design environment for transport channels of relativistic charged particle beams
Osadchuk, I. O.; Averyanov, G. P.; Budkin, V. A.
2017-01-01
A modern implementation of a computer environment for designing transport channels for high-energy charged-particle beams is considered. The environment includes a software package for simulating the dynamics of charged particles in the channel, tools for changing channel parameters, channel-optimization elements, and processing of the output beam characteristics with graphical display of the main output parameters.
Space applications of the MITS electron-photon Monte Carlo transport code system
International Nuclear Information System (INIS)
Kensek, R.P.; Lorence, L.J.; Halbleib, J.A.; Morel, J.E.
1996-01-01
The MITS multigroup/continuous-energy electron-photon Monte Carlo transport code system has matured to the point that it is capable of addressing more realistic three-dimensional adjoint applications. It is first employed to efficiently predict point doses as a function of source energy for simple three-dimensional experimental geometries exposed to simulated uniform isotropic planar sources of monoenergetic electrons up to 4.0 MeV. Results are in very good agreement with experimental data. It is then used to efficiently simulate dose to a detector in a subsystem of a GPS satellite due to its natural electron environment, employing a relatively complex model of the satellite. The capability for survivability analysis of space systems is demonstrated, and results are obtained with and without variance reduction.
TOPIC: a debugging code for torus geometry input data of Monte Carlo transport code
International Nuclear Information System (INIS)
Iida, Hiromasa; Kawasaki, Hiromitsu.
1979-06-01
TOPIC has been developed for debugging the geometry input data of the Monte Carlo transport code. The code has the following features: (1) It debugs the geometry input data of not only MORSE-GG but also MORSE-I, which is capable of treating torus geometry. (2) Its calculation results are shown in figures drawn by plotter or COM, so that regions that are undefined or doubly defined are easily detected. (3) It finds a multitude of input data errors in a single run. (4) The input data required by this code are few, so that it is readily usable in the time-sharing system of the FACOM 230-60/75 computer. Example TOPIC calculations in design studies of tokamak fusion reactors (JXFR, INTOR-J) are presented. (author)
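The core check TOPIC performs, finding regions that are undefined or doubly defined, can be sketched by sampling points over the model and counting how many region definitions claim each point. The toy annular regions below (with a deliberate overlap) stand in for torus cells; they are not MORSE geometry cards:

```python
# Sample a grid of points and flag locations claimed by no region
# (undefined) or by more than one region (doubly defined).

def make_annulus(r_in, r_out):
    return lambda x, y: r_in**2 <= x * x + y * y < r_out**2

regions = {
    "core":    make_annulus(0.0, 1.0),
    "blanket": make_annulus(1.0, 2.0),
    "shield":  make_annulus(1.5, 2.5),   # deliberately overlaps "blanket"
}

def debug_geometry(regions, span=3.0, n=60):
    undefined, doubly = [], []
    for i in range(n):
        for j in range(n):
            x = -span + 2 * span * i / (n - 1)
            y = -span + 2 * span * j / (n - 1)
            hits = [name for name, inside in regions.items() if inside(x, y)]
            if not hits:
                undefined.append((x, y))
            elif len(hits) > 1:
                doubly.append((x, y, tuple(hits)))
    return undefined, doubly

undef, doubly = debug_geometry(regions)
```

Plotting the two point lists (as TOPIC does on a plotter) makes the faulty zones visible at a glance.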
CAD-Based Monte Carlo Neutron Transport Analysis for KSTAR
Seo, Geon Ho; Choi, Sung Hoon; Shim, Hyung Jin
2017-09-01
The Monte Carlo (MC) neutron transport analysis for a complex nuclear system such as a fusion facility may require accurate modeling of its complicated geometry. In order to take advantage of the modeling capability of the computer-aided design (CAD) system for the MC neutronics analysis, the Seoul National University MC code, McCARD, has been augmented with a CAD-based geometry processing module by embedding the OpenCASCADE CAD kernel. In the developed module, the CAD geometry data are internally converted to a constructive solid geometry model with the help of the CAD kernel. An efficient cell-searching algorithm is devised for the void space treatment. The performance of the CAD-based McCARD calculations is tested for the Korea Superconducting Tokamak Advanced Research device by comparing with results of the conventional MC calculations using a text-based geometry input.
Monte Carlo photon transport on shared memory and distributed memory parallel processors
International Nuclear Information System (INIS)
Martin, W.R.; Wan, T.C.; Abdel-Rahman, T.S.; Mudge, T.N.; Miura, K.
1987-01-01
Parallelized Monte Carlo algorithms for analyzing photon transport in an inertially confined fusion (ICF) plasma are considered. Algorithms were developed for shared memory (vector and scalar) and distributed memory (scalar) parallel processors. The shared memory algorithm was implemented on the IBM 3090/400, and timing results are presented for dedicated runs with two, three, and four processors. Two alternative distributed memory algorithms (replication and dispatching) were implemented on a hypercube parallel processor (1 through 64 nodes). The replication algorithm yields essentially full efficiency for all cube sizes; with the 64-node configuration, the absolute performance is nearly the same as with the CRAY X-MP. The dispatching algorithm also yields efficiencies above 80% in a large simulation for the 64-processor configuration.
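The replication strategy is easy to see in miniature: every processor runs the full problem with an independent random stream, and the per-processor tallies are simply averaged, which is why it scales so well. The sketch below uses `multiprocessing` and a toy 1D slab transmission tally rather than ICF photon physics; all parameters are illustrative:

```python
import random
from multiprocessing import Pool

# Replication parallelism: each worker runs the complete simulation with its
# own seed; the final answer is the average of the independent tallies.

def replica(args):
    seed, n_particles = args
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_particles):
        # toy slab: a particle must survive 5 cells, each with 0.8 probability
        if all(rng.random() < 0.8 for _ in range(5)):
            transmitted += 1
    return transmitted / n_particles

def replication_run(n_procs=4, n_particles=20_000):
    with Pool(n_procs) as pool:
        tallies = pool.map(replica, [(s, n_particles) for s in range(n_procs)])
    return sum(tallies) / len(tallies)

if __name__ == "__main__":
    estimate = replication_run()  # near the analytic value 0.8**5 = 0.32768
```

There is almost no communication (one reduction at the end), so efficiency stays near 100% as long as each replica's workload is large; dispatching instead farms individual particles out from a master, trading communication overhead for better load balance.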
GPU-based high performance Monte Carlo simulation in neutron transport
International Nuclear Information System (INIS)
Heimlich, Adino; Mol, Antonio C.A.; Pereira, Claudio M.N.A.
2009-01-01
Graphics Processing Units (GPU) are high performance co-processors intended, originally, to improve the use and quality of computer graphics applications. Since researchers and practitioners realized the potential of using GPU for general purpose, their application has been extended to other fields out of computer graphics scope. The main objective of this work is to evaluate the impact of using GPU in neutron transport simulation by Monte Carlo method. To accomplish that, GPU- and CPU-based (single and multicore) approaches were developed and applied to a simple, but time-consuming problem. Comparisons demonstrated that the GPU-based approach is about 15 times faster than a parallel 8-core CPU-based approach also developed in this work. (author)
Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes
International Nuclear Information System (INIS)
Smith, L.M.; Hochstedler, R.D.
1997-01-01
Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).
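The first of the listed techniques, replacing a linear scan with a binary search, typically targets the energy-grid lookup that every collision performs. A sketch of the before/after pair (the grid below is illustrative, not an ITS cross-section table):

```python
import bisect

# Replace an O(n) linear scan of an ascending energy grid with an O(log n)
# binary search; both return the index i with grid[i] <= e < grid[i+1].

energy_grid = [1e-5 * 1.1**i for i in range(300)]  # ascending energies

def find_bin_linear(e):
    for i in range(len(energy_grid) - 1):
        if energy_grid[i] <= e < energy_grid[i + 1]:
            return i
    raise ValueError("energy outside grid")

def find_bin_binary(e):
    i = bisect.bisect_right(energy_grid, e) - 1
    if 0 <= i < len(energy_grid) - 1:
        return i
    raise ValueError("energy outside grid")

assert find_bin_linear(1.0) == find_bin_binary(1.0)
```

Because the lookup sits inside the innermost tracking loop, this single change can account for a large share of the reported speed-up.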
Yu, Leiming; Nina-Paravecino, Fanny; Kaeli, David; Fang, Qianqian
2018-01-01
We present a highly scalable Monte Carlo (MC) three-dimensional photon transport simulation platform designed for heterogeneous computing systems. Through the development of a massively parallel MC algorithm using the Open Computing Language framework, this research extends our existing graphics processing unit (GPU)-accelerated MC technique to a highly scalable vendor-independent heterogeneous computing environment, achieving significantly improved performance and software portability. A number of parallel computing techniques are investigated to achieve portable performance over a wide range of computing hardware. Furthermore, multiple thread-level and device-level load-balancing strategies are developed to obtain efficient simulations using multiple central processing units and GPUs.
Parallelizing an electron transport Monte Carlo simulator (MOCASIN 2.0)
International Nuclear Information System (INIS)
Schwetman, H.; Burdick, S.
1988-01-01
Electron transport simulators are tools for studying electrical properties of semiconducting materials and devices. As demands for modeling more complex devices and new materials have emerged, so have demands for more processing power. This paper documents a project to convert an electron transport simulator (MOCASIN 2.0) to a parallel processing environment. In addition to describing the conversion, the paper presents PPL, a parallel programming version of C running on a Sequent multiprocessor system. In timing tests, models that simulated the movement of 2,000 particles for 100 time steps were executed on ten processors, with a parallel efficiency of over 97%.
Particle Acceleration and Fractional Transport in Turbulent Reconnection
Isliker, Heinz; Pisokas, Theophilos; Vlahos, Loukas; Anastasiadis, Anastasios
2017-11-01
We consider a large-scale environment of turbulent reconnection that is fragmented into a number of randomly distributed unstable current sheets (UCSs), and we statistically analyze the acceleration of particles within this environment. We address two important cases of acceleration mechanisms when particles interact with the UCS: (a) electric field acceleration and (b) acceleration by reflection at contracting islands. Electrons and ions are accelerated very efficiently, attaining an energy distribution of power-law shape with an index 1-2, depending on the acceleration mechanism. The transport coefficients in energy space are estimated from test-particle simulation data, and we show that the classical Fokker-Planck (FP) equation fails to reproduce the simulation results when the transport coefficients are inserted into it and it is solved numerically. The cause for this failure is that the particles perform Levy flights in energy space, while the distributions of the energy increments exhibit power-law tails. We then use the fractional transport equation (FTE) derived by Isliker et al., whose parameters and the order of the fractional derivatives are inferred from the simulation data, and solving the FTE numerically, we show that the FTE successfully reproduces the kinetic energy distribution of the test particles. We discuss in detail the analysis of the simulation data and the criteria that allow one to judge the appropriateness of either an FTE or a classical FP equation as a transport model.
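The paper's central diagnostic, that energy increments have power-law tails and so produce Levy flights a Gaussian (Fokker-Planck) picture cannot capture, can be illustrated with two toy random walks in energy space. The exponents and scales below are illustrative, not fitted to the simulations:

```python
import random

# Compare a walk with power-law (Levy-like) increments against one with
# Gaussian increments of the same mean step size: the heavy tail produces
# occasional huge jumps that dominate the extremes of the distribution.

def pareto_step(rng, alpha=1.5, w_min=0.01):
    """P(w) ~ w^-(1+alpha) for w >= w_min, via inverse-CDF sampling."""
    u = 1.0 - rng.random()            # u in (0, 1]
    return w_min * u ** (-1.0 / alpha)

def walk(step_fn, n_steps, rng):
    w = 1.0                           # initial "energy"
    for _ in range(n_steps):
        w += step_fn(rng)
    return w

rng = random.Random(42)
levy_final = [walk(pareto_step, 1000, rng) for _ in range(200)]
gauss_final = [walk(lambda r: abs(r.gauss(0.0, 0.03)), 1000, rng)
               for _ in range(200)]
# The mean step is ~0.03 in both cases, but the Levy ensemble's maximum
# endpoint far exceeds the Gaussian one: that excess is what a fractional
# transport equation models and a classical FP equation misses.
```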
Particle swarm optimization - Genetic algorithm (PSOGA) on linear transportation problem
Rahmalia, Dinita
2017-08-01
Linear Transportation Problem (LTP) is the case of constrained optimization where we want to minimize cost subject to the balance between total supply and total demand. Exact methods such as the northwest corner, Vogel, Russell, and minimal cost methods have been applied to approach the optimal solution. In this paper, we use a heuristic, Particle Swarm Optimization (PSO), to solve the linear transportation problem for any size of decision variable. In addition, we combine the mutation operator of the Genetic Algorithm (GA) with PSO to improve the optimal solution. This method is called Particle Swarm Optimization - Genetic Algorithm (PSOGA). The simulations show that PSOGA can improve the optimal solution produced by PSO.
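A hedged sketch of the PSOGA idea on a tiny transportation instance: continuous PSO over the shipment matrix, a GA-style mutation kick, and supply/demand balance enforced through a quadratic penalty. The instance, penalty weight, swarm size, and PSO coefficients are all illustrative assumptions, not the paper's settings:

```python
import random

COST = [[4.0, 6.0, 8.0], [5.0, 3.0, 7.0]]  # 2 supplies x 3 demands
SUPPLY = [30.0, 40.0]
DEMAND = [20.0, 25.0, 25.0]                # balanced: totals are both 70

def fitness(x):
    """Transportation cost plus quadratic penalties on constraint violation."""
    cost = sum(COST[i][j] * max(x[3*i + j], 0.0)
               for i in range(2) for j in range(3))
    pen = sum(max(-v, 0.0) ** 2 for v in x)  # nonnegative shipments
    pen += sum((sum(x[3*i + j] for j in range(3)) - SUPPLY[i]) ** 2
               for i in range(2))
    pen += sum((sum(x[3*i + j] for i in range(2)) - DEMAND[j]) ** 2
               for j in range(3))
    return cost + 10.0 * pen

def psoga(n_particles=30, n_iters=300, seed=3):
    rng = random.Random(seed)
    dim = 6
    xs = [[rng.uniform(0, 30) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]
    gbest = list(min(xs, key=fitness))
    for _ in range(n_iters):
        for k in range(n_particles):
            for d in range(dim):
                vs[k][d] = (0.7 * vs[k][d]
                            + 1.5 * rng.random() * (pbest[k][d] - xs[k][d])
                            + 1.5 * rng.random() * (gbest[d] - xs[k][d]))
                xs[k][d] += vs[k][d]
                if rng.random() < 0.05:       # GA mutation operator
                    xs[k][d] += rng.gauss(0.0, 1.0)
            if fitness(xs[k]) < fitness(pbest[k]):
                pbest[k] = list(xs[k])
        gbest = list(min(pbest, key=fitness))
    return gbest, fitness(gbest)

best, best_cost = psoga()
```

The mutation term is the "GA" part: it occasionally kicks a coordinate out of a stagnating swarm, which is the mechanism the paper credits for improving on plain PSO.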
Charged particles transport in one-dimensional finite systems
International Nuclear Information System (INIS)
Muthukrishnan, G.; Santhanam, K.; Gopinath, D.V.
1977-01-01
A semi-analytical technique for charged particle transport in one-dimensional finite media is developed which can be applied to multi-energy, multi-region systems with an arbitrary degree of anisotropy in scattering. For this purpose the transport equation is cast in the form of coupled integral equations separating spatial and energy-angle transmission. The spatial transmission is evaluated using a discrete ordinate representation in space, energy, and direction cosine for the particle source and flux. The collision integral is evaluated using a discrete ordinate representation in energy and a Legendre polynomial approximation in the direction cosine. A computer code based on the above formulation is described.
ITS, TIGER System of Coupled Electron Photon Transport by Monte-Carlo
International Nuclear Information System (INIS)
Halbleib, J.A.; Mehlhorn, T.A.; Young, M.F.
1996-01-01
1 - Description of program or function: ITS permits a state-of-the-art Monte Carlo solution of linear time-integrated coupled electron/photon radiation transport problems with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. 2 - Method of solution: Through a machine-portable utility that emulates the basic features of the CDC UPDATE processor, the user selects one of eight codes for running on a machine of one of four (at least) major vendors. With the ITS-3.0 release the PSR-0245/UPEML package is included to perform these functions. The ease with which this utility is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is maximized by employing the best available cross sections and sampling distributions, and the most complete physical model for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. Flexibility of construction permits the codes to be tailored to specific applications and the capabilities of the codes to be extended to more complex applications through update procedures. 3 - Restrictions on the complexity of the problem: - Restrictions and/or limitations for ITS depend upon the local operating system
Srna-Monte Carlo codes for proton transport simulation in combined and voxelized geometries
Ilic, R D; Stankovic, S J
2002-01-01
This paper describes new Monte Carlo codes for proton transport simulations in complex geometrical forms and in materials of different composition. The SRNA codes were developed for three-dimensional (3D) dose distribution calculation in proton therapy and dosimetry. The model underlying these codes is based on the theory of proton multiple scattering and a simple model of compound nucleus decay. The developed package consists of two codes: SRNA-2KG and SRNA-VOX. The first code simulates proton transport in combined geometry that can be described by planes and second-order surfaces. The second uses the voxelized geometry of material zones and is specifically adapted for use with patient computed tomography data. Transition probabilities for both codes are supplied by the SRNADAT program. In this paper, we present the models and algorithms of our programs, as well as the results of the numerical experiments we have carried out applying them, along with the results of proton transport simulation obtaine...
ITS Version 6 : the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.
Energy Technology Data Exchange (ETDEWEB)
Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William
2008-04-01
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes, and (2) conversion to Fortran 90. The general user-friendliness of the software has been enhanced through memory allocation to reduce the need for users to modify and recompile the code.
Transport Phenomena of Solid Particles in Pulsatile Pipe Flow
Directory of Open Access Journals (Sweden)
Hitoshi Fujimoto
2010-01-01
The transportation mechanism of single solid particles in pulsating water flow in a vertical pipe was investigated by means of videography and numerical simulations. The trajectories of alumina particles were observed experimentally by stereo videography. The particle diameter was 3 mm or 5 mm, and the pipe diameter was 18 mm or 22 mm. The frequency of flow pulsation was less than or equal to 6.67 Hz. It was found that the critical minimum water flux at which the particle can be transported upward depended on the pulsating pattern. Two types of numerical simulations were conducted, namely, one-dimensional simulations for tracking the vertical motion of the solid particles and two-dimensional simulations of the pulsating pipe flows in an axisymmetric coordinate system. The computer simulations of axisymmetric pipe flows revealed that the time-averaged radial velocity profile of water in the pulsating flows was very different from that in steady pipe flows. The motion of the particles is discussed in detail for a better understanding of the physics of the transport phenomena.
Solar energetic particles: Acceleration and transport
Cliver, Edward W.
2000-06-01
This paper reviews highlights of the 26th ICRC in the area of acceleration and propagation of solar energetic particles (SEPs). New results on SEP charge state and composition, a lively topic during the Conference, are covered in an accompanying paper by Klecker. I begin with a brief historical review of the field to provide context for the key advances/developments on SEP acceleration/propagation presented in Salt Lake City. These include: (1) the use of gamma-ray emissions as diagnostics of the acceleration process(es) and probes of the interaction region; (2) the observation of ~10 GeV (or higher) protons for the 6 November 1997 ground level event by the Milagrito experiment; (3) observations of coronal Moreton waves as ``smoking pistols'' of shock acceleration/injection of SEPs; (4) an investigation of the role of proton event spectra in the current ``two-class'' picture of SEP events; (5) an analysis of the Gnevyshev Gap in SEP activity; (6) a Ulysses-based determination of the dependence of SEP mean free path on radial distance from the Sun and on heliographic latitude; and (7) an examination of the dissipation range in the power spectrum of interplanetary magnetic field fluctuations. I conclude with a discussion of new instrumentation (e.g., Milagro, HESSI) and a look ahead to the expected level of SEP activity for the approaching maximum of solar cycle 23.
Particle Tracking Model and Abstraction of Transport Processes
International Nuclear Information System (INIS)
Robinson, B.
2000-01-01
The purpose of the transport methodology and component analysis is to provide the numerical methods for simulating radionuclide transport and model setup for transport in the unsaturated zone (UZ) site-scale model. The particle-tracking method of simulating radionuclide transport is incorporated into the FEHM computer code, and the resulting changes in the FEHM code are to be submitted to the software configuration management system. This Analysis and Model Report (AMR) outlines the assumptions, design, and testing of a model for calculating radionuclide transport in the unsaturated zone at Yucca Mountain. In addition, methods for determining colloid-facilitated transport parameters are outlined for use in the Total System Performance Assessment (TSPA) analyses. Concurrently, process-level flow model calculations are being carried out in a PMR for the unsaturated zone. The computer code TOUGH2 is being used to generate three-dimensional, dual-permeability flow fields that are supplied to the Performance Assessment group for subsequent transport simulations. These flow fields are converted to input files compatible with the FEHM code, which for this application simulates radionuclide transport using the particle-tracking algorithm outlined in this AMR. Therefore, this AMR establishes the numerical method and demonstrates the use of the model, but the specific breakthrough curves presented do not necessarily represent the behavior of the Yucca Mountain unsaturated zone.
Energy Technology Data Exchange (ETDEWEB)
Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-16
This is a PowerPoint presentation which serves as lecture material for the Parallel Computing summer school. It covers the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background; a simple example: estimating π), Why does this even work? (the Law of Large Numbers, the Central Limit Theorem), How to sample (inverse transform sampling, rejection sampling), and An example from particle transport.
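The two ideas highlighted in the outline, the simple π example and the Central Limit Theorem error estimate, fit in a few lines of Python (a minimal sketch, not the slide material itself):

```python
import random

def estimate_pi(n, seed=0):
    """Hit-or-miss Monte Carlo: the fraction of random points in the unit
    square that fall inside the quarter circle, times 4, estimates pi."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.random()**2 + rng.random()**2 <= 1.0)
    p = hits / n                            # sample mean of the indicator
    est = 4.0 * p
    # CLT: one-sigma statistical error of the estimate
    err = 4.0 * (p * (1.0 - p) / n) ** 0.5
    return est, err

est, err = estimate_pi(100_000)
print(f"pi = {est:.3f} +/- {err:.3f}")
```

The error estimate shrinks as 1/sqrt(n), which is the Central Limit Theorem behavior the lecture refers to.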
International Nuclear Information System (INIS)
Baumann, K; Weber, U; Simeonov, Y; Zink, K
2015-01-01
Purpose: The aim of this study was to optimize the magnetic field strengths of two quadrupole magnets in a particle therapy facility in order to obtain a beam quality suitable for spot beam scanning. Methods: The particle transport through an ion-optic system of a particle therapy facility, consisting of the beam tube, two quadrupole magnets and a beam monitor system, was calculated in Matlab by using matrices that solve the equation of motion of a charged particle in a magnetic field and in a field-free region, respectively. The magnetic field strengths were optimized in order to obtain a circular and thin beam spot at the iso-center of the therapy facility. These optimized field strengths were subsequently transferred to the Monte Carlo code FLUKA, and the transport of 80 MeV/u C-12 ions through this ion-optic system was calculated by using a user routine to implement the magnetic fields. The fluence along the beam axis and at the iso-center was evaluated. Results: The magnetic field strengths could be optimized by using Matlab and transferred to the Monte Carlo code FLUKA. The implementation via a user routine was successful. Analysis of the fluence pattern along the beam axis reproduced the characteristic focusing and de-focusing effects of the quadrupole magnets. Furthermore, the beam spot at the iso-center was circular and significantly thinner compared to an unfocused beam. Conclusion: In this study a Matlab tool was developed to optimize magnetic field strengths for an ion-optic system consisting of two quadrupole magnets as part of a particle therapy facility. These magnetic field strengths could subsequently be transferred to and implemented in the Monte Carlo code FLUKA to simulate the particle transport through this optimized ion-optic system.
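The matrix treatment of the equation of motion can be illustrated with standard first-order transfer matrices for drifts and thick quadrupoles in one transverse plane. The field strengths and lengths below are illustrative, not the facility's optimized values:

```python
import numpy as np

def drift(L):
    """Field-free region of length L (meters)."""
    return np.array([[1.0, L], [0.0, 1.0]])

def quad_focus(k, L):
    """Thick focusing quadrupole in one transverse plane (k in 1/m^2)."""
    w = np.sqrt(k)
    return np.array([[np.cos(w * L),      np.sin(w * L) / w],
                     [-w * np.sin(w * L), np.cos(w * L)]])

def quad_defocus(k, L):
    """Defocusing plane of the same quadrupole."""
    w = np.sqrt(k)
    return np.array([[np.cosh(w * L),    np.sinh(w * L) / w],
                     [w * np.sinh(w * L), np.cosh(w * L)]])

# Illustrative doublet; transfer matrices multiply in reverse beamline order
M_x = drift(1.0) @ quad_focus(4.0, 0.2) @ drift(0.5) @ quad_defocus(4.0, 0.2) @ drift(0.5)
x0 = np.array([1e-3, 0.0])   # initial offset 1 mm, zero divergence
print(M_x @ x0)
```

A quick sanity check on such matrices is that each one (and any product of them) has unit determinant, reflecting phase-space area conservation.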
Transient fluctuation relations for time-dependent particle transport
Altland, Alexander; de Martino, Alessandro; Egger, Reinhold; Narozhny, Boris
2010-09-01
We consider particle transport under the influence of time-varying driving forces, where fluctuation relations connect the statistics of pairs of time-reversed evolutions of physical observables. In many “mesoscopic” transport processes, the effective many-particle dynamics is dominantly classical while the microscopic rates governing particle motion are of quantum-mechanical origin. We here employ the stochastic path-integral approach as an optimal tool to probe the fluctuation statistics in such applications. Describing the classical limit of the Keldysh quantum nonequilibrium field theory, the stochastic path integral encapsulates the quantum origin of microscopic particle exchange rates. Dynamically, it is equivalent to a transport master equation which is a formalism general enough to describe many applications of practical interest. We apply the stochastic path integral to derive general functional fluctuation relations for current flow induced by time-varying forces. We show that the successive measurement processes implied by this setup do not put the derivation of quantum fluctuation relations in jeopardy. While in many cases the fluctuation relation for a full time-dependent current profile may contain excessive information, we formulate a number of reduced relations, and demonstrate their application to mesoscopic transport. Examples include the distribution of transmitted charge, where we show that the derivation of a fluctuation relation requires the combined monitoring of the statistics of charge and work.
Ballarini, F.; Biaggi, M.; De Biaggi, L.; Ferrari, A.; Ottolenghi, A.; Panzarasa, A.; Paretzke, H. G.; Pelliccioni, M.; Sala, P.; Scannicchio, D.; Zankl, M.
2004-01-01
Distributions of absorbed dose and DNA clustered damage yields in various organs and tissues following the October 1989 solar particle event (SPE) were calculated by coupling the FLUKA Monte Carlo transport code with two anthropomorphic phantoms (a mathematical model and a voxel model), with the main aim of quantifying the role of the shielding features in modulating organ doses. The phantoms, which were assumed to be in deep space, were inserted into a shielding box of variable thickness and material and were irradiated with the proton spectra of the October 1989 event. Average numbers of DNA lesions per cell in different organs were calculated by adopting a technique already tested in previous works, consisting of integrating into "condensed-history" Monte Carlo transport codes - such as FLUKA - yields of radiobiological damage, either calculated with "event-by-event" track structure simulations, or taken from experimental works available in the literature. More specifically, the yields of "Complex Lesions" (or "CL", defined and calculated as clustered DNA damage in a previous work) per unit dose and DNA mass (CL Gy⁻¹ Da⁻¹) due to the various beam components, including those derived from nuclear interactions with the shielding and the human body, were integrated in FLUKA. This provided spatial distributions of CL/cell yields in different organs, as well as distributions of absorbed doses. The contributions of primary protons and secondary hadrons were calculated separately, and the simulations were repeated for values of Al shielding thickness ranging between 1 and 20 g/cm². Slight differences were found between the two phantom types. Skin and eye lenses were found to receive larger doses with respect to internal organs; however, shielding was more effective for skin and lenses. Secondary particles arising from nuclear interactions were found to have a minor role, although their relative contribution was found to be larger for the Complex Lesions than for
Yang, Y. M.; Bednarz, B.
2013-02-01
Following the proposal by several groups to integrate magnetic resonance imaging (MRI) with radiation therapy, much attention has been afforded to examining the impact of strong (on the order of a tesla) transverse magnetic fields on photon dose distributions. The effect of the magnetic field on dose distributions must be considered in order to take full advantage of the benefits of real-time intra-fraction imaging. In this investigation, we compared the handling of particle transport in magnetic fields between two Monte Carlo codes, EGSnrc and Geant4, to analyze various aspects of their electromagnetic transport algorithms; both codes are well-benchmarked for medical physics applications in the absence of magnetic fields. A water-air-water slab phantom and a water-lung-water slab phantom were used to highlight dose perturbations near high- and low-density interfaces. We have implemented a method of calculating the Lorentz force in EGSnrc based on theoretical models in the literature, and show very good consistency between the two Monte Carlo codes. This investigation further demonstrates the importance of accurate dosimetry for MRI-guided radiation therapy (MRIgRT), and facilitates the integration of a ViewRay MRIgRT system in the University of Wisconsin-Madison's Radiation Oncology Department.
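A common way to advance a charged particle in a magnetic field inside a Monte Carlo step is the Boris rotation, which conserves the particle speed exactly when no electric field is present. This is a generic sketch, not the actual EGSnrc or Geant4 implementation:

```python
import numpy as np

def boris_step(v, B, q_over_m, dt):
    """One Boris velocity rotation in a magnetic field B (with E = 0).
    The rotation exactly preserves |v|, making the scheme stable over
    many gyro-periods."""
    t = 0.5 * q_over_m * dt * B
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v + np.cross(v, t)
    return v + np.cross(v_prime, s)

# Electron-like particle gyrating in a 1 T field along z
q_over_m = -1.758820e11              # C/kg, electron charge-to-mass ratio
B = np.array([0.0, 0.0, 1.0])        # tesla
v = np.array([1.0e7, 0.0, 0.0])      # m/s
dt = 1.0e-13                         # s
for _ in range(1000):
    v = boris_step(v, B, q_over_m, dt)
print(np.linalg.norm(v))             # |v| is preserved by the rotation
```

Checking that the speed is unchanged after many steps is a standard verification for this kind of field-transport routine.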
A concurrent vector-based steering framework for particle transport
Apostolakis, John; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro
2014-01-01
High Energy Physics has traditionally been a technology-limited science that has pushed the boundaries of both the detectors collecting the information about the particles and the computing infrastructure processing this information. In recent years, however, the increase in computing power has come in the form of increased parallelism at all levels, and High Energy Physics now has to optimise its code to take advantage of the new architectures, including GPUs and hybrid systems. One of the primary targets for optimisation is the particle transport code used to simulate the detector response, as it is largely experiment-independent and one of the most demanding applications in terms of CPU resources. The Geant Vector Prototype project aims to explore innovative designs in particle transport aimed at obtaining maximal performance on the new architectures. This paper describes the current status of the project and its future perspectives. In particular we describe how the present design tries to expose the par...
Yamada, Toshishige
The transport properties of a lateral surface superlattice, a two-dimensional (2D) electron system with a superposed 2D periodic potential, are studied with a molecular dynamics Monte Carlo technique. Excellent numerical energy conservation is achieved by adopting a predictor-corrector algorithm to integrate the equations of motion. With increasing 2D potential amplitude, electrons show a transition from a mobile phase to an immobile phase where the radial distribution function has characteristic peaks, indicating the beginning of the long-range ordering of the electrons in the potential minima. The velocity autocorrelation function shows a 2D plasma oscillation in the mobile phase, while in the immobile phase the classical oscillation at the bottom of the potential well is observed. Raising the temperature improves transport, since electrons are released from the constraints of the 2D potential and the Coulomb potential. The conductance as a function of the magnetic field is not a simple decreasing function but has a structure with several local conductance minima. This structure is attributed to correlated circular electron motion, and is reminiscent of the classical pinning orbits in the pinball-machine model for a 2D antidot array.
International Nuclear Information System (INIS)
Karlsson, J.K.H.; Linden, P.
1997-01-01
The neutron transport in a bare, cylindrical and homogeneous reactor, with and without the presence of a central, partially inserted control rod, has been simulated by using a Monte Carlo transport code. The behaviour of both the flux and the current in this system has been investigated. We have found that the flux, and especially the current, are strongly affected by the presence of the control rod in its close vicinity. The results indicate the feasibility of identifying the position, and especially the tip, of the rod from the flux and current. Further, the direction to the rod can be found from the current vector. The information content regarding the position of the rod, in both the neutron flux and the current, decays strongly as a function of distance and is dependent on the size of the rod. In our model, the practical range over which the flux or current can be a useful indicator of the position of the tip of the rod is about 10-15 cm for a rod with a diameter of 2 cm. The practical range for identification of the position of the rod is greater for a rod of larger diameter.
Penelope - a code system for Monte Carlo simulation of electron and photon transport
International Nuclear Information System (INIS)
2003-01-01
Radiation is used in many applications of modern technology. Its proper handling requires competent knowledge of the basic physical laws governing its interaction with matter. To ensure its safe use, appropriate tools for predicting radiation fields and doses, as well as pertinent regulations, are required. One area of radiation physics that has received much attention concerns electron-photon transport in matter. PENELOPE is a modern, general-purpose Monte Carlo tool for simulating the transport of electrons and photons, which is applicable to arbitrary materials and over a wide energy range. PENELOPE provides quantitative guidance for many practical situations and techniques, including electron and X-ray spectroscopies, electron microscopy and microanalysis, biophysics, dosimetry, medical diagnostics and radiotherapy, as well as radiation damage and shielding. These proceedings contain the extensively revised teaching notes of the second workshop/training course on PENELOPE held in 2003, along with a detailed description of the improved physics models, numerical algorithms and structure of the code system. (author)
Full-dispersion Monte Carlo simulation of phonon transport in micron-sized graphene nanoribbons
Energy Technology Data Exchange (ETDEWEB)
Mei, S., E-mail: smei4@wisc.edu; Knezevic, I., E-mail: knezevic@engr.wisc.edu [Department of Electrical and Computer Engineering, University of Wisconsin-Madison, Madison, Wisconsin 53706 (United States); Maurer, L. N. [Department of Physics, University of Wisconsin-Madison, Madison, Wisconsin 53706 (United States); Aksamija, Z. [Department of Electrical and Computer Engineering, University of Massachusetts-Amherst, Amherst, Massachusetts 01003 (United States)
2014-10-28
We simulate phonon transport in suspended graphene nanoribbons (GNRs) with real-space edges and experimentally relevant widths and lengths (from submicron to hundreds of microns). The full-dispersion phonon Monte Carlo simulation technique, which we describe in detail, involves a stochastic solution to the phonon Boltzmann transport equation with the relevant scattering mechanisms (edge, three-phonon, isotope, and grain boundary scattering) while accounting for the dispersion of all three acoustic phonon branches, calculated from the fourth-nearest-neighbor dynamical matrix. We accurately reproduce the results of several experimental measurements on pure and isotopically modified samples [S. Chen et al., ACS Nano 5, 321 (2011); S. Chen et al., Nature Mater. 11, 203 (2012); X. Xu et al., Nat. Commun. 5, 3689 (2014)]. We capture the ballistic-to-diffusive crossover in wide GNRs: room-temperature thermal conductivity increases with increasing length up to roughly 100 μm, where it saturates at a value of 5800 W/m K. This finding indicates that most experiments are carried out in the quasiballistic rather than the diffusive regime, and we calculate the diffusive upper-limit thermal conductivities up to 600 K. Furthermore, we demonstrate that calculations with isotropic dispersions overestimate the GNR thermal conductivity. Zigzag GNRs have higher thermal conductivity than same-size armchair GNRs, in agreement with atomistic calculations.
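The core loop of a phonon Monte Carlo solver alternates free flights with scattering events: the flight time is drawn from an exponential with the total scattering rate, and the terminating mechanism is chosen in proportion to its partial rate. The rates below are placeholders, not the graphene rates used in the paper:

```python
import math
import random

def free_flight_and_mechanism(rates, rng):
    """Sample a phonon's free-flight time from the total scattering rate,
    then choose which mechanism terminates the flight (rates in 1/s)."""
    total = sum(rates.values())
    t = -math.log(rng.random()) / total       # exponential flight time
    r = rng.random() * total
    cum = 0.0
    for name, rate in rates.items():
        cum += rate
        if r < cum:
            return t, name
    return t, name   # guard against floating-point round-off

rng = random.Random(42)
# Illustrative partial rates, not fitted to any material
rates = {"three-phonon": 1e9, "isotope": 2e8, "edge": 5e8}
samples = [free_flight_and_mechanism(rates, rng) for _ in range(10_000)]
mean_t = sum(t for t, _ in samples) / len(samples)
print(mean_t)
```

The mean sampled flight time converges to 1/(total rate), and each mechanism is selected with frequency proportional to its partial rate, which is a convenient test of the sampler.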
Monte Carlo simulations for plasma physics
International Nuclear Information System (INIS)
Okamoto, M.; Murakami, S.; Nakajima, N.; Wang, W.X.
2000-07-01
Plasma behaviours are very complicated and their analysis is generally difficult. However, when collisional processes play an important role in the plasma behaviour, the Monte Carlo method is often employed as a useful tool. For example, in neutral beam injection heating (NBI heating), electron or ion cyclotron heating, and alpha heating, Coulomb collisions slow down highly energetic particles and pitch-angle scatter them. These processes are often studied by the Monte Carlo technique, and good agreement with experimental results can be obtained. Recently, the Monte Carlo method has been developed to study fast-particle transport associated with heating and with the generation of the radial electric field. Further, it has been applied to investigating neoclassical transport in plasmas with steep gradients of density and temperature, which is beyond conventional neoclassical theory. In this report, we briefly summarize the research done by the present authors utilizing the Monte Carlo method. (author)
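Pitch-angle scattering by Coulomb collisions is commonly modeled with a Monte Carlo Lorentz operator in the style of Boozer and Kuo-Petravic: a deterministic drag of the pitch cosine toward zero plus a random kick of matching variance. The collision frequency and time step below are illustrative:

```python
import math
import random

def pitch_step(mu, nu, dt, rng):
    """One Monte Carlo pitch-angle scattering step: drag mu toward zero
    at rate nu plus a zero-mean random kick of the matching variance."""
    kick = rng.choice((-1.0, 1.0)) * math.sqrt(max(0.0, 1.0 - mu * mu) * nu * dt)
    mu = mu * (1.0 - nu * dt) + kick
    return max(-1.0, min(1.0, mu))   # keep the cosine in range

rng = random.Random(3)
nu, dt, steps = 1.0, 0.01, 100       # total scattering time nu*t = 1
mus = []
for _ in range(20_000):
    mu = 0.9
    for _ in range(steps):
        mu = pitch_step(mu, nu, dt, rng)
    mus.append(mu)
mean_mu = sum(mus) / len(mus)
print(mean_mu)
```

Averaged over many particles, the mean pitch cosine decays geometrically as mu0*(1 - nu*dt)^n, the discrete analogue of exponential isotropization.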
Particle transport methods for LWR dosimetry developed by the Penn State transport theory group
International Nuclear Information System (INIS)
Haghighat, A.; Petrovic, B.
1997-01-01
This paper reviews advanced particle transport theory methods developed by the Penn State Transport Theory Group (PSTTG) over the past several years. These methods have been developed in response to increasing needs for accuracy of results and for three-dimensional modeling of nuclear systems
Zimmerman, George B.
Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, inflight reaction kinematics, corrections for bulk and thermal Doppler effects and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.
International Nuclear Information System (INIS)
Zimmerman, G.B.
1997-01-01
Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect drive ICF hohlraums well, but can be improved 50X in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, inflight reaction kinematics, corrections for bulk and thermal Doppler effects and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials. copyright 1997 American Institute of Physics
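The angular-biasing idea can be sketched in one dimension: sample the emission cosine from a density tilted toward the target direction and attach a compensating statistical weight so the tally stays unbiased. The linear bias below is an illustrative choice, not the scheme used in the ICF code:

```python
import random

def biased_mu(a, rng):
    """Sample a direction cosine mu from q(mu) = (1 + a*mu)/2 by rejection,
    biasing emission toward mu = +1 for 0 < a < 1."""
    while True:
        mu = rng.uniform(-1.0, 1.0)
        if rng.random() < (1.0 + a * mu) / (1.0 + a):
            return mu

def forward_fraction(n, a, rng):
    """Weighted estimate of P(mu > 0.9) for an isotropic source.
    Each particle carries w = p(mu)/q(mu), keeping the estimate unbiased."""
    total = 0.0
    for _ in range(n):
        mu = biased_mu(a, rng)
        w = 1.0 / (1.0 + a * mu)      # isotropic pdf / biased pdf
        if mu > 0.9:
            total += w
    return total / n

rng = random.Random(7)
print(forward_fraction(200_000, 0.9, rng))
```

With the bias switched on, far more histories land in the forward cone, yet the weighted tally still converges to the analog answer P(mu > 0.9) = 0.05; the gain comes from reduced variance, the same mechanism behind the 50X efficiency figure quoted above.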
Study of Monte Carlo Transport Modeling and Calculation in the HTR Pebble Bed Core
Directory of Open Access Journals (Sweden)
Zuhair .
2013-01-01
The VHTR energy system concept, whether fueled with pebbles (pebble-bed VHTR) or prismatic blocks (prismatic VHTR), has attracted the attention of nuclear reactor physicists. One of the advantages of spherical fuel technology is that it offers refueling without having to halt electricity production. In addition, pebble fuel particles with uranium oxide (UO2) or uranium oxycarbide (UCO) kernels wrapped in TRISO coatings with a silicon carbide (SiC) layer are regarded as the primary option, in view of their high performance at high fuel burn-up and high temperature. This paper discusses the modeling and Monte Carlo transport calculation of an HTR pebble-bed core. The HTR pebble bed is a high-temperature gas-cooled, graphite-moderated reactor with cogeneration capability. The calculations were performed with the MCNP5 code at a temperature of 1200 K. The continuous-energy nuclear data libraries ENDF/B-V and ENDF/B-VI were used to complete the analysis. Overall, the calculation results show consistency, with nearly equal keff values for the nuclear data libraries used. The ENDF/B-VI (66c) library always produces a larger keff than ENDF/B-V (50c) or ENDF/B-VI (60c), with a bias of less than 0.25%. The BCC lattice almost always predicts a smaller keff than the other lattices, especially FCC. The keff of the BCC lattice is closer to that of the FCC lattice, with a bias of less than 0.19%, while against the SH lattice the calculation bias is less than 0.22%. The slightly different packing fractions (BCC = 61%, SH = 60.459%) do not make the calculation biases differ greatly. The keff estimates from the three lattice models lead to the conclusion that the BCC model can be adopted for HTR pebble-bed calculations in preference to the FCC and SH models. These estimates should be verified with other Monte Carlo simulations, or even with deterministic codes, in order to optimize high-temperature reactor core calculations. Keywords: kernel, TRISO, pebble fuel, pebble-bed HTR
A GPU-based Monte Carlo dose calculation code for photon transport in a voxel phantom
Energy Technology Data Exchange (ETDEWEB)
Bellezzo, M.; Do Nascimento, E.; Yoriyaz, H., E-mail: mbellezzo@gmail.br [Instituto de Pesquisas Energeticas e Nucleares / CNEN, Av. Lineu Prestes 2242, Cidade Universitaria, 05508-000 Sao Paulo (Brazil)
2014-08-15
As the most accurate method to estimate absorbed dose in radiotherapy, the Monte Carlo method has been widely used in radiotherapy treatment planning. Nevertheless, its efficiency can be improved for clinical routine applications. In this paper, we present the CUBMC code, a GPU-based Monte Carlo photon transport algorithm for dose calculation under the Compute Unified Device Architecture platform. The simulation of physical events is based on the algorithm used in PENELOPE, and the cross-section table used is the one generated by the Material routine, also present in the PENELOPE code. Photons are transported in voxel-based geometries with different compositions. To demonstrate the capabilities of the algorithm developed in the present work, four 128 x 128 x 128 voxel phantoms have been considered: the first composed of a homogeneous water-based medium, the second of bone, the third of lung, and the fourth of a heterogeneous bone-and-vacuum geometry. Simulations were done considering a 6 MeV monoenergetic photon point source. Two distinct approaches were used for transport simulation. The first forces the photon to stop at every voxel boundary; the second is the Woodcock method, in which a stop at a voxel boundary is considered only if the material changes along the photon's flight path. Dose calculations using these methods are compared for validation with the PENELOPE and MCNP5 codes. Speed-up factors are compared using an NVidia GTX 560-Ti GPU card against a 2.27 GHz Intel Xeon CPU processor. (Author)
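The Woodcock (delta-tracking) alternative mentioned above can be sketched in one dimension: the photon flies with a majorant cross section, and each tentative collision is accepted as real with probability sigma(x)/sigma_maj, so voxel boundaries never have to interrupt the flight. The geometry and cross sections below are illustrative:

```python
import math
import random

def woodcock_collision_site(x, sigma_of, sigma_maj, rng, x_max=64.0):
    """Woodcock (delta) tracking through a heterogeneous 1-D slab:
    fly with the majorant cross section and accept a tentative collision
    as real with probability sigma(x)/sigma_maj."""
    while True:
        x += -math.log(rng.random()) / sigma_maj   # flight to next candidate
        if x >= x_max:
            return x_max                           # escaped the slab
        if rng.random() < sigma_of(x) / sigma_maj:
            return x                               # real collision site

# Two-material slab: sigma = 1.0 for x < 2, 0.2 beyond (illustrative values)
sigma = lambda x: 1.0 if x < 2.0 else 0.2
rng = random.Random(1)
sites = [woodcock_collision_site(0.0, sigma, 1.0, rng) for _ in range(50_000)]
mean_site = sum(sites) / len(sites)
print(mean_site)
```

The sampled collision-depth distribution matches what analog boundary-stopping tracking would give for the same piecewise attenuation, which is what makes the two approaches interchangeable for dose tallies.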
Propulsion and hydrodynamic particle transport of magnetically twisted colloidal ribbons
Massana-Cid, Helena; Martinez-Pedrero, Fernando; Navarro-Argemí, Eloy; Pagonabarraga, Ignacio; Tierno, Pietro
2017-10-01
We describe a method to trap, transport and release microscopic particles in a viscous fluid using the hydrodynamic flow field generated by a magnetically propelled colloidal ribbon. The ribbon is composed of ferromagnetic microellipsoids that arrange with their long axes parallel to each other, a configuration that is energetically favorable due to their permanent magnetic moments. We use an external precessing magnetic field to torque the anisotropic particles forming the ribbon, and to induce propulsion of the entire structure through hydrodynamic coupling with the nearby substrate. The propulsion speed of the ribbon can be controlled by varying the driving frequency or the amplitude of the precessing field. The latter parameter is also used to reduce the average inter-particle distance and to induce twisting of the ribbon due to the increased attraction between the rotating ellipsoids. Furthermore, non-magnetic particles are attracted or repelled by the hydrodynamic flow field generated by the propelling ribbon. The proposed method may be used in channel-free microfluidic applications, where the precise trapping and transport of functionalized particles via non-invasive magnetic fields is required.
International Nuclear Information System (INIS)
Talley, T.L.; Evans, F.
1988-01-01
Prior work demonstrated the importance of nuclear scattering to fusion product energy deposition in hot plasmas. This suggests careful examination of nuclear physics details in burning plasma simulations. An existing Monte Carlo fast ion transport code is being expanded to be a test bed for this examination. An initial extension, the energy deposition of fast alpha particles in a hot deuterium plasma, is reported. The deposition times and deposition ranges are modified by allowing nuclear scattering. Up to 10% of the initial alpha particle energy is carried to greater ranges and times by the more mobile recoil deuterons. 4 refs., 5 figs., 2 tabs
International Nuclear Information System (INIS)
Mercier, B.; Meurant, G.; Tassart, J.
1985-04-01
A description of the equations in the fluid frame has been given recently. A simplification of the collision term is obtained, but the streaming term must then include angular deviation and the Doppler shift. We choose the latter description, which is more convenient for our purpose. We introduce some notation and recall some facts about stochastic kernels and the Monte Carlo method. We show how to apply the Monte Carlo method to a transport equation with an arbitrary streaming term; in particular, we show that the track length estimator is unbiased. We review some properties of the radiation hydrodynamics equations and show how energy conservation is obtained. Then, we apply the Monte Carlo method explained in section 2 to the particular case of the transfer equation in the fluid frame. Finally, we describe a physical example and give some numerical results
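The unbiasedness of the track length estimator mentioned above is easy to check numerically in a toy setting. The sketch below is not from this paper; the cross section and slab thickness are arbitrary. A pencil beam enters a purely absorbing slab, each history scores the path length it lays down inside the slab, and the result matches the analytic volume-averaged scalar flux.

```python
import math
import random

def track_length_flux(sigma_t, thickness, n=200_000, seed=1):
    """Track-length estimate of the volume-averaged scalar flux in a
    purely absorbing slab [0, thickness], unit pencil source at x = 0.
    Score per history: the path length traversed inside the slab."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        d = -math.log(rng.random()) / sigma_t  # distance to absorption
        total += min(d, thickness)             # track length inside slab
    return total / (n * thickness)             # score / (histories * volume)

# Analytic benchmark: phi_avg = (1 - exp(-sigma_t * T)) / (sigma_t * T)
```

The estimator converges to the analytic value because E[min(d, T)] = (1 - exp(-sigma_t T)) / sigma_t, which is exactly the integral of the uncollided flux over the slab.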
Momentum and particle transport in a nonhomogenous canopy
Gould, Andrew W.
Turbulent particle transport through the air plays an important role in the life cycle of many plant pathogens. In this study, data from a field experiment were analyzed to explore momentum and particle transport within a grape vineyard. The overall goal of these experiments was to understand how the architecture of a sparse agricultural canopy interacts with turbulent flow and ultimately determines the dispersion of airborne fungal plant pathogens. Turbulence in the vineyard canopy was measured using an array of four sonic anemometers deployed at heights z/H = 0.4, 0.9, 1.45, and 1.95, where z is the height of each sonic anemometer and H is the canopy height. In addition to turbulence measurements from the sonic anemometers, particle dispersion was measured using inert particles with the approximate size and density of powdery mildew spores and a roto-rod impaction trap array. Measurements from the sonic anemometers demonstrate that first- and second-order statistics of the wind field depend on the orientation of the wind direction with respect to the vineyard row direction. This dependence is a result of wind channeling, which transfers energy between the velocity components when the wind direction is not aligned with the rows. Although the winds have a strong directional dependence, spectral analysis indicates that the structure of the turbulent flow is not fundamentally altered by the interaction between wind direction and row direction. Examination of a limited number of particle release events indicates that the wind turning and channeling observed in the momentum field impacts particle dispersion. For row-aligned flow, particle dispersion in the direction normal to the flow is decreased relative to the plume spread predicted by a standard Gaussian plume model. For flow that is not aligned with the row direction, the plume is found to rotate in the same manner as the momentum field.
KIM, Steady-State Transport for Fixed Source in 2-D Thermal Reactor by Monte-Carlo
International Nuclear Information System (INIS)
Cupini, E.; De Matteis, A.; Simonini, R.
1980-01-01
1 - Description of problem or function: KIM (K-infinite Monte Carlo) is a program which solves the steady-state linear transport equation for a fixed-source problem (or, by successive fixed-source runs, for the eigenvalue problem) in a two-dimensional infinite thermal reactor lattice. The main quantities computed in some broad energy groups are the following: - Fluxes and cross sections averaged over the region (i.e. a space portion that can be unconnected but contains everywhere the same homogeneous material), grouping of regions, the whole element. - Average absorption and fission rates per nuclide. - Average flux, absorption and production distributions versus energy. 2 - Method of solution: Monte Carlo simulation is used by tracing particle histories from fission birth down through the resonance region until absorption in the thermal range. The program is organised in three sections for fast, epithermal and thermal simulation, respectively; each section implements a particular model for both numerical techniques and cross section representation (energy groups in the fast section, groups or resonance parameters in the epithermal section, points in the thermal section). During slowing down (energy above 1 eV) nuclei are considered as stationary, with the exception of some resonance nuclei whose spacing between resonances is much greater than the resonance width. The Doppler broadening of s-wave resonances of these nuclides is taken into account by computing cross sections at the current neutron energy and at the temperature of the nucleus hit. During thermalization (energy below 1 eV) the thermal motion of some nuclides is also considered, by exploiting scattering kernels provided by the library for light water, heavy water and oxygen at several temperatures. KIM includes splitting and Russian roulette. A characteristic feature of the program is its approach to the lattice geometry. In fact, besides the usual continuous treatment of the geometry using the well
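Splitting and Russian roulette, which KIM includes, are standard Monte Carlo variance-reduction moves. As a hedged illustration (the threshold and survival weight below are arbitrary choices, not KIM's), a fair roulette step kills low-weight histories while preserving the expected weight:

```python
import random

def russian_roulette(weight, threshold=0.1, survival_weight=0.5, rng=random):
    """Kill low-weight histories without biasing the tally: a particle
    below the threshold survives with probability weight/survival_weight
    and is promoted to survival_weight, so E[returned weight] == weight."""
    if weight >= threshold:
        return weight                  # heavy enough, leave it alone
    if rng.random() < weight / survival_weight:
        return survival_weight         # survivor, weight boosted
    return 0.0                         # killed
```

Splitting is the mirror image: a history with weight above some upper bound is divided into several copies of smaller weight, again keeping the expectation unchanged.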
Particle acceleration, transport and turbulence in cosmic and heliospheric physics
Matthaeus, W.
1992-01-01
In this progress report, the long-term goals, recent scientific progress, and organizational activities are described. The scientific focus of this annual report is in three areas: first, the physics of particle acceleration and transport, including heliospheric modulation and transport, shock acceleration, and galactic propagation and reacceleration of cosmic rays; second, the development of theories of the interaction of turbulence with large-scale plasma and magnetic field structures, as in winds and shocks; third, the elucidation of the nature of magnetohydrodynamic turbulence processes and the role such processes might play in heliospheric, galactic, and cosmic ray physics, and in other space physics applications.
3D Monte Carlo model of optical transport in laser-irradiated cutaneous vascular malformations
Majaron, Boris; Milanič, Matija; Jia, Wangcun; Nelson, J. S.
2010-11-01
We have developed a three-dimensional Monte Carlo (MC) model of optical transport in skin and applied it to analysis of port wine stain treatment with sequential laser irradiation and intermittent cryogen spray cooling. Our MC model extends the approaches of the popular multi-layer model by Wang et al. [1] to three dimensions, thus allowing treatment of skin inclusions with more complex geometries and arbitrary irradiation patterns. To overcome the obvious drawbacks of either "escape" or "mirror" boundary conditions at the lateral boundaries of the finely discretized volume of interest (VOI), photons exiting the VOI are propagated in laterally infinite tissue layers with appropriate optical properties until they lose all their energy, escape into the air, or return to the VOI; the energy deposition outside of the VOI is not computed or recorded. After discussing the selection of tissue parameters, we apply the model to analysis of blood photocoagulation and collateral thermal damage in treatment of port wine stain (PWS) lesions with sequential laser irradiation and intermittent cryogen spray cooling.
Comparison of some popular Monte Carlo solution for proton transportation within pCT problem
International Nuclear Information System (INIS)
Evseev, Ivan; Assis, Joaquim T. de; Yevseyeva, Olga; Hormaza, Joel M.
2007-01-01
The proton transport in matter is described by the Boltzmann kinetic equation for the proton flux density. This equation, however, does not have a general analytical solution; approximate analytical solutions have been developed only under a number of significant simplifications, so Monte Carlo simulations are widely used instead. The current work is devoted to a discussion of the proton energy spectra obtained by simulation with the SRIM2006, GEANT4 and MCNPX packages. The simulations have been performed considering some further applications of the obtained results in computed tomography with proton beams (pCT). Thus the initial and outgoing proton energies (3 / 300 MeV) as well as the thickness of the irradiated target (water and aluminum phantoms within 90% of the full range for a given proton beam energy) were considered in the interval of values typical for pCT applications. One of the most interesting results of this comparison is that while the MCNPX spectra are in good agreement with the analytical description in the Fokker-Planck approximation, and the GEANT4 simulated spectra are only slightly shifted from them, the SRIM2006 simulations predict a notably higher mean energy loss for protons. (author)
Criticality coefficient calculation for a small PWR using Monte Carlo Transport Code
Energy Technology Data Exchange (ETDEWEB)
Trombetta, Debora M.; Su, Jian, E-mail: dtrombetta@nuclear.ufrj.br, E-mail: sujian@nuclear.ufrj.br [Coordenacao dos Programas de Pos-Graduacao em Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil); Chirayath, Sunil S., E-mail: sunilsc@tamu.edu [Department of Nuclear Engineering and Nuclear Security Science and Policy Institute, Texas A and M University, TX (United States)
2015-07-01
Computational models of reactors are increasingly used to predict nuclear reactor physics parameters responsible for reactivity changes, which could lead to accidents and losses. In this work, preliminary results of criticality coefficient calculations using the Monte Carlo transport code MCNPX are presented for a small PWR. The computational model developed consists of the core with fuel elements, radial reflectors, and control rods inside a pressure vessel. Three different geometries were simulated, a single fuel pin, a fuel assembly, and the core, with the aim of comparing the criticality coefficients among them. The criticality coefficients calculated were: Doppler Temperature Coefficient, Coolant Temperature Coefficient, Coolant Void Coefficient, Power Coefficient, and Control Rod Worth. The coefficient values calculated by the MCNP code were compared with literature results, showing good agreement with reference data, which validates the computational model developed and allows it to be used for more complex studies. Criticality coefficient values from the three simulations showed little discrepancy for almost all coefficients investigated, the only exception being the Power Coefficient. These preliminary results show that a model as simple as a fuel assembly can describe changes in almost all the criticality coefficients, avoiding the need for a complex core simulation. (author)
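Coefficients like those listed above are typically formed as finite differences of reactivities computed from pairs of k-eff runs. A minimal sketch, not from the paper (the k-eff and temperature values below are invented for illustration):

```python
def reactivity_pcm(k):
    """Reactivity rho = (k - 1)/k, expressed in pcm (1e-5)."""
    return (k - 1.0) / k * 1e5

def temperature_coefficient(k_ref, t_ref, k_pert, t_pert):
    """Finite-difference temperature coefficient of reactivity in pcm/K,
    as formed from two branch k-eff calculations (e.g. two MCNP runs)."""
    return (reactivity_pcm(k_pert) - reactivity_pcm(k_ref)) / (t_pert - t_ref)

# Hypothetical Doppler branch: fuel at 600 K vs 900 K.
alpha = temperature_coefficient(1.00000, 600.0, 0.99400, 900.0)
```

A negative value of alpha here would indicate the desired negative reactivity feedback with increasing fuel temperature; statistical uncertainty on each Monte Carlo k-eff must of course be propagated through the difference.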
Recent advances in neutral particle transport methods and codes
International Nuclear Information System (INIS)
Azmy, Y.Y.
1996-01-01
An overview of ORNL's three-dimensional neutral particle transport code, TORT, is presented. Special features of the code that make it invaluable for large applications are summarized for the prospective user. Advanced capabilities currently under development and installation in the production release of TORT are discussed; they include: multitasking on Cray platforms running the UNICOS operating system; the Adjacent-cell Preconditioning acceleration scheme; and graphics codes for displaying computed quantities such as the flux. Further developments for TORT and its companion codes to enhance its present capabilities, as well as to expand its range of applications, are discussed. Speculation on the next generation of neutral particle transport codes at ORNL, especially regarding unstructured grids and high-order spatial approximations, is also offered
Li, Jun
2013-09-01
We present a single-particle Lennard-Jones (L-J) model for CO2 and N2. Simplified L-J models for other small polyatomic molecules can be obtained following the methodology described herein. The phase-coexistence diagrams of single-component systems computed using the proposed single-particle models for CO2 and N2 agree well with experimental data over a wide range of temperatures. These diagrams are computed using the Markov Chain Monte Carlo method based on the Gibbs-NVT ensemble. This good agreement validates the proposed simplified models; that is, with properly selected parameters, the single-particle models have accuracy similar to that of more complex, state-of-the-art molecular models in predicting gas-phase properties. To further test these single-particle models, three binary mixtures of CH4, CO2 and N2 are studied using a Gibbs-NPT ensemble. These results are compared against experimental data over a wide range of pressures. The single-particle model has similar accuracy to traditional models in the gas phase, although its deviation in the liquid phase is greater. Since the single-particle model reduces the particle number and avoids the time-consuming Ewald summation used to evaluate Coulomb interactions, the proposed model improves the computational efficiency significantly, particularly in the case of high liquid density, where the acceptance rate of the particle-swap trial move increases. We compare, at constant temperature and pressure, the Gibbs-NPT and Gibbs-NVT ensembles to analyze their performance differences and the consistency of their results. As theoretically predicted, the agreement between the simulations implies that Gibbs-NVT can be used to validate Gibbs-NPT predictions when experimental data are not available. © 2013 Elsevier Inc.
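The Markov Chain Monte Carlo machinery behind such Gibbs-ensemble runs rests on Metropolis sampling of L-J particles. A self-contained single-box NVT sketch in reduced units follows; it is not the paper's code, and the box size, step size, and parameters are illustrative rather than the fitted CO2/N2 values. The full total-energy recomputation per move is deliberately simple (production codes use pair-energy deltas and cutoffs).

```python
import math
import random

def lj_energy(r2, eps=1.0, sigma=1.0):
    """Lennard-Jones pair energy evaluated from a squared distance r2."""
    sr6 = (sigma * sigma / r2) ** 3
    return 4.0 * eps * (sr6 * sr6 - sr6)

def total_energy(pos, box):
    """Sum of minimum-image pair energies in a cubic periodic box."""
    e = 0.0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            r2 = 0.0
            for a in range(3):
                d = pos[i][a] - pos[j][a]
                d -= box * round(d / box)   # minimum-image convention
                r2 += d * d
            e += lj_energy(r2)
    return e

def metropolis_move(pos, box, beta, delta=0.1, rng=random):
    """One canonical (NVT) Metropolis trial displacement of a random particle."""
    i = rng.randrange(len(pos))
    old = pos[i]
    e_old = total_energy(pos, box)
    pos[i] = [(c + rng.uniform(-delta, delta)) % box for c in old]
    e_new = total_energy(pos, box)
    # Accept with probability min(1, exp(-beta * dE)); otherwise restore.
    if rng.random() >= math.exp(min(0.0, -beta * (e_new - e_old))):
        pos[i] = old
    return pos
```

The Gibbs ensemble adds two more trial moves on top of this one: volume exchange between boxes and the particle-swap move whose acceptance rate the abstract discusses.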
Gyrokinetics Simulation of Energetic Particle Turbulence and Transport
Energy Technology Data Exchange (ETDEWEB)
Diamond, Patrick H.
2011-09-21
Progress in research during this year elucidated the physics of precession resonance and its interaction with radial scattering to form phase space density granulations. Momentum theorems for drift wave-zonal flow systems involving precession resonance were derived. These are directly generalizable to energetic particle modes. A novel nonlinear, subcritical growth mechanism was identified, which has now been verified by simulation. These results strengthen the foundation of our understanding of transport in burning plasmas
Gyrokinetics Simulation of Energetic Particle Turbulence and Transport
International Nuclear Information System (INIS)
Diamond, Patrick H.
2011-01-01
Progress in research during this year elucidated the physics of precession resonance and its interaction with radial scattering to form phase space density granulations. Momentum theorems for drift wave-zonal flow systems involving precession resonance were derived. These are directly generalizable to energetic particle modes. A novel nonlinear, subcritical growth mechanism was identified, which has now been verified by simulation. These results strengthen the foundation of our understanding of transport in burning plasmas
The particle in the spider's web: transport through biological hydrogels.
Witten, Jacob; Ribbeck, Katharina
2017-06-22
Biological hydrogels such as mucus, extracellular matrix, biofilms, and the nuclear pore have diverse functions and compositions, but all act as selectively permeable barriers to the diffusion of particles. Each barrier has a crosslinked polymeric mesh that blocks penetration of large particles such as pathogens, nanotherapeutics, or macromolecules. These polymeric meshes also employ interactive filtering, in which affinity between solutes and the gel matrix controls permeability. Interactive filtering affects the transport of particles of all sizes including peptides, antibiotics, and nanoparticles and in many cases this filtering can be described in terms of the effects of charge and hydrophobicity. The concepts described in this review can guide strategies to exploit or overcome gel barriers, particularly for applications in diagnostics, pharmacology, biomaterials, and drug delivery.
On the Way to Future's High Energy Particle Physics Transport Code
Bíró, Gábor; Futó, Endre
2015-01-01
High Energy Physics (HEP) needs a huge amount of computing resources. In addition, data acquisition, transfer, and analysis require a well-developed infrastructure. Probing new physics requires increasing the luminosity of the accelerator facilities, which produce more and more data in the experimental detectors. Both testing new theories and detector R&D are based on complex simulations; we have already reached the level where Monte Carlo detector simulation takes much more time than real data collection. This is why speeding up calculations and simulations has become important in the HEP community. The Geant Vector Prototype (GeantV) project aims to optimize the most-used particle transport code by applying parallel computing and exploiting the capabilities of modern CPU and GPU architectures. With maximized concurrency at multiple levels, GeantV is intended to be the successor of the Geant4 particle transport code that has been used for two decades succe...
International Nuclear Information System (INIS)
Elbast, M.; Saudo, A.; Franck, D.; Petitot, F.; Desbree, A.
2008-01-01
Microdosimetry using Monte Carlo simulation is a suitable technique to describe the stochastic nature of energy deposition by alpha particles at the cellular level. Because of its short range, the energy imparted by this particle to the targets is highly non-uniform; thus, to achieve accurate dosimetric results, the modelling of the geometry should be as realistic as possible. The objectives of the present study were to validate the use of the MCNPX and Geant4 Monte Carlo codes for microdosimetric studies using simple and three-dimensional voxelised geometries, and to study their limits of validity in the latter case. To that aim, the specific energy z deposited in the cell nucleus, the single-hit density of specific energy f₁(z) and the mean specific energy ⟨z₁⟩ were calculated. Results show good agreement with the literature for simple geometry. The f₁(z) obtained with MCNPX for voxel sizes below 1 μm, however, differs significantly in shape from that of the non-voxelised geometry, whereas with Geant4 little difference is observed whatever the voxel size. Below 1 μm, the use of Geant4 is therefore required, although the calculation time is 10 times higher with Geant4 than with the MCNPX code under the same conditions. (authors)
Particle Tracking Model and Abstraction of Transport Processes
International Nuclear Information System (INIS)
Robinson, B.
2004-01-01
The purpose of this report is to document the abstraction model being used in total system performance assessment (TSPA) model calculations for radionuclide transport in the unsaturated zone (UZ). The UZ transport abstraction model uses the particle-tracking method that is incorporated into the finite element heat and mass model (FEHM) computer code (Zyvoloski et al. 1997 [DIRS 100615]) to simulate radionuclide transport in the UZ. This report outlines the assumptions, design, and testing of a model for calculating radionuclide transport in the UZ at Yucca Mountain. In addition, methods for determining and inputting transport parameters are outlined for use in the TSPA for license application (LA) analyses. Process-level transport model calculations are documented in another report for the UZ (BSC 2004 [DIRS 164500]). Three-dimensional, dual-permeability flow fields generated to characterize UZ flow (documented by BSC 2004 [DIRS 169861]; DTN: LB03023DSSCP9I.001 [DIRS 163044]) are converted to make them compatible with the FEHM code for use in this abstraction model. This report establishes the numerical method and demonstrates the use of the model that is intended to represent UZ transport in the TSPA-LA. Capability of the UZ barrier for retarding the transport is demonstrated in this report, and by the underlying process model (BSC 2004 [DIRS 164500]). The technical scope, content, and management of this report are described in the planning document ''Technical Work Plan for: Unsaturated Zone Transport Model Report Integration'' (BSC 2004 [DIRS 171282]). Deviations from the technical work plan (TWP) are noted within the text of this report, as appropriate. The latest version of this document is being prepared principally to correct parameter values found to be in error due to transcription errors, changes in source data that were not captured in the report, calculation errors, and errors in interpretation of source data
Characterization of molecule and particle transport through nanoscale conduits
Alibakhshi, Mohammad Amin
Nanofluidic devices have been of great interest due to their applications in variety of fields, including energy conversion and storage, water desalination, biological and chemical separations, and lab-on-a-chip devices. Although these applications cross the boundaries of many different disciplines, they all share the demand for understanding transport in nanoscale conduits. In this thesis, different elusive aspects of molecule and particle transport through nanofluidic conduits are investigated, including liquid and ion transport in nanochannels, diffusion- and reaction-governed enzyme transport in nanofluidic channels, and finally translocation of nanobeads through nanopores. Liquid or solvent transport through nanoconfinements is an essential yet barely characterized component of any nanofluidic systems. In the first chapter, water transport through single hydrophilic nanochannels with heights down to 7 nm is experimentally investigated using a new measurement technique. This technique has been developed based on the capillary flow and a novel hybrid nanochannel design and is capable of characterizing flow in both single nanoconduits as well as nanoporous media. The presence of a 0.7 nm thick hydration layer on hydrophilic surfaces and its effect on increasing the hydraulic resistance of the nanochannels is verified. Next, ion transport in a new class of nanofluidic rectifiers is theoretically and experimentally investigated. These so called nanofluidic diodes are nanochannels with asymmetric geometries which preferentially allow ion transport in one direction. A nondimensional number as a function of electrolyte concentration, nanochannel dimensions, and surface charge is derived that summarizes the rectification behavior of this system. In the fourth chapter, diffusion- and reaction-governed enzyme transport in nanofluidic channels is studied and the theoretical background necessary for understanding enzymatic activity in nanofluidic channels is presented. A
Measurement of particle transport coefficients on Alcator C-Mod
Energy Technology Data Exchange (ETDEWEB)
Luke, T.C.T.
1994-10-01
The goal of this thesis was to study the behavior of the plasma transport during the divertor detachment in order to explain the central electron density rise. The measurement of particle transport coefficients requires sophisticated diagnostic tools. A two color interferometer system was developed and installed on Alcator C-Mod to measure the electron density with high spatial (∼ 2 cm) and high temporal (≤ 1.0 ms) resolution. The system consists of 10 CO₂ (10.6 μm) and 4 HeNe (0.6328 μm) chords that are used to measure the line integrated density to within 0.08 CO₂ degrees or 2.3 × 10¹⁶ m⁻² theoretically. Using the two color interferometer, a series of gas puffing experiments were conducted. The density was varied above and below the threshold density for detachment at a constant magnetic field and plasma current. Using a gas modulation technique, the particle diffusion, D, and the convective velocity, V, were determined. Profiles were inverted using a SVD inversion and the transport coefficients were extracted with a time regression analysis and a transport simulation analysis. Results from each analysis were in good agreement. Measured profiles of the coefficients increased with the radius and the values were consistent with measurements from other experiments. The values exceeded neoclassical predictions by a factor of 10. The profiles also exhibited an inverse dependence with plasma density. The scaling of both attached and detached plasmas agreed well with this inverse scaling. This result and the lack of change in the energy and impurity transport indicate that there was no change in the underlying transport processes after detachment.
Measurement of particle transport coefficients on Alcator C-Mod
International Nuclear Information System (INIS)
Luke, T.C.T.
1994-10-01
The goal of this thesis was to study the behavior of the plasma transport during the divertor detachment in order to explain the central electron density rise. The measurement of particle transport coefficients requires sophisticated diagnostic tools. A two color interferometer system was developed and installed on Alcator C-Mod to measure the electron density with high spatial (∼ 2 cm) and high temporal (≤ 1.0 ms) resolution. The system consists of 10 CO₂ (10.6 μm) and 4 HeNe (0.6328 μm) chords that are used to measure the line integrated density to within 0.08 CO₂ degrees or 2.3 × 10¹⁶ m⁻² theoretically. Using the two color interferometer, a series of gas puffing experiments were conducted. The density was varied above and below the threshold density for detachment at a constant magnetic field and plasma current. Using a gas modulation technique, the particle diffusion, D, and the convective velocity, V, were determined. Profiles were inverted using a SVD inversion and the transport coefficients were extracted with a time regression analysis and a transport simulation analysis. Results from each analysis were in good agreement. Measured profiles of the coefficients increased with the radius and the values were consistent with measurements from other experiments. The values exceeded neoclassical predictions by a factor of 10. The profiles also exhibited an inverse dependence with plasma density. The scaling of both attached and detached plasmas agreed well with this inverse scaling. This result and the lack of change in the energy and impurity transport indicate that there was no change in the underlying transport processes after detachment.
Energy Technology Data Exchange (ETDEWEB)
Zhaoyuan Liu; Kord Smith; Benoit Forget; Javier Ortensi
2016-05-01
A new method for computing homogenized assembly neutron transport cross sections and diffusion coefficients that is both rigorous and computationally efficient is proposed in this paper. In the limit of a homogeneous hydrogen slab, the new method is equivalent to the long-used, and only-recently-published, CASMO transport method. The rigorous method is used to demonstrate the sources of inaccuracy in the commonly applied “out-scatter” transport correction. It is also demonstrated that the newly developed method is directly applicable to lattice calculations performed by Monte Carlo and is capable of computing rigorous homogenized transport cross sections for arbitrarily heterogeneous lattices. Comparisons of several common transport cross section approximations are presented for a simple problem of an infinite medium of hydrogen. The new method has also been applied in computing 2-group diffusion data for an actual PWR lattice from the BEAVRS benchmark.
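The "out-scatter" approximation criticized above has a simple algebraic form: per group, subtract the total P1 (mu-weighted) out-scatter from the total cross section, then take D_g = 1/(3 * Sigma_tr,g). A sketch with invented two-group numbers (not BEAVRS data):

```python
def outscatter_transport_xs(sigma_t, sigma_s1):
    """"Out-scatter" transport correction: for each group g, subtract the
    row sum of the P1 scattering matrix (all g -> g' out-scatter) from
    the total cross section.  sigma_s1[g][g2] is the P1 moment g -> g2."""
    return [st - sum(row) for st, row in zip(sigma_t, sigma_s1)]

def diffusion_coefficients(sigma_tr):
    """Group diffusion coefficients D_g = 1 / (3 * Sigma_tr,g)."""
    return [1.0 / (3.0 * s) for s in sigma_tr]

# Invented two-group macroscopic data (1/cm), for illustration only.
sigma_t = [0.50, 1.20]
sigma_s1 = [[0.10, 0.02],   # fast -> (fast, thermal) P1 moments
            [0.00, 0.30]]   # thermal -> (fast, thermal) P1 moments
sigma_tr = outscatter_transport_xs(sigma_t, sigma_s1)
D = diffusion_coefficients(sigma_tr)
```

The paper's point is that this row-sum correction ignores the in-scatter anisotropy, which is exactly where the rigorous method differs; the sketch only makes the common approximation concrete.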
Energy Technology Data Exchange (ETDEWEB)
Shetty, Nikhil Vittal
2013-01-31
AGATE is a project envisaged to demonstrate the feasibility of transmutation in a gas (helium) cooled accelerator-driven system using a solid spallation target. Development of the spallation target module and assessment of its safety aspects are studied in this work. According to the AGATE concept parameters, 600 MeV protons are delivered onto the segmented tungsten spallation target. The Monte Carlo toolkit Geant4 has been used in the simulation of particle transport. The binary cascade model is used to simulate intra-nuclear cascades, along with the G4NDL neutron data library for low-energy neutrons (<20 MeV).
DEFF Research Database (Denmark)
Taasti, Vicki Trier; Knudsen, Helge; Holzscheiter, Michael
2015-01-01
The Fermi–Teller Z-law, which is implemented by default in SHIELD-HIT12A, has been shown not to be a good approximation for the capture probability of negative projectiles by nuclei. We investigate other theories that have been developed and that give better agreement with experimental findings. The consequence...... sections, which restores the agreement, but some small deviations still remain. Best agreement is achieved by using the most recent antiproton collision cross sections and the Fermi–Teller Z-law, even if experimental data conclude that the Z-law inadequately describes annihilation on compounds. We...
Directory of Open Access Journals (Sweden)
T. Hada
Energetic particles and MHD waves are studied using simultaneous ISEE-3 data to investigate particle propagation and scattering between the source near the Sun and 1 AU. ³He-rich events are of particular interest because they are typically low-intensity "scatter-free" events. The largest solar proton events are of interest because they have been postulated to generate their own waves through beam instabilities. For ³He-rich events, simultaneous interplanetary magnetic spectra are measured. The intensity of the interplanetary "fossil" turbulence through which the particles have traversed is found to be at the "quiet" to "intermediate" level of IMF activity. Pitch angle scattering rates and the corresponding particle mean free paths λ_W-P are calculated using the measured wave intensities, polarizations, and k directions. The values of λ_W-P are found to be ~5 times less than the value of λ_He, the latter derived from He intensity and anisotropy time profiles. It is demonstrated by computer simulation that scattering rates through a 90° pitch angle are lower than those at other pitch angles, and that this is a possible explanation for the discrepancy between the λ_W-P and λ_He values. At this time the scattering mechanism(s) is unknown. We suggest a means whereby a direct comparison between the two λ values could be made. Computer simulations indicate that although scattering through 90° is lower, it still occurs. Possibilities are either large pitch angle scattering through resonant interactions, or particle mirroring off of field compression regions. The largest solar proton events are analyzed to investigate the possibility of local wave generation at 1 AU. In accordance with the results of a previous calculation (Gary et al., 1985) of beam stability, proton beams at 1 AU are found to be marginally stable. No evidence for substantial wave amplitude was found. Locally generated waves, if present, were less than 10⁻³ nT² Hz⁻¹ at the leading
Monte Carlo problem and parallel computers, and how to do a fast particle mover on the STAR 100
International Nuclear Information System (INIS)
Sinz, K.H.P.H.
1975-01-01
Particle simulation problems of the Monte Carlo type are widely believed to be intrinsically highly scalar. In the absence of a definitive mathematical theorem to the contrary, this belief rests on the very apparent programming difficulties encountered on a vector machine, and this class of problem is therefore thought to be ill-suited to highly parallel and vectorized computers. However, it is demonstrated by several examples that a particle mover is fully vectorizable, and on the CDC STAR-100 its performance is found to be far from hopeless. One of the several possible vectorizations is estimated to yield a gain of a factor of 15 on the STAR over good serial coding on the same machine. This falls far short of the STAR's peak vector performance of 30 to 70 times scalar rates because certain fast vector instructions are not available and have to be simulated. The current STAR algorithm nevertheless outperforms a carefully hand-coded CDC 7600 implementation by a factor of 3, despite the 7600's fivefold superior scalar capability. A more generally vectorized particle mover will always substantially outperform scalar coding on any machine equipped with a properly chosen set of fast vector instructions. (U.S.)
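The event-based bookkeeping behind such a vectorized mover can be sketched in a few lines. A minimal NumPy illustration, not STAR-100 code: all live particles undergo the same event in lockstep, and the bank is compacted as particles are absorbed (cross-section and absorption probability are invented for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

def move_particles(x, alive, sigma_t=1.0, absorb_prob=0.3):
    """Advance every live particle one collision in lockstep (vector form).

    Instead of following one history to completion (the scalar, history-based
    form), all particles undergo the same event simultaneously, which is what
    maps onto vector hardware.
    """
    n = alive.sum()
    # Sample free-flight distances for the whole bank at once.
    step = rng.exponential(1.0 / sigma_t, size=n)
    x[alive] += step
    # Vectorized absorption test; survivors stay in the bank.
    survived = rng.random(n) >= absorb_prob
    idx = np.flatnonzero(alive)
    alive[idx[~survived]] = False
    return x, alive

x = np.zeros(100_000)
alive = np.ones(100_000, dtype=bool)
while alive.any():
    x, alive = move_particles(x, alive)
```

With absorption probability 0.3 per collision, each history averages 1/0.3 flights, so the mean penetration depth converges to about 3.3 mean free paths.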
Energy Technology Data Exchange (ETDEWEB)
Badal, Andreu; Badano, Aldo [Division of Imaging and Applied Mathematics, OSEL, CDRH, U.S. Food and Drug Administration, Silver Spring, Maryland 20993-0002 (United States)
2009-11-15
Purpose: Monte Carlo simulations of radiation transport are computationally intensive and may require long computing times. The authors introduce a new paradigm for the acceleration of Monte Carlo simulations: the use of a graphics processing unit (GPU) as the main computing device instead of a central processing unit (CPU). Methods: A GPU-based Monte Carlo code that simulates photon transport in a voxelized geometry with the accurate physics models from PENELOPE has been developed using the CUDA programming model (NVIDIA Corporation, Santa Clara, CA). Results: An outline of the new code and a sample x-ray imaging simulation with an anthropomorphic phantom are presented. A remarkable 27-fold speed-up factor was obtained using a GPU compared to a single-core CPU. Conclusions: The reported results show that GPUs are currently a good alternative to CPUs for the simulation of radiation transport. Since the performance of GPUs is currently increasing at a faster pace than that of CPUs, the advantages of GPU-based software are likely to be more pronounced in the future.
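The kernel being parallelized here is, at its core, exponential free-path sampling repeated over millions of independent photon histories. A minimal CPU sketch of that kernel (a homogeneous slab with an illustrative attenuation coefficient, not the PENELOPE physics):

```python
import numpy as np

rng = np.random.default_rng(42)

def transmitted_fraction(mu, thickness, n_photons=1_000_000):
    """Fraction of photons crossing a homogeneous slab without interacting.

    Free-path lengths follow the exponential distribution with mean 1/mu;
    a photon transmits if its first interaction lies beyond the slab.
    The analytic answer is exp(-mu * thickness).
    """
    s = rng.exponential(1.0 / mu, size=n_photons)
    return np.mean(s > thickness)

est = transmitted_fraction(mu=0.2, thickness=5.0)
exact = np.exp(-0.2 * 5.0)
```

Because every history is independent, exactly this loop is what a GPU can run in parallel across thousands of threads.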
Experimental study of particle transport and density fluctuation in LHD
International Nuclear Information System (INIS)
Tanaka, K.; Morita, S.; Sanin, A.; Michael, C.; Kawahata, K.; Yamada, H.; Miyazawa, J.; Tokuzawa, T.; Akiyama, T.; Goto, M.; Ida, K.; Yoshinuma, M.; Narihara, K.; Yamada, I.; Yokoyama, M.; Masuzaki, S.; Morisaki, T.; Sakamoto, R.; Funaba, H.; Komori, A.; Vyacheslavov, L.N.; Murakami, S.; Wakasa, A.
2005-01-01
A variety of electron density (n_e) profiles have been observed in the Large Helical Device (LHD). The density profiles change dramatically with heating power and toroidal magnetic field (B_t) at the same line-averaged density. The particle transport coefficients, i.e., the diffusion coefficient (D) and convection velocity (V), are obtained experimentally from density modulation experiments in the standard configuration. The values of D and V are estimated separately for the core and the edge. The diffusion coefficients are a strong function of electron temperature (T_e), proportional to T_e^1.7±0.9 in the core and T_e^1.1±0.14 at the edge, and the edge diffusion coefficients are proportional to B_t^-2.08. The scaling of D at the edge is thus found to be close to gyro-Bohm in nature. A non-zero V is observed, and the T_e gradient is found to drive particle convection. This is particularly clear in the core region, where the convection velocity reverses direction from inward to outward as the T_e gradient increases. At the edge, the convection is inward directed in most cases of the present data set, with a magnitude proportional to the T_e gradient; however, the toroidal magnetic field also significantly affects both the value and the direction of V. The spectrum of density fluctuations changes with heating power, suggesting an influence on particle transport. The peak wavenumber is around 0.1 times the inverse ion Larmor radius, as expected for gyro-Bohm diffusion. The fluctuation intensity peaks are localized at the plasma edge, where the density gradient becomes negative and diffusion contributes most to the particle flux. These results suggest a qualitative correlation of fluctuations with particle diffusion. (author)
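In such modulation analyses the radial particle flux is commonly decomposed as Γ = -D ∇n + V n, so D and V follow from a linear least-squares fit once Γ, n and ∇n are known. A schematic sketch on synthetic profiles (all numbers are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic profiles: choose "true" transport coefficients, build the flux
# Gamma = -D*grad_n + V*n, add noise, then recover D and V by fitting.
D_true, V_true = 0.5, -1.2          # m^2/s, m/s (inward convection: V < 0)
n = np.linspace(2.0, 1.0, 50)       # density profile (arbitrary units)
grad_n = np.gradient(n, 0.02)       # radial gradient, 0.02 m grid spacing
gamma = -D_true * grad_n + V_true * n
gamma += rng.normal(0.0, 0.01, gamma.size)   # measurement noise

# Least-squares fit of Gamma = -D*grad_n + V*n for the pair (D, V).
A = np.column_stack([-grad_n, n])
(D_fit, V_fit), *_ = np.linalg.lstsq(A, gamma, rcond=None)
```

The sign of the fitted V then distinguishes inward from outward convection, which is exactly the quantity the abstract reports reversing with the T_e gradient.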
Fathi, Kamran
The high uncertainty in the Relative Biological Effectiveness (RBE) values of particle therapy beams, which are used in combination with the quantity absorbed dose in radiotherapy, together with the increase in the number of particle therapy centres worldwide, necessitates a better understanding of the biological effect of such modalities. The present novel study is part of the performance testing and development of a microcalorimeter based on Superconducting QUantum Interference Devices (SQUIDs). Unlike other microdosimetric detectors used for investigating the energy distribution, this detector provides a direct measurement of energy deposition at the micrometre scale, which can be used to improve our understanding of biological effects in particle therapy, radiation protection and environmental dosimetry. Temperature rises of less than 1 μK are detectable; combined with the low specific heat capacity of the absorber at cryogenic temperatures, an extremely high energy deposition sensitivity of approximately 0.4 eV can be achieved. The detector consists of three layers: a Tissue Equivalent (TE) absorber, a SuperConducting (SC) absorber and a silicon substrate. Ideally all energy would be deposited in the TE absorber and the heat rise in the SC layer would arise from heat conduction out of the TE layer. In practice, however, direct particle absorption occurs in all three layers and must be corrected for. To investigate the thermal behaviour within the detector, and to quantify any necessary correction, particle tracks were simulated with Geant4 (v9.6) Monte Carlo simulations. The track information was then passed to the COMSOL Multiphysics (Finite Element Method) software, and the 3D heat transfer within each layer was evaluated in a time-dependent model. For a statistically reliable outcome, the simulations had to be repeated for a large number of particles. An automated system has been developed that couples the Geant4 Monte Carlo output to COMSOL
Charged-particle thermonuclear reaction rates: I. Monte Carlo method and statistical distributions
International Nuclear Information System (INIS)
Longland, R.; Iliadis, C.; Champagne, A.E.; Newton, J.R.; Ugalde, C.; Coc, A.; Fitzgerald, R.
2010-01-01
A method based on Monte Carlo techniques is presented for evaluating thermonuclear reaction rates. We begin by reviewing commonly applied procedures and point out that reaction rates reported in the literature up to now have no rigorous statistical meaning. Subsequently, we associate each nuclear physics quantity entering the calculation of reaction rates with a specific probability density function, including Gaussian, lognormal and chi-squared distributions. Based on these probability density functions, the total reaction rate is randomly sampled many times until the required statistical precision is achieved. This procedure results in a median (Monte Carlo) rate which agrees under certain conditions with the commonly reported recommended 'classical' rate. In addition, we present at each temperature a low rate and a high rate, corresponding to the 0.16 and 0.84 quantiles of the cumulative reaction rate distribution. These quantities are in general different from the statistically meaningless 'minimum' (or 'lower limit') and 'maximum' (or 'upper limit') reaction rates which are commonly reported. Furthermore, we approximate the output reaction rate probability density function by a lognormal distribution and present, at each temperature, the lognormal parameters μ and σ. The values of these quantities will be crucial for future Monte Carlo nucleosynthesis studies. Our new reaction rates, appropriate for bare nuclei in the laboratory, are tabulated in the second paper of this issue (Paper II). The nuclear physics input used to derive our reaction rates is presented in the third paper of this issue (Paper III). In the fourth paper of this issue (Paper IV) we compare our new reaction rates to previous results.
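The sampling procedure described can be sketched directly. Here a toy total rate is built from two lognormally distributed contributions (the strengths and factor uncertainties are invented for illustration, not data from the paper), and the median, the 0.16/0.84 quantiles, and the lognormal summary parameters μ and σ are extracted from the samples:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_rate(n_samples=100_000):
    """Toy total rate: sum of two contributions whose strengths carry
    lognormal uncertainties (a factor-uncertainty f means sigma = ln f)."""
    s1 = rng.lognormal(mean=np.log(3.0e-4), sigma=np.log(1.3), size=n_samples)
    s2 = rng.lognormal(mean=np.log(1.0e-5), sigma=np.log(2.0), size=n_samples)
    return s1 + s2

rates = sample_rate()

# Recommended rate and uncertainty band as quantiles of the sampled
# distribution, following the paper's definitions.
low, median, high = np.quantile(rates, [0.16, 0.50, 0.84])

# Lognormal summary parameters mu, sigma fitted from the log of the samples.
mu, sigma = np.log(rates).mean(), np.log(rates).std()
```

For a distribution that is close to lognormal, exp(μ) reproduces the median rate, which is why (μ, σ) suffice as a compact tabulation.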
Premar-2: a Monte Carlo code for radiative transport simulation in atmospheric environments
Energy Technology Data Exchange (ETDEWEB)
Cupini, E. [ENEA, Centro Ricerche Ezio Clementel, Bologna, (Italy). Dipt. Innovazione
1999-07-01
The peculiarities of the PREMAR-2 code, aimed at Monte Carlo simulation of radiation transport in atmospheric environments in the infrared-to-ultraviolet frequency range, are described. With respect to the previously developed PREMAR code, the new code handles not only plane multilayers but also spherical multilayers and finite sequences of vertical layers, each with its own atmospheric behaviour, together with refraction, so that long-range, highly slanted paths can now be modelled more faithfully. A zenithal angular dependence of the albedo coefficient has moreover been introduced. Lidar systems, with spatially independent source and telescope, can again be simulated, and this latest version of the code also supports sensitivity analyses: the consequences for radiation transport of small perturbations in physical components of the atmospheric environment may be analyzed and the related effects on the sought results estimated. The code requires a library of physical data (reaction coefficients, phase functions and refraction indexes) providing the essential features of the environment of interest for the Monte Carlo simulation. Variance-reduction techniques have been enhanced in PREMAR-2, for instance by introducing a local forced-collision technique especially suited to lidar system simulations. Encouraging comparisons between code and experimental results obtained at the Brasimone Centre of ENEA have so far been achieved, even if further checks of the code remain to be performed. [Italian abstract, translated] This report describes the main features of the PREMAR-2 code, which performs Monte Carlo simulation of electromagnetic radiation transport in the atmosphere in the frequency range from the infrared to the ultraviolet. With respect to the previously developed PREMAR code, the code
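A forced-collision technique of the kind mentioned replaces the analog free-flight sample with a sample from the exponential distribution truncated to the region of interest, weighting the history by the collision probability so the estimator stays unbiased. A sketch of the truncated sampling (parameters are illustrative, not PREMAR-2 internals):

```python
import numpy as np

rng = np.random.default_rng(2)

def forced_collision(sigma, length, n):
    """Force collisions inside a segment of optical depth sigma*length.

    Distances are drawn from the exponential distribution truncated to
    (0, length) by inverting its CDF; each forced collision carries the
    weight p_coll = 1 - exp(-sigma*length), which keeps the expected
    number of collisions unbiased.
    """
    p_coll = 1.0 - np.exp(-sigma * length)
    xi = rng.random(n)
    dist = -np.log(1.0 - xi * p_coll) / sigma
    return dist, p_coll

dist, w = forced_collision(sigma=0.1, length=2.0, n=100_000)
```

This is why the technique helps in lidar simulations: even in optically thin air, every history produces a (low-weight) scattering event that can contribute to the telescope signal.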
A zero-variance based scheme for Monte Carlo criticality simulations
Christoforou, S.
2010-01-01
The ability of the Monte Carlo method to solve particle transport problems by simulating the particle behaviour makes it a very useful technique in nuclear reactor physics. However, the statistical nature of Monte Carlo implies that there will always be a variance associated with the estimate
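The zero-variance idea behind such schemes is easiest to see on a one-dimensional integral: if points are sampled from a density proportional to the contribution itself, every history scores the same value and the variance vanishes. A toy sketch (f(x) = 2x on [0, 1], exact integral 1; the setup is illustrative, not the criticality scheme of the thesis):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 10_000

# Analog estimator: x ~ U(0,1), score f(x) = 2x  ->  nonzero variance.
analog_scores = 2.0 * rng.random(n)

# Zero-variance estimator: sample x from p*(x) = f(x)/I = 2x
# (inverse CDF: x = sqrt(u)) and score f(x)/p*(x), which equals I exactly.
x = np.sqrt(rng.random(n))
zv_scores = (2.0 * x) / (2.0 * x)
```

In transport, p* plays the role of the (unknown) importance function; practical schemes approximate it, which reduces rather than eliminates the variance.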
International Nuclear Information System (INIS)
Roncali, Emilie; Schmall, Jeffrey P; Viswanath, Varsha; Berg, Eric; Cherry, Simon R
2014-01-01
Current developments in positron emission tomography focus on improving timing performance for scanners with time-of-flight (TOF) capability, and incorporating depth-of-interaction (DOI) information. Recent studies have shown that incorporating DOI correction in TOF detectors can improve timing resolution, and that DOI also becomes more important in long axial field-of-view scanners. We have previously reported the development of DOI-encoding detectors using phosphor-coated scintillation crystals; here we study the timing properties of those crystals to assess the feasibility of providing some level of DOI information without significantly degrading the timing performance. We used Monte Carlo simulations to provide a detailed understanding of light transport in phosphor-coated crystals which cannot be fully characterized experimentally. Our simulations used a custom reflectance model based on 3D crystal surface measurements. Lutetium oxyorthosilicate crystals were simulated with a phosphor coating in contact with the scintillator surfaces and an external diffuse reflector (teflon). Light output, energy resolution, and pulse shape showed excellent agreement with experimental data obtained on 3 × 3 × 10 mm³ crystals coupled to a photomultiplier tube. Scintillator intrinsic timing resolution was simulated with head-on and side-on configurations, confirming the trends observed experimentally. These results indicate that the model may be used to predict timing properties in phosphor-coated crystals and guide the coating for optimal DOI resolution/timing performance trade-off for a given crystal geometry. Simulation data suggested that a time stamp generated from early photoelectrons minimizes degradation of the timing resolution, thus making this method potentially more useful for TOF-DOI detectors than our initial experiments suggested. Finally, this approach could easily be extended to the study of timing properties in other scintillation crystals, with a
Implementation of a Monte Carlo algorithm for neutron transport on a massively parallel SIMD machine
International Nuclear Information System (INIS)
Baker, R.S.
1993-01-01
We present some results from the recent adaptation of a vectorized Monte Carlo algorithm to a massively parallel architecture. The performance of the algorithm on a single-processor Cray Y-MP and a Thinking Machines Corporation CM-2 and CM-200 is compared for several test problems. The results show that significant speedups are obtainable for vectorized Monte Carlo algorithms on massively parallel machines, even when the algorithms are applied to realistic problems which require extensive variance reduction. However, the architecture of the Connection Machine does place some limitations on the regime in which the Monte Carlo algorithm may be expected to perform well. (orig.)
ITS Version 3.0: The Integrated TIGER Series of coupled electron/photon Monte Carlo transport codes
International Nuclear Information System (INIS)
Halbleib, J.A.; Kensek, R.P.; Valdez, G.D.; Mehlhorn, T.A.; Seltzer, S.M.; Berger, M.J.
1993-01-01
ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields. It combines operational simplicity and physical accuracy in order to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Flexibility of construction permits tailoring of the codes to specific applications and extension of code capabilities to more complex applications through simple update procedures
DEFF Research Database (Denmark)
Salling, Kim Bang; Leleur, Steen
2006-01-01
This paper presents the Danish CBA-DK software model for assessment of transport infrastructure projects. The assessment model is based on both a deterministic calculation, following the cost-benefit analysis (CBA) methodology in a Danish manual from the Ministry of Transport, and on a stochastic calculation, where risk analysis (RA) is carried out using Monte Carlo simulation (MCS). After a description of the deterministic and stochastic calculations, emphasis is placed on the RA part of CBA-DK, with considerations about which probability distributions to make use of. Furthermore, a comprehensive…
Monte-Carlo Impurity transport simulations in the edge of the DIII-D tokamak using the MCI code
International Nuclear Information System (INIS)
Evans, T.E.; Mahdavi, M.A.; Sager, G.T.; West, W.P.; Fenstermacher, M.E.; Meyer, W.H.; Porter, G.D.
1995-07-01
A Monte-Carlo Impurity (MCI) transport code is used to follow trace impurities through multiple ionization states in realistic 2-D tokamak geometries. The MCI code is used to study impurity transport along the open magnetic field lines of the Scrape-off Layer (SOL) and to understand how impurities get into the core from the SOL. An MCI study concentrating on the entrainment of carbon impurity ions by the deuterium background plasma into the DIII-D divertor is discussed. MCI simulation results are compared to experimental DIII-D carbon measurements
Geometry system used in the General Monte Carlo transport code SPARTAN
International Nuclear Information System (INIS)
Bending, R.C.; Easter, P.G.
1974-01-01
The geometry routines used in the general-purpose, three-dimensional particle transport code SPARTAN are described. The code is designed to deal with the very complex geometries encountered in lattice cell and fuel handling calculations, health physics, and shielding problems. Regions of the system being studied may be represented by simple shapes (spheres, cylinders, and so on) or by multinomial surfaces of any order, and many simple shapes may be combined to make up a complex layout. The geometry routines are designed to allow the program to carry out a number of tasks (such as sampling for a random point or tracking a path through several regions) in any order, so that the use of the routines is not restricted to a particular tracking or scoring method. Routines for reading, checking, and printing the data are included. (U.S.)
A concurrent vector-based steering framework for particle transport
International Nuclear Information System (INIS)
Apostolakis, John; Brun, René; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro
2014-01-01
High Energy Physics has traditionally been a technology-limited science that has pushed the boundaries of both the detectors collecting the information about the particles and the computing infrastructure processing this information. In recent years, however, increases in computing power have come in the form of increased parallelism at all levels, and High Energy Physics now has to optimise its code to take advantage of the new architectures, including GPUs and hybrid systems. One of the primary targets for optimisation is the particle transport code used to simulate the detector response, as it is largely experiment independent and one of the most demanding applications in terms of CPU resources. The Geant Vector Prototype project explores innovative designs in particle transport intended to obtain maximal performance on the new architectures. This paper describes the current status of the project and its future perspectives. In particular, we describe how the present design tries to expose the parallelism of the problem at all possible levels, aiming to minimise contention and maximise concurrency, both at the coarse granularity level (threads) and at the micro granularity level (vectorisation, instruction pipelining, multiple instructions per cycle). Future plans and perspectives are also discussed
A ballistic transport model for electronic excitation following particle impact
Hanke, S.; Heuser, C.; Weidtmann, B.; Wucher, A.
2018-01-01
We present a ballistic model for the transport of electronic excitation energy induced by keV particle bombardment onto a solid surface. Starting from a free-electron-gas model, the Boltzmann transport equation (BTE) is employed to follow the evolution of the temporal and spatial distribution function f(r, k, t) describing the occupation probability of an electronic state k at position r and time t. Three different initializations of the distribution function are considered: (i) a thermal distribution function with a locally and temporally elevated electron temperature, (ii) a peak excitation at a specific energy above the Fermi level with a quasi-isotropic distribution in k-space, and (iii) an anisotropic peak excitation with k-vectors oriented in a specific transport direction. While the first initialization resembles a distribution function which may, for instance, result from electronic friction of moving atoms within an ion-induced collision cascade, the peak excitations can in principle result from an autoionization process after excitation in close binary collisions. By numerically solving the BTE, we study the electronic energy exchange along a one-dimensional transport direction to obtain a time- and space-resolved excitation energy distribution function, which is then analyzed in view of general transport characteristics of the chosen model system.
Wan Chan Tseung, H; Ma, J; Beltran, C
2015-06-01
Very fast Monte Carlo (MC) simulations of proton transport have recently been implemented on graphics processing units (GPUs). However, these MCs usually use simplified models for nonelastic proton-nucleus interactions. Our primary goal is to build a GPU-based proton transport MC with detailed modeling of elastic and nonelastic proton-nucleus collisions. Using the CUDA framework, the authors implemented GPU kernels for the following tasks: (1) simulation of beam spots from our possible scanning nozzle configurations, (2) proton propagation through CT geometry, taking into account nuclear elastic scattering, multiple scattering, and energy-loss straggling, (3) modeling of the intranuclear cascade stage of nonelastic interactions when they occur, (4) simulation of nuclear evaporation, and (5) statistical error estimates on the dose. To validate our MC, the authors performed (1) secondary particle yield calculations in proton collisions with therapeutically relevant nuclei, (2) dose calculations in homogeneous phantoms, and (3) recalculations of complex head and neck treatment plans from a commercially available treatment planning system, and compared with GEANT4.9.6p2/TOPAS. Yields, energy, and angular distributions of secondaries from nonelastic collisions on various nuclei are in good agreement with the GEANT4.9.6p2 Bertini and Binary cascade models. The 3D-gamma pass rate at 2%-2 mm for treatment plan simulations is typically 98%. The net computational time on an NVIDIA GTX680 card, including all CPU-GPU data transfers, is ∼20 s for 1 × 10⁷ proton histories. Our GPU-based MC is the first of its kind to include a detailed nuclear model to handle nonelastic interactions of protons with any nucleus. Dosimetric calculations are in very good agreement with GEANT4.9.6p2/TOPAS. Our MC is being integrated into a framework to perform fast routine clinical QA of pencil-beam based treatment plans, and is being used as the dose calculation engine in a clinically
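The 3D-gamma pass rate quoted is a standard dose-comparison metric: a point passes if some nearby evaluated dose lies within both the dose tolerance and the distance-to-agreement. A minimal 1D version on a hypothetical Gaussian depth-dose profile shifted by half a millimetre (all profiles invented for illustration) shows the idea:

```python
import numpy as np

def gamma_index(x, dose_ref, dose_eval, dd=0.02, dta=2.0):
    """1D gamma analysis: dd is the dose tolerance as a fraction of the
    maximum reference dose, dta the distance-to-agreement in mm.
    A reference point passes when gamma <= 1."""
    d_norm = dd * dose_ref.max()
    g = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        # Minimize the combined dose/distance penalty over all eval points.
        term = ((dose_eval - di) / d_norm) ** 2 + ((x - xi) / dta) ** 2
        g[i] = np.sqrt(term.min())
    return g

x = np.linspace(0.0, 100.0, 201)              # positions in mm
ref = np.exp(-((x - 50.0) / 20.0) ** 2)       # reference dose profile
ev = np.exp(-((x - 50.5) / 20.0) ** 2)        # evaluated dose, 0.5 mm shift
pass_rate = (gamma_index(x, ref, ev) <= 1.0).mean()
```

A 0.5 mm shift is well inside the 2 mm distance tolerance, so every point passes; a shift beyond the tolerance would start failing points in the steep-gradient region first.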
Recently developed methods in neutral-particle transport calculations: overview
International Nuclear Information System (INIS)
Alcouffe, R.E.
1982-01-01
It has become increasingly apparent that successful, general methods for the solution of the neutral-particle transport equation involve a close connection between the spatial-discretization method used and the source-acceleration method chosen. The first-order form of the transport equation is considered, with discrete-ordinates angular discretization and spatial discretization based upon a mesh arrangement; characteristic methods are considered briefly in the context of future, desirable developments. The ideal spatial-discretization method is described as having the following attributes: (1) positive boundary data yields a positive angular flux within the mesh, including its boundaries; (2) it satisfies the particle balance equation over the mesh, that is, the method is conservative; (3) it possesses the diffusion limit independent of spatial mesh size, that is, for a linearly isotropic flux assumption, the transport differencing reduces to a suitable diffusion-equation differencing; (4) the method is unconditionally acceleratable, i.e., for each mesh size, the method is unconditionally convergent with source-iteration acceleration. It is doubtful that a single method possesses all these attributes for a general problem. Some commonly used methods are outlined and their computational performance and usefulness are compared; recommendations for future development, including practical computational considerations, are detailed
High energy particle transport code NMTC/JAM
International Nuclear Information System (INIS)
Niita, Koji; Meigo, Shin-ichiro; Takada, Hiroshi; Ikeda, Yujiro
2001-03-01
We have developed a high energy particle transport code NMTC/JAM, which is an upgraded version of NMTC/JAERI97. The applicable energy range of NMTC/JAM is extended in principle up to 200 GeV for nucleons and mesons by introducing the high energy nuclear reaction code JAM for the intranuclear cascade part. For the evaporation and fission process, we have also implemented a new model, GEM, by which the light-nucleus production from the excited residual nucleus can be described. In line with the extended applicable energy range, we have upgraded the nucleon-nucleus non-elastic, elastic and differential elastic cross-section data by employing new systematics. In addition, particle transport in a magnetic field has been implemented for beam transport calculations. In this upgrade, some new tally functions are added and the input data format has been made considerably more user-friendly. Owing to these new calculation functions and utilities, NMTC/JAM enables us to carry out reliable neutronics studies of large-scale target systems with complex geometry more accurately and easily than before. This report serves as a user manual of the code. (author)
Production and global transport of Titan's sand particles
Barnes, Jason W.; Lorenz, Ralph D.; Radebaugh, Jani; Hayes, Alexander G.; Arnold, Karl; Chandler, Clayton
2015-06-01
Previous authors have suggested that Titan's individual sand particles form by either sintering or by lithification and erosion. We suggest two new mechanisms for the production of Titan's organic sand particles that would occur within bodies of liquid: flocculation and evaporitic precipitation. Such production mechanisms would suggest discrete sand sources in dry lakebeds. We search for such sources, but find no convincing candidates with the present Cassini Visual and Infrared Mapping Spectrometer coverage. As a result we propose that Titan's equatorial dunes may represent a single, global sand sea with west-to-east transport providing sources and sinks for sand in each interconnected basin. The sand might then be transported around Xanadu by fast-moving Barchan dune chains and/or fluvial transport in transient riverbeds. A river at the Xanadu/Shangri-La border could explain the sharp edge of the sand sea there, much like the Kuiseb River stops the Namib Sand Sea in southwest Africa on Earth. Future missions could use the composition of Titan's sands to constrain the global hydrocarbon cycle.
Size segregation in bedload sediment transport at the particle scale
Frey, P.; Martin, T.
2011-12-01
Bedload, the larger material transported in stream channels, has major consequences for the management of water resources, for environmental sustainability, and for flood alleviation. In mountains especially, steep slopes drive intense transport of a wide range of grain sizes. Our ability to compute local and even bulk quantities such as the sediment flux in rivers is poor; one important reason is that grain-grain interactions in stream channels have been neglected. An arguably more important difficulty pertains to the very wide range of grain sizes, which leads to grain-size sorting or segregation. This phenomenon largely modifies fluxes and results in patterns seen ubiquitously in nature, such as armoring and downstream fining. Most studies have concerned the spontaneous percolation of fine grains into immobile gravels, because of implications for salmonid spawning beds or stratigraphical interpretation. When the substrate is moving, however, the segregation process is different, as statistically occurring void openings permit downward percolation of larger particles. This process, also named "kinetic sieving", has been studied in industrial contexts where segregation of granular or powder materials is often undesirable. We present an experimental study of two-size mixtures of coarse spherical glass beads entrained by a shallow, turbulent and supercritical water flow down a steep channel with a mobile bed. The particle diameters were 4 and 6 mm, the channel width 6.5 mm, and the channel inclination ranged from 7.5 to 12.5%. The water flow rate and the particle rate were kept constant at the upstream entrance. First, only the coarser particles were fed in, at a rate adjusted to obtain bedload equilibrium, that is, neither bed degradation nor aggradation over sufficiently long time intervals. Then a low rate of smaller particles (about 1% of the total sediment rate) was introduced to study the spatial and temporal evolution of the segregating smaller particles
Comparison of Monte Carlo method and deterministic method for neutron transport calculation
International Nuclear Information System (INIS)
Mori, Takamasa; Nakagawa, Masayuki
1987-01-01
The report outlines major features of the Monte Carlo method by citing various applications of the method and techniques used in Monte Carlo codes. Major areas of application include analysis of measurements on fast critical assemblies, nuclear fusion reactor neutronics analysis, criticality safety analysis, evaluation by the VIM code, and shielding calculations. Major techniques used in Monte Carlo codes include the random walk method, geometry representation (combinatorial geometry; first-, second- and fourth-degree surfaces; lattice geometry), nuclear data representation, estimation methods (track length, collision, analog (absorption), surface crossing, point), and variance reduction (Russian roulette, splitting, exponential transform, importance sampling, correlated sampling). Major features of the Monte Carlo method are as follows: 1) neutron source distributions and systems of complex geometry can be simulated accurately, 2) physical quantities such as the neutron flux in a region, on a surface or at a point can be evaluated, and 3) calculation requires less time. (Nogami, K.)
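Two of the variance-reduction techniques listed, Russian roulette and splitting, are usually applied together as weight-window population control: low-weight histories are probabilistically killed, high-weight histories are split, and in both cases the expected total weight is preserved. A sketch with illustrative thresholds:

```python
import numpy as np

rng = np.random.default_rng(7)

def roulette_and_split(weights, w_low=0.25, w_high=4.0, survival_w=1.0):
    """Weight-window style population control (thresholds illustrative).

    Particles below w_low play Russian roulette: they survive with
    probability w/survival_w and are reset to weight survival_w, else
    they are killed.  Particles above w_high are split into copies of
    reduced weight.  The expected total weight is preserved.
    """
    out = []
    for w in weights:
        if w < w_low:                       # Russian roulette
            if rng.random() < w / survival_w:
                out.append(survival_w)
        elif w > w_high:                    # splitting
            n = int(np.ceil(w / survival_w))
            out.extend([w / n] * n)
        else:
            out.append(w)
    return np.array(out)

pop = rng.uniform(0.01, 8.0, size=200_000)
new_pop = roulette_and_split(pop)
```

Roulette spends less time on histories that contribute little; splitting spreads the weight of important histories over more samples, cutting the variance of their contribution.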
International Nuclear Information System (INIS)
Peng Xiaoling; Min Yong; Ma Tianyu; Luo Wei; Yan Mi
2009-01-01
The structures of suspensions comprised of magnetic and nonmagnetic particles in magnetic fields are studied using two-dimensional Monte Carlo simulations. The magnetic interaction among magnetic particles, the magnetic field strength, and the concentrations of both magnetic and nonmagnetic particles are considered as key influencing factors in the present work. The results show that chain-like clusters of magnetic particles are formed along the field direction. The size of the clusters increases with increasing magnetic interaction between magnetic particles, while it remains nearly unchanged as the field strength increases. As the concentration of magnetic particles increases, both the number and the size of the clusters increase. Moreover, nonmagnetic particles are found to hinder the migration of magnetic ones. As the concentration of nonmagnetic particles increases, the hindrance to migration of magnetic particles is enhanced
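The chain formation along the field reported here follows from the anisotropy of the dipole-dipole interaction between field-aligned moments. A small sketch in reduced units (the coupling constant is illustrative) checks the sign of the interaction in the two limiting contact configurations:

```python
import numpy as np

def pair_energy(r, theta, lam=4.0):
    """Reduced dipole-dipole energy of two moments aligned with the field.

    r     : centre-to-centre distance in particle diameters
    theta : angle between the separation vector and the field
    lam   : dimensionless dipolar coupling constant (illustrative value)
    """
    return lam * (1.0 - 3.0 * np.cos(theta) ** 2) / r**3

# Head-to-tail contact (theta = 0) is attractive  -> chains along the field;
# side-by-side contact (theta = pi/2) is repulsive -> chains stay separate.
e_head_to_tail = pair_energy(1.0, 0.0)
e_side_by_side = pair_energy(1.0, np.pi / 2.0)
```

A Metropolis simulation with this pair energy favours moves that lengthen head-to-tail chains, which is the clustering behaviour the abstract describes.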
On the use of antithetic variates in particle transport problems
International Nuclear Information System (INIS)
Milgram, M.S.
2001-01-01
The possible use of antithetic variates as a method of variance reduction in particle transport problems is investigated by performing numerical experiments. It is found that unless variance reduction is very carefully defined, it is possible, with antithetic variates, to spuriously detect reduction or to miss true reduction. Once such subtleties are overcome, it is shown that antithetic variates can reduce variance in multidimensional integration up to a point. The phenomenon of spontaneous correlation is defined and identified as the cause of failure. The surprising result that it sometimes pays to track non-contributing particle histories is demonstrated by means of a zero-variance integration analogue. The principles developed in the investigation of multi-variable integration are then employed in a simple calculation of energy deposition using the EGS4 computer code. Promising results are obtained for the total energy deposition problem, but the depth-dose problem remains unsolved. Possible means of overcoming the difficulties are suggested.
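The antithetic-variates idea the abstract examines can be demonstrated on a one-dimensional integral; the integrand, sample counts, and helper names below are illustrative only, not from the paper:

```python
import math
import random

def mc_plain(f, n, rng):
    """Independent uniform samples of f on [0, 1)."""
    return [f(rng.random()) for _ in range(n)]

def mc_antithetic(f, n, rng):
    """Antithetic pairs (u, 1-u): for a monotone integrand the two halves
    are negatively correlated, so each pair average has lower variance
    than two independent draws."""
    vals = []
    for _ in range(n // 2):
        u = rng.random()
        vals.append(0.5 * (f(u) + f(1.0 - u)))
    return vals

def variance(vals):
    """Unbiased sample variance."""
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / (len(vals) - 1)
```

For the monotone integrand exp(x) on [0, 1], the per-pair variance of the antithetic estimator is far below the single-sample variance; the abstract's caution applies when defining such comparisons carefully at equal computational cost.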
Computational transport phenomena of fluid-particle systems
Arastoopour, Hamid; Abbasi, Emad
2017-01-01
This book concerns the most up-to-date advances in computational transport phenomena (CTP), an emerging tool for the design of gas-solid processes such as fluidized bed systems. The authors examine recent work in kinetic theory and CTP and illustrate gas-solid processes’ many applications in the energy, chemical, pharmaceutical, and food industries. They also discuss the kinetic theory approach in developing constitutive equations for gas-solid flow systems and how it has advanced over the last decade as well as the possibility of obtaining innovative designs for multiphase reactors, such as those needed to capture CO2 from flue gases. Suitable as a concise reference and a textbook supplement for graduate courses, Computational Transport Phenomena of Gas-Solid Systems is ideal for practitioners in industries involved with the design and operation of processes based on fluid/particle mixtures, such as the energy, chemicals, pharmaceuticals, and food processing. Explains how to couple the population balance e...
Numerical simulation of fluid particle transport through porous media
Najam, S
1999-01-01
The work presented in this report aims at the numerical simulation of fluid particle transport through porous media. For this purpose various mathematical models and numerical schemes are studied. A mathematical model is derived based on Darcy's law and the continuity equation; it is discretized using finite difference schemes, and the Gauss-Seidel iterative procedure is used as a solver. For transient problems the Crank-Nicolson method is used. Finally, software in Visual Basic 3.0 is developed that can simulate fluid transport through a porous medium by prompting the user to specify the material and geometrical properties of the medium. The unknown pressure heads can be determined at various nodal points, and the results are visualized by a colored grid display or by surface plots.
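The solver described above (Darcy's law plus continuity, discretized by finite differences and iterated with Gauss-Seidel) can be sketched for the steady one-dimensional case with uniform permeability, where it reduces to Laplace's equation. Grid size and boundary heads are illustrative, and Python stands in for the report's Visual Basic:

```python
def solve_pressure_gauss_seidel(n=21, p_left=1.0, p_right=0.0,
                                tol=1e-10, max_iter=100000):
    """Steady 1-D Darcy flow with uniform permeability gives Laplace's
    equation; central differences yield p_i = (p_{i-1} + p_{i+1}) / 2,
    which Gauss-Seidel sweeps in place until the largest update
    falls below tol."""
    p = [0.0] * n
    p[0], p[-1] = p_left, p_right  # fixed pressure-head boundary values
    for _ in range(max_iter):
        delta = 0.0
        for i in range(1, n - 1):
            new = 0.5 * (p[i - 1] + p[i + 1])
            delta = max(delta, abs(new - p[i]))
            p[i] = new  # in-place update is what makes this Gauss-Seidel
        if delta < tol:
            break
    return p
```

With these boundary conditions the converged solution is the linear pressure profile, a convenient sanity check for the iteration.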
Directory of Open Access Journals (Sweden)
Song Hyun Kim
2015-08-01
The chord length sampling method in Monte Carlo simulations is used to model spherical particles in stochastic media with a random sampling technique. It has received attention due to its high calculation efficiency as well as user convenience; however, a technical issue regarding the boundary effect has been noted. In this study, after analyzing the distribution characteristics of spherical particles using an explicit method, an alternative chord length sampling method is proposed. In addition, for modeling in finite media, a correction method for the boundary effect is proposed. Using the proposed method, sample probability distributions and relative errors were estimated and compared with those calculated by the explicit method. The results show that the reconstruction ability and modeling accuracy of the particle probability distribution with the proposed method are considerably high. Also, the local packing fraction results show that the proposed method successfully solves the boundary effect problem. It is expected that the proposed method can contribute to increased modeling accuracy in stochastic media.
International Nuclear Information System (INIS)
Kondo, Shuji; Nanbu, Kenichi
2001-01-01
An axisymmetrical particle-in-cell/Monte Carlo simulation is performed for modeling direct-current-driven planar magnetron discharge. The axisymmetrical structure of plasma parameters such as plasma density, electric field, and electron and ion energy is examined in detail. The effects of applied voltage and magnetic field strength on the discharge are also clarified. The model apparatus has a narrow target-anode gap of 20 mm to keep the computational time manageable. This results in current densities that are very low compared with experimental results for a wider target-anode gap. The current-voltage characteristics show a negative slope, in contrast with many experimental results. However, this is understandable from Gu and Lieberman's similarity equation. The negative slope appears to be due to the narrow gap.
Monte Carlo Library Least Square (MCLLS) Method for Multiple Radioactive Particle Tracking in PBR
Wang, Zhijian; Lee, Kyoung; Gardner, Robin
2010-03-01
In this work, a new method of radioactive particle tracking is proposed. Accurate detector response functions (DRFs) for NaI detectors were generated as a library from MCNP5, with a significant speed-up factor of 200. This makes the MCLLS method practical for locating and tracking a radioactive particle in a modular pebble bed reactor (PBR) by searching for minimum chi-square values. The method was tested and found to work well under laboratory conditions with an array of six 2" x 2" NaI detectors. The method is introduced in both forward and inverse forms. A single radioactive particle tracking system with three collimated 2" x 2" NaI detectors is used for benchmarking.
Monte Carlo Simulations of New 2D Ripple Filters for Particle Therapy Facilities
DEFF Research Database (Denmark)
Ringbæk, Toke Printz; Weber, Uli; Petersen, Jørgen B.B.
2014-01-01
Introduction: At particle therapy facilities with pencil beam scanning, the implementation of a Ripple Filter (RiFi) broadens the Bragg peak (BP), which leads to fewer energy steps from the accelerator being required to obtain a homogeneous dose coverage of the planned target volume (PTV) […] for various ion types, initial particle energies and distances from the RiFi to the phantom surface, as well as in the depth of the phantom. The beam delivery and monitor system (BAMS) used at Marburg, the Heidelberg Ionentherapiezentrum (HIT), Universitätsklinikum Heidelberg, Germany and the GSI […] needed by TRiP, and for recalculating the physical dose distribution after TRiP optimization. Results: At short distances from the RiFi to the phantom surface, fine structures in the dose distribution are observed. For various RiFis, ion types and initial particle energies, the distance dmax at which […]
International Nuclear Information System (INIS)
Franke, B.C.; Kensek, R.P.; Prinja, A.K.
2013-01-01
Stochastic-media simulations require numerous boundary crossings. We consider two Monte Carlo electron transport approaches and evaluate their accuracy in the presence of numerous material boundaries. In the condensed-history method, approximations are made based on infinite-medium solutions for multiple scattering over some track length; typically, further approximations are employed for material-boundary crossings, where infinite-medium solutions become invalid. We have previously explored an alternative 'condensed transport' formulation, a Generalized Boltzmann-Fokker-Planck (GBFP) method, which requires no special boundary treatment but instead uses approximations to the electron-scattering cross sections. Some limited capabilities for analog transport and a GBFP method have been implemented in the Integrated Tiger Series (ITS) codes, and improvements have been made to the condensed-history algorithm. The performance of the ITS condensed-history and condensed-transport algorithms is assessed for material-boundary crossings, both by introducing artificial material boundaries and by comparison to analog Monte Carlo simulations. (authors)
Cullen, D
2000-01-01
TART2000 is a coupled neutron-photon, three-dimensional, combinatorial geometry, time-dependent Monte Carlo radiation transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART2000 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART2000 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART2000 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART2000 and its data files.
International Nuclear Information System (INIS)
Xu, Y; Tian, Z; Jiang, S; Jia, X; Zhou, L
2015-01-01
Purpose: Monte Carlo (MC) simulation is an important tool for solving radiotherapy and medical imaging problems, but low computational efficiency hinders its wide application. Conventionally, MC is performed in a particle-by-particle fashion, and the lack of control over particle trajectories is a main cause of low efficiency in some applications. In cone beam CT (CBCT) projection simulation, for example, a significant amount of computation is wasted on transporting photons that never reach the detector. To solve this problem, we propose an innovative MC simulation scheme with a path-by-path sampling method. Methods: Consider a photon path starting at the x-ray source. After going through a set of interactions, it ends at the detector. In the proposed scheme, we sample an entire photon path each time. The Metropolis-Hastings algorithm is employed to accept or reject a sampled path based on a calculated acceptance probability, in order to maintain the correct relative probabilities among different paths, which are governed by photon transport physics. We developed a GPU package, gMMC, with this new scheme implemented. Its performance was tested on a sample problem of CBCT projection simulation for a homogeneous object, and the results were compared with those obtained using gMCDRR, a GPU-based MC tool with the conventional particle-by-particle simulation scheme. Results: Calculated scattered photon signals in gMMC agreed with those from gMCDRR to within a relative difference of 3%. It took 3.1 hours for gMCDRR to simulate 7.8e11 photons and 246.5 seconds for gMMC to simulate 1.4e10 paths. Under this setting, both results attained the same ∼2% statistical uncertainty. Hence, a speed-up factor of ∼45.3 was achieved by the new path-by-path simulation scheme, in which all the computation is spent on photons that contribute to the detector signal. Conclusion: We propose a novel path-by-path simulation scheme that enables a significant efficiency enhancement for MC particle transport.
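The accept/reject step at the heart of the path-by-path scheme is the standard Metropolis-Hastings rule. The sketch below applies it to a simple one-dimensional density as an analogue; the target density, step size, and function names are illustrative (gMMC itself evaluates whole-path probabilities from photon transport physics):

```python
import math
import random

def metropolis_hastings(log_density, x0, n_samples, step=1.0, rng=random):
    """Random-walk Metropolis-Hastings: propose a symmetric move and
    accept it with probability min(1, pi(proposed)/pi(current)); on
    rejection the current state is repeated.  The same accept/reject rule
    governs whole-path acceptance in a path-by-path scheme, with pi being
    the path probability."""
    x, samples = x0, []
    for _ in range(n_samples):
        y = x + rng.uniform(-step, step)
        log_alpha = log_density(y) - log_density(x)
        if rng.random() < math.exp(min(0.0, log_alpha)):
            x = y  # accept the proposal
        samples.append(x)
    return samples
```

Working with log densities keeps the ratio numerically stable, and states outside the support simply get log density of minus infinity and are always rejected.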
Energy Technology Data Exchange (ETDEWEB)
Bellezzo, Murillo
2014-09-01
As the most accurate method to estimate absorbed dose in radiotherapy, the Monte Carlo method (MCM) has been widely used in radiotherapy treatment planning. Nevertheless, its efficiency can be improved for clinical routine applications. In this thesis, the CUBMC code is presented, a GPU-based MC photon transport algorithm for dose calculation under the Compute Unified Device Architecture (CUDA) platform. The simulation of physical events is based on the algorithm used in PENELOPE, and the cross section table used is the one generated by the MATERIAL routine, also present in the PENELOPE code. Photons are transported in voxel-based geometries with different compositions. Two distinct approaches are used for transport simulation: the first forces the photon to stop at every voxel boundary, while the second is the Woodcock method, in which the photon ignores the existence of boundaries and travels in a homogeneous fictitious medium. The CUBMC code aims to be an alternative Monte Carlo simulation code that, by exploiting the parallel processing capability of graphics processing units (GPUs), provides high-performance simulations on low-cost compact machines, and can thus be applied in clinical cases and incorporated into treatment planning systems for radiotherapy. (author)
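The Woodcock method mentioned above can be sketched as delta-tracking in one dimension; the cross-section function and majorant below are illustrative stand-ins (CUBMC's actual transport follows PENELOPE's physics):

```python
import math
import random

def woodcock_distance(sigma_of_x, sigma_max, rng=random):
    """Delta-tracking (Woodcock) sampling of the distance to the next real
    collision in a heterogeneous 1-D medium.  The particle flies with the
    majorant cross section sigma_max, ignoring voxel boundaries; each
    tentative collision at x is real with probability
    sigma(x)/sigma_max, otherwise it is virtual and the flight continues."""
    x = 0.0
    while True:
        # exponential flight length sampled with the majorant cross section
        x += -math.log(1.0 - rng.random()) / sigma_max
        if rng.random() < sigma_of_x(x) / sigma_max:
            return x
```

Because boundary stops are replaced by the virtual-collision rejection game, the sampled free paths remain exactly exponential with the true local cross section, which is why the method is unbiased.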
Czech Academy of Sciences Publication Activity Database
Moučka, F.; Nezbeda, Ivo
2010-01-01
Roč. 36, 7-8 (2010), s. 526-534 ISSN 0892-7022 Institutional research plan: CEZ:AV0Z40720504 Keywords : multi-particle move MC * graphics processing unit * polarisable model Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 1.215, year: 2010
Czech Academy of Sciences Publication Activity Database
Moučka, F.; Nezbeda, Ivo
2009-01-01
Roč. 35, č. 8 (2009), s. 660-672 ISSN 0892-7022 Institutional research plan: CEZ:AV0Z40720504 Keywords : multi-particle move MC * parallelization * reaction field Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 1.028, year: 2009
Evidence for particle transport between alveolar macrophages in vivo
Energy Technology Data Exchange (ETDEWEB)
Benson, J.M.; Nikula, K.J.; Guilmette, R.A.
1995-12-01
Recent studies at this Institute have focused on determining the role of alveolar macrophages (AMs) in the transport of particles within and from the lung. For those studies, AMs previously labeled with the nuclear stain Hoechst 33342 and polychromatic Fluoresbrite microspheres (1 μm diameter, Polysciences, Inc., Warrington, PA) were instilled into the lungs of recipient F344 rats. The fate of the donor particles and the doubly labeled AMs within recipient lungs was followed for 32 d. Within 2-4 d after instillation, the polychromatic microspheres were found in both donor and resident AMs, suggesting that particle transfer occurred between the donor and resident AMs. However, this may also have been an artifact resulting from phagocytosis of microspheres from dead donor cells, or from the fading or degradation of Hoechst 33342 within the donor cells leading to their misidentification as resident AMs. The results support the earlier findings that microspheres in donor AMs can be transferred to resident AMs within 2 d after instillation.
Particle transport model sensitivity on wave-induced processes
Staneva, Joanna; Ricker, Marcel; Krüger, Oliver; Breivik, Oyvind; Stanev, Emil; Schrum, Corinna
2017-04-01
Different effects of wind waves on the hydrodynamics in the North Sea are investigated using a coupled wave (WAM) and circulation (NEMO) model system. The terms accounting for the wave-current interaction are the Stokes-Coriolis force and the sea-state dependent momentum and energy fluxes. The role of the different Stokes drift parameterizations is investigated using a particle-drift model, where the particles can be considered simple representations of either oil fractions or fish larvae. In ocean circulation models the momentum flux from the atmosphere, which is related to the wind speed, is passed directly to the ocean, controlled by the drag coefficient. In the real ocean, however, the waves also act as a reservoir of momentum and energy, because different amounts of the momentum flux from the atmosphere are taken up by the waves. In the coupled model system the momentum transferred into the ocean model is estimated as the fraction of the total flux that goes directly to the currents plus the momentum lost through wave dissipation. Additionally, we demonstrate that the wave-induced Stokes-Coriolis force leads to a deflection of the current. During extreme events the Stokes velocity is comparable in magnitude to the current velocity, and the resulting wave-induced drift is crucial for the transport of particles in the upper ocean. The performed sensitivity analyses demonstrate that the model skill depends on the chosen processes. The results are validated using surface drifters, ADCP, HF radar data and other in-situ measurements in different regions of the North Sea, with a focus on the coastal areas. The use of a coupled model system reveals that the newly introduced wave effects are important for drift-model performance, especially during extremes. These effects cannot be neglected in search and rescue, oil-spill, transport of biological material, or larva drift modelling.
High energy electromagnetic particle transportation on the GPU
Energy Technology Data Exchange (ETDEWEB)
Canal, P. [Fermilab]; Elvira, D. [Fermilab]; Jun, S. Y. [Fermilab]; Kowalkowski, J. [Fermilab]; Paterno, M. [Fermilab]; Apostolakis, J. [CERN]
2014-01-01
We present massively parallel high energy electromagnetic particle transportation through a finely segmented detector on a Graphics Processing Unit (GPU). Simulating events of energetic particle decay in a general-purpose high energy physics (HEP) detector requires intensive computing resources, due to the complexity of the geometry as well as the physics processes applied to particles copiously produced by primary collisions and secondary interactions. The recent advent of many-core and accelerator hardware architectures provides a variety of concurrent programming models applicable not only to high performance parallel computing, but also to conventional compute-intensive applications such as HEP detector simulation. The components of our prototype are a transportation process under a non-uniform magnetic field, geometry navigation with a set of solid shapes and materials, electromagnetic physics processes for electrons and photons, and an interface to a framework that dispatches bundles of tracks in a highly vectorized manner, optimizing for spatial locality and throughput. Core algorithms and methods are excerpted from the Geant4 toolkit and modified and optimized for the GPU. Program kernels written in C/C++ are designed to be compatible with CUDA and OpenCL, and to be generic enough for easy porting to future programming models and hardware architectures. To improve throughput by overlapping data transfers with kernel execution, multiple CUDA streams are used. Issues with floating point accuracy, random number generation, data structures, kernel divergence and register spills are also considered. A performance evaluation of the relative speedup compared with the corresponding sequential execution on a CPU is presented as well.
Experimental study of particle transport and density fluctuation in LHD
International Nuclear Information System (INIS)
Tanaka, K.; Michael, C.; Sanin, A.
2005-01-01
A variety of electron density (n_e) profiles have been observed in the Large Helical Device (LHD). The density profiles change dramatically with heating power and toroidal magnetic field (B_t) at the same line-averaged density. The particle transport coefficients, i.e., the diffusion coefficient (D) and convection velocity (V), are obtained experimentally in the standard configuration from density modulation experiments. The values of D and V are estimated separately in the core and edge. The diffusion coefficients are found to be a strong function of electron temperature (T_e), proportional to T_e^(1.7±0.9) in the core and T_e^(1.1±0.14) in the edge. Edge diffusion coefficients are proportional to B_t^(-2.08). It is found that the scaling of D in the edge is close to gyro-Bohm-like in nature. Non-zero V is observed, and it is found that the electron temperature gradient can drive particle convection, particularly in the core region. The convection velocity in the core reverses direction from inward to outward as the T_e gradient increases. In the edge, convection is inward directed in most cases of the present data set; it shows a modest tendency, being proportional to the T_e gradient and remaining inward directed. However, the toroidal magnetic field also significantly affects the value and direction of V. The density fluctuation spectrum varies with heating power, suggesting that it has an influence on particle transport. The value of k_⊥ρ_i is around 0.1, as expected for gyro-Bohm diffusion. Fluctuations are localized in both positive and negative density gradient regions of the hollow density profiles. The fluctuation power in each region is clearly distinguished, having different phase velocity profiles. (author)
MCNP: a general Monte Carlo code for neutron and photon transport. Version 3A. Revision 2
International Nuclear Information System (INIS)
Briesmeister, J.F.
1986-09-01
This manual is a practical guide for the use of our general-purpose Monte Carlo code MCNP. The first chapter is a primer for the novice user. The second chapter describes the mathematics, data, physics, and Monte Carlo simulation found in MCNP. This discussion is not meant to be exhaustive - details of the particular techniques and of the Monte Carlo method itself will have to be found elsewhere. The third chapter shows the user how to prepare input for the code. The fourth chapter contains several examples, and the fifth chapter explains the output. The appendices show how to use MCNP on particular computer systems at the Los Alamos National Laboratory and also give details about some of the code internals that those who wish to modify the code may find useful. 57 refs
Quasi-Monte Carlo methods: applications to modeling of light transport in tissue
Schafer, Steven A.
1996-05-01
Monte Carlo modeling of light propagation can accurately predict the distribution of light in scattering materials. A drawback of Monte Carlo methods is that they converge inversely with the square root of the number of iterations. Theoretical considerations suggest that convergence which scales inversely with the first power of the number of iterations is possible. We have previously shown that one can obtain at least a portion of that improvement by using van der Corput sequences in place of a conventional pseudo-random number generator. Here, we present our further analysis, and show that quasi-Monte Carlo methods do have limited applicability to light scattering problems. We also discuss potential improvements which may increase the applicability.
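The van der Corput sequence used above in place of a pseudo-random generator is the base-b radical inverse of the sample index; a minimal implementation:

```python
def van_der_corput(n, base=2):
    """Radical inverse of n: reflect the base-b digits of n about the
    radix point, yielding a low-discrepancy point in [0, 1)."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        n, digit = divmod(n, base)  # peel off the least significant digit
        q += digit * bk
        bk /= base
    return q
```

Successive indices fill the unit interval far more evenly than independent uniform draws, which is the source of the improved convergence rate discussed in the abstract, subject to the limitations the authors identify for scattering problems.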
International Nuclear Information System (INIS)
Krommes, J.A.; Kleva, R.G.; Oberman, C.
1978-05-01
A systematic theory is developed for the computation of electron transport in stochastic magnetic fields. Small scale magnetic perturbations arising, for example, from finite-β micro-instabilities are assumed to destroy the flux surfaces of a standard tokamak equilibrium. Because the magnetic lines then wander in a volume, electron radial flux is enhanced due to the rapid particle transport along as well as across the lines. By treating the magnetic lines as random variables, it is possible to develop a kinetic equation for the electron distribution function. This is solved approximately to yield the diffusion coefficient
Electrokinetic Particle Transport in Micro-Nanofluidics Direct Numerical Simulation Analysis
Qian, Shizhi
2012-01-01
Numerous applications of micro-/nanofluidics are related to particle transport in micro-/nanoscale channels, and electrokinetics has proved to be one of the most promising tools to manipulate particles in micro/nanofluidics. Therefore, a comprehensive understanding of electrokinetic particle transport in micro-/nanoscale channels is crucial to the development of micro/nano-fluidic devices. Electrokinetic Particle Transport in Micro-/Nanofluidics: Direct Numerical Simulation Analysis provides a fundamental understanding of electrokinetic particle transport in micro-/nanofluidics involving elect
Energy Technology Data Exchange (ETDEWEB)
Kotalczyk, G., E-mail: Gregor.Kotalczyk@uni-due.de; Kruis, F.E.
2017-07-01
Monte Carlo simulations based on weighted simulation particles can solve a variety of population balance problems and thus allow a solution framework to be formulated for many chemical engineering processes. This study presents a novel concept for the calculation of coagulation rates of weighted Monte Carlo particles by introducing a family of transformations to non-weighted Monte Carlo particles. Tuning the accuracy (named ‘stochastic resolution’ in this paper) of those transformations allows the construction of a constant-number coagulation scheme. Furthermore, a parallel algorithm for the inclusion of newly formed Monte Carlo particles due to nucleation is presented in the scope of a constant-number scheme: the low-weight merging. This technique is found to create significantly less statistical simulation noise than the conventional technique (named ‘random removal’ in this paper). Both concepts are combined into a single GPU-based simulation method, which is validated by comparison with the discrete-sectional simulation technique. Two test models describing constant-rate nucleation coupled to simultaneous coagulation in 1) the free-molecular regime or 2) the continuum regime are simulated for this purpose.
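The low-weight merging idea, keeping the particle count constant while conserving total statistical weight, can be sketched as follows; the particle representation and the selection rule are illustrative, not the authors' exact scheme:

```python
import random

def merge_lowest_weights(particles, rng=random):
    """Constant-number bookkeeping: replace the two lowest-weight
    simulation particles (weight, size) with a single particle carrying
    their combined weight.  The surviving size is chosen with probability
    proportional to weight, so size moments are preserved in expectation.
    Note: sorts the input list in place."""
    particles.sort(key=lambda p: p[0])
    (w1, s1), (w2, s2) = particles[0], particles[1]
    w = w1 + w2
    s = s1 if rng.random() < w1 / w else s2
    return [(w, s)] + particles[2:]
```

Merging the lightest particles first minimizes the statistical information lost per merge, which is consistent with the abstract's observation that this produces less noise than randomly removing a particle.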
Wieggers, R. C.; W. J. Goedheer,; M.R. Akdim,; F. Bijkerk,; Zegeling, P. A.
2008-01-01
We present a kinetic simulation of the plasma formed by photoionization in the intense flux of an extreme ultraviolet lithography (EUVL) light source. The model is based on the particle-in-cell plus Monte Carlo approach. The photoelectric effect and ionization by electron collisions are included.
Abdelgadir, Ahmed
2015-03-30
Recently, our group performed a set of direct numerical simulations (DNS) of soot formation and growth in a n-heptane three-dimensional non-premixed jet flame [Attili et al., Proc. Comb. Inst, 35, 2015], [Attili et al., Comb. Flame, 161, 2014], [Bisetti et al., Trans of the Royal Soc, 372, 2014]. The evolution of species relevant to soot formation and growth has been sampled along a large number of Lagrangian trajectories in the DNS. In this work, the DNS results are post-processed to compute the soot evolution along selected Lagrangian trajectories using a Monte Carlo method. An operator splitting approach is adopted to separate the deterministic processes (nucleation, surface growth, and oxidation) from coagulation, which is treated stochastically. The morphological properties of soot and the particle-size distribution are investigated. For trajectories that experience an early strong nucleation event, the particle-size distribution is found to be bimodal, as the soot particles have enough time to coagulate and grow, while it is unimodal for trajectories characterized by only late nucleation events. As a result, the average size distribution at two different crosswise positions in the flame is unimodal.
Energy Technology Data Exchange (ETDEWEB)
Biondo, Elliott D. [ORNL]; Ibrahim, Ahmad M. [ORNL]; Mosher, Scott W. [ORNL]; Grove, Robert E. [ORNL]
2015-01-01
Detailed radiation transport calculations are necessary for many aspects of the design of fusion energy systems (FES), such as ensuring occupational safety, assessing the activation of system components for waste disposal, and maintaining cryogenic temperatures within superconducting magnets. Hybrid Monte Carlo (MC)/deterministic techniques are necessary for this analysis because FES are large, heavily shielded, and contain streaming paths that can only be resolved with MC. The tremendous complexity of FES necessitates the use of CAD geometry for design and analysis. Previous ITER analysis required the translation of CAD geometry to MCNP5 form in order to use the AutomateD VAriaNce reducTion Generator (ADVANTG) for hybrid MC/deterministic transport. In this work, ADVANTG was modified to support CAD geometry, allowing hybrid MC/deterministic transport to be done automatically and eliminating the need for this translation step. This was done by adding a new ray tracing routine to ADVANTG for CAD geometries using the Direct Accelerated Geometry Monte Carlo (DAGMC) software library. This new capability is demonstrated with a prompt dose rate calculation for an ITER computational benchmark problem using both the Consistent Adjoint Driven Importance Sampling (CADIS) method and the Forward-Weighted (FW)-CADIS method. The variance reduction parameters produced by ADVANTG are shown to be the same using CAD geometry and standard MCNP5 geometry. Significant speedups were observed for both neutrons (as high as a factor of 7.1) and photons (as high as a factor of 59.6).
Energy Technology Data Exchange (ETDEWEB)
Barcellos, Luiz Felipe F.C.; Bodmann, Bardo E.J.; Vilhena, Marco T.M.B., E-mail: luizfelipe.fcb@gmail.com, E-mail: bardo.bodmann@ufrgs.br, E-mail: mtmbvilhena@gmail.com [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Grupo de Estudos Nucleares; Leite, Sergio Q. Bogado, E-mail: sbogado@ibest.com.br [Comissao Nacional de Energia Nuclear (CNEN), Rio de Janeiro, RJ (Brazil)
2017-07-01
In this work a continuous-energy Monte Carlo simulator is used. This simulator distinguishes itself by representing the neutron spectrum as the sum of three probability distributions. Two distributions have known shapes but time-varying neutron populations: the fission neutron spectrum (for high-energy neutrons) and the Maxwell-Boltzmann distribution (for thermal neutrons). The third distribution has an a priori unknown, possibly time-varying shape and is determined from parametrizations of the Monte Carlo simulation. It is common practice in neutron transport calculations, e.g. multi-group transport, to assume that neutrons only lose energy in scattering reactions and then to use a thermal group with a Maxwellian distribution. Such an approximation is valid because up-scattering is irrelevant for fast neutrons, being appreciable only at low energies, i.e. in the thermal energy region, where the spectrum can be regarded as a Maxwell-Boltzmann distribution in thermal equilibrium. In this work the possible neutron-matter interactions are simulated with the exception of the up-scattering of neutrons. In order to preserve the thermal spectrum, neutrons are selected stochastically as belonging to the thermal population and are assigned an energy drawn from a Maxwellian distribution. It is then shown how this procedure emulates the up-scattering effect through the increase in the kinetic energy of the neutron population. Since the simulator tags each reaction, the distributions can be plotted not only by neutron energy but also by the type of interaction with matter and by the target nuclei involved in the process. This work contains some preliminary results obtained from a Monte Carlo simulator for neutron transport that is being developed at the Federal University of Rio Grande do Sul. (author)
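The thermal-selection step described in this abstract can be sketched in a few lines. This is an illustrative stand-in, not the UFRGS simulator itself; the temperature kT = 0.025 eV and the selection fraction are assumed values. The Maxwell-Boltzmann energy spectrum p(E) ∝ √E·exp(−E/kT) is a Gamma(3/2, kT) density, so Python's stdlib `random.gammavariate` samples it directly:

```python
import random
import statistics

def sample_maxwellian_energy(kT, rng=random):
    """Draw a neutron energy E from the Maxwell-Boltzmann energy
    spectrum p(E) ~ sqrt(E) * exp(-E / kT), i.e. a Gamma(3/2, kT)."""
    return rng.gammavariate(1.5, kT)

def thermalize(population, frac_thermal, kT, rng=random):
    """Stochastically select a fraction of the population as thermal and
    re-assign those neutrons a Maxwellian energy, emulating the effect
    of up-scattering into thermal equilibrium."""
    return [sample_maxwellian_energy(kT, rng) if rng.random() < frac_thermal else E
            for E in population]

if __name__ == "__main__":
    random.seed(1)
    kT = 0.025  # eV, room-temperature thermal energy (assumed scale)
    energies = [sample_maxwellian_energy(kT) for _ in range(100_000)]
    # the Maxwellian energy spectrum has mean (3/2) * kT
    print(round(statistics.fmean(energies) / kT, 2))
```

Because the Maxwellian spectrum has mean (3/2)kT, re-assigning energies this way raises the mean kinetic energy of any sub-thermal population, which is the up-scattering emulation described above.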
Helium, Iron and Electron Particle Transport and Energy Transport Studies on the TFTR Tokamak
Synakowski, E. J.; Efthimion, P. C.; Rewoldt, G.; Stratton, B. C.; Tang, W. M.; Grek, B.; Hill, K. W.; Hulse, R. A.; Johnson, D .W.; Mansfield, D. K.; McCune, D.; Mikkelsen, D. R.; Park, H. K.; Ramsey, A. T.; Redi, M. H.; Scott, S. D.; Taylor, G.; Timberlake, J.; Zarnstorff, M. C. (Princeton Univ., NJ (United States). Plasma Physics Lab.); Kissick, M. W. (Wisconsin Univ., Madison, WI (United States))
1993-03-01
Results from helium, iron, and electron transport on TFTR in L-mode and Supershot deuterium plasmas with the same toroidal field, plasma current, and neutral beam heating power are presented. They are compared to results from thermal transport analysis based on power balance. Particle diffusivities and thermal conductivities are radially hollow and larger than neoclassical values, except possibly near the magnetic axis. The ion channel dominates over the electron channel in both particle and thermal diffusion. A peaked helium profile, supported by inward convection that is stronger than predicted by neoclassical theory, is measured in the Supershot. The helium profile shape is consistent with predictions from quasilinear electrostatic drift-wave theory. While the perturbative particle diffusion coefficients of all three species are similar in the Supershot, differences are found in the L-mode. Quasilinear theory calculations of the ratios of impurity diffusivities are in good accord with measurements. Theory estimates indicate that the ion heat flux should be larger than the electron heat flux, consistent with power balance analysis. However, theoretical values of the ratio of the ion to electron heat flux can be more than a factor of three larger than experimental values. A correlation between helium diffusion and ion thermal transport is observed and has favorable implications for sustained ignition of a tokamak fusion reactor.
Aurora T: a Monte Carlo code for transportation of neutral atoms in a toroidal plasma
International Nuclear Information System (INIS)
Bignami, A.; Chiorrini, R.
1982-01-01
This paper contains a short description of the Aurora code, which was developed at Princeton and uses the Monte Carlo method to calculate the neutral gas distribution in a cylindrical plasma. In this work, subroutines that take toroidal geometry into account are developed.
Transport, Acceleration and Spatial Access of Solar Energetic Particles
Borovikov, D.; Sokolov, I.; Effenberger, F.; Jin, M.; Gombosi, T. I.
2017-12-01
Solar Energetic Particles (SEPs) are a major element of space weather. Often driven by Coronal Mass Ejections (CMEs), SEPs have a very high destructive potential, which includes but is not limited to disrupting communication systems on Earth, inflicting harmful and potentially fatal radiation doses on crew members onboard spacecraft and, in extreme cases, on people aboard high-altitude flights. However, the research community currently lacks efficient tools to predict such hazardous SEP events. Such a tool would serve as the first step towards improving humanity's preparedness for SEP events and ultimately its ability to mitigate their effects. The main goal of the presented research is to develop a computational tool that provides these capabilities and meets the community's demand. Our model has forecasting capability and can be the basis for an operational system that provides live information on the current potential threats posed by SEPs based on observations of the Sun. The tool comprises several numerical models, which are designed to simulate different physical aspects of SEPs. The background conditions in the interplanetary medium, in particular the Coronal Mass Ejection driving the particle acceleration, play a defining role and are simulated with the state-of-the-art MHD solver, the Block-Adaptive-Tree Solar-wind Roe-type Upwind Scheme (BATS-R-US). The newly developed particle code, the Multiple-Field-Line-Advection Model for Particle Acceleration (M-FLAMPA), simulates the actual transport and acceleration of SEPs and is coupled to the MHD code. The special property of SEPs, their tendency to follow magnetic lines of force, is fully exploited in the computational model, which substitutes a multitude of 1-D models for a complicated 3-D model. This approach significantly simplifies computations and improves the time performance of the overall model. It also plays the important role of mapping the affected region by connecting it with the origin of
Energy Technology Data Exchange (ETDEWEB)
Kopp, Andreas [Université Libre de Bruxelles, Service de Physique Statistique et des Plasmas, CP 231, B-1050 Brussels (Belgium); Wiengarten, Tobias; Fichtner, Horst [Institut für Theoretische Physik IV, Ruhr-Universität Bochum, D-44780 Bochum (Germany); Effenberger, Frederic [Department of Physics and KIPAC, Stanford University, Stanford, CA 94305 (United States); Kühl, Patrick; Heber, Bernd [Institut für Experimentelle und Angewandte Physik, Christian-Albrecht-Universität zu Kiel, D-24098 Kiel (Germany); Raath, Jan-Louis; Potgieter, Marius S. [Centre for Space Research, North-West University, 2520 Potchefstroom (South Africa)
2017-03-01
The transport of cosmic rays (CRs) in the heliosphere is determined by the properties of the solar wind plasma. The heliospheric plasma environment has been probed by spacecraft for decades and provides a unique opportunity for testing transport theories. Of particular interest for three-dimensional (3D) heliospheric CR transport are structures such as corotating interaction regions (CIRs), which influence CR diffusion and drift through the enhancement of the magnetic field strength and magnetic fluctuations within them, as well as through the associated shocks and stream interfaces. In a three-fold series of papers, we investigate these effects by modeling inner-heliospheric solar wind conditions with the numerical magnetohydrodynamic (MHD) framework Cronos (Wiengarten et al., referred to as Paper I), and the results serve as input to a transport code employing a stochastic differential equation approach (this paper). While, in Paper I, we presented results from 3D simulations with Cronos, the MHD output is now taken as an input to the CR transport modeling. We discuss the diffusion and drift behavior of Galactic cosmic rays using the example of different theories, and study the effects of CIRs on these transport processes. In particular, we point out the wide range of possible particle fluxes at a given point in space resulting from these different theories. The restriction of this variety by fitting the numerical results to spacecraft data will be the subject of the third paper of this series.
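The stochastic-differential-equation approach mentioned above can be illustrated with a minimal 1-D sketch: pseudo-particles integrate dX = u dt + √(2κ) dW with the Euler-Maruyama scheme, which for constant coefficients reproduces the advection-diffusion limit of the transport equation. All coefficients here are illustrative constants, not the paper's MHD-derived ones:

```python
import math
import random

def euler_maruyama_paths(n, x0, u, kappa, t_end, dt, rng):
    """Integrate dX = u*dt + sqrt(2*kappa)*dW for n pseudo-particles.
    With constant drift u and diffusion coefficient kappa this solves a
    1-D advection-diffusion equation; the analytic solution has
    mean = x0 + u*t and variance = 2*kappa*t."""
    xs = [x0] * n
    amp = math.sqrt(2.0 * kappa * dt)  # per-step diffusion amplitude
    for _ in range(int(t_end / dt)):
        xs = [x + u * dt + amp * rng.gauss(0.0, 1.0) for x in xs]
    return xs

if __name__ == "__main__":
    rng = random.Random(42)
    xs = euler_maruyama_paths(n=10_000, x0=0.0, u=1.0, kappa=0.5,
                              t_end=2.0, dt=0.02, rng=rng)
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    # analytic reference: mean = u*t = 2.0, variance = 2*kappa*t = 2.0
    print(round(mean, 1), round(var, 1))
```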
Fundamentals of charged particle transport in gases and condensed matter
Robson, Robert E; Hildebrandt, Malte
2018-01-01
This book offers a comprehensive and cohesive overview of transport processes associated with all kinds of charged particles, including electrons, ions, positrons, and muons, in both gases and condensed matter. The emphasis is on fundamental physics, linking experiment, theory and applications. In particular, the authors discuss: The kinetic theory of gases, from the traditional Boltzmann equation to modern generalizations A complementary approach: Maxwell’s equations of change and fluid modeling Calculation of ion-atom scattering cross sections Extension to soft condensed matter, amorphous materials Applications: drift tube experiments, including the Franck-Hertz experiment, modeling plasma processing devices, muon catalysed fusion, positron emission tomography, gaseous radiation detectors Straightforward, physically-based arguments are used wherever possible to complement mathematical rigor.
Vectorising the detector geometry to optimize particle transport
Apostolakis, John; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro
2014-01-01
Among the components contributing to particle transport, geometry navigation is an important consumer of CPU cycles. The tasks performed to answer "basic" queries such as locating a point within a geometry hierarchy or accurately computing the distance to the next boundary can become very computing-intensive for complex detector setups. So far, the existing geometry algorithms employ mainly scalar optimisation strategies (voxelization, caching) to reduce their CPU consumption. In this paper, we take a different approach and investigate how geometry navigation can benefit from the vector instruction set extensions that are one of the primary sources of performance enhancements on current and future hardware. While on paper this form of microparallelism promises increased performance, applying this technology to the highly hierarchical and multiply branched geometry code is a difficult challenge. We refer to the current work done to vectorise an important part of the critica...
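The kind of batched geometry query this work targets can be sketched as a "distance to next boundary" computed for a whole basket of particles held in structure-of-arrays layout, with no per-particle branching in the inner loop. Plain Python is used here only to show the data layout (real implementations use compiled SIMD code), and the single-sphere geometry is an assumption for illustration:

```python
import math

def batch_distance_to_sphere(px, py, pz, dx, dy, dz, radius):
    """Distance along unit direction (dx,dy,dz) from (px,py,pz) to the
    boundary of a sphere of given radius centred at the origin, for a
    whole basket of particles at once. Solving |p + s*d|^2 = R^2 gives
    s = -b + sqrt(b^2 - c) with b = p.d and c = |p|^2 - R^2 (particles
    assumed inside, so c < 0). A flat, branch-free loop like this is
    the form that maps onto vector units."""
    r2 = radius * radius
    out = []
    for i in range(len(px)):
        b = px[i] * dx[i] + py[i] * dy[i] + pz[i] * dz[i]
        c = px[i] ** 2 + py[i] ** 2 + pz[i] ** 2 - r2
        out.append(-b + math.sqrt(b * b - c))
    return out

if __name__ == "__main__":
    # two particles in a unit sphere: one at the centre moving along +x,
    # one at x = 0.5 moving along +x
    d = batch_distance_to_sphere([0.0, 0.5], [0.0, 0.0], [0.0, 0.0],
                                 [1.0, 1.0], [0.0, 0.0], [0.0, 0.0], 1.0)
    print([round(v, 3) for v in d])  # [1.0, 0.5]
```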
Coquelin, L.; Le Brusquet, L.; Fischer, N.; Gensdarmes, F.; Motzkus, C.; Mace, T.; Fleury, G.
2018-05-01
A scanning mobility particle sizer (SMPS) is a high resolution nanoparticle sizing system that is widely used as the standard method to measure airborne particle size distributions (PSD) in the size range 1 nm–1 μm. This paper addresses the problem of assessing the uncertainty associated with the PSD when a differential mobility analyzer (DMA) operates in scanning mode. The sources of uncertainty are described and then modeled either through experiments or through knowledge extracted from the literature. Special care is taken to model the physics and to account for competing theories. Indeed, it appears that the modeling errors resulting from approximations of the physics can largely affect the final estimate of this indirect measurement, especially for quantities that are not measured during day-to-day experiments. The Monte Carlo method is used to compute the uncertainty associated with the PSD. The method is tested against real data sets consisting of monosize polystyrene latex spheres (PSL) with nominal diameters of 100 nm, 200 nm and 450 nm. The median diameters and associated standard uncertainty of the aerosol particles are estimated as 101.22 nm ± 0.18 nm, 204.39 nm ± 1.71 nm and 443.87 nm ± 1.52 nm with the new approach. Other statistical parameters, such as the mean diameter, the mode and the geometric mean and associated standard uncertainty, are also computed. These results are then compared with the results obtained by the SMPS embedded software.
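The Monte Carlo uncertainty propagation used here (in the spirit of GUM Supplement 1) can be sketched with a deliberately simplified measurement model; the multiplicative calibration factor and additive channel noise below are assumed placeholders for the full SMPS inversion:

```python
import random
import statistics

def mc_uncertainty(measured, n_trials, rng):
    """GUM-Supplement-1-style Monte Carlo propagation: redraw the
    uncertain inputs of a (here deliberately simplified) measurement
    model many times and summarise the spread of the output quantity.
    The model below -- a multiplicative calibration factor with 0.5 %
    standard uncertainty plus 1 nm additive noise per channel -- is an
    illustrative stand-in for the full SMPS inversion."""
    medians = []
    for _ in range(n_trials):
        k = rng.gauss(1.0, 0.005)                          # calibration factor draw
        sample = [k * d + rng.gauss(0.0, 1.0) for d in measured]
        medians.append(statistics.median(sample))
    # estimate and standard uncertainty of the median diameter
    return statistics.fmean(medians), statistics.stdev(medians)

if __name__ == "__main__":
    rng = random.Random(7)
    measured = [99.0 + 0.5 * i for i in range(9)]  # nominal ~101 nm channels
    est, u = mc_uncertainty(measured, 2000, rng)
    print(round(est), u > 0.0)
```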
Successful vectorization - reactor physics Monte Carlo code
International Nuclear Information System (INIS)
Martin, W.R.
1989-01-01
Most particle transport Monte Carlo codes in use today are based on the "history-based" algorithm, wherein one particle history at a time is simulated. Unfortunately, the "history-based" approach (present in all Monte Carlo codes until recent years) is inherently scalar and cannot be vectorized. In particular, the history-based algorithm cannot take advantage of vector architectures, which characterize the largest and fastest computers of the current time, vector supercomputers such as the Cray X-MP or IBM 3090/600. However, substantial progress has been made in recent years in developing and implementing a vectorized Monte Carlo algorithm. This algorithm follows portions of many particle histories at the same time and forms the basis for all successful vectorized Monte Carlo codes in use today. This paper describes the basic vectorized algorithm along with several variations that have been developed by different researchers for specific applications. These applications have been mainly in the areas of neutron transport in nuclear reactor and shielding analysis and photon transport in fusion plasmas. The relative merits of the various approaches will be discussed, and the present status of known vectorization efforts will be summarized along with available timing results, including results from the successful vectorization of 3-D general geometry, continuous energy Monte Carlo. (orig.)
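The contrast between the history-based and event-based organisations can be sketched with a toy model: rather than simulating one history to completion, the event-based loop processes one event type for the entire bank of live particles per sweep. The single-event physics below (a fixed absorption probability per collision) is purely illustrative:

```python
import random

def event_based_walk(n_particles, absorb_prob, max_steps, rng):
    """Event-based organisation of a toy random walk: keep a bank of all
    live particles and process one event type for the whole bank per
    sweep. The batched inner loops are what a pipelined vector processor
    can stream through; the physics here is a toy single-event model
    where each collision absorbs with a fixed probability."""
    alive = list(range(n_particles))
    collisions = 0
    for _ in range(max_steps):
        if not alive:
            break
        # event 1, batched: every live particle reaches its next collision
        collisions += len(alive)
        # event 2, batched: sample the collision outcome for the whole bank
        alive = [p for p in alive if rng.random() >= absorb_prob]
    return collisions

if __name__ == "__main__":
    rng = random.Random(3)
    total = event_based_walk(100_000, absorb_prob=0.5, max_steps=100, rng=rng)
    # expected collisions per history: 1 / absorb_prob = 2
    print(round(total / 100_000, 1))
```

A history-based loop would produce the same tally but with per-history control flow; the event-based form replaces that with uniform sweeps over the particle bank.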
Graphical User Interface for High Energy Multi-Particle Transport Project
National Aeronautics and Space Administration — Computer codes such as MCNPX now have the capability to transport most high energy particle types (34 particle types now supported in MCNPX) with energies extending...
Graphical User Interface for High Energy Multi-Particle Transport, Phase II
National Aeronautics and Space Administration — Computer codes such as MCNPX now have the capability to transport most high energy particle types (34 particle types now supported in MCNPX) with energies extending...
Graphical User Interface for High Energy Multi-Particle Transport, Phase I
National Aeronautics and Space Administration — Computer codes such as MCNPX now have the capability to transport most high energy particle types (34 particle types now supported in MCNPX) with energies extending...
[Citrate transport in submitochondrial particles of the rat liver].
Velikiĭ, N N; Sen'ko, L N; Babicheva, E I
1988-01-01
Submitochondrial particles (SMP, inverted vesicles of the inner mitochondrial membrane) of the rat liver are characterized with respect to their ability to incorporate [14C]citrate depending on the concentration of exogenous citrate, temperature and time of incubation. The rate of citrate incorporation into SMP does not depend on the addition of an oxidation substrate to the medium; however, in the presence of malate and phosphate it is sharply activated. 1,2,3-Benzenetricarboxylate (1,2,3-BTC) is an active inhibitor of citrate transport into SMP. Citrate transport is determined by protonation-deprotonation processes of the carrier protein on the outer and inner sides of the membrane. A decrease in the pH of the medium favours protonation of the carrier protein on the outer side of the membrane and intensifies [14C]citrate incorporation into SMP, whereas a pH increase inhibits this process. The effect of pH changes is less pronounced in the presence of K+ ions. Valinomycin in a K+ medium activates incorporation of [14C]citrate by increasing the carrier protein deprotonation rate on the inner side of the SMP membrane. Protonophore uncouplers increase conductivity for H+ ions and remove the stimulating influence of valinomycin on the rate of [14C]citrate incorporation into SMP.
Study of electron transport in n-type InAs substrate by Monte Carlo ...
African Journals Online (AJOL)
... and of the effect of concentration (doping). The results we obtained prove comparable to those of theory. Keywords: Monte Carlo method, interactions, band structure, III-V components. Abstract: The microelectronic comprehension of the phenomena which describe the behavior of the carriers ...
International Nuclear Information System (INIS)
Pölz, Stefan; Laubersheimer, Sven; Eberhardt, Jakob S; Harrendorf, Marco A; Keck, Thomas; Benzler, Andreas; Breustedt, Bastian
2013-01-01
The basic idea of Voxel2MCNP is to provide a framework supporting users in modeling radiation transport scenarios using voxel phantoms and other geometric models, generating corresponding input for the Monte Carlo code MCNPX, and evaluating simulation output. Applications at Karlsruhe Institute of Technology are primarily whole and partial body counter calibration and calculation of dose conversion coefficients. A new generic data model describing data related to radiation transport, including phantom and detector geometries and their properties, sources, tallies and materials, has been developed. It is modular and generally independent of the targeted Monte Carlo code. The data model has been implemented as an XML-based file format to facilitate data exchange, and integrated with Voxel2MCNP to provide a common interface for modeling, visualization, and evaluation of data. Also, extensions to allow compatibility with several file formats, such as ENSDF for nuclear structure properties and radioactive decay data, SimpleGeo for solid geometry modeling, ImageJ for voxel lattices, and MCNPX’s MCTAL for simulation results, have been added. The framework is presented and discussed in this paper, and example workflows for body counter calibration and calculation of dose conversion coefficients are given to illustrate its application. (paper)
International Nuclear Information System (INIS)
Bauer, Thilo; Jäger, Christof M.; Jordan, Meredith J. T.; Clark, Timothy
2015-01-01
We have developed a multi-agent quantum Monte Carlo model to describe the spatial dynamics of multiple majority charge carriers during conduction of electric current in the channel of organic field-effect transistors. The charge carriers are treated by a neglect-of-diatomic-differential-overlap (NDDO) Hamiltonian using a lattice of hydrogen-like basis functions. The local ionization energy and local electron affinity defined previously map the bulk structure of the transistor channel to external potentials for the simulations of electron- and hole-conduction, respectively. The model is designed without a specific charge-transport mechanism, such as hopping- or band-transport, in mind and does not arbitrarily localize charge. An electrode model allows dynamic injection and depletion of charge carriers according to the source-drain voltage. The field effect is modeled by using the source-gate voltage in a Metropolis-like acceptance criterion. Although the current cannot be calculated because the simulations have no time axis, using the number of Monte Carlo moves as pseudo-time gives results that resemble experimental I/V curves.
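The Metropolis-like acceptance criterion mentioned above is, in its generic form, only a few lines. How the source-gate voltage enters the energy difference ΔE is specific to the paper, so here ΔE and the temperature scale are just illustrative numbers:

```python
import math
import random

def metropolis_accept(delta_e, temperature, rng):
    """Standard Metropolis criterion: always accept a move that lowers
    the energy; accept an uphill move with probability exp(-dE/kT).
    In the paper the source-gate voltage is folded into dE; here the
    energy and temperature units are arbitrary (illustrative)."""
    if delta_e <= 0.0:
        return True
    return rng.random() < math.exp(-delta_e / temperature)

if __name__ == "__main__":
    rng = random.Random(11)
    n = 100_000
    # uphill moves with dE/kT = 1 should be accepted with rate exp(-1)
    accepted = sum(metropolis_accept(1.0, 1.0, rng) for _ in range(n))
    print(round(accepted / n, 2))  # close to exp(-1) ≈ 0.37
```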
Comparison of Space Radiation Calculations from Deterministic and Monte Carlo Transport Codes
Adams, J. H.; Lin, Z. W.; Nasser, A. F.; Randeniya, S.; Tripathi, r. K.; Watts, J. W.; Yepes, P.
2010-01-01
The presentation outline includes motivation, the radiation transport codes being considered (HZETRN, UPROP, FLUKA, and Geant4) and their main physics, the space radiation cases being considered (SPE and GCR), results for slab geometry, results for spherical geometry, and a summary.
International Nuclear Information System (INIS)
Arreola V, G.; Vazquez R, R.; Guzman A, J. R.
2012-10-01
In this work a comparative analysis of the results for neutron scattering in a non-multiplying semi-infinite medium is presented. One boundary of this medium is located at the origin of coordinates, where a neutron source in beam form (i.e., μ₀ = 1) is placed. Neutron scattering is studied with the statistical Monte Carlo method and through one-dimensional, one-energy-group transport theory. The application of transport theory gives a semi-analytic solution for this problem, while the statistical solution for the flux was obtained by applying the MCNPX code. Scattering in light water and heavy water was studied. A first remarkable result is that both methods locate the maximum of the neutron distribution at less than two transport mean free paths for heavy water, while for light water it is at less than ten transport mean free paths; the differences between the two methods are larger for the light water case. A second remarkable result is that the two distributions show a similar tendency at small numbers of transport mean free paths, while at large distances transport theory tends to an asymptotic value and the statistical solution tends to zero. The existence of a low-energy neutron current directed toward the source is demonstrated, opposite in sense to the high-energy neutron current coming from the source itself. (Author)
Efendiev, Yalchin R.
2013-08-21
In this paper, we propose multilevel Monte Carlo (MLMC) methods that use ensemble level mixed multiscale methods in the simulations of multiphase flow and transport. The contribution of this paper is twofold: (1) a design of ensemble level mixed multiscale finite element methods and (2) a novel use of mixed multiscale finite element methods within multilevel Monte Carlo techniques to speed up the computations. The main idea of ensemble level multiscale methods is to construct local multiscale basis functions that can be used for any member of the ensemble. In this paper, we consider two ensemble level mixed multiscale finite element methods: (1) the no-local-solve-online ensemble level method (NLSO); and (2) the local-solve-online ensemble level method (LSO). The first approach was proposed in Aarnes and Efendiev (SIAM J. Sci. Comput. 30(5):2319-2339, 2008) while the second approach is new. Both mixed multiscale methods use a number of snapshots of the permeability media in generating multiscale basis functions. As a result, in the off-line stage, we construct multiple basis functions for each coarse region where basis functions correspond to different realizations. In the no-local-solve-online ensemble level method, one uses the whole set of precomputed basis functions to approximate the solution for an arbitrary realization. In the local-solve-online ensemble level method, one uses the precomputed functions to construct a multiscale basis for a particular realization. With this basis, the solution corresponding to this particular realization is approximated in the LSO mixed multiscale finite element method (MsFEM). In both approaches, the accuracy of the method is related to the number of snapshots computed based on different realizations that one uses to precompute a multiscale basis. In this paper, ensemble level multiscale methods are used in multilevel Monte Carlo methods (Giles 2008a, b; Oper. Res. 56(3):607-617). In multilevel Monte Carlo methods, more accurate
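The telescoping structure of a multilevel Monte Carlo estimator can be sketched independently of the multiscale spatial discretisation: E[P_L] = E[P_0] + Σ_l E[P_l − P_{l−1}], estimated with coupled coarse/fine samples per level, many cheap samples on coarse levels and few on fine ones. The toy problem below (Euler paths of geometric Brownian motion, with assumed sample counts per level) stands in for the flow-and-transport solves:

```python
import math
import random

def euler_pair(level, s0, r, sig, t_end, rng):
    """One coupled fine/coarse pair of Euler paths for
    dS = r*S dt + sig*S dW, the fine path using 2**level steps and the
    coarse path half as many, sharing Brownian increments (the standard
    MLMC coupling). Level 0 has no coarse partner."""
    nf = 2 ** level
    dtf = t_end / nf
    if level == 0:
        dw = rng.gauss(0.0, math.sqrt(dtf))
        return s0 * (1 + r * dtf + sig * dw), None
    sf, sc = s0, s0
    for _ in range(nf // 2):
        dw1 = rng.gauss(0.0, math.sqrt(dtf))
        dw2 = rng.gauss(0.0, math.sqrt(dtf))
        sf *= (1 + r * dtf + sig * dw1)
        sf *= (1 + r * dtf + sig * dw2)
        sc *= (1 + r * 2 * dtf + sig * (dw1 + dw2))
    return sf, sc

def mlmc(levels, samples, s0, r, sig, t_end, rng):
    """Telescoping MLMC estimator:
    E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]."""
    total = 0.0
    for level, n in zip(levels, samples):
        acc = 0.0
        for _ in range(n):
            fine, coarse = euler_pair(level, s0, r, sig, t_end, rng)
            acc += fine if coarse is None else fine - coarse
        total += acc / n
    return total

if __name__ == "__main__":
    rng = random.Random(5)
    est = mlmc(levels=[0, 1, 2, 3], samples=[40_000, 10_000, 2_500, 640],
               s0=1.0, r=0.05, sig=0.2, t_end=1.0, rng=rng)
    print(round(est, 2))  # analytic E[S_T] = exp(0.05) ≈ 1.05
```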
Energy Technology Data Exchange (ETDEWEB)
Alxneit, I. [Paul Scherrer Inst. (PSI), Villigen (Switzerland)
1999-08-01
The program RAY was developed to perform Monte-Carlo simulations of the flux distribution in solar reactors in connection with an arbitrary heliostat field. The code accounts for the shading of the incoming rays from the sun by the reactor supporting tower as well as for full blocking and shading of the heliostats among themselves. A simplified falling particle reactor (FPR) was evaluated. A central receiver field was used with a total area of 311 m{sup 2} composed of 176 round, focusing heliostats. No attempt was made to optimise either the geometry of the heliostat field or the aiming strategy of the heliostats. The FPR was evaluated at two different geographic latitudes (-8.23W/47.542N; PSI and -8.23W/20.0N) and during the course of a day (May 30{sup th}). The incident power passing through the reactor aperture and the flux density distribution within the FPR were calculated. (author) 3 figs., 1 tab., 3 refs.
Mattei, S.; Nishida, K.; Onai, M.; Lettry, J.; Tran, M. Q.; Hatayama, A.
2017-12-01
We present a fully-implicit electromagnetic Particle-In-Cell Monte Carlo collision code, called NINJA, written for the simulation of inductively coupled plasmas. NINJA employs a kinetic enslaved Jacobian-Free Newton Krylov method to solve self-consistently the interaction between the electromagnetic field generated by the radio-frequency coil and the plasma response. The simulated plasma includes a kinetic description of charged and neutral species as well as the collision processes between them. The algorithm allows simulations with cell sizes much larger than the Debye length and time steps in excess of the Courant-Friedrichs-Lewy condition whilst preserving the conservation of the total energy. The code is applied to the simulation of the plasma discharge of the Linac4 H- ion source at CERN. Simulation results of plasma density, temperature and EEDF are discussed and compared with optical emission spectroscopy measurements. A systematic study of the energy conservation as a function of the numerical parameters is presented.
Approximate models for neutral particle transport in ducts with wall migration
Gonzalez, Arnulfo
The problem of monoenergetic neutral particle transport in a duct with wall migration is treated for various shielding materials using an approximate one-dimensional model and a Monte Carlo-based multivariate logistic regression model. The one-dimensional model is a third-order approximation in a hierarchy of approximations derived by a weighted residual procedure that accounts for wall migration by means of a kernel density. The physical constants required for the one-dimensional model, the scattering probability (c) and the average distance traveled in walls (d), are calculated using MCNP's PTRAC and a corresponding parsing code. Numerical results for the one-dimensional model are based on a discrete ordinates solution and compared to MCNP. The logistic regression models are developed using the R language for statistical computing with three explanatory variables, duct radius (r), length (L), and shield thickness plus inner radius (S), where each parameter is explored via univariate models. Data for the models are collected from MCNP via automated processes using Python and shell scripts. The logistic regression models lead to analytical expressions, which are evaluated by randomly dividing the data set into training and test sets and calculating predictions.
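A univariate logistic regression of the kind described can be sketched as follows. The dissertation uses R's modelling tools; this is a dependency-free stand-in fitted by gradient ascent on the log-likelihood, on synthetic data in which the transmission probability falls with duct length (all coefficients are invented for illustration):

```python
import math
import random

def fit_logistic(xs, ys, lr=0.1, epochs=300):
    """Univariate logistic regression p = 1/(1+exp(-(b0 + b1*x))),
    fitted by gradient ascent on the mean log-likelihood. A stand-in
    for R's glm(..., family = binomial)."""
    b0 = b1 = 0.0
    n = len(xs)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p          # gradient wrt intercept
            g1 += (y - p) * x    # gradient wrt slope
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

if __name__ == "__main__":
    rng = random.Random(9)
    # synthetic data: escape probability falls with duct length L
    lengths = [rng.uniform(0.0, 10.0) for _ in range(800)]
    escaped = [1 if rng.random() < 1.0 / (1.0 + math.exp(-(2.0 - 0.5 * L))) else 0
               for L in lengths]
    b0, b1 = fit_logistic(lengths, escaped)
    print(b1 < 0.0)  # fitted slope is negative, matching the generating model
```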
Transport appraisal and Monte Carlo simulation by use of the CBA-DK model
DEFF Research Database (Denmark)
Salling, Kim Bang; Leleur, Steen
2011-01-01
calculation, where risk analysis is carried out using Monte Carlo simulation. Special emphasis has been placed on the separation between inherent randomness in the modeling system and lack of knowledge. These two concepts have been defined in terms of variability (ontological uncertainty) and uncertainty...... (epistemic uncertainty). After a short introduction to the deterministic calculation, resulting in some evaluation criteria, a more comprehensive evaluation of the stochastic calculation is made. Especially, the risk analysis part of CBA-DK, with considerations about which probability distributions should be used...
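The Monte Carlo risk-analysis step can be sketched as follows; the triangular cost-overrun distribution and normal benefit distribution, and all the numbers, are illustrative assumptions rather than CBA-DK's actual inputs:

```python
import random
import statistics

def simulate_bcr(n, rng):
    """Monte Carlo risk analysis in the CBA-DK spirit: sample uncertain
    cost and benefit inputs from assumed distributions (a triangular
    distribution for construction-cost overrun, a normal distribution
    for the present value of benefits -- purely illustrative choices)
    and examine the distribution of the benefit-cost ratio rather than
    a single deterministic value."""
    ratios = []
    for _ in range(n):
        cost = 100.0 * rng.triangular(0.9, 1.8, 1.1)  # overrun-skewed cost
        benefit = rng.gauss(130.0, 20.0)              # uncertain benefits
        ratios.append(benefit / cost)
    return ratios

if __name__ == "__main__":
    rng = random.Random(2)
    ratios = simulate_bcr(50_000, rng)
    p_viable = sum(r > 1.0 for r in ratios) / len(ratios)
    # the output is a distribution: report both a central value and a
    # probability that the project stays viable (ratio > 1)
    print(statistics.fmean(ratios) > 0.0, 0.0 < p_viable < 1.0)
```

Separating the triangular spread (variability) from the choice of distribution itself (lack of knowledge) mirrors the ontological/epistemic distinction the abstract draws.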
Energy Technology Data Exchange (ETDEWEB)
Pazianotto, Mauricio Tizziani; Carlson, Brett Vern [Instituto Tecnologico de Aeronautica (ITA), Sao Jose dos Campos, SP (Brazil); Federico, Claudio Antonio; Goncalez, Odair Lelis [Centro Tecnico Aeroespacial (CTA), Sao Jose dos Campos, SP (Brazil). Instituto de Estudos Avancados
2011-07-01
Full text: Great effort is required to better understand the cosmic radiation (CR) dose received by sensitive equipment, on-board computers, and aircraft crew members in Brazilian airspace, because a large area of South America and Brazil is subject to the South Atlantic Anomaly (SAA). High-energy neutrons are produced by interactions between primary cosmic rays and atmospheric atoms, and also undergo moderation, resulting in a wide spectrum of energies ranging from thermal (0.025 eV) to several hundred MeV. Measurements of the cosmic radiation dose on board aircraft need to be accompanied by an integral flux monitor at ground level in order to register CR intensity variations during the measurements. The Long Counter (LC) neutron detector was designed as a directional neutron flux meter standard because it presents a fairly constant response for energies under 10 MeV. However, we would like to use it as a ground-based neutron monitor for the cosmic-ray-induced neutron spectrum (CRINS), which presents an isotropic fluence and a wider energy spectrum. The LC was modeled and tested using a Monte Carlo transport simulation of irradiations with known neutron sources ({sup 241}Am-Be and {sup 252}Cf) as a benchmark. Using this geometric model, its efficiency for the isotropic CRINS flux was calculated, introducing high-energy neutron interaction models. The objective of this work is to present the model for simulation of the isotropic neutron source employing the MCNPX code (Monte Carlo N-Particle eXtended) and then to assess the LC efficiency and compare it with experimental results for cosmic-ray neutron measurements at ground level. (author)
DANTSYS: a system for deterministic, neutral particle transport calculations
Energy Technology Data Exchange (ETDEWEB)
Alcouffe, R.E.; Baker, R.S.
1996-12-31
The THREEDANT code is the latest addition to DANTSYS, our system of codes for performing neutral particle transport computations on a given system of interest. The codes in the system are distinguished by geometrical or symmetry considerations. For example, ONEDANT and TWODANT are designed for one- and two-dimensional geometries, respectively. We have TWOHEX for hexagonal geometries, TWODANT/GQ for arbitrary quadrilaterals in XY and RZ geometry, and THREEDANT for three-dimensional geometries. The system is designed so that the codes share the same input and edit modules, and hence the input and output are uniform across all the codes (with the obvious additions needed to specify each type of geometry). The codes are also designed to be general purpose, solving both eigenvalue and source-driven problems. In this paper we concentrate on the THREEDANT module, since special considerations need to be taken into account when designing such a module. The main issues to be addressed in a three-dimensional transport solver are the computational time needed to solve a problem and the amount of storage needed to accomplish that solution. Both issues are directly related to the number of spatial mesh cells required to obtain a solution of specified accuracy, but they are also related to the spatial discretization method chosen and to the requirements of the iteration acceleration scheme employed, as noted below. A related consideration is the robustness of the resulting algorithms as implemented, because insistence on complete robustness has a significant impact upon the computation time. We address each of these issues in the following, giving reasons for the choices we have made in our approach to this code and outlining how the code is evolving to better address the shortcomings that presently exist.
Use of implicit Monte Carlo radiation transport with hydrodynamics and compton scattering
International Nuclear Information System (INIS)
Fleck, J.A. Jr.
1971-03-01
It is shown that the combination of implicit radiation transport with hydrodynamics, Compton scattering, and any other energy transport can be carried out simply by a ''splitting'' procedure. Contributions to material energy exchange can be reckoned separately for hydrodynamics, radiation transport without scattering, Compton scattering, and any other possible energy exchange mechanism. The radiation transport phase of the calculation would be implicit, but the hydrodynamics and Compton portions would not, leading to possible time step controls. The time step restrictions which occur on radiation transfer due to large Planck mean absorption cross sections would not occur.
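The splitting idea can be illustrated on a toy linear model (an assumption for illustration, not Fleck's actual scheme): a material energy relaxing through two channels, one advanced implicitly (and hence stable even for a stiff rate, mimicking the implicit radiation phase) and one explicitly (mimicking the Compton phase, which imposes its own time step control).

```python
import math

# Toy splitting sketch: dE/dt = -(a + b) * E, advanced channel by channel.
def step(E, a, b, dt):
    E = E / (1.0 + a * dt)      # implicit "radiation" sub-step: unconditionally stable
    E = E * (1.0 - b * dt)      # explicit "Compton" sub-step: needs dt < 1/b
    return E

def integrate(E0, a, b, dt, t_end):
    E, t = E0, 0.0
    while t < t_end - 1e-12:
        E = step(E, a, b, dt)
        t += dt
    return E

a, b, E0, T = 50.0, 2.0, 1.0, 0.5          # a is stiff; b is mild
exact = E0 * math.exp(-(a + b) * T)
coarse = integrate(E0, a, b, 0.05, T)      # large dt: splitting error dominates
fine = integrate(E0, a, b, 0.001, T)       # small dt: converges toward exact
```

The point of the splitting is that each channel's contribution is reckoned separately per step, and refining the step recovers the coupled solution.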
Dynamics and transport of laser-accelerated particle beams
Energy Technology Data Exchange (ETDEWEB)
Becker, Stefan
2010-04-19
The subject of this thesis is the investigation and optimization of beam transport elements in the context of the steadily growing field of laser-driven particle acceleration. The first topic is the examination of the free vacuum expansion of an electron beam at high current density. It could be shown that particle tracking codes commonly used for the calculation of space charge effects generate substantial artifacts in the regime considered here. The artifacts predominantly involve insufficient prerequisites for the Lorentz transformation, the application of inadequate initial conditions, and non-negligible retardation artifacts. Part of this thesis is dedicated to the development of a calculation approach that uses a more adequate ansatz for calculating space charge effects in laser-accelerated electron beams. It can also be used to validate further approaches for the calculation of space charge effects. The next elements considered are miniature magnetic quadrupole devices for the focusing of charged particle beams. A general problem of their miniaturization is distorting higher-order field components. If these distorting components cannot be controlled, the field of applications is very limited. In this thesis a new method for the characterization and compensation of the distorting components was developed, which might become a standard method when assembling these permanent-magnet multipole devices. The newly developed characterization method has been validated at the Mainz Microtron (MAMI) electron accelerator. With optimum performance ensured, the first application of permanent-magnet quadrupole devices in conjunction with laser-accelerated ion beams is presented. The experiment was carried out at the Z-Petawatt laser system at Sandia National Laboratories. A promising application for laser-accelerated electron beams is a university-scale FEL. The first discussion of all relevant aspects
Kouznetsov, A.; Cully, C. M.
2017-12-01
During enhanced magnetic activity, large ejections of energetic electrons from the radiation belts are deposited in the upper polar atmosphere, where they play important roles in its physical and chemical processes, including the subionospheric propagation of VLF signals. Electron deposition can affect D-region ionization levels, which are estimated from ionization rates derived from energy deposition. We present a model of D-region ion production caused by an arbitrary (in energy and pitch angle) distribution of fast (10 keV - 1 MeV) electrons. The model relies on a set of pre-calculated results obtained using a general Monte Carlo approach with the latest version of the MCNP6 (Monte Carlo N-Particle) code for explicit electron tracking in magnetic fields. By expressing those results as ionization yield functions, the pre-calculated results are extended to cover arbitrary magnetic field inclinations and atmospheric density profiles, allowing ionization rate altitude profiles to be computed between 20 and 200 km at any geographic point of interest and date/time by adopting results from an external atmospheric density model (e.g. NRLMSISE-00). The pre-calculated MCNP6 results are stored in a CDF (Common Data Format) file, and an IDL routine library is provided as an end-user interface to the model.
Iwamoto, Yosuke
2018-03-01
In this study, the Monte Carlo displacement damage calculation method in the Particle and Heavy-Ion Transport code System (PHITS) was improved to calculate displacements per atom (DPA) values due to irradiation by electrons (or positrons) and gamma rays. For the damage due to electrons and gamma rays, PHITS simulates electromagnetic cascades using the Electron Gamma Shower version 5 (EGS5) algorithm and calculates DPA values using the recoil energies and the McKinley-Feshbach cross section. A comparison of DPA values calculated by PHITS and the Monte Carlo assisted Classical Method (MCCM) reveals that they were in good agreement for gamma-ray irradiations of silicon and iron at energies less than 10 MeV. Above 10 MeV, PHITS can calculate DPA values not only for electrons but also for charged particles produced by photonuclear reactions. In DPA depth distributions under electron and gamma-ray irradiation, build-up effects can be observed near the target's surface. For irradiation of 90-cm-thick carbon by protons with energies of more than 30 GeV, the ratio of the secondary-electron DPA values to the total DPA values is more than 10% and increases with incident energy. In summary, PHITS can calculate DPA values for all particles and materials over a wide energy range: between 1 keV and 1 TeV for electrons, gamma rays, and charged particles, and between 10^-5 eV and 1 TeV for neutrons.
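DPA calculations of this kind convert a recoil's damage energy into a number of stable displacements. The widely used Norgett-Robinson-Torrens (NRT) estimate can be sketched as follows; the threshold energy used here is a common illustrative choice for iron, not a value taken from PHITS.

```python
# NRT displacement estimate (standard Norgett-Robinson-Torrens formula).
def nrt_displacements(T_dam_eV, E_d_eV):
    """Stable displacements produced by a recoil with damage energy T_dam."""
    if T_dam_eV < E_d_eV:
        return 0.0                              # below threshold: no displacement
    if T_dam_eV < 2.0 * E_d_eV / 0.8:
        return 1.0                              # single Frenkel pair
    return 0.8 * T_dam_eV / (2.0 * E_d_eV)      # cascade regime

# Iron: a displacement threshold E_d ~ 40 eV is a common choice.
n = nrt_displacements(10000.0, 40.0)            # 10 keV damage energy
```

Summing such per-recoil counts over the simulated recoil spectrum, and dividing by the atom density, is what yields a DPA value.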
International Nuclear Information System (INIS)
Brantley, Patrick S.; Martos, Jenny N.
2011-01-01
We describe a parallel benchmark procedure and numerical results for a three-dimensional binary stochastic medium particle transport benchmark problem. The binary stochastic medium is composed of optically thick spherical inclusions distributed in an optically thin background matrix material. We investigate three sphere mean chord lengths, three distributions for the sphere radii (constant, uniform, and exponential), and six sphere volume fractions ranging from 0.05 to 0.3. For each sampled independent material realization, we solve the associated transport problem using the Mercury Monte Carlo particle transport code. We compare the ensemble-averaged benchmark fiducial tallies of reflection from and transmission through the spatial domain as well as absorption in the spherical inclusion and background matrix materials. For the parameter values investigated, we find a significant dependence of the ensemble-averaged fiducial tallies on both sphere mean chord length and sphere volume fraction, with the most dramatic variation occurring for the transmission through the spatial domain. We find a weaker dependence of most benchmark tally quantities on the distribution describing the sphere radii, provided the sphere mean chord length used is the same in the different distributions. The exponential distribution produces larger differences from the constant distribution than the uniform distribution produces. The transmission through the spatial domain does exhibit a significant variation when an exponential radius distribution is used. (author)
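The weak dependence on the radius distribution at fixed mean chord length can be made concrete. For a polydisperse sphere population intercepted in proportion to cross section, the population-mean chord length is 4<r^3>/(3<r^2>); the sketch below (our own illustration, not the benchmark's sampling code) shows that an exponential radius distribution must use a smaller mean radius to match the mean chord of a constant-radius population.

```python
import random

random.seed(0)

# Population-mean chord length of a polydisperse set of spheres:
# a single sphere's mean chord is 4r/3, and spheres are intercepted
# in proportion to their cross section ~ r^2, giving 4<r^3>/(3<r^2>).
def mean_chord(radii):
    m2 = sum(r * r for r in radii) / len(radii)
    m3 = sum(r ** 3 for r in radii) / len(radii)
    return 4.0 * m3 / (3.0 * m2)

r0 = 0.3
constant = mean_chord([r0] * 1000)   # constant radii: exactly 4*r0/3 = 0.4
# For exponential radii with mean m: <r^2> = 2m^2, <r^3> = 6m^3, so the
# mean chord is 4m; matching 4*r0/3 therefore requires mean m = r0/3.
expo = mean_chord([random.expovariate(3.0 / r0) for _ in range(200000)])
```

Both populations then present the same mean chord length to a transiting particle, which is the sense in which the benchmark holds the chord length fixed across distributions.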
International Nuclear Information System (INIS)
Meng, Jianxin; Mei, Deqing; Yang, Keji; Fan, Zongwei
2014-01-01
In existing ultrasonic transportation methods, long-range transportation of micro-particles is realized in a step-by-step way. Due to the substantial decrease of the driving force in each step, the transportation is slow and stair-stepping. To improve the transporting velocity, a non-stepping ultrasonic transportation approach is proposed. By quantitatively analyzing the acoustic potential well, an optimal region is defined as the position where the largest driving force is provided, under the condition that the driving force is simultaneously the major component of the acoustic radiation force. To keep the micro-particle trapped in the optimal region during the whole transportation process, the phase-shifting velocity and phase-shifting step are optimized. Due to the stable and large driving force, the displacement of the micro-particle is an approximately linear function of time, instead of a stair-stepping function of time as in the existing step-by-step methods. An experimental setup was developed to validate this approach, and long-range ultrasonic transportation of zirconium beads at high transporting velocity was realized. The experimental results demonstrate that this approach is an effective way to improve the transporting velocity in long-range ultrasonic transportation of micro-particles.
Transient Particle Transport Analysis on TJ-II Stellarator
Energy Technology Data Exchange (ETDEWEB)
Eguilior, S.; Castejon, F.; Guasp, J.; Estrada, T.; Medina, F.; Tabares, F.L.; Branas, B.
2006-12-18
Particle diffusivity and convective velocity have been determined in ECRH plasmas confined in the stellarator TJ-II by analysing the evolving density profile. This is obtained from an amplitude modulation reflectometry system in addition to an X-ray tomographic reconstruction. The source term, which is needed as an input for transport equations, is obtained using EIRENE code. In order to discriminate between the diffusive and convective contributions, the dynamics of the density evolution has been analysed in several perturbative experiments. This evolution has been considered in discharges with injection of a single pulse of H2 as well as in those that present a spontaneous transition to an enhanced confinement mode and whose confinement properties are modified by inducing an ohmic current. The pinch velocity and diffusivity are parameterized by different expressions in order to fit the experimental time evolution of density profile. The profile evolution is very different from one case to another due to the different values of convective velocities and diffusivities, besides the different source terms. (Author) 19 refs.
Vectorising the detector geometry to optimise particle transport
International Nuclear Information System (INIS)
Apostolakis, John; Brun, René; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro
2014-01-01
Among the components contributing to particle transport, geometry navigation is an important consumer of CPU cycles. The tasks performed to answer 'basic' queries, such as locating a point within a geometry hierarchy or accurately computing the distance to the next boundary, can become very computing intensive for complex detector setups. So far, existing geometry algorithms employ mainly scalar optimisation strategies (voxelisation, caching) to reduce their CPU consumption. In this paper, we take a different approach and investigate how geometry navigation can benefit from the vector instruction set extensions that are one of the primary sources of performance enhancements on current and future hardware. While, on paper, this form of microparallelism promises increased performance, applying it to the highly hierarchical and multiply branched geometry code is a difficult challenge. We report on current work to vectorise an important part of the critical navigation algorithms in the ROOT geometry library. Starting from a short critical discussion of the programming model, we present the current status and first benchmark results of the vectorisation of some elementary geometry shape algorithms. On the path towards a fully vector-based geometry navigator, we also investigate the performance benefits of connecting these elementary functions together to develop algorithms entirely based on the flow of vector data. To this end, we discuss core components of a simple vector navigator that is tested and evaluated on a toy detector setup.
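A scalar reference for one such elementary shape algorithm, the distance-to-boundary query against a sphere, is sketched below; a vectorised version evaluates exactly this branch-light arithmetic for many tracks at once in SIMD lanes. The function is our own illustration, not code from the ROOT geometry library.

```python
import math

# Distance along unit direction d from point p to a sphere of radius R
# centred at the origin; returns inf when the ray misses the sphere.
def dist_to_sphere(px, py, pz, dx, dy, dz, R):
    b = px * dx + py * dy + pz * dz            # p . d
    c = px * px + py * py + pz * pz - R * R    # |p|^2 - R^2
    disc = b * b - c                           # quadratic discriminant
    if disc < 0.0:
        return math.inf                        # no intersection
    sq = math.sqrt(disc)
    t = -b - sq                                # near root (entering from outside)
    if t < 0.0:
        t = -b + sq                            # inside the sphere: exit distance
    return t if t >= 0.0 else math.inf

d_in = dist_to_sphere(0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 2.0)    # from the centre
d_out = dist_to_sphere(-5.0, 0.0, 0.0, 1.0, 0.0, 0.0, 2.0)  # from outside
```

The vectorisation challenge the paper describes is precisely that the early-return branches here must become masked lane-wise selects when a whole basket of tracks is processed together.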
Jolitz, R. D.; Dong, C. F.; Lee, C. O.; Lillis, R. J.; Brain, D. A.; Curry, S. M.; Bougher, S.; Parkinson, C. D.; Jakosky, B. M.
2017-05-01
Solar energetic particles (SEPs) can precipitate directly into the atmospheres of weakly magnetized planets, causing increased ionization, heating, and altered neutral chemistry. However, strong localized crustal magnetism at Mars can deflect energetic charged particles and reduce precipitation. In order to quantify these effects, we have developed a model of proton transport and energy deposition in spatially varying magnetic fields, called Atmospheric Scattering of Protons and Energetic Neutrals. We benchmark the model's particle tracing algorithm, collisional physics, and heating rates, comparing against previously published work in the latter two cases. We find that energetic nonrelativistic protons precipitating in proximity to a crustal field anomaly will primarily deposit energy at either their stopping altitude or magnetic reflection altitude. We compared atmospheric ionization in the presence and absence of crustal magnetic fields at 50°S and 182°E during the peak flux of the 29 October 2003 "Halloween storm" SEP event. The presence of crustal magnetic fields reduced total ionization by 30% but caused ionization to occur over a wider geographic area.
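Particle tracing of charged particles in a static magnetic field is commonly done with the Boris rotation, sketched below in a minimal non-relativistic form with q/m = 1 (a standard illustrative scheme; the abstract does not specify which integrator the model uses). Its key property is that the pure magnetic rotation preserves kinetic energy, which is what makes long traces through crustal-field regions trustworthy.

```python
import math

# One Boris velocity update for a pure magnetic field B over time step dt.
def boris_step(v, B, dt):
    tx, ty, tz = 0.5 * dt * B[0], 0.5 * dt * B[1], 0.5 * dt * B[2]
    t2 = tx * tx + ty * ty + tz * tz
    sx, sy, sz = 2 * tx / (1 + t2), 2 * ty / (1 + t2), 2 * tz / (1 + t2)
    # v' = v + v x t ; v_new = v + v' x s   (exact rotation about B)
    vpx = v[0] + v[1] * tz - v[2] * ty
    vpy = v[1] + v[2] * tx - v[0] * tz
    vpz = v[2] + v[0] * ty - v[1] * tx
    return (v[0] + vpy * sz - vpz * sy,
            v[1] + vpz * sx - vpx * sz,
            v[2] + vpx * sy - vpy * sx)

v = (1.0, 0.0, 0.2)          # gyration in x-y plus drift along B
B = (0.0, 0.0, 1.0)          # uniform field along z
speed0 = math.sqrt(sum(c * c for c in v))
for _ in range(1000):
    v = boris_step(v, B, 0.05)
speed1 = math.sqrt(sum(c * c for c in v))   # rotation preserves |v|
```

With B along z the parallel velocity component is untouched, so the trace reproduces the helical motion that, in a converging crustal field, leads to the magnetic reflection the paper discusses.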
Argento, D.; Reedy, R. C.; Stone, J.
2010-12-01
Cosmogenic nuclides (CNs) are a critical new tool for geomorphology, allowing researchers to date Earth surface events and measure process rates [1]. Prior to CNs, many of these events and processes had no absolute method for measurement and relied entirely on relative methods [2]. Continued improvements in CN methods are necessary for expanding analytic capability in geomorphology. In the last two decades, significant progress has been made in refining these methods and reducing analytic uncertainties [1,3]. Calibration data and scaling methods are being developed to provide a self-consistent platform for interpreting nuclide concentration values as geologic data [4]. However, nuclide-dependent scaling has been difficult to address due to analytic uncertainty and sparseness in altitude transects. Artificial target experiments are underway, but these experiments take considerable time for nuclide buildup at lower altitudes. In this study, the Monte Carlo radiation transport code MCNPX is used to model the galactic cosmic-ray radiation impinging on the upper atmosphere and to track the resulting secondary particles through a model of the Earth's atmosphere and lithosphere. To address the issue of nuclide-dependent scaling, the neutron flux values determined by the MCNPX simulation are folded in with estimated cross-section values [5,6]. Preliminary calculations indicate that the scaling of nuclide production potential in free air is a function of both altitude and nuclide production pathway. At 0 g/cm2 (sea level), all neutron spallation pathways have attenuation lengths within 1% of 130 g/cm2. However, the differences in attenuation length are exacerbated with increasing altitude. At 530 g/cm2 atmospheric height (~5,500 m), the apparent attenuation lengths for aggregate SiO2(n,x)10Be, aggregate SiO2(n,x)14C and K(n,x)36Cl become 149.5 g/cm2, 151 g/cm2 and 148 g/cm2 respectively. At 700 g/cm2 atmospheric height (~8,400 m - close to the highest
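The attenuation lengths quoted above enter through the usual exponential scaling of production with atmospheric depth, P(d) = P0 exp(-d/Lambda). The sketch below uses illustrative numbers, not the study's calculated values, to show how an apparent attenuation length is recovered from production at two depths.

```python
import math

# Production scaling with atmospheric depth d (g/cm^2) and attenuation
# length Lambda (g/cm^2): P(d) = P0 * exp(-d / Lambda).
def scale_production(P0, depth_gcm2, lam_gcm2):
    return P0 * math.exp(-depth_gcm2 / lam_gcm2)

# Illustrative depths: ~sea level (1033 g/cm^2) and ~5,500 m (530 g/cm^2).
lam_true = 150.0
P_low = scale_production(1.0, 1033.0, lam_true)
P_high = scale_production(1.0, 530.0, lam_true)

# Apparent attenuation length inferred from the two production rates:
lam_apparent = (1033.0 - 530.0) / math.log(P_high / P_low)
```

When, as the study finds, different production pathways have slightly different Lambda values, this inversion yields pathway-dependent apparent attenuation lengths, which is why a single scaling factor cannot serve all nuclides.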
International Nuclear Information System (INIS)
Ganesan, S.
2009-01-01
In this write-up, some of the basic issues of nuclear data physics in Monte Carlo simulation of neutron transport are dealt with in the Indian context. Some of the aspects associated with usage of the ENDF/B system and of the PREPRO code system, developed by D.E. Cullen and distributed by the IAEA Nuclear Data Section, are briefly touched upon. Some aspects of the SIGACE code system, which was developed by the author in collaboration with IPR, Ahmedabad and the IAEA Nuclear Data Section, are also briefly covered. The validation of the SIGACE package included investigations using NJOY and the MCNP-compatible ACE files. Appendix 1 of the paper provides useful discussion pointing out that the voluminous, high-quality nuclear physics data required for nuclear applications usually evolve from a national effort to provide state-of-the-art data based upon established needs and uncertainties. Appendix 2 deals with work carried out using the SIGACE code for generating high-temperature ACE files. Appendix 3 briefly mentions integral nuclear data validation studies and the use of Monte Carlo codes and nuclear data. Appendix 4 provides a brief summary report on selected Indian nuclear data physics activities, in light of BARC/DAE treating nuclear data physics as a thrust area in the atomic energy programme.
d'Enterria, David; McCauley, Thomas; Pierog, Tanguy
2010-01-01
We present and compare the predictions of various cosmic-ray Monte Carlo models for the energy (dE/deta) and particle (dN/deta) flows in p-p, p-Pb and Pb-Pb collisions at sqrt(s) = 14, 8.8, and 5.5 TeV respectively, in the range covered by forward LHC detectors like CASTOR or TOTEM (5.2 < |eta| < 6.6) and at zero degrees (|eta| > 8.1 for neutrals).
International Nuclear Information System (INIS)
Sdouz, G.
1980-09-01
The computer program STOSS determines the path of a particle through a heterogeneous medium in three dimensions. The program can be used as a module in Monte Carlo calculations. The collision is transferred from the centre-of-mass system into a fixed Cartesian coordinate system by means of appropriate transformations. Then the path length is determined and the location of the next collision is calculated. The computational details are discussed at some length. (auth.)
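The free-flight step of such a module follows the standard exponential sampling of path lengths, s = -ln(xi)/Sigma_t for a uniform random xi. The sketch below is our own minimal illustration of that step, not code from STOSS.

```python
import math, random

random.seed(7)

# Sample the free-flight distance to the next collision and advance the
# particle along its (unit) direction; sigma_t is the total macroscopic
# cross section, so path lengths are exponential with mean 1/sigma_t.
def next_collision(pos, direction, sigma_t):
    xi = 1.0 - random.random()                 # in (0, 1], avoids log(0)
    s = -math.log(xi) / sigma_t
    new_pos = tuple(p + s * d for p, d in zip(pos, direction))
    return new_pos, s

sigma_t = 2.0                                  # per cm
lengths = [next_collision((0, 0, 0), (1, 0, 0), sigma_t)[1]
           for _ in range(100000)]
mean_path = sum(lengths) / len(lengths)        # should approach 1/sigma_t = 0.5
```

In a heterogeneous medium the sampled flight must additionally be checked against region boundaries, which is exactly the geometric bookkeeping a module like STOSS performs.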
Density Dependence of Particle Transport in ECH Plasmas of the TJ-II Stellarator
Energy Technology Data Exchange (ETDEWEB)
Vargas, V. I.; Lopez-Bruna, D.; Guasp, J.; Herranz, J.; Estrada, T.; Medina, F.; Ochando, M.A.; Velasco, J.L.; Reynolds, J.M.; Ferreira, J.A.; Tafalla, D.; Castejon, F.; Salas, A.
2009-05-21
We present the experimental dependence of particle transport on average density in electron cyclotron heated (ECH) hydrogen plasmas of the TJ-II stellarator. The results are based on: (I) electron density and temperature data from Thomson Scattering and reflectometry diagnostics; (II) a transport model that reproduces the particle density profiles in steady state; and (III) Eirene, a code for neutrals transport that calculates the particle source in the plasma from the particle confinement time and the appropriate geometry of the machine/plasma. After estimating an effective particle diffusivity and the particle confinement time, a threshold density separating qualitatively and quantitatively different plasma transport regimes is found. The poor confinement times found below the threshold are coincident with the presence of ECH-induced fast electron losses and a positive radial electric field all over the plasma. (Author) 40 refs.
Vectorization of continuous energy Monte Carlo method for neutron transport calculation
International Nuclear Information System (INIS)
Mori, Takamasa; Nakagawa, Masayuki; Sasaki, Makoto
1992-01-01
A vectorization method was studied to achieve high efficiency for the precise physics model used in the continuous energy Monte Carlo method. The collision analysis task was reconstructed on the basis of the event-based algorithm, and the stack-driven zone-selection method was applied to the vectorization of the random walk simulation. These methods were installed in the vectorized continuous energy MVP code for general-purpose use. Performance of the present method was evaluated by comparison with the conventional scalar codes VIM and MCNP for two typical problems. The MVP code achieved a vectorization ratio of more than 95% and ran faster by a factor of 8-22 on the FACOM VP-2600 vector supercomputer compared with the conventional scalar codes. (author)
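The event-based reorganization can be sketched with toy physics (scatter with probability c, otherwise absorb; an illustration, not MVP's collision model): instead of following one history to completion, a stack of live particles has the same event applied to the whole batch each sweep, which is the loop a vector processor executes in SIMD lanes.

```python
import random

random.seed(3)

# Event-based sketch: process one "collision event" for the whole stack of
# live particles per sweep, instead of one full history at a time.
c = 0.8                          # scattering probability per collision
alive = list(range(100000))      # indices of live particles
collisions = 0
while alive:
    survivors = []
    for p in alive:              # this inner loop is what vectorises: the same
        collisions += 1          # event kernel applied to every stack entry
        if random.random() < c:
            survivors.append(p)  # scattered: stays on the stack
    alive = survivors            # absorbed particles drop off the stack

# Collisions per history are geometric with mean 1/(1-c) = 5.
mean_collisions = collisions / 100000.0
```

The stack shrinks as histories terminate, which is why schemes like MVP's stack-driven zone selection refill and regroup the stack to keep vector lengths long.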
International Nuclear Information System (INIS)
Homma, Y.; Moriwaki, H.; Ikeda, K.; Ohdi, S.
2013-01-01
This paper deals with the verification of the three-dimensional triangular-prismatic discrete ordinates transport code ENSEMBLE-TRIZ by comparison with the multi-group Monte Carlo code GMVP for a large fast breeder reactor, a 750 MWe sodium-cooled reactor. Nuclear characteristics are calculated at the beginning of cycle of the initial core and at the beginning and end of cycle of the equilibrium core. According to the calculations, the differences between the two methodologies are smaller than 0.0002 Δk in the multiplication factor, about 1% (relative) in the control rod reactivity, and 1% in the sodium void reactivity. (authors)
Energy Technology Data Exchange (ETDEWEB)
Lee, Ki Bog; Kim, Yeong Il; Kim, Kang Seok; Kim, Sang Ji; Kim, Young Gyun; Song, Hoon; Lee, Dong Uk; Lee, Byoung Oon; Jang, Jin Wook; Lim, Hyun Jin; Kim, Hak Sung
2004-05-01
In this report, the KALIMER (Korea Advanced LIquid MEtal Reactor) core design results calculated by the K-CORE computing system are compared with those of an MCDEP calculation. The effective multiplication factor, flux distribution, fission power distribution, and number densities of the important nuclides obtained from the depletion calculation are compared for the R-Z model and Hex-Z model of the KALIMER core. The results of the K-CORE system agree with those of MCDEP, which is based on the Monte Carlo transport theory method, within 700 pcm for the effective multiplication factor, and for the reaction rates and fission power density within 2% in the driver fuel region and within 10% in the radial blanket region. Thus, the K-CORE system, which treats lumped fission products and the most important nuclides, can be used as a core design tool for KALIMER with the necessary accuracy.
International Nuclear Information System (INIS)
Synakowski, E.J.; Efthimion, P.C.; Rewoldt, G.; Stratton, B.C.; Tang, W.M.; Bell, R.E.; Grek, B.; Hulse, R.A.; Johnson, D.W.; Hill, K.W.; Mansfield, D.K.; McCune, D.; Mikkelsen, D.R.; Park, H.K.; Ramsey, A.T.; Scott, S.D.; Taylor, G.; Timberlake, J.; Zarnstorff, M.C.
1992-01-01
Particle and energy transport in tokamak plasmas have long been subjects of vigorous investigation. Present-day measurement techniques permit radially resolved studies of the transport of electron perturbations, low- and high-Z impurities, and energy. In addition, developments in transport theory provide tools that can be brought to bear on transport issues. Here, we examine local particle transport measurements of electrons, fully stripped thermal helium, and helium-like iron in balanced-injection L-mode and enhanced-confinement deuterium plasmas on TFTR with the same plasma current, toroidal field, and auxiliary heating power. He2+ and Fe24+ transport has been studied with charge exchange recombination spectroscopy, while electron transport has been studied by analyzing the perturbed electron flux following the same helium puff used for the He2+ studies. By examining the electron and He2+ responses following the same gas puff in the same plasmas, an unambiguous comparison of the transport of the two species has been made. The local energy transport has been examined with power balance analysis, allowing comparisons to the local thermal fluxes. Some particle and energy transport results from the Supershot have been compared to a transport model based on a quasilinear picture of electrostatic toroidal drift-type microinstabilities. Finally, implications for future fusion reactors of the observed correlation between thermal transport and helium particle transport are discussed.
Particle transport in subaqueous eruptions: An experimental investigation
Verolino, A.; White, J. D. L.; Zimanowski, B.
2018-01-01
Subaqueous volcanic eruptions are natural events common under the world's oceans. Here we report results from bench-scale underwater explosions that entrain and eject particles into a water tank. Our aim was to examine how particles are transferred to the water column and begin to sediment from it, and to visualize and interpret evolution of the 'eruption' cloud. Understanding particle transfer to water is a key requirement for using deposit characteristics to infer behaviour and evolution of an underwater eruption. For the experiments here, we used compressed argon to force different types of particles, under known driving pressures, into water within a container, and recorded the results at 1 MPx/frame and 1000 fps. Three types of runs were completed: (1) particles within water were driven into a water-filled container; (2) dry particles were driven into water; (3) dry particles were driven into air at atmospheric pressure. Across the range of particles used for all subaqueous runs, we observed: a) initial doming, b) a main expansion of decompressing gas, and c) a phase of necking, when a forced plume separated from the driving jet. Phase c did not take place for the subaerial runs. A key observation is that none of the subaqueous explosions produced a single, simple, open cavity; in all cases, multiphase mixtures of gas bubbles, particles and water were formed. Explosions in which the expanding argon ejects particles in air, analogous to delivery of particles created in an explosion, produce jets and forced plumes that release particles into the tank more readily than do those in which particles in water are driven into the tank. The latter runs mimic propulsion of an existing vent slurry by an explosion. Explosions with different particle types also yielded differences in behaviour controlled primarily by particle mass, particle density, and particle-population homogeneity. Particles were quickly delivered into the water column during plume rise following
Convective and diffusive effects on particle transport in asymmetric periodic capillaries.
Directory of Open Access Journals (Sweden)
Nazmul Islam
Full Text Available We present here results of a theoretical investigation of particle transport in longitudinally asymmetric but axially symmetric capillaries, allowing for the influence of both diffusion and convection. In this study we have focused attention primarily on characterizing the influence of tube geometry and applied hydraulic pressure on the magnitude, direction and rate of transport of particles in axi-symmetric, saw-tooth shaped tubes. Three initial value problems are considered. The first involves the evolution of a fixed number of particles initially confined to a central wave-section. The second involves the evolution of the same initial state but including an ongoing production of particles in the central wave-section. The third involves the evolution of particles in a fully laden tube. Based on a physical model of convective-diffusive transport, assuming an underlying oscillatory fluid velocity field that is unaffected by the presence of the particles, we find that transport rates and even net transport directions depend critically on the design specifics, such as tube geometry, flow rate, initial particle configuration and whether or not particles are continuously introduced. The second transient scenario is qualitatively independent of the details of how particles are generated. In the third scenario there is no net transport. As the study is fundamental in nature, our findings could engender greater understanding of practical systems.
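As a minimal illustration of the convective-diffusive transport model underlying the study above, the sketch below advances particles by a steady drift plus Gaussian diffusive jumps (an Euler-Maruyama step for the 1D advection-diffusion equation). The oscillatory flow and saw-tooth geometry of the actual capillaries are omitted, and all parameter values are illustrative:

```python
import random

def step(x, dt, u, D, rng):
    """One Euler-Maruyama step: drift u*dt plus a diffusive jump whose
    standard deviation is sqrt(2*D*dt)."""
    return x + u * dt + rng.gauss(0.0, (2.0 * D * dt) ** 0.5)

def simulate(n_particles, n_steps, dt=0.01, u=1.0, D=0.5, seed=1):
    """Track n_particles independent particles for n_steps steps."""
    rng = random.Random(seed)
    xs = [0.0] * n_particles
    for _ in range(n_steps):
        xs = [step(x, dt, u, D, rng) for x in xs]
    return xs

xs = simulate(n_particles=2000, n_steps=100)      # total time t = 1.0
mean = sum(xs) / len(xs)                          # expect u*t = 1.0
var = sum((x - mean) ** 2 for x in xs) / len(xs)  # expect 2*D*t = 1.0
```

After time t the ensemble mean should approach u*t and the variance 2*D*t, a quick correctness check for particle-tracking schemes of this kind.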
Energy Technology Data Exchange (ETDEWEB)
Sewerynek, Stephen; /British Columbia U.
2007-04-06
The BABAR experiment is an international collaboration that will test the Standard Model prediction of CP violation. To accomplish this, a new detector was constructed at the asymmetric B Factory, located at the Stanford Linear Accelerator Center. The tests will shed some light on the origins of CP violation, which is an important aspect in explaining the matter/antimatter asymmetry in the universe. In particular, the BABAR experiment will measure CP violation in the neutral B meson system. In order to succeed, the BABAR experiment requires excellent track fitting and particle species identification. Prior to the current study, track fitting was done using only one particle species--the pion. But because the accuracy of the results obtained with this single-species choice depends on momentum, a better algorithm needed to be developed. Monte Carlo simulations were carried out and a new algorithm utilizing all five particle species present in the BABAR detector was created.
Automated Monte Carlo biasing for photon-generated electrons near surfaces.
Energy Technology Data Exchange (ETDEWEB)
Franke, Brian Claude; Crawford, Martin James; Kensek, Ronald Patrick
2009-09-01
This report describes efforts to automate the biasing of coupled electron-photon Monte Carlo particle transport calculations. The approach was based on weight-windows biasing. Weight-window settings were determined using adjoint-flux Monte Carlo calculations. A variety of algorithms were investigated for adaptivity of the Monte Carlo tallies. Tree data structures were used to investigate spatial partitioning. Functional-expansion tallies were used to investigate higher-order spatial representations.
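Weight-window biasing of the kind described above can be sketched in a few lines; the window bounds, survival weight and splitting rule below are illustrative choices, not the settings used in the report:

```python
import random

def apply_weight_window(weight, w_low, w_high, w_survive, rng=random):
    """Return the list of particle weights after a weight-window check.

    - weight > w_high: split into n copies of weight/n.
    - weight < w_low : Russian roulette; survive with p = weight/w_survive.
    - otherwise      : particle is left unchanged.
    """
    if weight > w_high:
        n = int(weight / w_high) + 1
        return [weight / n] * n
    if weight < w_low:
        if rng.random() < weight / w_survive:
            return [w_survive]
        return []
    return [weight]
```

Both branches preserve the expected total weight, which is what keeps the biased game fair.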
Electron cyclotron absorption in Tokamak plasmas in the presence of radial transport of particles
International Nuclear Information System (INIS)
Rosa, Paulo R. da S.; Ziebell, Luiz F.
1998-01-01
We use quasilinear theory to study the effects of radial particle transport on the electron cyclotron absorption coefficient of a current-carrying plasma, in a tokamak modelled as a plasma slab. Our numerical results indicate that the profile of the electron cyclotron absorption coefficient is significantly modified when transport is taken into account, relative to the situation without transport. (author)
Geant4-related R&D for new particle transport methods
Augelli, M; Evans, T; Gargioni, E; Hauf, S; Kim, C H; Kuster, M; Pia, M G; Filho, P Queiroz; Quintieri, L; Saracco, P; Santos, D Souza; Weidenspointner, G; Zoglauer, A
2009-01-01
An R&D project was launched in 2009 to address fundamental methods in radiation transport simulation and to revisit the Geant4 kernel design to cope with new experimental requirements. The project focuses on simulation at different scales in the same experimental environment: this set of problems requires new methods across the current boundaries of condensed-random-walk and discrete transport schemes. The project also foresees exploiting and extending existing Geant4 features to apply Monte Carlo and deterministic transport methods in the same simulation environment. An overview of this new R&D associated with Geant4 is presented, together with the first developments in progress.
Monte Carlo applications to radiation shielding problems
International Nuclear Information System (INIS)
Subbaiah, K.V.
2009-01-01
Monte Carlo methods are a class of computational algorithms that rely on repeated random sampling of physical and mathematical systems to compute their results. The basic concepts of MC are simple and straightforward and can be learned by using a personal computer. Monte Carlo methods require large quantities of random numbers, and it was their use that spurred the development of pseudorandom number generators, which are far quicker to use than the tables of random numbers previously used for statistical sampling. In Monte Carlo simulation of radiation transport, the history (track) of a particle is viewed as a random sequence of free flights that end with an interaction event in which the particle changes its direction of movement, loses energy and, occasionally, produces secondary particles. The Monte Carlo simulation of a given experimental arrangement (e.g., an electron beam, coming from an accelerator and impinging on a water phantom) consists of the numerical generation of random histories. To simulate these histories we need an interaction model, i.e., a set of differential cross sections (DCS) for the relevant interaction mechanisms. The DCSs determine the probability distribution functions (pdf) of the random variables that characterize a track: 1) the free path between successive interaction events, 2) the type of interaction taking place, and 3) the energy loss and angular deflection in a particular event (and the initial state of emitted secondary particles, if any). Once these pdfs are known, random histories can be generated by using appropriate sampling methods. If the number of generated histories is large enough, quantitative information on the transport process may be obtained by simply averaging over the simulated histories. The Monte Carlo method yields the same information as the solution of the Boltzmann transport equation, with the same interaction model, but is easier to implement. In particular, the simulation of radiation
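The track-generation loop described above starts from the free-path pdf p(s) = sigma_t * exp(-sigma_t * s) and a discrete pdf for the interaction type. A hedged sketch, using a purely absorbing slab so the analog estimate can be checked against the exact answer exp(-sigma_t * L); cross sections and thickness are illustrative:

```python
import math
import random

def sample_free_path(sigma_t, rng):
    """Sample a free-flight length from p(s) = sigma_t * exp(-sigma_t * s)
    by inverting the CDF; 1 - rng.random() avoids log(0)."""
    return -math.log(1.0 - rng.random()) / sigma_t

def choose_interaction(sigma_s, sigma_a, rng):
    """Pick the interaction type from the discrete pdf given by the
    partial cross sections."""
    return "scatter" if rng.random() < sigma_s / (sigma_s + sigma_a) else "absorb"

def transmission(sigma_t, thickness, histories, seed=0):
    """Analog estimate of uncollided transmission through a slab: a history
    'survives' if its first flight exceeds the slab thickness.
    The exact answer is exp(-sigma_t * thickness)."""
    rng = random.Random(seed)
    survived = sum(1 for _ in range(histories)
                   if sample_free_path(sigma_t, rng) > thickness)
    return survived / histories

t = transmission(sigma_t=1.0, thickness=2.0, histories=200_000)
```

Averaging the survival indicator over histories is exactly the "averaging over simulated histories" the abstract refers to.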
Nonlinear heat and particle transport due to collisional drift waves
Energy Technology Data Exchange (ETDEWEB)
Nishi-kawa, K.I.; Hatori, T.; Terashima, Y.
1977-03-01
The nonlinear evolution of unstable modes which govern transport processes in magnetically confined plasmas was investigated. A nonlinear theory of unstable collisional drift waves, and the consequent nonlinear transport, was extended to include electron and ion temperature gradients. Thermal transport properties are discussed and basic equations are given.
Drummond, Jen; Davies-Colley, Rob; Stott, Rebecca; Sukias, James; Nagels, John; Sharp, Alice; Packman, Aaron
2014-05-01
Transport dynamics of microbial cells and organic fine particles are important to stream ecology and biogeochemistry. Cells and particles continuously deposit and resuspend during downstream transport owing to a variety of processes including gravitational settling, interactions with in-stream structures or biofilms at the sediment-water interface, and hyporheic exchange and filtration within underlying sediments. Deposited cells and particles are also resuspended following increases in streamflow. Fine particle retention influences biogeochemical processing of substrates and nutrients (C, N, P), while remobilization of pathogenic microbes during flood events presents a hazard to downstream uses such as water supplies and recreation. We are conducting studies to gain insights into the dynamics of fine particles and microbes in streams, with a campaign of experiments and modeling. The results improve understanding of fine sediment transport, carbon cycling, nutrient spiraling, and microbial hazards in streams. We developed a stochastic model to describe the transport and retention of fine particles and microbes in rivers that accounts for hyporheic exchange and transport through porewaters, reversible filtration within the streambed, and microbial inactivation in the water column and subsurface. This model framework is an advance over previous work in that it incorporates detailed transport and retention processes that are amenable to measurement. Solute, particle, and microbial transport were observed both locally within sediment and at the whole-stream scale. A multi-tracer whole-stream injection experiment compared the transport and retention of a conservative solute, fluorescent fine particles, and the fecal indicator bacterium Escherichia coli. Retention occurred within both the underlying sediment bed and stands of submerged macrophytes. The results demonstrate that the combination of local measurements, whole-stream tracer experiments, and advanced modeling
Monte Carlo simulations of spin transport in a strained nanoscale InGaAs field effect transistor
Thorpe, B.; Kalna, K.; Langbein, F. C.; Schirmer, S.
2017-12-01
Spin-based logic devices could operate at a very high speed with a very low energy consumption and hold significant promise for quantum information processing and metrology. We develop a spintronic device simulator by combining an in-house developed, experimentally verified, ensemble self-consistent Monte Carlo device simulator with spin transport based on a Bloch equation model and a spin-orbit interaction Hamiltonian accounting for Dresselhaus and Rashba couplings. It is employed to simulate a spin field effect transistor operating under externally applied voltages on a gate and a drain. In particular, we simulate electron spin transport in a 25 nm gate length In0.7Ga0.3As metal-oxide-semiconductor field-effect transistor with a CMOS compatible architecture. We observe a non-uniform decay of the net magnetization between the source and the gate and a magnetization recovery effect due to spin refocusing induced by a high electric field between the gate and the drain. We demonstrate a coherent control of the polarization vector of the drain current via the source-drain and gate voltages, and show that the magnetization of the drain current can be increased twofold by the strain induced into the channel.
Energy Technology Data Exchange (ETDEWEB)
Kling, Hanna; Doeoes, Kristofer (Dept. of Meteorology, Stockholm Univ., Stockholm (Sweden))
2007-12-15
In the safety assessment of a potential repository for spent nuclear fuel, it is important to assess the consequences of a hypothetical leak of radionuclides through the seabed and into a waterborne transport phase. Radionuclides adsorbed to sediment particles may be transported great distances through the processes of sedimentation and resuspension. This study investigates the transport patterns of sediment particles of two different sizes, released in the Forsmark and Laxemar areas. The results show that the enclosed waters around Forsmark tend to keep the particles in the area close to the release points.
A Monte Carlo Code for Relativistic Radiation Transport Around Kerr Black Holes
Schnittman, Jeremy David; Krolik, Julian H.
2013-01-01
We present a new code for radiation transport around Kerr black holes, including arbitrary emission and absorption mechanisms, as well as electron scattering and polarization. The code is particularly useful for analyzing accretion flows made up of optically thick disks and optically thin coronae. We give a detailed description of the methods employed in the code and also present results from a number of numerical tests to assess its accuracy and convergence.
Transport map-accelerated Markov chain Monte Carlo for Bayesian parameter inference
Marzouk, Y.; Parno, M.
2014-12-01
We introduce a new framework for efficient posterior sampling in Bayesian inference, using a combination of optimal transport maps and the Metropolis-Hastings rule. The core idea is to use transport maps to transform typical Metropolis proposal mechanisms (e.g., random walks, Langevin methods, Hessian-preconditioned Langevin methods) into non-Gaussian proposal distributions that can more effectively explore the target density. Our approach adaptively constructs a lower triangular transport map—i.e., a Knothe-Rosenblatt re-arrangement—using information from previous MCMC states, via the solution of an optimization problem. Crucially, this optimization problem is convex regardless of the form of the target distribution. It is solved efficiently using Newton or quasi-Newton methods, but the formulation is such that these methods require no derivative information from the target probability distribution; the target distribution is instead represented via samples. Sequential updates using the alternating direction method of multipliers enable efficient and parallelizable adaptation of the map even for large numbers of samples. We show that this approach uses inexact or truncated maps to produce an adaptive MCMC algorithm that is ergodic for the exact target distribution. Numerical demonstrations on a range of parameter inference problems involving both ordinary and partial differential equations show multiple order-of-magnitude speedups over standard MCMC techniques, measured by the number of effectively independent samples produced per model evaluation and per unit of wallclock time.
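A heavily simplified sketch of the idea: the lower-triangular map is restricted here to its affine special case (mean shift plus a Cholesky factor of the empirical covariance, re-fitted from past states), the target is a toy banana-shaped density rather than the paper's inference problems, and diminishing-adaptation technicalities are ignored:

```python
import numpy as np

def log_target(x):
    """Toy banana-shaped log-density standing in for a posterior."""
    return -0.5 * (x[0] ** 2 / 4.0 + (x[1] + 0.5 * x[0] ** 2) ** 2)

def map_mcmc(n_iters, adapt_every=500, step=0.5, seed=0):
    """Random-walk Metropolis preconditioned by an affine transport map
    r -> mu + L @ r (L lower triangular), re-fitted from past states."""
    rng = np.random.default_rng(seed)
    mu, L = np.zeros(2), np.eye(2)
    x = np.zeros(2)
    lp = log_target(x)
    chain = []
    for i in range(n_iters):
        # pull back to reference space, random-walk there, push forward
        r = np.linalg.solve(L, x - mu)
        y = mu + L @ (r + step * rng.standard_normal(2))
        ly = log_target(y)
        # same map on both sides, so the map Jacobians cancel in the ratio
        if np.log(rng.random()) < ly - lp:
            x, lp = y, ly
        chain.append(x.copy())
        if (i + 1) % adapt_every == 0:   # re-fit the map from the history
            hist = np.array(chain)
            mu = hist.mean(axis=0)
            L = np.linalg.cholesky(np.cov(hist.T) + 1e-6 * np.eye(2))
    return np.array(chain)

chain = map_mcmc(20_000)
```

Within an adaptation epoch the proposal is a symmetric preconditioned random walk, so the usual Metropolis-Hastings ratio applies unchanged.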
International Nuclear Information System (INIS)
Yang, Y M; Bush, K; Han, B; Xing, L
2016-01-01
Purpose: Accurate and fast dose calculation is a prerequisite of precision radiation therapy in modern photon and particle therapy. While Monte Carlo (MC) dose calculation provides high dosimetric accuracy, the drastically increased computational time hinders its routine use. Deterministic dose calculation methods are fast, but problematic in the presence of tissue density inhomogeneity. We leverage the useful features of deterministic methods and MC to develop a hybrid dose calculation platform with autonomous utilization of MC and deterministic calculation depending on the local geometry, for optimal accuracy and speed. Methods: Our platform utilizes a Geant4 based “localized Monte Carlo” (LMC) method that isolates MC dose calculations only to volumes that have potential for dosimetric inaccuracy. In our approach, additional structures are created encompassing heterogeneous volumes. Deterministic methods calculate dose and energy fluence up to the volume surfaces, where the energy fluence distribution is sampled into discrete histories and transported using MC. Histories exiting the volume are converted back into energy fluence, and transported deterministically. By matching boundary conditions at both interfaces, deterministic dose calculations account for dose perturbations “downstream” of localized heterogeneities. Hybrid dose calculation was performed for water and anthropomorphic phantoms. Results: We achieved <1% agreement between deterministic and MC calculations in the water benchmark for photon and proton beams, and dose differences of 2%–15% could be observed in heterogeneous phantoms. The saving in computational time (a factor of ∼4–7 compared to a full Monte Carlo dose calculation) was found to be approximately proportional to the volume of the heterogeneous region. Conclusion: Our hybrid dose calculation approach takes advantage of the computational efficiency of deterministic methods and the accuracy of MC, providing a practical tool for high
Energy Technology Data Exchange (ETDEWEB)
Simeonov, Y; Penchev, P; Ringbaek, T Printz [University of Applied Sciences, Institute of Medical Physics and Radiation Protection, Giessen (Germany); Brons, S [Heidelberg Ion-Beam Therapy Center (HIT), Heidelberg (Germany); Weber, U [GSI Helmholtzzentrum fuer Schwerionenforschung GmbH, Darmstadt (Germany); Zink, K [University of Applied Sciences, Institute of Medical Physics and Radiation Protection, Giessen (Germany); University Hospital Giessen-Marburg, Marburg (Germany)
2016-06-15
Purpose: Active raster scanning in particle therapy results in highly conformal dose distributions. Treatment time, however, is relatively high due to the large number of different iso-energy layers used. By using only one energy and the so-called 3D range-modulator, irradiation times of only a few seconds can be achieved, thus making delivery of a homogeneous dose to moving targets (e.g. lung cancer) more reliable. Methods: A 3D range-modulator consisting of many pins with a base area of 2.25 mm² and different lengths was developed and manufactured with a rapid prototyping technique. The form of the 3D range-modulator was optimised for a spherical target volume with 5 cm diameter placed at 25 cm in a water phantom. Monte Carlo simulations using the FLUKA package were carried out to evaluate the modulating effect of the 3D range-modulator and simulate the resulting dose distribution. The fine and complicated contour form of the 3D range-modulator was taken into account by a specially programmed user routine. Additionally, FLUKA was extended with the capability of intensity modulated scanning. To verify the simulation results, dose measurements were carried out at the Heidelberg Ion Therapy Center (HIT) with a 400.41 MeV 12C beam. Results: The high resolution measurements show that the 3D range-modulator is capable of producing homogeneous 3D conformal dose distributions, while significantly reducing irradiation time. The measured dose is in very good agreement with the previously conducted FLUKA simulations, where slight differences were traced back to minor manufacturing deviations from the perfect optimised form. Conclusion: Combined with the advantages of very short treatment time, the 3D range-modulator could be an alternative for treating small to medium sized tumours (e.g. lung metastasis) with the same conformity as full raster-scanning treatment. Further simulations and measurements of more complex cases will be conducted to investigate the full potential of the 3D
International Nuclear Information System (INIS)
Vrotnyak, Ya.; Strugal'skij, Z.; Yablonskij, Z.
1976-01-01
The cascade curves and corresponding fluctuations of the numbers of shower particles are evaluated, using the Monte Carlo calculation method, for showers initiated by gamma quanta of energies from 20 to 2000 MeV in liquid xenon. The calculation program takes into account electron-positron pair production, the Compton effect, bremsstrahlung radiation of secondary photons by electrons, and the ionization losses of electrons. Multiple Coulomb scattering and the escape angles of particles were not taken into account in the model. The program is written so that shower parameters can be calculated for any substance. The following values have been calculated: the distribution of the number of particles with depth, the distribution of the energy of shower particles with depth, the mean number of particles with depth, and the standard deviation. The calculation results for electrons are compared with experimental data. The calculation correctly reproduces the positions of the shower maxima and the number of particles as a function of depth, excluding the shower tails.
Radiation Transport Calculations and Simulations
Energy Technology Data Exchange (ETDEWEB)
Fasso, Alberto; /SLAC; Ferrari, A.; /CERN
2011-06-30
This article is an introduction to the Monte Carlo method as used in particle transport. After a description at an elementary level of the mathematical basis of the method, the Boltzmann equation and its physical meaning are presented, followed by Monte Carlo integration and random sampling, and by a general description of the main aspects and components of a typical Monte Carlo particle transport code. In particular, the most common biasing techniques are described, as well as the concepts of estimator and detector. After a discussion of the different types of errors, the issue of Quality Assurance is briefly considered.
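The Monte Carlo integration, random sampling and biasing concepts introduced in the article can be seen on a toy integral; the biasing density below is deliberately chosen proportional to the integrand, so the importance-sampled estimator has (up to rounding) zero variance. Everything here is an illustrative sketch, not code from the article:

```python
import math
import random

def mc_mean(f, sampler, n, rng):
    """Average f over n samples drawn by sampler: the basic MC estimator."""
    return sum(f(sampler(rng)) for _ in range(n)) / n

# Toy problem: I = integral of e^(-5x) over (0,1), known exactly.
exact = (1.0 - math.exp(-5.0)) / 5.0
rng = random.Random(42)

# (a) Analog sampling: x ~ U(0,1), score f(x).
analog = mc_mean(lambda x: math.exp(-5.0 * x),
                 lambda r: r.random(), 50_000, rng)

# (b) Importance sampling: draw x from p(x) = 5 e^(-5x) / (1 - e^(-5))
#     via inverse-CDF sampling and score the weight f(x)/p(x). Because p
#     is proportional to f, the weight is constant: zero variance.
norm = 1.0 - math.exp(-5.0)
sample_p = lambda r: -math.log(1.0 - r.random() * norm) / 5.0
weight = lambda x: math.exp(-5.0 * x) * norm / (5.0 * math.exp(-5.0 * x))
biased = mc_mean(weight, sample_p, 1_000, rng)
```

In practice the biasing density only approximates the integrand, but the closer it gets, the smaller the variance of the estimator: the principle behind the biasing techniques the article surveys.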
Monte Carlo dose distributions for radiosurgery
International Nuclear Information System (INIS)
Perucha, M.; Leal, A.; Rincon, M.; Carrasco, E.
2001-01-01
The precision of radiosurgery treatment planning systems is limited by the approximations of their algorithms and by their dosimetric input data. This fact is especially important for small fields. The Monte Carlo method, however, is an accurate alternative, as it considers every aspect of particle transport. In this work an acoustic neurinoma is studied by comparing the dose distributions of a planning system and of Monte Carlo. Relative shifts have been measured and, furthermore, dose-volume histograms have been calculated for the target and adjacent organs at risk. (orig.)
A review of the facile (FN) method in particle transport theory
International Nuclear Information System (INIS)
Garcia, R.D.M.
1986-02-01
The facile (FN) method for solving particle transport problems is reviewed. The fundamentals of the method are summarized, recent developments are discussed, and several applications of the method are described in detail. (author) [pt]
Influence of coal slurry particle composition on pipeline hydraulic transportation behavior
Li-an, Zhao; Ronghuan, Cai; Tieli, Wang
2018-02-01
As a new mode of energy transportation, coal pipeline hydraulic transport can reduce the energy transportation cost and the fly-ash pollution associated with conventional coal transportation. In this study, the effect of average velocity, particle size and pumping time on the particle composition of coal particles during hydraulic conveying was investigated by ring-tube tests. The effects of particle-composition change on slurry viscosity, transmission resistance and critical sedimentation velocity were then studied based on the experimental data. The experimental and theoretical analysis indicates that changes in slurry particle composition can lead to changes in the viscosity, resistance and critical velocity of the slurry. Moreover, based on the previous studies, a critical-velocity calculation model for coal slurry is proposed.
International Nuclear Information System (INIS)
Arter, W.; Loughlin, M.J.
2009-01-01
Accurate calculation of the neutron transport through the shielding of the IFMIF test cell, defined by CAD, is a difficult task for several reasons. The ability of the powerful deterministic radiation transport code Attila to do this rapidly and reliably has been studied. Three models of increasing geometrical complexity were produced from the CAD using the CADfix software. A fourth model was produced to represent transport within the cell. The work also involved the conversion of the Vitenea-IEF database for high-energy neutrons into a format usable by Attila, and the conversion of a particle source specified in MCNP WSSA format to a form usable by Attila. The final model encompassed the entire test cell environment, with only minor modifications. On a state-of-the-art PC, Attila took approximately 3 h to perform the calculations, as a consequence of a careful mesh 'layering'. The results strongly suggest that Attila will be a valuable tool for modelling radiation transport in IFMIF, and for similar problems
Johnson, Daniel; Chen, Yong; Ahmad, Salahuddin
2015-01-01
The factors influencing carbon ion therapy can be predicted from accurate knowledge about the production of secondary particles from the interaction of carbon ions in water/tissue-like materials, and subsequently the interaction of the secondary particles in the same materials. The secondary particles may have linear energy transfer (LET) values that potentially increase the relative biological effectiveness of the beam. Our primary objective in this study was to classify and quantify the secondary particles produced, their dose-averaged LETs, and their dose contributions in the absorbing material. A 1 mm diameter carbon ion pencil beam with energies per nucleon of 155, 262, and 369 MeV was used in a Geant4 (GEometry ANd Tracking 4) Monte Carlo simulation to interact in a 27 L water phantom containing 3000 rectangular detector voxels. The dose-averaged LET and the dose contributions of primary and secondary particles were calculated from the simulation. The results of the simulations show that the secondary particles that contributed a major dose component had LETs below 600 keV/µm, while particles with LETs above 600 keV/µm contributed only <0.3% of the dose.
Directory of Open Access Journals (Sweden)
Daniel Johnson
2015-01-01
Full Text Available The factors influencing carbon ion therapy can be predicted from accurate knowledge about the production of secondary particles from the interaction of carbon ions in water/tissue-like materials, and subsequently the interaction of the secondary particles in the same materials. The secondary particles may have linear energy transfer (LET) values that potentially increase the relative biological effectiveness of the beam. Our primary objective in this study was to classify and quantify the secondary particles produced, their dose-averaged LETs, and their dose contributions in the absorbing material. A 1 mm diameter carbon ion pencil beam with energies per nucleon of 155, 262, and 369 MeV was used in a Geant4 (GEometry ANd Tracking 4) Monte Carlo simulation to interact in a 27 L water phantom containing 3000 rectangular detector voxels. The dose-averaged LET and the dose contributions of primary and secondary particles were calculated from the simulation. The results of the simulations show that the secondary particles that contributed a major dose component had LETs below 600 keV/µm, while particles with LETs above 600 keV/µm contributed only <0.3% of the dose.
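The dose-averaged LET reported in the study above is a dose-weighted mean over energy-deposition events; a minimal sketch with made-up numbers:

```python
def dose_averaged_let(doses, lets):
    """Dose-averaged LET: LET_d = sum(d_i * L_i) / sum(d_i), where d_i is
    the dose deposited by events of LET L_i (illustrative values only)."""
    return sum(d * L for d, L in zip(doses, lets)) / sum(doses)

# Example: 1 Gy deposited at 10 keV/um plus 3 Gy at 20 keV/um.
let_d = dose_averaged_let([1.0, 3.0], [10.0, 20.0])   # -> 17.5 keV/um
```

The weighting by dose (rather than by fluence or track length) is what makes rare high-LET events matter in proportion to the dose they actually deposit.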
Control of alpha-particle transport by ion cyclotron resonance heating
International Nuclear Information System (INIS)
Chang, C.S.; Imre, K.; Weitzner, H.; Colestock, P.
1990-01-01
In this paper, control of radial alpha-particle transport by ion cyclotron range of frequency (ICRF) waves is investigated in a large-aspect-ratio tokamak geometry. Spatially inhomogeneous ICRF wave energy with properly selected frequencies and wave numbers can induce fast convective transport of alpha particles at a speed of order v_α ~ (P_RF/n_α ε_0)ρ_p, where P_RF is the ICRF wave power density, n_α is the alpha-particle density, ε_0 is the alpha-particle birth energy, and ρ_p is the poloidal gyroradius of alpha particles at the birth energy. Application to an International Thermonuclear Experimental Reactor (ITER) plasma is studied, and possible antenna designs to control the alpha-particle flux are discussed
Chen, Xingxin; Wu, Zhonghan; Cai, Qipeng; Cao, Wei
2018-04-01
It is well established that seismic waves traveling through porous media stimulate fluid flow and accelerate particle transport. However, the mechanism remains poorly understood. To quantify the coupling effect of hydrodynamic force, transportation distance, and ultrasonic stimulation on particle transport and fate in porous media, laboratory experiments were conducted using custom-built ultrasonic-controlled soil column equipment. Three column lengths (23 cm, 33 cm, and 43 cm) were selected to examine the influence of transportation distance. Transport experiments were performed with 0 W, 600 W, 1000 W, 1400 W, and 1800 W of applied ultrasound, and flow rates of 0.065 cm/s, 0.130 cm/s, and 0.195 cm/s, to establish the roles of ultrasonic stimulation and hydrodynamic force. The laboratory results suggest that whilst ultrasonic stimulation does inhibit suspended-particle deposition and accelerate deposited-particle release, both hydrodynamic force and transportation distance are the principal controlling factors. The median particle diameter for the peak concentration was approximately 50% of that retained in the soil column. Simulated particle-breakthrough curves using extended traditional filtration theory effectively described the experimental curves, particularly the curves that exhibited a higher tailing concentration.
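The deposition, release and tailing behaviour described above can be caricatured with a minimal mobile-immobile model: explicit upwind advection with first-order deposition to, and release from, an immobile phase. This is a sketch under illustrative rates and dimensions, not the authors' extended filtration model:

```python
def breakthrough(n_cells=50, n_steps=400, dt=0.5, v=1.0, dx=1.0,
                 k_dep=0.01, k_rel=0.001, pulse_steps=40):
    """Outlet concentration history for a finite particle pulse.
    Release from the immobile (deposited) phase produces the long tail
    seen in particle breakthrough curves."""
    cr = v * dt / dx            # Courant number; must be <= 1 for stability
    c = [0.0] * n_cells         # mobile concentration per cell
    s = [0.0] * n_cells         # deposited (immobile) concentration
    outlet = []
    for step in range(n_steps):
        inflow = 1.0 if step < pulse_steps else 0.0   # finite input pulse
        new_c = []
        for i in range(n_cells):
            upstream = inflow if i == 0 else c[i - 1]
            ci = c[i] + cr * (upstream - c[i])        # upwind advection
            dep = k_dep * dt * ci                     # deposition to bed
            rel = k_rel * dt * s[i]                   # resuspension/release
            new_c.append(ci - dep + rel)
            s[i] += dep - rel
        c = new_c
        outlet.append(c[-1])
    return outlet

curve = breakthrough()
```

Slow release from the immobile phase is what sustains the elevated tailing concentration long after the main pulse has passed the outlet.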
DANTSYS: A diffusion accelerated neutral particle transport code system
Energy Technology Data Exchange (ETDEWEB)
Alcouffe, R.E.; Baker, R.S.; Brinkley, F.W.; Marr, D.R.; O'Dell, R.D.; Walters, W.F.
1995-06-01
The DANTSYS code package includes the following transport codes: ONEDANT, TWODANT, TWODANT/GQ, TWOHEX, and THREEDANT. The DANTSYS code package is a modular computer program package designed to solve the time-independent, multigroup discrete ordinates form of the Boltzmann transport equation in several different geometries. The modular construction of the package separates the input processing, the transport equation solving, and the post processing (or edit) functions into distinct code modules: the Input Module, one or more Solver Modules, and the Edit Module, respectively. The Input and Edit Modules are very general in nature and are common to all the Solver Modules. The ONEDANT Solver Module contains a one-dimensional (slab, cylinder, and sphere), time-independent transport equation solver using the standard diamond-differencing method for space/angle discretization. Also included in the package are Solver Modules named TWODANT, TWODANT/GQ, THREEDANT, and TWOHEX. The TWODANT Solver Module solves the time-independent two-dimensional transport equation using the diamond-differencing method for space/angle discretization. The authors have also introduced an adaptive weighted diamond differencing (AWDD) method for the spatial and angular discretization into TWODANT as an option. The TWOHEX Solver Module solves the time-independent two-dimensional transport equation on an equilateral triangle spatial mesh. The THREEDANT Solver Module solves the time-independent, three-dimensional transport equation for XYZ and RZΘ symmetries using both diamond differencing with set-to-zero fixup and the AWDD method. The TWODANT/GQ Solver Module solves the 2-D transport equation in XY and RZ symmetries using a spatial mesh of arbitrary quadrilaterals. The spatial differencing method is based upon the diamond differencing method with set-to-zero fixup, with changes to accommodate the generalized spatial meshing.
DANTSYS: A diffusion accelerated neutral particle transport code system
International Nuclear Information System (INIS)
Alcouffe, R.E.; Baker, R.S.; Brinkley, F.W.; Marr, D.R.; O'Dell, R.D.; Walters, W.F.
1995-06-01
The DANTSYS code package includes the following transport codes: ONEDANT, TWODANT, TWODANT/GQ, TWOHEX, and THREEDANT. The DANTSYS code package is a modular computer program package designed to solve the time-independent, multigroup discrete ordinates form of the Boltzmann transport equation in several different geometries. The modular construction of the package separates the input processing, the transport equation solving, and the post processing (or edit) functions into distinct code modules: the Input Module, one or more Solver Modules, and the Edit Module, respectively. The Input and Edit Modules are very general in nature and are common to all the Solver Modules. The ONEDANT Solver Module contains a one-dimensional (slab, cylinder, and sphere), time-independent transport equation solver using the standard diamond-differencing method for space/angle discretization. Also included in the package are Solver Modules named TWODANT, TWODANT/GQ, THREEDANT, and TWOHEX. The TWODANT Solver Module solves the time-independent two-dimensional transport equation using the diamond-differencing method for space/angle discretization. The authors have also introduced an adaptive weighted diamond differencing (AWDD) method for the spatial and angular discretization into TWODANT as an option. The TWOHEX Solver Module solves the time-independent two-dimensional transport equation on an equilateral triangle spatial mesh. The THREEDANT Solver Module solves the time-independent, three-dimensional transport equation for XYZ and RZΘ symmetries using both diamond differencing with set-to-zero fixup and the AWDD method. The TWODANT/GQ Solver Module solves the 2-D transport equation in XY and RZ symmetries using a spatial mesh of arbitrary quadrilaterals. The spatial differencing method is based upon the diamond differencing method with set-to-zero fixup, with changes to accommodate the generalized spatial meshing.
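The diamond-difference sweep plus source iteration at the heart of ONEDANT-style solvers can be sketched for one energy group in slab geometry. This is a hypothetical toy solver under illustrative data (uniform isotropic source, isotropic scattering, vacuum boundaries, S8 Gauss-Legendre quadrature), not DANTSYS code:

```python
import numpy as np

def slab_sn(n_cells=100, width=10.0, sigma_t=1.0, sigma_s=0.5,
            q=1.0, n_angles=8, tol=1e-8):
    """One-group S_N slab solver: diamond-differenced transport sweeps
    wrapped in source iteration (vacuum boundaries, isotropic source)."""
    dx = width / n_cells
    mu, w = np.polynomial.legendre.leggauss(n_angles)  # angular quadrature
    phi = np.zeros(n_cells)                            # scalar flux
    while True:
        s = 0.5 * (sigma_s * phi + q)     # isotropic emission per direction
        phi_new = np.zeros(n_cells)
        for m in range(n_angles):
            psi_in = 0.0                  # vacuum boundary condition
            cells = range(n_cells) if mu[m] > 0 else range(n_cells - 1, -1, -1)
            for i in cells:
                a = abs(mu[m]) / dx
                # diamond difference: psi_cell = (psi_in + psi_out) / 2
                psi_cell = (s[i] + 2.0 * a * psi_in) / (sigma_t + 2.0 * a)
                psi_in = 2.0 * psi_cell - psi_in   # outflow becomes inflow
                phi_new[i] += w[m] * psi_cell
        if np.max(np.abs(phi_new - phi)) < tol:    # scattering source converged
            return phi_new
        phi = phi_new
```

With a 10 mfp slab and sigma_s/sigma_t = 0.5, the midplane scalar flux approaches the infinite-medium value q/(sigma_t - sigma_s) = 2, a standard sanity check; the set-to-zero fixup mentioned in the abstract would guard against negative extrapolated fluxes in coarser meshes.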
International Nuclear Information System (INIS)
Adorno, Dominique Persano; Pizzolato, Nicola; Fazio, Claudio
2015-01-01
Within the context of higher education for science or engineering undergraduates, we present an inquiry-driven learning path aimed at developing a more meaningful conceptual understanding of the electron dynamics in semiconductors in the presence of applied electric fields. The electron transport in a nondegenerate n-type indium phosphide bulk semiconductor is modelled using a multivalley Monte Carlo approach. The main characteristics of the electron dynamics are explored under different values of the driving electric field, lattice temperature and impurity density. Simulation results are presented by following a question-driven path of exploration, starting from the validation of the model and moving up to reasoned inquiries about the observed characteristics of electron dynamics. Our inquiry-driven learning path, based on numerical simulations, represents a viable example of how to integrate a traditional lecture-based teaching approach with effective learning strategies, providing science or engineering undergraduates with practical opportunities to enhance their comprehension of the physics governing the electron dynamics in semiconductors. Finally, we present a general discussion about the advantages and disadvantages of using an inquiry-based teaching approach within a learning environment based on semiconductor simulations. (paper)
Energetic Particle Transport in Compact Quasi-axisymmetric Stellarators
International Nuclear Information System (INIS)
Redi, M.H.; Mynick, H.E.; Suewattana, M.; White, R.B.; Zarnstorff, M.C.; Isaev, M.Yu.; Mikhailov, M.I.; Subbotin, A.A.
1999-01-01
Hamiltonian coordinate, guiding-center code calculations of the confinement of suprathermal ions in quasi-axisymmetric stellarator (QAS) designs have been carried out to evaluate the attractiveness of compact configurations optimized for ballooning stability. A new stellarator particle-following code is used to predict ion loss rates and particle confinement for thermal and neutral beam ions in a small experiment with R = 145 cm, B = 1-2 T, and for alpha particles in a reactor-size device. In contrast to tokamaks, it is found that high edge poloidal flux is of limited value in improving ion confinement in QAS, since collisional pitch-angle scattering drives ions into ripple wells and stochastic field regions, where they are quickly lost. The necessity for reduced stellarator ripple fields is emphasized. The high neutral beam ion loss predicted for these configurations suggests that more interesting physics could be explored with an experiment of less constrained size and magnetic field geometry.
Nuclear fuel particles in the environment - characteristics, atmospheric transport and skin doses
International Nuclear Information System (INIS)
Poellaenen, R.
2002-05-01
In the present thesis, nuclear fuel particles are studied from the perspective of their characteristics, atmospheric transport and possible skin doses. These particles, often referred to as 'hot' particles, can be released into the environment, as has happened in past years, through human activities, incidents and accidents, such as the Chernobyl nuclear power plant accident in 1986. Nuclear fuel particles with a diameter of tens of micrometers, referred to here as large particles, may be hundreds of kilobecquerels in activity and even an individual particle may present a quantifiable health hazard. The detection of individual nuclear fuel particles in the environment, their isolation for subsequent analysis and their characterisation are complicated and require well-designed sampling and tailored analytical methods. In the present study, the need to develop particle analysis methods is highlighted. It is shown that complementary analytical techniques are necessary for proper characterisation of the particles. Methods routinely used for homogeneous samples may produce erroneous results if they are carelessly applied to radioactive particles. Large nuclear fuel particles are transported differently in the atmosphere compared with small particles or gaseous species. Thus, the trajectories of gaseous species are not necessarily appropriate for calculating the areas that may receive large particle fallout. A simplified model and a more advanced model based on the data on real weather conditions were applied in the case of the Chernobyl accident to calculate the transport of the particles of different sizes. The models were appropriate in characterising general transport properties but were not able to properly predict the transport of the particles with an aerodynamic diameter of tens of micrometers, detected at distances of hundreds of kilometres from the source, using only the current knowledge of the source term. Either the effective release height has been higher
Cai, Li; Tong, Meiping; Wang, Xueting; Kim, Hyunjung
2014-07-01
This study investigated the influence of two representative suspended clay particles, bentonite and kaolinite, on the transport of titanium dioxide nanoparticles (nTiO2) in saturated quartz sand in both NaCl (1 and 10 mM ionic strength) and CaCl2 solutions (0.1 and 1 mM ionic strength) at pH 7. The breakthrough curves of nTiO2 with bentonite or kaolinite were higher than those without the presence of clay particles in NaCl solutions, indicating that both types of clay particles increased nTiO2 transport in NaCl solutions. Moreover, the enhancement of nTiO2 transport was more significant when bentonite was present in nTiO2 suspensions relative to kaolinite. Similar to NaCl solutions, in CaCl2 solutions the breakthrough curves of nTiO2 with bentonite were also higher than those without clay particles, while the breakthrough curves of nTiO2 with kaolinite were lower than those without clay particles. Clearly, in CaCl2 solutions, the presence of bentonite in suspensions increased nTiO2 transport, whereas kaolinite decreased nTiO2 transport in quartz sand. The attachment of nTiO2 onto clay particles (both bentonite and kaolinite) was observed under all experimental conditions. The increased transport of nTiO2 in most experimental conditions (except for kaolinite in CaCl2 solutions) was attributed mainly to clay-facilitated nTiO2 transport. In contrast, the straining of larger nTiO2-kaolinite clusters contributed to the decreased transport (enhanced retention) of nTiO2 in divalent CaCl2 solutions when kaolinite particles were co-present in suspensions.
Influence of tube and particle diameter on heat transport in packed beds
Borkink, J.G.H.; Borkink, J.G.H.; Westerterp, K.R.
1992-01-01
Influence of the tube and particle diameter and shape, as well as their ratio, on the radial heat transport in packed beds has been studied. Heat transport experiments were performed with four different packings in three wall-cooled tubes, which differed in inner diameter only. Experimental values
International Nuclear Information System (INIS)
Torok, J.; Buckley, L.P.; Woods, B.L.
1989-01-01
Laboratory-scale lysimeter experiments were performed with simulated waste forms placed in candidate buffer materials which have been chosen for a low-level radioactive waste repository. Radionuclide releases into the effluent water and radionuclide capture by the buffer material were determined. The results could not be explained by traditional solution transport mechanisms, and transport by particles released from the waste form and/or transport by buffer particles were suspected as the dominant mechanism for radionuclide release from the lysimeters. To elucidate the relative contribution of particle and solution transport, the waste forms were replaced by a wafer of neutron-activated buffer soaked with selected soluble isotopes. Particle transport was determined by the movement of gamma-emitting neutron-activation products through the lysimeter. Solution transport was quantified by comparing the migration of soluble radionuclides relative to the transport of neutron activation products. The new approach for monitoring radionuclide migration in soil is presented. It facilitates the determination of most of the fundamental coefficients required to model the transport process
Observation of a spontaneous particle-transport barrier in the HL-2A tokamak.
Xiao, W W; Zou, X L; Ding, X T; Yao, L H; Feng, B B; Song, X M; Song, S D; Zhou, Y; Liu, Z T; Yuan, B S; Sun, H J; Ji, X Q; Gao, Y D; Li, Y G; Yan, L W; Yang, Q W; Liu, Yi; Dong, J Q; Duan, X R; Liu, Yong; Pan, C H
2010-05-28
Using profile analysis, density perturbation transport analysis, and Doppler reflectometry measurements, a spontaneous and steady-state particle-transport barrier has been observed for the first time in Ohmic plasmas in the HL-2A tokamak, with no externally applied momentum or particle input except the gas puffing. A density threshold has been found for the observation of the barrier. The particle diffusivity profile is well-like, and the convection is found to be inward outside the well and outward inside the well. The formation of the barrier coincides with the transition between the trapped electron mode and the ion temperature gradient driven mode.
Mechanism for Particle Transport and Size Sorting via Low-Frequency Vibrations
Sherrit, Stewart; Scott, James S.; Bar-Cohen, Yoseph; Badescu, Mircea; Bao, Xiaoqi
2010-01-01
There is a need for effective sample handling tools to deliver and sort particles for analytical instruments that are planned for use in future NASA missions. Specifically, a need exists for a compact mechanism that allows transporting and sieving particle sizes of powdered cuttings and soil grains that may be acquired by sampling tools such as a robotic scoop or drill. The required tool needs to be low mass and compact to operate from such platforms as a lander or rover. This technology also would be applicable to sample handling when transporting samples to analyzers and sorting particles by size.
Cai, Zhengqing; Fu, Jie; Liu, Wen; Fu, Kunming; O'Reilly, S E; Zhao, Dongye
2017-01-15
This work investigated the effects of three model oil dispersants (Corexit EC9527A, Corexit EC9500A and SPC1000) on the settling of fine sediment particles and on particle-facilitated distribution and transport of oil components in sediment-seawater systems. All three dispersants enhanced settling of sediment particles. The nonionic surfactants (Tween 80 and Tween 85) played key roles in promoting particle aggregation. Yet, the effects varied with environmental factors (pH, salinity, DOM, and temperature). The strongest dispersant effect was observed at neutral or alkaline pH and in the salinity range of 0-3.5 wt%. The presence of water-accommodated oil and dispersed oil accelerated settling of the particles. Total petroleum hydrocarbons in the sediment phase were increased from 6.9% to 90.1% in the presence of Corexit EC9527A, and from 11.4% to 86.7% for PAHs. The information is useful for understanding the roles of oil dispersants in the formation of oil-sediment aggregates and in sediment-facilitated transport of oil and PAHs in marine ecosystems.
Dunn, William L
2012-01-01
Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle problem" is also treated.
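The Buffon's needle experiment mentioned above is easy to reproduce in a few lines. The following sketch (not taken from the book; all names and defaults are illustrative) estimates π from the needle-crossing probability 2l/(πd):

```python
import math
import random

def estimate_pi_buffon(n_throws, needle_len=1.0, line_gap=1.0, seed=42):
    """Buffon's needle: drop a needle of length l onto a floor ruled with
    parallel lines a distance d apart (l <= d). The crossing probability
    is 2*l/(pi*d), so pi can be estimated as 2*l*n / (d * crossings)."""
    rng = random.Random(seed)
    crossings = 0
    for _ in range(n_throws):
        # by symmetry, sample the centre-to-nearest-line distance and the
        # needle angle over a quarter period only
        x = rng.uniform(0.0, line_gap / 2.0)
        theta = rng.uniform(0.0, math.pi / 2.0)
        if x <= (needle_len / 2.0) * math.sin(theta):
            crossings += 1
    return 2.0 * needle_len * n_throws / (line_gap * crossings)

print(estimate_pi_buffon(1_000_000))  # close to math.pi
```

The statistical error shrinks as 1/sqrt(n_throws), the usual Monte Carlo convergence rate discussed in the text.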
The Random Ray Method for neutral particle transport
Energy Technology Data Exchange (ETDEWEB)
Tramm, John R., E-mail: jtramm@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science Engineering, 77 Massachusetts Avenue, 24-107, Cambridge, MA 02139 (United States); Argonne National Laboratory, Mathematics and Computer Science Department 9700 S Cass Ave, Argonne, IL 60439 (United States); Smith, Kord S., E-mail: kord@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science Engineering, 77 Massachusetts Avenue, 24-107, Cambridge, MA 02139 (United States); Forget, Benoit, E-mail: bforget@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science Engineering, 77 Massachusetts Avenue, 24-107, Cambridge, MA 02139 (United States); Siegel, Andrew R., E-mail: siegela@mcs.anl.gov [Argonne National Laboratory, Mathematics and Computer Science Department 9700 S Cass Ave, Argonne, IL 60439 (United States)
2017-08-01
A new approach to solving partial differential equations (PDEs) based on the method of characteristics (MOC) is presented. The Random Ray Method (TRRM) uses a stochastic rather than deterministic discretization of characteristic tracks to integrate the phase space of a problem. TRRM is potentially applicable in a number of transport simulation fields where long characteristic methods are used, such as neutron transport and gamma ray transport in reactor physics as well as radiative transfer in astrophysics. In this study, TRRM is developed and then tested on a series of exemplar reactor physics benchmark problems. The results show extreme improvements in memory efficiency compared to deterministic MOC methods, while also reducing algorithmic complexity, allowing for a sparser computational grid to be used while maintaining accuracy.
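TRRM itself is beyond a short sketch, but its core idea, replacing a fixed deterministic set of characteristic tracks with randomly sampled ones, can be illustrated on a toy attenuation problem (all parameters and names here are illustrative assumptions, not from the paper):

```python
import math
import random

def uncollided_flux_random_tracks(sigma_t=0.5, slab=4.0, n_tracks=100_000,
                                  seed=7):
    """Average attenuation integral over a purely absorbing 1D slab,
    computed by sampling track starting points at random instead of
    laying down a fixed, deterministic track grid. Along each track the
    integral is known in closed form:
    int_0^s exp(-sigma_t*u) du = (1 - exp(-sigma_t*s)) / sigma_t."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_tracks):
        x = rng.uniform(0.0, slab)      # random track starting point
        s = slab - x                    # track length to the boundary
        total += (1.0 - math.exp(-sigma_t * s)) / sigma_t
    return total / n_tracks

print(uncollided_flux_random_tracks())  # close to the analytic mean 1.1353
```

As in TRRM, no track layout needs to be stored between iterations; each batch of random tracks is generated on the fly, which is the source of the memory savings the abstract describes.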
Third-order TRANSPORT: A computer program for designing charged particle beam transport systems
International Nuclear Information System (INIS)
Carey, D.C.; Brown, K.L.; Rothacker, F.
1995-05-01
TRANSPORT has been in existence in various evolutionary versions since 1963. The present version of TRANSPORT is a first-, second-, and third-order matrix multiplication computer program intended for the design of static-magnetic beam transport systems. This report discusses the following topics on TRANSPORT: Mathematical formulation of TRANSPORT; input format for TRANSPORT; summaries of TRANSPORT elements; preliminary specifications; description of the beam; physical elements; other transformations; assembling beam lines; operations; variation of parameters for fitting; and available constraints -- the FIT command
Collet, Pierre; Eckmann, Jean-Pierre; Mejía-Monasterio, Carlos
2009-07-01
We study heat transport in a one-dimensional chain of a finite number N of identical cells, coupled at its boundaries to stochastic particle reservoirs. At the center of each cell, tracer particles collide with fixed scatterers, exchanging momentum. In a recent paper (Collet and Eckmann in Commun. Math. Phys. 287:1015, 2009), a spatially continuous version of this model was derived in a scaling regime where the scattering probability of the tracers is γ ~ 1/N, corresponding to the Grad limit. A Boltzmann-like equation describing the transport of heat was obtained. In this paper, we show numerically that the Boltzmann description obtained in Collet and Eckmann (Commun. Math. Phys. 287:1015, 2009) is indeed a bona fide limit of the particle model. Furthermore, we study the heat transport of the model when the scattering probability is 1, corresponding to deterministic dynamics. Thought of as a lattice model in which particles jump between different scatterers, the motion is persistent, with a persistence probability determined by the mass ratio among particles and scatterers, and a waiting time probability distribution with algebraic tails. We find that the heat and particle currents scale slower than 1/N, implying that this model exhibits anomalous heat and particle transport.
Energy Technology Data Exchange (ETDEWEB)
Dritselis, C.D., E-mail: dritseli@mie.uth.g [Department of Mechanical Engineering, University of Thessaly, Athens Avenue, 38334 Volos (Greece); Sarris, I.E.; Fidaros, D.K.; Vlachos, N.S. [Department of Mechanical Engineering, University of Thessaly, Athens Avenue, 38334 Volos (Greece)
2011-04-15
The effect of Lorentz force on particle transport and deposition is studied by using direct numerical simulation of turbulent channel flow of electrically conducting fluids combined with discrete particle simulation of the trajectories of uncharged, spherical particles. The magnetohydrodynamic equations for fluid flows at low magnetic Reynolds numbers are adopted. The particle motion is determined by the drag, added mass, and pressure gradient forces. Results are obtained for flows with particle ensembles of various densities and diameters in the presence of streamwise, wall-normal or spanwise magnetic fields. It is found that the particle dispersion in the wall-normal and spanwise directions is decreased due to the changes of the underlying fluid turbulence by the Lorentz force, while it is increased in the streamwise direction. The particle accumulation in the near-wall region is diminished in the magnetohydrodynamic flows. In addition, the tendency of small inertia particles to concentrate preferentially in the low-speed streaks near the walls is strengthened with increasing Hartmann number. The particle transport by turbophoretic drift and turbulent diffusion is damped by the magnetic field and, consequently, particle deposition is reduced.
Energy Technology Data Exchange (ETDEWEB)
Okada, Kazuya [School of Akita Prefectural University, Yurihonjo (Japan); Satoh, Akira, E-mail: asatoh@akita-pu.ac.jp [Department of Machine Intelligence and System Engineering, Akita Prefectural University, Yurihonjo (Japan)
2017-09-01
Highlights: • Monte Carlo simulations have been employed for the aggregate structures. • Brownian dynamics simulations have been employed for the magneto-rheology. • Even a weak shear flow induces a significant regime change in the aggregates. • A strong external magnetic field drastically changes the aggregates. • The dependence of the viscosity on these factors is governed in a complex manner. - Abstract: In the present study, we address a suspension composed of ferromagnetic rod-like particles to elucidate a regime change in the aggregate structures and the magneto-rheological characteristics. Monte Carlo simulations have been employed for investigating the aggregate structures in thermodynamic equilibrium, and Brownian dynamics simulations for magneto-rheological features in a simple shear flow. The main results obtained here are summarized as follows. For the case of thermodynamic equilibrium, the rod-like particles aggregate to form thick chain-like clusters and the neighboring clusters incline in opposite directions. If the external magnetic field is increased, the thick chain-like clusters in the magnetic field direction grow thicker by adsorbing the neighboring clusters that incline in the opposite direction. Hence, a significant phase change in the particle aggregates is not induced by an increase in the magnetic field strength. For the case of a simple shear flow, even a weak shear flow induces a significant regime change from the thick chain-like clusters of thermodynamic equilibrium into wall-like aggregates composed of short raft-like clusters. A strong external magnetic field drastically changes these aggregates into wall-like aggregates composed of thick chain-like clusters rather than the short raft-like clusters. The internal structure of these aggregates is not strongly influenced by a shear flow, and the formation of the short raft-like clusters is maintained inside the aggregates. The main contribution to the net viscosity is the
Methane Bubbles Transport Particles From Contaminated Sediment to a Lake Surface
Delwiche, K.; Hemond, H.
2017-12-01
Methane bubbling from aquatic sediments has long been known to transport carbon to the atmosphere, but new evidence presented here suggests that methane bubbles also transport particulate matter to a lake surface. This transport pathway is of particular importance in lakes with contaminated sediments, as bubble transport could increase human exposure to toxic metals. The Upper Mystic Lake in Arlington, MA has a documented history of methane bubbling and sediment contamination by arsenic and other heavy metals, and we have conducted laboratory and field studies demonstrating that methane bubbles are capable of transporting sediment particles over depths as great as 15 m in Upper Mystic Lake. Methane bubble traps were used in-situ to capture particles adhered to bubble interfaces, and to relate particle mass transport to bubble flux. Laboratory studies were conducted in a custom-made 15 m tall water column to quantify the relationship between water column height and the mass of particulate transport. We then couple this particle transport data with historical estimates of ebullition from Upper Mystic Lake to quantify the significance of bubble-mediated particle transport to heavy metal cycling within the lake. Results suggest that methane bubbles can represent a significant pathway for contaminated sediment to reach surface waters even in relatively deep water bodies. Given the frequent co-occurrence of contaminated sediments and high bubble flux rates, and the potential for human exposure to heavy metals, it will be critical to study the significance of this transport pathway for a range of sediment and contaminant types.
International Nuclear Information System (INIS)
Shi Feng; Wang Dezhen; Ren Chunsheng
2008-01-01
Atmospheric pressure discharge nonequilibrium plasmas have been applied to plasma processing with modern technology. Simulations of discharge in pure Ar and pure He gases at one atmospheric pressure, driven by a high-voltage trapezoidal nanosecond pulse, have been performed using a one-dimensional particle-in-cell Monte Carlo collision (PIC-MCC) model coupled with a renormalization and weighting procedure (mapping algorithm). Numerical results show that the characteristics of discharge in the two inert gases are very similar. Local reverse-field effects and double-peak distributions of the charged-particle density are observed. The electron and ion energy distribution functions are also examined, and the discharge is interpreted in terms of the growth in particle number during the ionization avalanche. Furthermore, the total current density is found to be a function of time but not of position.
Liu, Zhongqiu; Li, Linmin; Li, Baokuan; Jiang, Maofa
2014-07-01
The current study developed a coupled computational model to simulate the transient fluid flow, solidification, and particle transport processes in a slab continuous-casting mold. Transient flow of molten steel in the mold is calculated using large eddy simulation. An enthalpy-porosity approach is used for the analysis of solidification processes. The transport of bubbles and non-metallic inclusions inside the liquid pool is calculated using the Lagrangian approach based on the transient flow field. A criterion of particle entrapment in the solidified shell is developed using the user-defined functions of FLUENT software (ANSYS, Inc., Canonsburg, PA). The predicted results of this model are compared with measurements from ultrasonic testing of the rolled steel plates and from water model experiments. The transient asymmetrical flow pattern inside the liquid pool exhibits quite satisfactory agreement with the corresponding measurements. The predicted complex instantaneous velocity field is composed of various small recirculation zones and multiple vortices. The transport of particles inside the liquid pool and the entrapment of particles in the solidified shell are not symmetric. The Magnus force can reduce the entrapment ratio of particles in the solidified shell, especially for smaller particles, but the effect is not obvious. The Marangoni force can play an important role in controlling the motion of particles, and noticeably increases the entrapment ratio of particles in the solidified shell.
Surface transport and stable trapping of particles and cells by an optical waveguide loop.
Hellesø, Olav Gaute; Løvhaugen, Pål; Subramanian, Ananth Z; Wilkinson, James S; Ahluwalia, Balpreet Singh
2012-09-21
Waveguide trapping has emerged as a useful technique for parallel and planar transport of particles and biological cells and can be integrated with lab-on-a-chip applications. However, particles trapped on waveguides are continuously propelled forward along the surface of the waveguide. This limits the practical usability of the waveguide trapping technique with other functions (e.g. analysis, imaging) that require particles to be stationary during diagnosis. In this paper, an optical waveguide loop with an intentional gap at the centre is proposed to hold propelled particles and cells. The waveguide acts as a conveyor belt to transport and deliver the particles/cells towards the gap. At the gap, the diverging light fields hold the particles at a fixed position. The proposed waveguide design is numerically studied and experimentally implemented. The optical forces on the particle at the gap are calculated using the finite element method. Experimentally, the method is used to transport and trap micro-particles and red blood cells at the gap with varying separations. The waveguides are only 180 nm thick and thus could be integrated with other functions on the chip, e.g. microfluidics or optical detection, to make an on-chip system for single cell analysis and to study the interaction between cells.
Alhakeem, Eyad; Zavgorodni, Sergei
2018-01-01
The purpose of this study was to evaluate the latent variance (LV) of Varian TrueBeam photon phase-space files (PSF) for open 10 × 10 cm2 and small stereotactic fields and estimate the number of phase spaces required to be summed up in order to maintain sub-percent LV in Monte Carlo (MC) dose calculations. BEAMnrc/DOSXYZnrc software was used to transport particles from Varian phase-space files (PSFA) through the secondary collimators. Transported particles were scored into another phase-space located under the jaws (PSFB), or transported further through the cone collimators and scored straight below, forming PSFC. Phase-space files (PSFB) were scored for 6 MV-FFF, 6 MV, 10 MV-FFF, 10 MV and 15 MV beams with 10 × 10 cm2 field size, and PSFC were scored for 6 MV beam under circular cones of 0.13, 0.25, 0.35, and 1 cm diameter. Both PSFB and PSFC were transported into a water phantom with particle recycling number ranging from 10 to 1000. For 10 × 10 cm2 fields 0.5 × 0.5 × 0.5 cm3 voxels were used to score the dose, whereas the dose was scored in 0.1 × 0.1 × 0.5 cm3 voxels for beams collimated with small cones. In addition, for small 0.25 cm diameter cone-collimated 6 MV beam, phantom voxel size varied as 0.02 × 0.02 × 0.5 cm3, 0.05 × 0.05 × 0.5 cm3 and 0.1 × 0.1 × 0.5 cm3. Dose variances were scored in all cases and LV evaluated as per Sempau et al. For the 10 × 10 cm2 fields calculated LVs were greatest at the phantom surface and decreased with depth until they reached a plateau at 5 cm depth. LVs were found to be 0.54%, 0.96%, 0.35%, 0.69% and 0.57% for the 6 MV-FFF, 6 MV, 10 MV-FFF, 10 MV and 15 MV energies, respectively at the depth of 10 cm. For the 6 MV phase-space collimated with cones of 0.13, 0.25, 0.35, 1.0 cm diameter, the LVs calculated at 1.5 cm depth were 75.6%, 25.4%, 17
Transport and fate of microplastic particles in wastewater treatment plants.
Carr, Steve A; Liu, Jin; Tesoro, Arnold G
2016-03-15
Municipal wastewater treatment plants (WWTPs) are frequently suspected as significant point sources or conduits of microplastics to the environment. To directly investigate these suspicions, effluent discharges from seven tertiary plants and one secondary plant in Southern California were studied. The study also looked at influent loads, particle size/type, conveyance, and removal at these wastewater treatment facilities. Over 0.189 million liters of effluent at each of the seven tertiary plants were filtered using an assembled stack of sieves with mesh sizes between 400 and 45 μm. Additionally, the surface of 28.4 million liters of final effluent at three tertiary plants was skimmed using a 125 μm filtering assembly. The results suggest that tertiary effluent is not a significant source of microplastics and that these plastic pollutants are effectively removed during the skimming and settling treatment processes. However, at a downstream secondary plant, an average of one micro-particle in every 1.14 thousand liters of final effluent was counted. The majority of microplastics identified in this study had a profile (color, shape, and size) similar to the blue polyethylene particles present in toothpaste formulations. Existing treatment processes were determined to be very effective for removal of microplastic contaminants entering typical municipal WWTPs.
Suspended particle transport through constriction channel with Brownian motion
Hanasaki, Itsuo; Walther, Jens H.
2017-08-01
It is well known that translocation events of a polymer or rod through pores or narrower parts of micro- and nanochannels have a stochastic nature due to the Brownian motion. However, it is not clear whether the objects of interest need to have a larger size than the entrance to exhibit the deviation from the dynamics of the surrounding fluid. We show by numerical analysis that the particle injection into the narrower part of the channel is affected by thermal fluctuation, where the particles have spherical symmetry and are smaller than the height of the constriction. The Péclet number (Pe) is the order parameter that governs the phenomena, which clarifies the spatio-temporal significance of Brownian motion compared to hydrodynamics. Furthermore, we find that there exists an optimal condition of Pe to attain the highest flow rate of particles relative to the dispersant fluid flow. Our finding is important in science and technology from nanopore DNA sequencers and lab-on-a-chip devices to filtration by porous materials and chromatography.
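The Péclet number that serves as the order parameter here can be computed directly from channel and particle properties. A sketch assuming a Stokes-Einstein diffusivity for a spherical particle in water (all numerical values are illustrative, not from the paper):

```python
import math

def peclet_number(flow_speed, channel_height, particle_radius,
                  temperature=300.0, viscosity=1.0e-3):
    """Péclet number Pe = U*H/D for a spherical Brownian particle.

    D is the Stokes-Einstein diffusivity D = kT / (6*pi*mu*a).
    Pe >> 1 means hydrodynamic advection dominates; Pe of order 1
    or below means Brownian motion is significant at the
    constriction entrance.
    """
    k_B = 1.380649e-23  # Boltzmann constant, J/K
    D = k_B * temperature / (6.0 * math.pi * viscosity * particle_radius)
    return flow_speed * channel_height / D

# A 100 nm diameter particle in water moving at 1 mm/s through a 1 um channel.
print(f"Pe = {peclet_number(1e-3, 1e-6, 50e-9):.1f}")
```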
International Nuclear Information System (INIS)
Gereben, O; Pusztai, L; McGreevy, R L
2010-01-01
A new reverse Monte Carlo (RMC) method has been developed for creating three-dimensional structures in agreement with small angle scattering data. Extensive tests, using computer generated quasi-experimental data for aggregation processes via constrained RMC and Langevin molecular dynamics, were performed. The software is capable of fitting several consecutive time frames of scattering data, and movie-like visualization of the structure (and its evolution) either during or after the simulation is also possible.
Modeling of reactive transport with particle tracking and kernel density estimators
Rahbaralam, Maryam
2018-01-01
Random walk particle tracking methods are a computationally efficient family of methods to solve reactive transport problems. While the number of particles in most realistic applications is in the order of 10^6 - 10^9, the number of reactive molecules even in diluted systems might be in the order of fractions of the Avogadro number. Thus, each particle actually represents a group of potentially reactive molecules. The use of a low number of particles may result not only in loss of accuracy, b...
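The combination described, random-walk particle tracking with a kernel density estimator for the concentration field, can be sketched in a few lines. An illustrative 1D version (parameters and function names are assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_walk_step(x, velocity, diffusivity, dt):
    """One advective-diffusive step for all particles at once:
    x' = x + v*dt + sqrt(2*D*dt)*xi, with xi ~ N(0, 1)."""
    return x + velocity * dt + np.sqrt(2.0 * diffusivity * dt) * rng.standard_normal(x.size)

def kde_concentration(x, grid, bandwidth):
    """Gaussian kernel density estimate of the particle plume.

    Replacing histogram binning with a KDE smooths the concentration
    field, which reduces noise when each particle stands in for a
    large number of potentially reactive molecules.
    """
    u = (grid[:, None] - x[None, :]) / bandwidth
    return np.exp(-0.5 * u**2).sum(axis=1) / (x.size * bandwidth * np.sqrt(2.0 * np.pi))

# Advect a pulse of 5000 particles and reconstruct its concentration.
x = np.zeros(5000)
for _ in range(100):
    x = random_walk_step(x, velocity=1.0, diffusivity=0.1, dt=0.01)
c = kde_concentration(x, grid=np.linspace(0, 2, 101), bandwidth=0.1)
print(c.sum() * 0.02)  # the KDE is a density, so this is close to 1
```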
Atmospheric fate and transport of fine volcanic ash: Does particle shape matter?
White, C. M.; Allard, M. P.; Klewicki, J.; Proussevitch, A. A.; Mulukutla, G.; Genareau, K.; Sahagian, D. L.
2013-12-01
Volcanic ash presents hazards to infrastructure, agriculture, and human and animal health. In particular, given the economic importance of intercontinental aviation, understanding how long ash is suspended in the atmosphere, and how far it is transported has taken on greater importance. Airborne ash abrades the exteriors of aircraft, enters modern jet engines and melts while coating interior engine parts causing damage and potential failure. The time fine ash stays in the atmosphere depends on its terminal velocity. Existing models of ash terminal velocities are based on smooth, quasi-spherical particles characterized by Stokes velocity. Ash particles, however, violate the various assumptions upon which Stokes flow and associated models are based. Ash particles are non-spherical and can have complex surface and internal structure. This suggests that particle shape may be one reason that models fail to accurately predict removal rates of fine particles from volcanic ash clouds. The present research seeks to better parameterize predictive models for ash particle terminal velocities, diffusivity, and dispersion in the atmospheric boundary layer. The fundamental hypothesis being tested is that particle shape irreducibly impacts the fate and transport properties of fine volcanic ash. Pilot studies, incorporating modeling and experiments, are being conducted to test this hypothesis. Specifically, a statistical model has been developed that can account for actual volcanic ash size distributions, complex ash particle geometry, and geometry variability. Experimental results are used to systematically validate and improve the model. The experiments are being conducted at the Flow Physics Facility (FPF) at UNH. Terminal velocities and dispersion properties of fine ash are characterized using still air drop experiments in an unconstrained open space using a homogenized mix of source particles. Dispersion and sedimentation dynamics are quantified using particle image
Energy Technology Data Exchange (ETDEWEB)
Pazianotto, Mauricio Tizziani; Goncalez, Odair Lelis; Federico, Claudio Antonio [Centro Tecnico Aeroespacial (IEAv/CTA), Sao Jose dos Campos, SP (Brazil). Inst. de Estudos Avancados; Carlson, Brett Vern [Centro Tecnico Aeroespacial (ITA/CTA), Sao Jose dos Campos, SP (Brazil). Inst. Tecnologico de Aeronautica
2010-07-01
Full text: The Institute for Advanced Studies (IEAv) is developing activities to study the dose levels of ionizing radiation from cosmic rays (CR) received by aircraft crews, sensitive equipment (on-board computers, for example) and embedded electronics in Brazilian airspace. Neutrons generated by the interaction of CR with the atmosphere are the dominant particles in the dose accumulation in electronic circuits and aircraft crews at flight altitude. Their production has a very broad energy spectrum, ranging from thermal neutrons (0.025 eV) to neutrons of several hundreds of MeV, making their detection a very difficult process. To observe the temporal variation in flux during the measurements, a detector of the Long Counter (LC) type is being used. This detector is designed to measure the one-directional flux of neutrons with constant response over a wide energy range (thermal to 20 MeV). However, to measure cosmic rays, whose flux is non-directional, the dependence of the response on the angle of incidence, as well as on energy, should be properly investigated. The objective of this study is to assess the angular response of the neutron detector (Long Counter) using the code MCNP5 (Monte Carlo N-Particle) and to compare it with the experimental data previously obtained with a {sup 241}Am-Be source at a distance of 1.66 m from the geometric center of the detector, varying the angle of incidence from 0° to 360° in intervals of 15°. The simulation was performed by modeling in detail the structure and materials of the LC, as well as the experimental arrangement for irradiation. The results of the simulation present reasonable agreement with the experimental data. This agreement shows that the modeling of the geometry of the source-detector system is adequate. The next step is to develop a model of neutron detection for the higher energies present in cosmic radiation fields, for which experimental calibration is not so easily achievable. (author)
Monte Carlo Methods in ICF (LIRPP Vol. 13)
Zimmerman, George B.
2016-10-01
Monte Carlo methods appropriate to simulate the transport of x-rays, neutrons, ions and electrons in Inertial Confinement Fusion targets are described and analyzed. The Implicit Monte Carlo method of x-ray transport handles symmetry within indirect-drive ICF hohlraums well, but can be improved ~50% in efficiency by angular biasing the x-rays towards the fuel capsule. Accurate simulation of thermonuclear burn and burn diagnostics involves detailed particle source spectra, charged particle ranges, in-flight reaction kinematics, corrections for bulk and thermal Doppler effects, and variance reduction to obtain adequate statistics for rare events. It is found that the effects of angular Coulomb scattering must be included in models of charged particle transport through heterogeneous materials.
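Angular biasing of the kind mentioned for hohlraum x-rays can be illustrated with a simple linearly biased direction sampler: emission is skewed toward the capsule, and per-particle weights restore the unbiased expectation. A generic variance-reduction sketch, not the Implicit Monte Carlo implementation:

```python
import numpy as np

rng = np.random.default_rng(7)

def sample_biased_cosine(b, n):
    """Sample mu = cos(theta) from the linearly biased pdf
    p(mu) = (1 + b*mu)/2 on [-1, 1]; 0 < b < 1 pushes emission
    toward mu = +1 (e.g. toward the fuel capsule). Each sample
    carries the weight w = p_analog / p(mu) = (1/2) / p(mu), so
    weighted tallies reproduce the isotropic (analog) expectation.
    """
    xi = rng.random(n)
    # Analytic inversion of the CDF F(mu) = (mu+1)/2 + b*(mu^2-1)/4.
    mu = (-1.0 + np.sqrt((1.0 - b)**2 + 4.0 * b * xi)) / b
    w = 1.0 / (1.0 + b * mu)
    return mu, w

mu, w = sample_biased_cosine(0.8, 200_000)
print(f"<mu> = {mu.mean():.3f}, <w> = {w.mean():.3f}")  # forward-peaked, mean weight near 1
```

The mean of mu under the biased pdf is b/3, so samples cluster toward the capsule, while the mean weight stays at 1, the defining property of an unbiased variance-reduction scheme.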
Particle transport and gas feed during gun injection
International Nuclear Information System (INIS)
Fowler, T K.
1999-01-01
It is shown that ion and neutral transport during gun injection tends to equalize the density in the spheromak to that in the open-line current channel. Since a gun operating at or near the ion saturation current requires a minimum density, because of transport these gun requirements also determine a minimum density in the spheromak that increases as the field increases. Hence attaining high fields by gun injection sets lower limits on the density, which in turn limits the temperature of the plasma and increases its ohmic resistance. Estimates of these effects are given using 0-D models calibrated to CTX, as guidance to 2-D UEDGE calculations in progress. For gun power levels in SSPX and the Pulsed Spheromak reactor, we find that buildup persists to the highest field levels of interest
International Nuclear Information System (INIS)
Kotiluoto, P.
2007-05-01
A new deterministic three-dimensional neutral and charged particle transport code, MultiTrans, has been developed. In the novel approach, the adaptive tree multigrid technique is used in conjunction with simplified spherical harmonics approximation of the Boltzmann transport equation. The development of the new radiation transport code started in the framework of the Finnish boron neutron capture therapy (BNCT) project. Since the application of the MultiTrans code to BNCT dose planning problems, the testing and development of the MultiTrans code has continued in conventional radiotherapy and reactor physics applications. In this thesis, an overview of different numerical radiation transport methods is first given. Special features of the simplified spherical harmonics method and the adaptive tree multigrid technique are then reviewed. The usefulness of the new MultiTrans code has been indicated by verifying and validating the code performance for different types of neutral and charged particle transport problems, reported in separate publications. (orig.)
Particle transport in a wave spectrum with a thermal distribution of Larmor radii
Martinell, Julio; Kryukov, Nikolay; Del Castillo-Negrete, Diego
2017-10-01
Test particle E × B transport is studied due to an infinite spectrum of drift waves in two dimensions using a Hamiltonian approach, which can be reduced to a 2D mapping. Finite Larmor radius (FLR) effects are included by taking a gyroaverage. When the wave amplitude is increased there is a gradual transition to chaos, but the chaos level is reduced as the FLR grows, implying that fast particles are better confined. The fraction of confined particles is found to be reduced as the wave amplitude rises. The statistical properties of transport are studied, finding that, in the absence of a background flow, it is diffusive with a Gaussian PDF when all particles have the same FLR. In contrast, for a thermal FLR distribution, the PDF is non-Gaussian but the transport remains diffusive. A theoretical explanation of this is given, showing that a superposition of Gaussians produces a PDF with long tails. When a background flow is introduced that varies monotonically with radius, the transport becomes strongly super-diffusive due to the appearance of long Lévy flights which dominate the particle statistics. The PDF develops long tails as the flow strength is increased. The particle variance scales as σ² ∼ t³ in the chaotic regime but reduces to ballistic (∼ t²) for low chaos. Work funded by PAPIIT-UNAM project IN109115.
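The way an FLR gyroaverage tames wave-driven chaos can be shown with a toy area-preserving map: the kick strength a particle feels is the wave amplitude multiplied by the gyroaverage factor J0(k·rho), which decays as the Larmor radius grows. A sketch of this mechanism only, not the paper's actual mapping:

```python
import numpy as np

def gyroaverage(z, n=256):
    """Gyroaverage factor <cos(z*sin(theta))> over gyrophase theta,
    which equals the Bessel function J0(z): it is 1 at z = 0 and
    decays as z = k*rho (wavenumber times Larmor radius) grows."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return np.cos(z * np.sin(theta)).mean()

def exb_wave_map(x, y, amplitude, k_rho, n_steps=1000):
    """A toy wave-kick map for E x B test-particle motion: FLR enters
    only through the gyroaveraged kick strength, so particles with a
    large Larmor radius feel a weaker wave and stay better confined.
    """
    a_eff = amplitude * gyroaverage(k_rho)
    xs = np.empty(n_steps + 1)
    for i in range(n_steps):
        xs[i] = x
        x = x + a_eff * np.sin(y)      # gyroaveraged E x B kick
        y = (y + x) % (2.0 * np.pi)    # phase advance
    xs[n_steps] = x
    return xs

# Larger Larmor radius -> weaker effective kick -> reduced excursion.
print(np.abs(exb_wave_map(0.0, 2.0, 1.5, 0.0)).max(),
      np.abs(exb_wave_map(0.0, 2.0, 1.5, 2.0)).max())
```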
Energy Technology Data Exchange (ETDEWEB)
Wan Chan Tseung, H; Ma, J; Beltran, C [Mayo Clinic, Rochester, MN (United States)
2014-06-15
Purpose: To build a GPU-based Monte Carlo (MC) simulation of proton transport with detailed modeling of elastic and non-elastic (NE) proton-nucleus interactions, for use in a very fast and cost-effective proton therapy treatment plan verification system. Methods: Using the CUDA framework, we implemented kernels for the following tasks: (1) Simulation of beam spots from our possible scanning nozzle configurations, (2) Proton propagation through CT geometry, taking into account nuclear elastic and multiple scattering, as well as energy straggling, (3) Bertini-style modeling of the intranuclear cascade stage of NE interactions, and (4) Simulation of nuclear evaporation. To validate our MC, we performed: (1) Secondary particle yield calculations in NE collisions with therapeutically-relevant nuclei, (2) Pencil-beam dose calculations in homogeneous phantoms, (3) A large number of treatment plan dose recalculations, and compared with Geant4.9.6p2/TOPAS. A workflow was devised for calculating plans from a commercially available treatment planning system, with scripts for reading DICOM files and generating inputs for our MC. Results: Yields, energy and angular distributions of secondaries from NE collisions on various nuclei are in good agreement with the Geant4.9.6p2 Bertini and Binary cascade models. The 3D-gamma pass rate at 2%–2 mm for 70–230 MeV pencil-beam dose distributions in water, soft tissue, bone and Ti phantoms is 100%. The pass rate at 2%–2 mm for treatment plan calculations is typically above 98%. The net computational time on a NVIDIA GTX680 card, including all CPU-GPU data transfers, is around 20 s for 1×10{sup 7} proton histories. Conclusion: Our GPU-based proton transport MC is the first of its kind to include a detailed nuclear model to handle NE interactions on any nucleus. Dosimetric calculations demonstrate very good agreement with Geant4.9.6p2/TOPAS. Our MC is being integrated into a framework to perform fast routine clinical QA of pencil
Solar Energetic Particle Transport Near a Heliospheric Current Sheet
Energy Technology Data Exchange (ETDEWEB)
Battarbee, Markus; Dalla, Silvia [Jeremiah Horrocks Institute, University of Central Lancashire, PR1 2HE (United Kingdom); Marsh, Mike S., E-mail: mbattarbee@uclan.ac.uk [Met Office, Exeter, EX1 3PB (United Kingdom)
2017-02-10
Solar energetic particles (SEPs), a major component of space weather, propagate through the interplanetary medium strongly guided by the interplanetary magnetic field (IMF). In this work, we analyze the implications that a flat Heliospheric Current Sheet (HCS) has on proton propagation from SEP release sites to the Earth. We simulate proton propagation by integrating fully 3D trajectories near an analytically defined flat current sheet, collecting comprehensive statistics into histograms, fluence maps, and virtual observer time profiles within an energy range of 1–800 MeV. We show that protons experience significant current sheet drift to distant longitudes, causing time profiles to exhibit multiple components, which are a potential source of confusing interpretations of observations. We find that variation of the current sheet thickness within a realistic parameter range has little effect on particle propagation. We show that the IMF configuration strongly affects the deceleration of protons. We show that in our model, the presence of a flat equatorial HCS in the inner heliosphere limits the crossing of protons into the opposite hemisphere.
Kalyagina, N.; Loschenov, V.; Wolf, D.; Daul, C.; Blondel, W.; Savelieva, T.
2011-11-01
We have investigated the influence of scatterer size changes on the laser light diffusion, induced by collimated monochromatic laser irradiation, in tissue-like optical phantoms using diffuse-reflectance imaging. For that purpose, three-layer optical phantoms were prepared, in which nano- and microsphere size varied in order to simulate the scattering properties of healthy and cancerous urinary bladder walls. The informative areas of the surface diffuse-reflected light distributions were about 15×18 pixels for the smallest scattering particles of 0.05 μm, about 21×25 pixels for the medium-size particles of 0.53 μm, and about 25×30 pixels for the largest particles of 5.09 μm. The computation of the laser spot areas provided useful information for the analysis of the light distribution with high measurement accuracy of up to 92%. The minimal stability of 78% accuracy was observed for superficial scattering signals on the phantoms with the largest particles. The experimental results showed a good agreement with the results obtained by the Monte Carlo simulations. The presented method shows a good potential to be useful for a tissue-state diagnosis of the urinary bladder.
Modeling particle transport and discoloration risk in drinking water distribution networks
van Summeren, Joost; Blokker, Mirjam
2017-10-01
Discoloration of drinking water is a worldwide phenomenon caused by accumulation and subsequent remobilization of particulate matter in drinking water distribution systems (DWDSs). It contributes a substantial fraction of customer complaints to water utilities. Accurate discoloration risk predictions could improve system operation by allowing for more effective programs on cleaning and prevention actions and field measurements, but are challenged by incomplete understanding on the origins and properties of particles and a complex and not fully understood interplay of processes in distribution networks. In this paper, we assess and describe relevant hydraulic processes that govern particle transport in turbulent pipe flow, including gravitational settling, bed-load transport, and particle entrainment into suspension. We assess which transport mechanisms are dominant for a range of bulk flow velocities, particle diameters, and particle mass densities, which includes common conditions for DWDSs in the Netherlands, the UK, and Australia. Our analysis shows that the theoretically predicted particle settling velocity and threshold shear stresses for incipient particle motion are in the same range as, but more variable than, previous estimates from lab experiments, field measurements, and modeling. The presented material will be used in the future development of a numerical modeling tool to determine and predict the spatial distribution of particulate material and discoloration risk in DWDSs. Our approach is aimed at understanding specific causalities and processes, which can complement data-driven approaches.
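The gravitational settling regime discussed above is governed by the Stokes terminal velocity for fine particles at low particle Reynolds number. A sketch with illustrative particle properties (the chosen diameter and density are assumptions, not values from the paper):

```python
import math

def stokes_settling_velocity(d, rho_p, rho_w=998.0, mu=1.0e-3, g=9.81):
    """Terminal settling velocity of a small sphere in laminar (Stokes) flow:
    v_s = (rho_p - rho_w) * g * d^2 / (18 * mu).
    Valid for particle Reynolds numbers << 1, which covers the fine
    particles (roughly 1-100 um) typical of distribution-network sediment.
    """
    return (rho_p - rho_w) * g * d**2 / (18.0 * mu)

# A 50 um organic-rich particle (density 1050 kg/m^3) in water:
v = stokes_settling_velocity(50e-6, 1050.0)
print(f"settling velocity = {v*1e6:.1f} um/s")
```

The quadratic dependence on diameter is why the paper's regime analysis separates small particles, which stay suspended, from the larger ones that settle and build up discoloration potential.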
Kumar, Pramod; Gupta, N C
2016-01-15
A public health concern is to understand the linkages between specific pollution sources and adverse health impacts. Commuting can be viewed as one of the significant exposure activities in high-vehicle-density areas. This paper investigates commuter exposure to inhalable, thoracic and alveolic particles in various transportation modes in Delhi, India. Automobile exhaust contributes significantly to air pollution levels, and in-vehicle exposure can sometimes be higher than ambient levels. Motorcycle, auto rickshaw, car and bus were selected to study particle concentrations along two routes in Delhi between Kashmere Gate and Dwarka. The bus and auto rickshaw were running on compressed natural gas (CNG) while the car and motorcycle were operated on gasoline fuel. An aerosol spectrometer was employed to measure inhalable, thoracic and alveolic particles during morning and evening rush hours for five weekdays. From the study, we observed that the concentration levels of these particles were greatly influenced by transportation mode. Concentrations of inhalable particles were found to be higher during the morning in the auto rickshaw (332.81 ± 90.97 μg/m(3)), while bus commuters exhibited higher exposure to thoracic particles (292.23 ± 110.45 μg/m(3)) and car commuters were exposed to the maximum concentrations of alveolic particles (222.37 ± 26.56 μg/m(3)). We observed that in the evening car commuters experienced the maximum concentrations of all particle sizes among the four commuting modes. Interestingly, motorcycle commuters were exposed to lower levels of inhalable and thoracic particles during morning and evening hours as compared to other modes of transport. The mean values were found to be greater than the median values for all modes of transport, suggesting that positively skewed distributions are characteristic of this naturally occurring phenomenon. Copyright © 2015 Elsevier B.V. All rights reserved.
Gyrokinetic Particle Simulation of Turbulent Transport in Burning Plasmas
Energy Technology Data Exchange (ETDEWEB)
Diamond, P.H.; Lin, Z.; Wang, W.; Horton, W.; Klasky, S.; Decyk, V.; Ma, K.-L.; Chames, J.; Adams, M.
2011-09-21
The three-year project GPS-TTBP resulted in over 152 publications and 135 presentations. This summary focuses on the scientific progress made by the project team. A major focus of the project was on the physics of intrinsic rotation in tokamaks. Progress included the first ever flux-driven study of net intrinsic spin-up, mediated by boundary effects (in collaboration with CPES), detailed studies of the microphysics origins of the Rice scaling, comparative studies of symmetry breaking mechanisms, a pioneering study of intrinsic torque driven by trapped electron modes, and studies of intrinsic rotation generation as a thermodynamic engine. Validation studies were performed with C-Mod, DIII-D and CSDX. This work resulted in very successful completion of the FY2010 Theory Milestone Activity for OFES, and several prominent papers at the 2008 and 2010 IAEA Conferences. A second major focus was on the relation between zonal flow formation and transport non-locality. This culminated in the discovery of the ExB staircase - a conceptually new phenomenon. This also makes useful interdisciplinary contact with the physics of the PV staircase, well-known in oceans and atmospheres. A third topic where progress was made was in the simulation and theory of turbulence spreading. This work, now well cited, is important for understanding the dynamics of non-locality in turbulent transport. Progress was made in studies of conjectured non-diffusive transport in trapped electron turbulence. Pioneering studies of ITB formation, coupling to intrinsic rotation and hysteresis were completed. These results may be especially significant for future ITER operation. All told, the physics per dollar performance of this project was quite good. The intense focus was beneficial and SciDAC resources were essential to its success.
Particle trapping and beam transport issues in laser driven accelerators
Gwenael, Fubiani; Wim, Leemans; Eric, Esarey
2000-10-01
The LWFA and colliding-pulse schemes [1][2] are capable of producing very compact electron bunches where the longitudinal size is much smaller than the transverse size. In this case, even if the electrons are relativistic, the space charge force can affect the longitudinal and transverse bunch properties [3][4]. In the self-modulated regime and the colliding-pulse scheme, electrons are trapped from the background plasma and rapidly accelerated. We present theoretical studies of the generation and transport of electron bunches in LWFAs. The space charge effect induced in the bunch is modelled assuming the bunch is ellipsoid-like. Beam transport in vacuum, a comparison between Gaussian and waterbag distributions, and a comparison between the envelope model and PIC simulation will be discussed. This work is supported by the Director, Office of Science, Office of High Energy & Nuclear Physics, High Energy Physics Division, of the U.S. Department of Energy, under Contract No. DE-AC03-76SF00098. [1] E. Esarey et al., IEEE Trans. Plasma Sci. PS-24, 252 (1996); W.P. Leemans et al., ibidem, 331. [2] D. Umstadter et al., Phys. Rev. Lett. 76, 2073 (1996); E. Esarey et al., Phys. Rev. Lett. 79, 2682 (1997); C.B. Schroeder et al., Phys. Rev. E 59, 6037 (1999) [3] DESY M87-161 (1987); DESY M88-013 (1988) [4] R.W. Garnett and T.P. Wangler, IEEE Part. Accel. Conf. (1991)
Nonlinear heat and particle transport due to collisional drift waves
Energy Technology Data Exchange (ETDEWEB)
Nishi-Kawa, K.I.; Hatori, T.; Terashima, Y.
1978-07-01
A nonlinear analysis of the collisional drift instability is developed in a slab model based on the two-fluid equations, where inhomogeneities in electron and ion temperatures and unperturbed current are included in addition to ion inertia, finite ion gyroradius, and viscosity. A systematic expansion is introduced by taking ε = |κ|l as a smallness parameter, where κ is the degree of density gradient and l is the linear scale of the slab along the density gradient. The nonlinear development of the drift wave near marginal stability is studied on the basis of the model equations. A new feature, hard excitation, has been found, which is due to the effects of the nonlinear frequency shift and the electron temperature gradient. The saturation amplitude is calculated, and the expressions for wave-associated particle and heat fluxes are obtained. A comparison of the expressions with the experimental results of a stellarator device is also made.
International Nuclear Information System (INIS)
Bergmann, Ryan M.; Vujić, Jasmina L.
2015-01-01
Highlights: • WARP, a GPU-accelerated Monte Carlo neutron transport code, has been developed. • The NVIDIA OptiX high-performance ray tracing library is used to process geometric data. • The unionized cross section representation is modified for higher performance. • Reference remapping is used to keep the GPU busy as neutron batch population reduces. • Reference remapping is done using a key-value radix sort on neutron reaction type. - Abstract: In recent supercomputers, general purpose graphics processing units (GPGPUs) are a significant faction of the supercomputer’s total computational power. GPGPUs have different architectures compared to central processing units (CPUs), and for Monte Carlo neutron transport codes used in nuclear engineering to take advantage of these coprocessor cards, transport algorithms must be changed to execute efficiently on them. WARP is a continuous energy Monte Carlo neutron transport code that has been written to do this. The main thrust of WARP is to adapt previous event-based transport algorithms to the new GPU hardware; the algorithmic choices for all parts of which are presented in this paper. It is found that remapping history data references increases the GPU processing rate when histories start to complete. The main reason for this is that completed data are eliminated from the address space, threads are kept busy, and memory bandwidth is not wasted on checking completed data. Remapping also allows the interaction kernels to be launched concurrently, improving efficiency. The OptiX ray tracing framework and CUDPP library are used for geometry representation and parallel dataset-side operations, ensuring high performance and reliability
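The remapping idea highlighted above, drop finished histories and key-value sort the survivors by reaction type so each interaction kernel reads a contiguous block, can be sketched with plain numpy (a CPU illustration of the concept; WARP does this on the GPU with a radix sort):

```python
import numpy as np

def remap_histories(positions, energies, reaction, done):
    """Sketch of event-based data remapping between transport kernels.

    Finished histories are dropped from the working set so every
    thread keeps operating on live data, and the survivors are
    stably sorted by reaction type so each interaction kernel can
    be launched over a contiguous slice of the arrays.
    """
    alive = ~done
    positions, energies, reaction = positions[alive], energies[alive], reaction[alive]
    order = np.argsort(reaction, kind="stable")
    return positions[order], energies[order], reaction[order]

# Six histories, two finished; survivors end up grouped by reaction type.
pos = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
en = np.array([2.0, 1.8, 0.5, 1.1, 0.9, 0.2])
rxn = np.array([2, 0, 1, 2, 0, 1])  # hypothetical keys, e.g. 0=scatter, 1=capture, 2=fission
done = np.array([False, True, False, False, True, False])
p, e, r = remap_histories(pos, en, rxn, done)
print(r)  # → [1 1 2 2]
```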
International Nuclear Information System (INIS)
Wang, G. Q.; Ma, J.; Weiland, J.; Zang, Q.
2013-01-01
We have made the first drift wave study of particle transport in the Experimental Advanced Superconducting Tokamak (Wan et al., Nucl. Fusion 49, 104011 (2009)). The results reveal that collisions make the particle flux more inward in the high collisionality regime. This can be traced back to effects that are quadratic in the collision frequency. The particle pinch is due to electron trapping which is not very efficient in the high collisionality regime so the approach to equilibrium is slow. We have included also the electron temperature gradient (ETG) mode to give the right electron temperature gradient, since the Trapped Electron Mode (TE mode) is weak in this regime. However, at the ETG mode number ions are Boltzmann distributed so the ETG mode does not give particle transport
Control of alpha particle transport by spatially inhomogeneous ion cyclotron resonance heating
International Nuclear Information System (INIS)
Chang, C.S.; Imre, K.; Weitzner, H.; Colestock, P.
1990-02-01
Control of the radial alpha particle transport by using Ion Cyclotron Range of Frequency waves is investigated in a large-aspect-ratio tokamak geometry. It is shown that spatially inhomogeneous ICRF-wave energy with properly selected frequencies and wave numbers can induce fast convective transport of alpha particles at a speed of order υ_α ∼ (P_RF/n_α ε_0) ρ_p, where P_RF is the ICRF-wave power density, n_α is the alpha density, ε_0 is the alpha birth energy, and ρ_p is the poloidal gyroradius of alpha particles at the birth energy. Application to ITER plasmas is studied and possible antenna designs to control the alpha particle flux are discussed. 8 refs., 3 figs
Monte Carlo advances for the Eolus ASCI Project
International Nuclear Information System (INIS)
Hendrick, J. S.; McKinney, G. W.; Cox, L. J.
2000-01-01
The Eolus ASCI project includes parallel, 3-D transport simulation for various nuclear applications. The codes developed within this project provide neutral and charged particle transport, detailed interaction physics, numerous source and tally capabilities, and general geometry packages. One such code is MCNPX, which is a general-purpose, 3-dimensional, time-dependent, continuous-energy Monte Carlo fully-coupled N-Particle transport code. Significant advances are also being made in the areas of modern software engineering and parallel computing. These advances are described in detail
Physical considerations relevant to HZE-particle transport in matter.
Schimmerling, W
1988-06-01
High-energy, highly charged (HZE) heavy nuclei may seem at first sight to be an exotic type of radiation, only remotely connected with nuclear power generation. On closer examination it becomes evident that heavy-ion accelerators are being seriously considered for driving inertial confinement fusion reactors, and high-energy heavy nuclei in the cosmic radiation are likely to place significant constraints on satellite power system deployment and space-based power generation. The use of beams of heavy nuclei in an increasing number of current applications, as well as their importance for the development of the state of the art of the future, makes it necessary to develop at the same time a good understanding of their transport through matter.
van Thienen, P; Vreeburg, J H G; Blokker, E J M
2011-02-01
Various particle transport mechanisms play a role in the build-up of discoloration potential in drinking water distribution networks. In order to enhance our understanding of and ability to predict this build-up, it is essential to recognize and understand their role. Gravitational settling with drag has primarily been considered in this context. However, since flow in water distribution pipes is nearly always in the turbulent regime, turbulent processes should be considered also. In addition to these, single particle effects and forces may affect radial particle transport. In this work, we present an application of a previously published turbulent particle deposition theory to conditions relevant for drinking water distribution systems. We predict quantitatively under which conditions turbophoresis, including the virtual mass effect, the Saffman lift force, and the Magnus force may contribute significantly to sediment transport in radial direction and compare these results to experimental observations. The contribution of turbophoresis is mostly limited to large particles (>50 μm) in transport mains, and not expected to play a major role in distribution mains. The Saffman lift force may enhance this process to some degree. The Magnus force is not expected to play any significant role in drinking water distribution systems. © 2010 Elsevier Ltd. All rights reserved.
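Whether turbophoresis matters for a given particle can be judged from the dimensionless relaxation time used in classic turbulent-deposition theory. A sketch with assumed values (the friction velocity and particle properties are illustrative, not the paper's):

```python
def dimensionless_relaxation_time(d, rho_p, u_star, rho_f=998.0, mu=1.0e-3):
    """tau+ = tau_p * u*^2 / nu, with tau_p = rho_p * d^2 / (18 * mu)
    the Stokes particle relaxation time, u* the wall friction
    velocity, and nu the kinematic viscosity of water. In classic
    turbulent-deposition theory, turbophoresis is significant mainly
    in the diffusion-impaction regime, roughly 0.1 < tau+ < 33.
    """
    tau_p = rho_p * d**2 / (18.0 * mu)
    nu = mu / rho_f
    return tau_p * u_star**2 / nu

# Assumed transport-main conditions: friction velocity u* ~ 0.05 m/s.
for d in (10e-6, 50e-6):
    print(f"d = {d*1e6:.0f} um: tau+ = {dimensionless_relaxation_time(d, 1050.0, 0.05):.3f}")
```

With these illustrative numbers, only the 50 um particle crosses the tau+ ~ 0.1 threshold, consistent with the conclusion that turbophoresis is mostly limited to large particles in transport mains.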
International Nuclear Information System (INIS)
Peeler, Christopher R; Titt, Uwe
2012-01-01
In spot-scanning intensity-modulated proton therapy, numerous unmodulated proton beam spots are delivered over a target volume to produce a prescribed dose distribution. To accurately model field size-dependent output factors for beam spots, the energy deposition at positions radial to the central axis of the beam must be characterized. In this study, we determined the difference in the central axis dose for spot-scanned fields that results from secondary particle doses by investigating energy deposition radial to the proton beam central axis resulting from primary protons and secondary particles for mathematical point source and distributed source models. The largest difference in the central axis dose from secondary particles resulting from the use of a mathematical point source and a distributed source model was approximately 0.43%. Thus, we conclude that the central axis dose for a spot-scanned field is effectively independent of the source model used to calculate the secondary particle dose. (paper)
Mao, Zirui; Liu, G. R.
2018-02-01
The behavior of lunar dust on the Moon's surface is quite complicated compared to that on the Earth's surface, owing to the small lunar gravity and the significant influence of the complicated electrostatic field environment. Understanding such behavior is critical for the exploration of the Moon. This work develops a smoothed particle hydrodynamics (SPH) model with an elastic-perfectly plastic constitutive equation and the Drucker-Prager yield criterion to simulate the electrostatic transport of multiple charged lunar dust particles. The initial electric field is generated with the particle-in-cell method and is then superposed with the additional field from the charged dust particles to obtain the resultant electric field in the subsequent process. Simulations of cohesive soil's natural failure and of the electrostatic transport of charged soil under a given electric force and gravity were carried out using the SPH model. The results show that negatively charged dust particles levitate and are transported from the illuminated area, at lower potential, to the shadowed area, at higher potential. The motion of the soil particles finally reaches a stable state. The numerical results for the final distribution of soil particles and the potential profile above a planar surface match the experimental results well, and the SPH prediction of the maximum levitation height of lunar dust under a uniform electric field agrees with the theoretical solution, which shows that SPH is a reliable method for describing the behavior of soil particles under a complicated electric field and small gravity, with interactions among soil particles taken into account.
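The Drucker-Prager criterion used above amounts to a simple yield-function check on the stress state. A minimal sketch under a common sign convention follows; the stress invariants and material constants are illustrative assumptions, not the paper's calibration.

```python
def drucker_prager_yield(p, q, alpha, k):
    """Drucker-Prager yield function f = q - alpha*p - k, where p is the
    mean (compression-positive) stress, q the equivalent shear stress,
    and alpha, k material constants related to friction and cohesion.
    f > 0 indicates plastic yielding; f <= 0 is elastic."""
    return q - alpha * p - k

# Illustrative check: at fixed confinement, raising shear stress q
# drives the material state from elastic toward yield.
f_elastic = drucker_prager_yield(p=10.0, q=2.0, alpha=0.2, k=1.0)  # below yield
f_plastic = drucker_prager_yield(p=10.0, q=5.0, alpha=0.2, k=1.0)  # yielding
```

In an SPH solver this check decides, per particle, whether the elastic-perfectly plastic return mapping is applied.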
International Nuclear Information System (INIS)
Vilches, M.; Garcia-Pareja, S.; Guerrero, R.; Anguiano, M.; Lallena, A.M.
2007-01-01
The Monte Carlo simulation of electron transport through thin slabs is studied with five general-purpose codes: PENELOPE, GEANT3, GEANT4, EGSnrc and MCNPX. The different material foils analyzed in the old experiments of Kulchitsky and Latyshev [L.A. Kulchitsky, G.D. Latyshev, Phys. Rev. 61 (1942) 254] and Hanson et al. [A.O. Hanson, L.H. Lanzl, E.M. Lyman, M.B. Scott, Phys. Rev. 84 (1951) 634] are used to perform the comparison between the Monte Carlo codes. Non-negligible differences are observed in the angular distributions of the transmitted electrons obtained with some of the codes. The experimental data are reasonably well described by EGSnrc, PENELOPE (v.2005) and GEANT4. A good general agreement is found for EGSnrc and PENELOPE (v.2005) in all the cases analyzed.
Kudryashov, Sergey I.; Allen, Susan D.
2005-04-01
Viscous air drag and diffusive Brownian motion result in unfavorable re-deposition of sub-micron spherical particle contaminants after their dry laser-assisted detachment from critical surfaces. Theoretical modeling and experimental results on particle transport in air and in thin variable liquid layers point to particle size and lift-off velocity as the most important parameters for efficient particle removal from critical substrates; for smaller particles, with their lower inertia, lower lift-off distances and higher diffusion rates, dry laser cleaning (DLC) is less effective because of their fast diffusive redeposition back onto these substrates. Under these circumstances one excellent option is the steam laser cleaning (SLC) technique, in which contaminating particles lift off together with an explosively boiling pre-deposited layer, or separate micron-sized droplets, of a low-boiling liquid energy-transfer medium and travel in the resulting hydro- and gas-dynamic flow to much greater distances from the substrate, irrespective of particle size. Mechanical coupling of particles to the lifting-off liquid layer or separate droplets occurs via the known "inertial" mechanism and/or a new mechanism of "dragging" contaminating particles off the substrate by the liquid environment, demonstrated for the first time in this work. Nearly 100% cleaning efficiencies and no indication of re-deposition were observed for different particles in single-shot steam laser cleaning experiments. Another disadvantageous aspect of DLC is the nearly linear increase of the cleaning laser fluence with the inverse radius of the contaminating particles. This circumstance may result in damage (melting, ripples, ablation) of the critical surface at the high laser fluences necessary for removal of smaller (nanometer-size) particles and thus imposes a serious limitation on the operating range of DLC. Fortunately, the SLC technique may be applied in such instances, providing cleaning at quite low cleaning laser fluences, which are
210Pb and 210Po as tracers of particle transport mechanisms on continental margins
International Nuclear Information System (INIS)
Radakovitch, O.; Heussner, S.; Biscaye, P.; Abassi, A.
1997-01-01
The natural radionuclides 210Po and 210Pb, members of the 238U decay chain, are particularly helpful for understanding particle transport processes in the ocean. These isotopes were analysed on sediment-trap particles collected during three one-year experiments on continental margins: in the Bay of Biscay (Northeastern Atlantic) and the Gulf of Lion (Northwestern Mediterranean Sea), both as part of the French ECOMARGE programme, and in the Middle Atlantic Bight (Northwestern Atlantic) as part of the SEEP programme. The data yielded great insight into the scenarios of particle transfer at each site, based mainly on the spatial and temporal distributions of 210Pb particulate concentrations and fluxes. (author)
Mechanism and Kinetics of the Formation and Transport of Aerosol Particles in the Lower Stratosphere
Aloyan, A. E.; Ermakov, A. N.; Arutyunyan, V. O.
2018-03-01
Field and laboratory observation data on aerosol particles in the lower stratosphere are considered. The microphysics of their formation, the mechanisms of heterogeneous chemical reactions involving reservoir gases (e.g., HCl and ClONO2), and their kinetic characteristics are analyzed. A new model of the global transport of gaseous and aerosol admixtures in the lower stratosphere is described. Preliminary results from a numerical simulation of the formation of sulfate particles of the Junge layer and of particles of polar stratospheric clouds (PSCs, types Ia, Ib, and II) are presented, and their effect on the gas and aerosol composition is analyzed.
Review of heavy charged particle transport in MCNP6.2
Zieb, K.; Hughes, H. G.; James, M. R.; Xu, X. G.
2018-04-01
The release of version 6.2 of the MCNP6 radiation transport code is imminent. To complement the newest release, a summary of the heavy charged particle physics models used in the 1 MeV to 1 GeV energy regime is presented. Several changes have been introduced into the charged particle physics models since the merger of the MCNP5 and MCNPX codes into MCNP6. This paper discusses the default models used in MCNP6 for continuous energy loss, energy straggling, and angular scattering of heavy charged particles. Explanations of the physics models' theories are included as well.
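The continuous energy-loss models referred to above are built on Bethe-type stopping-power theory. The sketch below evaluates a simplified Bethe formula, an illustration of the underlying physics rather than MCNP6's actual implementation: density-effect and shell corrections are omitted, and the maximum energy transfer is approximated as 2·m_e·c²·β²γ² (valid for particles much heavier than the electron).

```python
import math

ME_C2_EV = 0.511e6   # electron rest energy (eV)
K = 0.307075         # 4*pi*N_A*r_e^2*m_e*c^2 (MeV cm^2/mol)

def bethe_dedx(T, M=938.272, z=1, Z_A=0.555, I_eV=75.0):
    """Simplified Bethe mass stopping power (MeV cm^2/g) for a heavy
    charged particle of kinetic energy T (MeV), rest mass M (MeV/c^2)
    and charge z, in a medium with ratio Z/A and mean excitation
    energy I. Defaults correspond to protons in water."""
    gamma = 1.0 + T / M
    beta2 = 1.0 - 1.0 / gamma**2
    bg2 = beta2 * gamma**2                       # beta^2 * gamma^2
    bracket = math.log(2.0 * ME_C2_EV * bg2 / I_eV) - beta2
    return K * z**2 * Z_A / beta2 * bracket

dedx_100 = bethe_dedx(100.0)   # 100 MeV protons in water
```

Even this stripped-down form reproduces the familiar 1/β² rise of the stopping power toward low energies, which is what drives the Bragg peak.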
Verbeke, Jérôme M.; Petit, Odile; Chebboubi, Abdelhazize; Litaize, Olivier
2018-01-01
Fission modeling in general-purpose Monte Carlo transport codes often relies on average nuclear data provided by international evaluation libraries. As such, only average fission multiplicities are available and correlations between fission neutrons and photons are missing. Whereas uncorrelated fission physics is usually sufficient for standard reactor core and radiation shielding calculations, correlated fission secondaries are required for specialized nuclear instrumentation and detector modeling. For coincidence counting detector optimization for instance, precise simulation of fission neutrons and photons that remain correlated in time from birth to detection is essential. New developments were recently integrated into the Monte Carlo transport code TRIPOLI-4 to model fission physics more precisely, the purpose being to access event-by-event fission events from two different fission models: FREYA and FIFRELIN. TRIPOLI-4 simulations can now be performed, either by connecting via an API to the LLNL fission library including FREYA, or by reading external fission event data files produced by FIFRELIN beforehand. These new capabilities enable us to easily compare results from Monte Carlo transport calculations using the two fission models in a nuclear instrumentation application. In the first part of this paper, broad underlying principles of the two fission models are recalled. We then present experimental measurements of neutron angular correlations for 252Cf(sf) and 240Pu(sf). The correlations were measured for several neutron kinetic energy thresholds. In the latter part of the paper, simulation results are compared to experimental data. Spontaneous fissions in 252Cf and 240Pu are modeled by FREYA or FIFRELIN. Emitted neutrons and photons are subsequently transported to an array of scintillators by TRIPOLI-4 in analog mode to preserve their correlations. Angular correlations between fission neutrons obtained independently from these TRIPOLI-4 simulations, using
Rodriguez, M.; Brualla, L.
2018-04-01
Monte Carlo simulation of radiation transport is computationally demanding if reasonably low statistical uncertainties of the estimated quantities are to be obtained, and can therefore benefit to a large extent from high-performance computing. This work assesses the performance of the first generation of the many-integrated-core architecture (MIC) Xeon Phi coprocessor with respect to that of a CPU consisting of two 12-core Xeon processors in Monte Carlo simulation of coupled electron-photon showers. The comparison was twofold: first, through a suite of basic tests, including parallel versions of the random number generators Mersenne Twister and a modified implementation of RANECU, intended to establish a baseline comparison between the two devices; second, through the pDPM code developed in this work. pDPM is a parallel version of the Dose Planning Method (DPM) program for fast Monte Carlo simulation of radiation transport in voxelized geometries. A variety of techniques aimed at obtaining large scalability on the Xeon Phi were implemented in pDPM. Maximum scalabilities of 84.2× and 107.5× were obtained on the Xeon Phi for simulations of electron and photon beams, respectively. Nevertheless, in none of the tests involving radiation transport did the Xeon Phi perform better than the CPU. The disadvantage of the Xeon Phi with respect to the CPU stems from the low performance of its single core: a single core of the Xeon Phi was more than 10 times less efficient than a single core of the CPU in all radiation transport simulations.
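Scalability limits like those reported above are often framed with Amdahl's law. This sketch shows why even a small serial fraction caps speedup far below the thread count; the serial fractions used here are hypothetical illustrations, not measured values from pDPM.

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Amdahl's law: S(n) = 1 / ((1 - p) + p / n), where p is the
    fraction of the work that parallelizes and n the number of cores."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_cores)

# With 99% parallel work, 244 hardware threads (a first-generation
# Xeon Phi running 4 threads on each of 61 cores) yield only ~71x,
# and the asymptotic ceiling is 1 / (1 - p) = 100x.
s_244 = amdahl_speedup(0.99, 244)
cap = amdahl_speedup(0.99, 10**9)
```

The gap between the per-thread count and the achievable speedup is one reason the weak single core of the coprocessor dominated the comparison.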
Analysis of ABCD-like law for charged-particle beam transport with transversal divergence
International Nuclear Information System (INIS)
Chen Baoxin; Zhang Aiju; Sun Biehe
2004-01-01
It is shown that the propagation of a charged-particle beam can be put in complete analogy with the transmission of an ellipse-Gaussian light beam in the paraxial approximation. Based on this similarity, an ABCD-like law for charged-particle beam transport with transversal divergence is developed by means of the complex curvature radius of the charged-particle beam, whose real part describes the convergence or divergence of the beam and whose imaginary part describes the beam radius. In this picture, the charged-particle beam as a whole is treated as a single ellipse-Gaussian light-like beam whose emittance plays the role of the wavelength. In particular, this analogy gives the insight that it may be possible to attain a coherent charged-particle beam in a favorable accelerator environment. (authors)
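The ABCD-like law has the same bilinear form as the complex q-parameter transformation for Gaussian beams in optics. A minimal sketch follows; the transfer matrix and initial complex curvature value are illustrative, and sign/normalization conventions may differ from the paper's.

```python
def abcd_transform(q, A, B, C, D):
    """Bilinear ABCD-like law q' = (A*q + B) / (C*q + D), applied to a
    complex beam parameter q (the analog of the complex curvature
    radius, combining convergence/divergence and beam-size information)."""
    return (A * q + B) / (C * q + D)

# A drift of length L = 2 has transfer matrix [[1, L], [0, 1]].
# Start at a waist, where q is purely imaginary (no convergence or
# divergence); after the drift the real part becomes nonzero,
# signalling a diverging beam, while the imaginary part is unchanged.
q0 = complex(0.0, 0.5)
q1 = abcd_transform(q0, 1.0, 2.0, 0.0, 1.0)
```

Thin-lens and quadrupole sections compose by matrix multiplication before the single bilinear transform is applied, exactly as in Gaussian optics.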
China, Swarup; Alpert, Peter A.; Zhang, Bo; Schum, Simeon; Dzepina, Katja; Wright, Kendra; Owen, R. Chris; Fialho, Paulo; Mazzoleni, Lynn R.; Mazzoleni, Claudio; Knopf, Daniel A.
2017-03-01
Long-range transported free tropospheric particles can play a significant role in heterogeneous ice nucleation. Using optical and electron microscopy, we examine the physicochemical characteristics of ice nucleating particles (INPs). Particles were collected on substrates from the free troposphere at the remote Pico Mountain Observatory in the Azores Islands, after long-range transport and aging over the Atlantic Ocean. We investigate four specific events to study the ice formation potential of the collected particles with different ages and transport patterns. We use single-particle analysis as well as bulk analysis to characterize the particle populations. Both analyses show substantial differences in particle composition between samples from the four events; in addition, single-particle microscopy analysis indicates that most particles are coated by organic material. The identified INPs contained mixtures of dust, aged sea salt, soot, and organic material acquired either at the source or during transport. The temperature and relative humidity (RH) at which ice formed varied by only 5% between samples, despite differences in particle composition, sources, and transport patterns. We hypothesize that this small variation in the onset RH may be due to the coating material on the particles. This study underscores and motivates the need to further investigate how long-range transported and atmospherically aged free tropospheric particles impact ice cloud formation.
International Nuclear Information System (INIS)
Raskach, K.F.; Blyskavka, V; Kislitsyna, T.S.
2011-01-01
In this paper we apply the Monte Carlo method to calculate the spatial distribution of the sodium reactivity worth in the prospective Russian sodium-cooled fast reactor BN-1200. A special Monte Carlo technique applicable to calculating perturbations and derivatives of the effective multiplication factor is used. The numerical results obtained show that Monte Carlo has good prospects for dealing with such problems and for serving as a reference solution for engineering codes based on the diffusion approximation. They also allow us to conclude that, in the sodium blanket and in the neighboring region of the core, the diffusion code used likely overestimates the sodium reactivity worth. This conclusion has to be verified in future work. (author)
Kinetic Monte Carlo model of defect transport and irradiation effects in La-doped CeO2
International Nuclear Information System (INIS)
Oaks, Aaron; Yun Di; Ye Bei; Chen Weiying; Stubbins, James F.
2011-01-01
A generalized Kinetic Monte Carlo code was developed to study oxygen mobility in UO2-type nuclear fuels, using lanthanum-doped CeO2 as a surrogate material. Molecular Statics simulations were performed using interatomic potentials for CeO2 developed by Gotte, Minervini, and Sayle to calculate local configuration-dependent oxygen vacancy migration energies. Kinetic Monte Carlo simulations of oxygen vacancy diffusion were performed at varying lanthanum dopant concentrations using the developed generalized Kinetic Monte Carlo code and the calculated configuration-dependent migration energies. All three interatomic potentials were found to confirm the lanthanum trapping effect. The results of these simulations were compared with experimental data and the Gotte potential was concluded to yield the most realistic diffusivity curve.
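The residence-time (BKL) algorithm at the heart of such a Kinetic Monte Carlo code can be sketched in a few lines. The attempt frequency, temperature and barrier values below are illustrative placeholders, not the configuration-dependent migration energies computed in the study.

```python
import math
import random

def kmc_step(rates, rng):
    """One residence-time (BKL) Kinetic Monte Carlo step.

    rates: hop rates r_i = nu0 * exp(-E_i / kT) for the candidate
    vacancy jumps. Returns (index of chosen jump, time increment),
    with the event chosen proportionally to its rate and the clock
    advanced by an exponentially distributed waiting time."""
    total = sum(rates)
    x = rng.random() * total
    acc = 0.0
    for i, r in enumerate(rates):
        acc += r
        if x < acc:
            chosen = i
            break
    else:
        chosen = len(rates) - 1      # guard against float round-off
    dt = -math.log(rng.random()) / total
    return chosen, dt

kB = 8.617e-5                        # Boltzmann constant (eV/K)
nu0, T = 1.0e13, 1000.0              # attempt frequency (1/s), temperature (K)
barriers = [0.5, 0.7, 0.9, 0.5]      # illustrative migration energies (eV)
rates = [nu0 * math.exp(-E / (kB * T)) for E in barriers]
rng = random.Random(42)
event, dt = kmc_step(rates, rng)
```

Dopant trapping enters through the barriers: jumps out of a lanthanum-adjacent site carry higher migration energies and are therefore exponentially less likely to be selected.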
Frank, Donya; Calantoni, Joseph
2017-05-01
Improved understanding of coastal hydrodynamics and morphology will lead to more effective mitigation measures that reduce fatalities and property damage caused by natural disasters such as hurricanes. We investigated sediment transport under oscillatory flow over flat and rippled beds with phase-separated stereoscopic Particle Image Velocimetry (PIV). Standard PIV techniques severely limit measurements at the fluid-sediment interface and do not allow for the observation of separate phases in multi-phase flow (e.g. sand grains in water). We have implemented phase-separated Particle Image Velocimetry by adding fluorescent tracer particles to the fluid in order to observe fluid flow and sediment transport simultaneously. While sand grains scatter 532 nm wavelength laser light, the fluorescent particles absorb 532 nm laser light and re-emit light at a wavelength of 584 nm. Optical long-pass filters with a cut-on wavelength of 550 nm were installed on two cameras configured to perform stereoscopic PIV to capture only the light emitted by the fluorescent tracer particles. A third high-speed camera was used to capture the light scattered by the sand grains allowing for sediment particle tracking via particle tracking velocimetry (PTV). Together, these overlapping, simultaneously recorded images provided sediment particle and fluid velocities at high temporal and spatial resolution (100 Hz sampling with 0.8 mm vector spacing for the 2D-3C fluid velocity field). Measurements were made under a wide range of oscillatory flows over flat and rippled sand beds. The set of observations allow for the investigation of the relative importance of pressure gradients and shear stresses on sediment transport.
International Nuclear Information System (INIS)
Steger, U.
1990-01-01
The paper shows that particle losses can be concentrated in very limited areas by inserting orifices at appropriate positions in the ring. Consequences for shielding measures are discussed. The considerations and calculations were made for the cooler synchrotron COSY under construction at the Juelich research center. It consists of an accelerator and storage ring for light particles, in particular protons, with a maximum energy of ∼2.5 GeV. From the large number of possible working positions, a suitable one for the recirculation phase with a target at place TP2 was chosen. Orifice and shielding optimization was dealt with exemplarily. (orig.) [de]
From Mechanical Motion to Brownian Motion, Thermodynamics and Particle Transport Theory
Bringuier, E.
2008-01-01
The motion of a particle in a medium is dealt with either as a problem of mechanics or as a transport process in non-equilibrium statistical physics. The two kinds of approach are often unrelated as they are taught in different textbooks. The aim of this paper is to highlight the link between the mechanical and statistical treatments of particle…
A derivation of Akcasu's 'MLP' equations for 1-D particle transport in stochastic media
International Nuclear Information System (INIS)
Larsen, E. W.; Prinja, A. K.
2007-01-01
This paper presents a new derivation of Akcasu's Modified Levermore-Pomraning (MLP) model for estimating the ensemble-averaged angular flux for particle transport problems in 1-D geometrically random media. The significant new feature of the MLP equations is that, unlike the earlier Levermore-Pomraning (LP) model, the MLP equations are exact for certain classes of problems with scattering. (authors)
Light transport through disordered layers of dense gallium arsenide submicron particles
Van der Beek, T.; Barthelemy, P.J.C.; Johnson, P.M.; Wiersma, D.S.; Lagendijk, A.
2012-01-01
We present a study of optical transport properties of powder layers with submicrometer, strongly scattering gallium arsenide (GaAs) particles. Uniform, thin samples with well controlled thicknesses were created through the use of varying grinding times, sedimentation fractionation, annealing, and a
International Nuclear Information System (INIS)
Thomas, Edward Jr.; Williams, Jeremiah D.; Silver, Jennifer
2004-01-01
Over the past 5 years, two-dimensional particle image velocimetry (PIV) techniques [E. Thomas, Jr., Phys. Plasmas 6, 2672 (1999)] have been used to obtain detailed measurements of microparticle transport in dusty plasmas. This Letter reports on an extension of these techniques to a three-dimensional velocity vector measurement approach using stereoscopic PIV. Initial measurements using the stereoscopic PIV diagnostic are presented
Methodologies for Removing/Desorbing and Transporting Particles from Surfaces to Instrumentation
Miller, Carla J.; Cespedes, Ernesto R.
2012-12-01
Explosive trace detection (ETD) continues to be a key technology supporting the fight against terrorist bombing threats. Very selective and sensitive ETD instruments have been developed to detect explosive threats concealed on personnel, in vehicles, in luggage, and in cargo containers, as well as for forensic analysis (e.g. post blast inspection, bomb-maker identification, etc.) in a broad range of homeland security, law enforcement, and military applications. A number of recent studies have highlighted the fact that significant improvements in ETD systems' capabilities will be achieved, not by increasing the selectivity/sensitivity of the sensors, but by improved techniques for particle/vapor sampling, pre-concentration, and transport to the sensors. This review article represents a compilation of studies focused on characterizing the adhesive properties of explosive particles, the methodologies for removing/desorbing these particles from a range of surfaces, and approaches for transporting them to the instrument. The objectives of this review are to summarize fundamental work in explosive particle characterization, to describe experimental work performed in harvesting and transport of these particles, and to highlight those approaches that indicate high potential for improving ETD capabilities.
Modeling Bimolecular Reactions and Transport in Porous Media Via Particle Tracking
Energy Technology Data Exchange (ETDEWEB)
Dong Ding; David Benson; Amir Paster; Diogo Bolster
2012-01-01
We use a particle-tracking method to simulate several one-dimensional bimolecular reactive transport experiments. In this numerical method the reactants are represented by particles: advection and dispersion dominate the flow, and molecular diffusion dictates, in large part, the reactions. The particle/particle reactions are determined by a combination of two probabilities dictated by the physics of transport and the energetics of reaction. The first is the probability that reactant particles occupy the same volume over a short time interval. The second is the conditional probability that two collocated particles favorably transform into a reaction. The first probability is a direct physical representation of the degree of mixing in an advancing displacement front, and as such lacks empirical parameters except for the user-defined number of particles. This number can be determined analytically from the concentration autocovariance, if this type of data is available. The simulations compare favorably to two physical experiments. In one, the concentration of the product 1,2-naphthoquinone-4-aminobenzene (NQAB) from the reaction between 1,2-naphthoquinone-4-sulfonic acid (NQS) and aniline (AN) was measured at the outflow of a column filled with glass beads at different times. In the other, the concentration distributions of the reactants (CuSO4 and EDTA4-) and product (CuEDTA4-) were quantified from snapshots of light transmitted through a column packed with cryolite sand. The thermodynamic rate coefficient in the latter experiment was 10^7 times greater than in the former, making the reaction essentially instantaneous. When compared to the solution of the advection-dispersion-reaction equation (ADRE) with the well-mixed reaction coefficient, the experiments and the particle-tracking simulations showed on the order of 20% to 40% less overall product, which is attributed to poor mixing. The poor mixing also leads to higher product concentrations on the edges of the mixing zones, which the particle
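The product of the two probabilities described above can be sketched as follows. This is a schematic of the collocation-times-transformation idea only: the Gaussian collocation density is a common modeling choice, and the parameter names and values are assumptions, not the authors' exact formulation.

```python
import math

def collocation_density(dx, D, dt):
    """Gaussian density for two particles, each diffusing with
    coefficient D, becoming collocated over a time step dt given an
    initial separation dx. The relative coordinate diffuses with
    coefficient 2*D, so its variance after dt is 2*(2*D)*dt."""
    var = 4.0 * D * dt
    return math.exp(-dx * dx / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def reaction_probability(dx, D, dt, kf, mp):
    """P(react) ~ kf * mp * dt * v(dx): the transport probability
    (collocation density v) times a thermodynamic transformation
    factor with rate coefficient kf and mass-per-particle mp,
    capped at 1."""
    return min(1.0, kf * mp * dt * collocation_density(dx, D, dt))

# Well-separated particle pairs are exponentially unlikely to react,
# which is how the method encodes incomplete mixing. (Values are
# arbitrary illustrative units.)
p_near = reaction_probability(0.0, 1.0e-9, 1.0, 1.0e-3, 1.0e-6)
p_far = reaction_probability(1.0e-3, 1.0e-9, 1.0, 1.0e-3, 1.0e-6)
```

Because reactions require physical collocation of particle pairs, the simulated product formation lags the well-mixed ADRE prediction, consistent with the 20-40% deficit reported above.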
Energy Technology Data Exchange (ETDEWEB)
Cramer, S.N.
1984-01-01
The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.
Gyrokinetic Particle Simulation of Turbulent Transport in Burning Plasmas (GPS - TTBP) Final Report
Energy Technology Data Exchange (ETDEWEB)
Chame, Jacqueline
2011-05-27
The goal of this project is the development of the Gyrokinetic Toroidal Code (GTC) framework and its application to problems related to the physics of turbulence and turbulent transport in tokamaks. The project involves physics studies, code development, noise-effect mitigation, supporting computer science efforts, diagnostics and advanced visualizations, and verification and validation. Its main scientific themes are mesoscale dynamics and non-locality effects on transport, the physics of secondary structures such as zonal flows, and strongly coherent wave-particle interaction phenomena at magnetic precession resonances. Special emphasis is placed on the implications of these themes for rho-star and current scalings and for the turbulent transport of momentum. GTC-TTBP also explores applications to electron thermal transport, particle transport, ITB formation, and cross-cuts such as edge-core coupling, the interaction of energetic particles with turbulence, and neoclassical tearing mode trigger dynamics. Code development focuses on major initiatives in the development of full-f formulations and the capacity to simulate flux-driven transport. In addition to the full-f formulation, the project includes the development of numerical collision models and methods for coarse graining in phase space. Verification is pursued by linear stability comparisons with the FULL and HD7 codes and by benchmarking with the GKV, GYSELA and other gyrokinetic simulation codes. Validation of gyrokinetic models of ion and electron thermal transport is pursued by systematic, stressing comparisons with fluctuation and transport data from the DIII-D and NSTX tokamaks. The physics and code development research programs are supported by complementary efforts in computer science, high-performance computing, and data management.
International Nuclear Information System (INIS)
Hoogenboom, J.E.
2000-01-01
The Monte Carlo method is a statistical method for solving mathematical and physical problems using random numbers. The principle of the method will be demonstrated for a simple mathematical problem and for neutron transport. Various types of estimators will be discussed, as well as generally applied variance reduction methods such as splitting, Russian roulette and importance biasing. The theoretical formulation for solving eigenvalue problems for multiplying systems will be shown. Some reflections will be given on the applicability of the Monte Carlo method, its limitations and its future prospects for reactor physics calculations. Adjoint Monte Carlo is a Monte Carlo game to solve the adjoint neutron (or photon) transport equation. The adjoint transport equation can be interpreted in terms of simulating histories of artificial particles, which show the properties of neutrons moving backwards in history. These particles start their history at the detector from which the response must be estimated.
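The splitting and Russian roulette games mentioned above can be sketched generically as follows; the threshold and survival weight are arbitrary illustrative choices, not prescriptions from the text. The key property is that both games preserve the expected particle weight, which keeps the estimator unbiased.

```python
import random

def russian_roulette(weight, threshold=0.1, survival_weight=0.5, rng=random):
    """Play Russian roulette on a low-weight particle: it survives with
    probability weight/survival_weight and is continued at the survival
    weight, otherwise it is killed. The expected weight after the game,
    (weight/survival_weight) * survival_weight, equals the input weight."""
    if weight >= threshold:
        return weight                # above threshold: no game played
    if rng.random() < weight / survival_weight:
        return survival_weight       # survives at the boosted weight
    return 0.0                       # killed

def split(weight, n):
    """Split an important particle into n copies of weight/n each,
    conserving total weight while reducing variance in that region."""
    return [weight / n] * n
```

Splitting is applied where particles are headed toward the detector (high importance); roulette prunes particles drifting into unimportant regions without biasing the mean.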