WorldWideScience

Sample records for carlo coupling technique

  1. Variational Monte Carlo Technique

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 19; Issue 8. Variational Monte Carlo Technique: Ground State Energies of Quantum Mechanical Systems. Sukanta Deb. General Article Volume 19 Issue 8 August 2014 pp 713-739 ...

  2. Variational Monte Carlo Technique

    Indian Academy of Sciences (India)

    on the development of nuclear weapons in Los Alamos … cantly improved the paper. … Carlo simulations of solids, Reviews of Modern Physics, Vol. 73, pp. 33– … The computer algorithms are usually based on a random seed that starts the …

  3. Using the Monte Carlo Coupling Technique to Evaluate the Shielding Ability of a Modular Shielding House to Accommodate Spent-Fuel Transportable Storage Casks

    International Nuclear Information System (INIS)

    Ueki, Kohtaro; Kawakami, Kazuo; Shimizu, Daisuke

    2003-01-01

    The Monte Carlo coupling technique with coordinate transformation is used to evaluate the shielding ability of a modular shielding house that accommodates four spent-fuel transportable storage casks for two units. Effective dose rate distributions can be obtained as far as 300 m from the center of the shielding house. The coupling technique is implemented with the Surface Source Write (SSW) card and the Surface Source Read/Coordinate Transformation (SSR/CRT) card in the MCNP 4C continuous-energy Monte Carlo code as the 'SSW-SSR/CRT calculation system'. In the present Monte Carlo coupling calculation, the total effective dose rates at 100, 200, and 300 m from the center of the shielding house are estimated to be 1.69, 0.285, and 0.0826 μSv/yr per four casks, respectively. Accordingly, if the distance between the center of the shielding house and the site boundary of the storage facility is kept at >300 m, approximately 2400 casks can be accommodated in the modular shielding houses under the severe Japanese criterion of 50 μSv/yr at the site boundary. The shielding house alone satisfies not only the technical conditions but also the economic requirements. It became evident that secondary gamma rays account for >60% of the total effective dose rate at all calculated points around the shielding house; most of these are produced in the water of the steel-water-steel shielding system of the shielding house. The remainder of the dose rate comes mostly from neutrons; fission-product and 60Co activation gamma rays account for small percentages. Accordingly, reducing the secondary gamma rays is critical to improving not only the shielding ability but also the radiation safety of the shielding house.
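
    The two-stage coupling flow this record describes — bank particles that cross a coupling surface in a first run, then restart them under a coordinate transformation in a second run — can be sketched in miniature. The following Python toy is only an analogy to the SSW-SSR/CRT system: all step sizes, probabilities and the geometry are invented for illustration, not taken from the paper.

```python
import random

def stage_one(n, rng):
    """Stage 1 ('surface source write'): run a forward game near the
    source and bank every particle that crosses the coupling surface
    at x = 1.0."""
    bank = []
    for _ in range(n):
        x, weight = 0.0, 1.0
        while x < 1.0:
            if rng.random() < 0.4:      # absorbed before the next flight
                break
            x += rng.uniform(0.0, 0.5)  # free flight towards the surface
        else:
            bank.append((x, weight))    # crossed: record on the surface file
    return bank

def stage_two(bank, shift, survival, rng):
    """Stage 2 ('surface source read' plus coordinate transformation):
    restart the banked particles shifted into the far-field geometry and
    tally the weight surviving to the detector plane."""
    score = 0.0
    for x, weight in bank:
        x += shift                      # CRT-style coordinate transformation
        if rng.random() < survival:     # survives the far-field leg
            score += weight
    return score

rng = random.Random(2)
bank = stage_one(50_000, rng)
score = stage_two(bank, shift=299.0, survival=0.8, rng=rng)
```

    The point of the split is that the expensive near-source game is run once, while the banked surface source can be reused for many far-field geometries.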

  4. Monte Carlo techniques for analyzing deep-penetration problems

    International Nuclear Information System (INIS)

    Cramer, S.N.; Gonnord, J.; Hendricks, J.S.

    1986-01-01

    Current methods and difficulties in Monte Carlo deep-penetration calculations are reviewed, including statistical uncertainty and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multigroup Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications.

  5. Monte Carlo techniques in radiation therapy

    CERN Document Server

    Verhaegen, Frank

    2013-01-01

    Modern cancer treatment relies on Monte Carlo simulations to help radiotherapists and clinical physicists better understand and compute radiation dose from imaging devices as well as exploit four-dimensional imaging data. With Monte Carlo-based treatment planning tools now available from commercial vendors, a complete transition to Monte Carlo-based dose calculation methods in radiotherapy could likely take place in the next decade. Monte Carlo Techniques in Radiation Therapy explores the use of Monte Carlo methods for modeling various features of internal and external radiation sources, including light ion beams. The book-the first of its kind-addresses applications of the Monte Carlo particle transport simulation technique in radiation therapy, mainly focusing on external beam radiotherapy and brachytherapy. It presents the mathematical and technical aspects of the methods in particle transport simulations. The book also discusses the modeling of medical linacs and other irradiation devices; issues specific...

  6. Elements of Monte Carlo techniques

    International Nuclear Information System (INIS)

    Nagarajan, P.S.

    2000-01-01

    The Monte Carlo method essentially mimics real-world physical processes at the microscopic level. With the incredible increase in computing speeds and ever-decreasing computing costs, the method is in widespread use for practical problems. Topics covered include algorithm-generated sequences known as pseudo-random sequences (prs), probability density functions (pdf), tests for randomness, extension to multidimensional integration, etc.
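
    The elements listed in this record — a seeded pseudo-random sequence driving an estimate of an integral — fit in a few lines. A minimal sketch (the integrand and sample size are chosen only for illustration):

```python
import random

def mc_integrate(f, a, b, n, seed=42):
    """Crude Monte Carlo estimate of the integral of f over [a, b]:
    average f at n points drawn from a seeded pseudo-random sequence
    (prs) and scale by the interval length."""
    rng = random.Random(seed)           # the random seed that starts the sequence
    total = sum(f(a + (b - a) * rng.random()) for _ in range(n))
    return (b - a) * total / n

# Integral of x^2 over [0, 1]; the exact value is 1/3.
estimate = mc_integrate(lambda x: x * x, 0.0, 1.0, 100_000)
```

    The same pattern extends unchanged to multidimensional integration: draw a point per dimension and average.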

  7. Dynamic bounds coupled with Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Rajabalinejad, M., E-mail: M.Rajabalinejad@tudelft.n [Faculty of Civil Engineering, Delft University of Technology, Delft (Netherlands); Meester, L.E. [Delft Institute of Applied Mathematics, Delft University of Technology, Delft (Netherlands); Gelder, P.H.A.J.M. van; Vrijling, J.K. [Faculty of Civil Engineering, Delft University of Technology, Delft (Netherlands)

    2011-02-15

    For the reliability analysis of engineering structures a variety of methods is known, of which Monte Carlo (MC) simulation is widely considered to be among the most robust and most generally applicable. To reduce the simulation cost of the MC method, variance reduction methods are applied. This paper describes a method to reduce the simulation cost even further, while retaining the accuracy of Monte Carlo, by taking into account widely present monotonicity. For models exhibiting monotonic (decreasing or increasing) behavior, dynamic bounds (DB) are defined, which in a coupled Monte Carlo simulation are updated dynamically, resulting in a failure probability estimate as well as strict (non-probabilistic) upper and lower bounds. Accurate results are obtained at a much lower cost than an equivalent ordinary Monte Carlo simulation. In a two-dimensional and a four-dimensional numerical example, the cost reduction factors are 130 and 9, respectively, with a relative error smaller than 5%. At higher accuracy levels this factor increases, though the effect is expected to be smaller with increasing dimension. To show the application of the DB method to real-world problems, it is applied to a complex finite element model of a flood wall in New Orleans.
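
    The dynamic-bounds idea can be shown in one dimension: for a monotone limit state, every evaluated sample tightens a safe/failed bracket, and later samples falling outside the bracket are classified for free. This is a hedged toy (the limit state and threshold are invented), not the paper's coupled DB/MC algorithm:

```python
import random

def db_failure_probability(g, samples, threshold=0.0):
    """Failure-probability estimate for a monotonically increasing scalar
    limit state g, with dynamic bounds: once some point is known to fail
    (hi) or to be safe (lo), any sample beyond it is classified without
    evaluating the model, so the number of g-calls stays small."""
    lo, hi = float("-inf"), float("inf")
    failures = calls = 0
    for x in samples:
        if x >= hi:                 # beyond a known failed point: must fail
            failures += 1
        elif x <= lo:               # below a known safe point: must be safe
            pass
        else:                       # bounds are inconclusive: run the model
            calls += 1
            if g(x) > threshold:
                failures += 1
                hi = min(hi, x)
            else:
                lo = max(lo, x)
    return failures / len(samples), calls

rng = random.Random(0)
xs = [rng.random() for _ in range(10_000)]
# hypothetical limit state: failure for x > 0.7, so the true p_f is 0.3
p_fail, n_calls = db_failure_probability(lambda x: x - 0.7, xs)
```

    The estimate matches ordinary Monte Carlo exactly, but only a few dozen of the 10,000 samples require a model evaluation.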

  8. Monte Carlo techniques for analyzing deep penetration problems

    International Nuclear Information System (INIS)

    Cramer, S.N.; Gonnord, J.; Hendricks, J.S.

    1985-01-01

    A review of current methods and difficulties in Monte Carlo deep-penetration calculations is presented. Statistical uncertainty is discussed, and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing is reviewed. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multi-group Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications.

  9. Monte Carlo techniques for analyzing deep penetration problems

    International Nuclear Information System (INIS)

    Cramer, S.N.; Gonnord, J.; Hendricks, J.S.

    1985-01-01

    A review of current methods and difficulties in Monte Carlo deep-penetration calculations is presented. Statistical uncertainty is discussed, and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing is reviewed. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multi-group Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications. 29 refs.

  10. Response decomposition with Monte Carlo correlated coupling

    International Nuclear Information System (INIS)

    Ueki, T.; Hoogenboom, J.E.; Kloosterman, J.L.

    2001-01-01

    Particle histories that contribute to a detector response are categorized according to whether they are fully confined inside a source-detector enclosure or cross and recross the same enclosure. The contribution from the confined histories is expressed using a forward problem with the external boundary condition on the source-detector enclosure. The contribution from the crossing and recrossing histories is expressed as the surface integral at the same enclosure of the product of the directional cosine and the fluxes in the foregoing forward problem and the adjoint problem for the whole spatial domain. The former contribution can be calculated by a standard forward Monte Carlo. The latter contribution can be calculated by correlated coupling of forward and adjoint histories independently of the former contribution. We briefly describe the computational method and discuss its application to perturbation analysis for localized material changes. (orig.)

  11. Response decomposition with Monte Carlo correlated coupling

    Energy Technology Data Exchange (ETDEWEB)

    Ueki, T.; Hoogenboom, J.E.; Kloosterman, J.L. [Delft Univ. of Technology (Netherlands). Interfaculty Reactor Inst.

    2001-07-01

    Particle histories that contribute to a detector response are categorized according to whether they are fully confined inside a source-detector enclosure or cross and recross the same enclosure. The contribution from the confined histories is expressed using a forward problem with the external boundary condition on the source-detector enclosure. The contribution from the crossing and recrossing histories is expressed as the surface integral at the same enclosure of the product of the directional cosine and the fluxes in the foregoing forward problem and the adjoint problem for the whole spatial domain. The former contribution can be calculated by a standard forward Monte Carlo. The latter contribution can be calculated by correlated coupling of forward and adjoint histories independently of the former contribution. We briefly describe the computational method and discuss its application to perturbation analysis for localized material changes. (orig.)

  12. Variance Reduction Techniques in Monte Carlo Methods

    NARCIS (Netherlands)

    Kleijnen, Jack P.C.; Ridder, A.A.N.; Rubinstein, R.Y.

    2010-01-01

    Monte Carlo methods are simulation algorithms to estimate a numerical quantity in a statistical model of a real system. These algorithms are executed by computer programs. Variance reduction techniques (VRT) are needed, even though computer speed has been increasing dramatically, ever since the
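
    Antithetic variates are one of the classic variance reduction techniques (VRT) this record surveys. A minimal sketch, with an integrand chosen only for illustration:

```python
import math
import random

def crude_and_antithetic(f, n, seed=1):
    """Estimate E[f(U)], U ~ Uniform(0, 1), twice: with a crude estimator
    and with antithetic variates, which average f over the negatively
    correlated pairs (u, 1 - u) to reduce the variance."""
    rng = random.Random(seed)
    crude = sum(f(rng.random()) for _ in range(n)) / n
    rng = random.Random(seed)
    anti = sum(0.5 * (f(u) + f(1.0 - u))
               for u in (rng.random() for _ in range(n // 2))) / (n // 2)
    return crude, anti

# Both estimate the integral of e^x over [0, 1], i.e. e - 1.
crude, anti = crude_and_antithetic(math.exp, 100_000)
```

    Both estimators use the same number of function evaluations, yet the antithetic one is markedly tighter because f(u) and f(1 - u) are negatively correlated for monotone f.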

  13. Monte Carlo Techniques for Nuclear Systems - Theory Lectures

    International Nuclear Information System (INIS)

    Brown, Forrest B.; Univ. of New Mexico, Albuquerque, NM

    2016-01-01

    These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear engineering review and MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I, II, and III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; Doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; and fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state of the art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate-level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience in running the code, plotting both geometry and results, and understanding the code output. The class includes lectures and hands-on computer use for a variety of Monte Carlo calculations.
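
    The eigenvalue calculations these lectures cover rest on power iteration: each criticality cycle applies the fission operator to a normalised source and reads off k-effective. A minimal deterministic sketch on a fission matrix (the 2x2 matrix below is a hypothetical illustration, not from the notes):

```python
def power_iteration(F, tol=1e-10, max_cycles=1000):
    """Power iteration on a fission matrix F: repeatedly apply F to a
    normalised fission source, the scheme underlying Monte Carlo
    criticality cycles; returns the dominant eigenvalue (k-effective)
    and the converged source shape."""
    n = len(F)
    s = [1.0 / n] * n                   # flat initial fission source
    k = 1.0
    for _ in range(max_cycles):
        t = [sum(F[i][j] * s[j] for j in range(n)) for i in range(n)]
        k_new = sum(t)                  # source is normalised to 1 each cycle
        s = [x / k_new for x in t]
        if abs(k_new - k) < tol:
            return k_new, s
        k = k_new
    return k, s

# hypothetical 2-region fission matrix (illustrative numbers only);
# its dominant eigenvalue is exactly 0.8
F = [[0.6, 0.3],
     [0.2, 0.5]]
k_eff, source = power_iteration(F)
```

    The convergence rate is set by the dominance ratio (here 0.3/0.8), which is why the dominance-ratio and acceleration lectures matter for realistic, loosely coupled systems.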

  14. Monte Carlo Techniques for Nuclear Systems - Theory Lectures

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Methods, Codes, and Applications Group; Univ. of New Mexico, Albuquerque, NM (United States). Nuclear Engineering Dept.

    2016-11-29

    These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear engineering review and MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I, II, and III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; Doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; and fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state of the art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate-level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience in running the code, plotting both geometry and results, and understanding the code output. The class includes lectures and hands-on computer use for a variety of Monte Carlo calculations.

  15. Dynamic bounds coupled with Monte Carlo simulations

    NARCIS (Netherlands)

    Rajabali Nejad, Mohammadreza; Meester, L.E.; van Gelder, P.H.A.J.M.; Vrijling, J.K.

    2011-01-01

    For the reliability analysis of engineering structures a variety of methods is known, of which Monte Carlo (MC) simulation is widely considered to be among the most robust and most generally applicable. To reduce simulation cost of the MC method, variance reduction methods are applied. This paper

  16. A fully coupled Monte Carlo/discrete ordinates solution to the neutron transport equation. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Randal Scott [Univ. of Arizona, Tucson, AZ (United States)

    1990-01-01

    The neutron transport equation is solved by a hybrid method that iteratively couples regions where deterministic (SN) and stochastic (Monte Carlo) methods are applied. Unlike previous hybrid methods, the Monte Carlo and SN regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The hybrid method provides a new means of solving problems involving both optically thick and optically thin regions, for which neither Monte Carlo nor SN alone is well suited. The fully coupled Monte Carlo/SN technique consists of defining spatial and/or energy regions of a problem in which either a Monte Carlo calculation or an SN calculation is to be performed. The Monte Carlo region may comprise the entire spatial region for selected energy groups, or may consist of a rectangular area that is either completely or partially embedded in an arbitrary SN region. The Monte Carlo and SN regions are then connected through the common angular boundary fluxes, which are determined iteratively using the response matrix technique, and through volumetric sources. The hybrid method has been implemented in the SN code TWODANT by adding special-purpose Monte Carlo subroutines to calculate the response matrices and volumetric sources, and linkage subroutines to carry out the interface flux iterations. The common angular boundary fluxes are included in the SN code as interior boundary sources, leaving the logic for the solution of the transport flux unchanged, while, with minor modifications, the diffusion synthetic accelerator remains effective in accelerating the SN calculations. The special-purpose Monte Carlo routines used are essentially analog, with few variance reduction techniques employed. However, the routines have been successfully vectorized, with approximately a factor-of-five increase in speed over the non-vectorized version.
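
    The interface-flux iteration at the heart of this hybrid can be caricatured with two regions whose responses to incident current are known. The albedo and transmission numbers below are invented, and a scalar current stands in for the angular boundary fluxes:

```python
def interface_iteration(r1, t1, r2, t2, src=1.0, tol=1e-12, max_iter=1000):
    """Toy response-matrix coupling of two 1-D regions that exchange
    partial currents across a common interface. Region i reflects a
    fraction r_i and transmits a fraction t_i of whatever enters it;
    the interface currents are iterated to convergence, as the angular
    boundary fluxes are in the hybrid Monte Carlo/SN method."""
    j_plus = j_minus = 0.0                   # interface currents (-> and <-)
    for _ in range(max_iter):
        new_plus = t1 * src + r1 * j_minus   # region-1 response
        new_minus = r2 * new_plus            # region-2 response
        done = (abs(new_plus - j_plus) < tol and
                abs(new_minus - j_minus) < tol)
        j_plus, j_minus = new_plus, new_minus
        if done:
            break
    return j_plus, t2 * j_plus               # interface current, right leakage

# illustrative albedo/transmission numbers; analytically
# j_plus = t1*src / (1 - r1*r2) = 0.6/0.88
j_int, leak = interface_iteration(r1=0.3, t1=0.6, r2=0.4, t2=0.5)
```

    In the real method, each "response" is a matrix computed by Monte Carlo or SN over angles and energies, but the fixed-point structure of the coupling is the same.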

  17. Stratified source-sampling techniques for Monte Carlo eigenvalue analysis

    International Nuclear Information System (INIS)

    Mohamed, A.

    1998-01-01

    In 1995, at a conference on criticality safety, a special session was devoted to the Monte Carlo ''Eigenvalue of the World'' problem. Argonne presented a paper at that session in which the anomalies originally observed in that problem were reproduced in a much simplified model-problem configuration and removed by a version of stratified source-sampling. In this paper, stratified source-sampling techniques are generalized and applied to three different Eigenvalue of the World configurations which take into account real-world statistical noise sources not included in the model problem, but which differ in the amount of neutronic coupling among the constituents of each configuration. It is concluded that, in Monte Carlo eigenvalue analysis of loosely coupled arrays, the use of stratified source-sampling reduces the probability of encountering an anomalous result relative to conventional source-sampling methods. However, this gain in reliability is substantially less than that observed in the model-problem results.
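
    The mechanism behind stratified source-sampling is easiest to see in one dimension: forcing every stratum to receive its fair share of samples removes the between-strata noise. A hedged sketch with an invented integrand, not the paper's eigenvalue setting:

```python
import random

def stratified_mean(f, n, strata, seed=3):
    """Stratified estimate of E[f(U)]: draw n/strata points uniformly
    inside each of `strata` equal sub-intervals of [0, 1], so every
    stratum is sampled exactly its fair share of the time."""
    rng = random.Random(seed)
    per = n // strata
    total = 0.0
    for s in range(strata):
        lo = s / strata
        total += sum(f(lo + rng.random() / strata) for _ in range(per))
    return total / (per * strata)

# stratification removes the between-strata variance; for f(x) = x^2
# the estimate is far tighter than a crude estimate of the same size
est = stratified_mean(lambda x: x * x, 10_000, strata=100)
```

    In the eigenvalue context the strata are the constituents of the array, and guaranteeing each one a fixed share of source particles is what suppresses the anomalous undersampling.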

  18. Stabilization effect of fission source in coupled Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Olsen, Borge; Dufek, Jan [Div. of Nuclear Reactor Technology, KTH Royal Institute of Technology, AlbaNova University Center, Stockholm (Sweden)

    2017-08-15

    A fission source can act as a stabilization element in coupled Monte Carlo simulations. We have observed this while studying numerical instabilities in nonlinear steady-state simulations performed by a Monte Carlo criticality solver that is coupled to a xenon feedback solver via fixed-point iteration. While fixed-point iteration is known to be numerically unstable for some problems, resulting in large spatial oscillations of the neutron flux distribution, we show that it is possible to stabilize it by reducing the number of Monte Carlo criticality cycles simulated within each iteration step. While global convergence is ensured, development of any possible numerical instability is prevented by not allowing the fission source to converge fully within a single iteration step, which is achieved by setting a small number of criticality cycles per iteration step. Moreover, under these conditions, the fission source may converge even faster than in criticality calculations with no feedback, as we demonstrate in our numerical test simulations.
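
    The stabilization effect is analogous to damping a fixed-point iteration. The scalar toy below (an invented feedback map, not the xenon solver of the paper) shows the oscillatory divergence of the undamped iteration and how a damped update, playing the role of a partially converged fission source, restores convergence:

```python
def fixed_point(g, x0, relax=1.0, iters=200):
    """Relaxed fixed-point iteration x <- (1 - relax)*x + relax*g(x).
    With relax = 1 this is plain fixed-point iteration, which oscillates
    and diverges when |g'| > 1 at the solution; damping it (analogous to
    a fission source that is deliberately not fully converged within an
    iteration step) restores convergence."""
    x = x0
    for _ in range(iters):
        x = (1.0 - relax) * x + relax * g(x)
    return x

g = lambda x: 2.0 - 1.5 * x        # fixed point x = 0.8, slope g' = -1.5
unstable = fixed_point(g, 0.5, relax=1.0, iters=50)    # oscillates away
stable = fixed_point(g, 0.5, relax=0.5, iters=200)     # converges to 0.8
```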

  19. Diagrammatic Monte Carlo simulations of staggered fermions at finite coupling

    CERN Document Server

    Vairinhos, Helvio

    2016-01-01

    Diagrammatic Monte Carlo has been a very fruitful tool for taming, and in some cases even solving, the sign problem in several lattice models. We have recently proposed a diagrammatic model for simulating lattice gauge theories with staggered fermions at arbitrary coupling, which extends earlier successful efforts to simulate lattice QCD at finite baryon density in the strong-coupling regime. Here we present the first numerical simulations of our model, using worm algorithms.

  20. Optimised Iteration in Coupled Monte Carlo - Thermal-Hydraulics Calculations

    Science.gov (United States)

    Hoogenboom, J. Eduard; Dufek, Jan

    2014-06-01

    This paper describes an optimised iteration scheme for the number of neutron histories and the relaxation factor in successive iterations of coupled Monte Carlo and thermal-hydraulic reactor calculations based on the stochastic iteration method. The scheme results in an increasing number of neutron histories for the Monte Carlo calculation in successive iteration steps and a decreasing relaxation factor for the spatial power distribution to be used as input to the thermal-hydraulics calculation. The theoretical basis is discussed in detail and practical consequences of the scheme are shown, among them a nearly linear increase per iteration of the number of cycles in the Monte Carlo calculation. The scheme is demonstrated for a full PWR-type fuel assembly. Results are shown for the axial power distribution during several iteration steps. A few alternative iteration methods were also tested, and it is concluded that the presented iteration method is near optimal.
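
    A scalar caricature of the scheme: grow the history count linearly so the noise per step shrinks, and relax with factor 1/i so all histories end up equally weighted. The feedback map, noise model, and constants below are invented for illustration:

```python
import random

def stochastic_iteration(g, x0, steps=50, n0=1000, seed=7):
    """Stochastic-iteration sketch: at step i the coupled 'Monte Carlo'
    response g(x) is observed with statistical noise shrinking as the
    history count n_i grows linearly, and the solution is under-relaxed
    with factor 1/i, so every simulated history ends up with equal
    weight in the final estimate."""
    rng = random.Random(seed)
    x = x0
    for i in range(1, steps + 1):
        n_i = n0 * i                                  # growing history count
        noisy = g(x) + rng.gauss(0.0, 1.0 / n_i ** 0.5)
        x = (1.0 - 1.0 / i) * x + (1.0 / i) * noisy   # decreasing relaxation
    return x

g = lambda x: 0.1 * x + 0.72       # hypothetical feedback map, fixed point 0.8
x_hat = stochastic_iteration(g, 0.0)
```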

  1. Lecture 1. Monte Carlo basics. Lecture 2. Adjoint Monte Carlo. Lecture 3. Coupled Forward-Adjoint calculations

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J.E. [Delft University of Technology, Interfaculty Reactor Institute, Delft (Netherlands)

    2000-07-01

    The Monte Carlo method is a statistical method for solving mathematical and physical problems using random numbers. The principle of the method will be demonstrated for a simple mathematical problem and for neutron transport. Various types of estimators will be discussed, as well as generally applied variance reduction methods like splitting, Russian roulette and importance biasing. The theoretical formulation for solving eigenvalue problems for multiplying systems will be shown. Some reflections will be given about the applicability of the Monte Carlo method, its limitations and its future prospects for reactor physics calculations. Adjoint Monte Carlo is a Monte Carlo game to solve the adjoint neutron (or photon) transport equation. The adjoint transport equation can be interpreted in terms of simulating histories of artificial particles, which show properties of neutrons that move backwards in history. These particles start their history at the detector from which the response must be estimated and contribute to the estimated quantity when they hit or pass through the neutron source. Application to the multigroup transport formulation will be demonstrated, and a possible implementation for the continuous-energy case will be outlined. The inherent advantages and disadvantages of the method will be discussed. The Midway Monte Carlo method will be presented for calculating a detector response due to a (neutron or photon) source. A derivation will be given of the basic formula for the Midway Monte Carlo method. The black absorber technique, which allows cutting off particle histories when they reach the midway surface in one of the calculations, will be derived. An extension of the theory to coupled neutron-photon problems is given. The method is demonstrated for an oil well logging problem, comprising a neutron source in a borehole and photon detectors to register the photons generated by inelastic neutron scattering. (author)
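
    The splitting and Russian roulette games mentioned in these lectures share one invariant: expected particle weight is preserved. A minimal sketch of a standard weight game (the window bounds are illustrative):

```python
import random

def roulette_and_split(weight, w_low=0.25, w_high=2.0, rng=None):
    """Standard weight game: a particle below w_low plays Russian
    roulette (it survives with probability weight/1.0 at weight 1.0),
    a particle above w_high is split into equal-weight copies, and
    anything in between is left alone. Expected weight is preserved."""
    rng = rng or random.Random()
    if weight < w_low:
        if rng.random() < weight:       # survive roulette...
            return [1.0]                # ...with the boosted weight
        return []                       # killed
    if weight > w_high:
        n = int(weight)                 # split into n copies
        return [weight / n] * n
    return [weight]

# fairness check: rouletting weight-0.1 particles preserves the mean weight
rng = random.Random(5)
total = sum(sum(roulette_and_split(0.1, rng=rng)) for _ in range(100_000))
mean_weight = total / 100_000
```

    Roulette trades a little variance for far fewer low-value histories; splitting does the reverse in important regions. Both leave every tally unbiased.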

  2. Lecture 1. Monte Carlo basics. Lecture 2. Adjoint Monte Carlo. Lecture 3. Coupled Forward-Adjoint calculations

    International Nuclear Information System (INIS)

    Hoogenboom, J.E.

    2000-01-01

    The Monte Carlo method is a statistical method for solving mathematical and physical problems using random numbers. The principle of the method will be demonstrated for a simple mathematical problem and for neutron transport. Various types of estimators will be discussed, as well as generally applied variance reduction methods like splitting, Russian roulette and importance biasing. The theoretical formulation for solving eigenvalue problems for multiplying systems will be shown. Some reflections will be given about the applicability of the Monte Carlo method, its limitations and its future prospects for reactor physics calculations. Adjoint Monte Carlo is a Monte Carlo game to solve the adjoint neutron (or photon) transport equation. The adjoint transport equation can be interpreted in terms of simulating histories of artificial particles, which show properties of neutrons that move backwards in history. These particles start their history at the detector from which the response must be estimated and contribute to the estimated quantity when they hit or pass through the neutron source. Application to the multigroup transport formulation will be demonstrated, and a possible implementation for the continuous-energy case will be outlined. The inherent advantages and disadvantages of the method will be discussed. The Midway Monte Carlo method will be presented for calculating a detector response due to a (neutron or photon) source. A derivation will be given of the basic formula for the Midway Monte Carlo method. The black absorber technique, which allows cutting off particle histories when they reach the midway surface in one of the calculations, will be derived. An extension of the theory to coupled neutron-photon problems is given. The method is demonstrated for an oil well logging problem, comprising a neutron source in a borehole and photon detectors to register the photons generated by inelastic neutron scattering. (author)

  3. Stabilization effect of fission source in coupled Monte Carlo simulations

    Directory of Open Access Journals (Sweden)

    Börge Olsen

    2017-08-01

    A fission source can act as a stabilization element in coupled Monte Carlo simulations. We have observed this while studying numerical instabilities in nonlinear steady-state simulations performed by a Monte Carlo criticality solver that is coupled to a xenon feedback solver via fixed-point iteration. While fixed-point iteration is known to be numerically unstable for some problems, resulting in large spatial oscillations of the neutron flux distribution, we show that it is possible to stabilize it by reducing the number of Monte Carlo criticality cycles simulated within each iteration step. While global convergence is ensured, development of any possible numerical instability is prevented by not allowing the fission source to converge fully within a single iteration step, which is achieved by setting a small number of criticality cycles per iteration step. Moreover, under these conditions, the fission source may converge even faster than in criticality calculations with no feedback, as we demonstrate in our numerical test simulations.

  4. A Monte Carlo Sampling Technique for Multi-phonon Processes

    Energy Technology Data Exchange (ETDEWEB)

    Hoegberg, Thure

    1961-12-15

    A sampling technique for selecting scattering angle and energy gain in Monte Carlo calculations of neutron thermalization is described. It is supposed that the scattering is separated into processes involving different numbers of phonons. The number of phonons involved is first determined. Scattering angle and energy gain are then chosen by using special properties of the multi-phonon term.
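
    The record's two-stage scheme — first determine the number of phonons, then draw angle and energy gain from that term's conditional law — is an instance of composition sampling. A hedged sketch in which the term weights and energy-gain distributions are invented placeholders:

```python
import random

def sample_phonon_process(weights, samplers, rng):
    """Two-stage (composition) sampling: first select the number of
    phonons from the normalised term weights, then draw the energy gain
    from that term's conditional distribution."""
    u = rng.random() * sum(weights)
    acc = 0.0
    for n, w in enumerate(weights, start=1):
        acc += w
        if u <= acc:
            return n, samplers[n - 1](rng)
    return len(weights), samplers[-1](rng)   # guard against rounding

rng = random.Random(11)
weights = [0.70, 0.25, 0.05]                 # 1-, 2-, 3-phonon term weights
samplers = [lambda r: r.uniform(0.0, 1.0),   # hypothetical energy-gain laws
            lambda r: r.uniform(0.0, 2.0),
            lambda r: r.uniform(0.0, 3.0)]
counts = [0, 0, 0]
for _ in range(100_000):
    n, _energy = sample_phonon_process(weights, samplers, rng)
    counts[n - 1] += 1
```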

  5. Monte Carlo techniques in diagnostic and therapeutic nuclear medicine

    International Nuclear Information System (INIS)

    Zaidi, H.

    2002-01-01

    Monte Carlo techniques have become one of the most popular tools in different areas of medical radiation physics following the development and subsequent implementation of powerful computing systems for clinical use. In particular, they have been extensively applied to simulate processes involving random behaviour and to quantify physical parameters that are difficult or even impossible to calculate analytically or to determine by experimental measurements. The use of the Monte Carlo method to simulate radiation transport turned out to be the most accurate means of predicting absorbed dose distributions and other quantities of interest in the radiation treatment of cancer patients using either external or radionuclide radiotherapy. The same trend has occurred for the estimation of the absorbed dose in diagnostic procedures using radionuclides. There is broad consensus in accepting that the earliest Monte Carlo calculations in medical radiation physics were made in the area of nuclear medicine, where the technique was used for dosimetry modelling and computations. Formalism and data based on Monte Carlo calculations, developed by the Medical Internal Radiation Dose (MIRD) committee of the Society of Nuclear Medicine, were published in a series of supplements to the Journal of Nuclear Medicine, the first one being released in 1968. Some of these pamphlets made extensive use of Monte Carlo calculations to derive specific absorbed fractions for electron and photon sources uniformly distributed in organs of mathematical phantoms. Interest in Monte Carlo-based dose calculations with β-emitters has been revived with the application of radiolabelled monoclonal antibodies to radioimmunotherapy. As a consequence of this generalized use, many questions are being raised primarily about the need and potential of Monte Carlo techniques, but also about how accurate it really is, what would it take to apply it clinically and make it available widely to the medical physics

  6. A new effective Monte Carlo Midway coupling method in MCNP applied to a well logging problem

    Energy Technology Data Exchange (ETDEWEB)

    Serov, I.V.; John, T.M.; Hoogenboom, J.E.

    1998-12-01

    The background of the Midway forward-adjoint coupling method including the black absorber technique for efficient Monte Carlo determination of radiation detector responses is described. The method is implemented in the general purpose MCNP Monte Carlo code. The utilization of the method is fairly straightforward and does not require any substantial extra expertise. The method was applied to a standard neutron well logging porosity tool problem. The results exhibit reliability and high efficiency of the Midway method. For the studied problem the efficiency gain is considerably higher than for a normal forward calculation, which is already strongly optimized by weight-windows. No additional effort is required to adjust the Midway model if the position of the detector or the porosity of the formation is changed. Additionally, the Midway method can be used with other variance reduction techniques if extra gain in efficiency is desired.

  7. Efficient Geometry and Data Handling for Large-Scale Monte Carlo - Thermal-Hydraulics Coupling

    Science.gov (United States)

    Hoogenboom, J. Eduard

    2014-06-01

    Detailed coupling of thermal-hydraulics calculations to Monte Carlo reactor criticality calculations requires each axial layer of each fuel pin to be defined separately in the input to the Monte Carlo code in order to assign to each volume the temperature according to the result of the TH calculation, and if the volume contains coolant, also the density of the coolant. This leads to huge input files for even small systems. In this paper a methodology for dynamical assignment of temperatures with respect to cross section data is demonstrated to overcome this problem. The method is implemented in MCNP5. The method is verified for an infinite lattice with 3x3 BWR-type fuel pins with fuel, cladding and moderator/coolant explicitly modeled. For each pin 60 axial zones are considered with different temperatures and coolant densities. The results of the axial power distribution per fuel pin are compared to a standard MCNP5 run in which all 9x60 cells for fuel, cladding and coolant are explicitly defined and their respective temperatures determined from the TH calculation. Full agreement is obtained. For large-scale application the method is demonstrated for an infinite lattice with 17x17 PWR-type fuel assemblies with 25 rods replaced by guide tubes. Again all geometrical detail is retained. The method was used in a procedure for coupled Monte Carlo and thermal-hydraulics iterations. Using an optimised iteration technique, convergence was obtained in 11 iteration steps.
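
    The dynamic assignment described above can be sketched as follows. This is a hedged illustration in Python, not the MCNP5 implementation; the TH field, pin count and 366 cm active height are made-up stand-ins.

```python
# Hedged sketch (not the MCNP5 implementation) of the idea above:
# keep one geometric cell per pin and compute the axial zone index at
# collision time, so the TH fields need not be spelled out cell by
# cell in the input. th_result and the 366 cm height are illustrative.
th_result = {(p, z): (565.0 + 5.0 * z, 0.74 - 0.005 * z)   # (K, g/cm^3)
             for p in range(9) for z in range(60)}

def material_at(pin, height_cm, zone_height=366.0 / 60):
    """Map a collision site to its TH data on the fly."""
    zone = min(int(height_cm / zone_height), 59)
    return th_result[(pin, zone)]

temperature, density = material_at(4, 200.0)   # pin 4, 200 cm elevation
```

    The lookup replaces thousands of explicit cell definitions with a single function of position.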

  8. Monte Carlo technique for local perturbations in multiplying systems

    International Nuclear Information System (INIS)

    Bernnat, W.

    1974-01-01

    The use of the Monte Carlo method for the calculation of reactivity perturbations in multiplying systems due to changes in geometry or composition requires a correlated sampling technique to make such calculations economical or in the case of very small perturbations even feasible. The technique discussed here is suitable for local perturbations. Very small perturbation regions will be treated by an adjoint mode. The perturbation of the source distribution due to the changed system and its reaction on the reactivity worth or other values of interest is taken into account by a fission matrix method. The formulation of the method and its application are discussed. 10 references. (U.S.)
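
    The gain from correlated sampling can be seen in a toy transmission problem: reusing the same random numbers in the unperturbed and perturbed systems makes their difference estimator low-variance. This Python sketch is illustrative only and is not the author's reactor formulation.

```python
import math
import random

# Toy correlated-sampling demo: estimate the change in transmission
# through a slab when the cross section is perturbed slightly, using
# the SAME random numbers for both systems so their difference has
# low variance. sigma, dsigma and thickness are illustrative values.
sigma, dsigma, thickness = 1.0, 0.01, 2.0

def transmission_change(n, seed=42):
    rng = random.Random(seed)
    diff_sum = 0.0
    for _ in range(n):
        u = rng.random()
        # Sample a free path in the unperturbed system ...
        w0 = 1.0 if -math.log(u) / sigma > thickness else 0.0
        # ... and reuse u for the perturbed system (correlated sampling).
        w1 = 1.0 if -math.log(u) / (sigma + dsigma) > thickness else 0.0
        diff_sum += w1 - w0
    return diff_sum / n

delta = transmission_change(100_000)
exact = math.exp(-(sigma + dsigma) * thickness) - math.exp(-sigma * thickness)
```

    With independent random numbers the two transmissions would each carry statistical noise far larger than their tiny difference.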

  9. Three-dimensional coupled Monte Carlo-discrete ordinates computational scheme for shielding calculations of large and complex nuclear facilities

    International Nuclear Information System (INIS)

    Chen, Y.; Fischer, U.

    2005-01-01

    Shielding calculations of advanced nuclear facilities such as accelerator-based neutron sources or fusion devices of the tokamak type are complicated due to their complex geometries and their large dimensions, including bulk shields of several meters thickness. While the complexity of the geometry in the shielding calculation can hardly be handled by the discrete ordinates method, the deep penetration of radiation through bulk shields is a severe challenge for the Monte Carlo particle transport technique. This work proposes a dedicated computational scheme for coupled Monte Carlo-discrete ordinates transport calculations to handle this kind of shielding problem. The Monte Carlo technique is used to simulate the particle generation and transport in the target region, with both complex geometry and reaction physics, and the discrete ordinates method is used to treat the deep penetration problem in the bulk shield. The coupling scheme has been implemented in a program system by loosely integrating the Monte Carlo transport code MCNP, the three-dimensional discrete ordinates code TORT and a newly developed coupling interface program for the mapping process. Test calculations were performed and compared with MCNP solutions; satisfactory agreement was obtained between the two approaches. The program system has been applied to the complicated shielding problem of the accelerator-based IFMIF neutron source. The successful application demonstrates that the coupling scheme with the program system is a useful computational tool for the shielding analysis of complex and large nuclear facilities. (authors)

  10. Monte Carlo technique for very large ising models

    Science.gov (United States)

    Kalle, C.; Winkelmann, V.

    1982-08-01

    Rebbi's multispin coding technique is improved and applied to the kinetic Ising model with size 600*600*600. We give the central part of our computer program (for a CDC Cyber 76), which will be helpful also in a simulation of smaller systems, and describe the other tricks necessary to go to large lattices. The magnetization M at T = 1.4 Tc is found to decay asymptotically as exp(-t/2.90), starting from M(t = 0) = 1, if t is measured in Monte Carlo steps per spin.
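
    A plain (non-multispin-coded) Python sketch of the measurement itself, on a small 2D lattice rather than the 600*600*600 system of the paper, looks like this:

```python
import math
import random

# Start from M = 1 and record the magnetization decay of a small 2D
# kinetic Ising model at T = 1.4*Tc under Metropolis dynamics.
# Illustrative: 32x32 in 2D, not the paper's 600^3 multispin code.
L = 32
T = 1.4 * 2.0 / math.log(1.0 + math.sqrt(2.0))  # 1.4 * Tc (2D Onsager)
rng = random.Random(1)
spin = [[1] * L for _ in range(L)]

def sweep():
    """One Monte Carlo step per spin (L*L attempted flips)."""
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nb = (spin[(i + 1) % L][j] + spin[(i - 1) % L][j]
              + spin[i][(j + 1) % L] + spin[i][(j - 1) % L])
        dE = 2.0 * spin[i][j] * nb
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            spin[i][j] = -spin[i][j]

magnetization = [1.0]
for _ in range(20):            # 20 Monte Carlo steps per spin
    sweep()
    magnetization.append(sum(map(sum, spin)) / (L * L))
```

    Above Tc the magnetization relaxes toward zero within a few sweeps; multispin coding packs many such spins into one machine word to update them with bitwise operations in parallel.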

  11. Skin fluorescence model based on the Monte Carlo technique

    Science.gov (United States)

    Churmakov, Dmitry Y.; Meglinski, Igor V.; Piletsky, Sergey A.; Greenhalgh, Douglas A.

    2003-10-01

    A novel Monte Carlo technique for simulating the spatial fluorescence distribution within human skin is presented. The computational model of skin takes into account the spatial distribution of fluorophores, which follows the packing of collagen fibers, whereas in the epidermis and stratum corneum the distribution of fluorophores is assumed to be homogeneous. The results of the simulation suggest that the distribution of auto-fluorescence is significantly suppressed in the NIR spectral region, while the fluorescence of a sensor layer embedded in the epidermis is localized at the adjusted depth. The model is also able to simulate the skin fluorescence spectra.

  12. An efficient search method for finding the critical slip surface using the compositional Monte Carlo technique

    International Nuclear Information System (INIS)

    Goshtasbi, K.; Ahmadi, M.; Naeimi, Y.

    2008-01-01

    Locating the critical slip surface and the associated minimum factor of safety are two complementary parts of a slope stability analysis. A large number of computer programs exist to solve slope stability problems, but most of them use inefficient and unreliable search procedures to locate the global minimum factor of safety. This paper presents an efficient and reliable search method, coupled with a modified version of the Monte Carlo technique, to determine the global minimum factor of safety. Examples are presented to illustrate the reliability of the proposed method
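
    A minimal sketch of such a Monte Carlo search, assuming circular trial surfaces described by a centre and radius: the factor-of-safety function fs() below is a hypothetical smooth surrogate, standing in for a real limit-equilibrium evaluation (e.g. Bishop's method).

```python
import random

# Monte Carlo search for the critical slip surface: sample trial
# circles (xc, yc, R) at random and keep the one with the lowest
# factor of safety. fs() is a made-up surrogate with a minimum of 1.2
# near (10, 18, 12); a real analysis would evaluate slice equilibrium.
def fs(xc, yc, r):
    return 1.2 + 0.01 * ((xc - 10)**2 + (yc - 18)**2 + (r - 12)**2)

rng = random.Random(0)
best = (float("inf"), None)
for _ in range(20_000):
    trial = (rng.uniform(0, 20), rng.uniform(10, 30), rng.uniform(5, 20))
    best = min(best, (fs(*trial), trial))

fs_min, surface = best   # global minimum factor of safety and its surface
```

    Refinements such as restricting the sampling box around the current best surface (as in modified Monte Carlo schemes) speed up convergence considerably.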

  13. Collimator performance evaluation by Monte-Carlo techniques

    International Nuclear Information System (INIS)

    Milanesi, L.; Bettinardi, V.; Bellotti, E.; Gilardi, M.C.; Todd-Pokropek, A.; Fazio, F.

    1985-01-01

    A computer program using Monte-Carlo techniques has been developed to simulate gamma camera collimator performance. Input data include hole length, septum thickness, hole size and shape, collimator material, source characteristics, source-to-collimator distance and medium, radiation energy, and total number of events. Agreement between Monte-Carlo simulations and experimental measurements was found for commercial hexagonal parallel-hole collimators in terms of septal penetration, transfer function and sensitivity. The method was then used to rationalize collimator design for tomographic brain studies. A radius of rotation of 15 cm was assumed. By keeping the resolution at 15 cm constant (FWHM = 1.3 cm), the SPECT response to a point source in a scattering medium was obtained for three theoretical collimators. Sensitivity was maximized in the first collimator and uniformity of the resolution response in the third, while the second represented a trade-off between the two. The high-sensitivity design may be superior in the hot spot and/or low activity situation, while for distributed sources of high activity a uniform resolution response should be preferred. The method can be used to tailor collimator design to different clinical needs in SPECT

  14. Methods for coupling radiation, ion, and electron energies in grey Implicit Monte Carlo

    International Nuclear Information System (INIS)

    Evans, T.M.; Densmore, J.D.

    2007-01-01

    We present three methods for extending the Implicit Monte Carlo (IMC) method to treat the time-evolution of coupled radiation, electron, and ion energies. The first method splits the ion and electron coupling and conduction from the standard IMC radiation-transport process. The second method recasts the IMC equations such that part of the coupling is treated during the Monte Carlo calculation. The third method treats all of the coupling and conduction in the Monte Carlo simulation. We apply modified equation analysis (MEA) to simplified forms of each method that neglects the errors in the conduction terms. Through MEA we show that the third method is theoretically the most accurate. We demonstrate the effectiveness of each method on a series of 0-dimensional, nonlinear benchmark problems where the accuracy of the third method is shown to be up to ten times greater than the other coupling methods for selected calculations

  15. Error reduction techniques for Monte Carlo neutron transport calculations

    International Nuclear Information System (INIS)

    Ju, J.H.W.

    1981-01-01

    Monte Carlo methods have been widely applied to problems in nuclear physics, mathematical reliability, communication theory, and other areas. The work in this thesis is developed mainly with neutron transport applications in mind. For nuclear reactor and many other applications, random walk processes have been used to estimate multi-dimensional integrals and obtain information about the solution of integral equations. When the analysis is statistically based such calculations are often costly, and the development of efficient estimation techniques plays a critical role in these applications. All of the error reduction techniques developed in this work are applied to model problems. It is found that the nearly optimal parameters selected by the analytic method for use with GWAN estimator are nearly identical to parameters selected by the multistage method. Modified path length estimation (based on the path length importance measure) leads to excellent error reduction in all model problems examined. Finally, it should be pointed out that techniques used for neutron transport problems may be transferred easily to other application areas which are based on random walk processes. The transport problems studied in this dissertation provide exceptionally severe tests of the error reduction potential of any sampling procedure. It is therefore expected that the methods of this dissertation will prove useful in many other application areas

  16. Application of the perturbation series expansion quantum Monte Carlo method to multiorbital systems having Hund's coupling

    International Nuclear Information System (INIS)

    Sakai, Shiro; Arita, Ryotaro; Aoki, Hideo

    2006-01-01

    We propose a new quantum Monte Carlo method especially intended to couple with the dynamical mean-field theory. The algorithm is not only much more efficient than the conventional Hirsch-Fye algorithm, but is applicable to multiorbital systems having an SU(2)-symmetric Hund's coupling as well

  17. A Monte Carlo simulation technique to determine the optimal portfolio

    Directory of Open Access Journals (Sweden)

    Hassan Ghodrati

    2014-03-01

    Full Text Available During the past few years, there have been several studies on portfolio management. One of the primary concerns on any stock market is to detect the risk associated with various assets. One of the recognized methods for measuring, forecasting, and managing the existing risk is Value at Risk (VaR), which has drawn much attention from financial institutions in recent years. VaR is a method for recognizing and evaluating risk that uses standard statistical techniques, and it has increasingly been used in other fields as well. The present study measured the value at risk of 26 companies from the chemical industry on the Tehran Stock Exchange over the period 2009-2011 using the Monte Carlo simulation technique at the 95% confidence level. The variable used in the study was the daily return resulting from daily stock price changes. Moreover, the optimal investment weight of each selected stock was determined using a hybrid Markowitz and Winker model. The results showed that the maximum loss would not exceed 1,259,432 Rials at the 95% confidence level on the following day.
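
    The core of a Monte Carlo VaR estimate at the 95% level can be sketched as follows; the portfolio value and return history here are illustrative, not the Tehran Stock Exchange data of the study.

```python
import random

# Hedged one-asset sketch of Monte Carlo VaR at 95% confidence:
# resample historical daily returns, simulate the one-day loss many
# times, and take the 95th percentile of the loss distribution.
rng = random.Random(7)
portfolio_value = 1_000_000
# Illustrative 3-year daily return history (mean 0.05%, sd 1.5%).
daily_returns = [rng.gauss(0.0005, 0.015) for _ in range(750)]

losses = []
for _ in range(10_000):
    r = rng.choice(daily_returns)        # historical bootstrap draw
    losses.append(-portfolio_value * r)  # positive value = loss

losses.sort()
var_95 = losses[int(0.95 * len(losses))]  # one-day VaR at 95%
```

    With 95% confidence the one-day loss does not exceed var_95; multi-day horizons are simulated by compounding several draws per scenario.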

  18. Characterization of decommissioned reactor internals: Monte Carlo analysis technique

    International Nuclear Information System (INIS)

    Reid, B.D.; Love, E.F.; Luksic, A.T.

    1993-03-01

    This study discusses computer analysis techniques for determining activation levels of irradiated reactor component hardware to yield data for the Department of Energy's Greater-Than-Class C Low-Level Radioactive Waste Program. The study recommends the Monte Carlo Neutron/Photon (MCNP) computer code as the best analysis tool for this application and compares the technique to direct sampling methodology. To implement the MCNP analysis, a computer model would be developed to reflect the geometry, material composition, and power history of an existing shutdown reactor. MCNP analysis would then be performed using the computer model, and the results would be validated by comparison to laboratory analysis results from samples taken from the shutdown reactor. The report estimates uncertainties for each step of the computational and laboratory analyses; the overall uncertainty of the MCNP results is projected to be ±35%. The primary source of uncertainty is identified as the material composition of the components, and research is suggested to address that uncertainty

  19. A flexible coupling scheme for Monte Carlo and thermal-hydraulics codes

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. Eduard, E-mail: J.E.Hoogenboom@tudelft.nl [Delft University of Technology (Netherlands); Ivanov, Aleksandar; Sanchez, Victor, E-mail: Aleksandar.Ivanov@kit.edu, E-mail: Victor.Sanchez@kit.edu [Karlsruhe Institute of Technology, Institute of Neutron Physics and Reactor Technology, Eggenstein-Leopoldshafen (Germany); Diop, Cheikh, E-mail: Cheikh.Diop@cea.fr [CEA/DEN/DANS/DM2S/SERMA, Commissariat a l' Energie Atomique, Gif-sur-Yvette (France)

    2011-07-01

    A coupling scheme between a Monte Carlo code and a thermal-hydraulics code is being developed within the European NURISP project for comprehensive and validated reactor analysis. The scheme is flexible as it allows different Monte Carlo codes and different thermal-hydraulics codes to be used. At present the MCNP and TRIPOLI4 Monte Carlo codes can be used and the FLICA4 and SubChanFlow thermal-hydraulics codes. For all these codes only an original executable is necessary. A Python script drives the iterations between Monte Carlo and thermal-hydraulics calculations. It also calls a conversion program to merge a master input file for the Monte Carlo code with the appropriate temperature and coolant density data from the thermal-hydraulics calculation. Likewise it calls another conversion program to merge a master input file for the thermal-hydraulics code with the power distribution data from the Monte Carlo calculation. Special attention is given to the neutron cross section data for the various required temperatures in the Monte Carlo calculation. Results are shown for an infinite lattice of PWR fuel pin cells and a 3 x 3 fuel BWR pin cell cluster. Various possibilities for further improvement and optimization of the coupling system are discussed. (author)
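
    The driver loop described above might be sketched like this; executable names (run_mc, run_th) and file names are hypothetical placeholders, not the actual NURISP scripts.

```python
import subprocess

# Hedged sketch of the Python driver: merge master input files with
# the other code's field data, then run each original executable in
# turn. All names below are hypothetical placeholders.
def merge(master, data, out):
    """Stand-in for the format-specific conversion programs that merge
    a master input file with exchanged field data."""
    with open(master) as m, open(data) as d, open(out, "w") as f:
        f.write(m.read() + "\n" + d.read())

def coupled_iteration(n_iter):
    for _ in range(n_iter):
        merge("mc_master.inp", "th_fields.dat", "mc.inp")   # T, rho -> MC
        subprocess.run(["run_mc", "mc.inp"], check=True)    # power out
        merge("th_master.inp", "mc_power.dat", "th.inp")    # power -> TH
        subprocess.run(["run_th", "th.inp"], check=True)    # T, rho out
```

    Keeping the merge step outside both codes is what makes the scheme flexible: swapping MCNP for TRIPOLI4, or FLICA4 for SubChanFlow, only changes the conversion programs.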

  20. A flexible coupling scheme for Monte Carlo and thermal-hydraulics codes

    International Nuclear Information System (INIS)

    Hoogenboom, J. Eduard; Ivanov, Aleksandar; Sanchez, Victor; Diop, Cheikh

    2011-01-01

    A coupling scheme between a Monte Carlo code and a thermal-hydraulics code is being developed within the European NURISP project for comprehensive and validated reactor analysis. The scheme is flexible as it allows different Monte Carlo codes and different thermal-hydraulics codes to be used. At present the MCNP and TRIPOLI4 Monte Carlo codes can be used and the FLICA4 and SubChanFlow thermal-hydraulics codes. For all these codes only an original executable is necessary. A Python script drives the iterations between Monte Carlo and thermal-hydraulics calculations. It also calls a conversion program to merge a master input file for the Monte Carlo code with the appropriate temperature and coolant density data from the thermal-hydraulics calculation. Likewise it calls another conversion program to merge a master input file for the thermal-hydraulics code with the power distribution data from the Monte Carlo calculation. Special attention is given to the neutron cross section data for the various required temperatures in the Monte Carlo calculation. Results are shown for an infinite lattice of PWR fuel pin cells and a 3 x 3 fuel BWR pin cell cluster. Various possibilities for further improvement and optimization of the coupling system are discussed. (author)

  1. Application of variance reduction techniques of Monte-Carlo method to deep penetration shielding problems

    International Nuclear Information System (INIS)

    Rawat, K.K.; Subbaiah, K.V.

    1996-01-01

    The general purpose Monte Carlo code MCNP is widely employed for solving deep penetration problems by applying variance reduction techniques. These techniques depend on the nature and type of the problem being solved. The application of geometry splitting and the implicit capture method is examined to study deep penetration problems of neutrons, gammas and coupled neutron-gamma transport in thick shielding materials. The typical problems chosen are: i) a point isotropic monoenergetic gamma-ray source of 1 MeV energy in a nearly infinite water medium, ii) a 252Cf spontaneous fission source at the centre of 140 cm thick water and concrete, and iii) 14 MeV fast neutrons incident on the axis of a 100 cm thick concrete disk. (author). 7 refs., 5 figs
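
    Implicit capture (survival biasing), one of the techniques examined, can be illustrated on a 1D rod with strictly forward scattering, where the transmitted weight has the analytic value exp(-sigma_a * L). This is a hedged toy model, not an MCNP calculation.

```python
import math
import random

# Implicit capture: instead of killing a particle on absorption,
# multiply its weight by the survival probability at each collision,
# and play Russian roulette on low-weight histories. Toy 1D rod model
# with strictly forward scattering, so the answer is exp(-sigma_a*L).
sigma_t, sigma_s = 1.0, 0.6        # total and scattering cross sections
survival = sigma_s / sigma_t
rng = random.Random(3)

def transmitted_weight(thickness, n):
    total = 0.0
    for _ in range(n):
        x, w = 0.0, 1.0
        while True:
            x += -math.log(rng.random()) / sigma_t  # next collision site
            if x > thickness:                       # escaped: score weight
                total += w
                break
            w *= survival                           # implicit capture
            if w < 0.01:                            # Russian roulette
                if rng.random() < 0.5:
                    break                           # history terminated
                w *= 2.0                            # survivor doubled
    return total / n

est = transmitted_weight(3.0, 50_000)
```

    Every history now contributes to the transmitted score, which is what makes the technique effective for deep penetration.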

  2. Single pin BWR benchmark problem for coupled Monte Carlo - Thermal hydraulics analysis

    International Nuclear Information System (INIS)

    Ivanov, A.; Sanchez, V.; Hoogenboom, J. E.

    2012-01-01

    As part of the European NURISP research project, a single pin BWR benchmark problem was defined. The aim of this initiative is to test the coupling strategies between Monte Carlo and subchannel codes developed by different project participants. In this paper the results obtained by the Delft Univ. of Technology and Karlsruhe Inst. of Technology will be presented. The benchmark problem was simulated with the following coupled codes: TRIPOLI-SUBCHANFLOW, MCNP-FLICA, MCNP-SUBCHANFLOW, and KENO-SUBCHANFLOW. (authors)

  3. Single pin BWR benchmark problem for coupled Monte Carlo - Thermal hydraulics analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ivanov, A.; Sanchez, V. [Karlsruhe Inst. of Technology, Inst. for Neutron Physics and Reactor Technology, Herman-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Hoogenboom, J. E. [Delft Univ. of Technology, Faculty of Applied Sciences, Mekelweg 15, 2629 JB Delft (Netherlands)

    2012-07-01

    As part of the European NURISP research project, a single pin BWR benchmark problem was defined. The aim of this initiative is to test the coupling strategies between Monte Carlo and subchannel codes developed by different project participants. In this paper the results obtained by the Delft Univ. of Technology and Karlsruhe Inst. of Technology will be presented. The benchmark problem was simulated with the following coupled codes: TRIPOLI-SUBCHANFLOW, MCNP-FLICA, MCNP-SUBCHANFLOW, and KENO-SUBCHANFLOW. (authors)

  4. Development of three-dimensional program based on Monte Carlo and discrete ordinates bidirectional coupling method

    International Nuclear Information System (INIS)

    Han Jingru; Chen Yixue; Yuan Longjun

    2013-01-01

    The Monte Carlo (MC) and discrete ordinates (SN) methods are commonly used in the design of radiation shielding. The Monte Carlo method treats geometry exactly but is time-consuming for deep penetration problems. The discrete ordinates method is computationally efficient, but it is quite costly in computer memory and suffers from the ray effect. Neither the discrete ordinates method nor the Monte Carlo method alone is adequate for shielding calculations of large, complex nuclear facilities. To solve this problem, a Monte Carlo and discrete ordinates bidirectional coupling method has been developed. The bidirectional coupling is implemented in an interface program that transfers the particle probability distribution of MC and the angular flux of discrete ordinates. The coupling method combines the advantages of MC and SN. Test problems in Cartesian and cylindrical coordinates have been calculated with the coupling method. The results are compared with MCNP and TORT, and satisfactory agreement is obtained, proving the correctness of the program. (authors)

  5. BALTORO a general purpose code for coupling discrete ordinates and Monte-Carlo radiation transport calculations

    International Nuclear Information System (INIS)

    Zazula, J.M.

    1983-01-01

    The general purpose code BALTORO was written to couple three-dimensional Monte Carlo (MC) and one-dimensional discrete ordinates (DO) radiation transport calculations. The quantity of a radiation-induced (neutron or gamma-ray) nuclear effect, or the score from a radiation-yielding nuclear effect, can be analysed in this way. (author)

  6. Deficiency in Monte Carlo simulations of coupled neutron-gamma-ray fields

    NARCIS (Netherlands)

    Maleka, Peane P.; Maucec, Marko; de Meijer, Robert J.

    2011-01-01

    The deficiency in Monte Carlo simulations of coupled neutron-gamma-ray fields was investigated by benchmarking two simulation codes against experimental data. The simulations showed better correspondence with the experimental data for gamma-ray transport only. In the simulations, the neutron interactions with

  7. Automated importance generation and biasing techniques for Monte Carlo shielding calculations with the TRIPOLI-3 code

    International Nuclear Information System (INIS)

    Both, J.P.; Nimal, J.C.; Vergnaud, T.

    1990-01-01

    We discuss an automated biasing procedure for generating the parameters necessary to achieve efficient biased Monte Carlo shielding calculations. The biasing techniques considered here are the exponential transform and collision biasing, which derive from the concept of a biased game based on the importance function. We use a simple model of the importance function with exponential attenuation as the distance to the detector increases. This importance function is generated on a three-dimensional mesh covering the geometry, using graph-theory algorithms. This scheme is currently being implemented in the third version of the neutron and gamma-ray transport code TRIPOLI-3. (author)
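
    The exponential-attenuation importance model on a mesh can be sketched with a breadth-first search giving each cell's graph distance to the detector; the mesh size and attenuation constant below are illustrative, not the TRIPOLI-3 values.

```python
import math
from collections import deque

# Importance map sketch: each mesh cell gets an importance that decays
# exponentially with its graph distance to the detector cell. A 2D
# mesh and the attenuation constant are illustrative stand-ins.
nx, ny = 6, 4
detector = (5, 3)
attenuation = 0.5          # per-cell attenuation, illustrative

# Breadth-first search gives the cell-to-detector graph distance.
dist = {detector: 0}
queue = deque([detector])
while queue:
    i, j = queue.popleft()
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        n = (i + di, j + dj)
        if 0 <= n[0] < nx and 0 <= n[1] < ny and n not in dist:
            dist[n] = dist[(i, j)] + 1
            queue.append(n)

importance = {cell: math.exp(-attenuation * d) for cell, d in dist.items()}
```

    The resulting map drives splitting toward the detector and roulette away from it; graph algorithms let the distance follow the geometry rather than cut through shields.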

  8. Continuous energy adjoint Monte Carlo for coupled neutron-photon transport

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J.E. [Delft Univ. of Technology (Netherlands). Interfaculty Reactor Inst.

    2001-07-01

    Although the theory for adjoint Monte Carlo calculations with continuous energy treatment for neutrons as well as for photons is known, coupled neutron-photon transport problems present fundamental difficulties because of the discrete energies of the photons produced by neutron reactions. This problem was solved by forcing the energy of the adjoint photon to the required discrete value by an adjoint Compton scattering reaction or an adjoint pair production reaction. A mathematical derivation shows the exact procedures to follow for the generation of an adjoint neutron and its statistical weight. A numerical example demonstrates that correct detector responses are obtained compared to a standard forward Monte Carlo calculation. (orig.)

  9. Coupling photon Monte Carlo simulation and CAD software. Application to X-ray nondestructive evaluation

    International Nuclear Information System (INIS)

    Tabary, J.; Gliere, A.

    2001-01-01

    A Monte Carlo radiation transport simulation program, EGS Nova, and a computer-aided design software package, BRL-CAD, have been coupled within the framework of Sindbad, a nondestructive evaluation (NDE) simulation system. In its current status, the program is very valuable in an NDE laboratory context, as it helps simulate the images due to the uncollided and scattered photon fluxes in a single NDE software environment, without having to switch to a Monte Carlo code parameter set. Numerical validations show good agreement with EGS4 computed and published data. As the program's major drawback is its execution time, computational efficiency improvements are foreseen. (orig.)

  10. Reactor physics simulations with coupled Monte Carlo calculation and computational fluid dynamics

    International Nuclear Information System (INIS)

    Seker, V.; Thomas, J.W.; Downar, T.J.

    2007-01-01

    A computational code system based on coupling the Monte Carlo code MCNP5 and the computational fluid dynamics (CFD) code STAR-CD was developed as an audit tool for lower-order nuclear reactor calculations. This paper presents the methodology of the developed computer program 'McSTAR'. McSTAR is written in the FORTRAN90 programming language and couples MCNP5 with the commercial CFD code STAR-CD. MCNP uses a continuous energy cross section library produced by the NJOY code system from the raw ENDF/B data. A major part of the work was to develop and implement methods to update the cross section library with the temperature distribution calculated by STAR-CD for every region. Three different methods were investigated and implemented in McSTAR. The user subroutines in STAR-CD are modified to read the power density data and assign them to the appropriate variables in the program, and to write an output data file containing the temperature, density and indexing information to perform the mapping between MCNP and STAR-CD cells. Preliminary testing of the code was performed using a 3x3 PWR pin-cell problem. The preliminary results are compared with those obtained from a STAR-CD coupled calculation with the deterministic transport code DeCART. Good agreement in the k-eff and the power profile was observed. Increased computational capabilities and improvements in computational methods have accelerated interest in high-fidelity modeling of nuclear reactor cores during the last several years. High fidelity has been achieved by utilizing full core neutron transport solutions for the neutronics calculation and computational fluid dynamics solutions for the thermal-hydraulics calculation. Previous researchers have reported the coupling of 3D deterministic neutron transport methods to CFD and their application to practical reactor analysis problems. One of the principal motivations of the work here was to utilize Monte Carlo methods to validate the coupled deterministic neutron transport

  11. Efficient shortcut techniques in evanescently coupled waveguides

    Science.gov (United States)

    Paul, Koushik; Sarma, Amarendra K.

    2016-10-01

    The Shortcut to Adiabatic Passage (SHAPE) technique, in the context of coherent control of atomic systems, has gained considerable attention in the last few years, primarily because of its ability to transfer population among quantum states arbitrarily fast compared to adiabatic processes. Two methods in this regard have been explored rigorously, namely transitionless quantum driving and the Lewis-Riesenfeld invariant approach. We have applied these two methods to realize SHAPE in an adiabatic waveguide coupler. Waveguide couplers are integral components of photonic circuits, primarily used as switching devices. Our study shows that with appropriate engineering of the coupling coefficient and propagation constants of the coupler it is possible to achieve efficient and complete power switching. We also observed that the coupler length can be reduced significantly without affecting the coupling efficiency of the system.
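
    For context, the ordinary (non-shortcut) coupled-mode behaviour of a two-waveguide directional coupler can be integrated numerically; with constant coupling kappa and matched propagation constants, complete power transfer occurs over a length pi/(2 kappa). This sketch is illustrative and is not the SHAPE protocol of the paper.

```python
import math

# Coupled-mode equations for two evanescently coupled waveguides with
# matched propagation constants: da/dz = -i*kappa*b, db/dz = -i*kappa*a.
# With constant kappa, power transfers fully over length pi/(2*kappa).
kappa = 0.5
length = math.pi / (2 * kappa)
steps = 100_000
dz = length / steps

a, b = complex(1.0), complex(0.0)       # mode amplitudes, light in guide 1
for _ in range(steps):
    da = -1j * kappa * b
    db = -1j * kappa * a
    a, b = a + da * dz, b + db * dz     # explicit Euler step

p2 = abs(b) ** 2                        # power in the second waveguide
```

    Shortcut protocols reshape kappa(z) and the detuning so the same complete transfer is reached over a much shorter length than the adiabatic design requires.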

  12. Monte Carlo simulation of tomography techniques using the platform Gate

    International Nuclear Information System (INIS)

    Barbouchi, Asma

    2007-01-01

    Simulations play a key role in functional imaging, with applications ranging from scanner design to scatter correction and protocol optimisation. GATE (Geant4 Application for Tomographic Emission) is a platform for Monte Carlo simulation. It is based on Geant4 to generate and track particles and to model geometry and physics processes. Explicit modelling of time includes detector motion, time of flight, and tracer kinetics. Interfaces to voxellised models and image reconstruction packages improve the integration of GATE in the global modelling cycle. In this work, Monte Carlo simulations are used to understand and optimise the gamma camera's performance. We study the effect of the source-to-collimator distance, the diameter of the holes and the thickness of the collimator on the spatial resolution, energy resolution and efficiency of the gamma camera. We also study the reduction of simulation time and implement a model of the left ventricle in GATE. (Author). 7 refs

  13. Microwave transport in EBT distribution manifolds using Monte Carlo ray-tracing techniques

    International Nuclear Information System (INIS)

    Lillie, R.A.; White, T.L.; Gabriel, T.A.; Alsmiller, R.G. Jr.

    1983-01-01

    Ray-tracing Monte Carlo calculations have been carried out using an existing Monte Carlo radiation transport code to obtain estimates of the microwave power exiting the torus coupling links in EBT microwave manifolds. The microwave power loss and polarization at surface reflections were accounted for by treating the microwaves as plane waves reflecting off plane surfaces. Agreement on the order of 10% was obtained between the measured and calculated output power distribution for an existing EBT-S toroidal manifold. A cost-effective iterative procedure utilizing the Monte Carlo history data was implemented to predict design changes which could produce increased manifold efficiency and improved output power uniformity

  14. Optimized iteration in coupled Monte-Carlo - Thermal-hydraulics calculations

    International Nuclear Information System (INIS)

    Hoogenboom, J.E.; Dufek, J.

    2013-01-01

    This paper describes an optimised iteration scheme for the number of neutron histories and the relaxation factor in successive iterations of coupled Monte Carlo and thermal-hydraulic reactor calculations based on the stochastic iteration method. The scheme results in an increasing number of neutron histories for the Monte Carlo calculation in successive iteration steps and a decreasing relaxation factor for the spatial power distribution to be used as input to the thermal-hydraulics calculation. The theoretical basis is discussed in detail and practical consequences of the scheme are shown, among which is a nearly linear increase per iteration in the number of cycles of the Monte Carlo calculation. The scheme is demonstrated for a full PWR-type fuel assembly. Results are shown for the axial power distribution during several iteration steps. A few alternative iteration methods are also tested and it is concluded that the presented iteration method is near optimal. (authors)
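The iteration scheme described above can be sketched as follows. The linear history-growth rule, the function names, and the flat initial guess are illustrative assumptions, not the paper's exact prescription:

```python
def coupled_iteration(mc_power, th_update, n_iter=10, s0=1000):
    """Stochastic-iteration sketch for coupled MC / thermal-hydraulics.

    mc_power(th_state, histories) -> noisy power-distribution estimate
    th_update(power) -> new thermal-hydraulic state (None = initial guess)
    The history count s_n grows each step while the relaxation factor
    alpha_n = s_n / sum(s_1..s_n) shrinks, damping early noisy estimates.
    """
    th = th_update(None)                  # initial TH state from a flat guess
    power, total = None, 0
    for n in range(1, n_iter + 1):
        s_n = s0 * n                      # growing number of neutron histories
        total += s_n
        alpha = s_n / total               # decreasing relaxation factor
        estimate = mc_power(th, s_n)
        if power is None:
            power = estimate
        else:
            power = [(1 - alpha) * p + alpha * e
                     for p, e in zip(power, estimate)]
        th = th_update(power)             # relaxed power fed back to TH
    return power
```

With s_n growing linearly, alpha_n = 2/(n+1), so each new Monte Carlo result is weighted by roughly the inverse of the iteration count.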

  15. A midway forward-adjoint coupling method for neutron and photon Monte Carlo transport

    International Nuclear Information System (INIS)

    Serov, I.V.; John, T.M.; Hoogenboom, J.E.

    1999-01-01

    The midway Monte Carlo method for calculating detector responses combines a forward and an adjoint Monte Carlo calculation. In both calculations, particle scores are registered at a surface to be chosen by the user somewhere between the source and detector domains. The theory of the midway response determination is developed within the framework of transport theory for external sources and for criticality theory. The theory is also developed for photons, which are generated at inelastic scattering or capture of neutrons. In either the forward or the adjoint calculation a so-called black absorber technique can be applied; i.e., particles need not be followed after passing the midway surface. The midway Monte Carlo method is implemented in the general-purpose MCNP Monte Carlo code. The midway Monte Carlo method is demonstrated to be very efficient in problems with deep penetration, small source and detector domains, and complicated streaming paths. All the problems considered pose difficult variance reduction challenges. Calculations were performed using existing variance reduction methods of normal MCNP runs and using the midway method. The performed comparative analyses show that the midway method appears to be much more efficient than the standard techniques in an overwhelming majority of cases and can be recommended for use in many difficult variance reduction problems of neutral particle transport

  16. DOMINO, Coupling of Discrete Ordinate Program DOT with Monte-Carlo Program MORSE

    International Nuclear Information System (INIS)

    1974-01-01

    1 - Nature of physical problem solved: DOMINO is a general purpose code for coupling discrete ordinates and Monte Carlo radiation transport calculations. 2 - Method of solution: DOMINO transforms the angular flux as a function of energy group, mesh interval and discrete angle into current and subsequently into normalized probability distributions. 3 - Restrictions on the complexity of the problem: The discrete ordinates calculation is limited to an r-z geometry
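The flux-to-probability transformation DOMINO performs can be illustrated in miniature. The function name and the two-direction example are assumptions; the real code handles the full energy-group, mesh-interval, and discrete-angle structure:

```python
import numpy as np

def flux_to_boundary_pdf(angular_flux, mu, weights):
    """Convert discrete-ordinates angular flux at a surface into a
    normalized probability distribution over outgoing directions, in the
    spirit of a DOMINO-style DOT -> MORSE hand-off (sketch only).

    angular_flux[k]: flux in outgoing discrete direction k (mu > 0)
    mu[k]: direction cosine w.r.t. the surface normal
    weights[k]: quadrature weight of direction k
    """
    current = angular_flux * mu * weights    # partial current per direction
    return current / current.sum()           # normalized sampling probabilities

pdf = flux_to_boundary_pdf(np.array([1.0, 2.0]),
                           np.array([0.5, 1.0]),
                           np.array([1.0, 1.0]))
```

The Monte Carlo stage then samples source particles for the next region from this distribution.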

  17. Two improved Monte Carlo photon cross section techniques

    International Nuclear Information System (INIS)

    Scudiere, M.B.

    1978-01-01

    Truncated series of Legendre coefficients and polynomials are often used in multigroup transport computer codes to describe group-to-group angular density transfer functions. Imposition of group structure on the energy continuum may create discontinuities in the first derivative of these functions. Because of the nature of these discontinuities, efficient and accurate full-range polynomial expansions are not practically obtainable. Two separate and distinct methods for Monte Carlo photon transport are presented which eliminate essentially all major disadvantages of truncated expansions. In the first method, partial-range expansions are applied between the discontinuities. Here accurate low-order representations are obtained, which yield modest savings in computer charges. The second method employs unique properties of the functions to replace them with a few smooth well-behaved representations. This method brings about considerable savings in computer memory requirements. In addition, the accuracy of the first method is maintained, while execution times are reduced even further
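The first method, fitting low-order expansions between the known discontinuities, might look like this in outline. The helper name and the use of NumPy's Legendre fitting are illustrative assumptions:

```python
import numpy as np

def partial_range_fit(x, y, breakpoints, degree=3):
    """Fit low-order Legendre expansions on sub-ranges between the
    derivative discontinuities instead of one high-order full-range fit
    (a sketch of the paper's first method; breakpoints are assumed known).
    Returns one NumPy Legendre series per sub-range.
    """
    fits = []
    for a, b in zip(breakpoints[:-1], breakpoints[1:]):
        mask = (x >= a) & (x <= b)
        fits.append(np.polynomial.legendre.Legendre.fit(
            x[mask], y[mask], deg=degree, domain=[a, b]))
    return fits
```

A function such as |x|, which defeats a low-order full-range expansion because of its kink, is captured exactly by two linear pieces.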

  18. STRONG CORRELATIONS AND ELECTRON-PHONON COUPLING IN HIGH-TEMPERATURE SUPERCONDUCTORS - A QUANTUM MONTE-CARLO STUDY

    NARCIS (Netherlands)

    MORGENSTERN; FRICK, M; VONDERLINDEN, W

    We present quantum simulation studies for a system of strongly correlated fermions coupled to local anharmonic phonons. The Monte Carlo calculations are based on a generalized version of the Projector Quantum Monte Carlo Method allowing a simultaneous treatment of fermions and dynamical phonons. The

  19. Coupling of system thermal–hydraulics and Monte-Carlo code: Convergence criteria and quantification of correlation between statistical uncertainty and coupled error

    International Nuclear Information System (INIS)

    Wu, Xu; Kozlowski, Tomasz

    2015-01-01

    Highlights: • Coupling of Monte Carlo code Serpent and thermal–hydraulics code RELAP5. • A convergence criterion is developed based on the statistical uncertainty of power. • Correlation between MC statistical uncertainty and coupled error is quantified. • Both UO2 and MOX single assembly models are used in the coupled simulation. • Validation of coupling results with a multi-group transport code DeCART. - Abstract: A coupled multi-physics approach plays an important role in improving computational accuracy. Compared with deterministic neutronics codes, Monte Carlo codes have the advantage of a higher resolution level. In the present paper, a three-dimensional continuous-energy Monte Carlo reactor physics burnup calculation code, Serpent, is coupled with a thermal–hydraulics safety analysis code, RELAP5. The coupled Serpent/RELAP5 code capability is demonstrated by the improved axial power distribution of UO2 and MOX single assembly models, based on the OECD-NEA/NRC PWR MOX/UO2 Core Transient Benchmark. Comparisons of calculation results using the coupled code with those from deterministic methods, specifically the heterogeneous multi-group transport code DeCART, show that the coupling produces more precise results. A new convergence criterion for the coupled simulation is developed based on the statistical uncertainty in power distribution in the Monte Carlo code, rather than the ad hoc criteria used in previous research. The new convergence criterion is shown to be more rigorous and equally convenient to use, but requires a few more coupling steps to converge. Finally, the influence of Monte Carlo statistical uncertainty on the coupled error of power and thermal–hydraulics parameters is quantified. The results are presented such that they can be used to find the statistical uncertainty to use in Monte Carlo in order to achieve a desired precision in coupled simulation

  20. Multi-Scale Coupling Between Monte Carlo Molecular Simulation and Darcy-Scale Flow in Porous Media

    KAUST Repository

    Saad, Ahmed Mohamed; Kadoura, Ahmad Salim; Sun, Shuyu

    2016-01-01

    In this work, an efficient coupling between Monte Carlo (MC) molecular simulation and Darcy-scale flow in porous media is presented. The cell-centered finite difference method with a non-uniform rectangular mesh was used to discretize the simulation

  1. Monte Carlo simulation techniques for predicting annual power production

    International Nuclear Information System (INIS)

    Cross, J.P.; Bulandr, P.J.

    1991-01-01

    As the owner and operator of a number of small to mid-sized hydroelectric sites, STS HydroPower has been faced with the need to accurately predict anticipated hydroelectric revenues over a period of years. The typical approach to this problem has been to look at each site from a mathematical deterministic perspective and evaluate the annual production from historic streamflows. Average annual production is simply taken to be the area under the flow duration curve defined by the operating and design characteristics of the selected turbines. Minimum annual production is taken to be a historic dry year scenario and maximum production is viewed as power generated under the most ideal of conditions. Such an approach creates two problems. First, in viewing the characteristics of a single site, it does not take into account the probability of such an event occurring. Second, in viewing all sites in a single organization's portfolio together, it does not reflect the varying flow conditions at the different sites. This paper attempts to address the first of these two concerns, that being the creation of a simulation model utilizing the Monte Carlo method at a single site. The result of the analysis is a picture of the production at the site that is both a better representation of anticipated conditions and defined probabilistically
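A minimal sketch of such a single-site Monte Carlo production model is shown below. The turbine parameters, the power-formula constants, and the sampling scheme are illustrative assumptions, not STS HydroPower's actual model:

```python
import random

def annual_energy(flows, q_min=1.0, q_max=10.0, head=20.0, eff=0.85):
    """MWh generated over a year of mean daily flows (m^3/s) by a turbine
    with cut-in flow q_min and capacity flow q_max. Head, efficiency and
    all numbers are illustrative, not the paper's data."""
    total = 0.0
    for q in flows:
        if q < q_min:
            continue                          # below cut-in: no generation
        p_watts = 1000.0 * 9.81 * min(q, q_max) * head * eff  # rho*g*Q*H*eta
        total += p_watts * 24.0 / 1e6         # one day of power -> MWh
    return total

def simulate_production(sample_year, n_trials=1000, seed=1):
    """Monte Carlo picture of annual production: sample_year(rng) draws a
    synthetic year of flows (e.g. bootstrapped from the historic record);
    the sorted results approximate the production distribution."""
    rng = random.Random(seed)
    results = sorted(annual_energy(sample_year(rng)) for _ in range(n_trials))
    return results[len(results) // 2], results    # median, full distribution
```

Rather than a single dry-year/average/ideal triple, the output is a full probability distribution of annual energy.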

  2. PRELIMINARY COUPLING OF THE MONTE CARLO CODE OPENMC AND THE MULTIPHYSICS OBJECT-ORIENTED SIMULATION ENVIRONMENT (MOOSE) FOR ANALYZING DOPPLER FEEDBACK IN MONTE CARLO SIMULATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Matthew Ellis; Derek Gaston; Benoit Forget; Kord Smith

    2011-07-01

    In recent years the use of Monte Carlo methods for modeling reactors has become feasible due to the increasing availability of massively parallel computer systems. One of the primary challenges yet to be fully resolved, however, is the efficient and accurate inclusion of multiphysics feedback in Monte Carlo simulations. The research in this paper presents a preliminary coupling of the open source Monte Carlo code OpenMC with the open source Multiphysics Object-Oriented Simulation Environment (MOOSE). The coupling of OpenMC and MOOSE will be used to investigate efficient and accurate numerical methods needed to include multiphysics feedback in Monte Carlo codes. An investigation into the sensitivity of Doppler feedback to fuel temperature approximations using a two-dimensional 17x17 PWR fuel assembly is presented in this paper. The results show a functioning multiphysics coupling between OpenMC and MOOSE. The coupling utilizes Functional Expansion Tallies to accurately and efficiently transfer pin power distributions tallied in OpenMC to unstructured finite element meshes used in MOOSE. The two-dimensional PWR fuel assembly case also demonstrates that for a simplified model the pin-by-pin Doppler feedback can be adequately replicated by scaling a representative pin based on pin relative powers.

  3. Methodology of Continuous-Energy Adjoint Monte Carlo for Neutron, Photon, and Coupled Neutron-Photon Transport

    International Nuclear Information System (INIS)

    Hoogenboom, J. Eduard

    2003-01-01

    Adjoint Monte Carlo may be a useful alternative to regular Monte Carlo calculations in cases where a small detector inhibits an efficient Monte Carlo calculation as only very few particle histories will cross the detector. However, in general purpose Monte Carlo codes, normally only the multigroup form of adjoint Monte Carlo is implemented. In this article the general methodology for continuous-energy adjoint Monte Carlo neutron transport is reviewed and extended for photon and coupled neutron-photon transport. In the latter cases the discrete photons generated by annihilation or by neutron capture or inelastic scattering prevent a direct application of the general methodology. Two successive reaction events must be combined in the selection process to accommodate the adjoint analog of a reaction resulting in a photon with a discrete energy. Numerical examples illustrate the application of the theory for some simplified problems

  4. Novel hybrid Monte Carlo/deterministic technique for shutdown dose rate analyses of fusion energy systems

    International Nuclear Information System (INIS)

    Ibrahim, Ahmad M.; Peplow, Douglas E.; Peterson, Joshua L.; Grove, Robert E.

    2014-01-01

    Highlights: • Develop the novel Multi-Step CADIS (MS-CADIS) hybrid Monte Carlo/deterministic method for multi-step shielding analyses. • Accurately calculate shutdown dose rates using full-scale Monte Carlo models of fusion energy systems. • Demonstrate the dramatic efficiency improvement of the MS-CADIS method for the rigorous two-step calculations of the shutdown dose rate in fusion reactors. - Abstract: The rigorous 2-step (R2S) computational system uses three-dimensional Monte Carlo transport simulations to calculate the shutdown dose rate (SDDR) in fusion reactors. Accurate full-scale R2S calculations are impractical in fusion reactors because they require calculating space- and energy-dependent neutron fluxes everywhere inside the reactor. The use of global Monte Carlo variance reduction techniques was suggested for accelerating the R2S neutron transport calculation. However, the prohibitive computational costs of these approaches, which increase with the problem size and amount of shielding materials, inhibit their ability to accurately predict the SDDR in fusion energy systems using full-scale modeling of an entire fusion plant. This paper describes a novel hybrid Monte Carlo/deterministic methodology that uses the Consistent Adjoint Driven Importance Sampling (CADIS) method but focuses on multi-step shielding calculations. The Multi-Step CADIS (MS-CADIS) methodology speeds up the R2S neutron Monte Carlo calculation using an importance function that represents the neutron importance to the final SDDR. Using a simplified example, preliminary results showed that the use of MS-CADIS enhanced the efficiency of the neutron Monte Carlo simulation of an SDDR calculation by a factor of 550 compared to standard global variance reduction techniques, and that the efficiency enhancement compared to analog Monte Carlo is higher than a factor of 10,000

  5. Monitoring and preventing numerical oscillations in 3D simulations with coupled Monte Carlo codes

    International Nuclear Information System (INIS)

    Kotlyar, D.; Shwageraus, E.

    2014-01-01

    Highlights: • Conventional coupling methods used in all MC codes can be numerically unstable. • Application of new stochastic implicit (SIMP) methods may be required. • The implicit methods require additional computational effort. • A monitoring diagnostic of the numerical stability was developed here. • The procedure allows creation of a hybrid explicit–implicit coupling scheme. - Abstract: Previous studies have reported that different schemes for coupling Monte Carlo (MC) neutron transport with burnup and thermal hydraulic feedbacks may potentially be numerically unstable. This issue can be resolved by application of implicit methods, such as the stochastic implicit mid-point (SIMP) methods. In order to assure numerical stability, the new methods do require additional computational effort. The instability issue, however, is problem-dependent and does not necessarily occur in all cases. Therefore, blind application of the unconditionally stable coupling schemes, and thus incurring extra computational costs, may not always be necessary. In this paper, we attempt to develop an intelligent diagnostic mechanism, which will monitor numerical stability of the calculations and, if necessary, switch from a simple and fast coupling scheme to a more computationally expensive but unconditionally stable one. To illustrate this diagnostic mechanism, we performed a coupled burnup and TH analysis of a single BWR fuel assembly. The results indicate that the developed algorithm can be easily implemented in any MC-based code for monitoring of numerical instabilities. The proposed monitoring method has negligible impact on the calculation time even for realistic 3D multi-region full core calculations
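A diagnostic of the kind described, watching for sign-alternating power changes between iterations, might be sketched as follows. The majority-vote threshold and all names are assumptions; the paper's actual criterion may differ:

```python
def oscillation_detected(power_history, tol=0.0):
    """Flag spatial power oscillations between coupled iterations (sketch).

    A region is taken to oscillate when its iteration-to-iteration change
    flips sign; if most regions flip, the calculation should switch from
    the fast explicit scheme to an unconditionally stable (SIMP-like) one.
    power_history: list of per-region power lists, one entry per iteration.
    """
    if len(power_history) < 3:
        return False                     # need two successive differences
    flips = 0
    regions = len(power_history[0])
    for r in range(regions):
        d1 = power_history[-1][r] - power_history[-2][r]
        d2 = power_history[-2][r] - power_history[-3][r]
        if d1 * d2 < -tol:               # sign change -> oscillatory
            flips += 1
    return flips > regions // 2          # majority of regions oscillating
```

Because the check only inspects already-computed power vectors, its cost is negligible next to the transport solution, consistent with the paper's claim.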

  6. Investigation of pattern recognition techniques for the identification of splitting surfaces in Monte Carlo particle transport calculations

    International Nuclear Information System (INIS)

    Macdonald, J.L.

    1975-08-01

    Statistical and deterministic pattern recognition systems are designed to classify the state space of a Monte Carlo transport problem into importance regions. The surfaces separating the regions can be used for particle splitting and Russian roulette in state space in order to reduce the variance of the Monte Carlo tally. Computer experiments are performed to evaluate the performance of the technique using one and two dimensional Monte Carlo problems. Additional experiments are performed to determine the sensitivity of the technique to various pattern recognition and Monte Carlo problem dependent parameters. A system for applying the technique to a general purpose Monte Carlo code is described. An estimate of the computer time required by the technique is made in order to determine its effectiveness as a variance reduction device. It is recommended that the technique be further investigated in a general purpose Monte Carlo code. (auth)

  7. Particle Communication and Domain Neighbor Coupling: Scalable Domain Decomposed Algorithms for Monte Carlo Particle Transport

    Energy Technology Data Exchange (ETDEWEB)

    O'Brien, M. J.; Brantley, P. S.

    2015-01-20

    In order to run Monte Carlo particle transport calculations on new supercomputers with hundreds of thousands or millions of processors, care must be taken to implement scalable algorithms. This means that the algorithms must continue to perform well as the processor count increases. In this paper, we examine the scalability of: (1) globally resolving the particle locations on the correct processor, (2) deciding that particle streaming communication has finished, and (3) efficiently coupling neighbor domains together with different replication levels. We have run domain decomposed Monte Carlo particle transport on up to 2^21 = 2,097,152 MPI processes on the IBM BG/Q Sequoia supercomputer and observed scalable results that agree with our theoretical predictions. These calculations were carefully constructed to have the same amount of work on every processor, i.e. the calculation is already load balanced. We also examine load imbalanced calculations where each domain’s replication level is proportional to its particle workload. In this case we show how to efficiently couple together adjacent domains to maintain within-workgroup load balance and minimize memory usage.

  8. A Monte Carlo technique for signal level detection in implanted intracranial pressure monitoring.

    Science.gov (United States)

    Avent, R K; Charlton, J D; Nagle, H T; Johnson, R N

    1987-01-01

    Statistical monitoring techniques like CUSUM, Trigg's tracking signal and EMP filtering have a major advantage over more recent techniques, such as Kalman filtering, because of their inherent simplicity. In many biomedical applications, such as electronic implantable devices, these simpler techniques have greater utility because of the reduced requirements on power, logic complexity and sampling speed. The determination of signal means using some of the earlier techniques is reviewed in this paper, and a new Monte Carlo based method with greater capability to sparsely sample a waveform and obtain an accurate mean value is presented. This technique may find widespread use as a trend detection method when reduced power consumption is a requirement.
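The core idea, estimating a signal's mean level from a sparse random subsample rather than digitizing every point, can be sketched as follows. This illustrates the principle only, not the paper's exact estimator:

```python
import random

def sparse_mean(signal, n_samples, seed=0):
    """Estimate a waveform's mean level from a sparse random subsample,
    as a power-constrained implant might: sample indices uniformly at
    random instead of reading the whole record (illustrative sketch)."""
    rng = random.Random(seed)
    picks = [signal[rng.randrange(len(signal))] for _ in range(n_samples)]
    return sum(picks) / n_samples
```

The standard error shrinks as 1/sqrt(n_samples), so a modest number of random reads can track a slowly drifting pressure baseline.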

  9. Application of artificial intelligence techniques to the acceleration of Monte Carlo transport calculations

    International Nuclear Information System (INIS)

    Macdonald, J.L.; Cashwell, E.D.

    1978-09-01

    The techniques of learning theory and pattern recognition are used to learn splitting surface locations for the Monte Carlo neutron transport code MCN. A study is performed to determine default values for several pattern recognition and learning parameters. The modified MCN code is used to reduce computer cost for several nontrivial example problems

  10. Markov chain Monte Carlo techniques applied to parton distribution functions determination: Proof of concept

    Science.gov (United States)

    Gbedo, Yémalin Gabin; Mangin-Brinet, Mariane

    2017-07-01

    We present a new procedure to determine parton distribution functions (PDFs), based on Markov chain Monte Carlo (MCMC) methods. The aim of this paper is to show that we can replace the standard χ2 minimization by procedures grounded on statistical methods, and on Bayesian inference in particular, thus offering additional insight into the rich field of PDFs determination. After a basic introduction to these techniques, we introduce the algorithm we have chosen to implement—namely Hybrid (or Hamiltonian) Monte Carlo. This algorithm, initially developed for Lattice QCD, turns out to be very interesting when applied to PDFs determination by global analyses; we show that it allows us to circumvent the difficulties due to the high dimensionality of the problem, in particular concerning the acceptance. A first feasibility study is performed and presented, which indicates that Markov chain Monte Carlo can successfully be applied to the extraction of PDFs and of their uncertainties.
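A one-parameter sketch of the Hybrid/Hamiltonian Monte Carlo update (leapfrog integration plus a Metropolis accept/reject on the energy change) is given below. The step size, trajectory length, and names are illustrative choices; a real PDF fit would work in many dimensions with the χ² playing the role of the negative log-density:

```python
import math
import random

def hmc_step(theta, logp_grad, logp, eps=0.1, n_leap=20, rng=random):
    """One Hybrid/Hamiltonian Monte Carlo update for a scalar parameter.

    logp: log of the target density; logp_grad: its derivative.
    A fresh Gaussian momentum is drawn, the pair is evolved with the
    leapfrog integrator, and the proposal is accepted or rejected on the
    change in total energy H = -logp + p^2/2.
    """
    p = rng.gauss(0.0, 1.0)                      # momentum refresh
    theta_new, p_new = theta, p
    p_new += 0.5 * eps * logp_grad(theta_new)    # leapfrog half-step
    for _ in range(n_leap - 1):
        theta_new += eps * p_new
        p_new += eps * logp_grad(theta_new)
    theta_new += eps * p_new
    p_new += 0.5 * eps * logp_grad(theta_new)    # closing half-step
    h_old = -logp(theta) + 0.5 * p * p
    h_new = -logp(theta_new) + 0.5 * p_new * p_new
    if math.log(rng.random()) < h_old - h_new:   # Metropolis test
        return theta_new
    return theta
```

Because the leapfrog integrator nearly conserves energy, acceptance stays high even for distant proposals, which is what makes the method attractive in high-dimensional parameter spaces.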

  11. Propagation of nuclear data uncertainties in fuel cycle calculations using Monte-Carlo technique

    International Nuclear Information System (INIS)

    Diez, C.J.; Cabellos, O.; Martinez, J.S.

    2011-01-01

    Nowadays, the knowledge of uncertainty propagation in depletion calculations is a critical issue because of the safety and economical performance of fuel cycles. Response magnitudes such as decay heat, radiotoxicity and isotopic inventory and their uncertainties should be known to handle spent fuel in present fuel cycles (e.g. the high burnup fuel programme) and furthermore in new fuel cycle designs (e.g. fast breeder reactors and ADS). To deal with this task, there are different error propagation techniques, deterministic (adjoint/forward sensitivity analysis) and stochastic (Monte-Carlo technique), to evaluate the error in response magnitudes due to nuclear data uncertainties. In our previous works, cross-section uncertainties were propagated using a Monte-Carlo technique to calculate the uncertainty of response magnitudes such as decay heat and neutron emission. Also, the propagation of decay data, fission yield and cross-section uncertainties was performed, but only isotopic composition was the response magnitude calculated. Following the previous technique, the nuclear data uncertainties are taken into account and propagated to the response magnitudes decay heat and radiotoxicity. These uncertainties are assessed during cooling time. To evaluate this Monte-Carlo technique, two different applications are performed. First, a fission pulse decay heat calculation is carried out to check the Monte-Carlo technique, using decay data and fission yield uncertainties. Then the results are compared with experimental data and with a reference calculation (JEFF Report 20). Second, we assess the impact of basic nuclear data (activation cross-section, decay data and fission yields) uncertainties on relevant fuel cycle parameters (decay heat and radiotoxicity) for a conceptual design of a modular European Facility for Industrial Transmutation (EFIT) fuel cycle.
After identifying which time steps have higher uncertainties, an assessment of which uncertainties have more relevance is performed
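The propagation technique can be sketched generically as follows. Independent relative uncertainties and the function names are simplifying assumptions; real nuclear-data covariances are correlated and often non-Gaussian:

```python
import random

def propagate(nominal, rel_unc, response, n_samples=500, seed=0):
    """Monte Carlo propagation of nuclear-data uncertainties (sketch):
    perturb each input datum with an independent Gaussian relative
    uncertainty and collect the spread of the response magnitude
    (e.g. decay heat at a given cooling time)."""
    rng = random.Random(seed)
    values = []
    for _ in range(n_samples):
        perturbed = {k: v * (1.0 + rng.gauss(0.0, rel_unc[k]))
                     for k, v in nominal.items()}
        values.append(response(perturbed))
    mean = sum(values) / n_samples
    var = sum((v - mean) ** 2 for v in values) / (n_samples - 1)
    return mean, var ** 0.5
```

Repeating this at each cooling-time step yields the time-dependent uncertainty bands the abstract describes.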

  12. Monte Carlo climate change forecasts with a global coupled ocean-atmosphere model

    International Nuclear Information System (INIS)

    Cubasch, U.; Santer, B.D.; Hegerl, G.; Hoeck, H.; Maier-Reimer, E.; Mikolajwicz, U.; Stoessel, A.; Voss, R.

    1992-01-01

    The Monte Carlo approach, which has increasingly been used during the last decade in the field of extended range weather forecasting, has been applied for climate change experiments. Four integrations with a global coupled ocean-atmosphere model have been started from different initial conditions, but with the same greenhouse gas forcing according to the IPCC scenario A. All experiments have been run for a period of 50 years. The results indicate that the time evolution of the global mean warming depends strongly on the initial state of the climate system. It can vary between 6 and 31 years. The Monte Carlo approach delivers information about both the mean response and the statistical significance of the response. While the individual members of the ensemble show a considerable variation in the climate change pattern of temperature after 50 years, the ensemble mean climate change pattern closely resembles the pattern obtained in a 100 year integration and is, at least over most of the land areas, statistically significant. The ensemble averaged sea-level change due to thermal expansion is significant in the global mean and locally over wide regions of the Pacific. The hydrological cycle is also significantly enhanced in the global mean, but locally the changes in precipitation and soil moisture are masked by the variability of the experiments. (orig.)
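The ensemble-statistics step, forming a mean change pattern and a per-point significance measure, can be sketched as follows. A simple t-like signal-to-noise ratio is assumed here; the paper's actual significance test may differ:

```python
import math

def ensemble_signal(runs):
    """Ensemble mean change and a t-like signal-to-noise ratio at each
    grid point, the kind of screening a Monte Carlo climate ensemble
    permits (sketch). runs: list of equal-length change patterns, one
    per ensemble member started from a different initial state."""
    n = len(runs)
    out = []
    for point in zip(*runs):                    # iterate over grid points
        mean = sum(point) / n
        var = sum((x - mean) ** 2 for x in point) / (n - 1)
        sem = math.sqrt(var / n)                # standard error of the mean
        out.append((mean, mean / sem if sem > 0 else float("inf")))
    return out
```

Points where the ratio is large correspond to regions where the forced signal stands out from inter-member variability.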

  13. A perturbation-based substep method for coupled depletion Monte-Carlo codes

    International Nuclear Information System (INIS)

    Kotlyar, Dan; Aufiero, Manuele; Shwageraus, Eugene; Fratoni, Massimiliano

    2017-01-01

    Highlights: • The GPT method allows calculation of the sensitivity coefficients to any perturbation. • The full Jacobian of sensitivities, cross sections (XS) to concentrations, may be obtained. • The time-dependent XS is obtained by combining the GPT and substep methods. • The proposed GPT substep method considerably reduces the time discretization error. • No additional MC transport solutions are required within the time step. - Abstract: Coupled Monte Carlo (MC) methods are becoming widely used in reactor physics analysis and design. Many research groups have therefore developed their own coupled MC depletion codes. Typically, in such coupled code systems, neutron fluxes and cross sections are provided to the depletion module by solving a static neutron transport problem. These fluxes and cross sections are representative only of a specific time-point. In reality, however, both quantities would change through the depletion time interval. Recently, a Generalized Perturbation Theory (GPT) equivalent method that relies on a collision history approach was implemented in the Serpent MC code. This method was used here to calculate the sensitivity of each nuclide and reaction cross section due to the change in concentration of every isotope in the system. The coupling method proposed in this study also uses the substep approach, which incorporates these sensitivity coefficients to account for temporal changes in cross sections. As a result, a notable improvement in time-dependent cross section behavior was obtained. The method was implemented in a wrapper script that couples Serpent with an external depletion solver. The performance of this method was compared with other existing methods. The results indicate that the proposed method requires substantially fewer MC transport solutions to achieve the same accuracy.

  14. Sub-step methodology for coupled Monte Carlo depletion and thermal hydraulic codes

    International Nuclear Information System (INIS)

    Kotlyar, D.; Shwageraus, E.

    2016-01-01

    Highlights: • Discretization of time in coupled MC codes determines the results’ accuracy. • The error is due to lack of information regarding the time-dependent reaction rates. • The proposed sub-step method considerably reduces the time discretization error. • No additional MC transport solutions are required within the time step. • The reaction rates are varied as functions of nuclide densities and TH conditions. - Abstract: The governing procedure in coupled Monte Carlo (MC) codes relies on discretization of the simulation time into time steps. Typically, the MC transport solution at discrete points will generate reaction rates, which in most codes are assumed to be constant within the time step. This assumption can trigger numerical instabilities or result in a loss of accuracy, which, in turn, would require reducing the time step size. This paper focuses on reducing the time discretization error without requiring additional MC transport solutions and hence with no major computational overhead. The sub-step method presented here accounts for the reaction rate variation due to the variation in nuclide densities and thermal hydraulic (TH) conditions. This is achieved by performing additional depletion and TH calculations within the analyzed time step. The method was implemented in the BGCore code and subsequently used to analyze a series of test cases. The results indicate that a computational speedup of up to a factor of 10 may be achieved over the existing coupling schemes.
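The sub-step idea can be illustrated for a single nuclide. The exponential update and the density-dependent rate function are illustrative stand-ins for the full depletion and TH recalculation performed within the step:

```python
import math

def deplete_substeps(n0, removal_rate_of_n, dt, n_sub=10):
    """Deplete one nuclide over a single MC time step using sub-steps
    (sketch): the effective removal rate is re-evaluated from the current
    density at every sub-step instead of being held constant at its
    beginning-of-step value, as the constant-rate assumption would do.

    removal_rate_of_n: effective removal rate as a function of density,
    standing in for the density/TH dependence described in the paper.
    """
    n = n0
    h = dt / n_sub
    for _ in range(n_sub):
        n *= math.exp(-removal_rate_of_n(n) * h)  # analytic decay per sub-step
    return n
```

With a density-dependent rate, refining `n_sub` changes the answer, which is exactly the time-discretization error the sub-step scheme removes without extra transport solutions.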

  15. Monte Carlo calculations of the optical coupling between bismuth germanate crystals and photomultiplier tubes

    International Nuclear Information System (INIS)

    Derenzo, S.E.; Riles, J.K.

    1981-10-01

    The high density and atomic number of bismuth germanate (Bi4Ge3O12 or BGO) make it a very useful detector for positron emission tomography. Modern tomograph designs use large numbers of small, closely-packed crystals for high spatial resolution and high sensitivity. However, the low light output, the high refractive index (n=2.15), and the need for accurate timing make it important to optimize the transfer of light to the photomultiplier tube (PMT). We describe the results of a Monte Carlo computer program developed to study the effect of crystal shape, reflector type, and the refractive index of the PMT window on coupling efficiency. The program simulates total internal, external, and Fresnel reflection as well as internal absorption and scattering by bubbles
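One ingredient such a program must sample is the Fresnel reflectance at the crystal boundary, sketched below for unpolarized light. The formula is standard optics, not code from the paper; the glass index used in the test is an assumed value for the PMT window:

```python
import math

def fresnel_reflectance(n1, n2, theta_i):
    """Unpolarized Fresnel reflectance at a dielectric boundary, the
    quantity a photon-transport program samples where the crystal
    (n1 = 2.15 for BGO) meets the PMT window (n2). Sketch only."""
    if theta_i == 0.0:                     # normal incidence
        return ((n1 - n2) / (n1 + n2)) ** 2
    sin_t = n1 / n2 * math.sin(theta_i)
    if sin_t >= 1.0:
        return 1.0                         # total internal reflection
    theta_t = math.asin(sin_t)
    rs = (math.sin(theta_i - theta_t) / math.sin(theta_i + theta_t)) ** 2
    rp = (math.tan(theta_i - theta_t) / math.tan(theta_i + theta_t)) ** 2
    return 0.5 * (rs + rp)                 # average of s and p polarizations
```

Because BGO's index is so high, the critical angle at a glass window is small, which is why window index and crystal shape matter so much for light collection.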

  16. Estimation of the impact of manufacturing tolerances on burn-up calculations using Monte Carlo techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bock, M.; Wagner, M. [Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH, Garching (Germany). Forschungszentrum

    2012-11-01

    In recent years, the availability of computing resources has increased enormously. There are two ways to take advantage of this increase in analyses in the field of the nuclear fuel cycle, such as burn-up calculations or criticality safety calculations. The first possible way is to improve the accuracy of the models that are analyzed. For burn-up calculations this means that the goal of modelling and calculating the burn-up of a full reactor core is coming within reach. The second way to utilize the resources is to run state-of-the-art programs with simplified models several times, but with varied input parameters. This second way opens the applicability of the assessment of uncertainties and sensitivities based on the Monte Carlo method for fields of research that rely heavily on either high CPU usage or high memory consumption. In the context of the nuclear fuel cycle, applications that belong to these types of demanding analyses are again burn-up and criticality safety calculations. The assessment of uncertainties in burn-up analyses can complement traditional analysis techniques such as best estimate or bounding case analyses and can support the safety analysis in future design decisions, e.g. by analyzing the uncertainty of the decay heat power of the nuclear inventory stored in the spent fuel pool of a nuclear power plant. This contribution concentrates on the uncertainty analysis in burn-up calculations of PWR fuel assemblies. The uncertainties in the results arise from the variation of the input parameters. In this case, the focus is on the one hand on the variation of manufacturing tolerances that are present in the different production stages of the fuel assemblies. On the other hand, uncertainties that describe the conditions during the reactor operation are taken into account. They also affect the results of burn-up calculations.
In order to perform uncertainty analyses in burn-up calculations, GRS has improved the capabilities of its general
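
The Monte Carlo uncertainty propagation described above can be sketched as follows. The response function, nominal values and tolerance widths below are purely illustrative stand-ins, not GRS data or a real burn-up solver:

```python
import random
import statistics

def response(enrichment_pct, pellet_radius_mm):
    # Stand-in for a burn-up/criticality solver call: a smooth
    # fictitious response, NOT a physical model.
    return 1.0 + 0.05 * (enrichment_pct - 4.0) - 0.02 * (pellet_radius_mm - 4.6)

random.seed(1)
samples = []
for _ in range(10_000):
    # Vary manufacturing tolerances around assumed nominal values
    e = random.gauss(4.0, 0.05)      # enrichment, wt% (tolerance assumed)
    r = random.gauss(4.6, 0.01)      # pellet radius, mm (tolerance assumed)
    samples.append(response(e, r))

mean = statistics.fmean(samples)
std = statistics.stdev(samples)
print(f"mean = {mean:.4f}, std = {std:.4f}")
```

The sample standard deviation is the Monte Carlo estimate of the output uncertainty induced by the input tolerances; a real analysis would replace `response` with the coupled code sequence.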

  17. Monte Carlo and discrete-ordinate simulations of irradiances in the coupled atmosphere-ocean system.

    Science.gov (United States)

    Gjerstad, Karl Idar; Stamnes, Jakob J; Hamre, Børge; Lotsberg, Jon K; Yan, Banghua; Stamnes, Knut

    2003-05-20

    We compare Monte Carlo (MC) and discrete-ordinate radiative-transfer (DISORT) simulations of irradiances in a one-dimensional coupled atmosphere-ocean (CAO) system consisting of horizontal plane-parallel layers. The two models have precisely the same physical basis, including coupling between the atmosphere and the ocean, and we use precisely the same atmospheric and oceanic input parameters for both codes. For a plane atmosphere-ocean interface we find agreement between irradiances obtained with the two codes to within 1%, both in the atmosphere and the ocean. Our tests cover case 1 water, scattering by density fluctuations both in the atmosphere and in the ocean, and scattering by particulate matter represented by a one-parameter Henyey-Greenstein (HG) scattering phase function. The CAO-MC code has an advantage over the CAO-DISORT code in that it can handle surface waves on the atmosphere-ocean interface, but the CAO-DISORT code is computationally much faster. Therefore we use CAO-MC simulations to study the influence of ocean surface waves and propose a way to correct the results of the CAO-DISORT code so as to obtain fast and accurate underwater irradiances in the presence of surface waves.
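
The Monte Carlo side of such a comparison can be illustrated with a deliberately minimal one-dimensional sketch: direct transmission through a purely absorbing plane-parallel layer, checked against the Beer-Lambert law. This is not the CAO-MC code, and the optical depth is arbitrary:

```python
import math
import random

def mc_transmission(tau, n_photons=200_000, rng=random.Random(7)):
    """Monte Carlo estimate of direct transmission through a purely
    absorbing plane-parallel layer of optical depth tau (normal incidence)."""
    transmitted = 0
    for _ in range(n_photons):
        # Sample the optical path to the next interaction
        path = -math.log(rng.random())
        if path > tau:          # photon crosses the layer without absorption
            transmitted += 1
    return transmitted / n_photons

tau = 1.5
mc = mc_transmission(tau)
exact = math.exp(-tau)   # Beer-Lambert law
print(f"MC = {mc:.4f}, exact = {exact:.4f}")
```

Adding scattering phase functions and a refracting interface turns this skeleton into the kind of coupled atmosphere-ocean code compared in the paper.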

  18. Reactor physics simulations with coupled Monte Carlo calculation and computational fluid dynamics

    International Nuclear Information System (INIS)

    Seker, V.; Thomas, J. W.; Downar, T. J.

    2007-01-01

The interest in high fidelity modeling of nuclear reactor cores has increased over the last few years and has become computationally more feasible because of the dramatic improvements in processor speed and the availability of low-cost parallel platforms. In the research described here, high fidelity multi-physics analyses were performed by solving the neutron transport equation using Monte Carlo methods and solving the thermal-hydraulics equations using computational fluid dynamics. A computational tool based on coupling the Monte Carlo code MCNP5 and the Computational Fluid Dynamics (CFD) code STAR-CD was developed as an audit tool for lower-order nuclear reactor calculations. This paper presents the methodology of the developed computer program 'McSTAR' along with the verification and validation efforts. McSTAR is written in the Perl programming language and couples MCNP5 with the commercial CFD code STAR-CD. MCNP uses a continuous energy cross section library produced by the NJOY code system from the raw ENDF/B data. A major part of the work was to develop and implement methods to update the cross section library with the temperature distribution calculated by STAR-CD for every region. Three different methods were investigated, and two of them are implemented in McSTAR. The user subroutines in STAR-CD are modified to read the power density data and assign them to the appropriate variables in the program, and to write an output data file containing the temperature, density and indexing information needed to perform the mapping between MCNP and STAR-CD cells. The necessary input file manipulation, data file generation, normalization and multi-processor calculation settings are all handled through the program flow in McSTAR. Initial testing of the code was performed using a single pin cell and a 3X3 PWR pin-cell problem. The preliminary results of the single pin-cell problem are compared with those obtained from a STAR-CD coupled calculation with the deterministic transport code De
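
The outer fixed-point ("Picard") iteration that such a coupling performs can be sketched with a toy model. Both solver functions below are cheap stand-ins for the MCNP5 and STAR-CD calls, and every coefficient is illustrative, not physical data:

```python
# Toy coupled system: power depends on fuel temperature through a
# Doppler-like feedback, temperature depends on power through a heat
# balance. All coefficients are illustrative.

def neutronics(T_fuel):
    """Stand-in for the Monte Carlo power solve at temperature T_fuel [K]."""
    return 1000.0 * (1.0 - 1.0e-4 * (T_fuel - 900.0))   # power [W]

def thermal_hydraulics(power):
    """Stand-in for the CFD temperature solve at a given power [W]."""
    return 500.0 + 0.4 * power                           # fuel temperature [K]

T = 1200.0          # initial fuel-temperature guess [K]
for it in range(100):
    P = neutronics(T)
    T_new = thermal_hydraulics(P)
    if abs(T_new - T) < 1e-8:
        break
    T = 0.5 * T + 0.5 * T_new     # under-relaxation for stability
print(f"converged after {it} iterations: P = {P:.2f} W, T = {T_new:.2f} K")
```

In the real coupling, each pass additionally regenerates temperature-dependent cross sections before the next Monte Carlo solve, which is exactly the library-update machinery the abstract describes.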

  19. Experience with Kicker Beam Coupling Reduction Techniques

    CERN Document Server

    Gaxiola, Enrique; Caspers, Friedhelm; Ducimetière, Laurent; Kroyer, Tom

    2005-01-01

SPS beam impedance is still one of the worries for operation with the nominal LHC beam over longer periods, once the final configuration is installed in 2006. Several CERN SPS kickers suffer from significant beam induced ferrite heating. In specific cases, for instance beam scrubbing, the temperature of certain ferrite yokes went beyond the Curie point. Several retrofit impedance reduction techniques have been investigated theoretically and with practical tests. We report on experience gained during the 2004 SPS operation with resistively coated ceramic inserts in terms of kicker heating, pulse rise time, operating voltage, and vacuum behaviour. For another technique using interleaved metallic stripes we observed significant improvements in bench measurements. Advantages and drawbacks of both methods and potential combinations of them are discussed, and simulated as well as measured data are shown. Prospects for further improvements beyond 2006 are briefly outlined.

  20. Description of a neutron field perturbed by a probe using coupled Monte Carlo and discrete ordinates radiation transport calculations

    International Nuclear Information System (INIS)

    Zazula, J.M.

    1984-01-01

This work concerns the calculation of a neutron response caused by a neutron field perturbed by materials surrounding the source or the detector. The solution is obtained by coupling a Monte Carlo radiation transport computation for the perturbed region with a discrete ordinates transport computation for the unperturbed system. (author). 62 refs

  1. Hybrid method coupling molecular dynamics and Monte Carlo simulations to study the properties of gases in microchannels and nanochannels

    NARCIS (Netherlands)

    Nedea, S.V.; Frijns, A.J.H.; Steenhoven, van A.A.; Markvoort, Albert. J.; Hilbers, P.A.J.

    2005-01-01

    We combine molecular dynamics (MD) and Monte Carlo (MC) simulations to study the properties of gas molecules confined between two hard walls of a microchannel or nanochannel. The coupling between MD and MC simulations is introduced by performing MD near the boundaries for accuracy and MC in the bulk

  2. Coupled electron-ion Monte Carlo simulation of hydrogen molecular crystals

    Science.gov (United States)

    Rillo, Giovanni; Morales, Miguel A.; Ceperley, David M.; Pierleoni, Carlo

    2018-03-01

We performed simulations for solid molecular hydrogen at high pressures (250 GPa ≤ P ≤ 500 GPa) along two isotherms at T = 200 K (phase III) and at T = 414 K (phase IV). At T = 200 K, we considered likely candidates for phase III, the C2/c and Cmca-12 structures, while at T = 414 K in phase IV, we studied the Pc-48 structure. We employed both Coupled Electron-Ion Monte Carlo (CEIMC) and Path Integral Molecular Dynamics (PIMD). The latter is based on Density Functional Theory (DFT) with the van der Waals approximation (vdW-DF). The comparison between the two methods allows us to address the question of the accuracy of the exchange-correlation approximation of DFT for thermal and quantum protons without resorting to perturbation theories. In general, we find that atomic and molecular fluctuations in PIMD are larger than in CEIMC, which suggests that the potential energy surface from vdW-DF is less structured than the one from quantum Monte Carlo. We find qualitatively different behaviors for systems prepared in the C2/c structure for increasing pressure. Within PIMD, the C2/c structure is dynamically partially stable for P ≤ 250 GPa only: it retains the symmetry of the molecular centers but not the molecular orientation; at intermediate pressures, it develops layered structures like Pbcn or Ibam and transforms to the metallic Cmca-4 structure at P ≥ 450 GPa. Instead, within CEIMC, the C2/c structure is found to be dynamically stable at least up to 450 GPa; at increasing pressure, the molecular bond length increases and the nuclear correlation decreases. For the other two structures, the two methods are in qualitative agreement, although quantitative differences remain. We discuss various structural properties and the electrical conductivity. We find that these structures become conducting around 350 GPa but the metallic Drude-like behavior is reached only at around 500 GPa, consistent with recent experimental claims.

  3. Timesaving techniques for decision of electron-molecule collisions in Monte Carlo simulation of electrical discharges

    International Nuclear Information System (INIS)

    Sugawara, Hirotake; Mori, Naoki; Sakai, Yosuke; Suda, Yoshiyuki

    2007-01-01

Techniques to reduce the computational load of determining electron-molecule collisions in Monte Carlo simulations of electrical discharges are presented. By enhancing the detection efficiency of the no-collision case in the decision scheme for collisional events, the frequency of access to the time-consuming subroutines that calculate the electron collision cross sections of the gas molecules for the collision probability can be decreased. A benchmark test and an efficiency estimate show that the present techniques yield a practical saving in computation time
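
A classic device in the same spirit is the null-collision (constant majorant) method: flight times are sampled with a constant majorant collision frequency, and the expensive cross-section routine is consulted only at tentative collision sites. The sketch below uses a fictitious collision-frequency model, not data from the paper:

```python
import math
import random

def collision_frequency(energy_eV):
    """Expensive cross-section routine in a real code; cheap stand-in here."""
    return 1.0e8 * math.sqrt(energy_eV) / (1.0 + 0.1 * energy_eV)

NU_MAX = 1.6e8   # majorant: must bound collision_frequency on the energy range

def next_real_collision(energy_eV, rng):
    """Time to the next real collision: tentative collisions are drawn with
    the constant majorant NU_MAX and accepted with probability nu/NU_MAX,
    so null events are rejected with a single frequency evaluation."""
    t = 0.0
    while True:
        t += -math.log(rng.random()) / NU_MAX
        if rng.random() < collision_frequency(energy_eV) / NU_MAX:
            return t

rng = random.Random(3)
times = [next_real_collision(10.0, rng) for _ in range(50_000)]
mean_t = sum(times) / len(times)
print(f"mean flight time = {mean_t:.3e} s")
```

The mean sampled flight time reproduces 1/nu(E) exactly, even though no flight is ever traced with the true energy-dependent frequency.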

  4. Fastening, coupling and joining technique between diaspora and irredenta

    Science.gov (United States)

    Bauer, C.-O.

    1980-06-01

The problem of eliminating the present divergence and fragmentation ("diaspora") in the treatment of fastening, coupling, and joining techniques across different technical branches is examined. It is shown that, given an appropriate degree of independence, the fastening, coupling, and joining disciplines can recognize and exploit the numerous performance reserves that remain concealed under the present organization and practice owing to the lack of systematic treatment.

  5. Validation of variance reduction techniques in Mediso (SPIRIT DH-V) SPECT system by Monte Carlo

    International Nuclear Information System (INIS)

    Rodriguez Marrero, J. P.; Diaz Garcia, A.; Gomez Facenda, A.

    2015-01-01

Monte Carlo simulation of nuclear medical imaging systems is a widely used method for reproducing their operation in a real clinical environment. There are several Single Photon Emission Computed Tomography (SPECT) systems in Cuba, so it is clearly necessary to introduce a reliable and fast simulation platform in order to obtain consistent image data that reproduce the original measurement conditions. To fulfill these requirements, the Monte Carlo platform GAMOS (Geant4 Medicine Oriented Architecture for Applications) has been used. Because of the size and complex configuration of parallel-hole collimators in real clinical SPECT systems, Monte Carlo simulation usually consumes excessive time and computing resources. The main goal of the present work is to optimize the efficiency of the calculation by means of new GAMOS functionality. Two GAMOS variance reduction techniques that speed up the calculation by focusing and limiting the transport of gamma quanta inside the collimator were developed and validated. The results were assessed experimentally on the Mediso (SPIRIT DH-V) SPECT system, and the main quality control parameters, such as sensitivity and spatial resolution, were determined. Differences of 4.6% in sensitivity and 8.7% in spatial resolution with respect to the manufacturer values were found. Simulation time was decreased by up to a factor of 650, making it possible to perform several studies in about 8 hours each. (Author)

  6. Neutron therapy coupling brachytherapy and boron neutron capture therapy (BNCT) techniques

    International Nuclear Information System (INIS)

    Chaves, Iara Ferreira.

    1994-12-01

In the present dissertation, neutron radiation techniques applied to organs of the human body are investigated as oncologic radiation therapy. The proposed treatment consists of coupling two distinct techniques: Boron Neutron Capture Therapy (BNCT) and irradiation by discrete neutron sources following the brachytherapy concept. Biological and radio-dosimetric aspects of the two techniques are considered. Nuclear aspects are discussed, presenting the nuclear reactions occurring in the tumoral region and describing how the dose curves are evaluated. Methods for estimating radiation transmission are reviewed: solution of the neutron transport equation, the Monte Carlo methodology, and a simplified analytical calculation based on the diffusion equation and numerical integration. The last of these is implemented computationally and presented as a fast way to evaluate neutron transport in a homogeneous medium. The computational evaluation of the doses for distinct hypothetical situations is presented, applying the coupled BNCT and brachytherapy techniques as a possible oncologic treatment. (author). 78 refs., 61 figs., 21 tabs
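
A simplified diffusion-equation estimate of the kind mentioned above can be illustrated with the standard one-speed point-kernel for an infinite homogeneous medium. The parameter values are illustrative, not tissue data:

```python
import math

# One-speed diffusion-theory point-kernel: scalar flux of an isotropic
# point source in an infinite homogeneous medium,
#   phi(r) = S * exp(-r/L) / (4 * pi * D * r).
# Parameter values are illustrative only.

S = 1.0e9    # point-source strength [neutrons/s]
D = 0.9      # diffusion coefficient [cm]
L = 2.5      # diffusion length [cm]

def flux(r_cm):
    """Scalar flux [n/cm^2/s] at distance r_cm from the source."""
    return S * math.exp(-r_cm / L) / (4.0 * math.pi * D * r_cm)

for r in (1.0, 2.0, 5.0):
    print(f"r = {r:.1f} cm : flux = {flux(r):.3e} n/cm^2/s")
```

Multiplying such a flux by a kerma or dose-conversion factor gives the quick dose estimate that the dissertation compares against transport-based methods.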

  7. An efficient interpolation technique for jump proposals in reversible-jump Markov chain Monte Carlo calculations

    Science.gov (United States)

    Farr, W. M.; Mandel, I.; Stevens, D.

    2015-01-01

Selection among alternative theoretical models given an observed dataset is an important challenge in many areas of physics and astronomy. Reversible-jump Markov chain Monte Carlo (RJMCMC) is an extremely powerful technique for performing Bayesian model selection, but it suffers from a fundamental difficulty: it requires jumps between model parameter spaces, but cannot efficiently explore both parameter spaces at once. Thus, a naive jump between parameter spaces is unlikely to be accepted in the Markov chain Monte Carlo (MCMC) algorithm, and convergence is correspondingly slow. Here, we demonstrate an interpolation technique that uses samples from single-model MCMCs to propose intermodel jumps from an approximation to the single-model posterior of the target parameter space. The interpolation technique, based on a kD-tree data structure, is adaptive and efficient in modest dimensionality. We show that our technique leads to improved convergence over naive jumps in an RJMCMC, and compare it to other proposals in the literature that improve the convergence of RJMCMCs. We also demonstrate the use of the same interpolation technique as a way to construct efficient ‘global’ proposal distributions for single-model MCMCs without prior knowledge of the structure of the posterior distribution, and discuss improvements that permit the method to be used in higher-dimensional spaces efficiently. PMID:26543580
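
The idea behind such interpolated jump proposals can be sketched in one dimension: draw the target-model parameters for a between-model jump from a density built on stored single-model MCMC samples. The paper uses a kD-tree; a plain Gaussian-kernel mixture stands in for it here, and all numbers are illustrative:

```python
import math
import random

def make_proposal(samples, bandwidth=0.2):
    """Kernel-density proposal built from stored posterior samples."""
    def draw(rng):
        # Pick a stored sample, then jitter it with the kernel
        return rng.choice(samples) + rng.gauss(0.0, bandwidth)
    def logpdf(x):
        # Mixture density, needed for the reversible-jump acceptance ratio
        s = sum(math.exp(-0.5 * ((x - m) / bandwidth) ** 2) for m in samples)
        return math.log(s / (len(samples) * bandwidth * math.sqrt(2 * math.pi)))
    return draw, logpdf

rng = random.Random(5)
# Stored single-model posterior samples (here: synthetic draws from N(3, 0.5))
stored = [rng.gauss(3.0, 0.5) for _ in range(2000)]
draw, logpdf = make_proposal(stored)

proposals = [draw(rng) for _ in range(20_000)]
mean = sum(proposals) / len(proposals)
print(f"proposal mean = {mean:.3f} (target posterior mean 3.0)")
```

Because proposed points land where the single-model posterior has mass, intermodel jumps are far more likely to be accepted than jumps drawn from the prior.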

  8. Perturbative expansions from Monte Carlo simulations at weak coupling: Wilson loops and the static-quark self-energy

    Science.gov (United States)

    Trottier, H. D.; Shakespeare, N. H.; Lepage, G. P.; MacKenzie, P. B.

    2002-05-01

Perturbative coefficients for Wilson loops and the static-quark self-energy are extracted from Monte Carlo simulations at weak coupling. The lattice volumes and couplings are chosen to ensure that the lattice momenta are all perturbative. Twisted boundary conditions are used to eliminate the effects of lattice zero modes and to suppress nonperturbative finite-volume effects due to Z(3) phases. Simulations of the Wilson gluon action are done with both periodic and twisted boundary conditions, and over a wide range of lattice volumes (from 3⁴ to 16⁴) and couplings (from β~9 to β~60). A high precision comparison is made between the simulation data and results from finite-volume lattice perturbation theory. The Monte Carlo results are shown to be in excellent agreement with perturbation theory through second order. New results for third-order coefficients for a number of Wilson loops and the static-quark self-energy are reported.

  9. The electron transport problem sampling by Monte Carlo individual collision technique

    International Nuclear Information System (INIS)

    Androsenko, P.A.; Belousov, V.I.

    2005-01-01

The problem of electron transport is of great interest in many fields of modern science, and Monte Carlo sampling is a standard way to solve it. Electron transport is characterized by a very large number of individual interactions. To simulate it, the 'condensed history' technique may be used, in which a large number of collisions are grouped into a single step to be sampled randomly. Another kind of Monte Carlo sampling is the individual collision technique, which offers clear advantages over the condensed history approach. For example, one does not need to supply the parameters required by the condensed history technique, such as an upper limit for the electron energy, the resolution, or the number of sub-steps. The condensed history technique may also lose some very important electron tracks because of the step-parameter limits it imposes on particle movement and because of weaknesses in its algorithms, for example the energy indexing algorithm. The individual collision technique has none of these disadvantages. This report presents some sampling algorithms of the new version of the BRAND code, where the above-mentioned technique is used. All information on electrons was taken from ENDF-6 files, which form an important part of BRAND; these files have not been processed but are taken directly from the electron data source. Four kinds of interaction were considered: elastic scattering, bremsstrahlung, atomic excitation and atomic electro-ionization. Some sampling results are presented in comparison with analogous methods; for example, the endovascular radiotherapy problem (P2) of QUADOS2002 is compared with other techniques that are usually used. (authors)
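
The per-collision bookkeeping of the individual-collision technique can be sketched as follows: at each collision site, the interaction channel is sampled from the partial cross sections. The numbers below are placeholders, not ENDF-6 data:

```python
import random

PARTIAL_XS = {            # fictitious partial cross sections [barn]
    "elastic": 4.0,
    "bremsstrahlung": 0.5,
    "excitation": 1.0,
    "ionization": 2.5,
}

def sample_channel(rng):
    """Pick an interaction channel with probability sigma_i / sigma_total."""
    total = sum(PARTIAL_XS.values())
    xi = rng.random() * total
    accum = 0.0
    for channel, sigma in PARTIAL_XS.items():
        accum += sigma
        if xi < accum:
            return channel
    return channel   # guard against round-off

rng = random.Random(11)
counts = {c: 0 for c in PARTIAL_XS}
n = 100_000
for _ in range(n):
    counts[sample_channel(rng)] += 1

for channel, sigma in PARTIAL_XS.items():
    print(f"{channel:15s} expected {sigma/8.0:.3f}  sampled {counts[channel]/n:.3f}")
```

In a real code, each sampled channel then triggers its own secondary-particle and energy-loss sampling; the point here is only the channel selection that condensed-history codes replace with aggregated multiple-scattering theory.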

  10. The electron transport problem sampling by Monte Carlo individual collision technique

    Energy Technology Data Exchange (ETDEWEB)

    Androsenko, P.A.; Belousov, V.I. [Obninsk State Technical Univ. of Nuclear Power Engineering, Kaluga region (Russian Federation)

    2005-07-01

The problem of electron transport is of great interest in many fields of modern science, and Monte Carlo sampling is a standard way to solve it. Electron transport is characterized by a very large number of individual interactions. To simulate it, the 'condensed history' technique may be used, in which a large number of collisions are grouped into a single step to be sampled randomly. Another kind of Monte Carlo sampling is the individual collision technique, which offers clear advantages over the condensed history approach. For example, one does not need to supply the parameters required by the condensed history technique, such as an upper limit for the electron energy, the resolution, or the number of sub-steps. The condensed history technique may also lose some very important electron tracks because of the step-parameter limits it imposes on particle movement and because of weaknesses in its algorithms, for example the energy indexing algorithm. The individual collision technique has none of these disadvantages. This report presents some sampling algorithms of the new version of the BRAND code, where the above-mentioned technique is used. All information on electrons was taken from ENDF-6 files, which form an important part of BRAND; these files have not been processed but are taken directly from the electron data source. Four kinds of interaction were considered: elastic scattering, bremsstrahlung, atomic excitation and atomic electro-ionization. Some sampling results are presented in comparison with analogous methods; for example, the endovascular radiotherapy problem (P2) of QUADOS2002 is compared with other techniques that are usually used. (authors)

  11. ITS, TIGER System of Coupled Electron Photon Transport by Monte-Carlo

    International Nuclear Information System (INIS)

    Halbleib, J.A.; Mehlhorn, T.A.; Young, M.F.

    1996-01-01

1 - Description of program or function: ITS permits a state-of-the-art Monte Carlo solution of linear time-integrated coupled electron/photon radiation transport problems with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. 2 - Method of solution: Through a machine-portable utility that emulates the basic features of the CDC UPDATE processor, the user selects one of eight codes for running on a machine of one of four (at least) major vendors. With the ITS-3.0 release the PSR-0245/UPEML package is included to perform these functions. The ease with which this utility is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is maximized by employing the best available cross sections and sampling distributions, and the most complete physical model for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. Flexibility of construction permits the codes to be tailored to specific applications and the capabilities of the codes to be extended to more complex applications through update procedures. 3 - Restrictions on the complexity of the problem: - Restrictions and/or limitations for ITS depend upon the local operating system

  12. ITS Version 6 : the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.

    Energy Technology Data Exchange (ETDEWEB)

    Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

    2008-04-01

ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes, and (2) conversion to Fortran 90. The general user friendliness of the software has been enhanced through memory allocation to reduce the need for users to modify and recompile the code.

  13. The development of depletion program coupled with Monte Carlo computer code

    International Nuclear Information System (INIS)

    Nguyen Kien Cuong; Huynh Ton Nghiem; Vuong Huu Tan

    2015-01-01

The paper presents the development of a depletion code for light water reactors coupled with the MCNP5 code, called MCDL (Monte Carlo Depletion for Light Water Reactors). The first-order differential depletion equations for 21 actinide isotopes and 50 fission product isotopes are solved by the Radau IIA Implicit Runge-Kutta (IRK) method after receiving the neutron flux, one-group reaction rates and multiplication factors for a fuel pin, fuel assembly or the whole reactor core from the calculation results of the MCNP5 code. Calculations for beryllium poisoning and cooling time are also integrated in the code. To verify and validate the MCDL code, high enriched uranium (HEU) and low enriched uranium (LEU) fuel assemblies of the VVR-M2 type, as well as cores of the Dalat Nuclear Research Reactor (DNRR) with 89 fresh HEU fuel assemblies and 92 fresh LEU fuel assemblies, have been investigated and compared with the results calculated by the SRAC code and the MCNP-REBUS linkage system code. The results show good agreement between the calculated data of the MCDL code and the reference codes. (author)
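
The stiff depletion solve can be illustrated on a two-nuclide decay chain. Backward Euler stands in here for the Radau IIA IRK method used on the full 71-isotope system, and the decay constants are illustrative:

```python
import math

# Implicit (backward) Euler on the chain A -> B -> (stable):
#   dNa/dt = -lam_a * Na,   dNb/dt = lam_a * Na - lam_b * Nb.
# Each step solves (I - dt*A) N^{n+1} = N^n, here in closed form.

lam_a, lam_b = 2.0, 0.5        # decay constants [1/s] (illustrative)
Na, Nb = 1.0e6, 0.0            # initial number densities
dt, steps = 0.005, 200         # integrate to t = 1 s

for _ in range(steps):
    Na_new = Na / (1.0 + lam_a * dt)
    Nb_new = (Nb + dt * lam_a * Na_new) / (1.0 + lam_b * dt)
    Na, Nb = Na_new, Nb_new

# Analytic Bateman solution for comparison
t = dt * steps
Na_exact = 1.0e6 * math.exp(-lam_a * t)
Nb_exact = 1.0e6 * lam_a / (lam_b - lam_a) * (math.exp(-lam_a * t) - math.exp(-lam_b * t))
print(f"Na = {Na:.1f} (exact {Na_exact:.1f}), Nb = {Nb:.1f} (exact {Nb_exact:.1f})")
```

Implicit schemes of this kind remain stable for the widely spread decay constants of a real actinide/fission-product system, which is why an L-stable method such as Radau IIA is a natural choice.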

  14. Practical adjoint Monte Carlo technique for fixed-source and eigenfunction neutron transport problems

    International Nuclear Information System (INIS)

    Hoogenboom, J.E.

    1981-01-01

    An adjoint Monte Carlo technique is described for the solution of neutron transport problems. The optimum biasing function for a zero-variance collision estimator is derived. The optimum treatment of an analog of a non-velocity thermal group has also been derived. The method is extended to multiplying systems, especially for eigenfunction problems to enable the estimate of averages over the unknown fundamental neutron flux distribution. A versatile computer code, FOCUS, has been written, based on the described theory. Numerical examples are given for a shielding problem and a critical assembly, illustrating the performance of the FOCUS code. 19 refs

  15. Efficient data management techniques implemented in the Karlsruhe Monte Carlo code KAMCCO

    International Nuclear Information System (INIS)

    Arnecke, G.; Borgwaldt, H.; Brandl, V.; Lalovic, M.

    1974-01-01

    The Karlsruhe Monte Carlo Code KAMCCO is a forward neutron transport code with an eigenfunction and a fixed source option, including time-dependence. A continuous energy model is combined with a detailed representation of neutron cross sections, based on linear interpolation, Breit-Wigner resonances and probability tables. All input is processed into densely packed, dynamically addressed parameter fields and networks of pointers (addresses). Estimation routines are decoupled from random walk and analyze a storage region with sample records. This technique leads to fast execution with moderate storage requirements and without any I/O-operations except in the input and output stages. 7 references. (U.S.)

  16. Particle Markov Chain Monte Carlo Techniques of Unobserved Component Time Series Models Using Ox

    DEFF Research Database (Denmark)

    Nonejad, Nima

This paper details Particle Markov chain Monte Carlo (PMCMC) techniques for the analysis of unobserved component time series models using several economic data sets. PMCMC combines the particle filter with the Metropolis-Hastings algorithm. Overall, PMCMC provides a very compelling, computationally fast... and efficient framework for estimation. These advantages are used, for instance, to estimate stochastic volatility models with leverage effects or with Student-t distributed errors. We also model changing time series characteristics of the US inflation rate by considering a heteroskedastic ARFIMA model where...

  17. PELE:  Protein Energy Landscape Exploration. A Novel Monte Carlo Based Technique.

    Science.gov (United States)

    Borrelli, Kenneth W; Vitalis, Andreas; Alcantara, Raul; Guallar, Victor

    2005-11-01

Combining protein structure prediction algorithms and Metropolis Monte Carlo techniques, we provide a novel method to explore all-atom energy landscapes. The core of the technique is based on a steered localized perturbation followed by side-chain sampling as well as minimization cycles. The algorithm and its application to ligand diffusion are presented here. Ligand exit pathways are successfully modeled for different systems containing ligands of various sizes: carbon monoxide in myoglobin, camphor in cytochrome P450cam, and palmitic acid in the intestinal fatty-acid-binding protein. These initial applications reveal the potential of this new technique in mapping millisecond-time-scale processes. The computational cost associated with the exploration is significantly less than that of conventional MD simulations.

  18. A Newton-based Jacobian-free approach for neutronic-Monte Carlo/thermal-hydraulic static coupled analysis

    International Nuclear Information System (INIS)

    Mylonakis, Antonios G.; Varvayanni, M.; Catsaros, N.

    2017-01-01

    Highlights:
    • A Newton-based Jacobian-free Monte Carlo/thermal-hydraulic coupling approach is introduced.
    • OpenMC is coupled with COBRA-EN via a Newton-based approach.
    • The introduced coupling approach is tested in numerical experiments.
    • The performance of the new approach is compared with the traditional “serial” coupling approach.

    Abstract: In the field of nuclear reactor analysis, multi-physics calculations that account for the bonded nature of the neutronic and thermal-hydraulic phenomena are of major importance for both reactor safety and design. So far, in the context of Monte Carlo neutronic analysis, a kind of “serial” algorithm has mainly been used for coupling with thermal-hydraulics. The main motivation of this work is the interest in an algorithm that could maintain the distinct treatment of the involved fields within a tight coupling context, which could translate into higher convergence rates and more stable behaviour. This work investigates the possibility of replacing the usual “serial” iteration with an approximate Newton algorithm. The selected algorithm, called Approximate Block Newton, is a version of the Jacobian-free Newton-Krylov method suitably modified for coupling mono-disciplinary solvers. Within this Newton scheme the linearised system is solved with a Krylov solver in order to avoid the creation of the Jacobian matrix. A coupling algorithm between Monte Carlo neutronics and thermal-hydraulics based on the above-mentioned methodology is developed and its performance is analysed. More specifically, OpenMC, a Monte Carlo neutronics code, and COBRA-EN, a thermal-hydraulics code for sub-channel and core analysis, are merged in a coupling scheme using the Approximate Block Newton method, aiming to examine the performance of this scheme and compare it with that of the “traditional” serial iterative scheme. First results show a clear improvement of the convergence especially in problems where significant
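
The key Jacobian-free ingredient, replacing the Jacobian-vector product J·v by a finite-difference quotient, can be sketched on a toy two-unknown coupled system. All coefficients are illustrative; for this tiny system the two probe directions recover the full Jacobian, whereas a production Newton-Krylov code would feed the J·v products to GMRES instead:

```python
import math

# Toy coupled residual: a "power" equation with a mild nonlinear feedback
# and a linear "temperature" equation. Root is (P, T) = (2, 600).

def residual(u):
    P, T = u
    return [P - 2.0 * math.exp(-0.0005 * (T - 600.0)),
            T - (500.0 + 50.0 * P)]

def jv(u, v, eps=1e-7):
    """Jacobian-free J*v via the quotient (F(u + eps*v) - F(u)) / eps."""
    f0 = residual(u)
    f1 = residual([u[0] + eps * v[0], u[1] + eps * v[1]])
    return [(a - b) / eps for a, b in zip(f1, f0)]

u = [1.0, 650.0]
for _ in range(30):
    F = residual(u)
    # Probe the two coordinate directions to get the Jacobian's action
    c0, c1 = jv(u, [1.0, 0.0]), jv(u, [0.0, 1.0])
    a, b = c0[0], c1[0]
    c, d = c0[1], c1[1]
    det = a * d - b * c
    dP = (-F[0] * d + F[1] * b) / det     # solve J * du = -F (Cramer's rule)
    dT = (F[0] * c - F[1] * a) / det
    u = [u[0] + dP, u[1] + dT]
    if max(abs(F[0]), abs(F[1])) < 1e-10:
        break
print(f"P = {u[0]:.6f}, T = {u[1]:.4f}")
```

The attraction for Monte Carlo coupling is that `residual` only needs black-box evaluations of the two solvers, so no Jacobian of the stochastic neutronics solve is ever formed.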

  19. Determination of true coincidence correction factors using Monte-Carlo simulation techniques

    Directory of Open Access Journals (Sweden)

    Chionis Dionysios A.

    2014-01-01

The aim of this work is the numerical calculation of true coincidence correction factors by means of Monte Carlo simulation techniques. For this purpose, the Monte Carlo computer code PENELOPE was used, and its main program PENMAIN was suitably modified to include the effect of the true coincidence phenomenon. The modified main program was used to determine the full-energy-peak efficiency of an XtRa Ge detector with 104% relative efficiency, and the results obtained for the 1173 keV and 1332 keV photons of 60Co were found to be consistent with the respective experimental values. The true coincidence correction factors were calculated as the ratio of the full-energy-peak efficiencies determined with the original and the modified main program PENMAIN. The developed technique was applied to 57Co, 88Y and 134Cs for two source-to-detector geometries. The results were compared with true coincidence correction factors calculated with the "TrueCoinc" program, and the relative bias was found to be less than 2%, 4% and 8% for 57Co, 88Y and 134Cs, respectively.
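
For a two-photon cascade such as the 1173/1332 keV pair of 60Co, the simplest summing-out model (ignoring angular correlation) gives the correction as a ratio involving the total efficiency of the partner photon. The efficiency values below are assumed for illustration, not the detector data of the paper:

```python
# Simplest summing-out model for a two-photon cascade: a count is lost from
# the 1173 keV peak whenever the coincident 1332 keV photon deposits any
# energy, so the multiplicative correction is COI = 1 / (1 - eps_total_other).

eps_total_1173 = 0.062   # total efficiency at 1173 keV (assumed value)
eps_total_1332 = 0.058   # total efficiency at 1332 keV (assumed value)

def summing_out_correction(eps_total_partner):
    """Multiplicative correction to the measured peak area."""
    return 1.0 / (1.0 - eps_total_partner)

c1173 = summing_out_correction(eps_total_1332)   # 1173 keV peak loses to 1332 keV
c1332 = summing_out_correction(eps_total_1173)
print(f"correction 1173 keV: {c1173:.4f}, 1332 keV: {c1332:.4f}")
```

A full Monte Carlo treatment, as in the paper, replaces this closed-form model by simulating the complete cascade, which also captures geometry-dependent and multi-photon effects.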

  20. Development of self-learning Monte Carlo technique for more efficient modeling of nuclear logging measurements

    International Nuclear Information System (INIS)

    Zazula, J.M.

    1988-01-01

The self-learning Monte Carlo technique has been implemented in the commonly used general-purpose neutron transport code MORSE in order to enhance the sampling of particle histories that contribute to a detector response. The parameters of all the biasing techniques available in MORSE, i.e. splitting, Russian roulette, source and collision outgoing-energy importance sampling, path-length transformation and additional biasing of the source angular distribution, are optimized. The learning process is performed iteratively after each batch of particles by retrieving the data concerning the subset of histories that passed through the detector region and energy range in the previous batches. This procedure has been tested on two sample problems in nuclear geophysics in which an unoptimized Monte Carlo calculation is particularly inefficient. The results are encouraging, although the presented method does not directly minimize the variance, and the convergence of the algorithm is restricted by the statistics of successful histories from the previous random walks. Further applications to the modeling of nuclear logging measurements appear promising. 11 refs., 2 figs., 3 tabs. (author)

  1. Alpha particle density and energy distributions in tandem mirrors using Monte-Carlo techniques

    International Nuclear Information System (INIS)

    Kerns, J.A.

    1986-05-01

    We have simulated the alpha thermalization process using a Monte Carlo technique, in which the alpha guiding center is followed between simulated collisions and Spitzer's collision model is used for the alpha-plasma interaction. Monte Carlo techniques are used to determine the alpha radial birth position, the alpha particle position at a collision, and the angle scatter and dispersion at a collision. The plasma is modeled as a hot reacting core, surrounded by a cold halo plasma (T ≈ 50 eV). Alpha orbits that intersect the halo lose 90% of their energy to the halo electrons because the halo drag is ten times greater than the drag in the core. The uneven drag across the alpha orbit also produces an outward, radial, guiding-center drift. This drag drift depends on the plasma density and temperature radial profiles. We have modeled these profiles and have specifically studied a single-scale-length model, in which the density scale length (r_pD) equals the temperature scale length (r_pT), and a two-scale-length model, in which r_pD/r_pT = 1.1

  2. ITS - The integrated TIGER series of coupled electron/photon Monte Carlo transport codes

    International Nuclear Information System (INIS)

    Halbleib, J.A.; Mehlhorn, T.A.

    1985-01-01

    The TIGER series of time-independent coupled electron/photon Monte Carlo transport codes is a group of multimaterial, multidimensional codes designed to provide a state-of-the-art description of the production and transport of the electron/photon cascade. The codes follow both electrons and photons from 1.0 GeV down to 1.0 keV, and the user has the option of combining the collisional transport with transport in macroscopic electric and magnetic fields of arbitrary spatial dependence. Source particles can be either electrons or photons. The most important output data are (a) charge and energy deposition profiles, (b) integral and differential escape coefficients for both electrons and photons, (c) differential electron and photon flux, and (d) pulse-height distributions for selected regions of the problem geometry. The base codes of the series differ from one another primarily in their dimensionality and geometric modeling. They include (a) a one-dimensional multilayer code, (b) a code that describes the transport in two-dimensional axisymmetric cylindrical material geometries with a fully three-dimensional description of particle trajectories, and (c) a general three-dimensional transport code which employs a combinatorial geometry scheme. These base codes were designed primarily for describing radiation transport for those situations in which the detailed atomic structure of the transport medium is not important. For some applications, it is desirable to have a more detailed model of the low energy transport. The system includes three additional codes that contain a more elaborate ionization/relaxation model than the base codes. Finally, the system includes two codes that combine the collisional transport of the multidimensional base codes with transport in macroscopic electric and magnetic fields of arbitrary spatial dependence

  3. Biases and statistical errors in Monte Carlo burnup calculations: an unbiased stochastic scheme to solve Boltzmann/Bateman coupled equations

    International Nuclear Information System (INIS)

    Dumonteil, E.; Diop, C.M.

    2011-01-01

    External linking scripts between Monte Carlo transport codes and burnup codes, and complete integration of burnup capability into Monte Carlo transport codes, have been or are currently being developed. Monte Carlo linked burnup methodologies may serve as an excellent benchmark for new deterministic burnup codes used for advanced systems; however, there are some instances where deterministic methodologies break down (i.e., heavily angularly biased systems containing exotic materials without a proper group structure) and Monte Carlo burnup may serve as an actual design tool. Therefore, researchers are also developing these capabilities in order to examine complex, three-dimensional exotic-material systems for which no benchmark data exist. Providing a reference scheme implies being able to associate statistical errors with any neutronic value of interest, such as k(eff), reaction rates, and fluxes. Usually in Monte Carlo, standard deviations are associated with a particular value by performing different independent and identical simulations (also referred to as 'cycles', 'batches', or 'replicas'), but this is only valid if the calculation itself is not biased. And, as will be shown in this paper, there is a bias in the methodology that couples transport and depletion codes, because the Bateman equations are not linear functions of the fluxes or of the reaction rates (those quantities always being estimated with an uncertainty). Therefore, we have to quantify and correct this bias. This is achieved by deriving an unbiased minimum-variance estimator of a matrix exponential function of a normal mean. The result is then used to propose a reference scheme to solve the Boltzmann/Bateman coupled equations with Monte Carlo transport codes. Numerical tests are performed with an ad hoc Monte Carlo code on a very simple depletion case and compared to the theoretical results obtained with the reference scheme. Finally, the statistical error propagation

  4. Optimizing Availability of a Framework in Series Configuration Utilizing Markov Model and Monte Carlo Simulation Techniques

    Directory of Open Access Journals (Sweden)

    Mansoor Ahmed Siddiqui

    2017-06-01

    This research work aims at optimizing the availability of a framework comprising two units linked in a series configuration, utilizing the Markov Model and Monte Carlo (MC) simulation techniques. In this article, effort has been made to develop a maintenance model that incorporates three distinct states for each unit, taking into account their different levels of deterioration. Calculations are carried out using the proposed model for two distinct cases of corrective repair, namely perfect and imperfect repairs, both with and without opportunistic maintenance. Initially, results are obtained using an analytical technique, i.e., the Markov Model. Validation of these results is later carried out with the help of MC simulation. In addition, MC-simulation-based codes also work well for frameworks that follow non-exponential failure and repair rates, and thus overcome the limitations of the Markov Model.
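    As a minimal illustration of validating a Markov availability result with MC simulation, the sketch below uses a two-state (up/down) version of the problem rather than the paper's three-state deterioration model, with invented failure and repair rates: two identical units in series, where the system is available only when both units are up.

```python
import random

# Steady-state availability of two units in series, each with constant
# failure rate lam and repair rate mu, estimated by a discrete-time
# Monte Carlo walk and checked against the analytic Markov result.
def simulate_availability(lam=0.01, mu=0.1, dt=0.1, steps=200_000, seed=7):
    rng = random.Random(seed)
    up = [True, True]
    up_time = 0
    for _ in range(steps):
        for i in range(2):
            if up[i] and rng.random() < lam * dt:
                up[i] = False            # unit fails
            elif not up[i] and rng.random() < mu * dt:
                up[i] = True             # unit repaired
        if all(up):
            up_time += 1
    return up_time / steps

analytic = (0.1 / 0.11) ** 2             # (mu / (lam + mu))^2, identical units
mc = simulate_availability()
```

With exponential (constant-rate) failures the two answers agree; the MC route keeps working when the rates are made non-exponential, which is exactly the advantage the abstract points out.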

  5. Population synthesis of radio and gamma-ray millisecond pulsars using Markov Chain Monte Carlo techniques

    Science.gov (United States)

    Gonthier, Peter L.; Koh, Yew-Meng; Kust Harding, Alice

    2016-04-01

    We present preliminary results of a new population synthesis of millisecond pulsars (MSPs) from the Galactic disk using Markov Chain Monte Carlo techniques to better understand the model parameter space. We include empirical radio and gamma-ray luminosity models that depend on the pulsar period and period derivative with freely varying exponents. The magnitudes of the model luminosities are adjusted to reproduce the number of MSPs detected by a group of thirteen radio surveys, as well as the MSP birth rate in the Galaxy and the number of MSPs detected by Fermi. We explore various high-energy emission geometries such as the slot gap, outer gap, two-pole caustic, and pair-starved polar cap models. The parameters associated with the birth distributions for the mass accretion rate, magnetic field, and period distributions are well constrained. With the set of four free parameters, we employ Markov Chain Monte Carlo simulations to explore the model parameter space. We present preliminary comparisons of the simulated and detected distributions of radio and gamma-ray pulsar characteristics. We estimate the contribution of MSPs to the diffuse gamma-ray background with a special focus on the Galactic Center. We express our gratitude for the generous support of the National Science Foundation (RUI: AST-1009731), the Fermi Guest Investigator Program, and the NASA Astrophysics Theory and Fundamental Program (NNX09AQ71G).
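    The MCMC exploration step itself can be sketched with a one-parameter Metropolis sampler. The Gaussian log-likelihood below (centered at 2.0 with width 0.5) is an invented stand-in for the population-synthesis likelihood, which in the actual work compares simulated and detected pulsar distributions.

```python
import math
import random

# Minimal Metropolis sampler over one model parameter theta, using an
# invented Gaussian log-likelihood as a stand-in for the real one.
def log_like(theta, mu=2.0, sigma=0.5):
    return -0.5 * ((theta - mu) / sigma) ** 2

def metropolis(n=50_000, step=0.5, seed=3):
    rng = random.Random(seed)
    theta, ll = 0.0, log_like(0.0)
    chain = []
    for _ in range(n):
        prop = theta + rng.gauss(0.0, step)       # random-walk proposal
        ll_prop = log_like(prop)
        # accept with probability min(1, exp(delta log-likelihood))
        if rng.random() < math.exp(min(0.0, ll_prop - ll)):
            theta, ll = prop, ll_prop
        chain.append(theta)
    return chain

chain = metropolis()
burn = chain[5000:]                               # discard burn-in
mean = sum(burn) / len(burn)
```

The chain's histogram approximates the posterior over the parameter; with several free parameters the same accept/reject step runs on a vector of proposals.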

  6. Gating Techniques for Rao-Blackwellized Monte Carlo Data Association Filter

    Directory of Open Access Journals (Sweden)

    Yazhao Wang

    2014-01-01

    This paper studies the Rao-Blackwellized Monte Carlo data association (RBMCDA) filter for multiple target tracking. The elliptical gating strategies are redesigned and incorporated into the framework of the RBMCDA filter. The obvious benefit is a reduction of the time cost, because the data association procedure can be carried out with fewer validated measurements. In addition, the overlapped parts of neighboring validation regions are divided into several separate subregions according to the possible origins of the validated measurements. In these subregions, the measurement uncertainties can be taken into account more reasonably than with a simple elliptical gate. This helps the RBMCDA algorithm achieve higher tracking ability through a better association prior approximation. Simulation results are provided to show the effectiveness of the proposed gating techniques.
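    An elliptical gate is, at bottom, a Mahalanobis-distance test of each measurement against the predicted measurement, thresholded at a chi-square quantile. The 2-D sketch below assumes a 99% gate with two degrees of freedom (threshold ≈ 9.21); the covariance inversion is written out for the 2×2 case.

```python
# Elliptical gating: validate a measurement z against a prediction z_pred
# with innovation covariance S = [[a, b], [b, c]].
def mahalanobis2(z, z_pred, S):
    dz = (z[0] - z_pred[0], z[1] - z_pred[1])
    a, b, c = S[0][0], S[0][1], S[1][1]
    det = a * c - b * b
    # quadratic form dz^T S^-1 dz, with the 2x2 inverse expanded inline
    return (c * dz[0] ** 2 - 2 * b * dz[0] * dz[1] + a * dz[1] ** 2) / det

def in_gate(z, z_pred, S, g2=9.21):   # g2 ~ chi-square(2 dof) at 99%
    return mahalanobis2(z, z_pred, S) <= g2
```

Only the measurements passing `in_gate` enter the data-association step, which is where the time saving comes from.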

  7. TART 2000: A Coupled Neutron-Photon, 3-D, Combinatorial Geometry, Time Dependent, Monte Carlo Transport Code

    International Nuclear Information System (INIS)

    Cullen, D.E

    2000-01-01

    TART2000 is a coupled neutron-photon, 3-dimensional, combinatorial geometry, time-dependent Monte Carlo radiation transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART2000 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART2000 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART2000 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART2000 and its data files

  8. TART 2000 A Coupled Neutron-Photon, 3-D, Combinatorial Geometry, Time Dependent, Monte Carlo Transport Code

    CERN Document Server

    Cullen, D

    2000-01-01

    TART2000 is a coupled neutron-photon, 3-dimensional, combinatorial geometry, time-dependent Monte Carlo radiation transport code. This code can run on any modern computer. It is a complete system to assist you with input preparation, running Monte Carlo calculations, and analysis of output results. TART2000 is also incredibly FAST; if you have used similar codes, you will be amazed at how fast this code is compared to other similar codes. Use of the entire system can save you a great deal of time and energy. TART2000 is distributed on CD. This CD contains on-line documentation for all codes included in the system, the codes configured to run on a variety of computers, and many example problems that you can use to familiarize yourself with the system. TART2000 completely supersedes all older versions of TART, and it is strongly recommended that users only use the most recent version of TART2000 and its data files.

  9. Improvement of the symbolic Monte-Carlo method for the transport equation: P1 extension and coupling with diffusion

    International Nuclear Information System (INIS)

    Clouet, J.F.; Samba, G.

    2005-01-01

    We use asymptotic analysis to study the diffusion limit of the Symbolic Implicit Monte Carlo (SIMC) method for the transport equation. For standard SIMC with piecewise constant basis functions, we demonstrate mathematically that the solution converges to the solution of a wrong diffusion equation. Nevertheless, a simple extension to piecewise linear basis functions makes it possible to obtain the correct solution. This improvement allows calculation in an opaque medium on a mesh resolving the diffusion scale, which is much larger than the transport scale. Even so, the huge number of particles necessary to get a correct answer makes this computation time-consuming. Thus, we have derived from this asymptotic study a hybrid method coupling a deterministic calculation in the opaque medium with a Monte Carlo calculation in the transparent medium. This method gives exactly the same results as the previous one, but at a much lower cost. We present numerical examples which illustrate the analysis. (authors)

  10. Comparison of internal dose estimates obtained using organ-level, voxel S value, and Monte Carlo techniques

    Energy Technology Data Exchange (ETDEWEB)

    Grimes, Joshua, E-mail: grimes.joshua@mayo.edu [Department of Physics and Astronomy, University of British Columbia, Vancouver V5Z 1L8 (Canada); Celler, Anna [Department of Radiology, University of British Columbia, Vancouver V5Z 1L8 (Canada)

    2014-09-15

    Purpose: The authors' objective was to compare internal dose estimates obtained using the Organ Level Dose Assessment with Exponential Modeling (OLINDA/EXM) software, the voxel S value technique, and Monte Carlo simulation. Monte Carlo dose estimates were used as the reference standard to assess the impact of patient-specific anatomy on the final dose estimate. Methods: Six patients injected with 99mTc-hydrazinonicotinamide-Tyr3-octreotide were included in this study. A hybrid planar/SPECT imaging protocol was used to estimate 99mTc time-integrated activity coefficients (TIACs) for kidneys, liver, spleen, and tumors. Additionally, TIACs were predicted for 131I, 177Lu, and 90Y assuming the same biological half-lives as the 99mTc-labeled tracer. The TIACs were used as input to OLINDA/EXM for organ-level dose calculation, and voxel-level dosimetry was performed using the voxel S value method and Monte Carlo simulation. Dose estimates for the 99mTc, 131I, 177Lu, and 90Y distributions were evaluated by comparing (i) organ-level S values corresponding to each method, (ii) total tumor and organ doses, (iii) differences in right and left kidney doses, and (iv) voxelized dose distributions calculated by Monte Carlo and the voxel S value technique. Results: The S values for all investigated radionuclides used by OLINDA/EXM and the corresponding patient-specific S values calculated by Monte Carlo agreed within 2.3% on average for self-irradiation, and differed by as much as 105% for cross-organ irradiation. Total organ doses calculated by OLINDA/EXM and the voxel S value technique agreed with Monte Carlo results within approximately ±7%. Differences between right and left kidney doses determined by Monte Carlo were as high as 73%. Comparison of the Monte Carlo and voxel S value dose distributions showed that each method produced similar dose volume histograms with a minimum dose covering 90% of the volume (D90
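    The voxel S value technique amounts to a discrete convolution of the time-integrated activity map with a precomputed dose kernel (dose per decay as a function of source-to-target voxel distance). A 1-D sketch with invented kernel values:

```python
# Voxel S value dosimetry in 1-D: dose in voxel i is the sum over source
# voxels j of (time-integrated activity in j) * S(|i - j|).
def voxel_dose(activity, s_kernel):
    # activity: time-integrated activity per voxel
    # s_kernel: dict mapping voxel separation |i - j| to an S value
    n = len(activity)
    dose = [0.0] * n
    for i in range(n):
        for j in range(n):
            s = s_kernel.get(abs(i - j))
            if s is not None:
                dose[i] += activity[j] * s
    return dose

kernel = {0: 1.0, 1: 0.1}            # invented self-dose and neighbour terms
doses = voxel_dose([0.0, 5.0, 0.0], kernel)   # single active voxel
```

Because the kernel is precomputed for a reference medium, this is much faster than full Monte Carlo, but it cannot capture patient-specific heterogeneity, which is the trade-off the comparison above quantifies.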

  11. Steady-State Electrodiffusion from the Nernst-Planck Equation Coupled to Local Equilibrium Monte Carlo Simulations.

    Science.gov (United States)

    Boda, Dezső; Gillespie, Dirk

    2012-03-13

    We propose a procedure to compute the steady-state transport of charged particles based on the Nernst-Planck (NP) equation of electrodiffusion. To close the NP equation and to establish a relation between the concentration and electrochemical potential profiles, we introduce the Local Equilibrium Monte Carlo (LEMC) method. In this method, Grand Canonical Monte Carlo simulations are performed using the electrochemical potential specified for the distinct volume elements. An iteration procedure that self-consistently solves the NP and flux continuity equations with LEMC is shown to converge quickly. This NP+LEMC technique can be used in systems with diffusion of charged or uncharged particles in complex three-dimensional geometries, including systems with low concentrations and small applied voltages that are difficult for other particle simulation techniques.
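    The deterministic half of the NP+LEMC iteration evaluates the Nernst-Planck flux from the concentration and electrical potential profiles. The 1-D finite-difference sketch below is in dimensionless (kT/e) units and prescribes the profiles directly, whereas in the actual scheme the concentration profile would come out of the LEMC simulations.

```python
# Nernst-Planck flux on a 1-D grid (dimensionless units):
#   J = -D * (dc/dx + z * c * dphi/dx)
def np_flux(c, phi, D=1.0, z=1, dx=0.1):
    flux = []
    for i in range(len(c) - 1):
        c_mid = 0.5 * (c[i] + c[i + 1])          # midpoint concentration
        dcdx = (c[i + 1] - c[i]) / dx
        dphidx = (phi[i + 1] - phi[i]) / dx
        flux.append(-D * (dcdx + z * c_mid * dphidx))
    return flux

# Pure diffusion check: linear concentration drop, zero field, so the
# steady-state flux should be uniform along the grid.
c = [1.0 - 0.1 * i for i in range(11)]
phi = [0.0] * 11
J = np_flux(c, phi)
```

In the self-consistent iteration, a non-uniform flux signals that the concentration/potential pair is not yet at steady state, and the profiles are updated until flux continuity holds.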

  12. Planar and SPECT Monte Carlo acceleration using a variance reduction technique in 131I imaging

    International Nuclear Information System (INIS)

    Khosravi, H. R.; Sarkar, S.; Takavar, A.; Saghari, M.; Shahriari, M.

    2007-01-01

    Various variance reduction techniques such as forced detection (FD) have been implemented in Monte Carlo (MC) simulation of nuclear medicine in an effort to decrease the simulation time while keeping accuracy. However, most of these techniques still result in MC simulation times too long for routine use. Materials and Methods: The convolution-based forced detection (CFD) method was implemented as a variance reduction technique in the well-known SIMIND MC photon simulation software. A variety of simulations, including point and extended sources in uniform and non-uniform attenuation media, were performed to compare the FD and CFD versions of SIMIND modeling for the 131I radionuclide and camera configurations. Experimental measurement of the system response function was compared to FD and CFD simulation data. Results: Different simulations using the CFD method agree very well with experimental measurements as well as with the FD version. CFD simulations of the system response function and of larger sources in uniform and non-uniform attenuated phantoms also agree well with the FD version of SIMIND. Conclusion: CFD has been modeled into the SIMIND MC program and validated. With the current implementation of CFD, simulation times were approximately 10-15 times shorter, with similar accuracy and image quality compared with FD MC

  13. Verification of the Monte Carlo differential operator technique for MCNP™

    International Nuclear Information System (INIS)

    McKinney, G.W.; Iverson, J.L.

    1996-02-01

    The differential operator perturbation technique has been incorporated into the Monte Carlo N-Particle transport code MCNP and will become a standard feature of future releases. This feature includes first and second order terms of the Taylor series expansion for response perturbations related to cross-section data (i.e., density, composition, etc.). Perturbation and sensitivity analyses can benefit from this technique in that predicted changes in one or more tally responses may be obtained for multiple perturbations in a single run. The user interface is intuitive, yet flexible enough to allow for changes in a specific microscopic cross section over a specified energy range. With this technique, a precise estimate of a small change in response is easily obtained, even when the standard deviation of the unperturbed tally is greater than the change. Furthermore, results presented in this report demonstrate that first and second order terms can offer acceptable accuracy, to within a few percent, for up to 20-30% changes in a response
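    The flavor of the first- plus second-order Taylor estimate can be seen with a simple attenuation response R(Σ) = exp(-Σt), a textbook stand-in rather than MCNP's actual track-based estimator: the predicted change in the response for a cross-section perturbation ΔΣ is R'ΔΣ + ½R''ΔΣ².

```python
import math

# First- plus second-order Taylor estimate of the change in the response
# R(Sigma) = exp(-Sigma * t) under a cross-section perturbation d_sigma.
def taylor_change(sigma, t, d_sigma):
    r = math.exp(-sigma * t)
    first = -t * r * d_sigma                   # (dR/dSigma) * dSigma
    second = 0.5 * t * t * r * d_sigma ** 2    # 0.5 * (d2R/dSigma2) * dSigma^2
    return first + second

sigma, t = 1.0, 2.0
d_sigma = 0.2 * sigma                          # a 20% perturbation
approx = taylor_change(sigma, t, d_sigma)
exact = math.exp(-(sigma + d_sigma) * t) - math.exp(-sigma * t)
```

Even for this 20% perturbation the two-term estimate lands within a few percent of the exact change, consistent with the accuracy range quoted in the abstract, and it is obtained without re-running the (here trivial) "unperturbed" calculation.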

  14. Brazing techniques for side-coupled electron accelerator structures

    International Nuclear Information System (INIS)

    Hansborough, L.D.; Clark, W.L.; DePaula, R.A.; Martinez, F.A.; Roybal, P.L.; Wilkerson, L.C.; Young, L.M.

    1986-01-01

    The collaboration between the Los Alamos National Laboratory and the National Bureau of Standards (NBS), started in 1979, has led to the development of an advanced c-w microtron accelerator design. The four 2380-MHz NBS accelerating structures, containing a total of 184 accelerating cavities, have been fabricated and delivered. New fabrication methods, coupled with refinements of hydrogen-furnace brazing techniques described in this paper, allow efficient production of side-coupled structures. Success with the NBS RTM led to Los Alamos efforts on similar 2450-MHz accelerators for the microtron accelerator operated by the Nuclear Physics Department of the University of Illinois. Two accelerators (each with 17 cavities) have been fabricated; in 1986, a 45-cavity accelerator is being fabricated by private industry with some assistance from Los Alamos. Further private industry experience and refinement of the described fabrication techniques may allow future accelerators of this type to be completely fabricated by private industry

  15. Perturbative expansions from Monte Carlo simulations at weak coupling: Wilson loops and the static-quark self-energy

    International Nuclear Information System (INIS)

    Trottier, H.D.; Shakespeare, N.H.; Lepage, G.P.; Mackenzie, P.B.

    2002-01-01

    Perturbative coefficients for Wilson loops and the static-quark self-energy are extracted from Monte Carlo simulations at weak coupling. The lattice volumes and couplings are chosen to ensure that the lattice momenta are all perturbative. Twisted boundary conditions are used to eliminate the effects of lattice zero modes and to suppress nonperturbative finite-volume effects due to Z(3) phases. Simulations of the Wilson gluon action are done with both periodic and twisted boundary conditions, and over a wide range of lattice volumes (from 3^4 to 16^4) and couplings (from β ≅ 9 to β ≅ 60). A high-precision comparison is made between the simulation data and results from finite-volume lattice perturbation theory. The Monte Carlo results are shown to be in excellent agreement with perturbation theory through second order. New results for third-order coefficients for a number of Wilson loops and the static-quark self-energy are reported
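    Schematically, the extraction reduces to fitting measured values of a quantity W(α) at several weak couplings to a truncated power series W = c₁α + c₂α² + c₃α³. The sketch below uses invented coefficients and noise-free "measurements" at exactly three couplings, so the fit becomes an exact 3×3 linear solve (done here via Cramer's rule); the real analysis fits many noisy couplings by least squares.

```python
# Recover series coefficients c1, c2, c3 from W(alpha) sampled at three
# weak couplings, by solving the 3x3 linear system with Cramer's rule.
def solve3(A, b):
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(A)
    xs = []
    for k in range(3):
        Ak = [row[:] for row in A]     # replace column k with b
        for i in range(3):
            Ak[i][k] = b[i]
        xs.append(det(Ak) / d)
    return xs

c_true = (4.0, -2.0, 0.5)              # invented coefficients
alphas = (0.02, 0.05, 0.1)             # weak couplings, beta ~ 10..50
W = [c_true[0] * a + c_true[1] * a**2 + c_true[2] * a**3 for a in alphas]
A = [[a, a**2, a**3] for a in alphas]
c1, c2, c3 = solve3(A, W)
```

The practical difficulty the abstract addresses is that with statistical noise the higher-order coefficients are poorly conditioned, which is why the simulations must reach very weak coupling with high precision.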

  16. Multigroup and coupled forward-adjoint Monte Carlo calculation efficiencies for secondary neutron doses from proton beams

    International Nuclear Information System (INIS)

    Kelsey IV, Charles T.; Prinja, Anil K.

    2011-01-01

    We evaluate the Monte Carlo calculation efficiency for multigroup transport relative to continuous energy transport using the MCNPX code system to evaluate secondary neutron doses from a proton beam. We consider both fully forward simulation and application of a midway forward adjoint coupling method to the problem. Previously we developed tools for building coupled multigroup proton/neutron cross section libraries and showed consistent results for continuous energy and multigroup proton/neutron transport calculations. We observed that forward multigroup transport could be more efficient than continuous energy. Here we quantify solution efficiency differences for a secondary radiation dose problem characteristic of proton beam therapy problems. We begin by comparing figures of merit for forward multigroup and continuous energy MCNPX transport and find that multigroup is 30 times more efficient. Next we evaluate efficiency gains for coupling out-of-beam adjoint solutions with forward in-beam solutions. We use a variation of a midway forward-adjoint coupling method developed by others for neutral particle transport. Our implementation makes use of the surface source feature in MCNPX and we use spherical harmonic expansions for coupling in angle rather than solid angle binning. The adjoint out-of-beam transport for organs of concern in a phantom or patient can be coupled with numerous forward, continuous energy or multigroup, in-beam perturbations of a therapy beam line configuration. Out-of-beam dose solutions are provided without repeating out-of-beam transport. (author)

  17. Monte Carlo Techniques for the Comprehensive Modeling of Isotopic Inventories in Future Nuclear Systems and Fuel Cycles. Final Report

    International Nuclear Information System (INIS)

    Paul P.H. Wilson

    2005-01-01

    The development of Monte Carlo techniques for isotopic inventory analysis has been explored in order to facilitate the modeling of systems with flowing streams of material through varying neutron irradiation environments. This represents a novel application of Monte Carlo methods to a field that has traditionally relied on deterministic solutions to systems of first-order differential equations. The Monte Carlo techniques were based largely on the known modeling techniques of Monte Carlo radiation transport, but with important differences, particularly in the area of variance reduction and efficiency measurement. The software that was developed to implement and test these methods now provides a basis for validating approximate modeling techniques that are available to deterministic methodologies. The Monte Carlo methods have been shown to be effective in reproducing the solutions of simple problems that are possible using both stochastic and deterministic methods. The Monte Carlo methods are also effective for tracking flows of materials through complex systems, including the ability to model removal of individual elements or isotopes in the system. Computational performance is best for flows that have characteristic times that are large fractions of the system lifetime. As the characteristic times become short, leading to thousands or millions of passes through the system, the computational performance drops significantly. Further research is underway to determine modeling techniques to improve performance within this range of problems. This report describes the technical development of Monte Carlo techniques for isotopic inventory analysis. The primary motivation for this solution methodology is the ability to model systems of flowing material being exposed to varying and stochastically varying radiation environments. The methodology was developed in three stages: analog methods which model each atom with true reaction probabilities (Section 2), non-analog methods
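    An analog method in this sense samples each atom's fate with its true reaction probabilities. A minimal sketch for a single decay channel (rate and time invented), checked against the deterministic exponential solution that traditional inventory codes would produce:

```python
import math
import random

# Analog Monte Carlo inventory: each atom of A decays to stable B with
# constant rate lam; count the atoms of A surviving to time t.
def surviving_fraction(n_atoms=200_000, lam=0.3, t=2.0, seed=11):
    rng = random.Random(seed)
    survivors = 0
    for _ in range(n_atoms):
        t_decay = rng.expovariate(lam)   # sample this atom's decay time
        if t_decay > t:
            survivors += 1
    return survivors / n_atoms

frac = surviving_fraction()
exact = math.exp(-0.3 * 2.0)             # deterministic solution exp(-lam*t)
```

The appeal is that sampling atom histories generalizes naturally to atoms flowing between irradiation environments, where the deterministic system of coupled first-order equations becomes awkward; the cost is the statistical noise, which motivates the non-analog (variance-reduced) stages of the methodology.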

  18. EDITORIAL: International Workshop on Monte Carlo Techniques in Radiotherapy Delivery and Verification

    Science.gov (United States)

    Verhaegen, Frank; Seuntjens, Jan

    2008-03-01

    Monte Carlo particle transport techniques offer exciting tools for radiotherapy research, where they play an increasingly important role. Topics of research related to clinical applications range from treatment planning, motion and registration studies, brachytherapy, verification imaging and dosimetry. The International Workshop on Monte Carlo Techniques in Radiotherapy Delivery and Verification took place in a hotel in Montreal in French Canada, from 29 May-1 June 2007, and was the third workshop to be held on a related topic, which now seems to have become a tri-annual event. About one hundred workers from many different countries participated in the four-day meeting. Seventeen experts in the field were invited to review topics and present their latest work. About half of the audience was made up of young graduate students. In a very full program, 57 papers were presented and 10 posters were on display during most of the meeting. On the evening of the third day a boat trip around the island of Montreal allowed participants to enjoy the city views, and to sample the local cuisine. The topics covered at the workshop included the latest developments in the most popular Monte Carlo transport algorithms, fast Monte Carlo, statistical issues, source modeling, MC treatment planning, modeling of imaging devices for treatment verification, registration and deformation of images, and a sizeable number of contributions on brachytherapy. In this volume you will find 27 short papers resulting from the workshop on a variety of topics, some of them on very new subjects such as graphics processing units for fast computing, PET modeling, dual-energy CT, calculations in dynamic phantoms, and tomotherapy devices. We acknowledge the financial support of the National Cancer Institute of Canada, the Institute of Cancer Research of the Canadian Institutes of Health Research, the Association Québécoise des Physicien(ne)s Médicaux Clinique, the Institute of Physics, and Medical

  19. ITS Version 3.0: The Integrated TIGER Series of coupled electron/photon Monte Carlo transport codes

    International Nuclear Information System (INIS)

    Halbleib, J.A.; Kensek, R.P.; Valdez, G.D.; Mehlhorn, T.A.; Seltzer, S.M.; Berger, M.J.

    1993-01-01

    ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields. It combines operational simplicity and physical accuracy in order to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Flexibility of construction permits tailoring of the codes to specific applications and extension of code capabilities to more complex applications through simple update procedures

  20. ITS Version 3.0: The Integrated TIGER Series of coupled electron/photon Monte Carlo transport codes

    Energy Technology Data Exchange (ETDEWEB)

    Halbleib, J.A.; Kensek, R.P.; Valdez, G.D.; Mehlhorn, T.A. [Sandia National Labs., Albuquerque, NM (United States); Seltzer, S.M.; Berger, M.J. [National Inst. of Standards and Technology, Gaithersburg, MD (United States). Ionizing Radiation Div.

    1993-06-01

    ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields. It combines operational simplicity and physical accuracy in order to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Flexibility of construction permits tailoring of the codes to specific applications and extension of code capabilities to more complex applications through simple update procedures.

  1. Applications Of Monte Carlo Radiation Transport Simulation Techniques For Predicting Single Event Effects In Microelectronics

    International Nuclear Information System (INIS)

    Warren, Kevin; Reed, Robert; Weller, Robert; Mendenhall, Marcus; Sierawski, Brian; Schrimpf, Ronald

    2011-01-01

    MRED (Monte Carlo Radiative Energy Deposition) is Vanderbilt University's Geant4 application for simulating radiation events in semiconductors. Geant4 is comprised of the best available computational physics models for the transport of radiation through matter. In addition to basic radiation transport physics contained in the Geant4 core, MRED has the capability to track energy loss in tetrahedral geometric objects, includes a cross section biasing and track weighting technique for variance reduction, and additional features relevant to semiconductor device applications. The crucial element of predicting Single Event Upset (SEU) parameters using radiation transport software is the creation of a dosimetry model that accurately approximates the net collected charge at transistor contacts as a function of deposited energy. The dosimetry technique described here is the multiple sensitive volume (MSV) model. It is shown to be a reasonable approximation of the charge collection process and its parameters can be calibrated to experimental measurements of SEU cross sections. The MSV model, within the framework of MRED, is examined for heavy ion and high-energy proton SEU measurements of a static random access memory.
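    The MSV idea can be caricatured as a weighted sum of energy deposits over nested sensitive volumes, converted to collected charge. The weights, volumes, and thresholding below are invented for illustration; in MRED the weights are calibrated against measured SEU cross sections, which this sketch leaves out.

```python
# Toy multiple-sensitive-volume (MSV) charge-collection model: collected
# charge is a weighted sum of the energy deposited in each volume.
E_PER_PAIR_EV = 3.6                 # ~energy per electron-hole pair in Si, eV

def collected_charge_fc(deposits_ev, weights):
    # deposits_ev: energy deposited in each sensitive volume (eV)
    # weights: collection efficiency assigned to each volume
    pairs = sum(w * e for w, e in zip(weights, deposits_ev)) / E_PER_PAIR_EV
    return pairs * 1.602e-4         # fC per electron (1.602e-19 C = 1.602e-4 fC)

# One inner volume collecting fully, one outer volume collecting 40%:
q = collected_charge_fc([100_000.0, 50_000.0], [1.0, 0.4])
```

An upset would then be predicted whenever `q` exceeds the circuit's critical charge, so the whole SEU prediction reduces to histogramming this quantity over simulated radiation events.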

  2. Coupling of kinetic Monte Carlo simulations of surface reactions to transport in a fluid for heterogeneous catalytic reactor modeling

    International Nuclear Information System (INIS)

    Schaefer, C.; Jansen, A. P. J.

    2013-01-01

    We have developed a method to couple kinetic Monte Carlo simulations of surface reactions at a molecular scale to transport equations at a macroscopic scale. This method is applicable to steady-state reactors. We use a finite-difference upwinding scheme and a gap-tooth scheme to make efficient use of a limited number of kinetic Monte Carlo simulations. In general the stochastic kinetic Monte Carlo results do not obey mass conservation, so that unphysical accumulation of mass could occur in the reactor. We have developed a method to perform mass balance corrections that is based on a stoichiometry matrix and a least-squares problem that reduces to a non-singular set of linear equations, and that is applicable to any surface-catalyzed reaction. The implementation of these methods is validated by comparing numerical results of a reactor simulation with a unimolecular reaction to an analytical solution. Furthermore, the method is applied to two reaction mechanisms. The first is the ZGB model for CO oxidation, in which inevitable poisoning of the catalyst limits the performance of the reactor. The second is a model for the oxidation of NO on a Pt(111) surface, which becomes active due to lateral interaction at high coverages of oxygen. This reaction model is based on ab initio density functional theory calculations from the literature.
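    The mass-balance correction can be phrased as a minimal least-squares adjustment of the stochastic rate estimates subject to a stoichiometric constraint. The sketch below handles a single constraint with the closed-form Lagrange-multiplier solution; the paper's construction generalizes this to a full stoichiometry matrix, and the example reaction and rate values are invented.

```python
# Minimal mass-balance correction: adjust noisy KMC rate estimates r0 as
# little as possible (in least squares) so that the stoichiometric
# constraint sum_i c_i * r_i = 0 holds exactly.
def mass_balance_correct(r0, c):
    # minimize ||r - r0||^2 subject to c . r = 0; Lagrange solution:
    lam = sum(ci * ri for ci, ri in zip(c, r0)) / sum(ci * ci for ci in c)
    return [ri - lam * ci for ri, ci in zip(r0, c)]

# Reaction A -> B: the rate of A consumed must equal the rate of B
# produced, i.e. r_A + r_B = 0 with the sign convention below.
c = [1.0, 1.0]
r0 = [-1.02, 0.97]                  # noisy Monte Carlo estimates
r = mass_balance_correct(r0, c)     # corrected, exactly conserving rates
```

Feeding the corrected rates into the macroscopic transport step prevents the slow, unphysical accumulation of mass that raw stochastic estimates would cause over many coupling iterations.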

  3. Techniques for heavy-ion coupled-channels calculations. I. Long-range Coulomb coupling

    International Nuclear Information System (INIS)

    Rhoades-Brown, M.; Macfarlane, M.H.; Pieper, S.C.

    1980-01-01

    Direct-reaction calculations for heavy ions require special computational techniques that take advantage of the physical peculiarities of heavy-ion systems. This paper is the first of a series on quantum-mechanical coupled-channels calculations for heavy ions. It deals with the problems posed by the long range of the Coulomb coupling interaction. Our approach is to use the Alder-Pauli factorization, whereby the channel wave functions are expressed as products of Coulomb functions and modulating amplitudes. The equations for the modulating amplitudes are used to integrate inwards from infinity to a nuclear matching radius (≈ 20 fm). To adequate accuracy, the equations for the amplitudes can be reduced to first order and solved in first Born approximation. The use of the Born approximation leads to rapid recursion relations for the solutions of the Alder-Pauli equations and hence to a great reduction in computational labor. The resulting coupled-channels Coulomb functions can then be matched in the usual way to solutions of the coupled radial equations in the interior region of r space. Numerical studies demonstrate the reliability of the various techniques introduced.

  4. On Micro VAX farms and shower libraries: Monte Carlo techniques developed for the D0 detector

    International Nuclear Information System (INIS)

    Raja, R.

    1988-01-01

    In order to predict correctly the effects of cracks and dead material in a nearly hermetic calorimeter, hadronic and electromagnetic showers need to be simulated accurately on a particle-by-particle basis. Tracking all the particles of all showers in the calorimeter leads to very large CPU times (typically 5 hours on a VAX780) for events at √(s) = 2 TeV. Parametrizing the energy deposition of electromagnetic particles in showers with energy below 200 MeV reduces event times to the order of 1 hour on a VAX780. This is still unacceptably large. The D0 collaboration therefore employed a farm of 16 MicroVAX IIs to achieve acceptable throughput. The calorimeter hit patterns of each individual track were output, to be summed by a later job. These individual hit patterns were entered into a random-access shower library file, which was then used for subsequent Monte Carlo simulations. This shower library technique results in a further speed-up by a factor of 60 without significantly degrading the quality of the simulation.
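
    A shower library of this kind can be sketched as a random-access store keyed by particle type and energy bin; subsequent simulations draw a stored hit pattern instead of re-tracking the shower. The class, bin width, and hit-pattern format below are illustrative assumptions, not the D0 implementation.

```python
import random
from collections import defaultdict

class ShowerLibrary:
    """Toy sketch of a random-access shower library: hit patterns from a
    full tracking simulation are binned by particle type and energy, then
    reused by later Monte Carlo jobs instead of re-tracking each shower."""
    def __init__(self, e_bin_gev=10.0):
        self.e_bin = e_bin_gev            # energy bin width (assumed)
        self.store = defaultdict(list)

    def _key(self, particle, energy_gev):
        return (particle, int(energy_gev / self.e_bin))

    def add(self, particle, energy_gev, hit_pattern):
        self.store[self._key(particle, energy_gev)].append(hit_pattern)

    def sample(self, particle, energy_gev):
        # draw a random stored pattern from the matching (particle, bin)
        return random.choice(self.store[self._key(particle, energy_gev)])

lib = ShowerLibrary()
lib.add("pi+", 12.0, {"cell_41": 0.8, "cell_42": 0.2})
print(lib.sample("pi+", 14.0))   # same 10-20 GeV bin -> stored pattern
```

A production library would also rotate and scale the stored pattern to the incident track's direction and energy within the bin.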

  5. Reliability study of a prestressed concrete beam by Monte Carlo techniques

    International Nuclear Information System (INIS)

    Floris, C.; Migliacci, A.

    1987-01-01

    The safety of a prestressed beam is studied at the third probabilistic level, i.e., by calculating the probability of failure (P f ) under known loads. Since the beam is simply supported and subject only to loads perpendicular to its axis, only bending and shear loads are present. Since the ratio between the span and the clear height is over 20, and the shear span is thus very considerable, it can be assumed that failure occurs entirely due to the bending moment, with shear having no effect. In order to calculate P f , the probability density functions (p.d.f.s) have to be known for both the stress moment and the resisting moment. Attention here is focused on the construction of the latter. It is shown that it is practically impossible to find the required function analytically. On the other hand, numerical simulation with the help of a computer is particularly convenient. The so-called Monte Carlo techniques were chosen: they are based on the extraction of random numbers and are thus very suitable for simulating random events and quantities. (orig./HP)
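
    A minimal sketch of such a simulation, assuming (purely for illustration) normal p.d.f.s for both moments: draw random resisting and stress moments and count the fraction of trials in which resistance falls below demand.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# hypothetical distributions (kN*m); in the paper the resisting-moment
# p.d.f. is itself constructed by Monte Carlo from the section model
m_resist = rng.normal(loc=500.0, scale=40.0, size=n)
m_stress = rng.normal(loc=320.0, scale=30.0, size=n)

p_f = np.mean(m_resist < m_stress)   # failure when resistance < demand
print(p_f)
```

With these toy parameters the failure probability is of order 10⁻⁴, which shows why a large number of trials is needed to resolve small P f values.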

  6. Assessment of calibration parameters for an aerial gamma spectrometry system using Monte-Carlo technique

    International Nuclear Information System (INIS)

    Srinivasan, P.; Raman, Anand; Sharma, D.N.

    2009-01-01

    Aerial gamma spectrometry is a very effective method for quickly surveying a large area that might become contaminated following a nuclear accident or nuclear weapon fallout. The technique helps not only in identifying the contaminating radionuclides but also in assessing the magnitude and extent of the contamination. These two factors are important for the authorities to quickly plan and execute effective countermeasures and controls if required. The development and application of airborne gamma-ray spectrometry systems have been reported by several institutions and authors. The Radiation Safety Systems Division of the Bhabha Atomic Research Centre has developed an Aerial Gamma Spectrometry System (AGSS) and the corresponding surveying methodology. For an online assessment of contamination levels, it is essential to calibrate the system (AGSS), either by flying it over a known contaminated area or over a contaminated surface simulated by deploying sealed sources on the ground. AGSS has been calibrated for different detectors in aerial exercises using such simulated contamination on the ground. The calibration methodology essentially needs net photo-peak counts in selected energy windows to finally arrive at the air-to-ground correlation factors at selected flight parameters, such as altitude, speed of flight, and the time interval at which each spectrum is acquired. This paper describes the methodology to predict all the necessary parameters, such as the photon fluence at various altitudes, the photo-peak counts in different energy windows, the air-to-ground correlation factors (AGCF), and the dose rate at any height due to air-scattered gamma-ray photons. These parameters are predicted for a given source deployment matrix, detector and flying altitude using the Monte Carlo code MCNP (Monte Carlo Neutron and Photon Transport Code, CCC-200, RSIC, ORNL, Tennessee, USA). A methodology to generate the completely folded gamma ray count

  7. Technical Note: On the efficiency of variance reduction techniques for Monte Carlo estimates of imaging noise.

    Science.gov (United States)

    Sharma, Diksha; Sempau, Josep; Badano, Aldo

    2018-02-01

    Monte Carlo simulations require a large number of histories to obtain reliable estimates of the quantity of interest and its associated statistical uncertainty. Numerous variance reduction techniques (VRTs) have been employed to increase computational efficiency by reducing the statistical uncertainty. We investigate the effect of two VRTs for optical transport methods on accuracy and computing time for the estimation of variance (noise) in x-ray imaging detectors. In the first, we preferentially alter the direction of the optical photons to increase detection probability. In the second, we follow only a fraction of the total optical photons generated. In both techniques, the statistical weight of photons is altered to maintain the signal mean. We use fastdetect2, an open-source, freely available optical transport routine from the hybridmantis package. We simulate VRTs for a variety of detector models and energy sources. The imaging data from the VRT simulations are then compared to the analog case (no VRT) using pulse-height spectra, the Swank factor, and the variance of the Swank estimate. We analyze the effect of VRTs on the statistical uncertainty associated with Swank factors. VRTs increased the relative efficiency by as much as a factor of 9. We demonstrate that we can achieve the same variance of the Swank factor with less computing time. With this approach, the simulations can be stopped when the variance of the variance estimates reaches the desired level of uncertainty. We implemented analytic estimates of the variance of the Swank factor and demonstrated the effect of VRTs on image quality calculations. Our findings indicate that the Swank factor is dominated by the x-ray interaction profile as compared to the additional uncertainty introduced in the optical transport by the use of VRTs. For simulation experiments that aim at reducing the uncertainty in the Swank factor estimate, any of the proposed VRTs can be used for increasing the relative
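
    The second VRT, following only a fraction of the optical photons while scaling the statistical weight to preserve the signal mean, can be sketched as follows; the detector model is reduced to a single detection probability, an illustrative simplification of the optical transport in fastdetect2.

```python
import random

def detected_signal(n_optical, p_detect, follow_fraction, rng):
    """Follow only `follow_fraction` of the generated optical photons;
    each survivor carries weight 1/follow_fraction so the expected
    detected signal equals that of the analog (no-VRT) simulation."""
    signal = 0.0
    w = 1.0 / follow_fraction
    for _ in range(n_optical):
        if rng.random() > follow_fraction:
            continue                    # photon discarded up front
        if rng.random() < p_detect:     # transported photon is detected
            signal += w                 # scored with its weight
    return signal

rng = random.Random(1)
runs = [detected_signal(1000, 0.3, 0.1, rng) for _ in range(2000)]
print(sum(runs) / len(runs))   # mean ~300, same as the analog case
```

The mean is preserved while only 10% of the optical photons are transported; the price, as in the paper, is extra variance per history, which is what the efficiency analysis has to weigh against the saved computing time.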

  8. SRNA-2K5, Proton Transport Using 3-D by Monte Carlo Techniques

    International Nuclear Information System (INIS)

    Ilic, Radovan D.

    2005-01-01

    1 - Description of program or function: SRNA-2K5 performs Monte Carlo transport simulation of protons in a 3-D source and a 3-D geometry of arbitrary materials. The proton transport is based on a condensed-history model and on a model of the decay of the compound nuclei created in nonelastic nuclear interactions by proton absorption. 2 - Methods: The SRNA-2K5 package was developed for time-independent simulation of proton transport by Monte Carlo techniques for numerical experiments in complex geometry, using PENGEOM from PENELOPE, with different material compositions and an arbitrary spectrum of protons generated from the 3-D source. The package was developed for 3-D proton dose distributions in proton therapy and dosimetry, and it is based on the theory of multiple scattering. The decay of compound nuclei is simulated by our own and the Russian MSDM models, using ICRU 49 and ICRU 63 data. If the proton trajectory is divided into a great number of steps, the proton's passage can be simulated according to Berger's condensed random walk model. Conditions on the angular distribution and on the fluctuation of the energy loss determine the step length. The physical picture of these processes is described by the stopping power, Moliere's angular distribution, Vavilov's distribution with Sulek's correction over all electron orbits, and Chadwick's cross sections for nonelastic nuclear interactions, obtained with his GNASH code.
According to this physical picture of the proton's passage, and with the probabilities of proton transitions from one stage to the next prepared by the SRNADAT program, the simulation of proton transport in all SRNA codes runs according to the usual Monte Carlo scheme: (i) a proton is emitted from the source, with its energy, position and space angle chosen at random from the prepared spectrum; (ii) the proton loses the average energy on the step; (iii) on that step, the proton experiences a great number of collisions and changes its direction of movement randomly, chosen from the angular distribution; (iv) a random fluctuation is added to the average energy loss; (v
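
    The (i)-(v) scheme can be caricatured in a few lines: per step, subtract the mean energy loss, add a random straggling fluctuation, and deflect the direction. The 1/E stopping power and the Gaussian stand-ins for the Vavilov and Moliere distributions below are illustrative placeholders, not SRNA's physics data.

```python
import math, random

def transport_proton(e0_mev, step_cm=0.05, rng=random.Random(42)):
    """Toy condensed-history loop in the spirit of steps (i)-(v):
    per step the proton loses the mean energy plus a random straggling
    fluctuation, and its direction receives a small scattering kick."""
    e, depth, theta = e0_mev, 0.0, 0.0
    while e > 1.0:                           # track down to a 1 MeV cutoff
        mean_loss = (700.0 / e) * step_cm    # crude S(E) ~ 1/E (MeV/cm)
        loss = mean_loss + rng.gauss(0.0, 0.1 * mean_loss)  # straggling
        theta += rng.gauss(0.0, 0.01)        # multiple-scattering kick
        depth += step_cm * math.cos(theta)
        e -= loss
    return depth                             # ~ penetration depth in cm

print(round(transport_proton(150.0), 1))
```

With these assumed constants a 150 MeV proton stops after roughly 16 cm, illustrating why the fluctuation term, not just the mean stopping power, shapes the distal edge of the dose distribution.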

  9. A review of Monte Carlo techniques used in various fields of radiation protection

    International Nuclear Information System (INIS)

    Koblinger, L.

    1987-06-01

    Monte Carlo methods and their utilization in radiation protection are overviewed. Basic principles and the most frequently used sampling methods are described. Examples range from the simulation of the random walk of photons and neutrons to neutron spectrum unfolding. (author)
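
    One of the most frequently used sampling methods surveyed in such reviews, inverse-transform sampling, is easily illustrated for the exponential free-path distribution that underlies the random walk of photons and neutrons:

```python
import math, random

def sample_free_path(mu, rng):
    """Inverse-transform sampling of the exponential free-path p.d.f.
    p(s) = mu * exp(-mu * s): set u = CDF(s) = 1 - exp(-mu*s), solve for s."""
    return -math.log(1.0 - rng.random()) / mu

rng = random.Random(7)
mu = 0.2  # total macroscopic cross section, 1/cm
paths = [sample_free_path(mu, rng) for _ in range(200_000)]
print(sum(paths) / len(paths))   # ~ mean free path 1/mu = 5 cm
```

The same recipe (invert the cumulative distribution at a uniform random number) covers many of the standard sampling tasks in radiation protection simulations.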

  10. Monte Carlo Modeling of Dual and Triple Photon Energy Absorptiometry Technique

    Directory of Open Access Journals (Sweden)

    Alireza Kamali-Asl

    2007-12-01

    Introduction: Osteoporosis is a bone disease in which there is a reduction in the amount of bone mineral content, leading to an increased risk of bone fractures. The affected individuals not only go through a great deal of pain and suffering, but the disease also imposes high economic costs on society due to the large number of fractures. A timely and accurate diagnosis of this disease makes it possible to start treatment and thus prevent bone fractures resulting from osteoporosis. Radiographic methods are particularly well suited for in vivo determination of bone mineral density (BMD) due to the relatively high x-ray absorption of bone mineral compared to other tissues. Materials and Methods: A Monte Carlo simulation has been conducted to explore the possibilities of triple photon energy absorptiometry (TPA) in the measurement of bone mineral content. The purpose of this technique is to correctly measure bone mineral density in the presence of fatty and soft tissues. The same simulations have been done for a dual photon energy absorptiometry (DPA) system and an extended DPA system. Results: Using DPA with three components improves the accuracy of the obtained result, while the simulation results show that the TPA system is not accurate enough to be considered an adequate method for the measurement of bone mineral density. Discussion: The reason for the improvement in accuracy is the consideration of fatty tissue in the TPA method, while having an attenuation coefficient that is a function of energy makes TPA an inadequate method. Conclusion: Using the TPA method is not a perfect solution to overcome the problem of non-uniformity in the distribution of fatty tissue.
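
    The principle behind DPA can be shown as a worked example: with two photon energies and two unknown thicknesses, the log-attenuation measurements form a 2x2 linear system. The attenuation coefficients and thicknesses below are illustrative numbers, not tabulated values; a third tissue component (fat) would require a third energy, which is the motivation for TPA.

```python
import numpy as np

# hypothetical linear attenuation coefficients (1/cm) of bone mineral
# and soft tissue at two photon energies; real values come from tables
mu = np.array([[0.57, 0.25],    # low energy:  [bone, soft]
               [0.30, 0.18]])   # high energy: [bone, soft]

t_true = np.array([1.2, 8.0])   # cm of bone and soft tissue ("unknowns")

# simulated noise-free transmission measurements: -ln(I/I0) per energy
log_atten = mu @ t_true
t_solved = np.linalg.solve(mu, log_atten)
print(t_solved)                 # recovers the two thicknesses
```

In reality the coefficients vary with energy across the spectrum and the tissue composition, which is exactly the energy-dependence problem the abstract identifies as limiting TPA.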

  11. Application of an efficient materials perturbation technique to Monte Carlo photon transport calculations in borehole logging

    International Nuclear Information System (INIS)

    Picton, D.J.; Harris, R.G.; Randle, K.; Weaver, D.R.

    1995-01-01

    This paper describes a simple, accurate and efficient technique for the calculation of materials perturbation effects in Monte Carlo photon transport calculations. It is particularly suited to the application for which it was developed, namely the modelling of a dual detector density tool as used in borehole logging. However, the method would be appropriate to any photon transport calculation in the energy range 0.1 to 2 MeV, in which the predominant processes are Compton scattering and photoelectric absorption. The method enables a single set of particle histories to provide results for an array of configurations in which material densities or compositions vary. It can calculate the effects of small perturbations very accurately, but is by no means restricted to such cases. For the borehole logging application described here the method has been found to be efficient for a moderate range of variation in the bulk density (of the order of ±30% from a reference value) or even larger changes to a limited portion of the system (e.g. a low density mudcake of the order of a few tens of mm in thickness). The effective speed enhancement over an equivalent set of individual calculations is in the region of an order of magnitude or more. Examples of calculations on a dual detector density tool are given. It is demonstrated that the method predicts, to a high degree of accuracy, the variation of detector count rates with formation density, and that good results are also obtained for the effects of mudcake layers. An interesting feature of the results is that relative count rates (the ratios of count rates obtained with different configurations) can usually be determined more accurately than the absolute values of the count rates. (orig.)
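
    The idea of obtaining results for an array of perturbed configurations from a single set of histories can be demonstrated on a deliberately simple case, a purely absorbing slab with no scattering: histories are sampled at a reference attenuation coefficient and rescored for each perturbed value with the likelihood ratio of the history. This toy is an illustration of correlated sampling in general, not the paper's density-tool model.

```python
import math, random

def perturbed_transmission(mu_ref, mu_list, slab_cm, n, rng):
    """One set of histories sampled at mu_ref; each perturbed coefficient
    is scored by reweighting a transmitted history with its likelihood
    ratio exp(-(mu - mu_ref) * slab_cm)."""
    scores = [0.0] * len(mu_list)
    for _ in range(n):
        s = -math.log(1.0 - rng.random()) / mu_ref  # free path at mu_ref
        if s > slab_cm:                             # transmitted history
            for i, mu in enumerate(mu_list):
                scores[i] += math.exp(-(mu - mu_ref) * slab_cm)
    return [sc / n for sc in scores]

rng = random.Random(3)
# +/-20% density perturbations around the reference mu = 0.5 / cm
result = perturbed_transmission(0.5, [0.4, 0.5, 0.6], 2.0, 400_000, rng)
print([round(t, 3) for t in result])   # analytic: exp(-mu * L) per mu
```

As in the paper, the perturbed estimates share the same histories, so their ratios are far less noisy than independent runs would give.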

  12. Kinetic Monte Carlo modeling of chemical reactions coupled with heat transfer.

    Science.gov (United States)

    Castonguay, Thomas C; Wang, Feng

    2008-03-28

    In this paper, we describe two types of effective events for describing heat transfer in a kinetic Monte Carlo (KMC) simulation that may involve stochastic chemical reactions. Simulations employing these events are referred to as KMC-TBT and KMC-PHE. In KMC-TBT, heat transfer is modeled as the stochastic transfer of "thermal bits" between adjacent grid points. In KMC-PHE, heat transfer is modeled by integrating the Poisson heat equation for a short time. Either approach is capable of capturing the time-dependent system behavior exactly. Both KMC-PHE and KMC-TBT are validated by simulating pure heat transfer in a rod and a square and by modeling a heated desorption problem for which exact numerical results are available. KMC-PHE is much faster than KMC-TBT and is used to study the endothermic desorption of a lattice gas. Interesting findings from this study are reported.
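
    A toy version of such a coupled loop, in the spirit of KMC-PHE but with a single relaxation step standing in for the Poisson heat equation, might look like this; the Arrhenius parameters, per-event cooling, and bath coupling are all assumed values, not the paper's model.

```python
import math, random

def kmc_desorption(n_sites=400, t_bath=600.0, e_a=1.0, rng=random.Random(5)):
    """Toy KMC loop coupled to heat transfer: Arrhenius desorption rates
    depend on the local temperature, which each endothermic event cools
    and which relaxes toward the bath between events (a crude stand-in
    for integrating the heat equation over the KMC time step)."""
    kb = 8.617e-5          # Boltzmann constant, eV/K
    nu = 1e13              # attempt frequency, 1/s
    temp, time, occupied = t_bath, 0.0, n_sites
    while occupied:
        rate = occupied * nu * math.exp(-e_a / (kb * temp))   # total rate
        dt = -math.log(1.0 - rng.random()) / rate             # KMC step
        time += dt
        occupied -= 1                          # one desorption event
        temp -= 0.05                           # endothermic cooling (assumed)
        temp += (t_bath - temp) * min(1.0, dt * 1.0e4)  # heat from bath
    return time

print(kmc_desorption())   # total time for the layer to desorb, seconds
```

The feedback is the point: cooling slows the remaining desorption events, so the emptying time is longer than an isothermal KMC run would predict.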

  13. Dynamic Monte Carlo rate constants for magnetic Hamiltonians coupled to a phonon bath

    Science.gov (United States)

    Solomon, Lazarus; Novotny, Mark

    2007-03-01

    For quantitative comparisons between experimental time-dependent measurements and dynamic Monte Carlo simulations, a relation between the time constant in the simulation and real time is necessary. We calculate the transition rate for a spin-S system using the lattice-frame method for a rigid spin cluster in an elastic medium [1]. We compare this with the transition rate for an Ising spin-1/2 system using the quantum-mechanical density-matrix method [2], and with the results of refs. [1,3]. These transition probabilities differ from those of either the Glauber or the Metropolis dynamics, and reflect the properties of the bosonic bath. Comparison with recent experiments [4] will be discussed. [1] E. M. Chudnovsky, D. A. Garanin, and R. Schilling (PRB 72, 2006) [2] K. Park, M. A. Novotny, and P. A. Rikvold (PRE 66, 2002) [3] K. Saito, S. Takesue, and S. Miyashita (PRE 61, 2002) [4] T. Meunier et al (Condensed Matter, 2006)

  14. Probabilistic approach of resource assessment in Kerinci geothermal field using numerical simulation coupling with monte carlo simulation

    Science.gov (United States)

    Hidayat, Iki; Sutopo; Pratama, Heru Berian

    2017-12-01

    The Kerinci geothermal field is a single-phase liquid reservoir system in the Kerinci District, in the western part of Jambi Province. In this field, there are geothermal prospects identified by heat-source upflow inside a national park area. Pertamina Geothermal Energy planned to develop 1×55 MWe in the Kerinci field. To characterize the reservoir, a numerical simulation of the Kerinci field was developed using the TOUGH2 software with information from the conceptual model. The pressure and temperature well-profile data of KRC-B1 are validated against the simulation data to reach the natural-state condition, with a suitable match as the result. Based on the natural-state simulation, the resource of the Kerinci geothermal field is estimated using Monte Carlo simulation, with P10-P50-P90 results of 49.4 MW, 64.3 MW and 82.4 MW, respectively. This is the first study in which the resource of the Kerinci geothermal field has been estimated using numerical simulation coupled with Monte Carlo simulation.
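
    The probabilistic step can be sketched as a volumetric Monte Carlo estimate: sample the uncertain inputs, form the product, and read off percentiles. The triangular ranges and power density below are illustrative assumptions, not the calibrated Kerinci model (which derives its inputs from the TOUGH2 natural-state simulation).

```python
import numpy as np

rng = np.random.default_rng(11)
n = 100_000

# hypothetical triangular input distributions (min, mode, max)
area_km2    = rng.triangular(8.0, 12.0, 16.0, n)    # reservoir area
thick_km    = rng.triangular(0.8, 1.0, 1.2, n)      # reservoir thickness
mwe_per_km3 = rng.triangular(4.0, 6.0, 8.0, n)      # recoverable MWe/km^3

mwe = area_km2 * thick_km * mwe_per_km3             # per-trial resource
p10, p50, p90 = np.percentile(mwe, [10, 50, 90])
print(round(p10, 1), round(p50, 1), round(p90, 1))
```

Here the paper's convention is followed: P10 is the conservative (10th-percentile) estimate and P90 the optimistic one, so the three values increase.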

  15. Comparison of discrete ordinate and Monte Carlo simulations of polarized radiative transfer in two coupled slabs with different refractive indices.

    Science.gov (United States)

    Cohen, D; Stamnes, S; Tanikawa, T; Sommersten, E R; Stamnes, J J; Lotsberg, J K; Stamnes, K

    2013-04-22

    A comparison is presented of two different methods for polarized radiative transfer in coupled media consisting of two adjacent slabs with different refractive indices, each slab being a stratified medium with no change in optical properties except in the direction of stratification. One of the methods is based on solving the integro-differential radiative transfer equation for the two coupled slabs using the discrete ordinate approximation. The other method is based on probabilistic and statistical concepts and simulates the propagation of polarized light using the Monte Carlo approach. The emphasis is on non-Rayleigh scattering for particles in the Mie regime. Comparisons with benchmark results available for a slab with constant refractive index show that both methods reproduce these benchmark results when the refractive index is set to be the same in the two slabs. Computed results for test cases with coupling (different refractive indices in the two slabs) show that the two methods produce essentially identical results for identical input in terms of absorption and scattering coefficients and scattering phase matrices.

  16. Multi-Scale Coupling Between Monte Carlo Molecular Simulation and Darcy-Scale Flow in Porous Media

    KAUST Repository

    Saad, Ahmed Mohamed

    2016-06-01

    In this work, an efficient coupling between Monte Carlo (MC) molecular simulation and Darcy-scale flow in porous media is presented. A cell-centered finite-difference method with a non-uniform rectangular mesh was used to discretize the simulation domain and solve the governing equations. To speed up the MC simulations, we implemented a recently developed scheme that quickly generates MC Markov chains out of pre-computed ones, based on a reweighting and reconstruction algorithm. This method dramatically reduces the computational time required by the MC simulations, from hours to seconds. To demonstrate the strength of the proposed coupling in terms of computational-time efficiency and numerical accuracy of the fluid properties, various numerical experiments covering different compressible single-phase flow scenarios were conducted. The novelty of the introduced scheme lies in allowing an efficient coupling of the molecular scale and the Darcy scale in reservoir simulators. This leads to an accurate description of the thermodynamic behavior of the simulated reservoir fluids, consequently enhancing confidence in the flow predictions in porous media.
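
    The reweighting idea, producing estimates for a new thermodynamic state from a pre-computed chain instead of re-running the sampler, can be illustrated in its simplest form: reweight energies sampled at one inverse temperature to another. The exponential energy distribution below is a toy stand-in for a pre-computed MC chain, not the paper's reconstruction algorithm.

```python
import math, random

def reweight_mean_energy(energies, beta0, beta):
    """Estimate <E> at inverse temperature beta from samples generated
    at beta0, by importance reweighting with w = exp(-(beta - beta0) * E)."""
    ws = [math.exp(-(beta - beta0) * e) for e in energies]
    return sum(e * w for e, w in zip(energies, ws)) / sum(ws)

rng = random.Random(9)
beta0 = 1.0
# stand-in for a pre-computed chain: energies Boltzmann-distributed at beta0
chain = [-math.log(1.0 - rng.random()) / beta0 for _ in range(200_000)]

print(reweight_mean_energy(chain, beta0, 1.25))   # ~ 1/1.25 = 0.8
```

Reusing one chain this way is what turns an hours-long MC run into a seconds-long post-processing step, provided the new state overlaps the sampled one well enough for the weights to behave.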

  17. FIFRELIN - TRIPOLI-4® coupling for Monte Carlo simulations with a fission model. Application to shielding calculations

    Science.gov (United States)

    Petit, Odile; Jouanne, Cédric; Litaize, Olivier; Serot, Olivier; Chebboubi, Abdelhazize; Pénéliau, Yannick

    2017-09-01

    TRIPOLI-4® Monte Carlo transport code and FIFRELIN fission model have been coupled by means of external files so that neutron transport can take into account fission distributions (multiplicities and spectra) that are not averaged, as is the case when using evaluated nuclear data libraries. Spectral effects on responses in shielding configurations with fission sampling are then expected. In the present paper, the principle of this coupling is detailed and a comparison between TRIPOLI-4® fission distributions at the emission of fission neutrons is presented when using JEFF-3.1.1 evaluated data or FIFRELIN data generated either through a n/g-uncoupled mode or through a n/g-coupled mode. Finally, an application to a modified version of the ASPIS benchmark is performed and the impact of using FIFRELIN data on neutron transport is analyzed. Differences noticed on average reaction rates on the surfaces closest to the fission source are mainly due to the average prompt fission spectrum. Moreover, when working with the same average spectrum, a complementary analysis based on non-average reaction rates still shows significant differences that point out the real impact of using a fission model in neutron transport simulations.

  18. Exploiting neurovascular coupling: a Bayesian sequential Monte Carlo approach applied to simulated EEG fNIRS data

    Science.gov (United States)

    Croce, Pierpaolo; Zappasodi, Filippo; Merla, Arcangelo; Chiarelli, Antonio Maria

    2017-08-01

    Objective. Electrical and hemodynamic brain activity are linked through the neurovascular coupling process and they can be simultaneously measured through integration of electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS). Thanks to the lack of electro-optical interference, the two procedures can be easily combined and, whereas EEG provides electrophysiological information, fNIRS can provide measurements of two hemodynamic variables, such as oxygenated and deoxygenated hemoglobin. A Bayesian sequential Monte Carlo approach (particle filter, PF) was applied to simulated recordings of electrical and neurovascular mediated hemodynamic activity, and the advantages of a unified framework were shown. Approach. Multiple neural activities and hemodynamic responses were simulated in the primary motor cortex of a subject brain. EEG and fNIRS recordings were obtained by means of forward models of volume conduction and light propagation through the head. A state space model of combined EEG and fNIRS data was built and its dynamic evolution was estimated through a Bayesian sequential Monte Carlo approach (PF). Main results. We showed the feasibility of the procedure and the improvements in both electrical and hemodynamic brain activity reconstruction when using the PF on combined EEG and fNIRS measurements. Significance. The investigated procedure allows one to combine the information provided by the two methodologies, and, by taking advantage of a physical model of the coupling between electrical and hemodynamic response, to obtain a better estimate of brain activity evolution. Despite the high computational demand, application of such an approach to in vivo recordings could fully exploit the advantages of this combined brain imaging technology.
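
    A bootstrap particle filter, the simplest sequential Monte Carlo scheme of the kind applied in the paper, can be sketched as predict-weight-resample; the random-walk state model and Gaussian observation noise below are toy stand-ins for the neurovascular state-space model built from the EEG and fNIRS forward models.

```python
import math, random

def particle_filter(obs, n=2000, q=0.1, r=0.5, rng=random.Random(2)):
    """Minimal bootstrap (sequential Monte Carlo) filter: random-walk
    prediction, Gaussian likelihood weighting, multinomial resampling.
    Returns the posterior-mean state estimate at each time step."""
    parts = [rng.gauss(0.0, 1.0) for _ in range(n)]
    means = []
    for y in obs:
        parts = [p + rng.gauss(0.0, q) for p in parts]        # predict
        ws = [math.exp(-0.5 * ((y - p) / r) ** 2) for p in parts]
        means.append(sum(p * w for p, w in zip(parts, ws)) / sum(ws))
        parts = rng.choices(parts, weights=ws, k=n)           # resample
    return means

rng = random.Random(4)
obs = [1.0 + rng.gauss(0.0, 0.5) for _ in range(60)]   # noisy observations
est = particle_filter(obs)
print(round(est[-1], 2))   # posterior mean tracks the true level ~1.0
```

In the paper's unified framework, the weighting step would combine two likelihoods, one for the EEG observation and one for the fNIRS observation, so each modality constrains the same latent state.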

  19. Efficiency of the delta-tracking technique for Monte Carlo calculations applied to neutron-transport simulations of the advanced Candu reactor design

    International Nuclear Information System (INIS)

    Arsenault, Benoit; Le Tellier, Romain; Hebert, Alain

    2008-01-01

    The paper presents the results of a first implementation of a Monte Carlo module in DRAGON Version 4 based on the delta-tracking technique. The Monte Carlo module uses the geometry and the self-shielded multigroup cross sections calculated with a deterministic model. The module has been tested with three different configurations of an ACR™-type lattice. The paper also discusses the impact of this approach on the efficiency of the Monte Carlo module. (authors)
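
    Delta-tracking (Woodcock tracking) itself is compact enough to sketch: sample flight distances with a majorant cross section and accept a real collision with probability mu(x)/mu_max, so the tracking never needs surface-crossing logic. The 1-D three-region geometry below is an illustrative toy, not an ACR lattice.

```python
import math, random

def delta_track_collision(cells, mu_max, rng):
    """Woodcock (delta-tracking) sampling of a real collision site along
    a ray through a 1-D multi-region geometry.  `cells` is a list of
    (right_edge, mu) pairs; mu_max must majorize every mu."""
    x = 0.0
    while True:
        x += -math.log(1.0 - rng.random()) / mu_max   # tentative flight
        mu_here = next((mu for edge, mu in cells if x < edge), 0.0)
        if mu_here == 0.0:
            return None                               # beyond slab: escaped
        if rng.random() < mu_here / mu_max:           # real collision?
            return x                                  # else: virtual, go on

# toy geometry: a dense middle region between two thin ones
cells = [(1.0, 0.3), (2.0, 1.2), (3.0, 0.3)]
rng = random.Random(8)
sites = [delta_track_collision(cells, 1.2, rng) for _ in range(50_000)]
hits = [s for s in sites if s is not None]
frac_mid = sum(1 for s in hits if 1.0 <= s < 2.0) / len(hits)
print(round(frac_mid, 2))   # most collisions land in the dense middle cell
```

Because rejected ("virtual") collisions replace boundary bookkeeping, the technique pairs naturally with geometry and cross sections imported from a deterministic code, as in the DRAGON module.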

  20. Imprecision of dose predictions for radionuclides released to the environment: an application of a Monte Carlo simulation technique

    Energy Technology Data Exchange (ETDEWEB)

    Schwarz, G; Hoffman, F O

    1980-01-01

    An evaluation of the imprecision in dose predictions for radionuclides has been performed using correct dose assessment models and knowledge of the uncertainties in model parameter values. The propagation of parameter uncertainties is demonstrated using a Monte Carlo technique for elemental iodine-131 transported via the pasture-cow-milk-child pathway. The results indicate that when site-specific information is unavailable, the imprecision inherent in the predictions for this pathway is potentially large. (3 graphs, 25 references, 5 tables)
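
    Propagation of parameter uncertainty through a multiplicative pathway model is straightforward to sketch: sample each transfer parameter from its distribution and examine the spread of the product. The lognormal medians and geometric standard deviations below are illustrative, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(17)
n = 100_000

def lognorm(median, gsd, size):
    """Lognormal samples parameterized by median and geometric std. dev."""
    return rng.lognormal(np.log(median), np.log(gsd), size)

# hypothetical parameter distributions for a pasture-cow-milk-child chain
deposition   = lognorm(1.0, 2.0, n)    # deposition on pasture (rel. units)
interception = lognorm(0.25, 1.5, n)   # fraction retained on grass
feed_to_milk = lognorm(0.01, 2.5, n)   # feed-to-milk transfer coefficient
intake_dose  = lognorm(1.0, 1.3, n)    # child intake x dose factor (rel.)

dose = deposition * interception * feed_to_milk * intake_dose
print(np.percentile(dose, 97.5) / np.percentile(dose, 2.5))
```

The printed ratio, the span of the central 95% interval, is roughly two orders of magnitude for these inputs, which is the kind of "potentially large" imprecision the abstract refers to when site-specific data are unavailable.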

  1. A Monte Carlo method and finite volume method coupled optical simulation method for parabolic trough solar collectors

    International Nuclear Information System (INIS)

    Liang, Hongbo; Fan, Man; You, Shijun; Zheng, Wandong; Zhang, Huan; Ye, Tianzhen; Zheng, Xuejing

    2017-01-01

    Highlights: •Four optical models for parabolic trough solar collectors were compared in detail. •Characteristics of the Monte Carlo Method and the Finite Volume Method were discussed. •A novel method was presented combining the advantages of the different models. •The method is suited to optical analysis of collectors with different geometries. •A new kind of cavity receiver was simulated using the novel method. -- Abstract: The PTC (parabolic trough solar collector) is widely used for space heating, heat-driven refrigeration, solar power, etc. The concentrated solar radiation is the only energy source for a PTC, so its optical performance significantly affects the collector efficiency. In this study, four different optical models were constructed, validated and compared in detail. On this basis, a novel coupled method was presented that combines the advantages of these models and is suited to carrying out a large number of optical simulations of collectors with different geometrical parameters rapidly and accurately. Based on these simulation results, the optimal configuration of a collector with the highest efficiency can be determined; the method is thus useful for collector optimization and design. In the four models, MCM (Monte Carlo Method) and FVM (Finite Volume Method) were used to initialize the photon distribution, while CPEM (Change Photon Energy Method) and MCM were adopted to describe the processes of reflection, transmission and absorption. For simulating reflection, transmission and absorption, CPEM was more efficient than MCM, so it was utilized in the coupled method. For initialization of the photon distribution, FVM saved running time and computational effort, but it needed a suitable grid configuration; MCM only required a total number of rays for the simulation, but it had a higher computing cost and its results fluctuated over multiple runs. In the novel coupled method, the grid configuration for FVM was optimized according to the “true values” from MCM of

  2. Monte Carlo analysis of a control technique for a tunable white lighting system

    DEFF Research Database (Denmark)

    Chakrabarti, Maumita; Thorseth, Anders; Jepsen, Jørgen

    2017-01-01

    A simulated colour control mechanism for a multi-coloured LED lighting system is presented. The system achieves adjustable and stable white light output and allows for system-to-system reproducibility after application of the control mechanism. The control unit works using a pre-calibrated lookup table for an experimentally realized system, with a calibrated tristimulus colour sensor. A Monte Carlo simulation is used to examine the system performance concerning the variation of luminous flux and chromaticity of the light output. The inputs to the Monte Carlo simulation are variations of the LED peak wavelength, the LED rated luminous flux bin, the influence of the operating conditions, ambient temperature, driving current, and the spectral response of the colour sensor. The system performance is investigated by evaluating the outputs from the Monte Carlo simulation. The outputs show....

  3. Stability of nanocrystalline Ni-based alloys: coupling Monte Carlo and molecular dynamics simulations

    Science.gov (United States)

    Waseda, O.; Goldenstein, H.; Silva, G. F. B. Lenz e.; Neiva, A.; Chantrenne, P.; Morthomas, J.; Perez, M.; Becquart, C. S.; Veiga, R. G. A.

    2017-10-01

    The thermal stability of nanocrystalline Ni due to small additions of Mo or W (up to 1 at%) was investigated in computer simulations by means of a combined Monte Carlo (MC)/molecular dynamics (MD) two-step approach. In the first step, energy-biased on-lattice MC revealed segregation of the alloying elements to grain boundaries. However, the condition for the thermodynamic stability of these nanocrystalline Ni alloys (zero grain boundary energy) was not fulfilled. Subsequently, MD simulations were carried out for up to 0.5 μs at 1000 K. At this temperature, grain growth was hindered for minimum global concentrations of 0.5 at% W and 0.7 at% Mo, thus preserving most of the nanocrystalline structure. This is in clear contrast to a pure Ni model system, for which the transformation into a monocrystal was observed in MD simulations within 0.2 μs at the same temperature. These results suggest that grain boundary segregation of low-soluble alloying elements in low-alloyed systems can produce high-temperature metastable nanocrystalline materials. MD simulations carried out at 1200 K for 1 at% Mo/W showed significant grain boundary migration accompanied by some degree of solute diffusion, thus providing additional evidence that solute drag mostly contributed to the nanostructure stability observed at lower temperature.

  4. Diagrammatic Monte Carlo for the weak-coupling expansion of non-Abelian lattice field theories: Large-N U(N)×U(N) principal chiral model

    Science.gov (United States)

    Buividovich, P. V.; Davody, A.

    2017-12-01

    We develop numerical tools for diagrammatic Monte Carlo simulations of non-Abelian lattice field theories in the 't Hooft large-N limit based on the weak-coupling expansion. First, we note that the path integral measure of such theories contributes a bare mass term in the effective action which is proportional to the bare coupling constant. This mass term renders the perturbative expansion infrared-finite and allows us to study it directly in the large-N and infinite-volume limits using the diagrammatic Monte Carlo approach. On the exactly solvable example of a large-N O(N) sigma model in D = 2 dimensions we show that this infrared-finite weak-coupling expansion contains, in addition to powers of bare coupling, also powers of its logarithm, reminiscent of resummed perturbation theory in thermal field theory and resurgent trans-series without exponential terms. We numerically demonstrate the convergence of these double series to the manifestly nonperturbative dynamical mass gap. We then develop a diagrammatic Monte Carlo algorithm for sampling planar diagrams in the large-N matrix field theory, and apply it to study this infrared-finite weak-coupling expansion for the large-N U(N)×U(N) nonlinear sigma model (principal chiral model) in D = 2. We sample up to 12 leading orders of the weak-coupling expansion, which is the practical limit set by the increasingly strong sign problem at high orders. Comparing diagrammatic Monte Carlo with conventional Monte Carlo simulations extrapolated to infinite N, we find a good agreement for the energy density as well as for the critical temperature of the "deconfinement" transition. Finally, we comment on the applicability of our approach to planar QCD at zero and finite density.

  5. Validation of the coupling of mesh models to GEANT4 Monte Carlo code for simulation of internal sources of photons

    International Nuclear Information System (INIS)

    Caribe, Paulo Rauli Rafeson Vasconcelos; Cassola, Vagner Ferreira; Kramer, Richard; Khoury, Helen Jamil

    2013-01-01

    The use of three-dimensional models described by polygonal meshes in numerical dosimetry enables more accurate modeling of complex objects than the use of simple solids. The objectives of this work were to validate the coupling of mesh models to the Monte Carlo code GEANT4 and to evaluate the influence of the number of vertices in the simulations used to obtain absorbed fractions of energy (AFEs). Validation of the coupling was performed for internal photon sources with energies between 10 keV and 1 MeV, using spherical geometries described by GEANT4 solids and three-dimensional models with different numbers of vertices and triangular or quadrilateral faces modeled in the Blender program. As a result, no significant differences were found between AFEs for objects described by mesh models and objects described using solid volumes of GEANT4. Provided the shape and volume are maintained, decreasing the number of vertices used to describe an object does not significantly influence the dosimetric data, but it significantly decreases the time required to perform the dosimetric calculations, especially for energies below 100 keV

  6. Monte Carlo and discrete-ordinate simulations of spectral radiances in a coupled air-tissue system.

    Science.gov (United States)

    Hestenes, Kjersti; Nielsen, Kristian P; Zhao, Lu; Stamnes, Jakob J; Stamnes, Knut

    2007-04-20

    We perform a detailed comparison study of Monte Carlo (MC) simulations and discrete-ordinate radiative-transfer (DISORT) calculations of spectral radiances in a 1D coupled air-tissue (CAT) system consisting of horizontal plane-parallel layers. The MC and DISORT models have the same physical basis, including coupling between the air and the tissue, and we use the same air and tissue input parameters for both codes. We find excellent agreement between radiances obtained with the two codes, both above and in the tissue. Our tests cover typical optical properties of skin tissue at the 280, 540, and 650 nm wavelengths. The normalized volume scattering function for internal structures in the skin is represented by the one-parameter Henyey-Greenstein function for large particles and the Rayleigh scattering function for small particles. The CAT-DISORT code is found to be approximately 1000 times faster than the CAT-MC code. We also show that the spectral radiance field is strongly dependent on the inherent optical properties of the skin tissue.

  7. Heat Source Characterization In A TREAT Fuel Particle Using Coupled Neutronics Binary Collision Monte-Carlo Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Schunert, Sebastian; Schwen, Daniel; Ghassemi, Pedram; Baker, Benjamin; Zabriskie, Adam; Ortensi, Javier; Wang, Yaqi; Gleicher, Frederick; DeHart, Mark; Martineau, Richard

    2017-04-01

    This work presents a multi-physics, multi-scale approach to modeling the Transient Test Reactor (TREAT) currently prepared for restart at the Idaho National Laboratory. TREAT fuel is made up of microscopic fuel grains (r ≈ 20 µm) dispersed in a graphite matrix. The novelty of this work is in coupling a binary collision Monte-Carlo (BCMC) model to the Finite Element based code Moose for solving a microscopic heat-conduction problem whose driving source is provided by the BCMC model tracking fission fragment energy deposition. This microscopic model is driven by a transient, engineering scale neutronics model coupled to an adiabatic heating model. The macroscopic model provides local power densities and neutron energy spectra to the microscopic model. Currently, no feedback from the microscopic to the macroscopic model is considered. TREAT transient 15 is used to exemplify the capabilities of the multi-physics, multi-scale model, and it is found that the average fuel grain temperature differs from the average graphite temperature by 80 K despite the low-power transient. The large temperature difference has strong implications on the Doppler feedback a potential LEU TREAT core would see, and it underpins the need for multi-physics, multi-scale modeling of a TREAT LEU core.

  8. Monte Carlo particle simulation and finite-element techniques for tandem mirror transport

    International Nuclear Information System (INIS)

    Rognlien, T.D.; Cohen, B.I.; Matsuda, Y.; Stewart, J.J. Jr.

    1987-01-01

    A description is given of numerical methods used in the study of axial transport in tandem mirrors owing to Coulomb collisions and rf diffusion. The methods are Monte Carlo particle simulations and direct solution to the Fokker-Planck equations by finite-element expansion. (author)

  9. Using Monte Carlo Techniques to Demonstrate the Meaning and Implications of Multicollinearity

    Science.gov (United States)

    Vaughan, Timothy S.; Berry, Kelly E.

    2005-01-01

    This article presents an in-class Monte Carlo demonstration, designed to demonstrate to students the implications of multicollinearity in a multiple regression study. In the demonstration, students already familiar with multiple regression concepts are presented with a scenario in which the "true" relationship between the response and…
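
A demonstration of this kind can be sketched in a few lines of numpy (a hypothetical stand-in for the in-class exercise, not the authors' material): when a second regressor nearly duplicates the first, the Monte Carlo spread of the fitted coefficient on the first regressor inflates dramatically, even though the data-generating relationship is unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

def slope_spread(noise_scale, reps=500, n=50):
    """Monte Carlo spread of the fitted coefficient on x1 when
    x2 = x1 + noise_scale * noise (small noise -> strong collinearity)."""
    b1_hats = []
    for _ in range(reps):
        x1 = rng.normal(size=n)
        x2 = x1 + noise_scale * rng.normal(size=n)
        y = 1.0 * x1 + 1.0 * x2 + rng.normal(size=n)   # true coefficients are both 1
        X = np.column_stack([np.ones(n), x1, x2])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        b1_hats.append(beta[1])
    return float(np.std(b1_hats))

spread_collinear = slope_spread(0.05)   # x2 nearly duplicates x1
spread_separate = slope_spread(1.0)     # x2 carries independent variation
```

The ratio of the two spreads is essentially the square root of the variance inflation factor, which is the quantity such classroom demonstrations aim to make tangible.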

  10. Implementation of variance-reduction techniques for Monte Carlo nuclear logging calculations with neutron sources

    NARCIS (Netherlands)

    Maucec, M

    2005-01-01

    Monte Carlo simulations for nuclear logging applications are considered to be highly demanding transport problems. In this paper, the implementation of weight-window variance reduction schemes in a 'manual' fashion to improve the efficiency of calculations for a neutron logging tool is presented.
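
The core of a weight-window scheme can be illustrated with a minimal sketch (illustrative only; MCNP's actual implementation is considerably more elaborate and space- and energy-dependent): particles above the window are split, particles below it play Russian roulette, and in-window particles pass unchanged.

```python
import random

def apply_weight_window(particles, w_low, w_high, rng=random.Random(1)):
    """Apply a weight window to a list of (weight, state) particles:
    split heavy particles, roulette light ones, pass the rest unchanged."""
    survivors = []
    for w, state in particles:
        if w > w_high:
            n = int(w / w_high) + 1            # split into n equal-weight copies
            survivors.extend([(w / n, state)] * n)
        elif w < w_low:
            w_keep = 0.5 * (w_low + w_high)    # roulette survivor weight
            if rng.random() < w / w_keep:      # survive with probability w/w_keep
                survivors.append((w_keep, state))
        else:
            survivors.append((w, state))
    return survivors
```

Splitting conserves total weight exactly, while roulette conserves it only in expectation; both keep the tally unbiased while concentrating computational effort on important particles.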

  11. Monte Carlo particle simulation and finite-element techniques for tandem mirror transport

    International Nuclear Information System (INIS)

    Rognlien, T.D.; Cohen, B.I.; Matsuda, Y.; Stewart, J.J. Jr.

    1985-12-01

    A description is given of numerical methods used in the study of axial transport in tandem mirrors owing to Coulomb collisions and rf diffusion. The methods are Monte Carlo particle simulations and direct solution to the Fokker-Planck equations by finite-element expansion. 11 refs

  12. Core map generation for the ITU TRIGA Mark II research reactor using Genetic Algorithm coupled with Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Türkmen, Mehmet, E-mail: tm@hacettepe.edu.tr [Nuclear Engineering Department, Hacettepe University, Beytepe Campus, Ankara (Turkey); Çolak, Üner [Energy Institute, Istanbul Technical University, Ayazağa Campus, Maslak, Istanbul (Turkey); Ergün, Şule [Nuclear Engineering Department, Hacettepe University, Beytepe Campus, Ankara (Turkey)

    2015-12-15

    Highlights: • Optimum core maps were generated for the ITU TRIGA Mark II Research Reactor. • Calculations were performed using a Monte Carlo based reactor physics code, MCNP. • Single-Objective and Multi-Objective Genetic Algorithms were used for the optimization. • k{sub eff} and ppf{sub max} were considered as the optimization objectives. • The generated core maps were compared with the fresh core map. - Abstract: The main purpose of this study is to present the results of Core Map (CM) generation calculations for the İstanbul Technical University TRIGA Mark II Research Reactor by using Genetic Algorithms (GA) coupled with a Monte Carlo (MC) based particle transport code. The optimization problems under consideration are: (i) maximization of the core excess reactivity (ρ{sub ex}) using a Single-Objective GA when burned fuel elements with no fresh fuel elements are used, (ii) maximization of ρ{sub ex} and minimization of the maximum power peaking factor (ppf{sub max}) using a Multi-Objective GA when burned fuels are used together with fresh fuels. The results were obtained with all the control rods fully withdrawn. ρ{sub ex} and ppf{sub max} values of the best CMs produced were provided. The core-averaged neutron spectrum and the variation of neutron fluxes with respect to radial distance were presented for the best CMs. The results show that it is possible to find an optimum CM with an excess reactivity of 1.17 when the burned fuels are used. In the case of a mix of burned and fresh fuels, the best pattern has an excess reactivity of 1.19 with a maximum peaking factor of 1.4843. In addition, when compared with the fresh CM, the thermal fluxes of the generated CMs decrease by about 2% while the change in the fast fluxes is about 1%. Classification: J. Core physics.
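
The optimization loop behind such a study can be sketched as follows, with the expensive MCNP k-eff evaluation replaced by a toy fitness function (the scoring rule and all numbers are invented for illustration, not taken from the paper): each candidate core map is a permutation of fuel elements, and a simple elitist single-objective GA with swap mutation searches for the best arrangement.

```python
import random

rng = random.Random(42)

def fitness(core_map):
    """Stand-in for an expensive MCNP k-eff run: a toy score that rewards
    placing high-worth fuel elements near the core centre."""
    centre = len(core_map) / 2.0
    return -sum(abs(i - centre) * worth for i, worth in enumerate(core_map))

def mutate(core_map):
    """Swap two randomly chosen positions (a new candidate core map)."""
    a, b = rng.sample(range(len(core_map)), 2)
    child = list(core_map)
    child[a], child[b] = child[b], child[a]
    return child

def ga_search(elements, generations=200, pop_size=20):
    """Elitist single-objective GA over permutations of fuel elements."""
    population = [rng.sample(elements, len(elements)) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(rng.choice(survivors)) for _ in survivors]
    return max(population, key=fitness)

elements = [5, 1, 4, 2, 3, 0, 6, 8, 7, 9]   # toy fuel-element "worths"
best = ga_search(elements)
```

In the real workflow each fitness evaluation is a full MC transport calculation, which is why keeping the population small and reusing elite candidates matters.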

  13. Coupling an analytical description of anti-scatter grids with simulation software of radiographic systems using Monte Carlo code

    International Nuclear Information System (INIS)

    Rinkel, J.; Dinten, J.M.; Tabary, J.

    2004-01-01

    The use of focused anti-scatter grids on digital radiographic systems with two-dimensional detectors produces acquisitions with a decreased scatter-to-primary ratio and thus improved contrast and resolution. Simulation software is of great interest for optimizing the grid configuration for a specific application. Classical simulators are based on complete, detailed geometric descriptions of the grid. They are accurate but very time consuming, since they use a Monte Carlo code to simulate scatter within the high-frequency grids. We propose a new practical method which couples an analytical simulation of the grid interaction with a radiographic system simulation program. First, a two-dimensional matrix of probabilities depending on the grid is created offline, in which the first dimension represents the angle of impact with respect to the normal to the grid lines and the second the energy of the photon. This matrix of probabilities is then used by the Monte Carlo simulation software in order to provide the final scattered-flux image. To evaluate the gain in CPU time, we define the increasing factor as the increase in CPU time of a simulation with the grid as opposed to one without it. Increasing factors were calculated with the new model and with the classical method representing the grid, via its CAD model, as part of the object. With the new method, increasing factors are smaller by one to two orders of magnitude. These results were obtained with a difference in calculated scatter of less than five percent between the new and the classical method. (authors)
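
The precomputed-probability idea can be sketched as follows (the table values, bin ranges, and the toy transmission model are invented for illustration; in the paper the matrix comes from a detailed grid simulation): a 2D table indexed by impact angle and photon energy replaces per-photon tracking through the grid geometry during the Monte Carlo run.

```python
import numpy as np

# Hypothetical offline-built table: grid transmission probability as a
# function of impact angle (w.r.t. the normal to the grid lines) and energy
angles = np.linspace(0.0, 15.0, 16)       # degrees, 1-degree bins
energies = np.linspace(20.0, 120.0, 11)   # keV, 10-keV bins
# toy model: transmission falls off with angle, rises slightly with energy
prob = np.exp(-angles[:, None] / 5.0) * (0.6 + 0.004 * energies[None, :])

def grid_transmission(angle_deg, energy_kev):
    """Nearest-bin lookup replacing per-photon MC tracking through the grid;
    during transport, a scattered photon is kept with this probability."""
    i = int(np.clip(np.round((angle_deg - angles[0]) / (angles[1] - angles[0])),
                    0, len(angles) - 1))
    j = int(np.clip(np.round((energy_kev - energies[0]) / (energies[1] - energies[0])),
                    0, len(energies) - 1))
    return prob[i, j]
```

The table lookup costs a few array operations per photon, which is the source of the one-to-two-orders-of-magnitude speedup the abstract reports.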

  14. Coupling the MCNP Monte Carlo code and the FISPACT activation code with automatic visualization of the results of simulations

    International Nuclear Information System (INIS)

    Bourauel, Peter; Nabbi, Rahim; Biel, Wolfgang; Forrest, Robin

    2009-01-01

    The MCNP 3D Monte Carlo computer code is used not only for criticality calculations of nuclear systems but also to simulate transports of radiation and particles. The findings so obtained about neutron flux distribution and the associated spectra allow information about materials activation, nuclear heating, and radiation damage to be obtained by means of activation codes such as FISPACT. The stochastic character of particle and radiation transport processes normally links findings to the materials cells making up the geometry model of MCNP. Where high spatial resolution is required for the activation calculations with FISPACT, fine segmentation of the MCNP geometry becomes compulsory, which implies considerable expense for the modeling process. For this reason, an alternative simulation technique has been developed in an effort to automate and optimize data transfer between MCNP and FISPACT. (orig.)

  15. Numerical simulations of a coupled radiative-conductive heat transfer model using a modified Monte Carlo method

    KAUST Repository

    Kovtanyuk, Andrey E.

    2012-01-01

    Radiative-conductive heat transfer in a medium bounded by two reflecting and radiating plane surfaces is considered. This process is described by a nonlinear system of two differential equations: an equation of the radiative heat transfer and an equation of the conductive heat exchange. The problem is characterized by anisotropic scattering of the medium and by specularly and diffusely reflecting boundaries. For the computation of solutions of this problem, two approaches based on iterative techniques are considered. First, a recursive algorithm based on some modification of the Monte Carlo method is proposed. Second, the diffusion approximation of the radiative transfer equation is utilized. Numerical comparisons of the approaches proposed are given in the case of isotropic scattering. © 2011 Elsevier Ltd. All rights reserved.

  16. Monte Carlo criticality source convergence in a loosely coupled fuel storage system

    International Nuclear Information System (INIS)

    Blomquist, Roger N.; Gelbard, Ely M.

    2003-01-01

    The fission source convergence of a very loosely coupled array of 36 fuel subassemblies with slightly non-symmetric reflection is studied. The fission source converges very slowly from a uniform guess to the fundamental mode in which about 40% of the fissions occur in one corner subassembly. Eigenvalue and fission source estimates are analyzed using a set of statistical tests similar to those used in MCNP, including the 'drift-in-mean' test and a new drift-in-mean test using a linear fit to the cumulative estimate drift, the Shapiro-Wilk test for normality, the relative error test, and the '1/N' test. The normality test does not detect a drifting eigenvalue or fission source. Applied to eigenvalue estimates, the other tests generally fail to detect an unconverged solution, but they are sometimes effective when evaluating fission source distributions. None of the tests provides completely reliable indication of convergence, although they can detect nonconvergence. (author)
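
The drift-in-mean idea described above can be sketched as a linear fit to the running mean of per-cycle estimates (a simplified stand-in for the MCNP-style tests, not their actual implementation): a converged source yields a near-zero slope, while a still-converging source leaves a clear trend.

```python
import numpy as np

def drift_in_mean(estimates):
    """Slope of a linear fit to the running mean of per-cycle estimates;
    a slope well away from zero flags an unconverged fission source."""
    running = np.cumsum(estimates) / np.arange(1, len(estimates) + 1)
    cycles = np.arange(len(running))
    slope, _intercept = np.polyfit(cycles, running, 1)
    return slope

rng = np.random.default_rng(3)
converged = rng.normal(1.0, 0.01, size=400)          # stationary k-eff estimates
drifting = converged + np.linspace(0.0, 0.2, 400)    # source still converging
```

As the abstract cautions, such a test can detect nonconvergence but a near-zero slope is not by itself proof of convergence.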

  17. Monte Carlo simulation and Gaussian broadening techniques for full energy peak of characteristic X-ray in EDXRF

    International Nuclear Information System (INIS)

    Li Zhe; Liu Min; Shi Rui; Wu Xuemei; Tuo Xianguo

    2012-01-01

    Background: Non-standard analysis (NSA) technique is one of the most important development directions of energy dispersive X-ray fluorescence (EDXRF). Purpose: This NSA technique is mainly based on Monte Carlo (MC) simulation and full energy peak broadening, which were studied preliminarily in this paper. Methods: An MC model was established for a Si-PIN based EDXRF setup, and flux spectra were obtained for an iron ore sample. Finally, the flux spectra were broadened using Gaussian broadening parameters calculated by a new method proposed in this paper, and the broadened spectra were compared with measured energy spectra. Results: The MC method can be used to simulate EDXRF measurement, and can correct the matrix effects among elements automatically. Peak intensities can be obtained accurately by using the proposed Gaussian broadening technique. Conclusions: This study provided a key technique for EDXRF to achieve advanced NSA technology. (authors)
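
Energy-dependent Gaussian broadening of a simulated flux spectrum can be sketched as follows (the parameterisation FWHM(E) = a + b·sqrt(E) is a common detector-response form used here for illustration, not necessarily the paper's new method):

```python
import numpy as np

def broaden(energies, flux, a, b):
    """Convolve a line spectrum with a Gaussian whose FWHM varies with
    energy as FWHM(E) = a + b*sqrt(E); total counts are conserved."""
    out = np.zeros_like(flux)
    for e0, f in zip(energies, flux):
        if f == 0.0:
            continue
        sigma = (a + b * np.sqrt(e0)) / 2.355    # FWHM -> standard deviation
        kernel = np.exp(-0.5 * ((energies - e0) / sigma) ** 2)
        out += f * kernel / kernel.sum()         # redistribute, conserve counts
    return out

energies = np.linspace(0.0, 20.0, 401)           # keV grid, 0.05 keV bins
flux = np.zeros_like(energies)
flux[128] = 1000.0                               # idealised Fe K-alpha line at 6.4 keV
out = broaden(energies, flux, a=0.1, b=0.05)
```

Normalising the truncated kernel per line, as above, keeps the total counts in the broadened spectrum equal to those in the simulated one, which is what allows broadened MC spectra to be compared directly with measured ones.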

  18. Advanced Multilevel Monte Carlo Methods

    KAUST Repository

    Jasra, Ajay; Law, Kody; Suciu, Carina

    2017-01-01

    This article reviews the application of advanced Monte Carlo techniques in the context of Multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations which can be biased in some sense, for instance, by using the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations which become progressively more accurate and more expensive. Using a telescoping representation of the most accurate approximation, the method is able to reduce the computational cost for a given level of error versus i.i.d. sampling from this latter approximation. All of these ideas originated for cases where exact sampling from couples in the hierarchy is possible. This article considers the case where such exact sampling is not currently possible. We consider Markov chain Monte Carlo and sequential Monte Carlo methods which have been introduced in the literature and we describe different strategies which facilitate the application of MLMC within these methods.

  19. Advanced Multilevel Monte Carlo Methods

    KAUST Repository

    Jasra, Ajay

    2017-04-24

    This article reviews the application of advanced Monte Carlo techniques in the context of Multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations which can be biased in some sense, for instance, by using the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations which become progressively more accurate and more expensive. Using a telescoping representation of the most accurate approximation, the method is able to reduce the computational cost for a given level of error versus i.i.d. sampling from this latter approximation. All of these ideas originated for cases where exact sampling from couples in the hierarchy is possible. This article considers the case where such exact sampling is not currently possible. We consider Markov chain Monte Carlo and sequential Monte Carlo methods which have been introduced in the literature and we describe different strategies which facilitate the application of MLMC within these methods.
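
The telescoping idea can be made concrete with a standard textbook example (not taken from the article): estimating E[S(1)] for a geometric Brownian motion via Euler discretization, where successive levels halve the time step and the fine and coarse paths on each level are coupled through shared Brownian increments.

```python
import numpy as np

rng = np.random.default_rng(7)

def euler_payoff(n_steps, z):
    """Euler-Maruyama estimate of S(1) for dS = mu*S dt + sig*S dW,
    driven by the standard-normal increments z (one per step)."""
    mu, sig, dt = 0.05, 0.2, 1.0 / n_steps
    s = 1.0
    for k in range(n_steps):
        s += mu * s * dt + sig * s * np.sqrt(dt) * z[k]
    return s

def mlmc_estimate(levels=4, samples=2000):
    """Telescoping estimator E[P_L] = E[P_0] + sum_l E[P_l - P_(l-1)];
    fine and coarse paths on each level share the same Brownian increments."""
    total = 0.0
    for level in range(levels):
        n_fine = 2 ** (level + 1)
        terms = []
        for _ in range(samples):
            z = rng.normal(size=n_fine)
            fine = euler_payoff(n_fine, z)
            if level == 0:
                terms.append(fine)
            else:
                # coarse path: pairwise-summed increments of the same noise,
                # rescaled to unit variance for the doubled time step
                z_coarse = (z[0::2] + z[1::2]) / np.sqrt(2.0)
                terms.append(fine - euler_payoff(n_fine // 2, z_coarse))
        total += np.mean(terms)
    return total

estimate = mlmc_estimate()
```

The exact answer is exp(0.05) ≈ 1.0513; the coupling makes the level-difference variances shrink with level, so most samples can be spent on the cheap coarse levels. The article's subject is precisely what to do when such exact coupled sampling is unavailable.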

  20. Monte Carlo technique applications in field of radiation dosimetry at ENEA radiation protection institute: A Review

    International Nuclear Information System (INIS)

    Gualdrini, G.F.; Casalini, L.; Morelli, B.

    1994-12-01

    The present report summarizes the activities concerned with numerical dosimetry carried out at the Radiation Protection Institute of ENEA (Italian Agency for New Technologies, Energy and the Environment) on photon dosimetric quantities. The first part is concerned with MCNP Monte Carlo calculations of field parameters and operational quantities for the ICRU sphere with reference photon beams, for the design of personal dosemeters. The second part is related to studies on the ADAM anthropomorphic phantom using the SABRINA and MCNP codes. The results of other Monte Carlo studies carried out on electron conversion factors for various tissue equivalent slab phantoms are about to be published in other ENEA reports. The report has been produced in the framework of the EURADOS WG4 (numerical dosimetry) activities, within a collaboration between the ENEA Environmental Department and ENEA Energy Department

  1. Novel imaging and quality assurance techniques for ion beam therapy: a Monte Carlo study

    CERN Document Server

    Rinaldi, I; Jäkel, O; Mairani, A; Parodi, K

    2010-01-01

    Ion beams exhibit a finite and well defined range in matter together with an “inverted” depth-dose profile, the so-called Bragg peak. These favourable physical properties may enable superior tumour-dose conformality for high precision radiation therapy. On the other hand, they introduce the issue of sensitivity to range uncertainties in ion beam therapy. Although these uncertainties are typically taken into account when planning the treatment, correct delivery of the intended ion beam range has to be assured to prevent undesired underdosage of the tumour or overdosage of critical structures outside the target volume. Therefore, it is necessary to define dedicated Quality Assurance procedures to enable in-vivo range verification before or during therapeutic irradiation. For these purposes, Monte Carlo transport codes are very useful tools to support the development of novel imaging modalities for ion beam therapy. In the present work, we present calculations performed with the FLUKA Monte Carlo code and pr...

  2. Computer simulation of stochastic processes through model-sampling (Monte Carlo) techniques.

    Science.gov (United States)

    Sheppard, C W.

    1969-03-01

    A simple Monte Carlo simulation program is outlined which can be used for the investigation of random-walk problems, for example in diffusion, or the movement of tracers in the blood circulation. The results given by the simulation are compared with those predicted by well-established theory, and it is shown how the model can be expanded to deal with drift, and with reflexion from or adsorption at a boundary.
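
A minimal version of such a random-walk simulator, with optional drift and a reflecting boundary, might look like this (illustrative only; the original 1969 program is not reproduced here):

```python
import random

def random_walk(n_steps, p_up=0.5, reflect_at=None, rng=random.Random(5)):
    """1D lattice walk: step +1 with probability p_up, else -1; an optional
    reflecting boundary at reflect_at bounces the walker back."""
    x = 0
    for _ in range(n_steps):
        x += 1 if rng.random() < p_up else -1
        if reflect_at is not None and x < reflect_at:
            x = 2 * reflect_at - x   # reflexion at the boundary
    return x

# Ensemble statistics for a symmetric, unbounded walk: theory predicts a
# zero-mean endpoint whose variance equals the number of steps.
walks = [random_walk(100) for _ in range(2000)]
mean = sum(walks) / len(walks)
var = sum((w - mean) ** 2 for w in walks) / len(walks)
```

Setting p_up away from 0.5 adds drift, and reflect_at models an impermeable boundary, mirroring the extensions the abstract describes; absorption could be modelled by terminating the walk at the boundary instead of reflecting.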

  3. Spectral history model in DYN3D: Verification against coupled Monte-Carlo thermal-hydraulic code BGCore

    International Nuclear Information System (INIS)

    Bilodid, Y.; Kotlyar, D.; Margulis, M.; Fridman, E.; Shwageraus, E.

    2015-01-01

    Highlights: • Pu-239 based spectral history method was tested on 3D BWR single assembly case. • Burnup of a BWR fuel assembly was performed with the nodal code DYN3D. • Reference solution was obtained by coupled Monte-Carlo thermal-hydraulic code BGCore. • The proposed method accurately reproduces moderator density history effect for BWR test case. - Abstract: This research focuses on the verification of a recently developed methodology accounting for spectral history effects in 3D full core nodal simulations. The traditional deterministic core simulation procedure includes two stages: (1) generation of homogenized macroscopic cross section sets and (2) application of these sets to obtain a full 3D core solution with nodal codes. The standard approach adopts the branch methodology, in which the branches represent all expected combinations of operational conditions as a function of burnup (main branch). The main branch is produced for constant, usually averaged, operating conditions (e.g. coolant density). As a result, the spectral history effects that are associated with coolant density variation are not taken into account properly. A number of methods to solve this problem (such as micro-depletion and spectral indexes) have been developed and implemented in modern nodal codes. Recently, we proposed a new and robust method to account for history effects. The methodology was implemented in DYN3D and involves modification of the few-group cross section sets. The method utilizes the local Pu-239 concentration as an indicator of spectral history. The method was verified for PWR and VVER applications. However, the spectrum variation in a BWR core is more pronounced due to the stronger coolant density change. The purpose of the current work is investigating the applicability of the method to BWR analysis. The proposed methodology was verified against the recently developed BGCore system, which couples Monte Carlo neutron transport with depletion and thermal-hydraulic solvers and

  4. Fast patient-specific Monte Carlo brachytherapy dose calculations via the correlated sampling variance reduction technique

    Energy Technology Data Exchange (ETDEWEB)

    Sampson, Andrew; Le Yi; Williamson, Jeffrey F. [Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia 23298 (United States)

    2012-02-15

    Purpose: To demonstrate potential of correlated sampling Monte Carlo (CMC) simulation to improve the calculation efficiency for permanent seed brachytherapy (PSB) implants without loss of accuracy. Methods: CMC was implemented within an in-house MC code family (PTRAN) and used to compute 3D dose distributions for two patient cases: a clinical PSB postimplant prostate CT imaging study and a simulated post lumpectomy breast PSB implant planned on a screening dedicated breast cone-beam CT patient exam. CMC tallies the dose difference, {Delta}D, between highly correlated histories in homogeneous and heterogeneous geometries. The heterogeneous geometry histories were derived from photon collisions sampled in a geometrically identical but purely homogeneous medium geometry, by altering their particle weights to correct for bias. The prostate case consisted of 78 Model-6711 {sup 125}I seeds. The breast case consisted of 87 Model-200 {sup 103}Pd seeds embedded around a simulated lumpectomy cavity. Systematic and random errors in CMC were unfolded using low-uncertainty uncorrelated MC (UMC) as the benchmark. CMC efficiency gains, relative to UMC, were computed for all voxels, and the mean was classified in regions that received minimum doses greater than 20%, 50%, and 90% of D{sub 90}, as well as for various anatomical regions. Results: Systematic errors in CMC relative to UMC were less than 0.6% for 99% of the voxels and 0.04% for 100% of the voxels for the prostate and breast cases, respectively. For a 1 x 1 x 1 mm{sup 3} dose grid, efficiency gains were realized in all structures with 38.1- and 59.8-fold average gains within the prostate and breast clinical target volumes (CTVs), respectively. Greater than 99% of the voxels within the prostate and breast CTVs experienced an efficiency gain. Additionally, it was shown that efficiency losses were confined to low dose regions while the largest gains were located where little difference exists between the homogeneous and
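
The essence of correlated sampling, tallying the difference between two geometries with shared random histories rather than differencing two independent tallies, can be shown with a deliberately simple toy kernel (the attenuation functions below are invented for illustration and have nothing to do with the PTRAN code):

```python
import numpy as np

rng = np.random.default_rng(11)

def dose_homogeneous(r):
    return np.exp(-r)            # toy dose kernel in the homogeneous medium

def dose_heterogeneous(r):
    return np.exp(-1.05 * r)     # slightly perturbed kernel (heterogeneity)

n = 4000
r_shared = rng.exponential(1.0, size=n)
# Correlated sampling: both geometries see the same histories, so the
# tallied difference is smooth and low-variance
delta_corr = dose_heterogeneous(r_shared) - dose_homogeneous(r_shared)
# Uncorrelated estimate: independent histories for each geometry
r_indep = rng.exponential(1.0, size=n)
delta_uncorr = dose_heterogeneous(r_shared) - dose_homogeneous(r_indep)
```

Because the two geometries differ only slightly, the correlated difference has a variance far below that of the independent difference, which is the mechanism behind the large efficiency gains reported in the abstract.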

  5. Reliability techniques and Coupled BEM/FEM for interaction pile-soil

    Directory of Open Access Journals (Sweden)

    Ahmed SAHLI

    2017-06-01

    Full Text Available This paper deals with the development of a computational code for the modelling and verification of safety, in relation to limit states, of piles found in the foundations of ordinary structures. To this end, it makes use of reliability techniques for the probabilistic analysis of piles modelled with the finite element method (FEM) coupled to the boundary element method (BEM). The soil is modelled with the BEM employing Mindlin's fundamental solutions, suitable for representing a three-dimensional infinite half-space. The piles are modelled as bar elements with the FEM, each of which is represented in the BEM as a loading line. The bar finite element employed has four nodes and fourteen nodal parameters: three displacements for each node plus two rotations for the top node. The slipping of the piles relative to the soil mass is modelled using adhesion models to define the evolution of the shaft stresses during the transfer of load to the soil. The reliability analysis is based on three methods: the first-order second-moment (FOSM) method, the first-order reliability method (FORM) and the Monte Carlo method.
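
For the linear limit state g = R − S with independent normal resistance R and load S, the FOSM reliability index has a closed form that a crude Monte Carlo run can cross-check (the numerical values below are illustrative, not taken from the paper):

```python
import math
import random

def fosm_beta(mu_r, sig_r, mu_s, sig_s):
    """FOSM reliability index for the linear limit state g = R - S."""
    return (mu_r - mu_s) / math.sqrt(sig_r ** 2 + sig_s ** 2)

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def mc_failure_prob(mu_r, sig_r, mu_s, sig_s, n=100_000, seed=9):
    """Crude Monte Carlo check: fraction of samples with g = R - S < 0."""
    rng = random.Random(seed)
    fails = sum(1 for _ in range(n)
                if rng.gauss(mu_r, sig_r) - rng.gauss(mu_s, sig_s) < 0)
    return fails / n

beta = fosm_beta(mu_r=300.0, sig_r=30.0, mu_s=200.0, sig_s=40.0)
pf_fosm = normal_cdf(-beta)                        # first-order failure probability
pf_mc = mc_failure_prob(300.0, 30.0, 200.0, 40.0)
```

For this linear-normal case FOSM is exact and the two estimates agree; for the nonlinear, implicit limit states of the coupled BEM/FEM pile model, FORM and Monte Carlo are needed precisely because no such closed form exists.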

  6. A fully-implicit Particle-In-Cell Monte Carlo Collision code for the simulation of inductively coupled plasmas

    Science.gov (United States)

    Mattei, S.; Nishida, K.; Onai, M.; Lettry, J.; Tran, M. Q.; Hatayama, A.

    2017-12-01

    We present a fully-implicit electromagnetic Particle-In-Cell Monte Carlo collision code, called NINJA, written for the simulation of inductively coupled plasmas. NINJA employs a kinetic enslaved Jacobian-Free Newton Krylov method to solve self-consistently the interaction between the electromagnetic field generated by the radio-frequency coil and the plasma response. The simulated plasma includes a kinetic description of charged and neutral species as well as the collision processes between them. The algorithm allows simulations with cell sizes much larger than the Debye length and time steps in excess of the Courant-Friedrichs-Lewy condition whilst preserving the conservation of the total energy. The code is applied to the simulation of the plasma discharge of the Linac4 H- ion source at CERN. Simulation results of plasma density, temperature and EEDF are discussed and compared with optical emission spectroscopy measurements. A systematic study of the energy conservation as a function of the numerical parameters is presented.

  7. The FLUKA Monte Carlo code coupled with the local effect model for biological calculations in carbon ion therapy

    CERN Document Server

    Mairani, A; Kraemer, M; Sommerer, F; Parodi, K; Scholz, M; Cerutti, F; Ferrari, A; Fasso, A

    2010-01-01

    Clinical Monte Carlo (MC) calculations for carbon ion therapy have to provide absorbed and RBE-weighted dose. The latter is defined as the product of the dose and the relative biological effectiveness (RBE). At the GSI Helmholtzzentrum fur Schwerionenforschung as well as at the Heidelberg Ion Therapy Center (HIT), the RBE values are calculated according to the local effect model (LEM). In this paper, we describe the approach followed for coupling the FLUKA MC code with the LEM and its application to dose and RBE-weighted dose calculations for a superimposition of two opposed C-12 ion fields as applied in therapeutic irradiations. The obtained results are compared with the available experimental data of CHO (Chinese hamster ovary) cell survival and the outcomes of the GSI analytical treatment planning code TRiP98. Some discrepancies have been observed between the analytical and MC calculations of absorbed physical dose profiles, which can be explained by the differences between the laterally integrated depth-d...

  8. Development and evaluation of attenuation and scatter correction techniques for SPECT using the Monte Carlo method

    International Nuclear Information System (INIS)

    Ljungberg, M.

    1990-05-01

    Quantitative scintigraphic images, obtained by NaI(Tl) scintillation cameras, are limited by photon attenuation and contribution from scattered photons. A Monte Carlo program was developed in order to evaluate these effects. Simple source-phantom geometries and more complex nonhomogeneous cases can be simulated. Comparisons with experimental data for both homogeneous and nonhomogeneous regions and with published results have shown good agreement. The usefulness for simulation of parameters in scintillation camera systems, stationary as well as in SPECT systems, has also been demonstrated. An attenuation correction method based on density maps and build-up functions has been developed. The maps were obtained from a transmission measurement using an external 57Co flood source and the build-up was simulated by the Monte Carlo code. Two scatter correction methods, the dual-window method and the convolution-subtraction method, have been compared using the Monte Carlo method. The aim was to compare the estimated scatter with the true scatter in the photo-peak window. It was concluded that accurate depth-dependent scatter functions are essential for a proper scatter correction. A new scatter and attenuation correction method has been developed based on scatter line-spread functions (SLSF) obtained for different depths and lateral positions in the phantom. An emission image is used to determine the source location in order to estimate the scatter in the photo-peak window. Simulation studies of a clinically realistic source in different positions in cylindrical water phantoms were made for three photon energies. The SLSF-correction method was also evaluated by simulation studies for 1. a myocardial source, 2. uniform source in the lungs and 3. a tumour located in the lungs in a realistic, nonhomogeneous computer phantom. The results showed that quantitative images could be obtained in nonhomogeneous regions. (67 refs.)

  9. The Bjorken sum rule with Monte Carlo and Neural Network techniques

    International Nuclear Information System (INIS)

    Debbio, L. Del; Guffanti, A.; Piccione, A.

    2009-01-01

    Determinations of structure functions and parton distribution functions have been recently obtained using Monte Carlo methods and neural networks as universal, unbiased interpolants for the unknown functional dependence. In this work the same methods are applied to obtain a parametrization of polarized Deep Inelastic Scattering (DIS) structure functions. The Monte Carlo approach provides a bias-free determination of the probability measure in the space of structure functions, while retaining all the information on experimental errors and correlations. In particular the error on the data is propagated into an error on the structure functions that has a clear statistical meaning. We present the application of this method to the parametrization from polarized DIS data of the photon asymmetries A_1^p and A_1^d, from which we determine the structure functions g_1^p(x,Q^2) and g_1^d(x,Q^2), and discuss the possibility to extract physical parameters from these parametrizations. This work can be used as a starting point for the determination of polarized parton distributions.
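
The Monte Carlo replica idea, fluctuating the data within their quoted errors and propagating the spread of the ensemble through the fit, can be sketched on fabricated example points (the data below are invented, and a simple linear fit stands in for the neural-network parametrization used in such analyses):

```python
import numpy as np

rng = np.random.default_rng(21)

# Invented pseudo-data: (x, central value, experimental error) per point
x = np.array([0.1, 0.2, 0.35, 0.5, 0.7])
val = np.array([0.12, 0.20, 0.31, 0.42, 0.55])
err = np.array([0.02, 0.02, 0.03, 0.03, 0.05])

# Monte Carlo replicas: fluctuate each point within its quoted error,
# fit every replica, and read the uncertainty off the fit ensemble
replicas = val + err * rng.normal(size=(500, len(x)))
fits = np.array([np.polyfit(x, r, 1) for r in replicas])   # (slope, intercept) rows
slope_mean, slope_err = fits[:, 0].mean(), fits[:, 0].std()
```

The standard deviation across the fit ensemble is the propagated uncertainty on the fitted function, which is what gives the resulting error bands a clear statistical meaning.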

  10. Coupling Monte Carlo simulations with thermal analysis for correcting microdosimetric spectra from a novel micro-calorimeter

    Science.gov (United States)

    Fathi, K.; Galer, S.; Kirkby, K. J.; Palmans, H.; Nisbet, A.

    2017-11-01

    The high uncertainty in the Relative Biological Effectiveness (RBE) values of particle therapy beams, which are used in combination with the quantity absorbed dose in radiotherapy, together with the increase in the number of particle therapy centres worldwide, necessitates a better understanding of the biological effect of such modalities. The present novel study is part of the performance testing and development of a micro-calorimeter based on Superconducting QUantum Interference Devices (SQUIDs). Unlike other microdosimetric detectors that are used for investigating the energy distribution, this detector provides a direct measurement of energy deposition at the micrometre scale, which can be used to improve our understanding of biological effects in particle therapy applications, radiation protection and environmental dosimetry. Temperature rises of less than 1 μK are detectable and, when combined with the low specific heat capacity of the absorber at cryogenic temperature, extremely high energy deposition sensitivity of approximately 0.4 eV can be achieved. The detector consists of 3 layers: a tissue equivalent (TE) absorber, a superconducting (SC) absorber and a silicon substrate. Ideally, all energy would be absorbed in the TE absorber and the temperature rise in the superconducting layer would arise due to heat conduction from the TE layer. However, in practice direct particle absorption occurs in all 3 layers and must be corrected for. To investigate the thermal behaviour within the detector, and quantify any possible correction, particle tracks were simulated employing Geant4 (v9.6) Monte Carlo simulations. The track information was then passed to the COMSOL Multiphysics (Finite Element Method) software. The 3D heat transfer within each layer was then evaluated in a time-dependent model. For a statistically reliable outcome, the simulations had to be repeated for a large number of particles. An automated system has been developed that couples Geant4 Monte Carlo output to COMSOL for

  11. Dosimetric study of prostate brachytherapy using techniques of Monte-Carlo simulation, experimental measurements and comparison with a treatment plan

    International Nuclear Information System (INIS)

    Teles, Pedro; Barros, Silvia; Vaz, Pedro; Goncalves, Isabel; Facure, Alessandro; Rosa, Luiz da; Santos, Maira; Pereira Junior, Pedro Paulo; Zankl, Maria

    2013-01-01

    Prostate brachytherapy is a radiotherapy technique that consists of inserting a number of radioactive seeds (usually containing the radionuclides 125I, 241Am or 103Pd) in or around prostate tumor tissue. The main objective of this technique is to maximize the radiation dose to the tumor while minimizing it in healthy tissues and organs, in order to reduce treatment morbidity. The absorbed dose distribution in the prostate obtained with this technique is usually non-homogeneous and time dependent. Various parameters, such as the type of seed, the attenuation interactions between seeds, their geometrical arrangement within the prostate, the actual geometry of the seeds, and the swelling of the prostate gland after implantation, greatly influence the distribution of absorbed dose in the prostate and surrounding regions. Quantification of these parameters is therefore extremely important for dose optimization and for the improvement of conventional treatment plans, which in many cases do not fully take them into account. Monte Carlo techniques allow these parameters to be studied quickly and effectively. In this work, we used the MCNPX program and a generic voxel phantom (GOLEM) to simulate different geometric arrangements of seeds containing 125I (Amersham Health model 6711) in prostates of different sizes, in order to quantify some of these parameters. The computational model was validated using a cubic RW3 tissue-equivalent prostate phantom and thermoluminescent dosimeters. Finally, to obtain a term of comparison with a real treatment plan, we simulated a treatment plan used in a hospital in Rio de Janeiro, with exactly the same parameters, in our computational model. The results obtained in our study seem to indicate that the parameters described above may be a source of uncertainty in the correct evaluation of the dose in actual treatment plans. 
The use of Monte Carlo techniques can serve as a complementary
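The multi-seed dose distributions studied above are, to first order, superpositions of single-seed contributions. A minimal point-source superposition sketch (illustrative only, not the paper's MCNPX model or the full TG-43 formalism; the dose-rate constant, attenuation coefficient, and seed strength are assumed values):

```python
# Superposition sketch: dose rate at a point approximated as the sum over
# seeds of an isotropic point-source kernel with inverse-square geometry and
# a simple exponential attenuation.  LAMBDA, MU and the seed strength are
# illustrative assumptions, not measured seed data.
import math

LAMBDA = 0.965   # dose-rate constant, cGy h^-1 U^-1 (illustrative 125I value)
MU = 0.1         # effective attenuation coefficient in water, cm^-1 (assumed)

def dose_rate(point, seeds, strength=0.5):
    """Sum point-source contributions (cGy/h) from seed positions in cm."""
    total = 0.0
    for seed in seeds:
        r = math.dist(point, seed)
        total += strength * LAMBDA * math.exp(-MU * r) / r**2
    return total

# Three seeds spaced 1 cm apart along the z axis; dose 1 cm off axis.
seeds = [(0.0, 0.0, -1.0), (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)]
d = dose_rate((1.0, 0.0, 0.0), seeds)
```

Note that this sketch ignores exactly the effects the paper quantifies with Monte Carlo: inter-seed attenuation, seed geometry, and anisotropy.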

  12. Problems in radiation shielding calculations with Monte Carlo methods

    International Nuclear Information System (INIS)

    Ueki, Kohtaro

    1985-01-01

    The Monte Carlo method is a very useful tool for solving a large class of radiation transport problems. In contrast with deterministic methods, geometric complexity is a much less significant problem for Monte Carlo calculations. However, the accuracy of Monte Carlo calculations is, of course, limited by the statistical error of the quantities to be estimated. In this report, we point out some typical problems in solving large shielding systems that include radiation streaming. The Monte Carlo coupling technique was developed to solve such shielding problems accurately. However, the variance of the Monte Carlo results obtained with the coupling technique for detectors located outside the radiation streaming was still not small enough. To obtain more accurate results for detectors located outside the streaming, and also for multi-legged-duct streaming problems, a practicable ''Prism Scattering technique'' is proposed in this study. (author)

  13. Couplings

    Science.gov (United States)

    Stošić, Dušan; Auroux, Aline

    Basic principles of calorimetry coupled with other techniques are introduced. These methods are used in heterogeneous catalysis for the characterization of acidic, basic and redox properties of solid catalysts. Estimation of these features is achieved by monitoring the interaction of various probe molecules with the surface of such materials. An overview of gas-phase as well as liquid-phase techniques is given. Special attention is devoted to the coupled calorimetry-volumetry method. Furthermore, the influence of different experimental parameters on the results of these techniques is discussed, since it is known that they can significantly influence the evaluation of the catalytic properties of the investigated materials.

  14. Predicting fissile content of spent nuclear fuel assemblies with the Passive Neutron Albedo Reactivity technique and Monte Carlo code emulation

    International Nuclear Information System (INIS)

    Conlin, Jeremy Lloyd; Tobin, Stephen J.

    2011-01-01

    There is a great need in the safeguards community to be able to nondestructively quantify the plutonium mass of a spent nuclear fuel assembly. As part of the Next Generation Safeguards Initiative, we are investigating several techniques, or detector systems, which, when integrated, will be capable of quantifying the plutonium mass of a spent fuel assembly without dismantling the assembly. This paper reports on the simulation of one of these techniques, the Passive Neutron Albedo Reactivity with Fission Chambers (PNAR-FC) system. The response of this system over a wide range of spent fuel assemblies with different burnup, initial enrichment, and cooling time characteristics is shown. A Monte Carlo method of using these modeled results to estimate the fissile content of a spent fuel assembly has been developed. A few numerical simulations of using this method are shown. Finally, additional developments still needed and being worked on are discussed. (author)
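Using pre-computed Monte Carlo responses to estimate fissile content, as described above, amounts to emulating the transport code: the detector response is tabulated over assembly characteristics, and a measurement is inverted against that table. A toy sketch with a one-dimensional burnup grid (all grid values and the measured ratio are hypothetical; the real emulation spans burnup, enrichment, and cooling time):

```python
# Toy Monte Carlo code emulator: detector response pre-computed on a burnup
# grid, then inverse-interpolated to estimate burnup and fissile mass from a
# measured PNAR ratio.  All numbers are hypothetical placeholders.

burnup = [10.0, 20.0, 30.0, 40.0, 50.0]        # GWd/tU grid points
pnar_ratio = [0.95, 0.90, 0.86, 0.83, 0.81]    # simulated response (monotone)
fissile_kg = [4.8, 4.2, 3.7, 3.3, 3.0]         # fissile mass per assembly

def interp(xq, xs, ys):
    """Piecewise-linear interpolation; xs must be monotone (either way)."""
    for i in range(len(xs) - 1):
        lo, hi = sorted((xs[i], xs[i + 1]))
        if lo <= xq <= hi:
            t = (xq - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] + t * (ys[i + 1] - ys[i])
    raise ValueError("query outside grid")

measured_ratio = 0.88                      # hypothetical measurement
bu = interp(measured_ratio, pnar_ratio, burnup)   # ratio -> burnup
mass = interp(bu, burnup, fissile_kg)             # burnup -> fissile mass
```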

  15. Validation and simulation of a regulated survey system through Monte Carlo techniques

    Directory of Open Access Journals (Sweden)

    Asier Lacasta Soto

    2015-07-01

    Full Text Available Channel flow covers long distances and exhibits variable temporal behaviour. It is usually regulated by hydraulic elements such as lateral gates to provide a correct water supply. The dynamics of this kind of flow are governed by a system of partial differential equations named the shallow water model, which has to be complemented with a simplified formulation for the gates. The complete set of equations forms a non-linear system that can only be solved numerically. Here, an explicit upwind finite-volume numerical scheme able to handle all types of flow regimes is used. The formulation of the hydraulic structures (lateral gates) introduces parameters with some uncertainty. Hence, these parameters are calibrated with a Monte Carlo algorithm, obtaining the coefficients associated with each gate. They are then checked using real cases provided by the monitoring equipment of the Pina de Ebro channel located in Zaragoza.
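The Monte Carlo calibration step above can be sketched as a random search over candidate gate coefficients, keeping the value that best reproduces measured discharges. A minimal sketch (not the paper's solver; the free-flow sluice-gate law Q = Cd·b·a·sqrt(2gh) is the standard textbook formula, and the "measurements" are synthetic):

```python
# Monte Carlo calibration sketch: sample candidate discharge coefficients Cd
# at random and retain the one minimizing squared error against measured
# discharges.  Gate dimensions and measurements are synthetic assumptions.
import math
import random

random.seed(1)
G = 9.81           # gravity, m/s^2
B, A = 2.0, 0.5    # gate width and opening, m (assumed)

def gate_flow(cd, h):
    """Free-flow sluice-gate discharge, m^3/s, for upstream head h (m)."""
    return cd * B * A * math.sqrt(2.0 * G * h)

# Synthetic "measurements" generated with a true Cd of 0.61 plus 1% noise.
heads = [0.8, 1.2, 1.6, 2.0]
measured = [gate_flow(0.61, h) * (1.0 + random.gauss(0.0, 0.01)) for h in heads]

best_cd, best_err = None, float("inf")
for _ in range(2000):
    cd = random.uniform(0.3, 0.9)          # candidate coefficient
    err = sum((gate_flow(cd, h) - q) ** 2 for h, q in zip(heads, measured))
    if err < best_err:
        best_cd, best_err = cd, err
```

In practice one coefficient is calibrated per gate, then validated against independent monitoring data, as the abstract describes.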

  16. Modelling phase separation in Fe-Cr system using different atomistic kinetic Monte Carlo techniques

    International Nuclear Information System (INIS)

    Castin, N.; Bonny, G.; Terentyev, D.; Lavrentiev, M.Yu.; Nguyen-Manh, D.

    2011-01-01

    Atomistic kinetic Monte Carlo (AKMC) simulations were performed to study α-α' phase separation in Fe-Cr alloys. Two different energy models and two approaches to estimate the local vacancy migration barriers were used. The energy models considered are a two-band model Fe-Cr potential and a cluster expansion, both fitted to ab initio data. The classical Kang-Weinberg decomposition, based on the total energy change of the system, and an Artificial Neural Network (ANN), employed as a regression tool, were used to predict the local vacancy migration barriers 'on the fly'. The results are compared with experimental thermal annealing data, and differences between the applied AKMC approaches are discussed. The ability of the ANN regression method to accurately predict migration barriers not present in the training list is also addressed by performing cross-check calculations using the nudged elastic band method.
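However the migration barriers are obtained, an AKMC step turns them into jump rates and selects one event stochastically. A minimal residence-time (BKL) step is sketched below (the barrier values, attempt frequency, and temperature are hypothetical; this is not the two-band-model or cluster-expansion code of the paper):

```python
# Minimal residence-time (BKL) kinetic Monte Carlo step: convert migration
# barriers to Arrhenius rates, pick a jump with probability proportional to
# its rate, and advance the clock.  Barriers and NU0 are assumed values.
import math
import random

random.seed(42)
KB = 8.617e-5      # Boltzmann constant, eV/K
NU0 = 6.0e12       # attempt frequency, 1/s (typical assumed value)
T = 600.0          # temperature, K

barriers = [0.62, 0.58, 0.70, 0.65]     # eV, one per candidate vacancy jump
rates = [NU0 * math.exp(-e / (KB * T)) for e in barriers]
total = sum(rates)

# Select a jump with probability proportional to its rate.
u = random.random() * total
acc, chosen = 0.0, None
for i, r in enumerate(rates):
    acc += r
    if u <= acc:
        chosen = i
        break

# Advance the clock by an exponentially distributed residence time.
dt = -math.log(random.random()) / total
```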

  17. The FLUKA Monte Carlo code coupled with the local effect model for biological calculations in carbon ion therapy

    Energy Technology Data Exchange (ETDEWEB)

    Mairani, A [University of Pavia, Department of Nuclear and Theoretical Physics, and INFN, via Bassi 6, 27100 Pavia (Italy); Brons, S; Parodi, K [Heidelberg Ion Beam Therapy Center and Department of Radiation Oncology, Im Neuenheimer Feld 450, 69120 Heidelberg (Germany); Cerutti, F; Ferrari, A; Sommerer, F [CERN, 1211 Geneva 23 (Switzerland); Fasso, A [SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, CA 94025 (United States); Kraemer, M; Scholz, M, E-mail: Andrea.Mairani@mi.infn.i [GSI Biophysik, Planck-Str. 1, D-64291 Darmstadt (Germany)

    2010-08-07

    Clinical Monte Carlo (MC) calculations for carbon ion therapy have to provide absorbed and RBE-weighted dose. The latter is defined as the product of the dose and the relative biological effectiveness (RBE). At the GSI Helmholtzzentrum fuer Schwerionenforschung as well as at the Heidelberg Ion Therapy Center (HIT), the RBE values are calculated according to the local effect model (LEM). In this paper, we describe the approach followed for coupling the FLUKA MC code with the LEM and its application to dose and RBE-weighted dose calculations for a superimposition of two opposed {sup 12}C ion fields as applied in therapeutic irradiations. The obtained results are compared with the available experimental data of CHO (Chinese hamster ovary) cell survival and the outcomes of the GSI analytical treatment planning code TRiP98. Some discrepancies have been observed between the analytical and MC calculations of absorbed physical dose profiles, which can be explained by the differences between the laterally integrated depth-dose distributions in water used as input basic data in TRiP98 and the FLUKA recalculated ones. On the other hand, taking into account the differences in the physical beam modeling, the FLUKA-based biological calculations of the CHO cell survival profiles are found to be in good agreement with the experimental data as well as with the TRiP98 predictions. The developed approach that combines the MC transport/interaction capability with the same biological model as in the treatment planning system (TPS) will be used at HIT to support validation/improvement of both dose and RBE-weighted dose calculations performed by the analytical TPS.
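The definition quoted above, RBE-weighted dose as the product of absorbed dose and RBE, is applied voxel by voxel. A trivial sketch (the dose and RBE values are hypothetical placeholders, not LEM outputs):

```python
# RBE-weighted dose as defined in the abstract: the voxel-wise product of
# absorbed dose and RBE.  Values below are hypothetical, not LEM results.

dose_gy = [0.5, 1.0, 2.0, 2.5]      # absorbed dose per voxel, Gy
rbe = [3.2, 2.8, 2.2, 2.0]          # RBE per voxel (depth-dependent for 12C)

rbe_weighted = [d * r for d, r in zip(dose_gy, rbe)]   # Gy (RBE)
```

The hard part, which the LEM coupling solves, is computing the RBE per voxel from the local particle spectrum; the weighting itself is this simple product.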

  18. Monte Carlo study of a ferrimagnetic mixed-spin (2, 5/2) system with the nearest and next-nearest neighbors exchange couplings

    Science.gov (United States)

    Bi, Jiang-lin; Wang, Wei; Li, Qi

    2017-07-01

    In this paper, the effects of the next-nearest-neighbor exchange couplings on the magnetic and thermal properties of the ferrimagnetic mixed-spin (2, 5/2) Ising model on a 3D honeycomb lattice have been investigated by the use of Monte Carlo simulation. In particular, the influences of the exchange couplings (Ja, Jb, Jan) and the single-ion anisotropy (Da) on the phase diagrams, the total magnetization, the sublattice magnetization, the total susceptibility, the internal energy and the specific heat have been discussed in detail. The results clearly show that the system can exhibit critical and compensation behavior when the next-nearest-neighbor exchange coupling is included. A great variety of M curves, such as N-, Q-, P- and L-types, has been discovered, owing to the competition between the exchange coupling and the temperature. Our results are in excellent agreement with other theoretical and experimental works.
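Simulations of this kind rest on single-site Metropolis updates over the two sublattices, with spin S in {-2, ..., 2} and sigma in {-5/2, ..., 5/2}. The sketch below illustrates only the accept/reject rule on a small alternating 1D chain, not the paper's 3D honeycomb lattice or its full (Ja, Jb, Jan, Da) Hamiltonian; the coupling and temperature are assumed values:

```python
# Single-site Metropolis sketch for a two-sublattice mixed-spin (2, 5/2)
# Ising chain.  A 1D chain with one nearest-neighbor coupling replaces the
# 3D honeycomb model purely to illustrate the update rule; JA and T are
# illustrative assumptions.
import math
import random

random.seed(3)
JA = -1.0          # nearest-neighbor coupling (antiferromagnetic sign choice)
T = 2.0            # temperature in units of |JA|/kB

S_VALUES = [-2, -1, 0, 1, 2]                        # spin-2 sublattice
SIGMA_VALUES = [-2.5, -1.5, -0.5, 0.5, 1.5, 2.5]    # spin-5/2 sublattice

N = 20
spins = [random.choice(S_VALUES if i % 2 == 0 else SIGMA_VALUES)
         for i in range(N)]

def local_energy(cfg, i):
    """Energy of site i with its two chain neighbors (periodic boundaries)."""
    return -JA * cfg[i] * (cfg[(i - 1) % N] + cfg[(i + 1) % N])

for _ in range(5000):
    i = random.randrange(N)
    old = spins[i]
    new = random.choice(S_VALUES if i % 2 == 0 else SIGMA_VALUES)
    e_old = local_energy(spins, i)
    spins[i] = new
    d_e = local_energy(spins, i) - e_old
    if d_e > 0 and random.random() >= math.exp(-d_e / T):
        spins[i] = old                               # reject the move

magnetization = sum(spins) / N
```

Sublattice magnetizations, susceptibility, and specific heat follow from accumulating such configurations over many sweeps and temperatures.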

  19. Applications of Monte Carlo technique in the detection of explosives, narcotics and fissile material using neutron sources

    International Nuclear Information System (INIS)

    Sinha, Amar; Kashyap, Yogesh; Roy, Tushar; Agrawal, Ashish; Sarkar, P.S.; Shukla, Mayank

    2009-01-01

    The problem of illicit trafficking of explosives, narcotics or fissile materials represents a real challenge to civil security. Neutron-based detection systems are being actively explored worldwide as a confirmatory tool for applications in the detection of explosives either hidden inside a vehicle or a cargo container or buried inside soil. The development and experimental testing of such a system is a tedious process, and each experimental condition needs to be simulated theoretically. Monte Carlo based methods are used to find an optimized design for such detection systems. In order to design such systems, it is necessary to optimize the source and detector configuration for each specific application. The present paper deals with such optimization studies using Monte Carlo techniques for a tagged-neutron-based system for the detection of explosives and narcotics hidden in cargo, and for landmine detection using backscattered neutrons. We also discuss some simulation studies on the detection of fissile material and photo-neutron source design for applications in cargo scanning. (author)

  20. A technique for generating phase-space-based Monte Carlo beamlets in radiotherapy applications

    International Nuclear Information System (INIS)

    Bush, K; Popescu, I A; Zavgorodni, S

    2008-01-01

    As radiotherapy treatment planning moves toward Monte Carlo (MC) based dose calculation methods, the MC beamlet is becoming an increasingly common optimization entity. At present, methods used to produce MC beamlets have utilized a particle source model (PSM) approach. In this work we outline the implementation of a phase-space-based approach to MC beamlet generation that is expected to provide greater accuracy in beamlet dose distributions. In this approach a standard BEAMnrc phase space is sorted and divided into beamlets with particles labeled using the inheritable particle history variable. This is achieved with the use of an efficient sorting algorithm, capable of sorting a phase space of any size into the required number of beamlets in only two passes. Sorting a phase space of five million particles can be achieved in less than 8 s on a single-core 2.2 GHz CPU. The beamlets can then be transported separately into a patient CT dataset, producing separate dose distributions (doselets). Methods for doselet normalization and conversion of dose to absolute units of Gy for use in intensity modulated radiation therapy (IMRT) plan optimization are also described. (note)
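The two-pass sorting described above is essentially a counting sort: the first pass counts particles per beamlet and builds output offsets, and the second pass places each particle at its beamlet's next free slot. An in-memory sketch (simplified; the real implementation operates on BEAMnrc phase-space files, and the grid dimensions here are assumptions):

```python
# Two-pass counting sort of phase-space particles into beamlets.  The beamlet
# index is derived from the particle's (x, y) position on a regular grid.
# Grid size, field width, and particle data are illustrative assumptions.
import random

random.seed(7)
NX = NY = 4                        # 4 x 4 = 16 beamlets
FIELD = 10.0                       # field half-width, cm

def beamlet_index(x, y):
    ix = min(int((x + FIELD) / (2 * FIELD) * NX), NX - 1)
    iy = min(int((y + FIELD) / (2 * FIELD) * NY), NY - 1)
    return iy * NX + ix

particles = [(random.uniform(-FIELD, FIELD), random.uniform(-FIELD, FIELD))
             for _ in range(1000)]

# Pass 1: count particles per beamlet and build output offsets.
counts = [0] * (NX * NY)
for x, y in particles:
    counts[beamlet_index(x, y)] += 1
offsets = [0] * (NX * NY)
for b in range(1, NX * NY):
    offsets[b] = offsets[b - 1] + counts[b - 1]

# Pass 2: place each particle at its beamlet's next free slot.
sorted_particles = [None] * len(particles)
cursor = offsets[:]
for p in particles:
    b = beamlet_index(*p)
    sorted_particles[cursor[b]] = p
    cursor[b] += 1
```

Because the placement pass is a single linear scan, the cost stays at two passes regardless of phase-space size, consistent with the timing quoted in the abstract.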

  1. ANALYSIS OF MONTE CARLO SIMULATION SAMPLING TECHNIQUES ON SMALL SIGNAL STABILITY OF WIND GENERATOR- CONNECTED POWER SYSTEM

    Directory of Open Access Journals (Sweden)

    TEMITOPE RAPHAEL AYODELE

    2016-04-01

    Full Text Available Monte Carlo simulation using the Simple Random Sampling (SRS) technique is popularly known for its ability to handle complex uncertainty problems. However, to produce a reasonable result, it requires a huge sample size, which makes it computationally expensive, time consuming and unfit for online power system applications. In this article, the performance of the Latin Hypercube Sampling (LHS) technique is explored and compared with SRS in terms of accuracy, robustness and speed for a small signal stability application in a wind generator-connected power system. The analysis is performed using probabilistic techniques via eigenvalue analysis on two standard networks (the Single Machine Infinite Bus and the IEEE 16-machine 68-bus test system). The accuracy of the two sampling techniques is determined by comparing their different sample sizes with the IDEAL (conventional) values. The robustness is determined based on a significant variance reduction when the experiment is repeated 100 times with different sample sizes using the two sampling techniques in turn. The results show that sample sizes generated from LHS for the small signal stability application produce the same result as the IDEAL values starting from a sample size of 100. This shows that about 100 samples of a random variable generated using the LHS method are good enough to produce reasonable results for practical purposes in small signal stability applications. It is also revealed that LHS has the least variance when the experiment is repeated 100 times compared to the SRS technique, which signifies the robustness of LHS over SRS. A 100-sample LHS run produces the same result as the conventional method with a sample size of 50000. The reduced sample size required by LHS gives it a computational speed advantage (about six times) over the conventional method.
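The variance reduction that LHS delivers over SRS can be shown in a few lines: LHS stratifies [0, 1) into n equal bins and draws exactly one sample per bin, so sample means fluctuate far less across repeated experiments. A minimal sketch for a single uniform variable (the repetition counts are illustrative, not the paper's experiment):

```python
# LHS vs. SRS sketch: compare the variance of estimated means over repeated
# experiments.  One uniform variable only; n and reps are illustrative.
import random

random.seed(11)

def srs(n):
    """Simple random sampling: n independent uniform draws."""
    return [random.random() for _ in range(n)]

def lhs(n):
    """Latin Hypercube sample of one uniform variable: one draw per stratum."""
    samples = [(k + random.random()) / n for k in range(n)]
    random.shuffle(samples)    # random pairing order, as in multivariate LHS
    return samples

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Repeat each experiment and compare the spread of the estimated means.
n, reps = 100, 200
srs_means = [mean(srs(n)) for _ in range(reps)]
lhs_means = [mean(lhs(n)) for _ in range(reps)]
```

For the mean of a uniform variable the SRS estimator variance scales as 1/(12n) while the LHS estimator variance scales as 1/(12n^3), which is why modest LHS sample sizes match very large SRS runs in the study above.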

  2. Imprecision of dose predictions for radionuclides released to the environment: an application of a Monte Carlo simulation technique

    Energy Technology Data Exchange (ETDEWEB)

    Schwarz, G; Hoffman, F O

    1980-01-01

    An evaluation of the imprecision in dose predictions has been performed using current dose assessment models and present knowledge of the variability or uncertainty in model parameter values. The propagation of parameter uncertainties is demonstrated using a Monte Carlo technique for elemental 131I transported via the pasture-cow-milk-child pathway. The results indicate that when site-specific information is not available, the imprecision inherent in the predictions for this pathway is potentially large. Generally, the 99th percentile in thyroid dose for children was predicted to be approximately an order of magnitude greater than the median value. The potential consequences of the imprecision in dose for radiation protection purposes are discussed.
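The Monte Carlo propagation above can be sketched with a simple multiplicative pathway model whose transfer factors are sampled from lognormal distributions; products of lognormal factors naturally yield the long-tailed output (99th percentile far above the median) that the abstract reports. All distribution parameters below are hypothetical, chosen only to illustrate the mechanism:

```python
# Monte Carlo propagation of parameter uncertainty through a multiplicative
# pathway model.  The three lognormal factors and their geometric standard
# deviations are hypothetical, not the paper's 131I parameter values.
import math
import random

random.seed(5)

def lognormal(median, gsd):
    """Sample a lognormal given its median and geometric standard deviation."""
    return median * math.exp(random.gauss(0.0, math.log(gsd)))

doses = []
for _ in range(10000):
    deposition = lognormal(1.0, 2.5)   # pasture interception/deposition factor
    transfer = lognormal(1.0, 2.0)     # feed-to-milk transfer factor
    intake = lognormal(1.0, 1.5)       # child milk consumption factor
    doses.append(deposition * transfer * intake)

doses.sort()
median = doses[len(doses) // 2]
p99 = doses[int(0.99 * len(doses))]
ratio = p99 / median                   # long-tailed: well above 1
```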

  3. Dose point kernel simulation for monoenergetic electrons and radionuclides using Monte Carlo techniques.

    Science.gov (United States)

    Wu, J; Liu, Y L; Chang, S J; Chao, M M; Tsai, S Y; Huang, D E

    2012-11-01

    Monte Carlo (MC) simulation has been commonly used in the dose evaluation of radiation accidents and for medical purposes. The accuracy of simulated results is affected by the particle-tracking algorithm, cross-sectional database, random number generator and statistical error. The differences among MC simulation software packages must be validated. This study simulated the dose point kernel (DPK) and the cellular S-values of monoenergetic electrons ranging from 0.01 to 2 MeV and the radionuclides (90)Y, (177)Lu and (103m)Rh, using Fluktuierende Kaskade (FLUKA) and Monte Carlo N-Particle Transport Code Version 5 (MCNP5). A 6-μm-radius cell model consisting of the cell surface, cytoplasm and cell nucleus was constructed for the cellular S-value calculation. The mean absolute percentage errors (MAPEs) of the scaled DPKs, simulated using FLUKA and MCNP5, were 7.92, 9.64, 4.62, 3.71 and 3.84 % for 0.01, 0.1, 0.5, 1 and 2 MeV, respectively. For the three radionuclides, the MAPEs of the scaled DPKs were within 5 %. The maximum deviations of S(N←N), S(N←Cy) and S(N←CS) for electron energies larger than 10 keV were 6.63, 6.77 and 5.24 %, respectively. The deviations for the self-absorbed S-values and cross-dose S-values of the three radionuclides were within 4 %. On the basis of the results of this study, it was concluded that the simulation results are consistent between FLUKA and MCNP5, with only a minor inconsistency in the low energy range. The DPK and the cellular S-value should be used as quality assurance tools before MC simulation results are adopted as the gold standard.
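The figure of merit quoted above, the mean absolute percentage error between two scaled DPK curves, is a one-liner to compute. A sketch with hypothetical numbers standing in for the two codes' outputs:

```python
# Mean absolute percentage error (MAPE) between two scaled DPK curves, the
# comparison metric used in the abstract.  The two curves are hypothetical
# placeholders, not actual FLUKA or MCNP5 output.

def mape(reference, test):
    """MAPE in percent; reference values must be nonzero."""
    terms = [abs((t - r) / r) for r, t in zip(reference, test)]
    return 100.0 * sum(terms) / len(terms)

dpk_a = [0.80, 0.95, 1.00, 0.70, 0.30]   # scaled DPK, code A (hypothetical)
dpk_b = [0.84, 0.93, 1.03, 0.68, 0.31]   # scaled DPK, code B (hypothetical)

error = mape(dpk_a, dpk_b)               # percent disagreement between codes
```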

  4. Depth-of-interaction estimates in pixelated scintillator sensors using Monte Carlo techniques

    International Nuclear Information System (INIS)

    Sharma, Diksha; Sze, Christina; Bhandari, Harish; Nagarkar, Vivek; Badano, Aldo

    2017-01-01

    Image quality in thick scintillator detectors can be improved by minimizing parallax errors through depth-of-interaction (DOI) estimation. A novel sensor for low-energy single photon imaging having a thick, transparent, crystalline pixelated micro-columnar CsI:Tl scintillator structure has been described, with possible future application in small-animal single photon emission computed tomography (SPECT) imaging when using thicker structures under development. In order to understand the fundamental limits of this new structure, we introduce cartesianDETECT2, an open-source optical transport package that uses Monte Carlo methods to obtain estimates of DOI for improving spatial resolution of nuclear imaging applications. Optical photon paths are calculated as a function of varying simulation parameters such as columnar surface roughness, bulk, and top-surface absorption. We use scanning electron microscope images to estimate appropriate surface roughness coefficients. Simulation results are analyzed to model and establish patterns between DOI and photon scattering. The effect of varying starting locations of optical photons on the spatial response is studied. Bulk and top-surface absorption fractions were varied to investigate their effect on spatial response as a function of DOI. We investigated the accuracy of our DOI estimation model for a particular screen with various training and testing sets, and for all cases the percent error between the estimated and actual DOI over the majority of the detector thickness was ±5% with a maximum error of up to ±10% at deeper DOIs. In addition, we found that cartesianDETECT2 is computationally five times more efficient than MANTIS. Findings indicate that DOI estimates can be extracted from a double-Gaussian model of the detector response. We observed that our model predicts DOI in pixelated scintillator detectors reasonably well.

  5. Electron Irradiation of Conjunctival Lymphoma-Monte Carlo Simulation of the Minute Dose Distribution and Technique Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Brualla, Lorenzo, E-mail: lorenzo.brualla@uni-due.de [NCTeam, Strahlenklinik, Universitaetsklinikum Essen, Essen (Germany); Zaragoza, Francisco J.; Sempau, Josep [Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Barcelona (Spain); Wittig, Andrea [Department of Radiation Oncology, University Hospital Giessen and Marburg, Philipps-University Marburg, Marburg (Germany); Sauerwein, Wolfgang [NCTeam, Strahlenklinik, Universitaetsklinikum Essen, Essen (Germany)

    2012-07-15

    Purpose: External beam radiotherapy is the only conservative curative approach for Stage I non-Hodgkin lymphomas of the conjunctiva. The target volume is geometrically complex because it includes the eyeball and lid conjunctiva. Furthermore, the target volume is adjacent to radiosensitive structures, including the lens, lacrimal glands, cornea, retina, and papilla. The radiotherapy planning and optimization requires accurate calculation of the dose in these anatomical structures that are much smaller than the structures traditionally considered in radiotherapy. Neither conventional treatment planning systems nor dosimetric measurements can reliably determine the dose distribution in these small irradiated volumes. Methods and Materials: The Monte Carlo simulations of a Varian Clinac 2100 C/D and human eye were performed using the PENELOPE and PENEASYLINAC codes. Dose distributions and dose volume histograms were calculated for the bulbar conjunctiva, cornea, lens, retina, papilla, lacrimal gland, and anterior and posterior hemispheres. Results: The simulated results allow choosing the most adequate treatment setup configuration, which is an electron beam energy of 6 MeV with additional bolus and collimation by a cerrobend block with a central cylindrical hole of 3.0 cm diameter and central cylindrical rod of 1.0 cm diameter. Conclusions: Monte Carlo simulation is a useful method to calculate the minute dose distribution in ocular tissue and to optimize the electron irradiation technique in highly critical structures. Using a voxelized eye phantom based on patient computed tomography images, the dose distribution can be estimated with a standard statistical uncertainty of less than 2.4% in 3 min using a computing cluster with 30 cores, which makes this planning technique clinically relevant.

  6. 3D dose imaging for arc therapy techniques by means of Fricke gel dosimetry and dedicated Monte Carlo simulations

    International Nuclear Information System (INIS)

    Valente, Mauro; Castellano, Gustavo; Sosa, Carlos

    2008-01-01

    Full text: Radiotherapy is one of the most effective techniques for tumour treatment and control. During the last years, significant developments have been made in both irradiation technology and techniques. However, accurate 3D dosimetric techniques are nowadays not commercially available. Due to their intrinsic characteristics, traditional dosimetric techniques like ionisation chambers, film dosimetry or TLD do not offer proper continuous 3D dose mapping. The possibility of using ferrous sulphate (Fricke) dosimeters suitably fixed to a gel matrix, along with dedicated optical analysis methods based on light transmission measurements, for 3D absorbed dose imaging in tissue-equivalent materials has attracted great interest in radiotherapy. Since Gore et al. showed in 1984 that the oxidation of ferrous ions to ferric ions still occurs even when the ferrous sulphate solution is fixed to a gelatine matrix, important efforts have been dedicated to developing and improving real continuous 3D dosimetric systems based on the Fricke solution. The purpose of this work is to investigate the capability and suitability of Fricke gel dosimetry for arc therapy irradiations. The dosimetric system is mainly composed of Fricke gel dosimeters, suitably shaped in the form of thin layers and optically analysed by means of visible light transmission measurements, acquiring sample images just before and after irradiation by means of a commercial flatbed-like scanner. Image acquisition, conversion to matrices and further analysis are accomplished by means of dedicated software, which includes suitable algorithms for calculating optical density differences and converting them to absorbed dose. Dedicated subroutines allow 3D dose imaging reconstruction from single-layer information by means of computed tomography-like algorithms. Also, dedicated Monte Carlo (PENELOPE) subroutines have been adapted in order to achieve accurate simulation of arc therapy irradiation techniques

  7. Automation techniques applied to systematic studies of moderator coupling

    International Nuclear Information System (INIS)

    Ansell, S.; Garcia, J.F.; Bennington, S.M.; Picton, D.J.; Broome, T.

    2004-01-01

    We present new computational methods in which the geometry management of MCNPX input files can be automated. We demonstrate this method with respect to the target station 2 design at ISIS. The method presented is not mathematically robust, but, by starting from a valid baseline model, it allows this model to be mutated in a reliable way. Those features that still need to be improved and added are discussed at the end. We use this method to show that a mixed lead/water pre-moderator gives a better neutron flux to radiation damage ratio for the coupled moderator. (orig.)

  8. Automation techniques applied to systematic studies of moderator coupling

    Energy Technology Data Exchange (ETDEWEB)

    Ansell, S; Garcia, J F; Bennington, S M; Picton, D J; Broome, T [Rutherford Appleton Lab, Chilton (United Kingdom)

    2004-03-01

    We present new computational methods in which the geometry management of MCNPX input files can be automated. We demonstrate this method with respect to the target station 2 design at ISIS. The method presented is not mathematically robust, but, by starting from a valid baseline model, it allows this model to be mutated in a reliable way. Those features that still need to be improved and added are discussed at the end. We use this method to show that a mixed lead/water pre-moderator gives a better neutron flux to radiation damage ratio for the coupled moderator. (orig.)

  9. Analysis of the ITER computational shielding benchmark with the Monte Carlo TRIPOLI-4{sup ®} neutron gamma coupled calculations

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yi-Kang, E-mail: yi-kang.lee@cea.fr

    2016-11-01

    Highlights: • Verification and validation of TRIPOLI-4 radiation transport calculations for ITER shielding benchmark. • Evaluation of CEA-V5.1.1 and FENDL-3.0 nuclear data libraries on D–T fusion neutron continuous energy transport calculations. • Advances in nuclear analyses for nuclear heating and radiation damage in iron. • This work also demonstrates that the “safety factors” concept is necessary in the nuclear analyses of ITER. - Abstract: With the growing interest in using the continuous-energy TRIPOLI-4{sup ®} Monte Carlo radiation transport code for ITER applications, a key issue that arises is whether or not the released TRIPOLI-4 code and its associated nuclear data libraries are verified and validated for the D–T fusion neutronics calculations. Previous published benchmark results of TRIPOLI-4 code on the ITER related activities have concentrated on the first wall loading, the reactor dosimetry, the nuclear heating, and the tritium breeding ratio. To enhance the TRIPOLI-4 verification and validation on neutron-gamma coupled calculations for fusion device application, the computational ITER shielding benchmark of M. E. Sawan was performed in this work by using the 2013 released TRIPOLI-4.9S code and the associated CEA-V5.1.1 data library. First wall, blanket, vacuum vessel and toroidal field magnet of the inboard and outboard components were fully modelled in this 1-D toroidal cylindrical benchmark. The 14.1 MeV source neutrons were sampled from a uniform isotropic distribution in the plasma zone. Nuclear responses including neutron and gamma fluxes, nuclear heating, and material damage indicator were benchmarked against previous published results. The capabilities of the TRIPOLI-4 code on the evaluation of above physics parameters were presented. The nuclear data library from the new FENDL-3.0 evaluation was also benchmarked against the CEA-V5.1.1 results for the neutron transport calculations. The results show that both data libraries

  10. MCNP4C2, Coupled Neutron, Electron Gamma 3-D Time-Dependent Monte Carlo Transport Calculations

    International Nuclear Information System (INIS)

    2002-01-01

    1 - Description of program or function: MCNP is a general-purpose, continuous-energy, generalized geometry, time-dependent, coupled neutron-photon-electron Monte Carlo transport code system. MCNP4C2 is an interim release of MCNP4C with distribution restricted to the Criticality Safety community and attendees of the LANL MCNP workshops. The major new features of MCNP4C2 include: - Photonuclear physics; - Interactive plotting; - Plot superimposed weight window mesh; - Implement remaining macro-body surfaces; - Upgrade macro-bodies to surface sources and other capabilities; - Revised summary tables; - Weight window improvements. See the MCNP home page at http://www-xdiv.lanl.gov/XCI/PROJECTS/MCNP for more information, with a link to the MCNP Forum. See the Electronic Notebook at http://www-rsicc.ornl.gov/rsic.html for information on user experiences with MCNP. 2 - Methods: MCNP treats an arbitrary three-dimensional configuration of materials in geometric cells bounded by first- and second-degree surfaces and some special fourth-degree surfaces. Pointwise continuous-energy cross section data are used, although multigroup data may also be used. Fixed-source adjoint calculations may be made with the multigroup data option. For neutrons, all reactions in a particular cross-section evaluation are accounted for. Both free gas and S(alpha, beta) thermal treatments are used. Criticality sources as well as fixed and surface sources are available. For photons, the code takes account of incoherent and coherent scattering with and without electron binding effects, the possibility of fluorescent emission following photoelectric absorption, and absorption in pair production with local emission of annihilation radiation. A very general source and tally structure is available. The tallies have extensive statistical analysis of convergence. Rapid convergence is enabled by a wide variety of variance reduction methods.
Energy ranges are 0-60 MeV for neutrons (data generally only available up to

  11. Analysis of the ITER computational shielding benchmark with the Monte Carlo TRIPOLI-4® neutron gamma coupled calculations

    International Nuclear Information System (INIS)

    Lee, Yi-Kang

    2016-01-01

    Highlights: • Verification and validation of TRIPOLI-4 radiation transport calculations for ITER shielding benchmark. • Evaluation of CEA-V5.1.1 and FENDL-3.0 nuclear data libraries on D–T fusion neutron continuous energy transport calculations. • Advances in nuclear analyses for nuclear heating and radiation damage in iron. • This work also demonstrates that the “safety factors” concept is necessary in the nuclear analyses of ITER. - Abstract: With the growing interest in using the continuous-energy TRIPOLI-4® Monte Carlo radiation transport code for ITER applications, a key issue that arises is whether or not the released TRIPOLI-4 code and its associated nuclear data libraries are verified and validated for the D–T fusion neutronics calculations. Previously published benchmark results of the TRIPOLI-4 code on ITER-related activities have concentrated on the first wall loading, the reactor dosimetry, the nuclear heating, and the tritium breeding ratio. To enhance the TRIPOLI-4 verification and validation on neutron-gamma coupled calculations for fusion device application, the computational ITER shielding benchmark of M. E. Sawan was performed in this work by using the 2013 released TRIPOLI-4.9S code and the associated CEA-V5.1.1 data library. First wall, blanket, vacuum vessel and toroidal field magnet of the inboard and outboard components were fully modelled in this 1-D toroidal cylindrical benchmark. The 14.1 MeV source neutrons were sampled from a uniform isotropic distribution in the plasma zone. Nuclear responses including neutron and gamma fluxes, nuclear heating, and material damage indicator were benchmarked against previously published results. The capabilities of the TRIPOLI-4 code on the evaluation of the above physics parameters were presented. The nuclear data library from the new FENDL-3.0 evaluation was also benchmarked against the CEA-V5.1.1 results for the neutron transport calculations. The results show that both data libraries can be

  12. Adjoint acceleration of Monte Carlo simulations using TORT/MCNP coupling approach: A case study on the shielding improvement for the cyclotron room of the Buddhist Tzu Chi General Hospital

    International Nuclear Information System (INIS)

    Sheu, R. J.; Sheu, R. D.; Jiang, S. H.; Kao, C. H.

    2005-01-01

    Full-scale Monte Carlo simulations of the cyclotron room of the Buddhist Tzu Chi General Hospital were carried out to improve the original inadequate maze design. Variance reduction techniques are indispensable in this study to facilitate the simulations for testing a variety of shielding modification configurations. The TORT/MCNP manual coupling approach based on the Consistent Adjoint Driven Importance Sampling (CADIS) methodology has been used throughout this study. CADIS utilises source and transport biasing in a consistent manner. With this method, the computational efficiency was increased significantly by more than two orders of magnitude and the statistical convergence was also improved compared to the unbiased Monte Carlo run. This paper describes the shielding problem encountered, the procedure for coupling the TORT and MCNP codes to accelerate the calculations and the calculation results for the original and improved shielding designs. In order to verify the calculation results and seek additional accelerations, sensitivity studies on the space-dependent and energy-dependent parameters were also conducted. (authors)
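    The CADIS prescription mentioned above reduces to two formulas: the biased source q_hat = q*phi_adj/R and the birth weight w = R/phi_adj, where phi_adj is the adjoint flux and R = sum(q*phi_adj) is the estimated detector response. A minimal sketch on a hypothetical 1-D mesh (the cell count and adjoint values below are illustrative, not taken from the TORT model of the paper):

```python
import numpy as np

# Hypothetical 10-cell slab: 'phi_adj' plays the role of a deterministic
# adjoint flux that grows toward the detector side.
cells = np.arange(10)
q = np.full(10, 0.1)                  # analog (unbiased) source pdf
phi_adj = np.exp(0.5 * cells)         # adjoint flux = importance toward detector

R = np.sum(q * phi_adj)               # estimated detector response
q_biased = q * phi_adj / R            # CADIS biased source pdf
w_birth = R / phi_adj                 # consistent birth weights
ww_center = R / phi_adj               # weight-window centers match birth weights

assert np.isclose(q_biased.sum(), 1.0)        # biased pdf is normalised
assert np.allclose(q_biased * w_birth, q)     # biased sampling stays unbiased
```

    The final assertion is the consistency property the abstract refers to: the biased source times the birth weight reproduces the analog source exactly, so particles are preferentially born in important cells without biasing the answer.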

  13. Assessment of fusion facility dose rate map using mesh adaptivity enhancements of hybrid Monte Carlo/deterministic techniques

    International Nuclear Information System (INIS)

    Ibrahim, Ahmad M.; Wilson, Paul P.; Sawan, Mohamed E.; Mosher, Scott W.; Peplow, Douglas E.; Grove, Robert E.

    2014-01-01

    Highlights: •Calculate the prompt dose rate everywhere throughout the entire fusion energy facility. •Utilize FW-CADIS to accurately perform difficult neutronics calculations for fusion energy systems. •Develop three mesh adaptivity algorithms to enhance FW-CADIS efficiency in fusion-neutronics calculations. -- Abstract: Three mesh adaptivity algorithms were developed to facilitate and expedite the use of the CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques in accurate full-scale neutronics simulations of fusion energy systems with immense sizes and complicated geometries. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility and resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation. Additionally, because of the significant increase in the efficiency of FW-CADIS simulations, the three algorithms enabled this difficult calculation to be accurately solved on a regular computer cluster, eliminating the need for a world-class super computer

  14. Monte Carlo simulation of X-ray imaging and spectroscopy experiments using quadric geometry and variance reduction techniques

    Science.gov (United States)

    Golosio, Bruno; Schoonjans, Tom; Brunetti, Antonio; Oliva, Piernicola; Masala, Giovanni Luca

    2014-03-01

    The simulation of X-ray imaging experiments is often performed using deterministic codes, which can be relatively fast and easy to use. However, such codes are generally not suitable for the simulation of even slightly more complex experimental conditions, involving, for instance, first-order or higher-order scattering, X-ray fluorescence emissions, or more complex geometries, particularly for experiments that combine spatial resolution with spectral information. In such cases, simulations are often performed using codes based on the Monte Carlo method. In a simple Monte Carlo approach, the interaction position of an X-ray photon and the state of the photon after an interaction are obtained simply according to the theoretical probability distributions. This approach may be quite inefficient because the final channels of interest may include only a limited region of space or photons produced by a rare interaction, e.g., fluorescent emission from elements with very low concentrations. In the field of X-ray fluorescence spectroscopy, this problem has been solved by combining the Monte Carlo method with variance reduction techniques, which can reduce the computation time by several orders of magnitude. In this work, we present a C++ code for the general simulation of X-ray imaging and spectroscopy experiments, based on the application of the Monte Carlo method in combination with variance reduction techniques, with a description of sample geometry based on quadric surfaces. We describe the benefits of the object-oriented approach in terms of code maintenance, the flexibility of the program for the simulation of different experimental conditions and the possibility of easily adding new modules. Sample applications in the fields of X-ray imaging and X-ray spectroscopy are discussed. Catalogue identifier: AERO_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AERO_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland
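    A standard variance-reduction building block in this context is interaction forcing: every photon is made to interact inside a region of interest and carries the analog interaction probability as a weight. The sketch below is a generic illustration with arbitrary numbers, not code from the published program:

```python
import numpy as np

mu, L = 0.05, 2.0                  # attenuation coefficient (1/cm), segment (cm)

def forced_interaction(u):
    """Analog: a photon interacts in (0, L) with prob P = 1 - exp(-mu*L).
    Forcing: always interact, sampling the exponential depth truncated
    to (0, L) and carrying P as a weight factor."""
    P = 1.0 - np.exp(-mu * L)
    depth = -np.log(1.0 - u * P) / mu          # inverse CDF restricted to (0, L)
    return depth, P

rng = np.random.default_rng(0)
u = rng.random(100_000)
depth, weight = forced_interaction(u)
assert np.all((depth > 0) & (depth < L))       # every history now interacts

# The sample reproduces the analog conditional mean E[depth | depth < L].
analog_mean = (1/mu - (L + 1/mu) * np.exp(-mu * L)) / (1 - np.exp(-mu * L))
assert abs(depth.mean() - analog_mean) / analog_mean < 0.01
```

    Because the weight factor equals the analog interaction probability, the weighted estimate remains unbiased while no history is wasted on photons that traverse the region without interacting.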

  15. On a New Variance Reduction Technique: Neural Network Biasing-a Study of Two Test Cases with the Monte Carlo Code Tripoli4

    International Nuclear Information System (INIS)

    Dumonteil, E.

    2009-01-01

    Various variance-reduction techniques are used in Monte Carlo particle transport. Most of them rely either on a hypothesis made by the user (parameters of the exponential biasing, mesh and weight bounds for weight windows, etc.) or on a previous calculation of the system with, for example, a deterministic solver. This paper deals with a new acceleration technique, namely, auto-adaptive neural network biasing. Indeed, instead of using any a priori knowledge of the system, it is possible, at a given point in a simulation, to use the Monte Carlo histories previously simulated to train a neural network, which, in return, should be able to provide an estimation of the adjoint flux, used then for biasing the simulation. We will describe this method, detail its implementation in the Monte Carlo code Tripoli4, and discuss its results on two test cases. (author)
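    A toy version of the idea can be sketched as follows (an illustration of the principle, not the Tripoli4 implementation; the slab model, network size and training schedule are all assumptions): analog histories through a 1-D absorbing slab yield (position, escaped) pairs, and a small hand-rolled network regresses the escape probability, i.e. an estimate of the adjoint flux that could then drive splitting and roulette.

```python
import numpy as np

rng = np.random.default_rng(6)
slab = 2.0                                     # slab thickness, Sigma_t = 1
x = rng.uniform(0.0, slab, 5000)               # collision sites from analog histories
esc = (x + rng.exponential(1.0, 5000) >= slab).astype(float)   # history escaped?

# One-hidden-layer regression network trained by full-batch gradient descent.
W1 = rng.normal(0.0, 1.0, (8, 1)); b1 = np.zeros((8, 1))
W2 = rng.normal(0.0, 1.0, (1, 8)); b2 = 0.0
X, Y, lr = x[None, :] / slab, esc[None, :], 0.05
for _ in range(3000):
    H = np.tanh(W1 @ X + b1)                   # hidden layer
    P = W2 @ H + b2                            # predicted escape probability
    dP = 2.0 * (P - Y) / X.shape[1]            # gradient of the mean-squared error
    dW2, db2 = dP @ H.T, dP.sum()
    dH = (W2.T @ dP) * (1.0 - H**2)            # back-propagate through tanh
    dW1, db1 = dH @ X.T, dH.sum(axis=1, keepdims=True)
    W2 -= lr * dW2; b2 -= lr * db2; W1 -= lr * dW1; b1 -= lr * db1

def importance(pos):
    """Learned adjoint-flux estimate, usable as a splitting/roulette map."""
    h = np.tanh(W1 @ np.array([[pos / slab]]) + b1)
    return (W2 @ h + b2).item()

# The learned importance rises toward the escape face, tracking exp(-(slab - x)).
assert importance(1.8) > importance(0.2)
```

    In the paper's scheme such an estimate would be refreshed periodically during the run as more histories accumulate, which is what makes the biasing auto-adaptive.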

  16. Extending the reach of strong-coupling: an iterative technique for Hamiltonian lattice models

    International Nuclear Information System (INIS)

    Alberty, J.; Greensite, J.; Patkos, A.

    1983-12-01

    The authors propose an iterative method for doing lattice strong-coupling-like calculations in a range of medium to weak couplings. The method is a modified Lanczos scheme, with greatly improved convergence properties. The technique is tested on the Mathieu equation and on a Hamiltonian finite-chain XY model, with excellent results. (Auth.)
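    The plain Lanczos recursion that the authors modify can be sketched as below, applied to a discretised Mathieu operator H = -d2/dx2 + 2q cos(2x) as a stand-in for their Hamiltonians (the grid size, the value of q, and the full reorthogonalisation added for numerical robustness are choices made for this illustration, not the authors' modified scheme):

```python
import numpy as np

n, q = 64, 1.0
x = np.linspace(0, 2*np.pi, n, endpoint=False)
h = x[1] - x[0]
# Periodic finite-difference Mathieu Hamiltonian H = -d2/dx2 + 2q cos(2x).
H = (2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
H[0, -1] = H[-1, 0] = -1.0 / h**2
H += np.diag(2*q*np.cos(2*x))

def lanczos_ground_state(H, m=None, seed=1):
    """Lanczos: project H onto an m-step Krylov space and diagonalise
    the resulting tridiagonal matrix T."""
    n = H.shape[0]
    m = n if m is None else m
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n); v /= np.linalg.norm(v)
    V = np.zeros((m, n))
    alphas, betas = [], []
    for j in range(m):
        V[j] = v
        w = H @ v
        alphas.append(v @ w)
        w -= alphas[-1] * v
        w -= V[:j+1].T @ (V[:j+1] @ w)   # full reorthogonalisation
        beta = np.linalg.norm(w)
        if beta < 1e-12 or j == m - 1:
            break
        betas.append(beta)
        v = w / beta
    T = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
    return np.linalg.eigvalsh(T)[0]

e0 = lanczos_ground_state(H)
assert abs(e0 - np.linalg.eigvalsh(H)[0]) < 1e-8   # matches dense diagonalisation
```

    For q = 1 the lowest eigenvalue approximates the Mathieu characteristic value a0(1) of about -0.455, up to the O(h^2) discretisation error; the paper's modification targets exactly the slow convergence of this recursion at medium to weak couplings.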

  17. A simple highly accurate field-line mapping technique for three-dimensional Monte Carlo modeling of plasma edge transport

    International Nuclear Information System (INIS)

    Feng, Y.; Sardei, F.; Kisslinger, J.

    2005-01-01

    The paper presents a new simple and accurate numerical field-line mapping technique providing a high-quality representation of field lines as required by a Monte Carlo modeling of plasma edge transport in the complex magnetic boundaries of three-dimensional (3D) toroidal fusion devices. Using a toroidal sequence of precomputed 3D finite flux-tube meshes, the method advances field lines through a simple bilinear, forward/backward symmetric interpolation at the interfaces between two adjacent flux tubes. It is a reversible field-line mapping (RFLM) algorithm ensuring a continuous and unique reconstruction of field lines at any point of the 3D boundary. The reversibility property has a strong impact on the efficiency of modeling the highly anisotropic plasma edge transport in general closed or open configurations of arbitrary ergodicity as it avoids artificial cross-field diffusion of the fast parallel transport. For stellarator-symmetric magnetic configurations, which are the standard case for stellarators, the reversibility additionally provides an average cancellation of the radial interpolation errors of field lines circulating around closed magnetic flux surfaces. The RFLM technique has been implemented in the 3D edge transport code EMC3-EIRENE and is used routinely for plasma transport modeling in the boundaries of several low-shear and high-shear stellarators as well as in the boundary of a tokamak with 3D magnetic edge perturbations

  18. Monte Carlo simulation for scanning technique with scattering foil free electron beam: A proof of concept study.

    Directory of Open Access Journals (Sweden)

    Wonmo Sung

    Full Text Available This study investigated the potential of a newly proposed scattering foil free (SFF) electron beam scanning technique for the treatment of skin cancer on the irregular patient surfaces using Monte Carlo (MC) simulation. After benchmarking of the MC simulations, we removed the scattering foil to generate SFF electron beams. Cylindrical and spherical phantoms with 1 cm boluses were generated and the target volume was defined from the surface to 5 mm depth. The SFF scanning technique with 6 MeV electrons was simulated using those phantoms. For comparison, volumetric modulated arc therapy (VMAT) plans were also generated with two full arcs and 6 MV photon beams. When the scanning resolution resulted in a larger separation between beams than the field size, the plan qualities were worsened. In the cylindrical phantom with a radius of 10 cm, the conformity indices, homogeneity indices and body mean doses of the SFF plans (scanning resolution = 1°) vs. VMAT plans were 1.04 vs. 1.54, 1.10 vs. 1.12 and 5 Gy vs. 14 Gy, respectively. Those of the spherical phantom were 1.04 vs. 1.83, 1.08 vs. 1.09 and 7 Gy vs. 26 Gy, respectively. The proposed SFF plans showed superior dose distributions compared to the VMAT plans.

  19. Monte Carlo simulation for scanning technique with scattering foil free electron beam: A proof of concept study.

    Science.gov (United States)

    Sung, Wonmo; Park, Jong In; Kim, Jung-In; Carlson, Joel; Ye, Sung-Joon; Park, Jong Min

    2017-01-01

    This study investigated the potential of a newly proposed scattering foil free (SFF) electron beam scanning technique for the treatment of skin cancer on the irregular patient surfaces using Monte Carlo (MC) simulation. After benchmarking of the MC simulations, we removed the scattering foil to generate SFF electron beams. Cylindrical and spherical phantoms with 1 cm boluses were generated and the target volume was defined from the surface to 5 mm depth. The SFF scanning technique with 6 MeV electrons was simulated using those phantoms. For comparison, volumetric modulated arc therapy (VMAT) plans were also generated with two full arcs and 6 MV photon beams. When the scanning resolution resulted in a larger separation between beams than the field size, the plan qualities were worsened. In the cylindrical phantom with a radius of 10 cm, the conformity indices, homogeneity indices and body mean doses of the SFF plans (scanning resolution = 1°) vs. VMAT plans were 1.04 vs. 1.54, 1.10 vs. 1.12 and 5 Gy vs. 14 Gy, respectively. Those of the spherical phantom were 1.04 vs. 1.83, 1.08 vs. 1.09 and 7 Gy vs. 26 Gy, respectively. The proposed SFF plans showed superior dose distributions compared to the VMAT plans.
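    Definitions of these indices vary between reports; assuming the common RTOG-style conformity index (prescription isodose volume divided by target volume) and a D2%/D98% homogeneity ratio, they can be recomputed from voxel doses as in this sketch (all dose values are synthetic):

```python
import numpy as np

rng = np.random.default_rng(2)
rx = 50.0                                   # prescription dose (Gy)
target = rng.normal(52.0, 1.0, 5000)        # voxel doses inside the target
outside = rng.normal(20.0, 10.0, 20000)     # voxel doses outside the target
body = np.concatenate([target, outside])

ci = np.count_nonzero(body >= rx) / target.size   # RTOG CI = V_rx / TV
d2, d98 = np.percentile(target, [98, 2])          # near-max and near-min dose
hi = d2 / d98                                     # homogeneity ratio

assert 0.9 < ci < 1.5        # close to 1 = conformal prescription isodose
assert hi >= 1.0             # D2% can never fall below D98%
```

    Under these conventions a CI near 1, as reported for the SFF plans, means the prescription isodose hugs the target, while the VMAT values of 1.54 and 1.83 indicate substantial spill outside it.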

  20. Monte Carlo techniques for the study of cancer patients fractionation in head and neck treated with radiotherapy; Tecnicas de Monte Carlo para el estudio del fraccionamiento en pacientes de cancer de cabeza y cuello tratados con radioterapia

    Energy Technology Data Exchange (ETDEWEB)

    Carrasco Herrera, M. A.; Jimenez Dominguez, M.; Perucha Ortega, M.; Herrador Cordoba, M.

    2011-07-01

    Dose fractionation schemes other than the standard one for head and neck cancer can, in some situations, yield a significant increase in local control and overall survival. There is clinical evidence of these results for hyperfractionated treatments, although the choice of the optimal fractionation is generally not derived from the results of any model. In this study, the tumor control probability (TCP) has been computed for various modified fractionation schemes (hypofractionated and hyperfractionated) using Monte Carlo simulation techniques.

  1. Probabilistic techniques using Monte Carlo sampling for multi- component system diagnostics

    International Nuclear Information System (INIS)

    Aumeier, S.E.; Lee, J.C.; Akcasu, A.Z.

    1995-01-01

    We outline the structure of a new approach to multi-component system fault diagnostics which utilizes detailed system simulation models, uncertain system observation data, statistical knowledge of system parameters, expert opinion, and component reliability data in an effort to identify incipient component performance degradations of arbitrary number and magnitude. The technique involves the use of multiple adaptive Kalman filters for fault estimation, the results of which are screened using standard hypothesis testing procedures to define a set of component events that could have transpired. Latin Hypercube sampling is then used to sample each of these feasible component events in terms of uncertain component reliability data and filter estimates. The capabilities of the procedure are demonstrated through the analysis of a simulated small-magnitude binary component fault in a boiling water reactor balance of plant. The results show that the procedure has the potential to be a very effective tool for incipient component fault diagnosis.
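    The Latin hypercube step can be illustrated generically (a textbook implementation, not the authors' code): each dimension of the unit cube is split into n equal-probability strata and exactly one sample is drawn per stratum, with the stratum order shuffled independently in each dimension.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, seed=0):
    """Stratified sampling on [0,1)^d: in every dimension exactly one
    sample falls in each of the n equal-probability strata."""
    rng = np.random.default_rng(seed)
    samples = np.empty((n_samples, n_dims))
    for j in range(n_dims):
        strata = rng.permutation(n_samples)            # shuffled stratum indices
        samples[:, j] = (strata + rng.random(n_samples)) / n_samples
    return samples

x = latin_hypercube(10, 3)
# Every dimension covers each stratum [k/10, (k+1)/10) exactly once.
for j in range(3):
    assert sorted((x[:, j] * 10).astype(int)) == list(range(10))
```

    Mapping each column through the inverse CDF of the corresponding reliability distribution then yields the stratified parameter samples used to score the feasible component events.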

  2. Beveled fiber-optic probe couples a ball lens for improving depth-resolved fluorescence measurements of layered tissue: Monte Carlo simulations

    International Nuclear Information System (INIS)

    Jaillon, Franck; Zheng Wei; Huang Zhiwei

    2008-01-01

    In this study, we evaluate the feasibility of designing a beveled fiber-optic probe coupled with a ball lens for improving depth-resolved fluorescence measurements of epithelial tissue using Monte Carlo (MC) simulations. The results show that by using the probe configuration with a beveled tip collection fiber and a flat tip excitation fiber associated with a ball lens, discrimination of fluorescence signals generated in different tissue depths is achievable. In comparison with a flat-tip collection fiber, the use of a large bevel angled collection fiber enables a better differentiation between the shallow and deep tissue layers by changing the excitation-collection fiber separations. This work suggests that the beveled fiber-optic probe coupled with a ball lens has the potential to facilitate depth-resolved fluorescence measurements of epithelial tissues

  3. An Unsplit Monte-Carlo solver for the resolution of the linear Boltzmann equation coupled to (stiff) Bateman equations

    Science.gov (United States)

    Bernede, Adrien; Poëtte, Gaël

    2018-02-01

    In this paper, we are interested in the resolution of the time-dependent problem of particle transport in a medium whose composition evolves with time due to interactions. As a constraint, we want to use a Monte-Carlo (MC) scheme for the transport phase. A common resolution strategy consists in a splitting between the MC/transport phase and the time discretization scheme/medium evolution phase. After going over and illustrating the main drawbacks of split solvers in a simplified configuration (monokinetic, scalar Bateman problem), we build a new Unsplit MC (UMC) solver improving the accuracy of the solutions, avoiding numerical instabilities, and less sensitive to time discretization. The new solver is essentially based on a Monte Carlo scheme with time-dependent cross sections, implying the on-the-fly resolution of a reduced model for each MC particle describing the time evolution of the matter along its flight path.
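    The unsplit idea can be illustrated on a toy problem in which the cross section seen by a particle varies during its flight, say Sigma(t) = s0 + a*t as a linearised stand-in for the Bateman-driven medium evolution (all symbols and numbers below are illustrative): the interaction time is obtained by inverting the optical-depth equation on the fly instead of freezing Sigma over a splitting step.

```python
import numpy as np

s0, a = 0.8, 0.3                   # Sigma(t) = s0 + a*t  (illustrative rates)

def sample_flight_time(t0, u):
    """Invert  integral_{t0}^{t0+tau} Sigma(t) dt = -ln(u)  for tau,
    i.e. take the positive root of (a/2) tau^2 + (s0 + a t0) tau + ln(u) = 0."""
    eta = -np.log(u)               # sampled optical depth
    b = s0 + a * t0                # cross section at departure time
    return (-b + np.sqrt(b * b + 2.0 * a * eta)) / a

rng = np.random.default_rng(3)
t0 = 0.5
u = rng.random(1000)
tau = sample_flight_time(t0, u)

# The accumulated optical depth along each sampled flight equals -ln(u),
# so no frozen cross section (and no splitting step) is needed.
depth = s0 * tau + 0.5 * a * ((t0 + tau)**2 - t0**2)
assert np.allclose(depth, -np.log(u))
```

    In the full solver the role of the linear Sigma(t) is played by the reduced Bateman model integrated along each particle's flight path.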

  4. Higgs production enhancement in P-P collisions using Monte Carlo techniques at √s = 13 TeV

    Directory of Open Access Journals (Sweden)

    Soleiman M.H.M.

    2017-01-01

    Full Text Available A precise estimation of the amount of enhancement in Higgs boson production through pp collisions at ultra-relativistic energies throughout promotion of the gluon distribution function inside the protons before the collision is presented here. The study is based mainly on the available Monte Carlo event generators (PYTHIA 8.2.9, SHERPA 2.1.0) running on PCs and CERNX-Machine, respectively, and using the extended invariant mass technique. Generated samples of 1000 events from PYTHIA 8.2.9 and SHERPA 2.1.0 at √s = 13 TeV are used in the investigation of the effect of replacing the parton distribution function (PDF) on the Higgs production enhancement. The CTEQ66 and MSRTW2004nlo parton distribution functions are used alternatively on PYTHIA 8.2.9 and SHERPA 2.1.0 event generators in companion with the effects of allowing initial state and final state radiations (ISR and FSR) to obtain evidence on the enhancement of the SM-Higgs production depending on the field theoretical model of SM. It is found that the replacement of PDFs will lead to a significant change in the SM-Higgs production, and the effect of allowing or denying any of ISR or FSR is sound for the two event generators but may be unrealistic in PYTHIA 8.2.9.

  5. Higgs production enhancement in P-P collisions using Monte Carlo techniques at √s = 13 TeV

    Science.gov (United States)

    Soleiman, M. H. M.; Abdel-Aziz, S. S.; Sobhi, M. S. E.

    2017-06-01

    A precise estimation of the amount of enhancement in Higgs boson production through pp collisions at ultra-relativistic energies throughout promotion of the gluon distribution function inside the protons before the collision is presented here. The study is based mainly on the available Monte Carlo event generators (PYTHIA 8.2.9, SHERPA 2.1.0) running on PCs and CERNX-Machine, respectively, and using the extended invariant mass technique. Generated samples of 1000 events from PYTHIA 8.2.9 and SHERPA 2.1.0 at √s = 13 TeV are used in the investigation of the effect of replacing the parton distribution function (PDF) on the Higgs production enhancement. The CTEQ66 and MSRTW2004nlo parton distribution functions are used alternatively on PYTHIA 8.2.9 and SHERPA 2.1.0 event generators in companion with the effects of allowing initial state and final state radiations (ISR and FSR) to obtain evidence on the enhancement of the SM-Higgs production depending on the field theoretical model of SM. It is found that the replacement of PDFs will lead to a significant change in the SM-Higgs production, and the effect of allowing or denying any of ISR or FSR is sound for the two event generators but may be unrealistic in PYTHIA 8.2.9.
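    The abstract does not spell out its 'extended' variant of the technique, so only the core reconstruction is sketched here: the invariant mass of a candidate is m^2 = (sum E)^2 - |sum p|^2 over the final-state four-momenta.

```python
import numpy as np

def invariant_mass(four_momenta):
    """m^2 = (sum E)^2 - |sum p|^2 for four-momenta (E, px, py, pz) in GeV."""
    tot = np.sum(four_momenta, axis=0)
    return np.sqrt(tot[0]**2 - np.sum(tot[1:]**2))

# Two back-to-back massless decay products of a 125 GeV resonance at rest:
p1 = np.array([62.5, 0.0, 0.0,  62.5])
p2 = np.array([62.5, 0.0, 0.0, -62.5])
assert np.isclose(invariant_mass([p1, p2]), 125.0)
```

    In an analysis, this mass would be histogrammed for candidate combinations in each generated event, and the production enhancement read off from the peak region under the different PDF and ISR/FSR settings.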

  6. Investigation of Cu(In,Ga)Se{sub 2} using Monte Carlo and the cluster expansion technique

    Energy Technology Data Exchange (ETDEWEB)

    Ludwig, Christian D.R.; Gruhn, Thomas; Felser, Claudia [Institute of Inorganic and Analytical Chemistry, Johannes Gutenberg-University, Mainz (Germany); Windeln, Johannes [IBM Germany, Mgr. Technology Center ISC EMEA, Mainz (Germany)

    2010-07-01

    CIGS-based solar cells are among the most promising thin-film technologies for cheap yet efficient modules. They have been investigated for many years, but the full potential of CIGS cells has not yet been exhausted and many effects are not understood. For instance, the band gap of the absorber material Cu(In,Ga)Se{sub 2} varies with Ga content. The question why solar cells with high Ga content have low efficiencies, despite the fact that the band gap should have the optimum value, is still unanswered. We are using Monte Carlo simulations in combination with a cluster expansion to investigate the homogeneity of the In-Ga distribution as a possible cause of the low efficiency of cells with high Ga content. The cluster expansion is created by a fit to ab initio electronic structure energies. The results we found are crucial for the processing of solar cells, shed light on structural properties and give hints on how to significantly improve solar cell performance. Above the transition temperature from the separated to the mixed phase, we observe different sizes of the In and Ga domains for a given temperature. The In domains in the Ga-rich compound are smaller and less abundant than the Ga domains in the In-rich compound. This translates into the Ga-rich material being less homogeneous.
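    The workflow can be caricatured with a cluster expansion truncated at a single nearest-neighbour pair term, i.e. an Ising-like toy rather than the ab-initio-fitted expansion of the paper, sampled with Metropolis Monte Carlo flips of the In/Ga site occupations (all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
L, J, kT = 16, 0.02, 0.025         # lattice size, pair ECI (eV), temperature (eV)
spins = rng.choice([-1, 1], size=(L, L))   # +1 = In, -1 = Ga on a site

def site_energy(s, i, j):
    """Pair-term energy of site (i, j) with its four periodic neighbours."""
    nn = s[(i+1) % L, j] + s[(i-1) % L, j] + s[i, (j+1) % L] + s[i, (j-1) % L]
    return J * s[i, j] * nn

for _ in range(20000):             # Metropolis single-site flips
    i, j = rng.integers(L, size=2)
    dE = -2.0 * site_energy(spins, i, j)
    if dE <= 0 or rng.random() < np.exp(-dE / kT):
        spins[i, j] *= -1

def total_energy(s):
    """Each nearest-neighbour pair counted once via periodic rolls."""
    return J * np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1)))

# J > 0 favours unlike neighbours (a mixed In/Ga arrangement), so the run
# should end well below the energy of a random arrangement (about 0).
assert total_energy(spins) < -2.0
```

    In the actual study the pair term is replaced by many fitted effective cluster interactions, so the balance between mixing and domain formation, and hence the homogeneity observed at a given temperature, comes out of the ab initio energies rather than being put in by hand.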

  7. TRIPOLI-4.3.3 and 4.4, Coupled Neutron, Photon, Electron, Positron 3-D, Time Dependent Monte-Carlo, Transport Calculation

    International Nuclear Information System (INIS)

    Both, J.P.; Mazzolo, A.; Petit, O.; Peneliau, Y.; Roesslinger, B.

    2008-01-01

    1 - Description of program or function: TRIPOLI-4 is a general purpose radiation transport code. It uses the Monte Carlo method to simulate neutron and photon behaviour in three-dimensional geometries. The main areas of applications include but are not restricted to: radiation protection and shielding, nuclear criticality safety, fission and fusion reactor design, nuclear instrumentation. In addition, it can simulate electron-photon cascade showers. It computes particle fluxes and currents and several related physical quantities such as reaction rates, dose rates, heating, energy deposition, effective multiplication factor, perturbation effects due to density, concentration or partial cross-section variations. The summary specifies the types of particles, the nuclear data format and cross sections, the energy ranges, the geometry, the sources, the calculated physical quantities and estimators, the biasing, the time-dependent transport for neutrons, the perturbation, the coupled particle transport and the qualification benchmarks. Data libraries distributed with TRIPOLI-4: ENDFB6R4, ENDL, JEF2, Mott-Rutherford and Qfission. NEA-1716/04: TRIPOLI-4.4 does not contain the source programs. New features available in TRIPOLI-4 version 4 concern the following points: new biasing features, neutron collision in multigroup homogenized mode, display of the collision sites, ENDF format evaluations, computation of the gamma source produced by neutrons, output format for all results, verbose level for output warnings, photon reaction rates, XML format output, combinatorial geometry checks, Green's functions files, and neutronics-shielding coupling. 2 - Methods: The geometry package allows the user to describe a three-dimensional configuration by means of surfaces (as in the MCNP code) and also through predefined shapes combined with operators (union, intersection, subtraction...).
It is also possible to repeat a pattern to build a network of networks.

  8. Use of the X-Ray diffraction technique in the assessment of air quality at Presidente Antônio Carlos Avenue, Belo Horizonte

    Energy Technology Data Exchange (ETDEWEB)

    Cesar, Raisa Helena Sant’Ana; Barreto, Alberto Avelar; Cruz, Ananda Borjaille; Barbosa, João Batista Santos, E-mail: raisa.cesar@cdtn.br, E-mail: aab@cdtn.br, E-mail: abc@cdtn.br, E-mail: jbsb@cdtn.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte/MG (Brazil); Silva, Igor Felipe Moura, E-mail: igorfelipedx@ufmg.br [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Departamento de Energia Nuclear

    2017-07-01

    Belo Horizonte is the sixth most populous city in Brazil, has the third largest vehicle fleet, and lies close to large mineralogical reserves such as the Quadrilátero Ferrífero. These factors, coupled with industrial growth and civil construction, raise concerns in society about ambient air quality. A historically problematic contaminant is particulate matter (PM), a suspended mixture of solid and liquid particles that damages both the environment and human health. These particles can cause anything from a simple infection to death, and their size is a fundamental factor in evaluating the impact caused. In this context, this research investigated air quality with respect to PM10 (particles smaller than 10 microns) on a high-traffic avenue of the Minas Gerais capital, Presidente Antônio Carlos Avenue. This avenue is one of the main accesses to the Pampulha region, an area of great touristic and sporting relevance for the city, and it underwent duplication works and the implementation of exclusive lanes for public transport buses in preparation for the 2014 World Cup. The work involved monitoring on the avenue throughout 2014 in order to collect the PM10 present in the ambient air. The PM10 was characterized using the X-ray diffraction technique, one of the main tools of mineralogical characterization owing to its simplicity, speed and the reliability of its results. The minerals detected in the analysis were evaluated for their possible origin, generating information on PM10-emitting sources that is fundamental for managing air quality in the city. (author)

  9. Use of the X-Ray diffraction technique in the assessment of air quality at Presidente Antônio Carlos Avenue, Belo Horizonte

    International Nuclear Information System (INIS)

    Cesar, Raisa Helena Sant’Ana; Barreto, Alberto Avelar; Cruz, Ananda Borjaille; Barbosa, João Batista Santos; Silva, Igor Felipe Moura

    2017-01-01

    Belo Horizonte is the sixth most populous city in Brazil, has the third largest vehicle fleet, and lies close to large mineralogical reserves such as the Quadrilátero Ferrífero. These factors, coupled with industrial growth and civil construction, raise concerns in society about ambient air quality. A historically problematic contaminant is particulate matter (PM), a suspended mixture of solid and liquid particles that damages both the environment and human health. These particles can cause anything from a simple infection to death, and their size is a fundamental factor in evaluating the impact caused. In this context, this research investigated air quality with respect to PM10 (particles smaller than 10 microns) on a high-traffic avenue of the Minas Gerais capital, Presidente Antônio Carlos Avenue. This avenue is one of the main accesses to the Pampulha region, an area of great touristic and sporting relevance for the city, and it underwent duplication works and the implementation of exclusive lanes for public transport buses in preparation for the 2014 World Cup. The work involved monitoring on the avenue throughout 2014 in order to collect the PM10 present in the ambient air. The PM10 was characterized using the X-ray diffraction technique, one of the main tools of mineralogical characterization owing to its simplicity, speed and the reliability of its results. The minerals detected in the analysis were evaluated for their possible origin, generating information on PM10-emitting sources that is fundamental for managing air quality in the city. (author)

  10. Applications guide to the RSIC-distributed version of the MCNP code (coupled Monte Carlo neutron-photon Code)

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1985-09-01

    An overview of the RSIC-distributed version of the MCNP code (a coupled Monte Carlo neutron-photon code) is presented. All general features of the code, from machine hardware requirements to theoretical details, are discussed. The current nuclide cross-section and other libraries available in the standard code package are specified, and a realistic example of the flexible geometry input is given. Standard and nonstandard source, estimator, and variance-reduction procedures are outlined. Examples of correct usage and possible misuse of certain code features are presented graphically and in standard output listings. Finally, itemized summaries of sample problems, various MCNP code documentation, and future work are given

  11. Joint Data Assimilation and Parameter Calibration in on-line groundwater modelling using Sequential Monte Carlo techniques

    Science.gov (United States)

    Ramgraber, M.; Schirmer, M.

    2017-12-01

    As computational power grows and wireless sensor networks find their way into common practice, it becomes increasingly feasible to pursue on-line numerical groundwater modelling. The reconciliation of model predictions with sensor measurements often necessitates the application of Sequential Monte Carlo (SMC) techniques, most prominently represented by the Ensemble Kalman Filter. In the pursuit of on-line predictions it seems advantageous to transcend the scope of pure data assimilation and incorporate on-line parameter calibration as well. Unfortunately, the interplay between shifting model parameters and transient states is non-trivial. Several recent publications (e.g. Chopin et al., 2013, Kantas et al., 2015) in the field of statistics discuss potential algorithms addressing this issue. However, most of these are computationally intractable for on-line application. In this study, we investigate to what extent compromises between mathematical rigour and computational restrictions can be made within the framework of on-line numerical modelling of groundwater. Preliminary studies are conducted in a synthetic setting, with the goal of transferring the conclusions drawn into application in a real-world setting. To this end, a wireless sensor network has been established in the valley aquifer around Fehraltorf, characterized by a highly dynamic groundwater system and located about 20 km to the East of Zürich, Switzerland. By providing continuous probabilistic estimates of the state and parameter distribution, a steady base for branched-off predictive scenario modelling could be established, providing water authorities with advanced tools for assessing the impact of groundwater management practices. Chopin, N., Jacob, P.E. and Papaspiliopoulos, O. (2013): SMC2: an efficient algorithm for sequential analysis of state space models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 75 (3), p. 397-426. Kantas, N., Doucet, A., Singh, S
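The joint state-parameter estimation described above can be illustrated with a minimal bootstrap particle filter, one of the simplest SMC schemes. The sketch below uses a toy 1-D random-walk model with an unknown drift parameter; it is not the authors' groundwater code, and all model names, noise levels and the drift value are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(observations, n_particles=500, obs_noise=0.5):
    # Each particle carries both a state estimate and a drift-parameter estimate.
    states = rng.normal(0.0, 1.0, n_particles)
    drifts = rng.normal(0.0, 0.5, n_particles)   # unknown parameter
    for y in observations:
        # Propagate: each state advances by its particle's drift plus noise.
        states = states + drifts + rng.normal(0.0, 0.1, n_particles)
        # Weight particles by the likelihood of the new observation.
        w = np.exp(-0.5 * ((y - states) / obs_noise) ** 2)
        w /= w.sum()
        # Resample states and parameters together according to the weights.
        idx = rng.choice(n_particles, n_particles, p=w)
        states, drifts = states[idx], drifts[idx]
        # A small parameter jitter guards against sample impoverishment.
        drifts = drifts + rng.normal(0.0, 0.01, n_particles)
    return states.mean(), drifts.mean()

# Synthetic data: a random walk with true drift 0.3, observed with noise.
truth = np.cumsum(np.full(50, 0.3))
obs = truth + rng.normal(0.0, 0.5, 50)
state_est, drift_est = particle_filter(obs)
```

After 50 assimilation steps the particle cloud concentrates around both the true state and the true drift, which is the behaviour the resampling-plus-jitter compromise is meant to buy at tractable cost.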

  12. Development of phased mission analysis program with Monte Carlo method. Improvement of the variance reduction technique with biasing towards top event

    International Nuclear Information System (INIS)

    Yang Jinan; Mihara, Takatsugu

    1998-12-01

    This report presents a variance reduction technique to estimate the reliability and availability of highly complex systems during a phased mission time using Monte Carlo simulation. In this study, we introduced a variance reduction technique based on a concept of distance between the present system state and the cut set configurations. Using this technique, it becomes possible to bias the transition from the operating states to the failed states of components towards the closest cut set, so that a component failure drives the system towards a cut set configuration more effectively. JNC developed the PHAMMON (Phased Mission Analysis Program with Monte Carlo Method) code, which involved two kinds of variance reduction techniques: (1) forced transition, and (2) failure biasing. However, these techniques did not guarantee an effective reduction in variance. For further improvement, a variance reduction technique incorporating the distance concept was introduced into the PHAMMON code, and numerical calculations were carried out for different design cases of the decay heat removal system in a large fast breeder reactor. Our results indicate that the addition of this technique incorporating the distance concept is an effective means of further reducing the variance. (author)
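The failure-biasing idea, sampling component failures with an inflated probability while carrying a likelihood-ratio weight so the estimator stays unbiased, can be sketched as follows. This toy example (a 2-out-of-3 failure criterion with invented probabilities) is not the PHAMMON implementation and omits the distance-to-cut-set ranking; it only shows why biasing recovers rare failure probabilities that an analog run of the same size misses.

```python
import random

def estimate_failure(p_fail=0.001, n_comp=3, k_fail=2, bias=0.1,
                     trials=20000, seed=1):
    """Return (analog, biased) estimates of P(at least k_fail of n_comp
    components fail)."""
    rng = random.Random(seed)
    # Analog simulation: the rare event is almost never observed.
    analog = sum(
        sum(rng.random() < p_fail for _ in range(n_comp)) >= k_fail
        for _ in range(trials)
    ) / trials
    # Failure biasing: sample each failure with inflated probability
    # `bias` and multiply in the likelihood ratio for each outcome.
    total = 0.0
    for _ in range(trials):
        weight, failures = 1.0, 0
        for _ in range(n_comp):
            if rng.random() < bias:
                failures += 1
                weight *= p_fail / bias
            else:
                weight *= (1.0 - p_fail) / (1.0 - bias)
        if failures >= k_fail:
            total += weight
    return analog, total / trials

analog_est, biased_est = estimate_failure()
```

The exact probability here is about 3×10⁻⁶, far below the resolution of 20,000 analog trials, yet the weighted estimator recovers it with a few percent relative error.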

  13. Use of Ionizing Radiation by the students of the Faculty of Odontology of the Universidad de San Carlos de Guatemala. Radiographic Techniques evaluation

    International Nuclear Information System (INIS)

    Ramirez Montenegro, E.S. del

    2000-01-01

    In the present thesis, an evaluation was made of the radiographic techniques used by students in the clinics of the Faculty of Odontology of the Universidad de San Carlos. The sample comprised 56 fourth- and fifth-year students; a survey form was designed covering the radiographic technique, the patient, film placement and cone alignment, as well as exposure repetitions and their causes. It was concluded that the paralleling technique is used by 46% of the students, the bisecting-angle technique by 41%, both techniques by 13%, and the bitewing technique by 100%. Regarding equipment set-up prior to exposure, 88% of the students set up the equipment acceptably; 88% used the XCP accessory to hold the film, but without disinfection procedures and without setting it up properly. A total of 92% of the evaluated students had to repeat exposures owing to incorrect application of the radiographic techniques

  14. Calculation of absorbed fractions to human skeletal tissues due to alpha particles using the Monte Carlo and 3-d chord-based transport techniques

    Energy Technology Data Exchange (ETDEWEB)

    Hunt, J.G. [Institute of Radiation Protection and Dosimetry, Av. Salvador Allende s/n, Recreio, Rio de Janeiro, CEP 22780-160 (Brazil); Watchman, C.J. [Department of Radiation Oncology, University of Arizona, Tucson, AZ, 85721 (United States); Bolch, W.E. [Department of Nuclear and Radiological Engineering, University of Florida, Gainesville, FL, 32611 (United States); Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611 (United States)

    2007-07-01

    Absorbed fraction (AF) calculations to the human skeletal tissues due to alpha particles are of interest to the internal dosimetry of occupationally exposed workers and members of the public. The transport of alpha particles through the skeletal tissue is complicated by the detailed and complex microscopic histology of the skeleton. In this study, both Monte Carlo and chord-based techniques were applied to the transport of alpha particles through 3-D micro-CT images of the skeletal microstructure of trabecular spongiosa. The Monte Carlo program used was 'Visual Monte Carlo-VMC'. VMC simulates the emission of the alpha particles and their subsequent energy deposition track. The second method applied to alpha transport is the chord-based technique, which randomly generates chord lengths across bone trabeculae and the marrow cavities via alternate and uniform sampling of their cumulative density functions. This paper compares the AF of energy to two radiosensitive skeletal tissues, active marrow and shallow active marrow, obtained with these two techniques. (authors)
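The chord-based technique rests on inverse-CDF sampling of chord lengths. As a hedged illustration using textbook geometry (a sphere under isotropic μ-randomness, not the paper's trabecular micro-CT microstructure), the chord-length CDF F(ℓ) = ℓ²/(2R)² on [0, 2R] can be inverted in closed form:

```python
import random

def mean_sphere_chord(radius=1.0, n=100_000, seed=42):
    # Isotropic (mu-randomness) chords through a sphere have CDF
    # F(l) = l^2 / (2R)^2 on [0, 2R]; inverting gives l = 2R * sqrt(u).
    rng = random.Random(seed)
    chords = [2.0 * radius * rng.random() ** 0.5 for _ in range(n)]
    return sum(chords) / n

# The mean chord of any convex body is 4V/S; for a unit sphere, 4/3.
mean_chord = mean_sphere_chord()
```

For the skeletal application the same sampling idea is driven by measured (tabulated) chord-length distributions of the trabeculae and marrow cavities rather than an analytic CDF.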

  15. Using full configuration interaction quantum Monte Carlo in a seniority zero space to investigate the correlation energy equivalence of pair coupled cluster doubles and doubly occupied configuration interaction

    International Nuclear Information System (INIS)

    Shepherd, James J.; Henderson, Thomas M.; Scuseria, Gustavo E.

    2016-01-01

    Over the past few years, pair coupled cluster doubles (pCCD) has shown promise for the description of strong correlation. This promise is related to its apparent ability to match results from doubly occupied configuration interaction (DOCI), even though the latter method has exponential computational cost. Here, by modifying the full configuration interaction quantum Monte Carlo algorithm to sample only the seniority zero sector of Hilbert space, we show that the DOCI and pCCD energies are in agreement for a variety of 2D Hubbard models, including for systems well out of reach for conventional configuration interaction algorithms. Our calculations are aided by the sign problem being much reduced in the seniority zero space compared with the full space. We present evidence for this and then discuss the sign problem in terms of the wave function of the system which appears to have a simplified sign structure.

  16. Monte Carlo: Basics

    OpenAIRE

    Murthy, K. P. N.

    2001-01-01

    An introduction to the basics of Monte Carlo is given. The topics covered include, sample space, events, probabilities, random variables, mean, variance, covariance, characteristic function, chebyshev inequality, law of large numbers, central limit theorem (stable distribution, Levy distribution), random numbers (generation and testing), random sampling techniques (inversion, rejection, sampling from a Gaussian, Metropolis sampling), analogue Monte Carlo and Importance sampling (exponential b...
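Two of the random sampling techniques listed, inversion and rejection, can be sketched in a few lines. This is a generic illustration, not code from the cited lecture notes:

```python
import math
import random

rng = random.Random(7)

def inversion_exponential(lam, n):
    # Inversion: if U ~ U(0,1), then -ln(1-U)/lam ~ Exponential(lam).
    return [-math.log(1.0 - rng.random()) / lam for _ in range(n)]

def rejection_gaussian(n):
    # Rejection: sample |X| for X standard normal using an Exponential(1)
    # envelope; the acceptance probability works out to exp(-(y-1)^2/2),
    # after which a random sign is attached.
    out = []
    while len(out) < n:
        y = -math.log(1.0 - rng.random())            # Exp(1) proposal
        if rng.random() <= math.exp(-0.5 * (y - 1.0) ** 2):
            out.append(y if rng.random() < 0.5 else -y)
    return out

exp_mean = sum(inversion_exponential(2.0, 50_000)) / 50_000       # ~ 1/2
norm_var = sum(x * x for x in rejection_gaussian(50_000)) / 50_000  # ~ 1
```

Inversion needs an invertible CDF; rejection trades that requirement for some wasted proposals, which is the basic tension the review's sampling section explores.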

  17. Authenticity study of Phyllanthus species by NMR and FT-IR techniques coupled with chemometric methods

    International Nuclear Information System (INIS)

    Santos, Maiara S.; Pereira-Filho, Edenir R.; Ferreira, Antonio G.; Boffo, Elisangela F.; Figueira, Glyn M.

    2012-01-01

    The importance of medicinal plants and their use in industrial applications is increasing worldwide, especially in Brazil. Phyllanthus species, popularly known as 'quebra-pedras' in Brazil, are used in folk medicine for treating urinary infections and renal calculus. This paper reports an authenticity study, based on herbal drugs from Phyllanthus species, involving commercial and authentic samples using spectroscopic techniques: FT-IR, ¹H HR-MAS NMR and ¹H NMR in solution, combined with chemometric analysis. The spectroscopic techniques evaluated, coupled with chemometric methods, have great potential in the investigation of complex matrices. Furthermore, several metabolites were identified by the NMR techniques. (author)

  18. Authenticity study of Phyllanthus species by NMR and FT-IR Techniques coupled with chemometric methods

    Directory of Open Access Journals (Sweden)

    Maiara S. Santos

    2012-01-01

    The importance of medicinal plants and their use in industrial applications is increasing worldwide, especially in Brazil. Phyllanthus species, popularly known as "quebra-pedras" in Brazil, are used in folk medicine for treating urinary infections and renal calculus. This paper reports an authenticity study, based on herbal drugs from Phyllanthus species, involving commercial and authentic samples using spectroscopic techniques: FT-IR, ¹H HR-MAS NMR and ¹H NMR in solution, combined with chemometric analysis. The spectroscopic techniques evaluated, coupled with chemometric methods, have great potential in the investigation of complex matrices. Furthermore, several metabolites were identified by the NMR techniques.

  19. Authenticity study of Phyllanthus species by NMR and FT-IR techniques coupled with chemometric methods

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Maiara S.; Pereira-Filho, Edenir R.; Ferreira, Antonio G. [Universidade Federal de Sao Carlos (UFSCAR), SP (Brazil). Dept. de Quimica; Boffo, Elisangela F. [Universidade Federal da Bahia (UFBA), Salvador, BA (Brazil). Inst. de Quimica; Figueira, Glyn M., E-mail: maiarassantos@yahoo.com.br [Universidade Estadual de Campinas (UNICAMP), Campinas, SP (Brazil). Centro Pluridisciplinar de Pesquisas Quimicas, Biologicas e Agricolas

    2012-07-01

    The importance of medicinal plants and their use in industrial applications is increasing worldwide, especially in Brazil. Phyllanthus species, popularly known as 'quebra-pedras' in Brazil, are used in folk medicine for treating urinary infections and renal calculus. This paper reports an authenticity study, based on herbal drugs from Phyllanthus species, involving commercial and authentic samples using spectroscopic techniques: FT-IR, {sup 1}H HR-MAS NMR and {sup 1}H NMR in solution, combined with chemometric analysis. The spectroscopic techniques evaluated, coupled with chemometric methods, have great potential in the investigation of complex matrices. Furthermore, several metabolites were identified by the NMR techniques. (author)

  1. Monte Carlo Simulation of the Time-Of-Flight Technique for the Measurement of Neutron Cross-section in the Pohang Neutron Facility

    Energy Technology Data Exchange (ETDEWEB)

    An, So Hyun; Lee, Young Ouk; Lee, Cheol Woo [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Lee, Young Seok [National Fusion Research Institute, Daejeon (Korea, Republic of)

    2007-10-15

    It is essential that neutron cross sections be measured precisely for many areas of research and technology. In Korea, such experiments have been performed at the Pohang Neutron Facility (PNF), a pulsed neutron facility based on a 100 MeV electron linear accelerator. At PNF, neutron energy spectra have been measured for different water levels inside the moderator and compared with the results of MCNPX calculations. The optimum size of the water moderator was determined on the basis of these results. In this study, Monte Carlo simulations of the TOF technique were performed and neutron spectra were calculated to predict the measurements.

  2. A Computational Study on the Magnetic Resonance Coupling Technique for Wireless Power Transfer

    Directory of Open Access Journals (Sweden)

    Zakaria N.A.

    2017-01-01

    Non-radiative wireless power transfer (WPT) using the magnetic resonance coupling (MRC) technique has recently been a topic of discussion among researchers. This technique is mostly discussed for mid-range wireless power transmission scenarios, in which both distance and efficiency matter. The efficiency of a WPT system varies as the coupling distance between the two coils changes, which makes highly efficient power transfer a decisive issue. This paper presents case studies on the relationship between operating range and the efficiency of the MRC technique. Demonstrative WPT systems operating at two different frequencies are projected in order to verify performance. The resonance frequencies used are below 100 MHz, over a coupling range of 10 cm to 20 cm.

  3. Investigation of radiation effects in Hiroshima and Nagasaki using a general Monte Carlo-discrete ordinates coupling scheme

    International Nuclear Information System (INIS)

    Cramer, S.N.; Slater, C.O.

    1990-01-01

    A general adjoint Monte Carlo-forward discrete ordinates radiation transport calculational scheme has been created to study the effects of the radiation environment in Hiroshima and Nagasaki due to the bombing of these two cities. Various such studies for comparison with physical data have progressed since the end of World War II with advancements in computing machinery and computational methods. These efforts have intensified in the last several years with the U.S.-Japan joint reassessment of nuclear weapons dosimetry in Hiroshima and Nagasaki. Three principal areas of investigation are: (1) to determine by experiment and calculation the neutron and gamma-ray energy and angular spectra and total yield of the two weapons; (2) using these weapons descriptions as source terms, to compute radiation effects at several locations in the two cities for comparison with experimental data collected at various times after the bombings and thus validate the source terms; and (3) to compute radiation fields at the known locations of fatalities and surviving individuals at the time of the bombings and thus establish an absolute cause-and-effect relationship between the radiation received and the resulting injuries to these individuals and any of their descendants as indicated by their medical records. It is in connection with the second and third items, the determination of the radiation effects and the dose received by individuals, that the current study is concerned

  4. Determination of output factor for 6 MV small photon beam: comparison between Monte Carlo simulation technique and microDiamond detector

    International Nuclear Information System (INIS)

    Krongkietlearts, K; Tangboonduangjit, P; Paisangittisakul, N

    2016-01-01

    In order to improve quality of life for cancer patients, radiation techniques are constantly evolving. Two modern techniques, intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT), are particularly promising. They comprise many small beam sizes (beamlets) with various intensities to deliver the intended radiation dose to the tumor with minimal dose to the nearby normal tissue. This study investigates whether the microDiamond detector (PTW), a synthetic single-crystal diamond detector, is suitable for small-field output factor measurement. The results were compared with those measured by the stereotactic field detector (SFD) and with Monte Carlo simulation (EGSnrc/BEAMnrc/DOSXYZ). The Monte Carlo simulation was calibrated using the percentage depth dose and dose profile measured by the photon field detector (PFD) for a 10×10 cm² field at 100 cm SSD. The calculated and measured values are consistent, differing by no more than 1%. The output factors obtained from the microDiamond detector were compared with those from the SFD and the Monte Carlo simulation, and the results demonstrate a percentage difference of less than 2%. (paper)

  5. Monte Carlo and Quasi-Monte Carlo Sampling

    CERN Document Server

    Lemieux, Christiane

    2009-01-01

    Presents essential tools for using quasi-Monte Carlo sampling in practice. This book focuses on issues related to Monte Carlo methods - uniform and non-uniform random number generation, variance reduction techniques. It covers several aspects of quasi-Monte Carlo methods.
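As a small illustration of the quasi-Monte Carlo idea, a low-discrepancy sequence such as the base-2 van der Corput sequence can replace pseudo-random points in an integration estimate. This is a generic example, not code taken from the book:

```python
def van_der_corput(n, base=2):
    # Reflect the base-b digits of n about the radix point.
    q, bk = 0.0, 1.0 / base
    while n > 0:
        n, r = divmod(n, base)
        q += r * bk
        bk /= base
    return q

# Integrate f(x) = x^2 on [0, 1] (exact value 1/3) with low-discrepancy
# points; the error decays roughly like log(N)/N instead of 1/sqrt(N).
N = 4096
qmc_estimate = sum(van_der_corput(i) ** 2 for i in range(1, N + 1)) / N
```

The uniform random number generation and variance reduction topics in the book play the analogous role for ordinary Monte Carlo, where the error only shrinks like N^(-1/2).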

  6. Vibroacoustic Modeling of Mechanically Coupled Structures: Artificial Spring Technique Applied to Light and Heavy Mediums

    Directory of Open Access Journals (Sweden)

    L. Cheng

    1996-01-01

    This article deals with the modeling of vibrating structures immersed in both light and heavy fluids, and possible applications to noise control problems and industrial vessels containing fluids. A theoretical approach, using artificial spring systems to characterize the mechanical coupling between substructures, is extended to include fluid loading. A structure consisting of a plate-ended cylindrical shell and its enclosed acoustic cavity is analyzed. After a brief description of the proposed technique, a number of numerical results are presented. The analysis addresses the following specific issues: the coupling between the plate and the shell; the coupling between the structure and the enclosure; the possibilities and difficulties regarding internal soundproofing through modifications of the joint connections; and the effects of fluid loading on the vibration of the structure.

  7. Evaluating and adjusting 239Pu, 56Fe, 28Si and 95Mo nuclear data with a Monte Carlo technique

    International Nuclear Information System (INIS)

    Rochman, D.; Koning, A. J.

    2012-01-01

    In this paper, Monte Carlo optimization and nuclear data evaluation are combined to produce optimal adjusted nuclear data files. The methodology is based on the so-called 'Total Monte Carlo' and the TALYS system. Not only a single nuclear data file is produced for a given isotope, but virtually an infinite number, defining probability distributions for each nuclear quantity. Then each of these random nuclear data libraries is used in a series of benchmark calculations. With a goodness-of-fit estimator, best 239Pu, 56Fe, 28Si and 95Mo evaluations for that benchmark set can be selected. A few thousands of random files are used and each of them is tested with a large number of fast, thermal and intermediate energy criticality benchmarks. From this, the best performing random file is chosen and proposed as the optimum choice among the studied random set. (authors)

  8. Current medical research with the application of coupled techniques with mass spectrometry

    OpenAIRE

    Kałużna-Czaplińska, Joanna

    2011-01-01

    Summary The most effective methods of analysis of organic compounds in biological fluids are coupled chromatographic techniques. Capillary gas chromatography/mass spectrometry (GC-MS) allows the most efficient separation, identification and quantification of volatile metabolites in biological fluids. Liquid chromatography-mass spectrometry (LC-MS) is especially suitable for the analysis of non-volatile and/or thermally unstable compounds. A major drawback of liquid chromatography-mass spectro...

  9. Review of online coupling of sample preparation techniques with liquid chromatography.

    Science.gov (United States)

    Pan, Jialiang; Zhang, Chengjiang; Zhang, Zhuomin; Li, Gongke

    2014-03-07

    Sample preparation is still considered the bottleneck of the whole analytical procedure, and efforts have been directed towards automation, improved sensitivity and accuracy, and low consumption of organic solvents. The development of online sample preparation (SP) techniques coupled with liquid chromatography (LC) is a promising way to achieve these goals and has attracted great attention. This article reviews recent advances in online SP-LC techniques. Various online SP techniques are described and summarized, including solid-phase-based extraction, liquid-phase-based extraction assisted by membranes, microwave-assisted extraction, ultrasonic-assisted extraction, accelerated solvent extraction and supercritical fluid extraction. In particular, the coupling approaches of online SP-LC systems and the corresponding interfaces, such as the online injector, autosampler combined with a transport unit, desorption chamber and column switching, are discussed and reviewed in detail. Typical applications of the online SP-LC techniques are summarized, and the problems and expected trends in this field are discussed in order to encourage further development of online SP-LC techniques. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. A coupled piezoelectric–electromagnetic energy harvesting technique for achieving increased power output through damping matching

    International Nuclear Information System (INIS)

    Challa, Vinod R; Prasad, M G; Fisher, Frank T

    2009-01-01

    Vibration energy harvesting is being pursued as a means to power wireless sensors and ultra-low power autonomous devices. From a design standpoint, matching the electrical damping induced by the energy harvesting mechanism to the mechanical damping in the system is necessary for maximum efficiency. In this work two independent energy harvesting techniques are coupled to provide higher electrical damping within the system. Here the coupled energy harvesting device consists of a primary piezoelectric energy harvesting device to which an electromagnetic component is added to better match the total electrical damping to the mechanical damping in the system. The first coupled device has a resonance frequency of 21.6 Hz and generates a peak power output of ∼332 µW, compared to 257 and 244 µW obtained from the optimized, stand-alone piezoelectric and electromagnetic energy harvesting devices, respectively, resulting in a 30% increase in power output. A theoretical model has been developed which closely agrees with the experimental results. A second coupled device, which utilizes the d₃₃ piezoelectric mode, shows a 65% increase in power output in comparison to the corresponding stand-alone, single harvesting mode devices. This work illustrates the design considerations and limitations that one must consider to enhance device performance through the coupling of multiple harvesting mechanisms within a single energy harvesting device
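The damping-matching argument follows from the standard linear model of a resonant base-excited harvester, in which the average power at resonance is P = m·ζe·Y²·ω³ / (4(ζe + ζm)²) and peaks when the electrical damping ratio ζe equals the mechanical ratio ζm. A quick numerical check, with illustrative invented parameter values rather than the device in the paper:

```python
import math

def harvested_power(zeta_e, zeta_m=0.02, amplitude=1e-3, freq_hz=21.6, mass=1e-3):
    # Average power at resonance for a base-excited linear oscillator:
    # P = m * zeta_e * Y^2 * w^3 / (4 * (zeta_e + zeta_m)^2).
    omega = 2.0 * math.pi * freq_hz
    return mass * zeta_e * amplitude**2 * omega**3 / (4.0 * (zeta_e + zeta_m) ** 2)

# Scan the electrical damping ratio: power peaks where it matches the
# assumed mechanical damping ratio (0.02 here).
best_power, best_zeta = max(
    (harvested_power(z / 1000.0), z / 1000.0) for z in range(1, 101)
)
```

This is why adding the electromagnetic component helps when the piezoelectric mechanism alone induces too little electrical damping to reach the match point.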

  11. Monte Carlo principles and applications

    Energy Technology Data Exchange (ETDEWEB)

    Raeside, D E [Oklahoma Univ., Oklahoma City (USA). Health Sciences Center

    1976-03-01

    The principles underlying the use of Monte Carlo methods are explained, for readers who may not be familiar with the approach. The generation of random numbers is discussed, and the connection between Monte Carlo methods and random numbers is indicated. Outlines of two well established Monte Carlo sampling techniques are given, together with examples illustrating their use. The general techniques for improving the efficiency of Monte Carlo calculations are considered. The literature relevant to the applications of Monte Carlo calculations in medical physics is reviewed.

  12. Use of a Monte Carlo technique to complete a fragmented set of H2S emission rates from a wastewater treatment plant.

    Science.gov (United States)

    Schauberger, Günther; Piringer, Martin; Baumann-Stanzer, Kathrin; Knauder, Werner; Petz, Erwin

    2013-12-15

    The impact of ambient concentrations in the vicinity of a plant can only be assessed if the emission rate is known. In this study, based on measurements of ambient H2S concentrations and meteorological parameters, the a priori unknown emission rates of a tannery wastewater treatment plant are calculated by an inverse dispersion technique, using the Austrian regulatory Gaussian dispersion model. Following this method, emission data can be obtained, though only when the wind direction at the measurement station is leeward of the plant. Using inverse transform sampling, a Monte Carlo technique, the dataset can also be completed for those wind directions for which no ambient concentration measurements are available. For model validation, the measured ambient concentrations are compared with the ambient concentrations calculated from the synthetic emission data of the Monte Carlo model. The cumulative frequency distribution of this new dataset agrees well with the empirical data. This inverse transform sampling method is thus a useful supplement for calculating emission rates with the inverse dispersion technique. Copyright © 2013 Elsevier B.V. All rights reserved.
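Inverse transform sampling as used here draws synthetic values that reproduce the frequency distribution of a measured dataset. A minimal sketch with invented emission-rate numbers standing in for the tannery data:

```python
import random

def empirical_sampler(measured, seed=3):
    # Inverse transform sampling against the empirical CDF: a uniform
    # draw u selects the value at quantile u of the sorted measurements.
    data = sorted(measured)
    rng = random.Random(seed)
    def draw():
        u = rng.random()
        return data[min(int(u * len(data)), len(data) - 1)]
    return draw

# Hypothetical H2S emission rates standing in for the measured dataset.
measured_rates = [0.4, 0.4, 0.5, 0.7, 0.7, 0.7, 1.1, 1.5, 2.0, 3.2]
draw = empirical_sampler(measured_rates)
synthetic = [draw() for _ in range(20_000)]
```

The synthetic draws share the cumulative frequency distribution of the measurements, which is exactly the property the study exploits to fill in wind directions without leeward observations.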

  13. Diagrammatic Monte Carlo approach for diagrammatic extensions of dynamical mean-field theory: Convergence analysis of the dual fermion technique

    Science.gov (United States)

    Gukelberger, Jan; Kozik, Evgeny; Hafermann, Hartmut

    2017-07-01

    The dual fermion approach provides a formally exact prescription for calculating properties of a correlated electron system in terms of a diagrammatic expansion around dynamical mean-field theory (DMFT). Most practical implementations, however, neglect higher-order interaction vertices beyond two-particle scattering in the dual effective action and further truncate the diagrammatic expansion in the two-particle scattering vertex to a leading-order or ladder-type approximation. In this work, we compute the dual fermion expansion for the two-dimensional Hubbard model including all diagram topologies with two-particle interactions to high orders by means of a stochastic diagrammatic Monte Carlo algorithm. We benchmark the obtained self-energy against numerically exact diagrammatic determinant Monte Carlo simulations to systematically assess convergence of the dual fermion series and the validity of these approximations. We observe that, from high temperatures down to the vicinity of the DMFT Néel transition, the dual fermion series converges very quickly to the exact solution in the whole range of Hubbard interactions considered (4 ≤U /t ≤12 ), implying that contributions from higher-order vertices are small. As the temperature is lowered further, we observe slower series convergence, convergence to incorrect solutions, and ultimately divergence. This happens in a regime where magnetic correlations become significant. We find, however, that the self-consistent particle-hole ladder approximation yields reasonable and often even highly accurate results in this regime.

  14. Multilevel sequential Monte-Carlo samplers

    KAUST Repository

    Jasra, Ajay

    2016-01-01

    Multilevel Monte-Carlo methods provide a powerful computational technique for reducing the computational cost of estimating expectations for a given computational effort. They are particularly relevant for computational problems when approximate distributions are determined via a resolution parameter h, with h=0 giving the theoretical exact distribution (e.g. SDEs or inverse problems with PDEs). The method provides a benefit by coupling samples from successive resolutions, and estimating differences of successive expectations. We develop a methodology that brings Sequential Monte-Carlo (SMC) algorithms within the framework of the Multilevel idea, as SMC provides a natural set-up for coupling samples over different resolutions. We prove that the new algorithm indeed preserves the benefits of the multilevel principle, even if samples at all resolutions are now correlated.
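The multilevel principle, coupling the same random input across two successive resolutions and summing a telescoping series of corrections, can be sketched on a toy problem. The resolution-dependent payoff below is invented for illustration and is much simpler than the SDE or inverse-problem settings the paper targets:

```python
import random

def mlmc_estimate(levels=4, n0=20_000, seed=11):
    # Toy resolution-dependent quantity: its bias shrinks like h = 2^-level,
    # and the exact answer in the limit level -> infinity is E[U^2] = 1/3.
    def payoff(u, level):
        h = 2.0 ** -level
        return u * u + h * u
    rng = random.Random(seed)
    total = 0.0
    for l in range(levels + 1):
        n = max(n0 // 4 ** l, 100)        # fewer samples on finer levels
        s = 0.0
        for _ in range(n):
            u = rng.random()              # the SAME draw drives both levels,
            fine = payoff(u, l)           # coupling fine and coarse samples
            s += fine - (payoff(u, l - 1) if l > 0 else 0.0)
        total += s / n                    # telescoping sum of corrections
    return total

estimate = mlmc_estimate()
```

Because the coupled fine-coarse differences have small variance, the correction levels need far fewer samples than the cheap base level, which is the source of the cost saving the abstract describes; the SMC contribution of the paper is providing that coupling when samples come from a sequential sampler.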

  16. Determination of trace elements in petroleum products by inductively coupled plasma techniques: A critical review

    International Nuclear Information System (INIS)

    Sánchez, Raquel; Todolí, José Luis; Lienemann, Charles-Philippe; Mermet, Jean-Michel

    2013-01-01

    The fundamentals, applications and latest developments of petroleum products analysis through inductively coupled plasma optical emission spectrometry (ICP-OES) and mass spectrometry (ICP-MS) are revisited in the present bibliographic survey. Sample preparation procedures for the direct analysis of fuels by using liquid sample introduction systems are critically reviewed and compared. The most employed methods are sample dilution, emulsion or micro-emulsion preparation and sample decomposition. The first one is the most widely employed due to its simplicity. Once the sample has been prepared, an organic matrix is usually present. The performance of the sample introduction system (i.e., nebulizer and spray chamber) depends strongly upon the nature and properties of the solution finally obtained. Many different devices have been assayed and the obtained results are shown. Additionally, samples can be introduced into the plasma by using an electrothermal vaporization (ETV) device or a laser ablation (LA) system. The recent results published in the literature showing the feasibility, advantages and drawbacks of these alternatives are also described. Therefore, the main goal of the review is the discussion of the different approaches developed for the analysis of crude oil and its derivatives by inductively coupled plasma (ICP) techniques. - Highlights: • Analysis of petroleum products by inductively coupled plasma techniques is revisited. • Fundamental studies are included together with reports dealing with applications. • Conventional and non-conventional sample introduction methods are considered. • Sample preparation methods are critically compared and described

  17. Air-coupled ultrasound: a novel technique for monitoring the curing of thermosetting matrices.

    Science.gov (United States)

    Lionetto, Francesca; Tarzia, Antonella; Maffezzoli, Alfonso

    2007-07-01

    A custom-made, air-coupled ultrasonic device was applied to cure monitoring of thick samples (7-10 mm) of unsaturated polyester resin at room temperature. A key point was the optimization of the experimental setup in order to propagate compression waves during the overall curing reaction, by suitably placing the noncontact transducers on the same side of the test material in the so-called pitch-catch configuration. The progress of polymerization was monitored through the variation of the time of flight of the propagating longitudinal waves. The exothermic character of the polymerization was taken into account by correcting the measured time of flight with the time of flight in air, obtained by sampling the air velocity during the experiment. The air-coupled ultrasonic results were compared with those obtained from conventional contact ultrasonic measurements. The good agreement between the air-coupled ultrasonic results and those obtained by rheological analysis demonstrated the reliability of air-coupled ultrasound in monitoring the changes of viscoelastic properties at gelation and vitrification. The position of the transducers on the same side of the sample makes this technique suitable for on-line cure monitoring during several composite manufacturing technologies.
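The air-path correction described above is a simple subtraction of travel times. A minimal sketch, with entirely hypothetical numbers (sample thickness, air path and velocities are assumptions, not values from the paper):

```python
def longitudinal_velocity(thickness, t_measured, air_path, v_air):
    """Ultrasonic velocity in the curing sample from a pitch-catch
    time-of-flight reading: the travel time through the air path is
    subtracted, with v_air sampled during the test because the exotherm
    changes the air temperature (and hence the air velocity)."""
    t_air = air_path / v_air
    return thickness / (t_measured - t_air)

# Hypothetical reading: 8 mm sample, 20 mm total air path, v_air = 343 m/s
v_sample = longitudinal_velocity(0.008, 0.02 / 343.0 + 0.008 / 2400.0, 0.02, 343.0)
```

The sketch recovers the assumed 2400 m/s material velocity when the measured time of flight is consistent with it.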

  18. Reconstruction of emission coefficients for a non-axisymmetric coupling arc by algebraic reconstruction technique

    International Nuclear Information System (INIS)

    Zhang Guangjun; Xiong Jun; Gao Hongming; Wu Lin

    2011-01-01

    A preliminary investigation of the tomographic reconstruction of an asymmetric arc plasma has been carried out. The objective of this work is to reconstruct the emission coefficients of a non-axisymmetric coupling arc from measured intensities by means of an algebraic reconstruction technique (ART). In order to define the optimal experimental scheme for good reconstruction quality with limited views, the dependence of the reconstruction quality on three configurations (four, eight and ten projection angles) is presented and discussed via a displaced Gaussian model. Then, the emission coefficients of a free-burning arc are reconstructed by the ART with the ten-view configuration and by an Abel inversion, respectively, and good agreement is obtained. Finally, the emission coefficient profiles of the coupling arc are successfully obtained with the ten-view configuration. The results show that the distribution of the emission coefficient for the coupling arc departs from a centrosymmetric shape. The ART, with the ten-view configuration, is well suited for reconstructing the emission coefficients of the coupling arc, proving the feasibility and utility of the ART for characterizing an asymmetric arc.
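The core ART update is the classic Kaczmarz sweep: the current estimate is projected, ray by ray, onto the hyperplane consistent with each measured line integral. A minimal sketch on a hypothetical 2x2 emission grid (the ray geometry and values here are illustrative, not the paper's experimental configuration):

```python
import numpy as np

def art_reconstruct(A, b, n_sweeps=200, relax=1.0):
    """Kaczmarz-type ART: cycle through the measured rays, projecting the
    current image estimate onto the hyperplane of each ray-sum equation."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            norm2 = a @ a
            if norm2 > 0.0:
                x = x + relax * (b[i] - a @ x) / norm2 * a
    return x

# Hypothetical 2x2 emission grid probed by row, column and diagonal "rays"
A = np.array([[1, 1, 0, 0], [0, 0, 1, 1],   # row sums
              [1, 0, 1, 0], [0, 1, 0, 1],   # column sums
              [1, 0, 0, 1], [0, 1, 1, 0]], dtype=float)  # diagonal sums
x_true = np.array([1.0, 2.0, 3.0, 4.0])
x_rec = art_reconstruct(A, A @ x_true)
```

With a consistent, full-rank toy system the sweeps converge to the true emission values; with limited views (fewer independent rays) the iteration instead converges to a minimum-norm solution, which is why the number of projection angles matters in the study above.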

  19. A study of parallelizing O(N) Green-function-based Monte Carlo method for many fermions coupled with classical degrees of freedom

    International Nuclear Information System (INIS)

    Zhang Shixun; Yamagia, Shinichi; Yunoki, Seiji

    2013-01-01

    Models of fermions interacting with classical degrees of freedom are applied to a large variety of systems in condensed matter physics. For this class of models, Weiße [Phys. Rev. Lett. 102, 150604 (2009)] has recently proposed a very efficient numerical method, called the O(N) Green-Function-Based Monte Carlo (GFMC) method, in which a kernel polynomial expansion technique is used to avoid the full numerical diagonalization of the fermion Hamiltonian matrix of size N, which usually costs O(N^3) computational complexity. Motivated by this background, in this paper we apply the GFMC method to the double exchange model in three spatial dimensions. We mainly focus on the implementation of the GFMC method using both MPI on a CPU-based cluster and Nvidia's Compute Unified Device Architecture (CUDA) programming techniques on a GPU (Graphics Processing Unit) based cluster. The time complexity of the algorithm and the parallel implementation details on the clusters are discussed. We also show the performance scaling for increasing Hamiltonian matrix size and increasing number of nodes, respectively. The performance evaluation indicates that for a 32^3 Hamiltonian a single GPU shows performance equivalent to more than 30 CPU cores parallelized using MPI.
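The kernel polynomial idea underlying the method is that spectral traces of the Hamiltonian can be estimated from Chebyshev moments accumulated with random vectors, replacing O(N^3) diagonalization by repeated sparse matrix-vector products. A minimal sketch (the toy Hamiltonian, moment and vector counts are illustrative assumptions, and the rescaling of H to [-1, 1] is taken as given):

```python
import numpy as np

def stochastic_chebyshev_moments(H, n_moments=12, n_random=4, seed=1):
    """Kernel-polynomial sketch: estimate mu_m = Tr[T_m(H)]/N from random
    +/-1 vectors and the Chebyshev recursion T_m = 2 H T_{m-1} - T_{m-2},
    so no full diagonalization of H is ever performed."""
    rng = np.random.default_rng(seed)
    N = H.shape[0]
    mu = np.zeros(n_moments)
    for _ in range(n_random):
        v = rng.choice([-1.0, 1.0], size=N)  # stochastic trace vector
        t_prev, t_cur = v, H @ v
        mu[0] += v @ t_prev
        mu[1] += v @ t_cur
        for m in range(2, n_moments):
            # advance the Chebyshev recursion one order
            t_prev, t_cur = t_cur, 2.0 * (H @ t_cur) - t_prev
            mu[m] += v @ t_cur
    return mu / (n_random * N)

# Toy Hamiltonian, pre-scaled so its spectrum lies in [-1, 1]
mu = stochastic_chebyshev_moments(np.diag(np.linspace(-0.9, 0.9, 8)))
```

For a diagonal toy Hamiltonian the stochastic estimate is exact (each random vector probes the diagonal directly), which makes the recursion easy to verify against T_m(x) = cos(m arccos x).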

  20. Vane coupling rings: a simple technique for stabilizing a four-vane radiofrequency quadrupole structure

    International Nuclear Information System (INIS)

    Howard, D.; Lancaster, H.

    1983-01-01

    The benefits of stabilized accelerating structures, with regard to the manufacture and operation, have been well documented. The four-vane radiofrequency quadrupoles (RFQ) presently being designed and constructed in many laboratories are not stabilized because of the weak electromagnetic coupling between the quadrant resonators. This paper presents a simple technique developed at the Lawrence Berkeley Laboratory using vane coupling rings (VCR's) which azimuthally stabilize the RFQ structure and greatly enhance its use as a practical accelerator. In particular, the VCR's: completely eliminate the dipole modes in the frequency range of interest; provide adequate quadrant balance with an initial precision mechanical alignment of the vanes; and enhance axial balance and simplify end tuners. Experimental verification tests on a scale model will be discussed.

  2. Radiation transport simulation in gamma irradiator systems using EGS4 Monte Carlo code and dose mapping calculations based on point kernel technique

    International Nuclear Information System (INIS)

    Raisali, G.R.

    1992-01-01

    A series of computer codes based on the point kernel technique and on the Monte Carlo method have been developed. These codes perform radiation transport calculations for irradiator systems having Cartesian, cylindrical and mixed geometries. For the Monte Carlo calculations, the computer code EGS4 has been applied to a radiation-processing type problem, accompanied by a specific user code. The set of codes developed includes GCELLS, DOSMAPM and DOSMAPC2, which simulate the radiation transport in gamma irradiator systems having cylindrical, Cartesian, and mixed geometries, respectively. The program DOSMAP3, based on the point kernel technique, has also been developed for dose rate mapping calculations in carrier-type gamma irradiators. Another computer program, CYLDETM, a user code for EGS4, has been developed to simulate dose variations near the interface of heterogeneous media in gamma irradiator systems. In addition, a system of computer codes, PRODMIX, has been developed which calculates the absorbed dose in products with different densities. Validation studies of the calculated results against experimental dosimetry have been performed and good agreement has been obtained
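The point kernel technique mentioned above evaluates the uncollided flux at a dose point as a sum over point sources of S exp(-mu r) / (4 pi r^2). A minimal sketch (buildup factors for scattered photons, which a production code would include, are deliberately omitted here; source strengths and positions are illustrative):

```python
import math

def point_kernel_rate(sources, point, mu):
    """Uncollided flux at `point` from isotropic point sources:
    phi = S * exp(-mu * r) / (4 * pi * r^2), summed over all sources.
    `sources` is a list of (x, y, z, strength); `mu` is the linear
    attenuation coefficient of the intervening medium."""
    total = 0.0
    for (x, y, z, strength) in sources:
        r = math.dist((x, y, z), point)
        total += strength * math.exp(-mu * r) / (4.0 * math.pi * r * r)
    return total
```

A carrier-type dose map is then just this sum evaluated on a grid of dose points around the source rack, with mu chosen per material region along each ray in more refined implementations.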

  3. MORSE Monte Carlo code

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described

  4. Optimization of the Kinetic Activation-Relaxation Technique, an off-lattice and self-learning kinetic Monte-Carlo method

    International Nuclear Information System (INIS)

    Joly, Jean-François; Béland, Laurent Karim; Brommer, Peter; Mousseau, Normand; El-Mellouhi, Fedwa

    2012-01-01

    We present two major optimizations for the kinetic Activation-Relaxation Technique (k-ART), an off-lattice self-learning kinetic Monte Carlo (KMC) algorithm with on-the-fly event search that has been successfully applied to study a number of semiconducting and metallic systems. k-ART is parallelized in a non-trivial way: a master process uses several worker processes to perform independent searches for possible events, while all bookkeeping and the actual simulation are performed by the master process. Depending on the complexity of the system studied, the parallelization scales well for tens to more than one hundred processes. For dealing with large systems, we present a near order-1 implementation: techniques such as Verlet lists, cell decomposition and partial force calculations are implemented, and the CPU time per time step scales sublinearly with the number of particles, providing an efficient use of computational resources.
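The master-worker pattern described above can be sketched generically: workers run independent event searches, and only the master builds the rate catalog and samples the next KMC event. This is a stand-in sketch using a thread pool in place of MPI worker processes, with a dummy `event_search` (the barrier distribution, temperature and all names are assumptions, not the paper's implementation):

```python
import math
import random
from concurrent.futures import ThreadPoolExecutor

def event_search(seed):
    """Stand-in for one independent activation-relaxation event search:
    returns a (barrier, event_id) pair found from a randomized start."""
    rng = random.Random(seed)
    return rng.uniform(0.1, 1.0), seed

def master_kmc_step(n_workers=4, n_searches=8, kT=0.3, seed=42):
    """Master farms independent searches out to workers, then alone does
    the bookkeeping: builds the rate catalog and samples the next event
    with probability proportional to its Arrhenius rate."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        events = list(pool.map(event_search, range(n_searches)))
    rates = [math.exp(-barrier / kT) for barrier, _ in events]
    total = sum(rates)
    r = random.Random(seed).uniform(0.0, total)
    acc = 0.0
    for rate, (_, event_id) in zip(rates, events):
        acc += rate
        if acc >= r:
            return event_id, total
    return events[-1][1], total
```

Because the searches share no state, they parallelize trivially; the serial bookkeeping on the master is what limits scaling, consistent with the tens-to-hundred-process range reported above.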

  5. The longitudinal offset technique for apodization of coupled resonator optical waveguide devices: concept and fabrication tolerance analysis.

    Science.gov (United States)

    Doménech, José David; Muñoz, Pascual; Capmany, José

    2009-11-09

    In this paper, a novel technique to set the coupling constant between cells of a coupled resonator optical waveguide (CROW) device, in order to tailor the filter response, is presented. The technique is demonstrated by simulation assuming a racetrack ring resonator geometry. It consists of changing the effective length of the coupling section by applying a longitudinal offset between the resonators. By contrast, the conventional techniques are based on transversally changing the distance between the ring resonators, in steps that are commonly below the current fabrication resolution step (nm scale), leading to strong restrictions in the designs. The proposed longitudinal offset technique allows a more precise control of the coupling and presents an increased robustness against fabrication limitations, since the needed resolution step is two orders of magnitude larger. Both techniques are compared in terms of the transmission response of CROW devices under finite fabrication resolution steps.

  6. Development and Implementation of Photonuclear Cross-Section Data for Mutually Coupled Neutron-Photon Transport Calculations in the Monte Carlo N-Particle (MCNP) Radiation Transport Code

    International Nuclear Information System (INIS)

    White, Morgan C.

    2000-01-01

    The fundamental motivation for the research presented in this dissertation was the need to develop a more accurate prediction method for the characterization of mixed radiation fields around medical electron accelerators (MEAs). Specifically, a model is developed for the simulation of neutron and other particle production from photonuclear reactions and incorporated in the Monte Carlo N-Particle (MCNP) radiation transport code. This extension of the capability within the MCNP code provides for the more accurate assessment of mixed radiation fields. The Nuclear Theory and Applications group of the Los Alamos National Laboratory has recently provided first-of-a-kind evaluated photonuclear data for a select group of isotopes. These data provide the reaction probabilities as functions of incident photon energy, with angular and energy distribution information for all reaction products. The availability of these data is the cornerstone of the new methodology for state-of-the-art mutually coupled photon-neutron transport simulations. The dissertation includes details of the model development and implementation necessary to use the new photonuclear data within MCNP simulations. A new data format has been developed to include tabular photonuclear data. Data are processed from the Evaluated Nuclear Data Format (ENDF) to the new class "u" A Compact ENDF (ACE) format using a standalone processing code. MCNP modifications have been completed to enable Monte Carlo sampling of photonuclear reactions. Note that both neutron and gamma production are included in the present model. The new capability has been subjected to extensive verification and validation (V and V) testing. Verification testing has established the expected basic functionality. Two validation projects were undertaken. First, comparisons were made to benchmark data from the literature. These calculations demonstrate the accuracy of the new data and transport routines to better than 25 percent.
Second, the ability to

  8. Development of a Fast Fluid-Structure Coupling Technique for Wind Turbine Computations

    DEFF Research Database (Denmark)

    Sessarego, Matias; Ramos García, Néstor; Shen, Wen Zhong

    2015-01-01

    Fluid-structure interaction simulations are routinely used in the wind energy industry to evaluate the aerodynamic and structural dynamic performance of wind turbines. Most aero-elastic codes in modern times implement a blade element momentum technique to model the rotor aerodynamics and a modal, multi-body, or finite-element approach to model the turbine structural dynamics. The present paper describes a novel fluid-structure coupling technique which combines a three-dimensional viscous-inviscid solver for horizontal-axis wind-turbine aerodynamics, called MIRAS, and the structural dynamics model used in the aero-elastic code FLEX5. The new code, MIRAS-FLEX, in general shows good agreement with the standard aero-elastic codes FLEX5 and FAST for various test cases. The structural model in MIRAS-FLEX acts to reduce the aerodynamic load computed by MIRAS, particularly near the tip and at high wind...

  9. Determination of Rare Earth Elements in Thai Monazite by Inductively Coupled Plasma and Nuclear Analytical techniques

    International Nuclear Information System (INIS)

    Busamongkol, Arporn; Ratanapra, Dusadee; Sukharn, Sumalee; Laoharojanaphand, Sirinart

    2003-10-01

    The inductively coupled plasma atomic emission spectroscopy (ICP-AES) technique for the determination of individual rare-earth elements (REE) was evaluated by comparison with instrumental neutron activation analysis (INAA) and x-ray fluorescence spectrometry (XRF). The accuracy and precision of INAA and ICP-AES were evaluated by using the standard reference material IGS-36, a monazite concentrate. For INAA, the results were close to the certified values, while the ICP-AES results were in good agreement except for some low-concentration rare earth elements. The techniques were applied to the analysis of some rare earth elements in two Thai monazite samples prepared as in-house reference materials for the Rare Earth Research and Development Center, Chemistry Division, Office of Atoms for Peace. The analytical results obtained by these techniques were in good agreement with each other

  10. The Use of Coupled Code Technique for Best Estimate Safety Analysis of Nuclear Power Plants

    International Nuclear Information System (INIS)

    Bousbia Salah, A.; D'Auria, F.

    2006-01-01

    Issues connected with the thermal-hydraulics and neutronics of nuclear plants still challenge the design, safety and operation of Light Water Reactors (LWRs). The lack of full understanding of the complex mechanisms arising from the interaction between these issues imposed the adoption of conservative safety limits. Those safety margins put restrictions on the optimal exploitation of the plants and consequently reduced their economic profit. In the light of the sustained development in computer technology, code capabilities have been enlarged substantially. Consequently, advanced safety evaluations and design optimizations that were not possible a few years ago can now be performed. In fact, during the last decades Best Estimate (BE) neutronic and thermal-hydraulic calculations were carried out along rather parallel paths, with only few interactions between them. Nowadays, it becomes possible to switch to a new generation of computational tools, namely the coupled code technique. The application of such a method is mandatory for the analysis of accident conditions involving strong coupling between the core neutronics and the primary circuit thermal-hydraulics, especially when asymmetrical processes take place in the core leading to local space-dependent power generation. Through the current study, the maturity level achieved in the calculation of 3-D core performance during complex accident scenarios in NPPs is demonstrated. Typical applications are outlined and discussed, showing the main features and limitations of this technique. (author)

  11. Technique for Extension of Small Antenna Array Mutual-Coupling Data to Larger Antenna Arrays

    Science.gov (United States)

    Bailey, M. C.

    1996-01-01

    A technique is presented whereby the mutual interaction between a small number of elements in a planar array can be interpolated and extrapolated to accurately predict the combined interactions in a much larger array of many elements. An approximate series expression is developed, based upon knowledge of the analytical characteristic behavior of the mutual admittance between small aperture antenna elements in a conducting ground plane. This expression is utilized to analytically extend known values for a few spacings and orientations to other element configurations, thus eliminating the need to numerically integrate a large number of highly oscillating and slowly converging functions. This paper shows that the technique can predict very accurately the mutual coupling between elements in a very large planar array with a knowledge of the self-admittance of an isolated element and the coupling between only two elements arranged in eight different pair combinations. These eight pair combinations do not necessarily have to correspond to pairs in the large array, although all of the individual elements must be identical.

  12. NUMERICAL TECHNIQUES TO SOLVE CONDENSATIONAL AND DISSOLUTIONAL GROWTH EQUATIONS WHEN GROWTH IS COUPLED TO REVERSIBLE REACTIONS (R823186)

    Science.gov (United States)

    Noniterative, unconditionally stable numerical techniques for solving condensational anddissolutional growth equations are given. Growth solutions are compared to Gear-code solutions forthree cases when growth is coupled to reversible equilibrium chemistry. In all cases, ...

  13. ITS version 5.0: the integrated TIGER series of coupled electron/photon Monte Carlo transport codes with CAD geometry.

    Energy Technology Data Exchange (ETDEWEB)

    Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

    2005-09-01

    ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, (3) parallel implementations of all ITS codes, (4) a general purpose geometry engine for linking with CAD or other geometry formats, and (5) the Cholla facet geometry library. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.

  14. Evaluation of a scatter correction technique for single photon transmission measurements in PET by means of Monte Carlo simulations

    International Nuclear Information System (INIS)

    Wegmann, K.; Brix, G.

    2000-01-01

    Purpose: Single photon transmission (SPT) measurements offer a new approach for the determination of attenuation correction factors (ACF) in PET. It was the aim of the present work to evaluate a scatter correction algorithm proposed by C. Watson by means of Monte Carlo simulations. Methods: SPT measurements with a Cs-137 point source were simulated for a whole-body PET scanner (ECAT EXACT HR+) in both the 2D and 3D mode. To examine the scatter fraction (SF) in the transmission data, the detected photons were classified as unscattered or scattered. The simulated data were used to determine (i) the spatial distribution of the SFs, (ii) an ACF sinogram from all detected events (ACF_tot), (iii) an ACF sinogram from the unscattered events only (ACF_unscattered), and (iv) an ACF_cor = (ACF_tot)^(1+κ) sinogram corrected according to the Watson algorithm. In addition, density images were reconstructed in order to quantitatively evaluate linear attenuation coefficients. Results: A high correlation was found between the SF and the ACF_tot sinograms. For the cylinder and the EEC phantom, similar correction factors κ were estimated. The determined values resulted in an accurate scatter correction in both the 2D and 3D mode. (orig.)
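The correction of item (iv) is a simple element-wise power of the measured sinogram. A minimal sketch (the value of κ is scanner- and geometry-dependent and must be estimated from calibration, as in the study above; the numbers here are illustrative):

```python
import numpy as np

def watson_correct(acf_tot, kappa):
    """Watson-style scatter correction of a transmission ACF sinogram:
    ACF_cor = ACF_tot ** (1 + kappa). Scatter biases the measured
    attenuation factors low, so kappa > 0 boosts them back up."""
    return np.asarray(acf_tot, dtype=float) ** (1.0 + kappa)

corrected = watson_correct([1.0, 2.0, 4.0], 0.1)
```

With κ = 0 the sinogram is unchanged; with κ > 0 every ACF ≥ 1 is increased, with larger (more attenuated) factors corrected more strongly, matching the observed correlation between scatter fraction and ACF_tot.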

  15. Evaluation of reconstruction techniques in regional cerebral blood flow SPECT using trade-off plots: a Monte Carlo study.

    Science.gov (United States)

    Olsson, Anna; Arlig, Asa; Carlsson, Gudrun Alm; Gustafsson, Agnetha

    2007-09-01

    The image quality of single photon emission computed tomography (SPECT) depends on the reconstruction algorithm used. The purpose of the present study was to evaluate parameters in ordered subset expectation maximization (OSEM) and to compare systematically with filtered back-projection (FBP) for reconstruction of regional cerebral blood flow (rCBF) SPECT, incorporating attenuation and scatter correction. The evaluation was based on the trade-off between contrast recovery and statistical noise using different sizes of subsets, number of iterations and filter parameters. Monte Carlo simulated SPECT studies of a digital human brain phantom were used. The contrast recovery was calculated as measured contrast divided by true contrast. Statistical noise in the reconstructed images was calculated as the coefficient of variation in pixel values. A constant contrast level was reached above 195 equivalent maximum likelihood expectation maximization iterations. The choice of subset size was not crucial as long as there were ≥ 2 projections per subset. The OSEM reconstruction was found to give 5-14% higher contrast recovery than FBP for all clinically relevant noise levels in rCBF SPECT. The Butterworth filter, power 6, achieved the highest stable contrast recovery level at all clinically relevant noise levels. The cut-off frequency should be chosen according to the noise level accepted in the image. Trade-off plots are shown to be a practical way of deciding the number of iterations and subset size for the OSEM reconstruction and can be used for other examination types in nuclear medicine.

  16. MO-F-CAMPUS-I-03: GPU Accelerated Monte Carlo Technique for Fast Concurrent Image and Dose Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Becchetti, M; Tian, X; Segars, P; Samei, E [Clinical Imaging Physics Group, Department of Radiology, Duke University Me, Durham, NC (United States)

    2015-06-15

    Purpose: To develop an accurate and fast Monte Carlo (MC) method of simulating CT that is capable of correlating dose with image quality using voxelized phantoms. Methods: A realistic voxelized phantom based on patient CT data, XCAT, was used with a GPU accelerated MC code for helical MDCT. Simulations were done with both uniform density organs and with textured organs. The organ doses were validated using previous experimentally validated simulations of the same phantom under the same conditions. Images acquired by tracking photons through the phantom with MC require lengthy computation times due to the large number of photon histories necessary for accurate representation of noise. A substantial speed up of the process was attained by using a low number of photon histories with kernel denoising of the projections from the scattered photons. These FBP reconstructed images were validated against those that were acquired in simulations using many photon histories by ensuring a minimal normalized root mean square error. Results: Organ doses simulated in the XCAT phantom are within 10% of the reference values. Corresponding images attained using projection kernel smoothing were attained with 3 orders of magnitude less computation time compared to a reference simulation using many photon histories. Conclusion: Combining GPU acceleration with kernel denoising of scattered photon projections in MC simulations allows organ dose and corresponding image quality to be attained with reasonable accuracy and substantially reduced computation time than is possible with standard simulation approaches.
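The kernel denoising step applied to the scattered-photon projections can be sketched in one dimension as a Gaussian smoothing of a noisy, few-history projection. This is a simplified stand-in (the actual kernel, its width, and the projection data here are assumptions for illustration, not the study's implementation):

```python
import numpy as np

def denoise_projection(proj, sigma=2.0, radius=6):
    """Gaussian kernel smoothing of a noisy few-history scatter projection:
    convolve with a truncated, normalized Gaussian so statistical noise is
    suppressed while the smooth scatter distribution is retained."""
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()
    return np.convolve(np.asarray(proj, dtype=float), kernel, mode="same")

rng = np.random.default_rng(7)
proj = np.zeros(64)
proj[24:40] = 100.0                                     # ideal scatter profile
noisy = proj + rng.normal(0.0, 10.0, 64) * (proj > 0)   # few-history noise
smooth = denoise_projection(noisy)
```

Because scatter projections vary slowly, this kind of smoothing preserves total counts while cutting per-bin noise, which is what lets the simulation use far fewer photon histories before FBP reconstruction.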

  18. A full-angle Monte-Carlo scattering technique including cumulative and single-event Rutherford scattering in plasmas

    Science.gov (United States)

    Higginson, Drew P.

    2017-11-01

    We describe and justify a full-angle scattering (FAS) method to faithfully reproduce the accumulated differential angular Rutherford scattering probability distribution function (pdf) of particles in a plasma. The FAS method splits the scattering events into two regions. At small angles it is described by cumulative scattering events resulting, via the central limit theorem, in a Gaussian-like pdf; at larger angles it is described by single-event scatters and retains a pdf that follows the form of the Rutherford differential cross-section. The FAS method is verified using discrete Monte-Carlo scattering simulations run at small timesteps to include each individual scattering event. We identify the FAS regime of interest as where the ratio of temporal/spatial scale-of-interest to slowing-down time/length is from 10⁻³ to 0.3-0.7; the upper limit corresponds to a Coulomb logarithm of 20 to 2, respectively. Two test problems, high-velocity interpenetrating plasma flows and keV-temperature ion equilibration, are used to highlight systems where including FAS is important to capture relevant physics.
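    The single-event tail can be sampled in closed form, since the Rutherford pdf expressed in s = sin²(θ/2) is a pure 1/s² power law. A sketch of that inverse-CDF sampling (the cutoff θ_min and sample counts are illustrative, not values from the paper):

```python
import numpy as np

def sample_rutherford_tail(n, theta_min, rng):
    """Sample single-event polar angles from the Rutherford cross-section
    dsigma/dOmega ~ 1/sin^4(theta/2), restricted to the tail region
    theta in [theta_min, pi].  With s = sin^2(theta/2) the pdf in s is
    ~ 1/s^2, so 1/s is uniformly distributed and the CDF inverts exactly."""
    inv_s_min = 1.0 / np.sin(theta_min / 2.0) ** 2
    u = rng.random(n)
    inv_s = inv_s_min + u * (1.0 - inv_s_min)   # uniform between 1/s_min and 1
    return 2.0 * np.arcsin(np.sqrt(1.0 / inv_s))

rng = np.random.default_rng(1)
theta_min = 0.1                                  # illustrative cutoff (rad)
theta = sample_rutherford_tail(200_000, theta_min, rng)

# analytic median: the angle at which the uniform variable 1/s is halfway
inv_s_med = 0.5 * (1.0 / np.sin(theta_min / 2.0) ** 2 + 1.0)
theta_med = 2.0 * np.arcsin(np.sqrt(1.0 / inv_s_med))
```

In a full FAS step one would draw from the small-angle Gaussian with some probability and from this tail sampler otherwise; only the tail part is shown here.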

  19. Lattice gauge theories and Monte Carlo simulations

    International Nuclear Information System (INIS)

    Rebbi, C.

    1981-11-01

    After some preliminary considerations, the discussion of quantum gauge theories on a Euclidean lattice takes up the definition of Euclidean quantum theory and treatment of the continuum limit; analogy is made with statistical mechanics. Perturbative methods can produce useful results for strong or weak coupling. In the attempts to investigate the properties of the systems for intermediate coupling, numerical methods known as Monte Carlo simulations have proved valuable. The bulk of this paper illustrates the basic ideas underlying the Monte Carlo numerical techniques and the major results achieved with them according to the following program: Monte Carlo simulations (general theory, practical considerations), phase structure of Abelian and non-Abelian models, the observables (coefficient of the linear term in the potential between two static sources at large separation, mass of the lowest excited state with the quantum numbers of the vacuum (the so-called glueball), the potential between two static sources at very small distance, the critical temperature at which sources become deconfined), gauge fields coupled to bosonic matter (Higgs) fields, and systems with fermions.
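    The Metropolis idea behind such simulations can be shown on the smallest toy case, a single U(1) plaquette with action S = -β cos θ, whose exact average plaquette is I₁(β)/I₀(β). A sketch (a toy model for illustration, not a full lattice code):

```python
import numpy as np

# Metropolis estimate of the average plaquette <cos(theta)> for a toy
# one-plaquette U(1) model with Euclidean action S = -beta*cos(theta).
beta = 2.0
rng = np.random.default_rng(2)
theta, samples = 0.0, []
for step in range(200_000):
    proposal = theta + rng.uniform(-1.0, 1.0)
    # accept with probability min(1, exp(-dS)); cos is periodic, so the
    # angle may be left unwrapped
    if rng.random() < np.exp(beta * (np.cos(proposal) - np.cos(theta))):
        theta = proposal
    if step >= 10_000:                    # discard thermalization sweeps
        samples.append(np.cos(theta))
mc_plaquette = float(np.mean(samples))

# "exact" answer I1(beta)/I0(beta) by direct numerical quadrature
grid = np.linspace(-np.pi, np.pi, 20_001)
weight = np.exp(beta * np.cos(grid))
exact_plaquette = float(np.sum(np.cos(grid) * weight) / np.sum(weight))
```

A real lattice simulation applies this same accept/reject step link by link, with the action built from all plaquettes containing the updated link.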

  20. Investigation of Reduction of the Uncertainty of Monte Carlo Dose Calculations in Oncor® Clinical Linear Accelerator Simulation Using the DBS Variance Reduction Technique in Monte Carlo Code BEAMnrc

    Directory of Open Access Journals (Sweden)

    Amin Asadi

    2017-10-01

    Full Text Available Purpose: To study the benefits of the Directional Bremsstrahlung Splitting (DBS) variance reduction technique in the BEAMnrc Monte Carlo (MC) code for an Oncor® linac at 6 MV and 18 MV energies. Materials and Method: A MC model of the Oncor® linac was built using the BEAMnrc MC code and verified against measured data for 6 MV and 18 MV beams of various field sizes. The Oncor® machine was then modeled with the DBS technique running, and the efficiency of total fluence and spatial fluence for electrons and photons, and the efficiency of dose variance reduction of MC calculations for the PDD on the central beam axis and for the lateral dose profile across the nominal field, were measured and compared. Result: With the DBS technique applied, the total fluence of electrons and photons increased by 626.8 (6 MV) and 983.4 (6 MV), and 285.6 (18 MV) and 737.8 (18 MV), respectively; the spatial fluence of electrons and photons improved by 308.6±1.35% (6 MV) and 480.38±0.43% (6 MV), and 153±0.9% (18 MV) and 462.6±0.27% (18 MV), respectively. Moreover, with DBS running, the efficiency of dose variance reduction for PDD MC dose calculations before and after the dose maximum increased by 187.8±0.68% (6 MV) and 184.6±0.65% (6 MV), and 156±0.43% (18 MV) and 153±0.37% (18 MV), respectively, and the efficiency of MC calculations for the lateral dose profile on the central beam axis and across the treatment field rose by 197±0.66% (6 MV) and 214.6±0.73% (6 MV), and 175±0.36% (18 MV) and 181.4±0.45% (18 MV), respectively. Conclusion: Applying the DBS variance reduction technique when modeling the Oncor® linac with the BEAMnrc MC code markedly improved the electron and photon fluence and therefore enhanced the efficiency of dose variance reduction for MC calculations. As a result, running DBS in other MC simulation codes might likewise be beneficial in reducing the uncertainty of MC calculations.
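    Gains of this kind are conventionally expressed through the Monte Carlo figure of merit, FOM = 1/(R²T), which is constant for a converged analogue run and rises when a variance reduction technique pays off. A sketch with hypothetical numbers:

```python
def figure_of_merit(rel_uncertainty, cpu_time_s):
    """Monte Carlo figure of merit FOM = 1/(R^2 * T).  A variance
    reduction technique 'pays off' when it raises the FOM: the extra
    work per history is outweighed by the reduced variance."""
    return 1.0 / (rel_uncertainty ** 2 * cpu_time_s)

# hypothetical runs of equal CPU time, without and with splitting
fom_analogue = figure_of_merit(rel_uncertainty=0.020, cpu_time_s=3600.0)
fom_split = figure_of_merit(rel_uncertainty=0.009, cpu_time_s=3600.0)
efficiency_gain = fom_split / fom_analogue   # ~4.9x at equal CPU time
```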

  1. Radiation shielding techniques and applications. 4. Two-Phase Monte Carlo Approach to Photon Streaming Through Three-Legged Penetrations

    International Nuclear Information System (INIS)

    White, Travis; Hack, Joe; Nathan, Steve; Barnett, Marvin

    2001-01-01

    Analytical solutions for the scattering of neutrons through multi-legged penetrations are readily available in the literature; similar analytical solutions for photon scattering through penetrations, however, are not. Therefore, computer modeling must be relied upon to perform our analyses. The computer code typically used by Westinghouse SMS in the evaluation of photon transport through complex geometries is the MCNP Monte Carlo computer code. Yet geometries of this nature can cause problems even for Monte Carlo codes: striking a balance between how the code handles bulk transport through the wall and transport through the penetration void, particularly with the use of typical variance reduction methods, is difficult when trying to ensure that all the important regions of the model are sampled appropriately. The problem was broken down into several roughly independent cases. First, scatter through the penetration was considered. Second, bulk transport through the hot leg of the duct and then through the remaining thickness of wall was calculated to determine the amount of supplemental shielding required in the wall. Similar analyses were performed for the middle and cold legs of the penetration. Finally, additional external shielding from radiation streaming through the duct was determined for cases where the minimum offset distance was not feasible. Each case was broken down further into two phases. In the first phase of each case, photons were transported from the source material to an area at the face of the wall, or the opening of the duct, where photon energy and angular distributions were tallied, representing the source incident on the wall or opening. Then, a simplified model for each case was developed and analyzed using the data from the first phase and the new source term. (authors)
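    The two-phase bookkeeping can be sketched with a toy 1-D slab model (all geometry and interaction numbers below are hypothetical): phase 1 banks the states of photons reaching the wall face, and phase 2 would restart transport from that bank alone:

```python
import numpy as np

rng = np.random.default_rng(3)

def phase1_bank(n_histories, wall_thickness=4.0, mean_free_path=0.7,
                survival_prob=0.7):
    """Phase 1: transport photons through the bulk wall and bank the
    energy of every photon reaching the far face (the analogue of
    tallying the incident source distribution at the opening).
    Hypothetical 1-D forward-only slab model."""
    bank = []
    for _ in range(n_histories):
        x, energy = 0.0, 1.0
        while True:
            x += rng.exponential(mean_free_path)    # free flight
            if x >= wall_thickness:
                bank.append(energy)                 # tally state at the face
                break
            if rng.random() > survival_prob:
                break                               # absorbed in the wall
            energy *= 0.8                           # crude collision loss
    return bank

bank = phase1_bank(50_000)
# Phase 2 would restart transport from this banked source alone, so the
# streaming path can be sampled densely without repeating the bulk run.
source_fraction = len(bank) / 50_000
```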

  2. Simulation study on the behavior of X-rays and gamma rays in an inhomogeneous medium using the Monte Carlo technique

    International Nuclear Information System (INIS)

    Murase, Kenya; Kataoka, Masaaki; Kawamura, Masashi; Tamada, Shuji; Hamamoto, Ken

    1989-01-01

    A computer program based on the Monte Carlo technique was developed for analyzing the behavior of X-rays and gamma rays in an inhomogeneous medium. The statistical weight of a photon was introduced, and the survival biasing method was used to reduce the statistical error. This computer program has the mass energy absorption and attenuation coefficients for 69 tissues and organs as a database file, and can be applied to various cases of inhomogeneity. The simulation and experimental results for the central-axis percent depth dose in an inhomogeneous phantom were in good agreement. This computer program will be useful for analyzing the behavior of X-rays and gamma rays in an inhomogeneous medium consisting of various tissues and organs, not only in radiotherapy treatment planning but also in diagnostic radiology and in radiation protection. (author)
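    Survival biasing replaces the analogue absorption game with a deterministic weight reduction at each collision; both estimators have the same mean, but the biased one avoids the variance of randomly killed histories. A toy forward-only slab sketch (parameters are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)
L, MFP, P_ABS = 2.0, 1.0, 0.5   # slab thickness, mean free path, absorption prob.

def transmission(n_histories, survival_biasing):
    """Estimate the fraction of photons crossing a 1-D forward-only slab.
    With survival biasing the photon is never killed at a collision; its
    statistical weight is instead multiplied by the survival probability."""
    total = 0.0
    for _ in range(n_histories):
        x, w = 0.0, 1.0
        while True:
            x += rng.exponential(MFP)          # free flight
            if x >= L:
                total += w                     # score remaining weight
                break
            if survival_biasing:
                w *= 1.0 - P_ABS               # deterministic weight loss
                if w < 1e-4:
                    break                      # weight cutoff
            elif rng.random() < P_ABS:
                break                          # analogue absorption
    return total / n_histories

t_analogue = transmission(100_000, survival_biasing=False)
t_biased = transmission(100_000, survival_biasing=True)
# both estimate exp(-P_ABS*L/MFP) = exp(-1) ~ 0.368 for this toy model
```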

  3. Characterisation of lipid fraction of marine macroalgae by means of chromatography techniques coupled to mass spectrometry.

    Science.gov (United States)

    Ragonese, Carla; Tedone, Laura; Beccaria, Marco; Torre, Germana; Cichello, Filomena; Cacciola, Francesco; Dugo, Paola; Mondello, Luigi

    2014-02-15

    In this work the characterisation of the lipid fraction of several species of marine macro algae gathered along the eastern coast of Sicily is reported. Two species of green marine algae (Chloropyceae), two species of red marine algae (Rhodophyceae) and four species of brown marine algae (Pheophyceae) were evaluated in terms of fatty acids, triacylglycerols, pigments and phospholipids profile. Advanced analytical techniques were employed to fully characterise the lipid profile of these Mediterranean seaweeds, such as GC-MS coupled to a novel mass spectra database supported by the simultaneous use of linear retention index (LRI) for the identification of fatty acid profile; LC-MS was employed for the identification of triacylglycerols (TAGs), carotenoids and phospholipids; the determination of accurate mass was carried out on carotenoids and phospholipids. Quantitative data are reported on fatty acids and triacylglycerols as relative percentage of total fraction. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. THE MISSING FATHER FUNCTION IN PSYCHOANALYTIC THEORY AND TECHNIQUE: THE ANALYST'S INTERNAL COUPLE AND MATURING INTIMACY.

    Science.gov (United States)

    Diamond, Michael J

    2017-10-01

    This paper argues that recovering the "missing" paternal function in analytic space is essential for the patient's achievement of mature object relations. Emerging from the helpless infant's contact with primary caregivers, mature intimacy rests on establishing healthy triadic functioning based on an infant-with-mother-and-father. Despite a maternocentric bias in contemporary clinical theory, the emergence of triangularity and the inclusion of the paternal third as a separating element is vital in the analytic dyad. Effective technique requires the analyst's balanced interplay between the paternal, investigative and the maternal, maximally receptive modes of functioning-the good enough analytic couple within the analyst-to serve as the separating element that procreatively fertilizes the capacity for intimacy with a differentiated other. A clinical example illustrates how treatment is limited when the paternal function is minimized within more collusive, unconsciously symbiotic dyads. © 2017 The Psychoanalytic Quarterly, Inc.

  5. Acceleration of calculation of nuclear heating distributions in ITER toroidal field coils using hybrid Monte Carlo/deterministic techniques

    International Nuclear Information System (INIS)

    Ibrahim, Ahmad M.; Polunovskiy, Eduard; Loughlin, Michael J.; Grove, Robert E.; Sawan, Mohamed E.

    2016-01-01

    Highlights: • Assess the detailed distribution of the nuclear heating among the components of the ITER toroidal field coils. • Utilize the FW-CADIS method to dramatically accelerate the calculation of detailed nuclear analysis. • Compare the efficiency and reliability of the FW-CADIS method and the MCNP weight window generator. - Abstract: Because the superconductivity of the ITER toroidal field coils (TFC) must be protected against local overheating, detailed spatial distribution of the TFC nuclear heating is needed to assess the acceptability of the designs of the blanket, vacuum vessel (VV), and VV thermal shield. Accurate Monte Carlo calculations of the distributions of the TFC nuclear heating are challenged by the small volumes of the tally segmentations and by the thick layers of shielding provided by the blanket and VV. To speed up the MCNP calculation of the nuclear heating distribution in different segments of the coil casing, ground insulation, and winding packs of the ITER TFC, the ITER Organization (IO) used the MCNP weight window generator (WWG). The maximum relative uncertainty of the tallies in this calculation was 82.7%. In this work, this MCNP calculation was repeated using variance reduction parameters generated by the Oak Ridge National Laboratory AutomateD VAriaNce reducTion Generator (ADVANTG) code and both MCNP calculations were compared in terms of computational efficiency and reliability. Even though the ADVANTG MCNP calculation used less than one-sixth of the computational resources of the IO calculation, the relative uncertainties of all the tallies in the ADVANTG MCNP calculation were less than 6.1%. The nuclear heating results of the two calculations were significantly different by factors between 1.5 and 2.3 in some of the segments of the furthest winding pack turn from the plasma neutron source. Even though the nuclear heating in this turn may not affect the ITER design because it is much smaller than the nuclear heating in the

  6. Acceleration of calculation of nuclear heating distributions in ITER toroidal field coils using hybrid Monte Carlo/deterministic techniques

    Energy Technology Data Exchange (ETDEWEB)

    Ibrahim, Ahmad M., E-mail: ibrahimam@ornl.gov [Oak Ridge National Laboratory, P.O. Box 2008, Oak Ridge, TN 37831 (United States); Polunovskiy, Eduard; Loughlin, Michael J. [ITER Organization, Route de Vinon Sur Verdon, 13067 St. Paul Lez Durance (France); Grove, Robert E. [Oak Ridge National Laboratory, P.O. Box 2008, Oak Ridge, TN 37831 (United States); Sawan, Mohamed E. [University of Wisconsin-Madison, 1500 Engineering Dr., Madison, WI 53706 (United States)

    2016-11-01

    Highlights: • Assess the detailed distribution of the nuclear heating among the components of the ITER toroidal field coils. • Utilize the FW-CADIS method to dramatically accelerate the calculation of detailed nuclear analysis. • Compare the efficiency and reliability of the FW-CADIS method and the MCNP weight window generator. - Abstract: Because the superconductivity of the ITER toroidal field coils (TFC) must be protected against local overheating, detailed spatial distribution of the TFC nuclear heating is needed to assess the acceptability of the designs of the blanket, vacuum vessel (VV), and VV thermal shield. Accurate Monte Carlo calculations of the distributions of the TFC nuclear heating are challenged by the small volumes of the tally segmentations and by the thick layers of shielding provided by the blanket and VV. To speed up the MCNP calculation of the nuclear heating distribution in different segments of the coil casing, ground insulation, and winding packs of the ITER TFC, the ITER Organization (IO) used the MCNP weight window generator (WWG). The maximum relative uncertainty of the tallies in this calculation was 82.7%. In this work, this MCNP calculation was repeated using variance reduction parameters generated by the Oak Ridge National Laboratory AutomateD VAriaNce reducTion Generator (ADVANTG) code and both MCNP calculations were compared in terms of computational efficiency and reliability. Even though the ADVANTG MCNP calculation used less than one-sixth of the computational resources of the IO calculation, the relative uncertainties of all the tallies in the ADVANTG MCNP calculation were less than 6.1%. The nuclear heating results of the two calculations were significantly different by factors between 1.5 and 2.3 in some of the segments of the furthest winding pack turn from the plasma neutron source. Even though the nuclear heating in this turn may not affect the ITER design because it is much smaller than the nuclear heating in the

  7. Ant colony method to control variance reduction techniques in the Monte Carlo simulation of clinical electron linear accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Garcia-Pareja, S. [Servicio de Radiofisica Hospitalaria, Hospital Regional Universitario 'Carlos Haya', Avda. Carlos Haya, s/n, E-29010 Malaga (Spain)], E-mail: garciapareja@gmail.com; Vilches, M. [Servicio de Fisica y Proteccion Radiologica, Hospital Regional Universitario 'Virgen de las Nieves', Avda. de las Fuerzas Armadas, 2, E-18014 Granada (Spain); Lallena, A.M. [Departamento de Fisica Atomica, Molecular y Nuclear, Universidad de Granada, E-18071 Granada (Spain)

    2007-09-21

    The ant colony method is used to control the application of variance reduction techniques to the simulation of clinical electron linear accelerators used in cancer therapy. In particular, splitting and Russian roulette, two standard variance reduction methods, are considered. The approach can be applied to any accelerator in a straightforward way and, in addition, permits investigation of the 'hot' regions of the accelerator, information that is essential for developing a source model of this therapy tool.

  8. Ant colony method to control variance reduction techniques in the Monte Carlo simulation of clinical electron linear accelerators

    International Nuclear Information System (INIS)

    Garcia-Pareja, S.; Vilches, M.; Lallena, A.M.

    2007-01-01

    The ant colony method is used to control the application of variance reduction techniques to the simulation of clinical electron linear accelerators used in cancer therapy. In particular, splitting and Russian roulette, two standard variance reduction methods, are considered. The approach can be applied to any accelerator in a straightforward way and, in addition, permits investigation of the 'hot' regions of the accelerator, information that is essential for developing a source model of this therapy tool.
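    The two games are simple to state: splitting multiplies particles entering important regions, Russian roulette thins them elsewhere, and both preserve the expected weight. A generic sketch (threshold and survivor weight are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(5)

def split(weight, n_split):
    """Splitting: replace one particle of weight w entering an important
    region by n_split copies of weight w/n_split (unbiased; tally
    variance drops at the cost of tracking more particles)."""
    return [weight / n_split] * n_split

def russian_roulette(weight, threshold=0.1, survivor_weight=0.5):
    """Russian roulette: below the threshold, kill the particle with
    probability 1 - w/survivor_weight, or promote it to survivor_weight.
    The expected weight is preserved, so the game is unbiased."""
    if weight >= threshold:
        return weight
    if rng.random() < weight / survivor_weight:
        return survivor_weight   # survives with boosted weight
    return 0.0                   # killed

split_weights = split(1.0, 8)
rr_mean = float(np.mean([russian_roulette(0.05) for _ in range(200_000)]))
```

The roulette average converges to the input weight (0.05 here), which is the unbiasedness property both games rely on.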

  9. Plasma non-uniformity in a symmetric radiofrequency capacitively-coupled reactor with dielectric side-wall: a two dimensional particle-in-cell/Monte Carlo collision simulation

    Science.gov (United States)

    Liu, Yue; Booth, Jean-Paul; Chabert, Pascal

    2018-02-01

    A Cartesian-coordinate two-dimensional electrostatic particle-in-cell/Monte Carlo collision (PIC/MCC) plasma simulation code is presented, including a new treatment of charge balance at dielectric boundaries. It is used to simulate an Ar plasma in a symmetric radiofrequency capacitively-coupled parallel-plate reactor with a thick (3.5 cm) dielectric side-wall. The reactor size (12 cm electrode width, 2.5 cm electrode spacing) and frequency (15 MHz) are such that electromagnetic effects can be ignored. The dielectric side-wall effectively shields the plasma from the enhanced electric field at the powered-grounded electrode junction, which has previously been shown to produce locally enhanced plasma density (Dalvie et al 1993 Appl. Phys. Lett. 62 3207-9; Overzet and Hopkins 1993 Appl. Phys. Lett. 63 2484-6; Boeuf and Pitchford 1995 Phys. Rev. E 51 1376-90). Nevertheless, enhanced electron heating is observed in a region adjacent to the dielectric boundary, leading to maxima in ionization rate, plasma density and ion flux to the electrodes in this region, and not at the reactor centre as would otherwise be expected. The axially-integrated electron power deposition peaks closer to the dielectric edge than the electron density. The electron heating components are derived from the PIC/MCC simulations and show that this enhanced electron heating results from increased Ohmic heating in the axial direction as the electron density decreases towards the side-wall. We investigated the validity of different analytical formulas to estimate the Ohmic heating by comparing them to the PIC results. The widespread assumption that a time-averaged momentum transfer frequency, ν_m, can be used to estimate the momentum change can cause large errors, since it neglects both phase and amplitude information. Furthermore, the classical relationship between the total electron current and the electric field must be used with caution, particularly close to the dielectric edge where the (neglected
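    The warning about phase can be made concrete: the time-averaged Ohmic heating ⟨J·E⟩ depends on the phase between current and field, which a product of time-averaged (RMS) magnitudes discards. An illustrative calculation with arbitrary amplitudes and a 60-degree phase shift:

```python
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 10_001)    # one RF period
J = 1.0 * np.cos(t)                           # electron current density (arb.)
E = 0.8 * np.cos(t + np.pi / 3.0)             # field shifted by 60 degrees

# true time-averaged Ohmic heating: <J.E> = (1/2)|J||E|cos(phi) = 0.2
p_true = float(np.mean(J * E))
# phase-blind estimate from RMS magnitudes alone: |J||E|/2 = 0.4
p_no_phase = float(np.sqrt(np.mean(J ** 2)) * np.sqrt(np.mean(E ** 2)))
```

The phase-blind product overestimates the heating by a factor 1/cos(φ) (2x here); at 90 degrees the true average vanishes entirely.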

  10. SU-E-T-112: An OpenCL-Based Cross-Platform Monte Carlo Dose Engine (oclMC) for Coupled Photon-Electron Transport

    International Nuclear Information System (INIS)

    Tian, Z; Shi, F; Folkerts, M; Qin, N; Jiang, S; Jia, X

    2015-01-01

    Purpose: Low computational efficiency of Monte Carlo (MC) dose calculation impedes its clinical applications. Although a number of MC dose packages have been developed over the past few years, enabling fast MC dose calculations, most of these packages were developed under NVidia’s CUDA environment. This limited their code portability to other platforms, hindering the introduction of GPU-based MC dose engines to clinical practice. To solve this problem, we developed a cross-platform fast MC dose engine named oclMC under OpenCL environment for external photon and electron radiotherapy. Methods: Coupled photon-electron simulation was implemented with standard analogue simulation scheme for photon transport and Class II condensed history scheme for electron transport. We tested the accuracy and efficiency of oclMC by comparing the doses calculated using oclMC and gDPM, a previously developed GPU-based MC code on NVidia GPU platform, for a 15MeV electron beam and a 6MV photon beam in a homogeneous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. We also tested code portability of oclMC on different devices, including an NVidia GPU, two AMD GPUs and an Intel CPU. Results: Satisfactory agreements were observed in all photon and electron cases, with ∼0.48%–0.53% average dose differences at regions within 10% isodose line for electron beam cases and ∼0.15%–0.17% for photon beam cases. It took oclMC 3–4 sec to perform transport simulation for electron beam on NVidia Titan GPU and 35–51 sec for photon beam, both with ∼0.5% statistical uncertainty. The computation was 6%–17% slower than gDPM due to the differences in both physics model and development environment, which is considered not significant for clinical applications. In terms of code portability, gDPM only runs on NVidia GPUs, while oclMC successfully runs on all the tested devices. Conclusion: oclMC is an accurate and fast MC dose engine. Its high cross

  11. SU-E-T-112: An OpenCL-Based Cross-Platform Monte Carlo Dose Engine (oclMC) for Coupled Photon-Electron Transport

    Energy Technology Data Exchange (ETDEWEB)

    Tian, Z; Shi, F; Folkerts, M; Qin, N; Jiang, S; Jia, X [The University of Texas Southwestern Medical Ctr, Dallas, TX (United States)

    2015-06-15

    Purpose: Low computational efficiency of Monte Carlo (MC) dose calculation impedes its clinical applications. Although a number of MC dose packages have been developed over the past few years, enabling fast MC dose calculations, most of these packages were developed under NVidia’s CUDA environment. This limited their code portability to other platforms, hindering the introduction of GPU-based MC dose engines to clinical practice. To solve this problem, we developed a cross-platform fast MC dose engine named oclMC under OpenCL environment for external photon and electron radiotherapy. Methods: Coupled photon-electron simulation was implemented with standard analogue simulation scheme for photon transport and Class II condensed history scheme for electron transport. We tested the accuracy and efficiency of oclMC by comparing the doses calculated using oclMC and gDPM, a previously developed GPU-based MC code on NVidia GPU platform, for a 15MeV electron beam and a 6MV photon beam in a homogeneous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. We also tested code portability of oclMC on different devices, including an NVidia GPU, two AMD GPUs and an Intel CPU. Results: Satisfactory agreements were observed in all photon and electron cases, with ∼0.48%–0.53% average dose differences at regions within 10% isodose line for electron beam cases and ∼0.15%–0.17% for photon beam cases. It took oclMC 3–4 sec to perform transport simulation for electron beam on NVidia Titan GPU and 35–51 sec for photon beam, both with ∼0.5% statistical uncertainty. The computation was 6%–17% slower than gDPM due to the differences in both physics model and development environment, which is considered not significant for clinical applications. In terms of code portability, gDPM only runs on NVidia GPUs, while oclMC successfully runs on all the tested devices. Conclusion: oclMC is an accurate and fast MC dose engine. Its high cross

  12. Artificial intelligence techniques coupled with seasonality measures for hydrological regionalization of Q90 under Brazilian conditions

    Science.gov (United States)

    Beskow, Samuel; de Mello, Carlos Rogério; Vargas, Marcelle M.; Corrêa, Leonardo de L.; Caldeira, Tamara L.; Durães, Matheus F.; de Aguiar, Marilton S.

    2016-10-01

    Information on stream flows is essential for water resources management. The stream flow that is equaled or exceeded 90% of the time (Q90) is one of the most used low-stream-flow indicators in many countries, and it is determined from frequency analysis of stream flows over a historical series. However, the stream flow gauging network is generally not spatially dense enough to meet the demands of technicians, so the most plausible alternative is the use of hydrological regionalization. The objective of this study was to couple the artificial intelligence (AI) techniques K-means, Partitioning Around Medoids (PAM), K-harmonic means (KHM), Fuzzy C-means (FCM) and Genetic K-means (GKA) with measures of low-stream-flow seasonality, to verify their potential to delineate hydrologically homogeneous regions for the regionalization of Q90. For the performance analysis of the proposed methodology, location attributes from 108 watersheds situated in southern Brazil, along with attributes associated with their seasonality of low stream flows, were considered. It was concluded that: (i) AI techniques have the potential to delineate hydrologically homogeneous regions in the context of Q90 in the study region, especially the FCM method, based on fuzzy logic, and GKA, based on genetic algorithms; (ii) the attributes related to seasonality of low stream flows added important information that increased the accuracy of the grouping; and (iii) the adjusted mathematical models have excellent performance and can be used to estimate Q90 in locations lacking monitoring.
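    The grouping step can be illustrated with plain K-means, the simplest of the techniques compared (the two watershed attributes and all values below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(6)

def kmeans(X, k, n_iter=50):
    """Plain K-means (one of the AI techniques compared in the study,
    alongside PAM, KHM, FCM and GKA): alternate nearest-centre
    assignment and centre update."""
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iter):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):       # guard against empty clusters
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# hypothetical attributes per gauged watershed:
# (low-flow seasonality index, mean day-of-year of the annual minimum)
X = np.vstack([rng.normal([0.2, 60.0], [0.05, 10.0], size=(40, 2)),
               rng.normal([0.7, 240.0], [0.05, 10.0], size=(40, 2))])
labels, centers = kmeans(X, k=2)
```

In practice attributes should be standardized before clustering so no single variable dominates the distance; that step is omitted here for brevity.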

  13. Congestive heart failure, spouses' support and the couple's sleep situation: a critical incident technique analysis.

    Science.gov (United States)

    Broström, Anders; Strömberg, Anna; Dahlström, Ulf; Fridlund, Bengt

    2003-03-01

    Sleep related breathing disorders are common as well as a poor prognostic sign associated with higher mortality in patients with congestive heart failure (CHF). These patients often have a shorter total duration of sleep, disturbed sleep structure and increased daytime sleepiness, which can negatively affect all dimensions of the life situation. The spouse has an important role in supporting the patient in relation to sleep disorders, but this role may be adversely affected by the sleep situation of the couple. The aim of this study was to describe decisive situations that influence spouses' support to patients with CHF in relation to the couple's sleep situation. A qualitative descriptive design using critical incident technique was employed. Incidents were collected by means of interviews with 25 spouses of patients with CHF, strategically selected from two hospital-based specialist clinics in southern Sweden. Two main areas emerged in the analysis: support stimulating situations and support inhibiting situations. Support stimulating situations described how spouses' support was positively affected by their own adaptation in psychosocial or practical situations, and receiving help from others. Support inhibiting situations described how the spouses' support was negatively affected by sleep disturbances as a result of the patient's symptoms, anxiety in relation to the disease, limitations as a result of the sleeping habits, dissatisfaction with care related to the sleep situation, and being left to cope alone with the problems. An increased understanding of the stimulating and inhibiting situations influencing spouses' support for patients with CHF can guide health care personnel in deciding if an intervention is needed to improve the sleep situation for patient and spouse.

  14. A dual resolution measurement based Monte Carlo simulation technique for detailed dose analysis of small volume organs in the skull base region

    International Nuclear Information System (INIS)

    Yeh, Chi-Yuan; Tung, Chuan-Jung; Chao, Tsi-Chain; Lin, Mu-Han; Lee, Chung-Chi

    2014-01-01

    The purpose of this study was to examine the dose distribution of a skull base tumor and surrounding critical structures in response to high dose intensity-modulated radiosurgery (IMRS) with Monte Carlo (MC) simulation using a dual resolution sandwich phantom. The measurement-based Monte Carlo (MBMC) method (Lin et al., 2009) was adopted for the study. The major components of the MBMC technique involve (1) the BEAMnrc code for beam transport through the treatment head of a Varian 21EX linear accelerator, (2) the DOSXYZnrc code for patient dose simulation and (3) an EPID-measured efficiency map which describes the non-uniform fluence distribution of the IMRS treatment beam. For the simulated case, five isocentric 6 MV photon beams were designed to deliver a total dose of 1200 cGy in two fractions to the skull base tumor. A sandwich phantom for the MBMC simulation was created based on the patient's CT scan of a skull base tumor [gross tumor volume (GTV) = 8.4 cm³] near the right 8th cranial nerve. The phantom, consisting of a 1.2-cm thick skull base region, had a voxel resolution of 0.05×0.05×0.1 cm³ and was sandwiched in between 0.05×0.05×0.3 cm³ slices of a head phantom. A coarser 0.2×0.2×0.3 cm³ single resolution (SR) phantom was also created for comparison with the sandwich phantom. A particle history of 3×10⁸ for each beam was used for simulations of both the SR and the sandwich phantoms to achieve a statistical uncertainty of <2%. Our study showed that the planning target volume (PTV) receiving at least 95% of the prescribed dose (VPTV95) was 96.9%, 96.7% and 99.9% for the TPS, SR, and sandwich phantom, respectively. The maximum and mean doses to large organs such as the PTV, brain stem, and parotid gland for the TPS, SR and sandwich MC simulations did not show any significant difference; however, significant dose differences were observed for very small structures like the right 8th cranial nerve, right cochlea, right malleus and right semicircular
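    History counts like the 3×10⁸ quoted above follow from the 1/√N scaling of Monte Carlo relative uncertainty, which lets a short pilot run be extrapolated to a target precision. A sketch with hypothetical pilot numbers:

```python
def histories_for_target(pilot_histories, pilot_rel_unc, target_rel_unc):
    """Monte Carlo relative uncertainty scales as 1/sqrt(N), so a pilot
    run can be scaled to the history count a target uncertainty needs."""
    return pilot_histories * (pilot_rel_unc / target_rel_unc) ** 2

# hypothetical pilot: 1e6 histories gave 25% uncertainty in a small voxel
n_needed = histories_for_target(1.0e6, 0.25, 0.02)   # ~1.6e8 histories
```

The quadratic cost of tightening the uncertainty is why small-voxel tallies (like the cranial-nerve structures here) dominate the history budget.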

  15. The coupling technique: A two-wave acoustic method for the study of dislocation dynamics

    Science.gov (United States)

    Gremaud, G.; Bujard, M.; Benoit, W.

    1987-03-01

    Progress in the study of dislocation dynamics has been achieved using a two-wave acoustic method, which has been called the coupling technique. In this method, the attenuation α and the velocity v of ultrasonic waves are measured in a sample submitted simultaneously to a harmonic stress σ of low frequency. Closed curves Δα(σ) and Δv/v(σ) are drawn during each cycle of the applied stress. The shapes of these curves and their evolution are characteristic of each dislocation motion mechanism which is activated by the low-frequency applied stress. For this reason, the closed curves Δα(σ) and Δv/v(σ) can be considered as signatures of the interaction mechanism which controls the low-frequency dislocation motion. In this paper, the concept of signature is presented and explained with some experimental examples. It will also be shown that theoretical models can be developed which explain very well the experimental results.

  16. Calculation of light delay for coupled microrings by FDTD technique and Padé approximation.

    Science.gov (United States)

    Huang, Yong-Zhen; Yang, Yue-De

    2009-11-01

    The Padé approximation with Baker's algorithm is compared with the least-squares Prony method and the generalized pencil-of-functions (GPOF) method for calculating mode frequencies and mode Q factors of coupled optical microdisks simulated by the FDTD technique. Comparisons of intensity spectra and the corresponding mode frequencies and Q factors show that the Padé approximation yields more stable results than the Prony and GPOF methods, especially for the intensity spectrum. The results of the Prony and GPOF methods are greatly influenced by the selected number of resonant modes, which needs to be optimized during data processing, in addition to the length of the time-response signal. Furthermore, the Padé approximation is applied to calculate the light delay for embedded microring resonators from complex transmission spectra obtained by the Padé approximation from an FDTD output. The Prony and GPOF methods cannot be applied to calculate the transmission spectra, because the transmission signal obtained by the FDTD simulation cannot be expressed as a sum of damped complex exponentials.
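Record 16's core idea can be sketched compactly. The following is an illustrative Python example (using SciPy's generic `pade` helper, not the Baker-algorithm implementation the authors used): treat the sampled FDTD time signal as the power-series coefficients of its z-transform, build a rational approximant, and read the mode frequency and Q factor off the denominator's poles. The damped-cosine test signal and all parameter values are assumptions for illustration.

```python
import numpy as np
from scipy.interpolate import pade

# Illustrative damped mode: s[n] = exp(-gamma*n) * cos(omega*n)
omega, gamma = 0.5, 0.01          # angular frequency and damping per sample
n = np.arange(32)
s = np.exp(-gamma * n) * np.cos(omega * n)

# The z-transform sum_n s[n] x^n (with x = 1/z) is exactly a [1/2] rational
# function for a single damped cosine, so 4 series coefficients suffice.
p, q = pade(s[:4], 2)             # numerator order 1, denominator order 2

# Poles of the approximant in x; the resonances sit at z = 1/x.
poles = 1.0 / np.roots(q.coeffs)
z = poles[np.argmax(poles.imag)]  # pick the pole in the upper half plane

omega_rec = abs(np.angle(z))      # mode frequency (rad/sample)
gamma_rec = -np.log(abs(z))       # damping rate per sample
Q = omega_rec / (2 * gamma_rec)   # quality factor
print(omega_rec, gamma_rec, Q)
```

For a noiseless single mode the recovery is exact; the practical appeal reported in the record is that the rational form stays stable when many modes overlap in the spectrum.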

  17. Monte carlo feasibility study of an active neutron assay technique for full-volume UF{sub 6} cylinder assay using a correlated interrogation source

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Karen A., E-mail: kamiller@lanl.gov [Los Alamos National Laboratory, Los Alamos, P.O. Box 1663 MS E540, NM 87545 (United States); Menlove, Howard O.; Swinhoe, Martyn T.; Marlow, Johnna B. [Los Alamos National Laboratory, Los Alamos, P.O. Box 1663 MS E540, NM 87545 (United States)

    2013-03-01

    Uranium cylinder assay plays an important role in the nuclear material accounting at gas centrifuge enrichment plants. The Passive Neutron Enrichment Meter (PNEM) was designed to determine uranium mass and enrichment in 30B and 48Y cylinders using total neutron and coincidence counting in the passive mode. 30B and 48Y cylinders are used to hold bulk UF{sub 6} feed, product, and tails at enrichment plants. In this paper, we report the results of a Monte-Carlo-based feasibility study for an active uranium cylinder assay system based on the PNEM design. There are many advantages of the active technique such as a shortened count time and a more direct measure of {sup 235}U content. The active system is based on a modified PNEM design and uses a {sup 252}Cf source as the correlated, active interrogation source. We show through comparison with a random AmLi source of equal strength how the use of a correlated driver significantly boosts the active signal and reduces the statistical uncertainty. We also discuss ways in which an active uranium cylinder assay system can be optimized to minimize background from {sup 238}U fast-neutron induced fission and direct counts from the interrogation source.

  18. Estimation of the heat generation in vitrified waste product and shield thickness of the cask for the transportation of vitrified waste product using Monte Carlo technique

    International Nuclear Information System (INIS)

    Deepa, A.K.; Jakhete, A.P.; Mehta, D.; Kaushik, C.P.

    2011-01-01

    High-level liquid waste (HLW) generated during reprocessing of spent fuel contains most of the radioactivity present in the spent fuel, resulting in the need for isolation and surveillance over an extended period of time. The major components in HLW are corrosion products, fission products such as 137 Cs, 90 Sr, 106 Ru, 144 Ce and 125 Sb, actinides and various chemicals used during reprocessing of spent fuel. Fresh HLW, having an activity concentration of around 100 Ci/l, is vitrified into borosilicate glass and packed in canisters, which are placed in S.S. overpacks for better confinement. These overpacks contain around 0.7 million curies of activity. Characterisation of the activity in HLW and the activity profile of radionuclides for various cooling periods set the base for the study. For transporting the vitrified waste product (VWP), the two most important parameters are the shield thickness of the transportation cask and the heat generation in the waste product. This paper describes the methodology used in the estimation of the lead thickness of the transportation cask using the Monte Carlo technique. Heat generation due to the decay of fission products results in an increase in temperature of the vitrified waste product during interim storage and disposal. Since glass does not have very high thermal conductivity, the temperature difference between the canister and its surroundings bears significance in view of the possibility of temperature-based devitrification of the VWP. The heat generation in the canister and the overpack containing vitrified glass is also estimated using MCNP. (author)
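The decay-heat half of record 18 reduces to a simple activity-to-power conversion, P = A · E_per_decay. The sketch below takes the 0.7-million-curie figure from the abstract, but the mean recoverable energy per decay (0.8 MeV, a rough combined beta-plus-gamma figure for a Cs/Sr-dominated inventory) is an illustrative assumption, not a value from the study.

```python
# Rough decay-heat estimate for a vitrified-waste overpack: P = A * E_per_decay
CI_TO_BQ = 3.7e10           # 1 curie in becquerels (decays/s)
MEV_TO_J = 1.602e-13        # 1 MeV in joules

activity_ci = 0.7e6         # ~0.7 MCi per overpack (figure from the abstract)
mean_energy_mev = 0.8       # assumed mean energy deposited per decay (illustrative)

power_w = activity_ci * CI_TO_BQ * mean_energy_mev * MEV_TO_J
print(f"decay heat ~ {power_w / 1e3:.1f} kW")
```

An order-of-magnitude answer of a few kilowatts per overpack is what drives the canister-to-surroundings temperature difference the abstract worries about; the detailed MCNP calculation refines where that energy is deposited.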

  19. Yield stress in metallic glasses: The jamming-unjamming transition studied through Monte Carlo simulations based on the activation-relaxation technique

    International Nuclear Information System (INIS)

    Rodney, David; Schuh, Christopher A.

    2009-01-01

    A Monte Carlo approach allowing for stress control is employed to study the yield stress of a two-dimensional metallic glass in the limit of low temperatures and long (infinite) time scales. The elementary thermally activated events are determined using the activation-relaxation technique (ART). By tracking the minimum-energy state of the glass for various applied stresses, we find a well-defined jamming-unjamming transition at a yield stress about 30% lower than the steady-state flow stress obtained in conventional strain-controlled quasistatic simulations. ART is then used to determine the evolution of the distribution of thermally activated events in the glass microstructure both below and above the yield stress. We show that aging below the yield stress increases the stability of the glass, both thermodynamically (the internal potential energy decreases) and dynamically (the aged glass is surrounded by higher-energy barriers than the initial quenched configuration). In contrast, deformation above the yield stress brings the glass into a high internal potential energy state that is only marginally stable, being surrounded by a high density of low-energy barriers. The strong influence of deformation on the glass state is also evidenced by the microstructure polarization, revealed here through an asymmetry of the distribution of thermally activated inelastic strains in glasses after simple shear deformation.

  20. Transmission and group-delay characterization of coupled resonator optical waveguides apodized through the longitudinal offset technique.

    Science.gov (United States)

    Doménech, J D; Muñoz, P; Capmany, J

    2011-01-15

    In this Letter, the amplitude and group-delay characteristics of coupled resonator optical waveguides apodized through the longitudinal offset technique are presented. The devices have been fabricated in silicon-on-insulator technology employing deep ultraviolet lithography. The structures analyzed consisted of three racetrack resonators, uniform (nonapodized) and apodized with the aforementioned technique, showing delays of 5 ± 3 ps and 4 ± 0.5 ps over 1.6 and 1.4 nm bandwidths, respectively.

  1. Quantum Monte Carlo formulation of volume polarization in dielectric continuum theory

    NARCIS (Netherlands)

    Amovilli, Claudio; Filippi, Claudia; Floris, Franca Maria

    2008-01-01

    We present a novel formulation based on quantum Monte Carlo techniques for the treatment of volume polarization due to quantum mechanical penetration of the solute charge density into the solvent domain. The method makes it possible to accurately solve Poisson's equation of the solvation model coupled with the

  2. Ionizing radiation effects in Acai oil analysed by gas chromatography coupled to mass spectrometry technique

    International Nuclear Information System (INIS)

    Valli, Felipe; Fernandes, Carlos Eduardo; Moura, Sergio; Machado, Ana Carolina; Furasawa, Helio Akira; Pires, Maria Aparecida Faustino; Bustillos, Oscar Vega

    2007-01-01

    The Acai fruit is a well-known Brazilian seed plant used on a large scale as a feedstock source, especially in the Brazilian Northeast region. Acai oil is used for many purposes, from fuel sources to medicine. The scope of this paper is to analyze the modification of the chemical structures of acai oil after ionizing radiation. The radiation doses were set in the range of 10 to 25 kGy in the extracted acai oil. The analyses were made by gas chromatography coupled to mass spectrometry techniques. A GC/MS Shimatzu QP-5000 equipped with a 30-meter DB-5 capillary column with an internal diameter of 0.25 mm and a 0.25 μm film thickness was used. Helium was used as the carrier gas and gave a column head pressure of 12 p.s.i. (1 p.s.i. = 6894.76 Pa) and an average flux of 1 ml/min. The temperature program for the GC column consisted of a 4-minute hold at 75 deg C, a 15 deg C/min ramp to 200 deg C, 8 minutes isothermal, then a 20 deg C/min ramp to 250 deg C, 2 minutes isothermal. The extraction of the fatty acids was based on a liquid-liquid method using chloroform as the solvent. The resulting chromatograms show the presence of oleic acid and other fatty acids identified by the mass spectra library (NIST-92). The ionizing radiation depletes the fatty acids present in the acai oil. Details on the qualitative chemical analysis are also presented in this work. (author)
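The oven program quoted in record 2 fully determines the GC run time: each ramp contributes (ΔT)/rate minutes plus its isothermal hold. A quick check of the total, using only the figures given in the abstract:

```python
# GC oven program from the abstract: hold / ramp / hold / ramp / hold
segments = [
    ("hold at 75 C", 4.0),                   # 4-minute initial hold
    ("ramp 75->200 C", (200 - 75) / 15),     # 15 C/min ramp
    ("hold at 200 C", 8.0),                  # 8 minutes isothermal
    ("ramp 200->250 C", (250 - 200) / 20),   # 20 C/min ramp
    ("hold at 250 C", 2.0),                  # 2 minutes isothermal
]
total_min = sum(t for _, t in segments)
print(f"total run time ~ {total_min:.1f} min")   # ~ 24.8 min
```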

  3. A Multi-Model Reduction Technique for Optimization of Coupled Structural-Acoustic Problems

    DEFF Research Database (Denmark)

    Creixell Mediante, Ester; Jensen, Jakob Søndergaard; Brunskog, Jonas

    2016-01-01

    Finite Element models of structural-acoustic coupled systems can become very large for complex structures with multiple connected parts. Optimization of the performance of the structure based on harmonic analysis of the system requires solving the coupled problem iteratively and for several frequ....... Several methods are compared in terms of accuracy and size of the reduced systems for optimization of simple models....

  4. Accurate study of FosPeg® distribution in a mouse model using fluorescence imaging technique and fluorescence white monte carlo simulations

    DEFF Research Database (Denmark)

    Xie, Haiyan; Liu, Haichun; Svenmarker, Pontus

    2010-01-01

    Fluorescence imaging is used for quantitative in vivo assessment of drug concentration. Light attenuation in tissue is compensated for through Monte-Carlo simulations. The intrinsic fluorescence intensity, directly proportional to the drug concentration, could be obtained....

  5. Novel Electro-Optical Coupling Technique for Magnetic Resonance-Compatible Positron Emission Tomography Detectors

    Directory of Open Access Journals (Sweden)

    Peter D. Olcott

    2009-03-01

    Full Text Available A new magnetic resonance imaging (MRI-compatible positron emission tomography (PET detector design is being developed that uses electro-optical coupling to bring the amplitude and arrival time information of high-speed PET detector scintillation pulses out of an MRI system. The electro-optical coupling technology consists of a magnetically insensitive photodetector output signal connected to a nonmagnetic vertical cavity surface emitting laser (VCSEL diode that is coupled to a multimode optical fiber. This scheme essentially acts as an optical wire with no influence on the MRI system. To test the feasibility of this approach, a lutetium-yttrium oxyorthosilicate crystal coupled to a single pixel of a solid-state photomultiplier array was placed in coincidence with a lutetium oxyorthosilicate crystal coupled to a fast photomultiplier tube with both the new nonmagnetic VCSEL coupling and the standard coaxial cable signal transmission scheme. No significant change was observed in 511 keV photopeak energy resolution and coincidence time resolution. This electro-optical coupling technology enables an MRI-compatible PET block detector to have a reduced electromagnetic footprint compared with the signal transmission schemes deployed in the current MRI/PET designs.

  6. Novel electro-optical coupling technique for magnetic resonance-compatible positron emission tomography detectors.

    Science.gov (United States)

    Olcott, Peter D; Peng, Hao; Levin, Craig S

    2009-01-01

    A new magnetic resonance imaging (MRI)-compatible positron emission tomography (PET) detector design is being developed that uses electro-optical coupling to bring the amplitude and arrival time information of high-speed PET detector scintillation pulses out of an MRI system. The electro-optical coupling technology consists of a magnetically insensitive photodetector output signal connected to a nonmagnetic vertical cavity surface emitting laser (VCSEL) diode that is coupled to a multimode optical fiber. This scheme essentially acts as an optical wire with no influence on the MRI system. To test the feasibility of this approach, a lutetium-yttrium oxyorthosilicate crystal coupled to a single pixel of a solid-state photomultiplier array was placed in coincidence with a lutetium oxyorthosilicate crystal coupled to a fast photomultiplier tube with both the new nonmagnetic VCSEL coupling and the standard coaxial cable signal transmission scheme. No significant change was observed in 511 keV photopeak energy resolution and coincidence time resolution. This electro-optical coupling technology enables an MRI-compatible PET block detector to have a reduced electromagnetic footprint compared with the signal transmission schemes deployed in the current MRI/PET designs.

  7. Monte Carlo Methods in Physics

    International Nuclear Information System (INIS)

    Santoso, B.

    1997-01-01

    The method of Monte Carlo integration is reviewed briefly and some of its applications in physics are explained. A numerical experiment on the random generators used in Monte Carlo techniques is carried out to show the randomness behavior of various methods of generating them. To account for the weight function involved in the Monte Carlo method, the Metropolis method is used. From the results of the experiment, one can see that there are no regular patterns in the numbers generated, showing that the program generators are reasonably good, while the experimental results show a statistical distribution obeying the statistical distribution law. Further, some applications of the Monte Carlo methods in physics are given. The physical problems are chosen such that the models have available solutions, either exact or approximate, with which the Monte Carlo calculations can be compared. The comparisons show that good agreement has been obtained for the models considered
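The Metropolis step mentioned in record 7 is generic enough to sketch. A minimal random-walk Metropolis sampler (an illustration of the technique, not the paper's code) drawing from an unnormalized Gaussian weight exp(-x²/2) and using the chain to estimate an expectation value:

```python
import math
import random

random.seed(1)

def log_weight(x):
    # Unnormalized Gaussian weight: w(x) = exp(-x**2 / 2)
    return -0.5 * x * x

def metropolis(n_samples, step=1.0, burn_in=1_000):
    """Random-walk Metropolis: accept a move with prob min(1, w(x')/w(x))."""
    x, samples = 0.0, []
    for i in range(n_samples + burn_in):
        proposal = x + random.uniform(-step, step)
        if random.random() < math.exp(min(0.0, log_weight(proposal) - log_weight(x))):
            x = proposal
        if i >= burn_in:                 # discard the equilibration phase
            samples.append(x)
    return samples

samples = metropolis(100_000)
second_moment = sum(x * x for x in samples) / len(samples)
print(second_moment)   # should be close to 1, the Gaussian's second moment
```

Only ratios of the weight ever appear, which is exactly why the method handles unnormalized weight functions.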

  8. Pore-scale uncertainty quantification with multilevel Monte Carlo

    KAUST Repository

    Icardi, Matteo; Hoel, Haakon; Long, Quan; Tempone, Raul

    2014-01-01

    . Since there are no generic ways to parametrize the randomness in the porescale structures, Monte Carlo techniques are the most accessible to compute statistics. We propose a multilevel Monte Carlo (MLMC) technique to reduce the computational cost
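Record 8 is truncated, but the MLMC construction it names is compact: write the expectation on the finest level as a telescoping sum of level differences, estimate each difference with coupled coarse/fine samples, and put most samples on the cheap coarse levels. Below is a generic toy illustration (Euler paths of geometric Brownian motion, nothing to do with pore-scale flow) with illustrative parameters.

```python
import math
import random

random.seed(42)
S0, r, sigma, T = 1.0, 0.05, 0.2, 1.0   # illustrative GBM parameters

def euler_pair(level):
    """One coupled (fine, coarse) Euler estimate of S_T on levels l and l-1."""
    n_fine = 2 ** level
    h = T / n_fine
    s_f = s_c = S0
    dw_c = 0.0
    for step in range(n_fine):
        dw = random.gauss(0.0, math.sqrt(h))
        s_f += s_f * (r * h + sigma * dw)            # fine path, step h
        dw_c += dw
        if level > 0 and step % 2 == 1:              # coarse path reuses the
            s_c += s_c * (r * 2 * h + sigma * dw_c)  # summed increments, step 2h
            dw_c = 0.0
    return s_f, (s_c if level > 0 else 0.0)

def mlmc(max_level, n_per_level):
    """Telescoping estimator: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]."""
    est = 0.0
    for level in range(max_level + 1):
        n = n_per_level[level]
        total = 0.0
        for _ in range(n):
            f, c = euler_pair(level)
            total += (f - c) if level > 0 else f
        est += total / n
    return est

# Many cheap coarse samples, few expensive fine ones.
estimate = mlmc(4, [40_000, 10_000, 4_000, 2_000, 1_000])
print(estimate)   # close to E[S_T] = S0 * exp(r*T) ~ 1.051
```

The variance of each coupled difference shrinks with the step size, which is what lets the sample counts decay across levels without losing accuracy.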

  9. SU-E-I-42: Normalized Embryo/fetus Doses for Fluoroscopically Guided Pacemaker Implantation Procedures Calculated Using a Monte Carlo Technique

    Energy Technology Data Exchange (ETDEWEB)

    Damilakis, J; Stratakis, J; Solomou, G [University of Crete, Heraklion (Greece)

    2014-06-01

    Purpose: It is well known that pacemaker implantation is sometimes needed in pregnant patients with symptomatic bradycardia. To our knowledge, there is no reported experience regarding radiation doses to the unborn child resulting from fluoroscopy during pacemaker implantation. The purpose of the current study was to develop a method for estimating embryo/fetus dose from fluoroscopically guided pacemaker implantation procedures performed on pregnant patients during all trimesters of gestation. Methods: The Monte Carlo N-Particle (MCNP) radiation transport code was employed in this study. Three mathematical anthropomorphic phantoms representing the average pregnant patient at the first, second and third trimesters of gestation were generated using Bodybuilder software (White Rock science, White Rock, NM). The normalized embryo/fetus dose from the posteroanterior (PA), the 30° left-anterior oblique (LAO) and the 30° right-anterior oblique (RAO) projections were calculated for a wide range of kVp (50–120 kVp) and total filtration values (2.5–9.0 mm Al). Results: The results consist of radiation doses normalized to a) entrance skin dose (ESD) and b) dose area product (DAP) so that the dose to the unborn child from any fluoroscopic technique and x-ray device used can be calculated. ESD normalized doses ranged from 0.008 (PA, first trimester) to 2.519 μGy/mGy (RAO, third trimester). DAP normalized doses ranged from 0.051 (PA, first trimester) to 12.852 μGy/Gycm2 (RAO, third trimester). Conclusion: Embryo/fetus doses from fluoroscopically guided pacemaker implantation procedures performed on pregnant patients during all stages of gestation can be estimated using the method developed in this study. This study was supported by the Greek Ministry of Education and Religious Affairs, General Secretariat for Research and Technology, Operational Program ‘Education and Lifelong Learning’, ARISTIA (Research project: CONCERT)
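The normalized coefficients reported in record 9 are intended to be multiplied by a measured entrance skin dose or dose-area product to give the conceptus dose. A two-line sketch of that use, plugging in the abstract's upper-bound DAP coefficient (12.852 μGy per Gy·cm², third trimester, 30° RAO) together with an illustrative, made-up measured DAP:

```python
# Conceptus dose from a measured DAP using a normalized coefficient:
#   D_fetus = DAP * c_DAP
dap_gycm2 = 5.0     # illustrative measured dose-area product (Gy*cm^2)
c_dap = 12.852      # uGy per Gy*cm^2 (third trimester, 30-deg RAO, from the abstract)

fetal_dose_ugy = dap_gycm2 * c_dap
print(f"{fetal_dose_ugy:.1f} uGy")   # 64.3 uGy
```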

  10. SU-E-I-42: Normalized Embryo/fetus Doses for Fluoroscopically Guided Pacemaker Implantation Procedures Calculated Using a Monte Carlo Technique

    International Nuclear Information System (INIS)

    Damilakis, J; Stratakis, J; Solomou, G

    2014-01-01

    Purpose: It is well known that pacemaker implantation is sometimes needed in pregnant patients with symptomatic bradycardia. To our knowledge, there is no reported experience regarding radiation doses to the unborn child resulting from fluoroscopy during pacemaker implantation. The purpose of the current study was to develop a method for estimating embryo/fetus dose from fluoroscopically guided pacemaker implantation procedures performed on pregnant patients during all trimesters of gestation. Methods: The Monte Carlo N-Particle (MCNP) radiation transport code was employed in this study. Three mathematical anthropomorphic phantoms representing the average pregnant patient at the first, second and third trimesters of gestation were generated using Bodybuilder software (White Rock science, White Rock, NM). The normalized embryo/fetus dose from the posteroanterior (PA), the 30° left-anterior oblique (LAO) and the 30° right-anterior oblique (RAO) projections were calculated for a wide range of kVp (50–120 kVp) and total filtration values (2.5–9.0 mm Al). Results: The results consist of radiation doses normalized to a) entrance skin dose (ESD) and b) dose area product (DAP) so that the dose to the unborn child from any fluoroscopic technique and x-ray device used can be calculated. ESD normalized doses ranged from 0.008 (PA, first trimester) to 2.519 μGy/mGy (RAO, third trimester). DAP normalized doses ranged from 0.051 (PA, first trimester) to 12.852 μGy/Gycm2 (RAO, third trimester). Conclusion: Embryo/fetus doses from fluoroscopically guided pacemaker implantation procedures performed on pregnant patients during all stages of gestation can be estimated using the method developed in this study. This study was supported by the Greek Ministry of Education and Religious Affairs, General Secretariat for Research and Technology, Operational Program ‘Education and Lifelong Learning’, ARISTIA (Research project: CONCERT)

  11. Importance estimation in Monte Carlo modelling of neutron and photon transport

    International Nuclear Information System (INIS)

    Mickael, M.W.

    1992-01-01

    The estimation of neutron and photon importance in a three-dimensional geometry is achieved using a coupled Monte Carlo and diffusion theory calculation. The parameters required for the solution of the multigroup adjoint diffusion equation are estimated from an analog Monte Carlo simulation of the system under investigation. The solution of the adjoint diffusion equation is then used as an estimate of the particle importance in the actual simulation. This approach provides an automated and efficient variance reduction method for Monte Carlo simulations. The technique has been successfully applied to Monte Carlo simulation of neutron and coupled neutron-photon transport in the nuclear well-logging field. The results show that the importance maps obtained in a few minutes of computer time using this technique are in good agreement with Monte Carlo generated importance maps that require prohibitive computing times. The application of this method to Monte Carlo modelling of the response of neutron porosity and pulsed neutron instruments has resulted in major reductions in computation time. (Author)

  12. Generation and performance of a multigroup coupled neutron-gamma cross-section library for deterministic and Monte Carlo borehole logging analysis

    International Nuclear Information System (INIS)

    Kodeli, I.; Aldama, D. L.; De Leege, P. F. A.; Legrady, D.; Hoogenboom, J. E.; Cowan, P.

    2004-01-01

    As part of the IRTMBA (Improved Radiation Transport Modelling for Borehole Applications) project of the EU Community's 5th Framework Programme, a special-purpose multigroup cross-section library was prepared for use in deterministic and Monte Carlo oil-well logging particle transport calculations. This library is expected to improve the prediction of the neutron and gamma spectra at the detector positions of the logging tool, and its use for the interpretation of neutron logging measurements was studied. The preparation and testing of this library are described. (authors)

  13. Geant4-DNA coupling and validation in the GATE Monte Carlo platform for DNA molecules irradiation in a calculation grid environment

    International Nuclear Information System (INIS)

    Pham, Quang Trung

    2014-01-01

    Monte Carlo simulation methods are successfully used in various areas of medical physics and at different scales, for example from radiation therapy treatment planning systems to the prediction of the effects of radiation in cancer cells. The Monte Carlo simulation platform GATE, based on the Geant4 toolkit, offers features dedicated to simulations in medical physics (nuclear medicine and radiotherapy). For radiobiology applications, the Geant4-DNA physical models are implemented to track particles down to very low energies (eV) and are adapted for the estimation of micro-dosimetric quantities. In order to implement a multi-scale Monte Carlo platform, we first validated the physical models of Geant4-DNA and integrated them into GATE. Finally, we validated this implementation in the context of radiation therapy and proton therapy. In order to validate the Geant4-DNA physical models, dose point kernels for monoenergetic electrons (10 keV to 100 keV) were simulated using the physical models of Geant4-DNA and were compared to those simulated with the Geant4 standard physical models and another Monte Carlo code, EGSnrc. The ranges and stopping powers of electrons (7.4 eV to 1 MeV) and protons (1 keV to 100 MeV) calculated with GATE/Geant4-DNA were then compared with the literature. We proposed to simulate with the GATE platform the impact of clinical and preclinical beams on cellular DNA. We modeled a clinical proton beam of 193.1 MeV, a 6 MeV clinical electron beam and an X-ray irradiator beam. The beam models were validated by comparing the absorbed dose computed and measured in liquid water. The beams were then used to calculate the frequency of energy deposits in DNA represented by different geometries. First, the DNA molecule was represented by small cylinders: 2 nm x 2 nm (∼10 bp), 5 nm x 10 nm (nucleosome) and 25 nm x 25 nm (chromatin fiber). All these cylinders were placed randomly in a sphere of liquid water (500 nm radius). Then we reconstructed the DNA

  14. Carlos Romero

    Directory of Open Access Journals (Sweden)

    2008-05-01

    Full Text Available Interview (in Spanish). Presentation: Carlos Romero, a political scientist, is a professor-researcher at the Instituto de Estudios Políticos of the Facultad de Ciencias Jurídicas y Políticas of the Universidad Central de Venezuela, where he has served as coordinator of the doctoral program, deputy director and director of the Centro de Estudios de Postgrado. He has published eight books on political analysis and international relations, one of the most recent being Jugando con el globo. La política exter...

  15. Complex Interaction Mechanisms between Dislocations and Point Defects Studied in Pure Aluminium by a Two-Wave Acoustic Coupling Technique

    Science.gov (United States)

    Bremnes, O.; Progin, O.; Gremaud, G.; Benoit, W.

    1997-04-01

    Ultrasonic experiments using a two-wave coupling technique were performed on 99.999% pure Al in order to study the interaction mechanisms occurring between dislocations and point defects. The coupling technique consists in measuring the attenuation of ultrasonic waves during low-frequency stress cycles σ(t). One obtains closed curves Δα(σ), called signatures, whose shape and evolution are characteristic of the interaction mechanism controlling the low-frequency dislocation motion. The signatures observed were attributed to the interaction of the dislocations with extrinsic point defects. A new interpretation of the evolution of the signatures measured below 200 K with respect to temperature and stress frequency had to be established: they are linked to depinning of immobile point defects, whereas a thermally activated depinning mechanism does not fit the observations. The signatures measured between 200 and 370 K were interpreted as dragging and depinning of extrinsic point defects which are increasingly mobile with temperature.

  16. Liquid separation techniques coupled with mass spectrometry for chiral analysis of pharmaceuticals compounds and their metabolites in biological fluids.

    OpenAIRE

    Erny, Guillaume L.; Cifuentes, Alejandro

    2006-01-01

    Determination of the chiral composition of drugs is nowadays a key step in determining the purity, activity, bioavailability, biodegradation, etc., of pharmaceuticals. In this manuscript, work published over the last 5 years on the analysis of chiral drugs by liquid separation techniques coupled with mass spectrometry is reviewed. Namely, chiral analysis of pharmaceuticals including, e.g., anti-inflammatories, antihypertensives, relaxants, etc., by liquid chromatography-mass spectrometry and ...

  17. Application of a Coupled Eulerian-Lagrangian Technique on Constructability Problems of Site on Very Soft Soil

    Directory of Open Access Journals (Sweden)

    Junyoung Ko

    2017-10-01

    Full Text Available This paper presents the application of the Coupled Eulerian–Lagrangian (CEL) technique to constructability problems of sites on very soft soil. The main objective of this study was to investigate the constructability and application of two ground improvement methods, the forced replacement method and the deep mixing method. A comparison between the results of CEL analyses and field investigations was performed to verify the CEL modelling. The behavior of very soft soil and the constructability of these methods can be appropriately investigated using the CEL technique, which would be a useful tool for comprehensive reviews in preliminary design.

  18. Application of the Monte Carlo technique to the study of radiation transport in a prompt gamma in vivo neutron activation system

    International Nuclear Information System (INIS)

    Chan, A.A.; Beddoe, A.H.

    1985-01-01

    A Monte Carlo code (MORSE-SGC) from the Radiation Shielding Information Centre at Oak Ridge National Laboratory, USA, has been adapted and used to model radiation transport in the Auckland prompt gamma in vivo neutron activation analysis facility. Preliminary results are presented for the slow neutron flux in an anthropomorphic phantom which are in broad agreement with those obtained by measurement via activation foils. Since experimental optimization is not logistically feasible and since theoretical optimization of neutron activation facilities has not previously been attempted, it is hoped that the Monte Carlo calculations can be used to provide a basis for improved system design

  19. Residual dipolar couplings : a new technique for structure determination of proteins in solution

    NARCIS (Netherlands)

    van Lune, Frouktje Sapke

    2004-01-01

    The aim of the work described in this thesis was to investigate how residual dipolar couplings can be used to resolve or refine the three-dimensional structure of one of the proteins of the phosphoenol-pyruvate phosphotransferase system (PTS), the main transport system for carbohydrates in

  20. Laser vaporization/ionization interface for coupling microscale separation techniques with mass spectrometry

    Science.gov (United States)

    Yeung, E.S.; Chang, Y.C.

    1999-06-29

    The present invention provides a laser-induced vaporization and ionization interface for directly coupling microscale separation processes to a mass spectrometer. Vaporization and ionization of the separated analytes are facilitated by the addition of a light-absorbing component to the separation buffer or solvent. 8 figs.

  1. Dairy goat kids fed liquid diets in substitution of goat milk and slaughtered at different ages: an economic viability analysis using Monte Carlo techniques.

    Science.gov (United States)

    Knupp, L S; Veloso, C M; Marcondes, M I; Silveira, T S; Silva, A L; Souza, N O; Knupp, S N R; Cannas, A

    2016-03-01

    The aim of this study was to analyze the economic viability of producing dairy goat kids fed liquid diets in substitution of goat milk and slaughtered at two different ages. Forty-eight male newborn Saanen and Alpine kids were selected and allocated to four groups using a completely randomized factorial design: goat milk (GM), cow milk (CM), commercial milk replacer (CMR) and fermented cow colostrum (FC). Each group was then divided into two groups: slaughter at 60 and 90 days of age. The animals received Tifton hay and concentrate ad libitum. The total costs of liquid and solid feed plus labor, income and average gross margin were calculated. The data were then analyzed using Monte Carlo techniques with the @Risk 5.5 software, with 1000 iterations of the variables studied through the model. The kids fed GM and CMR generated negative profitability values when slaughtered at 60 days (US$ -16.4 and US$ -2.17, respectively) and also at 90 days (US$ -30.8 and US$ -0.18, respectively). The risk analysis showed that there is a 98% probability that profitability would be negative when GM is used. In this regard, CM and FC presented low risk when the kids were slaughtered at 60 days (8.5% and 21.2%, respectively) and an even lower risk when animals were slaughtered at 90 days (5.2% and 3.8%, respectively). The kids fed CM and slaughtered at 90 days presented the highest average gross income (US$ 67.88) and also the highest average gross margin (US$ 18.43/animal). For the 60-day rearing regime to be economically viable, the CMR cost should not exceed 11.47% of the animal selling price. This implies that the replacer cannot cost more than US$ 0.39 and 0.43/kg for the 60- and 90-day feeding regimes, respectively. The sensitivity analysis showed that the variables with the greatest impact on the final model's results were animal selling price, liquid diet cost, final weight at slaughter and labor. In conclusion, the production of male dairy goat kids can be economically
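The @Risk procedure described in record 1 (sample the input distributions, run the profitability model 1000 times, report the probability of a negative margin) can be reproduced with any random-number library. A generic sketch with entirely made-up input distributions, not the study's fitted data:

```python
import random

random.seed(7)

def gross_margin():
    # Illustrative input distributions (NOT the study's values)
    selling_price = random.gauss(70.0, 8.0)   # US$ per animal
    liquid_diet = random.gauss(35.0, 6.0)     # US$ liquid-feed cost
    solid_feed = random.gauss(12.0, 2.0)      # US$ hay + concentrate
    labor = random.gauss(10.0, 1.5)           # US$ labor
    return selling_price - (liquid_diet + solid_feed + labor)

iterations = 1000
margins = [gross_margin() for _ in range(iterations)]
risk = sum(m < 0 for m in margins) / iterations   # P(negative margin)
print(f"mean margin US$ {sum(margins) / iterations:.2f}, risk {risk:.1%}")
```

The sensitivity analysis in the record corresponds to re-running this model while perturbing one input distribution at a time and ranking the effect on the output.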

  2. Monte Carlo alpha calculation

    Energy Technology Data Exchange (ETDEWEB)

    Brockway, D.; Soran, P.; Whalen, P.

    1985-01-01

    A Monte Carlo algorithm to efficiently calculate static alpha eigenvalues, N = n·exp(αt), for supercritical systems has been developed and tested. A direct Monte Carlo approach to calculating a static alpha is to simply follow the buildup in time of neutrons in a supercritical system and evaluate the logarithmic derivative of the neutron population with respect to time. This procedure is expensive, and the solution is very noisy and almost useless for a system near critical. The modified approach is to convert the time-dependent problem to a static α-eigenvalue problem and regress α on solutions of a k-eigenvalue problem. In practice, this procedure is much more efficient than the direct calculation, and produces much more accurate results. Because the Monte Carlo codes are intrinsically three-dimensional and use elaborate continuous-energy cross sections, this technique is now used as a standard for evaluating other calculational techniques in odd geometries or with group cross sections.
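The "direct" approach record 2 describes is easy to illustrate: for a supercritical system the population grows as N = n·exp(αt), so α is the slope of ln N versus t. A toy sketch with noisy population snapshots and an assumed α (illustrative values, unrelated to the paper's calculations):

```python
import math
import random

random.seed(3)
alpha_true, n0 = 2.0, 1.0e4   # illustrative growth rate (1/s) and initial population

times = [0.1 * k for k in range(10)]
# Population snapshots with Poisson-like fluctuations (Gaussian approximation)
counts = [n0 * math.exp(alpha_true * t) + random.gauss(0.0, math.sqrt(n0 * math.exp(alpha_true * t)))
          for t in times]

# Log-linear least-squares fit: the slope of ln(N) vs t estimates alpha
xs, ys = times, [math.log(c) for c in counts]
xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)
alpha_est = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
            / sum((x - xbar) ** 2 for x in xs)
print(alpha_est)   # close to alpha_true = 2.0
```

The noise here is mild; the record's point is that near critical (α ≈ 0) the fluctuations swamp the slope, which is what motivates the k-eigenvalue regression instead.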

  3. Monte Carlo burnup codes acceleration using the correlated sampling method

    International Nuclear Information System (INIS)

    Dieudonne, C.

    2013-01-01

    For several years, Monte Carlo burnup/depletion codes have been available that couple a Monte Carlo code, which simulates the neutron transport, to a deterministic solver, which handles the medium depletion due to the neutron flux. Solving the Boltzmann and Bateman equations in this way makes it possible to track fine three-dimensional effects and to avoid the multi-group approximations made by deterministic solvers. The drawback is the prohibitive calculation time due to the Monte Carlo solver being called at each time step. In this document we present an original methodology to avoid the repetitive and time-expensive Monte Carlo simulations and to replace them by perturbation calculations: the successive burnup steps may be seen as perturbations of the isotopic concentrations of an initial Monte Carlo simulation. First, we present this method and provide details on the perturbative technique used, namely correlated sampling. Second, we develop a theoretical model to study the features of the correlated sampling method and to understand its effects on depletion calculations. Third, we discuss the implementation of this method in the TRIPOLI-4 code, as well as the precise calculation scheme used to obtain an important speed-up of the depletion calculation. We validate and optimize the perturbed depletion scheme with the calculation of the depletion of a PWR-like fuel cell. This technique is then used to calculate the depletion of a PWR-like assembly, studied at the beginning of its cycle. After validating the method against a reference calculation, we show that it can speed up standard Monte Carlo depletion codes by nearly an order of magnitude. (author) [fr]
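
The idea behind correlated sampling can be illustrated on a one-dimensional toy problem: the perturbed expectation is estimated from the *same* sampled histories as the nominal one, reweighted by the likelihood ratio of the perturbed to nominal densities, so the statistical noise largely cancels in the difference. The exponential free-path model and cross-section values here are illustrative, not TRIPOLI-4's scheme:

```python
import math
import random
import statistics

random.seed(0)

SIG0, SIG1 = 1.00, 1.05  # nominal vs perturbed attenuation coefficient (assumed)
N = 20000

def correlated_difference():
    """Estimate E1[x] - E0[x] from a single set of histories sampled from
    the nominal exponential density p0(x) = SIG0*exp(-SIG0*x), reweighted
    by the likelihood ratio w(x) = p1(x)/p0(x)."""
    diffs = []
    for _ in range(N):
        x = random.expovariate(SIG0)              # free path from nominal law
        w = (SIG1 / SIG0) * math.exp(-(SIG1 - SIG0) * x)
        diffs.append(x * w - x)                   # perturbed minus nominal, same history
    return statistics.mean(diffs), statistics.stdev(diffs) / math.sqrt(N)

mean_diff, std_err = correlated_difference()
exact = 1.0 / SIG1 - 1.0 / SIG0  # analytic difference of the two means
```

Because each history contributes to both tallies, the noise in the difference is far smaller than if two independent runs were subtracted; this is what makes treating burnup steps as perturbations cheap.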

  4. Monte Carlo simulations of neutron scattering instruments

    International Nuclear Information System (INIS)

    Aestrand, Per-Olof; Copenhagen Univ.; Lefmann, K.; Nielsen, K.

    2001-01-01

    A Monte Carlo simulation is an important computational tool used in many areas of science and engineering. The use of Monte Carlo techniques for simulating neutron scattering instruments is discussed. The basic ideas, techniques and approximations are presented. Since the construction of a neutron scattering instrument is very expensive, Monte Carlo software used for the design of instruments has to be validated and tested extensively. The McStas software was designed with these aspects in mind, and some of its basic principles are discussed. Finally, future prospects for using Monte Carlo simulations to optimize neutron scattering experiments are discussed. (R.P.)
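
A McStas-style instrument simulation is, at its core, Monte Carlo ray tracing of neutron histories through the instrument geometry. The toy model below (a finite source and two slits, with invented dimensions) shows the principle; a real instrument model also tracks wavelength, gravity, mirror reflectivity and so on:

```python
import random

random.seed(7)

SRC_HALF_W = 0.01        # source half-width [m] (assumed)
DIV = 0.005              # max |divergence| [rad] (assumed)
SLITS = [(1.0, 0.005),   # (distance from source [m], half-width [m]) (assumed)
         (3.0, 0.005)]

def transmission(n=100000):
    """Trace straight neutron rays from the source plane and count the
    fraction that passes every downstream slit."""
    passed = 0
    for _ in range(n):
        x = random.uniform(-SRC_HALF_W, SRC_HALF_W)  # start position
        dx = random.uniform(-DIV, DIV)               # angular divergence
        if all(abs(x + dx * z) <= hw for z, hw in SLITS):
            passed += 1
    return passed / n

T = transmission()  # transmitted fraction of sampled histories
```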

  5. Coupling of near-field thermal radiative heating and phonon Monte Carlo simulation: Assessment of temperature gradient in n-doped silicon thin film

    International Nuclear Information System (INIS)

    Wong, Basil T.; Francoeur, Mathieu; Bong, Victor N.-S.; Mengüç, M. Pinar

    2014-01-01

    Near-field thermal radiative exchange between two objects is typically more effective than far-field exchange, as the heat flux can increase by several orders of magnitude due to tunneling of evanescent waves. This phenomenon has started to gain popularity in nanotechnology, especially in nano-gap thermophotovoltaic systems and near-field radiative cooling of micro-/nano-devices. Here, we explored the existence of a thermal gradient within an n-doped silicon thin film subjected to intensive near-field thermal radiative heating. The near-field radiative power density deposited within the film is calculated using the Maxwell equations combined with fluctuational electrodynamics. A phonon Monte Carlo simulation is then used to assess the temperature gradient by treating the near-field radiative power density as the heat source. Results indicated that it is improbable to establish a temperature gradient with near-field radiative heating as a continuous source unless the source comprises ultra-short radiative pulses with a strong power density. - Highlights: • This study investigates the temperature distribution in an n-doped silicon thin film. • Near-field radiative heating is treated as a volumetric phenomenon. • The temperature gradient is computed using phonon MC simulation. • The temperature of the thin film can be approximated as uniform for radiation calculations. • If the heat source is pulsed radiation, a temperature gradient can be established

  6. Modeling of very low frequency (VLF) radio wave signal profile due to solar flares using the GEANT4 Monte Carlo simulation coupled with ionospheric chemistry

    Directory of Open Access Journals (Sweden)

    S. Palit

    2013-09-01

    X-ray photons emitted during solar flares cause ionization in the lower ionosphere (~60 to 100 km) in excess of what is expected to occur due to a quiet sun. Very low frequency (VLF) radio wave signals reflected from the D-region of the ionosphere are affected by this excess ionization. In this paper, we reproduce the deviation in VLF signal strength during solar flares by numerical modeling. We use the GEANT4 Monte Carlo simulation code to compute the rate of ionization due to an M-class flare and an X-class flare. The output of the simulation is then used in a simplified ionospheric chemistry model to calculate the time variation of electron density at different altitudes in the D-region of the ionosphere. The resulting electron density variation profile is then self-consistently used in the LWPC code to obtain the time variation of the change in the VLF signal. We modeled the VLF signal along the NWC (Australia) to IERC/ICSP (India) propagation path and compared the results with observations. The agreement is found to be very satisfactory.
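
The chemistry step the abstract describes, converting a flare-driven ion-production rate into a time-varying electron density, reduces in its simplest form to the continuity equation dNe/dt = q(t) − α·Ne². The sketch below integrates that balance for a Gaussian flare pulse; all rate constants and the source profile are illustrative assumptions, not the paper's GEANT4 output or its full chemistry model:

```python
import math

ALPHA = 1.0e-7   # effective recombination coefficient [cm^3/s] (assumed)
Q_QUIET = 10.0   # quiet-sun ion production rate [cm^-3 s^-1] (assumed)
Q_FLARE = 500.0  # flare peak production rate [cm^-3 s^-1] (assumed)

def electron_density(t_end=3600.0, dt=1.0):
    """Euler-integrate dNe/dt = q(t) - ALPHA*Ne^2 for a Gaussian flare
    pulse centred at t = 900 s (pulse width and timing assumed)."""
    ne = math.sqrt(Q_QUIET / ALPHA)  # quiet-time equilibrium density
    trace = []
    for step in range(int(t_end / dt)):
        t = step * dt
        q = Q_QUIET + Q_FLARE * math.exp(-((t - 900.0) / 180.0) ** 2)
        ne += dt * (q - ALPHA * ne * ne)
        trace.append(ne)
    return trace

trace = electron_density()
quiet = math.sqrt(Q_QUIET / ALPHA)  # 1.0e4 cm^-3 with these numbers
peak = max(trace)                   # flare-enhanced density
```

The density rises sharply with the flare and then relaxes back toward its quiet-time equilibrium, which is the behaviour that drives the VLF amplitude deviation.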

  7. Monte Carlo simulation for IRRMA

    International Nuclear Information System (INIS)

    Gardner, R.P.; Liu Lianyan

    2000-01-01

    Monte Carlo simulation is fast becoming a standard approach for many radiation applications that were previously treated almost entirely by experimental techniques. This is certainly true for Industrial Radiation and Radioisotope Measurement Applications - IRRMA. The reasons for this include: (1) the increased cost and inadequacy of experimentation for design and interpretation purposes; (2) the availability of low cost, large memory, and fast personal computers; and (3) the general availability of general purpose Monte Carlo codes that are increasingly user-friendly, efficient, and accurate. This paper discusses the history and present status of Monte Carlo simulation for IRRMA including the general purpose (GP) and specific purpose (SP) Monte Carlo codes and future needs - primarily from the experience of the authors

  8. A novel potential/viscous flow coupling technique for computing helicopter flow fields

    Science.gov (United States)

    Summa, J. Michael; Strash, Daniel J.; Yoo, Sungyul

    1993-01-01

    The primary objective of this work was to demonstrate the feasibility of a new potential/viscous flow coupling procedure for reducing computational effort while maintaining solution accuracy. This closed-loop, overlapped velocity-coupling concept has been developed in a new two-dimensional code, ZAP2D (Zonal Aerodynamics Program - 2D), a three-dimensional code for wing analysis, ZAP3D (Zonal Aerodynamics Program - 3D), and a three-dimensional code for isolated helicopter rotors in hover, ZAPR3D (Zonal Aerodynamics Program for Rotors - 3D). Comparisons with large domain ARC3D solutions and with experimental data for a NACA 0012 airfoil have shown that the required domain size can be reduced to a few tenths of a percent chord for the low Mach and low angle of attack cases and to less than 2-5 chords for the high Mach and high angle of attack cases while maintaining solution accuracies to within a few percent. This represents CPU time reductions by a factor of 2-4 compared with ARC2D. The current ZAP3D calculation for a rectangular plan-form wing of aspect ratio 5 with an outer domain radius of about 1.2 chords represents a speed-up in CPU time over the ARC3D large domain calculation by about a factor of 2.5 while maintaining solution accuracies to within a few percent. A ZAPR3D simulation for a two-bladed rotor in hover with a reduced grid domain of about two chord lengths was able to capture the wake effects and compared accurately with the experimental pressure data. Further development is required in order to substantiate the promise of computational improvements due to the ZAPR3D coupling concept.

  9. Determination of rare earth elements in tomato plants by inductively coupled plasma mass spectrometry techniques.

    Science.gov (United States)

    Spalla, S; Baffi, C; Barbante, C; Turetta, C; Turretta, C; Cozzi, G; Beone, G M; Bettinelli, M

    2009-10-30

    In recent years identification of the geographical origin of food has grown more important as consumers have become interested in knowing the provenance of the food that they purchase and eat. Certification schemes and labels have thus been developed to protect consumers and genuine producers from the improper use of popular brand names or renowned geographical origins. As the tomato is one of the major components of what is considered to be the healthy Mediterranean diet, it is important to be able to determine the geographical origin of tomatoes and tomato-based products such as tomato sauce. The aim of this work is to develop an analytical method to determine rare earth elements (REE) for the control of the geographic origin of tomatoes. The content of REE in tomato plant samples collected from an agricultural area in Piacenza, Italy, was determined using four different digestion procedures, with and without HF. Microwave dissolution with HNO3 + H2O2 proved to be the most suitable digestion procedure. Inductively coupled plasma quadrupole mass spectrometry (ICP-QMS) and inductively coupled plasma sector field mass spectrometry (ICP-SFMS) instruments, both coupled with a desolvation system, were used to determine the REE in tomato plants in two different laboratories. A matched calibration curve method was used for the quantification of the analytes. The method detection limits (MDLs) ranged from 0.03 ng g^-1 for Ho, Tm, and Lu to 2 ng g^-1 for La and Ce. The precision, in terms of relative standard deviation on six replicates, was good, with values ranging, on average, from 6.0% for LREE (light rare earth elements) to 16.5% for HREE (heavy rare earth elements). These detection limits allowed the determination of the very low concentrations of REE present in tomato berries. For the concentrations of REE in tomato plants, the following trend was observed: roots > leaves > stems > berries. Copyright 2009 John Wiley & Sons, Ltd.

  10. Analysis of heat and mass transfers in two-phase flow by coupling optical diagnostic techniques

    International Nuclear Information System (INIS)

    Lemaitre, P.; Porcheron, E.

    2008-01-01

    During the course of a hypothetical accident in a nuclear power plant, spraying might be actuated to reduce static pressure in the containment. To acquire a better understanding of the heat and mass transfers between a spray and the surrounding confined gas, non-intrusive optical measurements have to be carried out simultaneously on both phases. The coupling of global rainbow refractometry with out-of-focus imaging and spontaneous Raman scattering spectroscopy allows us to calculate the local Spalding parameter B_M, which is useful in describing heat transfer associated with two-phase flow. (orig.)

  11. Analysis of heat and mass transfers in two-phase flow by coupling optical diagnostic techniques

    Energy Technology Data Exchange (ETDEWEB)

    Lemaitre, P.; Porcheron, E. [Institut de Radioprotection et de Surete Nucleaire, Saclay (France)

    2008-08-15

    During the course of a hypothetical accident in a nuclear power plant, spraying might be actuated to reduce static pressure in the containment. To acquire a better understanding of the heat and mass transfers between a spray and the surrounding confined gas, non-intrusive optical measurements have to be carried out simultaneously on both phases. The coupling of global rainbow refractometry with out-of-focus imaging and spontaneous Raman scattering spectroscopy allows us to calculate the local Spalding parameter B_M, which is useful in describing heat transfer associated with two-phase flow. (orig.)

  12. Atmospheric pressure surface sampling/ionization techniques for direct coupling of planar separations with mass spectrometry.

    Science.gov (United States)

    Pasilis, Sofie P; Van Berkel, Gary J

    2010-06-18

    Planar separations, which include thin layer chromatography and gel electrophoresis, are in widespread use as important and powerful tools for conducting separations of complex mixtures. To increase the utility of planar separations, new methods are needed that allow in situ characterization of the individual components of the separated mixtures. A large number of atmospheric pressure surface sampling and ionization techniques for use with mass spectrometry have emerged in the past several years, and several have been investigated as a means for mass spectrometric read-out of planar separations. In this article, we review the atmospheric pressure surface sampling and ionization techniques that have been used for the read-out of planar separation media. For each technique, we briefly explain the operational basics and discuss the analyte type for which it is appropriate and some specific applications from the literature. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  13. Efficiency assessment of runoff harvesting techniques using a 3D coupled surface-subsurface hydrological model

    International Nuclear Information System (INIS)

    Verbist, K.; Cronelis, W. M.; McLaren, R.; Gabriels, D.; Soto, G.

    2009-01-01

    In arid and semi-arid zones runoff harvesting techniques are often applied to increase the water retention and infiltration on steep slopes. Additionally, they act as an erosion control measure to reduce land degradation hazards. Both in literature and in the field, a large variety of runoff collecting systems are found, as well as large variations in design and dimensions. Therefore, detailed measurements were performed on a semi-arid slope in central Chile to allow identification of the effect of a simple water harvesting technique on soil water availability. For this purpose, twenty two TDR-probes were installed and were monitored continuously during and after a simulated rainfall event. These data were used to calibrate the 3D distributed flow model HydroGeoSphere, to assess the runoff components and soil water retention as influenced by the water harvesting technique, both under simulated and natural rainfall conditions. (Author) 6 refs.

  14. Experimental study of laser ablation as sample introduction technique for inductively coupled plasma-mass spectrometry

    International Nuclear Information System (INIS)

    Van Winckel, S.

    2001-01-01

    The contribution consists of an abstract of a PhD thesis. In the PhD study, several complementary applications of laser-ablation were investigated in order to characterise experimentally laser ablation (LA) as a sample introduction technique for ICP-MS. Three applications of LA as a sample introduction technique are discussed: (1) the microchemical analysis of the patina of weathered marble; (2) the possibility to measure isotope ratios (in particular Pb isotope ratios in archaeological bronze artefacts); and (3) the determination of Si in Al as part of a dosimetric study of the BR2 reactor vessel

  15. Prospective activity levels in the regions of the UKCS under different oil and gas prices: an application of the Monte Carlo technique

    International Nuclear Information System (INIS)

    Kemp, A.G.; Stephen, L.

    1999-01-01

    This paper summarises the results of a study using the Monte Carlo simulation to examine activity levels in the regions of the UK continental shelf under different oil and gas prices. Details of the methodology, data, and assumptions used are given, and the production of oil and gas, new field investment, aggregate operating expenditures, and gross revenues under different price scenarios are addressed. The total potential oil and gas production under the different price scenarios for 2000-2013 are plotted. (UK)

  16. Liquid separation techniques coupled with mass spectrometry for chiral analysis of pharmaceuticals compounds and their metabolites in biological fluids.

    Science.gov (United States)

    Erny, G L; Cifuentes, A

    2006-02-24

    Determination of the chiral composition of drugs is nowadays a key step in determining the purity, activity, bioavailability, biodegradation, etc., of pharmaceuticals. In this article, work published over the last 5 years on the analysis of chiral drugs by liquid separation techniques coupled with mass spectrometry is reviewed. Namely, chiral analyses of pharmaceuticals including, e.g., anti-inflammatories, antihypertensives, relaxants, etc., by liquid chromatography-mass spectrometry and capillary electrophoresis-mass spectrometry are included. The importance and interest of the analysis of the enantiomers of the active compound and its metabolites in different biological fluids (plasma, urine, cerebrospinal fluid, etc.) are also discussed.

  17. Characteristics of miniature electronic brachytherapy x-ray sources based on TG-43U1 formalism using Monte Carlo simulation techniques

    International Nuclear Information System (INIS)

    Safigholi, Habib; Faghihi, Reza; Jashni, Somaye Karimi; Meigooni, Ali S.

    2012-01-01

    Purpose: The goal of this study is to determine a method for Monte Carlo (MC) characterization of miniature electronic brachytherapy x-ray sources (MEBXS) and to set dosimetric parameters according to the TG-43U1 formalism. TG-43U1 parameters were used to obtain optimal designs of MEBXS. Parameters that affect the dose distribution, such as anode shape, target thickness, target angle, and electron beam source characteristics, were evaluated. Optimized MEBXS designs were obtained and used to determine radial dose functions and 2D anisotropy functions in the electron energy range of 25-80 keV. Methods: A tungsten anode was considered in two different geometries, hemispherical and conical-hemisphere. These configurations were analyzed by the 4C MC code with several different optimization techniques. The first optimization compared target thickness layers versus electron energy. These optimized thicknesses were compared with published results by Ihsan et al. [Nucl. Instrum. Methods Phys. Res. B 264, 371-377 (2007)]. The second optimization evaluated electron source characteristics by changing the cathode shapes and electron energies. Electron sources studied included (1) point sources, (2) uniform cylinders, and (3) nonuniform cylindrical shell geometries. The third optimization was used to assess the apex angle of the conical-hemisphere target. The goal of these optimizations was to produce 2D dose anisotropy functions closer to unity. An overall optimized MEBXS was developed from this analysis. The results obtained from this model were compared to known characteristics of HDR 125I, LDR 103Pd, and the Xoft Axxent electronic brachytherapy source (XAEBS) [Med. Phys. 33, 4020-4032 (2006)]. Results: The optimized anode thickness as a function of electron energy is fitted by the linear equation Y (μm) = 0.0459·X (keV) − 0.7342. The optimized electron source geometry is obtained for a disk-shaped parallel beam (uniform cylinder) with 0.9 mm radius.
The TG-43 distribution
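
The reported fit can be evaluated directly; for example, at 50 keV it gives an optimized tungsten thickness of about 1.56 μm. This is a plain evaluation of the published linear relation, meaningful only inside the stated 25-80 keV design range:

```python
def optimal_thickness_um(energy_kev):
    """Published fit: Y (um) = 0.0459 * X (keV) - 0.7342, for 25-80 keV."""
    return 0.0459 * energy_kev - 0.7342

# Tabulate the fit across the stated design range.
thicknesses = {e: round(optimal_thickness_um(e), 3) for e in (25, 40, 60, 80)}
```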

  18. Characteristics of miniature electronic brachytherapy x-ray sources based on TG-43U1 formalism using Monte Carlo simulation techniques

    Energy Technology Data Exchange (ETDEWEB)

    Safigholi, Habib; Faghihi, Reza; Jashni, Somaye Karimi; Meigooni, Ali S. [Faculty of Engineering, Science and Research Branch, Islamic Azad University, Fars, 73481-13111, Persepolis (Iran, Islamic Republic of); Department of Nuclear Engineering and Radiation Research Center, Shiraz University, 71936-16548, Shiraz (Iran, Islamic Republic of); Shiraz University of Medical Sciences, 71348-14336, Shiraz (Iran, Islamic Republic of); Department of Radiation therapy, Comprehensive Cancer Center of Nevada, 3730 South Eastern Avenue, Las Vegas, Nevada 89169 (United States)

    2012-04-15

    Purpose: The goal of this study is to determine a method for Monte Carlo (MC) characterization of miniature electronic brachytherapy x-ray sources (MEBXS) and to set dosimetric parameters according to the TG-43U1 formalism. TG-43U1 parameters were used to obtain optimal designs of MEBXS. Parameters that affect the dose distribution, such as anode shape, target thickness, target angle, and electron beam source characteristics, were evaluated. Optimized MEBXS designs were obtained and used to determine radial dose functions and 2D anisotropy functions in the electron energy range of 25-80 keV. Methods: A tungsten anode was considered in two different geometries, hemispherical and conical-hemisphere. These configurations were analyzed by the 4C MC code with several different optimization techniques. The first optimization compared target thickness layers versus electron energy. These optimized thicknesses were compared with published results by Ihsan et al. [Nucl. Instrum. Methods Phys. Res. B 264, 371-377 (2007)]. The second optimization evaluated electron source characteristics by changing the cathode shapes and electron energies. Electron sources studied included (1) point sources, (2) uniform cylinders, and (3) nonuniform cylindrical shell geometries. The third optimization was used to assess the apex angle of the conical-hemisphere target. The goal of these optimizations was to produce 2D dose anisotropy functions closer to unity. An overall optimized MEBXS was developed from this analysis. The results obtained from this model were compared to known characteristics of HDR 125I, LDR 103Pd, and the Xoft Axxent electronic brachytherapy source (XAEBS) [Med. Phys. 33, 4020-4032 (2006)]. Results: The optimized anode thickness as a function of electron energy is fitted by the linear equation Y (μm) = 0.0459·X (keV) − 0.7342. The optimized electron source geometry is obtained for a disk-shaped parallel beam (uniform cylinder) with 0.9 mm radius.
The TG-43

  19. Laser ablation inductively coupled plasma mass spectrometry. An alternative technique for monitoring 90Sr

    International Nuclear Information System (INIS)

    TsingHai Wang; Yan-Chen Lai; Yi-Kong Hsieh; Chu-Fang Wang

    2017-01-01

    Developing a rapid detection method for monitoring released 90Sr remains a challenge to analytical chemists, particularly considering its low concentration and significant interferences in environmental samples. We proposed a concept to detect 90Sr on the surface of fish scales using laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS). The high affinity of fish scales for Sr is capable of preconcentrating 90Sr, which minimizes isobaric interferences from 90Zr+ or 89YH+, while the tailing effect from abundant 88Sr can be effectively reduced by adjusting the forward power of the ICP-MS component. Adopting dried droplets of internal standards further allows a semiquantification of the 90Sr content on the surface of fish scales, which also creates an opportunity to monitor the bioaccumulation of 90Sr after the Fukushima Daiichi nuclear disaster. (author)

  20. Hyperspectral fluorescence imaging coupled with multivariate image analysis techniques for contaminant screening of leafy greens

    Science.gov (United States)

    Everard, Colm D.; Kim, Moon S.; Lee, Hoyoung

    2014-05-01

    The production of contaminant-free fresh fruit and vegetables is needed to reduce foodborne illnesses and related costs. Leafy greens grown in the field can be susceptible to fecal matter contamination from uncontrolled livestock and wild animals entering the field. Pathogenic bacteria can be transferred via fecal matter, and several outbreaks of E. coli O157:H7 have been associated with the consumption of leafy greens. This study examines the use of hyperspectral fluorescence imaging coupled with multivariate image analysis to detect fecal contamination on spinach leaves (Spinacia oleracea). Hyperspectral fluorescence images from 464 to 800 nm were captured; ultraviolet excitation was supplied by two LED-based line light sources at 370 nm. Key wavelengths and algorithms useful for a contaminant-screening optical imaging device were identified and developed, respectively. A non-invasive screening device has the potential to reduce the harmful consequences of foodborne illnesses.

  1. Coupled neutronic-thermal-hydraulics analysis in a coolant subchannel of a PWR using CFD techniques

    Energy Technology Data Exchange (ETDEWEB)

    Ribeiro, Felipe P.; Su, Jian, E-mail: sujian@nuclear.ufrj.br [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear

    2017-07-01

    The high capacity of Computational Fluid Dynamics (CFD) codes to predict multi-dimensional thermal-hydraulic behaviour and the increased availability of capable computer systems are making that method a good tool to simulate phenomena of a thermal-hydraulic nature in nuclear reactors. However, since there are no neutron kinetics models available in commercial CFD codes to the present day, the application of CFD in nuclear reactor safety analyses is still limited. The present work proposes the implementation of the point kinetics model (PKM) in ANSYS Fluent to predict the neutronic behaviour in a Westinghouse Sequoyah nuclear reactor, coupled with the phenomena of heat conduction in the rod and thermal-hydraulics in the cooling fluid via the reactivity feedback. Firstly, a mesh convergence and turbulence model study was performed, using the Reynolds-averaged Navier-Stokes method, with a square-arrayed rod bundle featuring a pitch-to-diameter ratio of 1.32. Secondly, simulations using the k-ω SST turbulence model were performed with an axial distribution of the power generation in the fuel to analyse the heat transfer through the gap and cladding, and its influence on the thermal-hydraulic behaviour of the cooling fluid. The wall shear stress distribution for the centre-line rods and the dimensionless velocity were evaluated to validate the model, as well as the influence of the mass flow rate variation on the friction factor. The coupled model enabled a dynamic analysis of the nuclear reactor during events of insertion of reactivity and shutdown of the primary coolant pumps. (author)
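
The point kinetics model the study couples to the CFD solution is a small ODE system for the normalized power and delayed-neutron precursors. A one-delayed-group sketch with generic PWR-like parameters (assumed values, not the Sequoyah data) looks like this:

```python
BETA = 0.0065      # delayed-neutron fraction (assumed)
GEN_TIME = 2.0e-5  # neutron generation time [s] (assumed)
LAM = 0.08         # one-group precursor decay constant [1/s] (assumed)

def point_kinetics(rho, t_end=1.0, dt=1.0e-4):
    """Explicit-Euler integration of one-delayed-group point kinetics:
       dn/dt = ((rho - BETA)/GEN_TIME)*n + LAM*c
       dc/dt = (BETA/GEN_TIME)*n - LAM*c
    with power n normalized to 1 and precursors initially at equilibrium."""
    n = 1.0
    c = BETA / (GEN_TIME * LAM)  # equilibrium precursor level for n = 1
    for _ in range(int(t_end / dt)):
        dn = ((rho - BETA) / GEN_TIME) * n + LAM * c
        dc = (BETA / GEN_TIME) * n - LAM * c
        n += dt * dn
        c += dt * dc
    return n

n_up = point_kinetics(rho=+0.001)    # +100 pcm insertion: power rises
n_down = point_kinetics(rho=-0.001)  # -100 pcm insertion: power falls
```

In the coupled scheme, the fuel and coolant temperatures computed by the CFD solver would feed back into `rho` at every step.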

  2. Elastography as a hybrid imaging technique : coupling with photoacoustics and quantitative imaging

    International Nuclear Information System (INIS)

    Widlak, T.G.

    2015-01-01

    While classical imaging methods, such as ultrasound, computed tomography or magnetic resonance imaging, are well known and mathematically understood, a host of physiological parameters relevant for diagnostic purposes cannot be obtained by them. This gap is recently being closed by the introduction of hybrid, or coupled-physics, imaging methods. They connect more than one physical modality and aim to provide quantitative information on optical, electrical or mechanical parameters with high resolution. Central to this thesis is the mechanical contrast of elastic tissue, especially Young's modulus or the shear modulus. Different methods of qualitative elastography provide interior information on the mechanical displacement field. From this interior data, the nonlinear inverse problem of quantitative elastography aims to reconstruct the shear modulus. In this thesis, the elastography problem is seen from a hybrid imaging perspective; methods from the coupled-physics literature and regularization theory have been employed to recover displacement and shear modulus information. The overdetermined systems approach by G. Bal is applied to the quantitative problem, and ellipticity criteria are deduced, for one and several measurements, as well as injectivity results. Together with the geometric theory of G. Chavent, the results are used to analyze the convergence of Tikhonov regularization. A convergence analysis for the Levenberg-Marquardt method is also provided. As a second mainstream project in this thesis, elastography imaging is developed for extracting displacements from photoacoustic images. A novel method is provided for texturizing the images, and the optical flow problem for motion estimation is shown to be regularized by this texture generation. The results are tested in cooperation with the Medical University of Vienna, and the methods for quantitative determination of the shear modulus are evaluated in first experiments.
In summary, the overdetermined systems

  3. [Study of generational risk in deafness-afflicted couples using a deafness gene microarray technique].

    Science.gov (United States)

    Wang, Ping; Zhao, Jia; Yu, Shu-yuan; Jin, Peng; Zhu, Wei; DU, Bo

    2011-06-01

    To explore the significance of screening for deafness-related gene mutations in deaf-mute families using a DNA microarray. A total of 52 deaf-mute couples were recruited from the Changchun deaf-mute community, with an average age of (58.3 ± 6.7) years (mean ± s). Blood samples were obtained with informed consent. Genomic DNA was extracted from peripheral blood and PCR was performed. Nine hot-spot mutations in the four most common deafness-related genes were examined with the DNA microarray, including GJB2, GJB3, PDS and mtDNA 12S rRNA. The results were verified with traditional sequencing methods. Fifty normal people served as a control group. All patients were diagnosed with non-syndromic sensorineural hearing loss by subjective pure tone audiometry. Thirty-two of 104 cases carried GJB2 gene mutations (30.7%); the mutation sites included 35delG, 176del16, 235delC and 299delAT. Eighteen of the 32 GJB2 mutation cases were 235delC (59.1%). Seven of 104 cases carried the SLC26A4 gene IVS7-2 A > G mutation. Questionnaire survey and gene diagnosis revealed that four of the 52 families had deaf offspring (7.6%). When both members of a couple carry the same gene mutation, the risk of deafness in their children is 100%. The results were confirmed with traditional sequencing methods. There is a high risk of deafness in the offspring when a deaf-mute couple plans to have a baby, so DNA microarray screening is important and helpful for avoiding deaf newborns in deaf-mute families.

  4. Design of Large Wind Turbines using Fluid-Structure Coupling Technique

    DEFF Research Database (Denmark)

    Sessarego, Matias

    Aerodynamic and structural dynamic performance analysis of modern wind turbines are routinely carried out in the wind energy field using computational tools known as aero-elastic codes. Most aero-elastic codes use the blade element momentum (BEM) technique to model the rotor aerodynamics......-dimensional viscous-inviscid interactive method, MIRAS, with the dynamics model used in the aero-elastic code FLEX5. Following the development of MIRAS-FLEX, a surrogate optimization methodology using MIRAS alone has been developed for the aerodynamic design of wind-turbine rotors. Designing a rotor using...... a computationally expensive MIRAS instead of an inexpensive BEM code represents a challenge, which is resolved by using the proposed surrogate-based approach. The approach is unique because most aerodynamic wind-turbine rotor design codes use the more common and inexpensive BEM technique. As a verification case...

  5. The coupling of high-speed high resolution experimental data and LES through data assimilation techniques

    Science.gov (United States)

    Harris, S.; Labahn, J. W.; Frank, J. H.; Ihme, M.

    2017-11-01

    Data assimilation techniques can be integrated with time-resolved numerical simulations to improve predictions of transient phenomena. In this study, optimal interpolation and nudging are employed to assimilate high-speed, high-resolution measurements of an inert jet into high-fidelity large-eddy simulations. This experimental data set was chosen because it provides both high spatial and temporal resolution for the three-component velocity field in the shear layer of the jet. Our first objective is to investigate the impact that data assimilation has on the resulting flow field for this inert jet. This is accomplished by determining the region influenced by the data assimilation and the corresponding effect on the instantaneous flow structures. The second objective is to determine optimal weightings for the two data assimilation techniques. The third objective is to investigate how the frequency at which the data are assimilated affects the overall predictions.
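
A nudging update of the kind described above can be sketched in a few lines: the model state is relaxed toward observations wherever measurements exist. The 1-D state, observation mask, and gain value below are illustrative assumptions, not the study's actual jet configuration:

```python
import numpy as np

def nudge(model_state, observations, mask, gain):
    """Relax the model state toward observations wherever data exist.

    mask : boolean array, True where a measurement is available
    gain : nudging coefficient in [0, 1]; 0 keeps the model, 1 replaces it
    """
    nudged = model_state.copy()
    nudged[mask] += gain * (observations[mask] - model_state[mask])
    return nudged

# toy 1-D velocity profile: a biased model corrected by measurements
# that cover only part of the domain (e.g. the jet shear layer)
truth = np.sin(np.linspace(0.0, np.pi, 8))
model = truth + 0.5                      # model carries a constant bias
obs_mask = np.zeros(8, dtype=bool)
obs_mask[2:6] = True                     # observations only in mid-domain
analysis = nudge(model, truth, obs_mask, gain=0.5)
```

In a time-resolved simulation this correction would be applied every assimilation step, so the model is steered toward the data only where and when measurements exist.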

  6. Prospect on general software of Monte Carlo method

    International Nuclear Information System (INIS)

    Pei Lucheng

    1992-01-01

    This is a short paper on the prospect of Monte Carlo general software. The content consists of cluster sampling method, zero variance technique, self-improved method, and vectorized Monte Carlo method

  7. Fast sequential Monte Carlo methods for counting and optimization

    CERN Document Server

    Rubinstein, Reuven Y; Vaisman, Radislav

    2013-01-01

    A comprehensive account of the theory and application of Monte Carlo methods Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the

  8. Improving Air Quality (and Weather) Predictions using Advanced Data Assimilation Techniques Applied to Coupled Models during KORUS-AQ

    Science.gov (United States)

    Carmichael, G. R.; Saide, P. E.; Gao, M.; Streets, D. G.; Kim, J.; Woo, J. H.

    2017-12-01

    Ambient aerosols are important air pollutants with direct impacts on human health and on the Earth's weather and climate systems through their interactions with radiation and clouds. Their role is dependent on their distributions of size, number, phase and composition, which vary significantly in space and time. There remain large uncertainties in simulated aerosol distributions due to uncertainties in emission estimates and in chemical and physical processes associated with their formation and removal. These uncertainties lead to large uncertainties in weather and air quality predictions and in estimates of health and climate change impacts. Despite these uncertainties and challenges, regional-scale coupled chemistry-meteorological models such as WRF-Chem have significant capabilities in predicting aerosol distributions and explaining aerosol-weather interactions. We explore the hypothesis that new advances in on-line, coupled atmospheric chemistry/meteorological models, and new emission inversion and data assimilation techniques applicable to such coupled models, can be applied in innovative ways using current and evolving observation systems to improve predictions of aerosol distributions at regional scales. We investigate the impacts of assimilating AOD from geostationary satellite (GOCI) and surface PM2.5 measurements on predictions of AOD and PM in Korea during KORUS-AQ through a series of experiments. The results suggest assimilating datasets from multiple platforms can improve the predictions of aerosol temporal and spatial distributions.

  9. ARCHERRT - a GPU-based and photon-electron coupled Monte Carlo dose computing engine for radiation therapy: software development and application to helical tomotherapy.

    Science.gov (United States)

    Su, Lin; Yang, Youming; Bednarz, Bryan; Sterpin, Edmond; Du, Xining; Liu, Tianyu; Ji, Wei; Xu, X George

    2014-07-01

    Using the graphical processing units (GPU) hardware technology, an extremely fast Monte Carlo (MC) code ARCHERRT is developed for radiation dose calculations in radiation therapy. This paper describes the detailed software development and testing for three clinical TomoTherapy® cases: the prostate, lung, and head & neck. To obtain clinically relevant dose distributions, phase space files (PSFs) created from optimized radiation therapy treatment plan fluence maps were used as the input to ARCHERRT. Patient-specific phantoms were constructed from patient CT images. Batch simulations were employed to facilitate the time-consuming task of loading large PSFs, and to improve the estimation of statistical uncertainty. Furthermore, two different Woodcock tracking algorithms were implemented and their relative performance was compared. The dose curves of an Elekta accelerator PSF incident on a homogeneous water phantom were benchmarked against DOSXYZnrc. For each of the treatment cases, dose volume histograms and isodose maps were produced from ARCHERRT and the general-purpose code, GEANT4. The gamma index analysis was performed to evaluate the similarity of voxel doses obtained from these two codes. The hardware accelerators used in this study are one NVIDIA K20 GPU, one NVIDIA K40 GPU, and six NVIDIA M2090 GPUs. In addition, to make a fairer comparison of the CPU and GPU performance, a multithreaded CPU code was developed using OpenMP and tested on an Intel E5-2620 CPU. For the water phantom, the depth dose curve and dose profiles from ARCHERRT agree well with DOSXYZnrc. For clinical cases, results from ARCHERRT are compared with those from GEANT4 and good agreement is observed. Gamma index test is performed for voxels whose dose is greater than 10% of maximum dose. For 2%/2mm criteria, the passing rates for the prostate, lung case, and head & neck cases are 99.7%, 98.5%, and 97.2%, respectively. Due to specific architecture of GPU, modified Woodcock tracking algorithm
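
The Woodcock (delta) tracking mentioned above can be illustrated with a minimal 1-D sketch: free flights are sampled with a single majorant cross section, and collisions are accepted as real with probability sigma(x)/sigma_max. The two-region cross sections and slab length are hypothetical, and the actual ARCHERRT implementation is GPU code, not this Python:

```python
import math
import random

def woodcock_track(cross_section, sigma_max, x_max, rng):
    """Delta (Woodcock) tracking through a heterogeneous 1-D slab.

    Flights are sampled with the majorant sigma_max; a tentative
    collision is accepted as real with probability sigma(x)/sigma_max,
    otherwise it is virtual and the particle keeps flying.  Returns the
    position of the first real collision, or None if the particle
    escapes past x_max.
    """
    x = 0.0
    while True:
        x += -math.log(rng.random()) / sigma_max   # flight with majorant
        if x >= x_max:
            return None                            # escaped the slab
        if rng.random() < cross_section(x) / sigma_max:
            return x                               # real collision

# hypothetical two-region phantom: denser material then lighter material
sigma = lambda x: 0.7 if x < 1.0 else 0.2
rng = random.Random(42)
hits = [woodcock_track(sigma, 0.7, 2.0, rng) for _ in range(5000)]
collided = [h for h in hits if h is not None]
```

The appeal for voxelized phantoms is that no geometry boundaries need to be crossed explicitly, which maps well onto GPU threads; the cost is the extra virtual collisions wherever sigma(x) is far below the majorant.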

  10. ARCHERRT – A GPU-based and photon-electron coupled Monte Carlo dose computing engine for radiation therapy: Software development and application to helical tomotherapy

    Science.gov (United States)

    Su, Lin; Yang, Youming; Bednarz, Bryan; Sterpin, Edmond; Du, Xining; Liu, Tianyu; Ji, Wei; Xu, X. George

    2014-01-01

    Purpose: Using the graphical processing units (GPU) hardware technology, an extremely fast Monte Carlo (MC) code ARCHERRT is developed for radiation dose calculations in radiation therapy. This paper describes the detailed software development and testing for three clinical TomoTherapy® cases: the prostate, lung, and head & neck. Methods: To obtain clinically relevant dose distributions, phase space files (PSFs) created from optimized radiation therapy treatment plan fluence maps were used as the input to ARCHERRT. Patient-specific phantoms were constructed from patient CT images. Batch simulations were employed to facilitate the time-consuming task of loading large PSFs, and to improve the estimation of statistical uncertainty. Furthermore, two different Woodcock tracking algorithms were implemented and their relative performance was compared. The dose curves of an Elekta accelerator PSF incident on a homogeneous water phantom were benchmarked against DOSXYZnrc. For each of the treatment cases, dose volume histograms and isodose maps were produced from ARCHERRT and the general-purpose code, GEANT4. The gamma index analysis was performed to evaluate the similarity of voxel doses obtained from these two codes. The hardware accelerators used in this study are one NVIDIA K20 GPU, one NVIDIA K40 GPU, and six NVIDIA M2090 GPUs. In addition, to make a fairer comparison of the CPU and GPU performance, a multithreaded CPU code was developed using OpenMP and tested on an Intel E5-2620 CPU. Results: For the water phantom, the depth dose curve and dose profiles from ARCHERRT agree well with DOSXYZnrc. For clinical cases, results from ARCHERRT are compared with those from GEANT4 and good agreement is observed. Gamma index test is performed for voxels whose dose is greater than 10% of maximum dose. For 2%/2mm criteria, the passing rates for the prostate, lung case, and head & neck cases are 99.7%, 98.5%, and 97.2%, respectively. Due to specific architecture of GPU, modified

  11. An alternative technique for simulating volumetric cylindrical sources in the Morse code utilization

    International Nuclear Information System (INIS)

    Vieira, W.J.; Mendonca, A.G.

    1985-01-01

    In the solution of deep-penetration problems using the Monte Carlo method, calculation techniques and strategies are used in order to increase the particle population in the regions of interest. A common procedure is the coupling of bidimensional calculations, with (r,z) discrete ordinates transformed into source data, and tridimensional Monte Carlo calculations. An alternative technique for this procedure is presented. This alternative proved effective when applied to a sample problem. (F.E.) [pt
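
The coupling step, turning a deterministic (r,z) source map into Monte Carlo starting particles, amounts to sampling cells in proportion to their source strength. The 2x2 weight table below is a toy assumption, not data from the paper:

```python
import random

# hypothetical source map from a 2-D (r,z) discrete-ordinates run:
# source[i][j] is the emission strength in cell (i, j)
source = [[4.0, 2.0],
          [1.0, 1.0]]

def sample_cell(weights, rng):
    """Sample a (row, col) cell index with probability proportional to
    its weight -- the bridge from a deterministic source map to Monte
    Carlo starting particles."""
    flat = [(i, j, w) for i, row in enumerate(weights)
                      for j, w in enumerate(row)]
    total = sum(w for _, _, w in flat)
    u = rng.random() * total
    for i, j, w in flat:
        u -= w
        if u <= 0.0:
            return (i, j)
    return flat[-1][:2]

rng = random.Random(1)
counts = {}
for _ in range(8000):
    ij = sample_cell(source, rng)
    counts[ij] = counts.get(ij, 0) + 1
```

A full coupling would additionally sample a position within the chosen cell and a direction and energy from the tabulated angular flux; this sketch shows only the spatial selection.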

  12. Coupling an analytical description of anti-scatter grids with simulation software of radiographic systems using Monte Carlo code; Couplage d'une methode de description analytique de grilles anti diffusantes avec un logiciel de simulation de systemes radiographiques base sur un code Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Rinkel, J.; Dinten, J.M.; Tabary, J

    2004-07-01

    The use of focused anti-scatter grids on digital radiographic systems with two-dimensional detectors produces acquisitions with a decreased scatter to primary ratio and thus improved contrast and resolution. Simulation software is of great interest in optimizing grid configuration according to a specific application. Classical simulators are based on complete detailed geometric descriptions of the grid. They are accurate but very time consuming since they use Monte Carlo code to simulate scatter within the high-frequency grids. We propose a new practical method which couples an analytical simulation of the grid interaction with a radiographic system simulation program. First, a two dimensional matrix of probability depending on the grid is created offline, in which the first dimension represents the angle of impact with respect to the normal to the grid lines and the other the energy of the photon. This matrix of probability is then used by the Monte Carlo simulation software in order to provide the final scattered flux image. To evaluate the gain of CPU time, we define the increasing factor as the increase of CPU time of the simulation with as opposed to without the grid. Increasing factors were calculated with the new model and with classical methods representing the grid with its CAD model as part of the object. With the new method, increasing factors are shorter by one to two orders of magnitude compared with the second one. These results were obtained with a difference in calculated scatter of less than five percent between the new and the classical method. (authors)
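
The offline probability matrix described above can be sketched as a simple two-dimensional lookup, indexed by impact angle and photon energy. The bin edges and probabilities below are invented for illustration and do not come from the paper:

```python
import bisect
import random

# hypothetical offline table: grid transmission probability for
# scattered photons, rows = impact-angle bins relative to the grid
# lines, columns = photon-energy bins
angle_bins  = [5.0, 10.0]     # upper edges in degrees; 3rd bin = above 10
energy_bins = [80.0, 120.0]   # upper edges in keV;     3rd bin = above 120
prob = [[0.70, 0.80, 0.85],
        [0.40, 0.55, 0.65],
        [0.10, 0.20, 0.30]]

def transmitted(angle_deg, energy_kev, rng):
    """Decide whether a photon passes the grid with a table lookup
    instead of tracking it through the high-frequency septa."""
    i = bisect.bisect_left(angle_bins, angle_deg)
    j = bisect.bisect_left(energy_bins, energy_kev)
    return rng.random() < prob[i][j]

# sampling a near-normal 50 keV photon should reproduce prob[0][0] = 0.70
rng = random.Random(9)
frac = sum(transmitted(2.0, 50.0, rng) for _ in range(5000)) / 5000
```

Because the table is built once offline, each photon costs one lookup and one random number at run time, which is where the one-to-two-orders-of-magnitude speedup over tracking the CAD grid model comes from.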

  13. Monte Carlo theory and practice

    International Nuclear Information System (INIS)

    James, F.

    1987-01-01

    Historically, the first large-scale calculations to make use of the Monte Carlo method were studies of neutron scattering and absorption, random processes for which it is quite natural to employ random numbers. Such calculations, a subset of Monte Carlo calculations, are known as direct simulation, since the 'hypothetical population' of the narrower definition above corresponds directly to the real population being studied. The Monte Carlo method may be applied wherever it is possible to establish equivalence between the desired result and the expected behaviour of a stochastic system. The problem to be solved may already be of a probabilistic or statistical nature, in which case its Monte Carlo formulation will usually be a straightforward simulation, or it may be of a deterministic or analytic nature, in which case an appropriate Monte Carlo formulation may require some imagination and may appear contrived or artificial. In any case, the suitability of the method chosen will depend on its mathematical properties and not on its superficial resemblance to the problem to be solved. The author shows how Monte Carlo techniques may be compared with other methods of solution of the same physical problem

  14. Applications of Monte Carlo method in Medical Physics

    International Nuclear Information System (INIS)

    Diez Rios, A.; Labajos, M.

    1989-01-01

    The basic ideas of Monte Carlo techniques are presented. Random numbers and their generation by congruential methods, which underlie Monte Carlo calculations, are shown. Monte Carlo techniques to solve integrals are discussed. The evaluation of a simple one-dimensional integral with a known answer by means of two different Monte Carlo approaches is discussed. The basic principles of simulating photon histories on a computer and reducing variance, together with current applications in Medical Physics, are commented on. (Author)
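
The two classic approaches to a one-dimensional integral with a known answer can be sketched directly; the test integrand below (x squared on [0, 1], exact value 1/3) is an assumed example, not the one from the article:

```python
import random

def sample_mean(f, a, b, n, rng):
    """Crude (sample-mean) Monte Carlo: (b - a) times the average of f
    at n uniformly distributed random points."""
    return (b - a) * sum(f(rng.uniform(a, b)) for _ in range(n)) / n

def hit_or_miss(f, a, b, f_max, n, rng):
    """Hit-or-miss Monte Carlo: the fraction of random points falling
    under the curve times the area of the bounding box."""
    hits = sum(rng.uniform(0.0, f_max) < f(rng.uniform(a, b))
               for _ in range(n))
    return (b - a) * f_max * hits / n

# known answer: integral of x^2 over [0, 1] equals 1/3
f = lambda x: x * x
rng = random.Random(0)
est_mean = sample_mean(f, 0.0, 1.0, 20000, rng)
est_hit = hit_or_miss(f, 0.0, 1.0, 1.0, 20000, rng)
```

For a fixed sample count, the sample-mean estimator generally has lower variance than hit-or-miss, which is one of the simplest illustrations of variance reduction.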

  15. Techniques to reduce memory requirements for coupled photon-electron transport

    International Nuclear Information System (INIS)

    Turcksin, Bruno; Ragusa, Jean; Morel, Jim

    2011-01-01

    In this work, we present two methods to decrease memory needs while solving the photon-electron transport equation. The coupled transport of electrons and photons is of importance in radiotherapy because it describes the interactions of X-rays with matter. One of the issues of discretized electron transport is that the electron scattering is highly forward peaked. A common approximation is to represent the peak in the scattering cross section by a Dirac distribution. This is convenient, but the integration over all angles of this distribution requires the use of Galerkin quadratures. By construction, these quadratures impose that the number of flux moments be equal to the number of directions (number of angular fluxes), which is very demanding in terms of memory. In this study, we show that even if the number of moments is not as large as the number of directions, an accurate solution can be obtained when using Galerkin quadratures. Another method to decrease the memory needs involves choosing an appropriate reordering of the energy groups. We show in this paper that an appropriate alternation of photon/electron groups allows one to rewrite one transport problem of n groups as gcd successive transport problems of n/gcd groups, where gcd is the greatest common divisor of the number of photon groups and the number of electron groups. (author)
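
The group-reordering argument can be made concrete. Assuming, for illustration, 4 photon and 6 electron groups, gcd = 2, so a single 10-group problem splits into two successive 5-group problems:

```python
from math import gcd

def interleaved_blocks(n_photon, n_electron):
    """Reorder photon groups p0..p{n_photon-1} and electron groups
    e0..e{n_electron-1} so that one coupled n-group transport problem
    splits into gcd(n_photon, n_electron) successive smaller problems
    of n/gcd groups each."""
    g = gcd(n_photon, n_electron)
    p_per, e_per = n_photon // g, n_electron // g
    return [[f"p{k * p_per + i}" for i in range(p_per)] +
            [f"e{k * e_per + i}" for i in range(e_per)]
            for k in range(g)]

# 4 photon + 6 electron groups: gcd = 2, so one 10-group problem
# becomes two successive 5-group problems
blocks = interleaved_blocks(4, 6)
```

Only one block of groups needs to be held in memory at a time, which is the source of the claimed memory saving.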

  16. Pipe Wall Thickness Monitoring Using Dry-Coupled Ultrasonic Waveguide Technique

    International Nuclear Information System (INIS)

    Cheong, Yong Moo; Kim, Ha Nam; Kim, Hong Pyo

    2012-01-01

    In order to monitor corrosion or FAC (Flow Accelerated Corrosion) in a pipe, there is a need to measure pipe wall thickness at high temperature. Ultrasonic thickness gauging is the most commonly used non-destructive testing technique for wall thickness measurement. However, commonly available ultrasonic transducers cannot withstand high temperatures, such as above 200 °C. It is therefore necessary to carry out manual measurements during plant shutdowns. The current method thus has several disadvantages: inspections have to be performed during shutdowns, with the possible consequences of prolonging downtime and increasing production losses; insulation has to be removed and replaced for each manual measurement; and scaffolding has to be installed in inaccessible areas, resulting in considerable cost for interventions. It has been suggested that a structural health monitoring approach with permanently installed ultrasonic thickness gauges could have substantial benefits over current practices. The main reasons why conventional piezoelectric ultrasonic transducers cannot be used at high temperatures are that the piezo-ceramic becomes depolarized at temperatures above the Curie temperature and that differential thermal expansion of the substrate, couplant, and piezoelectric materials causes failure. In this paper, a shear horizontal waveguide technique for wall thickness monitoring at high temperature is investigated. Two different designs for coupling the transducer to the strip waveguide are shown, and the quality of the output signals is compared and reviewed. Following the successful acquisition of high-quality ultrasonic signals, an experiment on wall thickness monitoring at high temperature is planned

  17. ARCHERRT – A GPU-based and photon-electron coupled Monte Carlo dose computing engine for radiation therapy: Software development and application to helical tomotherapy

    International Nuclear Information System (INIS)

    Su, Lin; Du, Xining; Liu, Tianyu; Ji, Wei; Xu, X. George; Yang, Youming; Bednarz, Bryan; Sterpin, Edmond

    2014-01-01

    Purpose: Using the graphical processing units (GPU) hardware technology, an extremely fast Monte Carlo (MC) code ARCHER RT is developed for radiation dose calculations in radiation therapy. This paper describes the detailed software development and testing for three clinical TomoTherapy® cases: the prostate, lung, and head and neck. Methods: To obtain clinically relevant dose distributions, phase space files (PSFs) created from optimized radiation therapy treatment plan fluence maps were used as the input to ARCHER RT . Patient-specific phantoms were constructed from patient CT images. Batch simulations were employed to facilitate the time-consuming task of loading large PSFs, and to improve the estimation of statistical uncertainty. Furthermore, two different Woodcock tracking algorithms were implemented and their relative performance was compared. The dose curves of an Elekta accelerator PSF incident on a homogeneous water phantom were benchmarked against DOSXYZnrc. For each of the treatment cases, dose volume histograms and isodose maps were produced from ARCHER RT and the general-purpose code, GEANT4. The gamma index analysis was performed to evaluate the similarity of voxel doses obtained from these two codes. The hardware accelerators used in this study are one NVIDIA K20 GPU, one NVIDIA K40 GPU, and six NVIDIA M2090 GPUs. In addition, to make a fairer comparison of the CPU and GPU performance, a multithreaded CPU code was developed using OpenMP and tested on an Intel E5-2620 CPU. Results: For the water phantom, the depth dose curve and dose profiles from ARCHER RT agree well with DOSXYZnrc. For clinical cases, results from ARCHER RT are compared with those from GEANT4 and good agreement is observed. Gamma index test is performed for voxels whose dose is greater than 10% of maximum dose. For 2%/2mm criteria, the passing rates for the prostate, lung case, and head and neck cases are 99.7%, 98.5%, and 97.2%, respectively. Due to specific architecture of GPU

  18. Sardine (Sardina pilchardus) larval dispersal in the Iberian upwelling system, using coupled biophysical techniques

    Science.gov (United States)

    Santos, A. M. P.; Nieblas, A.-E.; Verley, P.; Teles-Machado, A.; Bonhommeau, S.; Lett, C.; Garrido, S.; Peliz, A.

    2018-03-01

    The European sardine (Sardina pilchardus) is the most important small pelagic fishery of the Western Iberia Upwelling Ecosystem (WIUE). Recently, recruitment of this species has declined due to changing environmental conditions. Furthermore, controversies exist regarding its population structure, with barriers thought to exist between the Atlantic-Iberian Peninsula, Northern Africa, and the Mediterranean. Few studies have investigated the transport and dispersal of sardine eggs and larvae off Iberia and the subsequent impact on larval recruitment variability. Here, we examine these issues using a Regional Ocean Modeling System climatology (1989-2008) coupled to the Lagrangian transport model, Ichthyop. Using biological parameters from the literature, we conduct simulations that investigate the effects of spawning patchiness, diel vertical migration behaviors, and egg buoyancy on the transport and recruitment of virtual sardine ichthyoplankton on the continental shelf. We find that release area, release depth, and month of release all significantly affect recruitment. Patchiness has no effect, and diel vertical migration causes slightly lower recruitment. Egg buoyancy effects are significant and act similarly to depth of release. As with other studies, we find that recruitment peaks vary by latitude, explained here by the seasonal variability of offshore transport. We find weak, continuous alongshore transport between release areas, though a large proportion of simulated ichthyoplankton are transported north to the Cantabrian coast (up to 27%). We also show low-level transport into Morocco (up to 1%) and the Mediterranean (up to 8%). The high proportion of local retention and the low but consistent alongshore transport support the idea of a series of metapopulations along this coast.

  19. Quantum Monte Carlo studies in Hamiltonian lattice gauge theory

    International Nuclear Information System (INIS)

    Hamer, C.J.; Samaras, M.; Bursill, R.J.

    2000-01-01

    Full text: The application of Monte Carlo methods to the 'Hamiltonian' formulation of lattice gauge theory has been somewhat neglected, and lags at least ten years behind the classical Monte Carlo simulations of Euclidean lattice gauge theory. We have applied a Green's Function Monte Carlo algorithm to lattice Yang-Mills theories in the Hamiltonian formulation, combined with a 'forward-walking' technique to estimate expectation values and correlation functions. In this approach, one represents the wave function in configuration space by a discrete ensemble of random walkers, and application of the time development operator is simulated by a diffusion and branching process. The approach has been used to estimate the ground-state energy and Wilson loop values in the U(1) theory in (2+1)D, and the SU(3) Yang-Mills theory in (3+1)D. The finite-size scaling behaviour has been explored, and agrees with the predictions of effective Lagrangian theory, and weak-coupling expansions. Crude estimates of the string tension are derived, which agree with previous results at intermediate couplings; but more accurate results for larger loops will be required to establish scaling behaviour at weak couplings. A drawback to this method is that it is necessary to introduce a 'trial' or 'guiding wave function' to guide the walkers towards the most probable regions of configuration space, in order to achieve convergence and accuracy. The 'forward-walking' estimates should be independent of this guidance, but in fact for the SU(3) case they turn out to be sensitive to the choice of trial wave function. It would be preferable to use some sort of Metropolis algorithm instead to produce a correct distribution of walkers: this may point in the direction of a Path Integral Monte Carlo approach
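
The diffusion-and-branching idea behind Green's Function Monte Carlo can be sketched for the simplest possible case, a single particle in a harmonic well, where the exact ground-state energy is 0.5 in natural units. This toy has no gauge fields, trial wave function, or forward walking, all of which the actual study requires; it only illustrates how a walker ensemble projects out the ground state:

```python
import math
import random

def dmc_ground_energy(n_target=400, n_steps=800, dt=0.01, seed=11):
    """Toy diffusion Monte Carlo for V(x) = x^2/2 (exact E0 = 0.5).

    Walkers diffuse (kinetic term) and branch with weight
    exp(-dt*(V - e_ref)) (potential term); e_ref is adjusted to keep
    the population near n_target.  With a constant trial function the
    local energy is just V(x), so the ground-state energy is read off
    as the walker average of 0.5*x^2."""
    rng = random.Random(seed)
    walkers = [rng.uniform(-1.0, 1.0) for _ in range(n_target)]
    e_ref, samples = 0.5, []
    for step in range(n_steps):
        survivors = []
        for x in walkers:
            x += rng.gauss(0.0, math.sqrt(dt))             # diffusion
            w = math.exp(-dt * (0.5 * x * x - e_ref))      # branching weight
            survivors.extend([x] * int(w + rng.random()))  # 0, 1 or 2 copies
        walkers = survivors or [0.0]
        e_ref += 0.05 * math.log(n_target / len(walkers))  # population control
        if step >= n_steps // 2:                           # after equilibration
            samples.append(sum(0.5 * x * x for x in walkers) / len(walkers))
    return sum(samples) / len(samples)

e0_estimate = dmc_ground_energy()
```

The guiding-wave-function issue raised in the abstract shows up even here: without importance sampling the walker population carries large weight fluctuations, which is exactly what a good trial function suppresses.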

  20. Contribution of analytical techniques coupled to the knowledge of the uranium speciation in natural conditions

    International Nuclear Information System (INIS)

    Petit, J.

    2009-06-01

    Understanding the transport mechanisms and behaviour of radionuclides in the bio-geosphere is necessary to evaluate the health and environmental risks of the nuclear industry. These mechanisms are governed by radioelement speciation, namely the distribution between their different physico-chemical forms in the environment. From this perspective, this PhD thesis deals with uranium speciation in a natural environment. A detailed summary of uranium biogeochemistry has been written, which allows the PhD issue to be restricted to uranium complexation with oxalic acid, a hydrophilic organic acid with good binding properties that is ubiquitous in soil waters. Building speciation diagrams from literature stability constants has allowed the analytical conditions for complex formation to be defined. The chosen analytical technique is the hyphenation of a separative technique (liquid chromatography, LC, or capillary electrophoresis, CE) with mass spectrometry (ICPMS). The presence of the studied complexes in the synthetic samples has been confirmed by UV/visible spectrophotometry. LC-ICPMS analyses have proved the lability of the uranyl-organic acid complexes, namely their tendency to dissociate during analysis, which prevents the study of uranium speciation. The CE-ICPMS study of labile metal-ligand complexes has been made possible by affinity capillary electrophoresis, which enables stability constants and electrophoretic mobilities to be determined. This PhD thesis has allowed the different mathematical treatments of the binding isotherm to be compared, taking ionic strength and the real ligand concentration into account. Affinity CE has been applied successfully to the lanthanum-oxalate (model) and uranium-oxalate systems. The results obtained have been applied to a real system (situated in Le Bouchet), showing the contribution of the developed method to the modelling of uranium speciation. (author)

  1. Using adaptive neuro-fuzzy inference system technique for crosstalk correction in simultaneous {sup 99m}Tc/{sup 201}Tl SPECT imaging: A Monte Carlo simulation study

    Energy Technology Data Exchange (ETDEWEB)

    Heidary, Saeed, E-mail: saeedheidary@aut.ac.ir; Setayeshi, Saeed, E-mail: setayesh@aut.ac.ir

    2015-01-11

    This work presents a Monte Carlo simulation-based study which uses two adaptive neuro-fuzzy inference systems (ANFIS) for cross-talk compensation of simultaneous {sup 99m}Tc/{sup 201}Tl dual-radioisotope SPECT imaging. We have compared two neuro-fuzzy systems based on fuzzy c-means (FCM) and subtractive (SUB) clustering. Our approach incorporates eight energy-window image acquisitions from 28 keV to 156 keV and the two main photopeaks of {sup 201}Tl (77±10% keV) and {sup 99m}Tc (140±10% keV). The Geant4 application in emission tomography (GATE) is used as a Monte Carlo simulator for three cylindrical phantoms and a NURBS Based Cardiac Torso (NCAT) phantom study. Three separate acquisitions, two single-isotope and one dual-isotope, were performed in this study. Cross-talk and scatter corrected projections are reconstructed by an iterative ordered-subsets expectation maximization (OSEM) algorithm which models the non-uniform attenuation in the projection/back-projection. ANFIS-FCM/SUB structures are tuned to create three to sixteen fuzzy rules for modeling the photon cross-talk of the two radioisotopes. Applying seven to nine fuzzy rules yields the best overall improvement in contrast and bias. ANFIS-FCM is found to outperform ANFIS-SUB owing to its faster convergence and more accurate results.
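
The fuzzy c-means clustering underlying the ANFIS-FCM variant can be sketched as follows. Unlike hard k-means, every sample receives a graded membership in every cluster; the 1-D toy data and parameter values here are assumptions for illustration, not SPECT projection data:

```python
import numpy as np

def fuzzy_c_means(data, n_clusters, m=2.0, n_iter=50, seed=0):
    """Bare-bones fuzzy c-means on 1-D data: iterate between weighted
    cluster centers and membership degrees (each row of u sums to 1).
    m > 1 is the fuzzifier controlling how soft the memberships are."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(data), n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        w = u ** m
        centers = (w.T @ data) / w.sum(axis=0)          # weighted means
        d = np.abs(data[:, None] - centers[None, :]) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))                   # closeness weights
        u = inv / inv.sum(axis=1, keepdims=True)        # new memberships
    return centers, u

data = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
centers, u = fuzzy_c_means(data, 2)
```

In an ANFIS-FCM setup, the cluster centers found this way seed the premise membership functions of the fuzzy rules, so the number of clusters directly controls the number of rules.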

  2. Direct analysis of ultra-trace semiconductor gas by inductively coupled plasma mass spectrometry coupled with gas to particle conversion-gas exchange technique.

    Science.gov (United States)

    Ohata, Masaki; Sakurai, Hiromu; Nishiguchi, Kohei; Utani, Keisuke; Günther, Detlef

    2015-09-03

    Inductively coupled plasma mass spectrometry (ICPMS) coupled with a gas-to-particle conversion/gas-exchange technique was applied to the direct analysis of ultra-trace semiconductor gases in ambient air. Ultra-trace semiconductor gases such as arsine (AsH3) and phosphine (PH3) were converted to particles by reaction with ozone (O3) and ammonia (NH3) gases within a gas to particle conversion device (GPD). The converted particles were directly introduced into and measured by ICPMS through a gas exchange device (GED), which transmits the particles while exchanging unreacted gases (such as air) and residual O3 and NH3 for Ar. The particle size distribution of the converted particles was measured by a scanning mobility particle sizer (SMPS), and the results supported the interpretation of particle agglomeration between the particles converted from semiconductor gas and the particles of ammonium nitrate (NH4NO3), which was produced as the major particle species in the GPD. Stable time-resolved signals from AsH3 and PH3 in air were obtained by GPD-GED-ICPMS with continuous gas introduction; however, a slightly larger fluctuation was observed compared to that of metal carbonyl gas in Ar introduced directly into the ICPMS, which could be due to the ionization fluctuation of the particles in the ICP. Linear regression lines were obtained, and limits of detection (LODs) of 1.5 pL L(-1) for AsH3 and 2.4 nL L(-1) for PH3 were estimated. Since these LODs are well below the concentrations the semiconductor industry requires to be measured (0.5 nL L(-1) for AsH3 and 30 nL L(-1) for PH3), GPD-GED-ICPMS could be useful for the direct and highly sensitive analysis of ultra-trace semiconductor gases in air. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Rapid nuclear forensics analysis via laser based microphotonic techniques coupled with chemometrics

    International Nuclear Information System (INIS)

    Bhatta, B.; Kalambuka, H.A.; Dehayem-Kamadjeu, A.

    2017-01-01

    Nuclear forensics (NF) is an important tool for analysis and attribution of nuclear and radiological materials (NRM) in support of nuclear security. The critical challenge in NF currently is the lack of suitable microanalytical methodologies for direct, rapid and minimally-invasive detection and quantification of NF signatures. Microphotonic techniques can achieve this task particularly when the materials are of limited size and under concealed condition. The purpose of this paper is to demonstrate the combined potential of chemometrics enabled LIBS and laser Raman spectromicroscopy (LRS) for rapid NF analysis and attribution. Using LIBS, uranium lines at 385.464 nm, 385.957 nm and 386.592 nm were identified as NF signatures in uranium ore surrogates. A multivariate calibration strategy using artificial neural network was developed for quantification of trace uranium. Principal component analysis (PCA) of LIBS spectra achieved source attribution of the ores. LRS studies on UCl3, UO3(NO3)2.6H2O, UO2SO4.3H2O and UO3 in pellet state identified the bands associated with different uranium molecules as varying in the range of (840 to 867) ± 15 cm-1. Using this signature, we have demonstrated spectral imaging of uranium under concealed conditions (author)
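
The PCA-based source attribution step can be sketched with synthetic data: spectra from two sources with distinct line-intensity patterns separate cleanly along the leading principal component. The two "ore source" intensity vectors and noise level below are invented for illustration, not LIBS measurements from the study:

```python
import numpy as np

def pca_scores(spectra, n_components=2):
    """Project mean-centred spectra onto their leading principal
    components via SVD -- the unsupervised step used to group samples
    by source."""
    x = spectra - spectra.mean(axis=0)
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return x @ vt[:n_components].T

# two hypothetical "ore sources" with distinct line-intensity patterns
rng = np.random.default_rng(3)
src_a = np.array([1.0, 0.2, 0.5, 0.1])
src_b = np.array([0.2, 1.0, 0.1, 0.5])
spectra = np.vstack(
    [src_a + 0.02 * rng.standard_normal(4) for _ in range(5)] +
    [src_b + 0.02 * rng.standard_normal(4) for _ in range(5)])
scores = pca_scores(spectra)
```

Real LIBS spectra have thousands of channels rather than four, but the attribution logic is the same: samples from the same source cluster together in the low-dimensional score space.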

  4. Characterization of Old Nuclear Waste Packages Coupling Photon Activation Analysis and Complementary Non-Destructive Techniques

    International Nuclear Information System (INIS)

    Carrel, Frederick; Coulon, Romain; Laine, Frederic; Normand, Stephane; Sari, Adrien; Charbonnier, Bruno; Salmon, Corine

    2013-06-01

    Radiological characterization of nuclear waste packages is an industrial issue for selecting the best mode of storage. The characterization becomes crucial particularly for waste packages produced at the beginning of the French nuclear industry. For the latter, available information is often incomplete and some key parameters are sometimes missing (content of the package, alpha-activity, fissile mass...) In this case, the use of non-destructive methods, both passive and active, is an appropriate solution for characterizing nuclear waste packages and obtaining all the information of interest. In this article, we present the results of a complete characterization carried out on the TE 1060 block, a nuclear waste package produced during the 1960s in Saclay. This characterization is part of the DEMSAC (Dismantling of Saclay's facilities) project (ICPE part). It was carried out in the SAPHIR facility, located in Saclay and housing a linear electron accelerator. This work demonstrates the great value of active methods (photon activation analysis and high-energy imaging) where passive techniques encounter severe limitations. (authors)

  5. Inductively coupled plasma-atomic emission spectrometry: analytical assessment of the technique at the beginning of the 90's

    International Nuclear Information System (INIS)

    Sanz-Medel, A.

    1991-01-01

    The main application of the inductively coupled plasma (ICP) today is in atomic emission spectroscopy (AES), as an excitation spectrochemical source, although uses of an ICP for fluorescence, as just an atomizer, and especially for mass spectrometry, as an ionization source, have been rocketing in recent years. Since its inception, only a quarter of a century ago, ICP-AES has rapidly evolved into one of the preferred routine analytical techniques for the convenient determination of many elements with high speed, at low levels and in the most varied samples. Perhaps its comparatively high kinetic temperature (capable of atomizing virtually every compound of any sample), its high excitation and ionization temperatures, and its favourable spatial structure are at the core of the ICP's success. By now, ICP-AES can be considered to have achieved maturity, in that a huge number of analytical problems can be tackled with this technique, while no major or fundamental changes have been adopted for several years. Despite this fact, important driving forces are still in operation to further improve ICP-AES sensitivity, selectivity, precision, sample throughput, etc. Moreover, proposals to extend the scope of the technique to traditionally elusive fields (e.g. non-metals and organic compound analysis) are also appearing in the recent literature. In this paper the 'state of the art', the latest developments and the prospects for circumventing the limitations of ICP-AES (in the light of literature data and personal experience) are reviewed. (author)

  6. [Multi-channel in vivo recording techniques: analysis of phase coupling between spikes and rhythmic oscillations of local field potentials].

    Science.gov (United States)

    Wang, Ce-Qun; Chen, Qiang; Zhang, Lu; Xu, Jia-Min; Lin, Long-Nian

    2014-12-25

    The purpose of this article is to introduce measurements of phase coupling between spikes and rhythmic oscillations of local field potentials (LFPs). Multi-channel in vivo recording techniques allow us to record ensemble neuronal activity and LFPs simultaneously from the same sites in the brain. Neuronal activity is generally characterized by temporal spike sequences, while LFPs contain oscillatory rhythms in different frequency ranges. Phase coupling analysis can reveal the temporal relationships between neuronal firing and LFP rhythms. As the first step, the instantaneous phase of an LFP rhythm is calculated using the Hilbert transform; then, for each time-stamped spike occurring during an oscillatory epoch, the instantaneous phase of the LFP at that time stamp is marked. Finally, the phase relationship between neuronal firing and the LFP rhythm is determined by examining the distribution of the firing phases. Phase-locked spikes are revealed by a non-random distribution of spike phases. Theta phase precession is a unique phase relationship between neuronal firing and LFPs, and one of the basic features of hippocampal place cells. Place cells show rhythmic burst firing following the theta oscillation within a place field, and phase precession refers to the systematic shift of this burst firing during traversal of the field, moving progressively forward on each theta cycle. This relation between phase and position can be described by a linear model, and phase precession is commonly quantified with a circular-linear coefficient. Phase coupling analysis helps us to better understand temporal information coding between neuronal firing and LFPs.
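
The first analysis step described above can be sketched with NumPy and SciPy; the theta band edges and the 8 Hz test rhythm below are illustrative choices, not values from the article:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def spike_phases(lfp, spike_times, fs, band=(6.0, 10.0)):
    """Instantaneous phase of the band-passed LFP at each spike time.

    lfp: 1-D LFP trace; spike_times: time stamps in seconds;
    fs: sampling rate in Hz; band: passband in Hz (theta here).
    """
    # Band-pass the LFP to isolate the rhythm of interest (zero-phase filter).
    b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    rhythm = filtfilt(b, a, lfp)
    # The angle of the analytic signal is the instantaneous phase.
    phase = np.angle(hilbert(rhythm))
    # Look up the phase at each spike's sample index.
    idx = np.clip((np.asarray(spike_times) * fs).astype(int), 0, len(lfp) - 1)
    return phase[idx]

def resultant_length(phases):
    """Mean resultant length: 1 = perfectly phase-locked, 0 = uniform."""
    return np.abs(np.mean(np.exp(1j * np.asarray(phases))))
```

A non-random (clustered) spike-phase distribution then shows up as a resultant length well above that expected for uniform phases.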

  7. Imprecision of dose predictions for radionuclides released to the atmosphere: an application of the Monte Carlo-simulation-technique for iodine transported via the pasture-cow-milk pathway

    International Nuclear Information System (INIS)

    Schwarz, G.; Hoffman, F.O.

    1979-01-01

    The shortcomings of using mathematical models to determine compliance with regulatory standards are discussed, and methods to determine the reliability of radiation assessment models are presented. Since field testing studies are impractical, a method that analyzes the variability of the input parameters and the impact of that variability on the predicted dose is used instead. The Monte Carlo technique is one such method. It is based on the statistical properties of the model output when the input parameters inserted into the model are selected at random from prescribed distributions. The one major assumption that must be made is that the model is a correct formulation of reality. The Gaussian plume model for atmospheric transport of airborne effluents was used to study the pasture-cow-milk-man exposure pathway and the dose calculated from radioiodine (¹³¹I) transported over this pathway
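
The core of the technique — drawing each input parameter from a prescribed distribution and examining the statistics of the model output — can be sketched as follows. The toy transfer chain and the lognormal parameters are invented for illustration, not taken from the report:

```python
import numpy as np

rng = np.random.default_rng(0)

def milk_dose(deposition, feed_transfer, milk_intake, dose_factor):
    """Toy multiplicative transfer chain: deposition -> feed -> milk -> dose."""
    return deposition * feed_transfer * milk_intake * dose_factor

n = 100_000
# Each input is sampled at random from its prescribed distribution
# (illustrative lognormals here, a common choice for transfer factors).
samples = milk_dose(
    deposition=rng.lognormal(mean=0.0, sigma=0.5, size=n),
    feed_transfer=rng.lognormal(mean=-1.0, sigma=0.4, size=n),
    milk_intake=rng.lognormal(mean=0.0, sigma=0.2, size=n),
    dose_factor=rng.lognormal(mean=0.0, sigma=0.3, size=n),
)

# The spread of the output distribution quantifies the imprecision of the
# predicted dose induced by input-parameter variability.
p50, p97 = np.percentile(samples, [50.0, 97.5])
```

The ratio of the upper percentile to the median is one common summary of prediction imprecision; it grows with the combined variability of the inputs.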

  8. Imprecision of dose predictions for radionuclides released to the atmosphere: an application of the Monte Carlo-simulation-technique for iodine transported via the pasture-cow-milk pathway

    Energy Technology Data Exchange (ETDEWEB)

    Schwarz, G.; Hoffman, F.O.

    1979-01-01

    The shortcomings of using mathematical models to determine compliance with regulatory standards are discussed, and methods to determine the reliability of radiation assessment models are presented. Since field testing studies are impractical, a method that analyzes the variability of the input parameters and the impact of that variability on the predicted dose is used instead. The Monte Carlo technique is one such method. It is based on the statistical properties of the model output when the input parameters inserted into the model are selected at random from prescribed distributions. The one major assumption that must be made is that the model is a correct formulation of reality. The Gaussian plume model for atmospheric transport of airborne effluents was used to study the pasture-cow-milk-man exposure pathway and the dose calculated from radioiodine (¹³¹I) transported over this pathway. (DMC)

  9. Moisture content prediction in poultry litter using artificial intelligence techniques and Monte Carlo simulation to determine the economic yield from energy use.

    Science.gov (United States)

    Rico-Contreras, José Octavio; Aguilar-Lasserre, Alberto Alfonso; Méndez-Contreras, Juan Manuel; López-Andrés, Jhony Josué; Cid-Chama, Gabriela

    2017-11-01

    The objective of this study is to determine the economic return of poultry litter combustion in boilers to produce bioenergy (thermal and electrical), since this biomass has a high energy potential due to its component elements, using fuzzy logic to predict moisture and identify the high-impact variables. This is carried out using a proposed 7-stage methodology, which includes a statistical analysis of agricultural systems and practices to identify the activities contributing to moisture in poultry litter (for example, broiler chicken management, number of air extractors, and avian population density), and thereby reduce moisture to increase the yield of the combustion process. Estimates of poultry litter production and heating value are made for 4 different moisture contents (scenarios of 25%, 30%, 35%, and 40%), and a risk analysis using Monte Carlo simulation is then proposed to select the best investment alternative and to estimate the environmental impact in terms of greenhouse gas mitigation. The results show that dry poultry litter (25%) is slightly better for combustion, generating 3.20% more energy. However, reducing moisture from 40% to 25% requires considerable economic investment in moisture-reduction equipment; thus, when the financial indicators are calculated, the 40% scenario, which is the current one, is the most attractive. The methodology therefore combines advanced tools for moisture prediction (fuzzy logic) with a representation of the system (Monte Carlo simulation) in which its variability and uncertainty are accurately captured. It is considered generic for any bioenergy generation system, not just the poultry sector, whether based on combustion or another type of technology. Copyright © 2017 Elsevier Ltd. All rights reserved.
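
The Monte Carlo risk-analysis step can be sketched as below. All capital costs, prices and energy figures are hypothetical placeholders (only the +3.2% energy gain for dry litter is taken from the abstract), so the numbers do not reproduce the paper's results:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_scenario(capex, annual_energy_mwh, n=20_000, years=10, rate=0.08):
    """Monte Carlo NPV distribution for one moisture scenario.
    All costs and prices are illustrative placeholders, not the paper's data."""
    price = rng.normal(60.0, 8.0, size=(n, years))                # $/MWh, uncertain
    om = rng.normal(0.15 * capex, 0.02 * capex, size=(n, years))  # O&M cost per year
    cash = price * annual_energy_mwh - om                         # yearly net cash flow
    discount = (1.0 + rate) ** np.arange(1, years + 1)
    return -capex + (cash / discount).sum(axis=1)                 # NPV per trial

# 40% moisture: current setup; 25% moisture: +3.2% energy but extra
# drying-equipment capital cost (hypothetical figures).
npv_40 = simulate_scenario(capex=1.0e6, annual_energy_mwh=5000.0)
npv_25 = simulate_scenario(capex=1.6e6, annual_energy_mwh=5160.0)
```

With these placeholder figures the lower-capex 40% scenario comes out ahead on mean NPV, mirroring the abstract's qualitative conclusion; comparing whole NPV distributions rather than point estimates is what makes this a risk analysis.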

  10. Microextraction Techniques Coupled to Liquid Chromatography with Mass Spectrometry for the Determination of Organic Micropollutants in Environmental Water Samples

    Directory of Open Access Journals (Sweden)

    Mª Esther Torres Padrón

    2014-07-01

    Full Text Available Until recently, sample preparation was carried out using traditional techniques, such as liquid–liquid extraction (LLE), that use large volumes of organic solvents. Solid-phase extraction (SPE) uses much less solvent than LLE, although the volume can still be significant. These preparation methods are expensive, time-consuming and environmentally unfriendly. Recently, a great effort has been made to develop new analytical methodologies able to perform direct analyses using miniaturised equipment, thereby achieving high enrichment factors, minimising solvent consumption and reducing waste. These microextraction techniques improve the performance of sample preparation, particularly for complex environmental water samples such as wastewaters, surface and ground waters, tap waters, and sea and river waters. Liquid chromatography coupled to tandem mass spectrometry (LC/MS/MS) and time-of-flight mass spectrometry (TOF/MS) can be used to analyse a broad range of organic micropollutants. Before these compounds can be separated and detected in environmental samples, the target analytes must be extracted and pre-concentrated to make them detectable. In this work, we review the most recent applications of microextraction preparation techniques for determining organic micropollutants in different environmental water matrices: solid-phase microextraction (SPME), in-tube solid-phase microextraction (IT-SPME), stir bar sorptive extraction (SBSE) and liquid-phase microextraction (LPME). Several groups of compounds are considered organic micropollutants because they are released continuously into the environment; many are considered emerging contaminants, i.e. analytes not covered by existing regulations that are now detected more frequently in different environmental compartments. Pharmaceuticals, surfactants, personal care products and other chemicals are among such micropollutants.

  11. Monte Carlo simulation for soot dynamics

    KAUST Repository

    Zhou, Kun

    2012-01-01

    A new Monte Carlo method, termed Comb-like frame Monte Carlo, is developed to simulate soot dynamics. A detailed stochastic error analysis is provided. Comb-like frame Monte Carlo is coupled with the gas-phase solver Chemkin II to simulate soot formation in a 1-D premixed burner-stabilized flame. The simulated soot number density, volume fraction, and particle size distribution all agree well with the measurements available in the literature. The origin of the bimodal particle size distribution is revealed with quantitative proof.

  12. Monte Carlo method to characterize radioactive waste drums

    International Nuclear Information System (INIS)

    Lima, Josenilson B.; Dellamano, Jose C.; Potiens Junior, Ademar J.

    2013-01-01

    Non-destructive methods for the characterization of radioactive waste drums have been developed in the Waste Management Department (GRR) at the Nuclear and Energy Research Institute IPEN. This study was conducted as part of the radioactive waste characterization program, in order to meet, by gamma spectrometry, the specifications and acceptance criteria for final disposal imposed by regulatory control. One of the main difficulties in the detector calibration process is obtaining the counting efficiencies, which can be solved by the use of mathematical techniques. The aim of this work was to develop a methodology to characterize drums using gamma spectrometry and the Monte Carlo method. Monte Carlo is a widely used mathematical technique that simulates the radiation transport in the medium, thus providing the efficiency calibration of the detector. The equipment used in this work is a heavily shielded hyperpure germanium (HPGe) detector coupled with an electronic setup composed of a high-voltage source, an amplifier and a multiport multichannel analyzer, together with the MCNP software for Monte Carlo simulation. The development of this methodology will allow the characterization of solid radioactive wastes packed in drums and stored at GRR. (author)

  13. Extensions of the MCNP5 and TRIPOLI4 Monte Carlo Codes for Transient Reactor Analysis

    Science.gov (United States)

    Hoogenboom, J. Eduard; Sjenitzer, Bart L.

    2014-06-01

    To simulate reactor transients for safety analysis with the Monte Carlo method, the generation and decay of delayed neutron precursors is implemented in the MCNP5 and TRIPOLI4 general-purpose Monte Carlo codes. Important new variance reduction techniques, like forced decay of precursors in each time interval and the branchless collision method, are included to obtain reasonable statistics for the power production per time interval. For the simulation of practical reactor transients the feedback effect from the thermal-hydraulics must also be included. This requires coupling of the Monte Carlo code with a thermal-hydraulics (TH) code, which provides the temperature distribution in the reactor, affecting the neutron transport via the cross-section data. The TH code also provides the coolant density distribution in the reactor, directly influencing the neutron transport. Different techniques for this coupling are discussed. As a demonstration, a 3x3 mini fuel assembly with a moving control rod is considered for MCNP5, and a mini core consisting of 3x3 PWR fuel assemblies with control rods and burnable poisons for TRIPOLI4. Results are shown for reactor transients due to control rod movement or withdrawal. The TRIPOLI4 transient calculation is started at low power and includes thermal-hydraulic feedback. The power rises by about 10 decades and the reactor power finally stabilises at a much higher level than the initial one. The examples demonstrate that the modified Monte Carlo codes are capable of performing correct transient calculations, taking into account all geometrical and cross-section detail.

  14. Monte Carlo learning/biasing experiment with intelligent random numbers

    International Nuclear Information System (INIS)

    Booth, T.E.

    1985-01-01

    A Monte Carlo learning and biasing technique is described that does its learning and biasing in the random number space rather than the physical phase-space. The technique is probably applicable to all linear Monte Carlo problems, but no proof is provided here. Instead, the technique is illustrated with a simple Monte Carlo transport problem. Problems encountered, problems solved, and speculations about future progress are discussed. 12 refs

  15. Global Monte Carlo Simulation with High Order Polynomial Expansions

    International Nuclear Information System (INIS)

    William R. Martin; James Paul Holloway; Kaushik Banerjee; Jesse Cheatham; Jeremy Conlin

    2007-01-01

    The functional expansion technique (FET) was recently developed for Monte Carlo simulation. The basic idea of the FET is to expand a Monte Carlo tally in terms of a high-order expansion, the coefficients of which can be estimated via the usual random walk process in a conventional Monte Carlo code. If the expansion basis is chosen carefully, the lowest-order coefficient is simply the conventional histogram tally, corresponding to a flat mode. This research project studied the applicability of using the FET to estimate the fission source, from which fission sites can be sampled for the next generation. The idea is that individual fission sites contribute to expansion modes that may span the geometry being considered, possibly increasing the communication across a loosely coupled system and thereby improving convergence over the conventional fission bank approach used in most production Monte Carlo codes. The project examined a number of basis functions, including global Legendre polynomials as well as 'local' piecewise polynomials such as finite element hat functions and higher-order versions. The global FET showed an improvement in convergence over the conventional fission bank approach. The local FET methods showed some advantages over global polynomials in handling geometries with discontinuous material properties. The conventional finite element hat functions had the disadvantage that the expansion coefficients could not be estimated directly but had to be obtained by solving a linear system whose matrix elements were estimated. An alternative fission matrix-based response matrix algorithm was formulated. Studies were made of two alternative applications of the FET, one based on the kernel density estimator and one based on Arnoldi's method of minimized iterations. Preliminary results for both methods indicate improvements in fission source convergence. These developments indicate that the FET has promise for speeding up Monte Carlo fission source convergence
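
The central trick — estimating expansion coefficients as ordinary Monte Carlo tallies — can be illustrated with global Legendre polynomials on [-1, 1]. This is a minimal NumPy sketch, and the linear test density used below is an arbitrary example rather than anything from the project:

```python
import numpy as np
from numpy.polynomial import legendre

def fet_coefficients(samples, order):
    """Estimate Legendre coefficients of a density f on [-1, 1].

    For X ~ f, the coefficient of P_n is a_n = (2n + 1) / 2 * E[P_n(X)],
    so each a_n is just a sample mean of a basis function -- the same
    kind of tally a random walk accumulates in a Monte Carlo code.
    """
    x = np.asarray(samples, dtype=float)
    coeffs = np.empty(order + 1)
    for n in range(order + 1):
        basis = np.zeros(n + 1)
        basis[n] = 1.0  # selects P_n in the Legendre series
        coeffs[n] = (2 * n + 1) / 2.0 * legendre.legval(x, basis).mean()
    return coeffs

def fet_density(coeffs, x):
    """Reconstruct the expanded density at points x."""
    return legendre.legval(np.asarray(x, dtype=float), coeffs)
```

Note that the zeroth coefficient is always 1/2 (the flat mode), matching the abstract's remark that the lowest-order term reduces to the conventional tally.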

  16. Techniques for asynchronous and periodically-synchronous coupling of atmosphere and ocean models. Pt. 1. General strategy and application to the cyclo-stationary case

    Energy Technology Data Exchange (ETDEWEB)

    Sausen, R [Deutsche Forschungsanstalt fuer Luft- und Raumfahrt e.V. (DLR), Wessling (Germany). Inst. fuer Physik der Atmosphaere; Voss, R [Deutsches Klimarechenzentrum (DKRZ), Hamburg (Germany)

    1995-07-01

    Asynchronous and periodically-synchronous schemes for coupling atmosphere and ocean models are presented. The performance of the schemes is tested by simulating the climatic response to a step-function forcing and to a gradually increasing forcing with a simple zero-dimensional non-linear energy balance model. Both the initial transient response and the asymptotic approach to the equilibrium state are studied. If no annual cycle is allowed, the asynchronous coupling technique proves to be a suitable tool. However, if the annual cycle is retained, the periodically-synchronous coupling technique reproduces the results of the synchronously coupled runs with smaller bias. In this case it is important that the total length of one synchronous period plus one ocean-only period is not a multiple of 6 months. (orig.)
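
The flavour of such coupling schemes can be captured with a toy two-box energy balance model: the fast atmosphere is integrated in short bursts with the ocean held fixed, and the slow ocean is then advanced in long ocean-only steps forced by the time-averaged atmospheric state. All constants below are invented for illustration; the paper's actual model is a zero-dimensional non-linear EBM:

```python
# Toy two-box energy balance model: a fast atmosphere coupled to a slow
# ocean. All constants are illustrative, not taken from the paper.
F, LAM, K = 4.0, 1.0, 2.0      # forcing, radiative feedback, coupling strength
CA, CO = 1.0, 100.0            # heat capacities (ocean ~100x atmosphere)

def asynchronous_run(total_time, dt_a=0.01, dt_o=1.0):
    """Asynchronous coupling: integrate the atmosphere in short bursts
    with the ocean temperature held fixed, then advance the ocean with
    one long step forced by the time-averaged atmospheric temperature."""
    ta = to = 0.0
    burst = round(dt_o / dt_a)                 # atmosphere steps per ocean step
    for _ in range(round(total_time / dt_o)):
        ta_sum = 0.0
        for _ in range(burst):                 # atmosphere-only burst
            ta += dt_a * (F - LAM * ta + K * (to - ta)) / CA
            ta_sum += ta
        to += dt_o * K * (ta_sum / burst - to) / CO   # ocean-only step
    return ta, to
```

Because the atmosphere equilibrates quickly relative to the ocean, the asynchronous run relaxes both boxes to the same equilibrium (F/LAM here) that a fully synchronous integration would reach, at a fraction of the cost in atmosphere steps per unit of ocean time.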

  17. New Product Development in an Emerging Economy: Analysing the Role of Supplier Involvement Practices by Using Bayesian Markov Chain Monte Carlo Technique

    Directory of Open Access Journals (Sweden)

    Kanagi Kanapathy

    2014-01-01

    Full Text Available The research question is whether the positive relationship found between supplier involvement practices and new product development performance in developed economies also holds in emerging economies. The role of supplier involvement practices in new product development performance has yet to be substantially investigated in emerging economies (other than China). This premise was examined by distributing a survey instrument (Jayaram's (2008) published survey instrument, previously utilised in developed economies) to Malaysian manufacturing companies. To gauge the relationship between supplier involvement practices and the new product development (NPD) project performance of 146 companies, structural equation modelling was adopted. Our findings show that supplier involvement practices have a significant positive impact on NPD project performance in an emerging economy with respect to quality objectives, design objectives, cost objectives, and time-to-market objectives. Further analysis using the Bayesian Markov chain Monte Carlo algorithm, yielding a more credible and feasible differentiation, confirmed these results (even in the case of an emerging economy) and indicated that these practices account for 28% of the variance in NPD project performance. This considerable effect implies that supplier involvement is a must-have, although further research is needed to identify the contingencies for its practices.

  18. Amorphous silicon EPID calibration for dosimetric applications: comparison of a method based on Monte Carlo prediction of response with existing techniques

    International Nuclear Information System (INIS)

    Parent, L; Fielding, A L; Dance, D R; Seco, J; Evans, P M

    2007-01-01

    For EPID dosimetry, the calibration should ensure that all pixels have a similar response to a given irradiation. A calibration method (MC), using an analytical fit of a Monte Carlo simulated flood field EPID image to correct for the flood field image pixel intensity shape, was proposed. It was compared with the standard flood field calibration (FF), with the use of a water slab placed in the beam to flatten the flood field (WS) and with a multiple field calibration where the EPID was irradiated with a fixed 10 x 10 field for 16 different positions (MF). The EPID was used in its normal configuration (clinical setup) and with an additional 3 mm copper slab (modified setup). Beam asymmetry measured with a diode array was taken into account in the MC and WS methods. For both setups, the MC method provided pixel sensitivity values within 3% of those obtained with the MF and WS methods (mean difference <1%, standard deviation <2%). The difference of pixel sensitivity between the MC and FF methods was up to 12.2% (clinical setup) and 11.8% (modified setup). MC calibration provided images of open fields (5 x 5 to 20 x 20 cm²) and IMRT fields to within 3% of those obtained with the WS and MF calibrations, while differences with images calibrated with the FF method for fields larger than 10 x 10 cm² were up to 8%. The MC, WS and MF methods all provided a major improvement on the FF method. Advantages and drawbacks of each method were reviewed

  19. Characterizing the Trade Space Between Capability and Complexity in Next Generation Cloud and Precipitation Observing Systems Using Markov Chain Monte Carlo Techniques

    Science.gov (United States)

    Xu, Z.; Mace, G. G.; Posselt, D. J.

    2017-12-01

    As we begin to contemplate the next generation of atmospheric observing systems, it will be critically important that we are able to make informed decisions regarding the trade space between scientific capability and the need to keep complexity and cost within definable limits. To explore this trade space as it pertains to understanding key cloud and precipitation processes, we are developing a Markov Chain Monte Carlo (MCMC) algorithm suite that allows us to arbitrarily define the specifications of candidate observing systems and then explore how the uncertainties in key retrieved geophysical parameters respond to those specifications. MCMC algorithms produce a more complete posterior solution space and allow for an objective examination of the information contained in measurements. In our initial implementation, MCMC experiments are performed to retrieve vertical profiles of cloud and precipitation properties from a spectrum of active and passive measurements collected by aircraft during the ACE Radiation Definition Experiments (RADEX). Focusing on shallow cumulus clouds observed during the Integrated Precipitation and Hydrology EXperiment (IPHEX), the observing systems considered in this study include W- and Ka-band radar reflectivity, path-integrated attenuation at those frequencies, 31 and 94 GHz brightness temperatures, and visible and near-infrared reflectance. By varying the sensitivity and uncertainty of these measurements, we quantify the capacity of various combinations of observations to characterize the physical properties of clouds and precipitation.
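
The kind of experiment described — varying measurement uncertainty and watching the posterior of a retrieved parameter widen — can be sketched with a minimal random-walk Metropolis sampler. The linear forward model, Gaussian prior and all numbers below are toy choices, not RADEX/IPHEX quantities:

```python
import numpy as np

def metropolis(logpost, x0, n, step, rng):
    """Random-walk Metropolis sampler (1-D for clarity)."""
    x, lp = x0, logpost(x0)
    chain = np.empty(n)
    for i in range(n):
        xp = x + rng.normal(0.0, step)
        lpp = logpost(xp)
        if np.log(rng.random()) < lpp - lp:   # accept with prob min(1, ratio)
            x, lp = xp, lpp
        chain[i] = x
    return chain

def posterior_width(obs_sigma, y_obs=1.0):
    """Posterior spread of a geophysical parameter x given one observation
    y = 2 * x + noise with measurement uncertainty obs_sigma (toy setup)."""
    rng = np.random.default_rng(1)

    def logpost(x):
        prior = -0.5 * (x / 10.0) ** 2                       # broad Gaussian prior
        like = -0.5 * ((y_obs - 2.0 * x) / obs_sigma) ** 2   # measurement term
        return prior + like

    chain = metropolis(logpost, 0.0, 40_000, 0.5, rng)
    return chain[5_000:].std()   # posterior standard deviation after burn-in
```

Tightening the assumed measurement uncertainty shrinks the sampled posterior, which is exactly the quantity traded against instrument complexity and cost.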

  20. Physics and dynamics coupling across scales in the next generation CESM: Meeting the challenge of high resolution. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Larson, Vincent E.

    2015-02-21

    This is a final report for a SciDAC grant supported by BER. The project implemented a novel technique for coupling small-scale dynamics and microphysics into a community climate model. The technique uses subcolumns that are sampled in Monte Carlo fashion from a distribution of subgrid variability. The resulting global simulations show several improvements over the status quo.

  1. Monte Carlo source convergence and the Whitesides problem

    International Nuclear Information System (INIS)

    Blomquist, R. N.

    2000-01-01

    The issue of fission source convergence in Monte Carlo eigenvalue calculations is of interest because of the potential consequences of erroneous criticality safety calculations. In this work, the authors compare two different techniques to improve the source convergence behavior of standard Monte Carlo calculations applied to challenging source convergence problems. The first method, super-history powering, attempts to avoid discarding important fission sites between generations by delaying stochastic sampling of the fission site bank until after several generations of multiplication. The second method, stratified sampling of the fission site bank, explicitly keeps the important sites even if conventional sampling would have eliminated them. The test problems are variants of Whitesides' Criticality of the World problem, in which the fission site phase space was intentionally undersampled in order to induce marginally intolerable variability in local fission site populations. Three variants of the problem were studied, each with a different degree of coupling between fissionable pieces. Both the super-history powering method and the stratified sampling method were shown to improve convergence behavior, although stratified sampling is more robust for the extreme case of no coupling. Neither algorithm completely eliminates the loss of the most important fissionable piece, and if coupling is absent, the lost piece cannot be recovered unless its sites from earlier generations have been retained. Finally, criteria for measuring source convergence reliability are proposed and applied to the test problems
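
The contrast between conventional and stratified sampling of the fission bank can be illustrated with a generic systematic/stratified resampling sketch (not the authors' actual implementation): independent multinomial draws can drop a weakly coupled site entirely by chance, whereas stratified draws guarantee at least one offspring for every site whose expected copy count n*w is at least 1:

```python
import numpy as np

def multinomial_resample(weights, n, rng):
    """Conventional fission-bank sampling: n independent draws.
    A site with modest weight can be lost entirely by chance."""
    w = np.asarray(weights, dtype=float)
    return rng.choice(len(w), size=n, p=w / w.sum())

def stratified_resample(weights, n, rng):
    """Systematic (stratified) sampling: one point per stratum of width
    1/n, all sharing a single random offset. Any site whose expected
    copy count n*w_i >= 1 keeps at least one site in the new bank."""
    w = np.asarray(weights, dtype=float)
    points = (np.arange(n) + rng.random()) / n
    return np.searchsorted(np.cumsum(w / w.sum()), points, side="right")
```

Because the systematic points are spaced exactly 1/n apart, a site occupying a cumulative-weight interval of length at least 1/n can never be skipped, which is the "explicitly keeps the important sites" property described above.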

  2. Quantum Monte Carlo approaches for correlated systems

    CERN Document Server

    Becca, Federico

    2017-01-01

    Over the past several decades, computational approaches to studying strongly interacting systems have become increasingly varied and sophisticated. This book provides a comprehensive introduction to state-of-the-art quantum Monte Carlo techniques relevant for applications in correlated systems. It gives a clear overview of variational wave functions and a detailed presentation of stochastic sampling, including Markov chains and Langevin dynamics, which is developed into a discussion of Monte Carlo methods. The variational technique is described from its foundations to a detailed description of its algorithms. Further topics discussed include optimisation techniques, real-time dynamics and projection methods, including Green's function, reptation and auxiliary-field Monte Carlo, from basic definitions to advanced algorithms for efficient codes, and the book concludes with recent developments on the continuum space. Quantum Monte Carlo Approaches for Correlated Systems provides an extensive reference ...

  3. Amorphous silicon EPID calibration for dosimetric applications: comparison of a method based on Monte Carlo prediction of response with existing techniques

    Energy Technology Data Exchange (ETDEWEB)

    Parent, L [Joint Department of Physics, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, Sutton (United Kingdom); Fielding, A L [School of Physical and Chemical Sciences, Queensland University of Technology, Brisbane (Australia); Dance, D R [Joint Department of Physics, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, London (United Kingdom); Seco, J [Department of Radiation Oncology, Francis Burr Proton Therapy Center, Massachusetts General Hospital, Harvard Medical School, Boston (United States); Evans, P M [Joint Department of Physics, Institute of Cancer Research and Royal Marsden NHS Foundation Trust, Sutton (United Kingdom)

    2007-07-21

    For EPID dosimetry, the calibration should ensure that all pixels have a similar response to a given irradiation. A calibration method (MC), using an analytical fit of a Monte Carlo simulated flood field EPID image to correct for the flood field image pixel intensity shape, was proposed. It was compared with the standard flood field calibration (FF), with the use of a water slab placed in the beam to flatten the flood field (WS) and with a multiple field calibration where the EPID was irradiated with a fixed 10 x 10 field for 16 different positions (MF). The EPID was used in its normal configuration (clinical setup) and with an additional 3 mm copper slab (modified setup). Beam asymmetry measured with a diode array was taken into account in MC and WS methods. For both setups, the MC method provided pixel sensitivity values within 3% of those obtained with the MF and WS methods (mean difference <1%, standard deviation <2%). The difference of pixel sensitivity between MC and FF methods was up to 12.2% (clinical setup) and 11.8% (modified setup). MC calibration provided images of open fields (5 x 5 to 20 x 20 cm²) and IMRT fields to within 3% of that obtained with WS and MF calibrations while differences with images calibrated with the FF method for fields larger than 10 x 10 cm² were up to 8%. MC, WS and MF methods all provided a major improvement on the FF method. Advantages and drawbacks of each method were reviewed.

  4. On coupling fluid plasma and kinetic neutral physics models

    Directory of Open Access Journals (Sweden)

    I. Joseph

    2017-08-01

    Full Text Available The coupled fluid plasma and kinetic neutral physics equations are analyzed through theory and simulation of benchmark cases. It is shown that coupling methods that do not treat the coupling rates implicitly are restricted to short time steps for stability. Fast charge exchange, ionization and recombination coupling rates exist, even after constraining the solution by requiring that the neutrals are at equilibrium. For explicit coupling, the present implementation of Monte Carlo correlated sampling techniques does not allow for complete convergence in slab geometry. For the benchmark case, residuals decay with particle number and increase with grid size, indicating that they scale in a manner that is similar to the theoretical prediction for nonlinear bias error. Progress is reported on implementation of a fully implicit Jacobian-free Newton–Krylov coupling scheme. The present block Jacobi preconditioning method is still sensitive to time step and methods that better precondition the coupled system are under investigation.

  5. Coupled Electrokinetics-Adsorption Technique for Simultaneous Removal of Heavy Metals and Organics from Saline-Sodic Soil

    Science.gov (United States)

    Lukman, Salihu; Essa, Mohammed Hussain; Mu'azu, Nuhu Dalhat; Bukhari, Alaadin

    2013-01-01

    In situ remediation technologies for contaminated soils are faced with significant technical challenges when the contaminated soil has low permeability. Popular traditional technologies are rendered ineffective due to the difficulty encountered in accessing the contaminants as well as when employed in settings where the soil contains mixed contaminants such as petroleum hydrocarbons, heavy metals, and polar organics. In this study, an integrated in situ remediation technique that couples electrokinetics with adsorption, using locally produced granular activated carbon from date palm pits in the treatment zones that are installed directly to bracket the contaminated soils at bench-scale, is investigated. Natural saline-sodic soil, spiked with contaminant mixture (kerosene, phenol, Cr, Cd, Cu, Zn, Pb, and Hg), was used in this study to investigate the efficiency of contaminant removal. For the 21-day period of continuous electrokinetics-adsorption experimental run, efficiency for the removal of Zn, Pb, Cu, Cd, Cr, Hg, phenol, and kerosene was found to reach 26.8, 55.8, 41.0, 34.4, 75.9, 92.49, 100.0, and 49.8%, respectively. The results obtained suggest that integrating adsorption into electrokinetic technology is a promising solution for removal of contaminant mixture from saline-sodic soils. PMID:24235885

  6. Coupled Electrokinetics-Adsorption Technique for Simultaneous Removal of Heavy Metals and Organics from Saline-Sodic Soil

    Directory of Open Access Journals (Sweden)

    Salihu Lukman

    2013-01-01

    Full Text Available In situ remediation technologies for contaminated soils are faced with significant technical challenges when the contaminated soil has low permeability. Popular traditional technologies are rendered ineffective due to the difficulty encountered in accessing the contaminants as well as when employed in settings where the soil contains mixed contaminants such as petroleum hydrocarbons, heavy metals, and polar organics. In this study, an integrated in situ remediation technique that couples electrokinetics with adsorption, using locally produced granular activated carbon from date palm pits in the treatment zones that are installed directly to bracket the contaminated soils at bench-scale, is investigated. Natural saline-sodic soil, spiked with contaminant mixture (kerosene, phenol, Cr, Cd, Cu, Zn, Pb, and Hg), was used in this study to investigate the efficiency of contaminant removal. For the 21-day period of continuous electrokinetics-adsorption experimental run, efficiency for the removal of Zn, Pb, Cu, Cd, Cr, Hg, phenol, and kerosene was found to reach 26.8, 55.8, 41.0, 34.4, 75.9, 92.49, 100.0, and 49.8%, respectively. The results obtained suggest that integrating adsorption into electrokinetic technology is a promising solution for removal of contaminant mixture from saline-sodic soils.

  7. Solid-Phase Extraction Coupled to a Paper-Based Technique for Trace Copper Detection in Drinking Water.

    Science.gov (United States)

    Quinn, Casey W; Cate, David M; Miller-Lionberg, Daniel D; Reilly, Thomas; Volckens, John; Henry, Charles S

    2018-03-20

    Metal contamination of natural and drinking water systems poses hazards to public and environmental health. Quantifying metal concentrations in water typically requires sample collection in the field followed by expensive laboratory analysis that can take days to weeks to obtain results. The objective of this work was to develop a low-cost, field-deployable method to quantify trace levels of copper in drinking water by coupling solid-phase extraction/preconcentration with a microfluidic paper-based analytical device. This method has the advantages of being hand-powered (instrument-free) and using a simple "read by eye" quantification motif (based on color distance). Tap water samples collected across Fort Collins, CO, were tested with this method and validated against ICP-MS. We demonstrate the ability to quantify the copper content of tap water within 30% of a reference technique at levels ranging from 20 to 500 000 ppb. The application of this technology, which should be sufficient as a rapid screening tool, can lead to faster, more cost-effective detection of soluble metals in water systems.

  8. Analysis of Drug Design for a Selection of G Protein-Coupled Neuro-Receptors Using Neural Network Techniques

    DEFF Research Database (Denmark)

    Agerskov, Claus; Mortensen, Rasmus M.; Bohr, Henrik G.

    2015-01-01

    A study is presented on how well possible drug-molecules can be predicted with respect to their function and binding to a selection of neuro-receptors by the use of artificial neural networks. The ligands investigated in this study are chosen to correspond to the G protein-coupled receptors mu-opioid, serotonin 2B (5-HT2B) and metabotropic glutamate D5. They are selected due to the availability of pharmacological drug-molecule binding data for these receptors. Feedback and deep belief artificial neural network architectures (NNs) were chosen to perform the task of aiding drug-design. ... 0.925. The performance of 8-category networks (8 output classes for binding strength) obtained a prediction accuracy above 60%. After training the networks, tests were done on how well the systems could be used as an aid in designing candidate drug molecules. Specifically, it was shown how a selection of chemical ... computational tools, able to aid in drug-design in a fast and cheap fashion, compared to conventional pharmacological techniques.

  9. Coupled electrokinetics-adsorption technique for simultaneous removal of heavy metals and organics from saline-sodic soil.

    Science.gov (United States)

    Lukman, Salihu; Essa, Mohammed Hussain; Mu'azu, Nuhu Dalhat; Bukhari, Alaadin

    2013-01-01

    In situ remediation technologies for contaminated soils are faced with significant technical challenges when the contaminated soil has low permeability. Popular traditional technologies are rendered ineffective due to the difficulty encountered in accessing the contaminants as well as when employed in settings where the soil contains mixed contaminants such as petroleum hydrocarbons, heavy metals, and polar organics. In this study, an integrated in situ remediation technique that couples electrokinetics with adsorption, using locally produced granular activated carbon from date palm pits in the treatment zones that are installed directly to bracket the contaminated soils at bench-scale, is investigated. Natural saline-sodic soil, spiked with contaminant mixture (kerosene, phenol, Cr, Cd, Cu, Zn, Pb, and Hg), was used in this study to investigate the efficiency of contaminant removal. For the 21-day period of continuous electrokinetics-adsorption experimental run, efficiency for the removal of Zn, Pb, Cu, Cd, Cr, Hg, phenol, and kerosene was found to reach 26.8, 55.8, 41.0, 34.4, 75.9, 92.49, 100.0, and 49.8%, respectively. The results obtained suggest that integrating adsorption into electrokinetic technology is a promising solution for removal of contaminant mixture from saline-sodic soils.

  10. Parallel Monte Carlo reactor neutronics

    International Nuclear Information System (INIS)

    Blomquist, R.N.; Brown, F.B.

    1994-01-01

    The issues affecting implementation of parallel algorithms for large-scale engineering Monte Carlo neutron transport simulations are discussed. For nuclear reactor calculations, these include load balancing, recoding effort, reproducibility, domain decomposition techniques, I/O minimization, and strategies for different parallel architectures. Two codes were parallelized and tested for performance. The architectures employed include SIMD, MIMD-distributed memory, and workstation network with uneven interactive load. Speedups linear with the number of nodes were achieved
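    One reproducibility strategy touched on above is to give each node its own independently seeded random-number stream, so the combined tally does not depend on scheduling or interactive load. A minimal sketch of that idea on a toy pi-estimation tally (the seeds, node count, and history counts are illustrative assumptions, not the codes discussed in the record):

    ```python
    import random

    # Each simulated "node" draws from its own, independently seeded RNG stream,
    # so the combined tally is reproducible regardless of how nodes are scheduled.
    def node_tally(seed, n_histories):
        rng = random.Random(seed)
        hits = 0
        for _ in range(n_histories):
            x, y = rng.random(), rng.random()
            if x * x + y * y < 1.0:
                hits += 1
        return hits

    def parallel_pi(n_nodes=8, histories_per_node=50_000, base_seed=1234):
        # Static load balancing: equal histories per node; one derived seed per node.
        tallies = [node_tally(base_seed + k, histories_per_node) for k in range(n_nodes)]
        total = n_nodes * histories_per_node
        return 4.0 * sum(tallies) / total

    print(parallel_pi())  # ≈ 3.14
    ```

    Because every node's stream is fixed by its seed, the same answer is obtained whether the node tallies are computed serially or in parallel.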

  11. Physicochemical characterization of titanium dioxide pigments using various techniques for size determination and asymmetric flow field flow fractionation hyphenated with inductively coupled plasma mass spectrometry

    NARCIS (Netherlands)

    Helsper, Hans; Peters, Ruud J.B.; Bemmel, van Greet; Herrera Rivera, Zahira; Wagner, Stephan; Kammer, von der Frank; Tromp, Peter C.; Hofmann, Thilo; Weigel, Stefan

    2016-01-01

    Seven commercial titanium dioxide pigments and two other well-defined TiO2 materials (TiMs) were physicochemically characterised using asymmetric flow field flow fractionation (aF4) for separation, various techniques to determine size distribution and inductively coupled plasma mass spectrometry

  12. Physicochemical characterization of titanium dioxide pigments using various techniques for size determination and asymmetric flow field flow fractionation hyphenated with inductively coupled plasma mass spectrometry

    NARCIS (Netherlands)

    Helsper, J.P.F.G.; Peters, R.J.B.; Bemmel, M.E.M. van; Rivera, Z.E.H.; Wagner, S.; Kammer, F. von der; Tromp, P.C.; Hofmann, T.; Weigel, S.

    2016-01-01

    Seven commercial titanium dioxide pigments and two other well-defined TiO2 materials (TiMs) were physicochemically characterised using asymmetric flow field flow fractionation (aF4) for separation, various techniques to determine size distribution and inductively coupled plasma mass spectrometry

  13. Discrete diffusion Monte Carlo for frequency-dependent radiative transfer

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.; Thompson, Kelly G.; Urbatsch, Todd J.

    2011-01-01

    Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Implicit Monte Carlo radiative-transfer simulations. In this paper, we develop an extension of DDMC for frequency-dependent radiative transfer. We base our new DDMC method on a frequency-integrated diffusion equation for frequencies below a specified threshold. Above this threshold we employ standard Monte Carlo. With a frequency-dependent test problem, we confirm the increased efficiency of our new DDMC technique. (author)

  14. New Approaches and Applications for Monte Carlo Perturbation Theory

    Energy Technology Data Exchange (ETDEWEB)

    Aufiero, Manuele; Bidaud, Adrien; Kotlyar, Dan; Leppänen, Jaakko; Palmiotti, Giuseppe; Salvatores, Massimo; Sen, Sonat; Shwageraus, Eugene; Fratoni, Massimiliano

    2017-02-01

    This paper presents some of the recent advancements in the extension of Monte Carlo Perturbation Theory methodologies and applications. In particular, the problems discussed involve burnup calculations, perturbation calculations based on continuous-energy functions, and Monte Carlo Perturbation Theory in loosely coupled systems.

  15. Present status of transport code development based on Monte Carlo method

    International Nuclear Information System (INIS)

    Nakagawa, Masayuki

    1985-01-01

    The present status of Monte Carlo code development is briefly reviewed. The main items are the following: application fields; methods used in Monte Carlo codes (geometry specification, nuclear data, estimators, and variance reduction techniques) and unfinished work; typical Monte Carlo codes; and the merits of continuous-energy Monte Carlo codes. (author)

  16. Parallel path nebulizer: Critical parameters for use with microseparation techniques combined with inductively coupled plasma mass spectrometry

    International Nuclear Information System (INIS)

    Yanes, Enrique G.; Miller-Ihli, Nancy J.

    2005-01-01

    Four different low-flow parallel path Mira Mist CE nebulizers were evaluated and compared in support of an ongoing project on the use of microseparation techniques interfaced to inductively coupled plasma mass spectrometry for the quantification of cobalamin species (Vitamin B12). The characterization of the different Mira Mist CE nebulizers focused on nebulizer orientation and on the effect of methanol on analytical response. The gas-outlet orientation that consistently provided the maximum signal was with the outlet rotated to the 11 o'clock position when the nebulizer is viewed end-on. With this orientation the increased signal may be explained by the fact that the cone angle of the aerosol directs the largest fraction of the aerosol to the center of the spray chamber and consequently into the plasma. To characterize the nebulizer's performance, the signal response of a multielement solution containing elements with a variety of ionization potentials was used. The selection of elements with varying ionization energies and degrees of ionization was essential for a better understanding of the observed signal enhancement when methanol was used. Two different phenomena contribute to signal enhancement with methanol: the first is improved transport efficiency and the second is the 'carbon enhancement effect'. The net result was that as much as a 30-fold increase in signal was observed for As and Mg when using a make-up solution of 20% methanol at a 15 μL/min flow rate, which is equivalent to a net volume of 3 μL/min of pure methanol.

  17. Specialized Monte Carlo codes versus general-purpose Monte Carlo codes

    International Nuclear Information System (INIS)

    Moskvin, Vadim; DesRosiers, Colleen; Papiez, Lech; Lu, Xiaoyi

    2002-01-01

    The possibilities of Monte Carlo modeling for dose calculations and treatment optimization are quite limited in radiation oncology applications. The main reason is that the Monte Carlo technique for dose calculations is time-consuming, while treatment planning may require hundreds of possible cases of dose simulations to be evaluated for dose optimization. The second reason is that the general-purpose codes widely used in practice require an experienced user to customize them for calculations. This paper discusses a concept of Monte Carlo code design that can avoid the main problems that are preventing widespread use of this simulation technique in medical physics. (authors)

  18. Exploring Monte Carlo methods

    CERN Document Server

    Dunn, William L

    2012-01-01

    Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle proble

  19. Monte Carlo methods

    Directory of Open Access Journals (Sweden)

    Bardenet Rémi

    2013-07-01

    Full Text Available Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that allow these integrals to be computed numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among which rejection sampling, importance sampling, and Markov chain Monte Carlo (MCMC) methods. We give intuition on the theoretical justification of the algorithms as well as practical advice, trying to relate both. We discuss the application of Monte Carlo in experimental physics, and point to landmarks in the literature for the curious reader.
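    The rejection and importance sampling algorithms reviewed above can be sketched in a few lines. The target density, uniform proposal, bound, and sample counts below are illustrative assumptions, not anything prescribed by the record:

    ```python
    import math
    import random

    random.seed(0)

    # Unnormalized target density on [-4, 4] (a posterior-like toy example).
    def p_unnorm(x):
        return math.exp(-x * x / 2.0) * (1.0 + math.sin(3.0 * x) ** 2)

    # Rejection sampling: propose uniformly, accept with probability p(x)/M.
    # m_bound must dominate p_unnorm on the support (here 2.0 suffices).
    def rejection_sample(n, m_bound=2.0):
        samples = []
        while len(samples) < n:
            x = random.uniform(-4.0, 4.0)
            if random.uniform(0.0, m_bound) < p_unnorm(x):
                samples.append(x)
        return samples

    # Self-normalized importance sampling: E_p[f] ≈ sum(w_i f(x_i)) / sum(w_i),
    # with w_i = p(x_i)/q(x_i); the uniform q is constant and cancels.
    def importance_estimate(f, n):
        xs = [random.uniform(-4.0, 4.0) for _ in range(n)]
        ws = [p_unnorm(x) for x in xs]
        return sum(w * f(x) for w, x in zip(ws, xs)) / sum(ws)

    mean_rej = sum(rejection_sample(20000)) / 20000
    mean_imp = importance_estimate(lambda x: x, 20000)
    print(mean_rej, mean_imp)  # both near 0, since the target is symmetric
    ```

    Both estimators target the same mean; rejection returns exact draws at the cost of discarded proposals, while importance sampling keeps every draw but reweights it.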

  20. The neutron and gamma-ray dose characterization using the Monte Carlo method to study the feasibility of the Prompt Gamma Activation Analysis technique at IPR-R1 TRIGA reactor in Brazil

    Energy Technology Data Exchange (ETDEWEB)

    Guerra, Bruno T.; Soares, Alexandre L.; Grynberg, Suely E.; Menezes, Maria Angela B.C., E-mail: brunoteixeiraguerra@yahoo.com.br, E-mail: menezes@cdtn.br, E-mail: asleal@cdtn.br, E-mail: seg@cdtn.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)

    2013-07-01

    The IPR-R1 is a TRIGA reactor, Mark-I model, manufactured by the General Atomic Company and installed at the Nuclear Technology Development Centre (CDTN) of the Brazilian Nuclear Energy Commission (CNEN), in Belo Horizonte, Brazil. It is a light-water moderated and cooled, graphite-reflected, open-pool type research reactor. IPR-R1 works at 100 kW but will soon be licensed to operate at 250 kW. It is a low-power, low-pressure reactor, for application in research, training and radioisotope production. The fuel is an alloy of zirconium hydride and uranium enriched to 20% in 235U. The implementation of PGNAA (Prompt Gamma Neutron Activation Analysis) at the TRIGA IPR-R1 research reactor of CDTN will significantly increase the types of matrices that can be analyzed. A project is underway to implement this technique at CDTN. In order to verify the feasibility of PGNAA at the TRIGA reactor, the MCNP (Monte Carlo N-Particle) code is used for the theoretical calculations. This paper presents the results of a preliminary study of the neutron and gamma-ray dose in the room where the reactor is located, in the case of implementation of this technique in the IPR-R1. (author)

  1. The neutron and gamma-ray dose characterization using the Monte Carlo method to study the feasibility of the Prompt Gamma Activation Analysis technique at IPR-R1 TRIGA reactor in Brazil

    International Nuclear Information System (INIS)

    Guerra, Bruno T.; Soares, Alexandre L.; Grynberg, Suely E.; Menezes, Maria Angela B.C.

    2013-01-01

    The IPR-R1 is a TRIGA reactor, Mark-I model, manufactured by the General Atomic Company and installed at the Nuclear Technology Development Centre (CDTN) of the Brazilian Nuclear Energy Commission (CNEN), in Belo Horizonte, Brazil. It is a light-water moderated and cooled, graphite-reflected, open-pool type research reactor. IPR-R1 works at 100 kW but will soon be licensed to operate at 250 kW. It is a low-power, low-pressure reactor, for application in research, training and radioisotope production. The fuel is an alloy of zirconium hydride and uranium enriched to 20% in 235U. The implementation of PGNAA (Prompt Gamma Neutron Activation Analysis) at the TRIGA IPR-R1 research reactor of CDTN will significantly increase the types of matrices that can be analyzed. A project is underway to implement this technique at CDTN. In order to verify the feasibility of PGNAA at the TRIGA reactor, the MCNP (Monte Carlo N-Particle) code is used for the theoretical calculations. This paper presents the results of a preliminary study of the neutron and gamma-ray dose in the room where the reactor is located, in the case of implementation of this technique in the IPR-R1. (author)

  2. State of the art of Monte Carlo techniques for reliable activated waste evaluations

    International Nuclear Information System (INIS)

    Culioli, Matthieu; Chapoutier, Nicolas; Barbier, Samuel; Janski, Sylvain

    2016-01-01

    This paper presents the calculation scheme used in many studies to assess the activity inventories of French shutdown reactors (including Pressurized Water Reactor, Heavy Water Reactor, Sodium-Cooled Fast Reactor and Natural Uranium Gas-Cooled, or UNGG, designs). This calculation scheme is based on Monte Carlo calculations (MCNP) and involves advanced techniques for source modeling, geometry modeling (with Computer-Aided Design integration), acceleration methods and coupled depletion calculations on 3D meshes. All these techniques offer efficient and reliable evaluations on large-scale models with a high level of detail, reducing the risks of underestimation or excessive conservatism. (authors)

  3. A hybrid transport-diffusion method for Monte Carlo radiative-transfer simulations

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.; Urbatsch, Todd J.; Evans, Thomas M.; Buksas, Michael W.

    2007-01-01

    Discrete Diffusion Monte Carlo (DDMC) is a technique for increasing the efficiency of Monte Carlo particle-transport simulations in diffusive media. If standard Monte Carlo is used in such media, particle histories will consist of many small steps, resulting in a computationally expensive calculation. In DDMC, particles take discrete steps between spatial cells according to a discretized diffusion equation. Each discrete step replaces many small Monte Carlo steps, thus increasing the efficiency of the simulation. In addition, given that DDMC is based on a diffusion equation, it should produce accurate solutions if used judiciously. In practice, DDMC is combined with standard Monte Carlo to form a hybrid transport-diffusion method that can accurately simulate problems with both diffusive and non-diffusive regions. In this paper, we extend previously developed DDMC techniques in several ways that improve the accuracy and utility of DDMC for nonlinear, time-dependent, radiative-transfer calculations. The use of DDMC in these types of problems is advantageous since, due to the underlying linearizations, optically thick regions appear to be diffusive. First, we employ a diffusion equation that is discretized in space but is continuous in time. Not only is this methodology theoretically more accurate than temporally discretized DDMC techniques, but it also has the benefit that a particle's time is always known. Thus, there is no ambiguity regarding what time to assign a particle that leaves an optically thick region (where DDMC is used) and begins transporting by standard Monte Carlo in an optically thin region. Also, we treat the interface between optically thick and optically thin regions with an improved method, based on the asymptotic diffusion-limit boundary condition, that can produce accurate results regardless of the angular distribution of the incident Monte Carlo particles. 
Finally, we develop a technique for estimating radiation momentum deposition during the

  4. Comparison between immunomagnetic separation, coupled with immunofluorescence, and the techniques of Faust et al. and of Lutz for the diagnosis of Giardia lamblia cysts in human feces

    Directory of Open Access Journals (Sweden)

    Souza Doris Sobral Marques

    2003-01-01

    Full Text Available In the present study, the performance of the Immunomagnetic Separation technique coupled with Immunofluorescence (IMS-IFA) was compared with the Faust et al. and Lutz parasitological techniques for the detection of Giardia lamblia cysts in human feces. One hundred and twenty-seven samples were evaluated by the three techniques simultaneously, showing a cyst detection rate of 27.5% by IMS-IFA and 15.7% by both the Faust et al. and Lutz techniques. Data analysis showed a higher sensitivity of IMS-IFA for the detection of G. lamblia cysts in comparison with the techniques of Faust et al. and Lutz. The use of this methodology as a routine procedure enables the processing of many samples simultaneously, in order to increase the recovery rate of G. lamblia cysts and reduce the time of sample storage.

  5. A radiating shock evaluated using Implicit Monte Carlo Diffusion

    International Nuclear Information System (INIS)

    Cleveland, M.; Gentile, N.

    2013-01-01

    Implicit Monte Carlo [1] (IMC) has been shown to be very expensive when used to evaluate a radiation field in opaque media. Implicit Monte Carlo Diffusion (IMD) [2], which evaluates a spatial discretized diffusion equation using a Monte Carlo algorithm, can be used to reduce the cost of evaluating the radiation field in opaque media [2]. This work couples IMD to the hydrodynamics equations to evaluate opaque diffusive radiating shocks. The Lowrie semi-analytic diffusive radiating shock benchmark[a] is used to verify our implementation of the coupled system of equations. (authors)

  6. Monte Carlo methods and models in finance and insurance

    CERN Document Server

    Korn, Ralf; Kroisandt, Gerald

    2010-01-01

    Offering a unique balance between applications and calculations, Monte Carlo Methods and Models in Finance and Insurance incorporates the application background of finance and insurance with the theory and applications of Monte Carlo methods. It presents recent methods and algorithms, including the multilevel Monte Carlo method, the statistical Romberg method, and the Heath-Platen estimator, as well as recent financial and actuarial models, such as the Cheyette and dynamic mortality models. The authors separately discuss Monte Carlo techniques, stochastic process basics, and the theoretical background and intuition behind financial and actuarial mathematics, before bringing the topics together to apply the Monte Carlo methods to areas of finance and insurance. This allows for the easy identification of standard Monte Carlo tools and for a detailed focus on the main principles of financial and insurance mathematics. The book describes high-level Monte Carlo methods for standard simulation and the simulation of...

  7. Monte Carlo simulations for plasma physics

    International Nuclear Information System (INIS)

    Okamoto, M.; Murakami, S.; Nakajima, N.; Wang, W.X.

    2000-07-01

    Plasma behaviour is very complicated and its analysis is generally difficult. However, when collisional processes play an important role in the plasma behaviour, the Monte Carlo method is often employed as a useful tool. For example, in neutral beam injection heating (NBI heating), electron or ion cyclotron heating, and alpha heating, Coulomb collisions slow down highly energetic particles and pitch-angle scatter them. These processes are often studied by the Monte Carlo technique, and good agreement can be obtained with experimental results. Recently, the Monte Carlo method has been developed to study fast-particle transport associated with heating and the generation of the radial electric field. Further, it is applied to investigating neoclassical transport in plasmas with steep gradients of density and temperature, which is beyond the conventional neoclassical theory. In this report, we briefly summarize the research done by the present authors utilizing the Monte Carlo method. (author)

  8. Simulation and the Monte Carlo method

    CERN Document Server

    Rubinstein, Reuven Y

    2016-01-01

    Simulation and the Monte Carlo Method, Third Edition reflects the latest developments in the field and presents a fully updated and comprehensive account of the major topics that have emerged in Monte Carlo simulation since the publication of the classic First Edition more than a quarter of a century ago. While maintaining its accessible and intuitive approach, this revised edition features a wealth of up-to-date information that facilitates a deeper understanding of problem solving across a wide array of subject areas, such as engineering, statistics, computer science, mathematics, and the physical and life sciences. The book begins with a modernized introduction that addresses the basic concepts of probability, Markov processes, and convex optimization. Subsequent chapters discuss the dramatic changes that have occurred in the field of the Monte Carlo method, with coverage of many modern topics including: Markov Chain Monte Carlo, variance reduction techniques such as the transform likelihood ratio...

  9. Monte Carlo electron/photon transport

    International Nuclear Information System (INIS)

    Mack, J.M.; Morel, J.E.; Hughes, H.G.

    1985-01-01

    A review of nonplasma coupled electron/photon transport using the Monte Carlo method is presented. Remarks are mainly restricted to linearized formalisms at electron energies from 1 keV to 1000 MeV. Applications involving pulse-height estimation, transport in external magnetic fields, and optical Cerenkov production are discussed to underscore the importance of this branch of computational physics. Advances in electron multigroup cross-section generation are reported, and their impact on future code development is assessed. Progress toward the transformation of MCNP into a generalized neutral/charged-particle Monte Carlo code is described. 48 refs

  10. Asymptotic response of observables from divergent weak-coupling expansions: A fractional-calculus-assisted Padé technique

    Science.gov (United States)

    Dhatt, Sharmistha; Bhattacharyya, Kamal

    2012-08-01

    Appropriate constructions of Padé approximants are believed to provide reasonable estimates of the asymptotic (large-coupling) amplitude and exponent of an observable, given its weak-coupling expansion to some desired order. In many instances, however, sequences of such approximants are seen to converge very poorly. We outline here a strategy that exploits the idea of fractional calculus to considerably improve the convergence behavior. Pilot calculations on the ground-state perturbative energy series of quartic, sextic, and octic anharmonic oscillators reveal clearly the worth of our endeavor.
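    The basic construction the paper refines can be illustrated on a toy series (the fractional-calculus-assisted step itself is not reproduced here). An [L/M] Padé approximant P_L(g)/Q_M(g) is built from the weak-coupling coefficients, and its large-g form p_L g^(L-M)/q_M yields an asymptotic exponent L-M and amplitude p_L/q_M. The series below, for f(g) = 1/(1+g), is an assumed example chosen so the [0/1] approximant is exact:

    ```python
    from fractions import Fraction

    def gauss_solve(A, b):
        """Solve A x = b by Gaussian elimination with exact Fraction arithmetic."""
        n = len(b)
        M = [row[:] + [b[i]] for i, row in enumerate(A)]
        for col in range(n):
            piv = next(r for r in range(col, n) if M[r][col] != 0)
            M[col], M[piv] = M[piv], M[col]
            inv = Fraction(1, 1) / M[col][col]
            M[col] = [v * inv for v in M[col]]
            for r in range(n):
                if r != col and M[r][col] != 0:
                    f = M[r][col]
                    M[r] = [vr - f * vc for vr, vc in zip(M[r], M[col])]
        return [M[i][n] for i in range(n)]

    def pade(coeffs, L, M_):
        """[L/M] Pade approximant P/Q from the series c0 + c1 g + c2 g^2 + ...,
        normalized so Q(0) = 1. Requires at least L + M_ + 1 coefficients."""
        c = [Fraction(x) for x in coeffs]
        def cc(i):
            return c[i] if i >= 0 else Fraction(0)
        # Denominator: match coefficients of orders L+1 .. L+M in Q*f - P = 0.
        A = [[cc(L + k - j) for j in range(1, M_ + 1)] for k in range(1, M_ + 1)]
        b = [-cc(L + k) for k in range(1, M_ + 1)]
        q = [Fraction(1)] + gauss_solve(A, b)
        # Numerator: low-order coefficients of Q*f.
        p = [sum(q[j] * cc(i - j) for j in range(min(i, M_) + 1)) for i in range(L + 1)]
        return p, q

    # f(g) = 1/(1+g) has weak-coupling series 1 - g + g^2 - g^3 + ...;
    # the [0/1] approximant recovers it exactly, giving asymptotic
    # exponent 0 - 1 = -1 and amplitude p_0/q_1 = 1, matching f(g) ~ 1/g.
    p, q = pade([1, -1, 1, -1], 0, 1)
    print(p, q)
    ```

    In practice one forms a sequence of such approximants of increasing order and watches the estimated amplitude and exponent for convergence, which is precisely where the poor convergence addressed by the paper shows up.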

  11. BREM5 electroweak Monte Carlo

    International Nuclear Information System (INIS)

    Kennedy, D.C. II.

    1987-01-01

    This is an update on the progress of the BREMMUS Monte Carlo simulator, particularly in its current incarnation, BREM5. The present report is intended only as a follow-up to the Mark II/Granlibakken proceedings, and those proceedings should be consulted for a complete description of the capabilities and goals of the BREMMUS program. The new BREM5 program improves on the previous version of BREMMUS, BREM2, in a number of important ways. In BREM2, the internal loop (oblique) corrections were not treated in consistent fashion, a deficiency that led to renormalization scheme-dependence; i.e., physical results, such as cross sections, were dependent on the method used to eliminate infinities from the theory. Of course, this problem cannot be tolerated in a Monte Carlo designed for experimental use. BREM5 incorporates a new way of treating the oblique corrections, as explained in the Granlibakken proceedings, that guarantees renormalization scheme-independence and dramatically simplifies the organization and calculation of radiative corrections. This technique is to be presented in full detail in a forthcoming paper. BREM5 is, at this point, the only Monte Carlo to contain the entire set of one-loop corrections to electroweak four-fermion processes and renormalization scheme-independence. 3 figures

  12. BLINDAGE: A neutron and gamma-ray transport code for shieldings with the removal-diffusion technique coupled with the point-kernel technique

    International Nuclear Information System (INIS)

    Fanaro, L.C.C.B.

    1984-01-01

    The BLINDAGE computer code was developed for radiation transport (neutron and gamma-ray) calculations. The code uses the removal-diffusion method for neutron transport and the point-kernel technique with build-up factors for gamma rays. The results obtained with the BLINDAGE code are compared with those obtained with the ANISN and SABINE computer codes. (Author)

  13. Off-diagonal expansion quantum Monte Carlo.

    Science.gov (United States)

    Albash, Tameem; Wagenbreth, Gene; Hen, Itay

    2017-12-01

    We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.

  14. Monte Carlo simulation of Markov unreliability models

    International Nuclear Information System (INIS)

    Lewis, E.E.; Boehm, F.

    1984-01-01

    A Monte Carlo method is formulated for the evaluation of the unreliability of complex systems with known component failure and repair rates. The formulation is in terms of a Markov process, allowing dependencies between components to be modeled and computational efficiencies to be achieved in the Monte Carlo simulation. Two variance reduction techniques, forced transition and failure biasing, are employed to increase the computational efficiency of the random walk procedure. For an example problem these result in improved computational efficiency by more than three orders of magnitude over analog Monte Carlo. The method is generalized to treat problems with distributed failure and repair rate data, and a batching technique is introduced and shown to result in substantial increases in computational efficiency for an example problem. A method for separating the variance due to the data uncertainty from that due to the finite number of random walks is presented. (orig.)
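    The failure-biasing idea described above can be sketched for a toy two-component parallel system with no repair: failure times are sampled from an inflated rate, and each history carries the likelihood ratio of the true to the biased density. The rates, mission time, and bias factor below are illustrative assumptions; the full method in the record also uses forced transitions and handles repair:

    ```python
    import math
    import random

    random.seed(42)

    LAM = 0.001        # true per-component failure rate (assumed, per hour)
    T_MISSION = 100.0  # mission time (assumed)
    # Parallel pair, no repair: system fails only if both components fail.
    ANALYTIC = (1.0 - math.exp(-LAM * T_MISSION)) ** 2

    def biased_unreliability(n, lam_star=0.01):
        """Importance-sampled unreliability estimate ('failure biasing'):
        failure times are drawn from the inflated rate lam_star > LAM, and
        each history is weighted by the true/biased density ratio."""
        acc = 0.0
        for _ in range(n):
            w = 1.0
            failed = True
            for _component in range(2):
                t = random.expovariate(lam_star)
                w *= (LAM * math.exp(-LAM * t)) / (lam_star * math.exp(-lam_star * t))
                if t > T_MISSION:
                    failed = False
            if failed:
                acc += w
        return acc / n

    est = biased_unreliability(200_000)
    print(ANALYTIC, est)
    ```

    With the biased rate, most histories actually reach the rare both-failed state, and the weights keep the estimator unbiased; an analog simulation at the true rate would see a system failure in only about one history in a hundred here.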

  15. Shell model the Monte Carlo way

    International Nuclear Information System (INIS)

    Ormand, W.E.

    1995-01-01

    The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined

  16. Shell model the Monte Carlo way

    Energy Technology Data Exchange (ETDEWEB)

    Ormand, W.E.

    1995-03-01

    The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined.

  17. Hybrid SN/Monte Carlo research and results

    International Nuclear Information System (INIS)

    Baker, R.S.

    1993-01-01

    The neutral particle transport equation is solved by a hybrid method that iteratively couples regions where deterministic (SN) and stochastic (Monte Carlo) methods are applied. The Monte Carlo and SN regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The hybrid Monte Carlo/SN method provides a new means of solving problems involving both optically thick and optically thin regions, for which neither Monte Carlo nor SN alone is well suited. The hybrid method has been successfully applied to realistic shielding problems. The vectorized Monte Carlo algorithm in the hybrid method has been ported to the massively parallel architecture of the Connection Machine. Comparisons of performance on a vector machine (Cray Y-MP) and the Connection Machine (CM-2) show that significant speedups are obtainable for vectorized Monte Carlo algorithms on massively parallel machines, even when realistic problems requiring variance reduction are considered. However, the architecture of the Connection Machine does place some limitations on the regime in which the Monte Carlo algorithm may be expected to perform well

  18. Improving wind energy forecasts using an Ensemble Kalman Filter data assimilation technique in a fully coupled hydrologic and atmospheric model

    Science.gov (United States)

    Williams, J. L.; Maxwell, R. M.; Delle Monache, L.

    2012-12-01

    Wind power is rapidly gaining prominence as a major source of renewable energy. Harnessing this promising energy source is challenging because of the chaotic nature of wind and its propensity to change speed and direction over short time scales. Accurate forecasting tools are critical to support the integration of wind energy into power grids and to maximize its impact on renewable energy portfolios. Numerous studies have shown that soil moisture distribution and land surface vegetative processes profoundly influence atmospheric boundary layer development and weather processes on local and regional scales. Using the PF.WRF model, a fully-coupled hydrologic and atmospheric model employing the ParFlow hydrologic model with the Weather Research and Forecasting model coupled via mass and energy fluxes across the land surface, we have explored the connections between the land surface and the atmosphere in terms of land surface energy flux partitioning and coupled variable fields including hydraulic conductivity, soil moisture and wind speed, and demonstrated that reductions in uncertainty in these coupled fields propagate through the hydrologic and atmospheric system. We have adapted the Data Assimilation Research Testbed (DART), an implementation of the robust Ensemble Kalman Filter data assimilation algorithm, to expand our capability to nudge forecasts produced with the PF.WRF model using observational data. Using a semi-idealized simulation domain, we examine the effects of assimilating observations of variables such as wind speed and temperature collected in the atmosphere, and land surface and subsurface observations such as soil moisture on the quality of forecast outputs. The sensitivities we find in this study will enable further studies to optimize observation collection to maximize the utility of the PF.WRF-DART forecasting system.
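    The analysis step at the heart of the Ensemble Kalman Filter can be sketched in its simplest, scalar form (this is a generic perturbed-observation EnKF update for a directly observed scalar state, not the DART/PF.WRF implementation; all numbers are illustrative):

    ```python
    import random

    def enkf_update(ensemble, obs, obs_err_std, rng):
        # Perturbed-observation Ensemble Kalman Filter analysis step for a scalar
        # state that is observed directly (observation operator H = 1).
        n = len(ensemble)
        mean = sum(ensemble) / n
        var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)   # ensemble-estimated background variance P
        gain = var / (var + obs_err_std ** 2)                    # Kalman gain K = P / (P + R)
        # Each member assimilates an independently perturbed copy of the observation,
        # which keeps the analysis-ensemble spread statistically consistent.
        return [x + gain * ((obs + rng.gauss(0.0, obs_err_std)) - x) for x in ensemble]

    rng = random.Random(7)
    prior = [rng.gauss(0.0, 1.0) for _ in range(500)]   # background forecast ensemble
    posterior = enkf_update(prior, 2.0, 0.1, rng)       # an accurate observation pulls the ensemble
    ```

    In a real PF.WRF/DART setup the state is high-dimensional (soil moisture, winds, temperature) and the observation operator nontrivial; DART implements the multivariate analogue of this update, but the nudge-toward-observations mechanism is the same.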

  19. Estimation of ex-core detector responses by adjoint Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands)

    2006-07-01

    Ex-core detector responses can be efficiently calculated by combining an adjoint Monte Carlo calculation with the converged source distribution of a forward Monte Carlo calculation. As the fission source distribution from a Monte Carlo calculation is given only as a collection of discrete space positions, the coupling requires a point flux estimator for each collision in the adjoint calculation. To avoid the infinite variance problems of the point flux estimator, a next-event finite-variance point flux estimator has been applied, which is an energy-dependent form, for heterogeneous media, of a finite-variance estimator known from the literature. To test the effects of this combined adjoint-forward calculation a simple geometry of a homogeneous core with a reflector was adopted with a small detector in the reflector. To demonstrate the potential of the method the continuous-energy adjoint Monte Carlo technique with anisotropic scattering was implemented with energy dependent absorption and fission cross sections and constant scattering cross section. A gain in efficiency over a completely forward calculation of the detector response was obtained, which is strongly dependent on the specific system and especially the size and position of the ex-core detector and the energy range considered. Further improvements are possible. The method works without problems for small detectors, even for a point detector and a small or even zero energy range. (authors)

  20. Reliable method for fission source convergence of Monte Carlo criticality calculation with Wielandt's method

    International Nuclear Information System (INIS)

    Yamamoto, Toshihiro; Miyoshi, Yoshinori

    2004-01-01

    A new algorithm of Monte Carlo criticality calculations for implementing Wielandt's method, one of the acceleration techniques for deterministic source iteration methods, is developed, and the algorithm has been successfully implemented into the MCNP code. In this algorithm, some of the fission neutrons emitted during the random walk process are tracked within the current cycle, and thus the fission source distribution used in the next cycle spreads more widely. Applying this method intensifies the neutron interaction effect even in a loosely-coupled array where conventional Monte Carlo criticality methods have difficulties, and a converged fission source distribution can be obtained with fewer cycles. Computing time spent for one cycle, however, increases because of the tracking of fission neutrons within the current cycle, which eventually results in an increase of total computing time up to convergence. In addition, statistical fluctuations of the fission source distribution in a cycle are worsened by applying Wielandt's method to Monte Carlo criticality calculations. However, since fission source convergence is attained with fewer source iterations, a reliable determination of convergence can easily be made even in a system with slow convergence. This acceleration method is expected to contribute to the prevention of incorrect Monte Carlo criticality calculations. (author)
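    Why a Wielandt shift reduces the number of source iterations can be seen on a deterministic toy eigenproblem (the 2x2 "fission matrix" below is a hypothetical stand-in, not from the paper): the plain iteration converges at the dominance ratio lambda2/lambda1, while the shifted iteration converges at |(lambda1 - shift)/(lambda2 - shift)|, which is much smaller when the shift is chosen slightly above lambda1.

    ```python
    def solve2(A, b):
        # 2x2 linear solve by Cramer's rule.
        det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
        return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
                (A[0][0] * b[1] - b[0] * A[1][0]) / det]

    def power_iteration(A, tol=1e-10, max_iter=10_000):
        # Plain source iteration: dominance ratio lambda2/lambda1 controls convergence.
        x, lam_old = [1.0, 1.0], 0.0
        for it in range(1, max_iter + 1):
            y = [A[0][0] * x[0] + A[0][1] * x[1], A[1][0] * x[0] + A[1][1] * x[1]]
            lam = y[0] / x[0]           # eigenvalue estimate (x normalized so x[0] = 1)
            x = [1.0, y[1] / y[0]]
            if abs(lam - lam_old) < tol:
                return lam, it
            lam_old = lam
        return lam, max_iter

    def wielandt_iteration(A, shift, tol=1e-10, max_iter=10_000):
        # Shifted (Wielandt) iteration: solve (A - shift*I) y = x each cycle.
        As = [[A[0][0] - shift, A[0][1]], [A[1][0], A[1][1] - shift]]
        x, lam_old = [1.0, 1.0], 0.0
        for it in range(1, max_iter + 1):
            y = solve2(As, x)
            nu = y[0] / x[0]            # dominant eigenvalue of (A - shift*I)^-1
            lam = shift + 1.0 / nu      # recover the eigenvalue of A
            x = [1.0, y[1] / y[0]]
            if abs(lam - lam_old) < tol:
                return lam, it
            lam_old = lam
        return lam, max_iter

    A = [[0.9, 0.4], [0.1, 0.6]]        # eigenvalues 1.0 and 0.5 (dominance ratio 0.5)
    lam_plain, it_plain = power_iteration(A)
    lam_shift, it_shift = wielandt_iteration(A, shift=1.2)
    ```

    In the Monte Carlo setting the shift corresponds to tracking part of the fission chain within the current cycle; the toy example only illustrates why the dominance ratio, and hence the number of cycles to source convergence, improves.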

  1. Auxiliary-Field Quantum Monte Carlo Simulations of Strongly-Correlated Molecules and Solids

    Energy Technology Data Exchange (ETDEWEB)

    Chang, C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Morales, M. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-11-10

    We propose a method of implementing projected wave functions for second-quantized auxiliary-field quantum Monte Carlo (AFQMC) techniques. The method is based on expressing the two-body projector as one-body terms coupled to binary Ising fields. To benchmark the method, we choose to study the two-dimensional (2D) one-band Hubbard model with repulsive interactions using the constrained-path MC (CPMC). The CPMC uses a trial wave function to guide the random walks so that the so-called fermion sign problem can be eliminated. The trial wave function also serves as the importance function in Monte Carlo sampling. As such, the quality of the trial wave function has a direct impact on the efficiency and accuracy of the simulations.
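    The core identity behind "two-body projector as one-body terms coupled to binary Ising fields" can be checked numerically. Below is the standard discrete (Hirsch-type) Hubbard-Stratonovich transformation for an on-site Hubbard repulsion, with the time step and interaction strength chosen arbitrarily for illustration (this is the textbook identity, not necessarily the exact decomposition used in the paper):

    ```python
    import math

    # Discrete Hubbard-Stratonovich transformation: the two-body factor
    #   exp(-dtau*U*(n_up - 1/2)*(n_dn - 1/2))
    # is rewritten as an average over a binary Ising field s = +/-1 coupled
    # linearly (i.e. one-body) to the occupations:
    #   (1/2) * exp(-dtau*U/4) * sum_s exp(lam*s*(n_up - n_dn)),
    # with cosh(lam) = exp(dtau*U/2).
    dtau, U = 0.05, 4.0                      # illustrative time step and interaction
    lam = math.acosh(math.exp(dtau * U / 2.0))

    def two_body(n_up, n_dn):
        return math.exp(-dtau * U * (n_up - 0.5) * (n_dn - 0.5))

    def hs_decomposed(n_up, n_dn):
        return 0.5 * math.exp(-dtau * U / 4.0) * sum(
            math.exp(lam * s * (n_up - n_dn)) for s in (+1, -1))

    # The identity holds exactly on every occupation state of the site.
    checks = [(n_up, n_dn, two_body(n_up, n_dn), hs_decomposed(n_up, n_dn))
              for n_up in (0, 1) for n_dn in (0, 1)]
    ```

    Summing over the Ising field is what turns the interacting problem into a stochastic average over one-body propagators, which is the starting point of any AFQMC scheme.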

  2. Tripoli-4, a three-dimensional poly-kinetic particle transport Monte-Carlo code

    International Nuclear Information System (INIS)

    Both, J.P.; Lee, Y.K.; Mazzolo, A.; Peneliau, Y.; Petit, O.; Roesslinger, B.; Soldevila, M.

    2003-01-01

    In this update of the Monte-Carlo transport code Tripoli-4, we list and describe its current main features. The code computes coupled neutron-photon propagation as well as the electron-photon cascade shower. While providing the user with common biasing techniques, it also implements an automatic weighting scheme. Tripoli-4 enables the user to compute the following physical quantities: a flux, a multiplication factor, a current, a reaction rate, a dose equivalent rate as well as deposit of energy and recoil energies. For each physical quantity of interest, a Monte-Carlo simulation offers different types of estimators. Tripoli-4 has support for execution in parallel mode. Special features and applications are also presented

  3. Auxiliary-Field Quantum Monte Carlo Simulations of Strongly-Correlated Molecules and Solids

    International Nuclear Information System (INIS)

    Chang, C.; Morales, M. A.

    2016-01-01

    We propose a method of implementing projected wave functions for second-quantized auxiliary-field quantum Monte Carlo (AFQMC) techniques. The method is based on expressing the two-body projector as one-body terms coupled to binary Ising fields. To benchmark the method, we choose to study the two-dimensional (2D) one-band Hubbard model with repulsive interactions using the constrained-path MC (CPMC). The CPMC uses a trial wave function to guide the random walks so that the so-called fermion sign problem can be eliminated. The trial wave function also serves as the importance function in Monte Carlo sampling. As such, the quality of the trial wave function has a direct impact on the efficiency and accuracy of the simulations.

  4. A numerical analysis of antithetic variates in Monte Carlo radiation transport with geometrical surface splitting

    International Nuclear Information System (INIS)

    Sarkar, P.K.; Prasad, M.A.

    1989-01-01

    A numerical study for effective implementation of the antithetic variates technique with geometric splitting/Russian roulette in Monte Carlo radiation transport calculations is presented. The study is based on the theory of Monte Carlo errors where a set of coupled integral equations are solved for the first and second moments of the score and for the expected number of flights per particle history. Numerical results are obtained for particle transmission through an infinite homogeneous slab shield composed of an isotropically scattering medium. Two types of antithetic transformations are considered. The results indicate that the antithetic transformations always lead to reduction in variance and increase in efficiency provided optimal antithetic parameters are chosen. A substantial gain in efficiency is obtained by incorporating antithetic transformations in rule-of-thumb splitting. The advantage gained for thick slabs (∼20 mfp) with low scattering probability (0.1-0.5) is attractively large. (author). 27 refs., 9 tabs
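    The basic antithetic transformation pairs each random number u with 1-u; for a monotone integrand the two scores are negatively correlated, so the variance of the pair average drops below that of two independent samples. A minimal illustration on a smooth integrand (integrating e^u over [0,1], not the slab-transmission problem of the paper; sample counts are arbitrary):

    ```python
    import math
    import random

    def plain_estimate(f, n_pairs, rng):
        # 2*n_pairs independent samples, averaged in pairs for a fair comparison.
        scores = [0.5 * (f(rng.random()) + f(rng.random())) for _ in range(n_pairs)]
        mean = sum(scores) / n_pairs
        var = sum((s - mean) ** 2 for s in scores) / (n_pairs - 1)
        return mean, var / n_pairs              # estimate and variance of the estimate

    def antithetic_estimate(f, n_pairs, rng):
        # Each u is paired with its antithetic partner 1 - u.
        scores = []
        for _ in range(n_pairs):
            u = rng.random()
            scores.append(0.5 * (f(u) + f(1.0 - u)))
        mean = sum(scores) / n_pairs
        var = sum((s - mean) ** 2 for s in scores) / (n_pairs - 1)
        return mean, var / n_pairs

    rng = random.Random(2024)
    exact = math.e - 1.0                        # integral of e^u over [0, 1]
    m_plain, v_plain = plain_estimate(math.exp, 20_000, rng)
    m_anti, v_anti = antithetic_estimate(math.exp, 20_000, rng)
    ```

    Both estimators are unbiased; the antithetic one simply has a smaller variance per unit work, which echoes the paper's finding that the transformation pays off provided the antithetic parameters are chosen well.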

  5. Tripoli-4, a three-dimensional poly-kinetic particle transport Monte-Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Both, J P; Lee, Y K; Mazzolo, A; Peneliau, Y; Petit, O; Roesslinger, B; Soldevila, M [CEA Saclay, Dir. de l' Energie Nucleaire (DEN/DM2S/SERMA/LEPP), 91 - Gif sur Yvette (France)

    2003-07-01

    In this update of the Monte-Carlo transport code Tripoli-4, we list and describe its current main features. The code computes coupled neutron-photon propagation as well as the electron-photon cascade shower. While providing the user with common biasing techniques, it also implements an automatic weighting scheme. Tripoli-4 enables the user to compute the following physical quantities: a flux, a multiplication factor, a current, a reaction rate, a dose equivalent rate as well as deposit of energy and recoil energies. For each physical quantity of interest, a Monte-Carlo simulation offers different types of estimators. Tripoli-4 has support for execution in parallel mode. Special features and applications are also presented.

  6. Closed-shell variational quantum Monte Carlo simulation for the ...

    African Journals Online (AJOL)

    Closed-shell variational quantum Monte Carlo simulation for the electric dipole moment calculation of hydrazine molecule using casino-code. ... Nigeria Journal of Pure and Applied Physics ... The variational quantum Monte Carlo (VQMC) technique used in this work employed the restricted Hartree-Fock (RHF) scheme.

  7. Exponential convergence on a continuous Monte Carlo transport problem

    International Nuclear Information System (INIS)

    Booth, T.E.

    1997-01-01

    For more than a decade, it has been known that exponential convergence on discrete transport problems was possible using adaptive Monte Carlo techniques. An adaptive Monte Carlo method that empirically produces exponential convergence on a simple continuous transport problem is described

  8. Neutron point-flux calculation by Monte Carlo

    International Nuclear Information System (INIS)

    Eichhorn, M.

    1986-04-01

    A survey of the usual methods for estimating flux at a point is given. The associated variance-reducing techniques in direct Monte Carlo games are explained. The multigroup Monte Carlo codes MC for critical systems and PUNKT for point-source/point-detector systems are presented, and problems in applying the codes to practical tasks are discussed. (author)

  9. Neutron flux calculation by means of Monte Carlo methods

    International Nuclear Information System (INIS)

    Barz, H.U.; Eichhorn, M.

    1988-01-01

    In this report a survey of modern neutron flux calculation procedures by means of Monte Carlo methods is given. Owing to progress in the development of variance reduction techniques and improvements in computational techniques, this method is of increasing importance. The basic ideas in the application of Monte Carlo methods are briefly outlined. In more detail, various possibilities of non-analog games and estimation procedures are presented, and problems in optimizing the variance reduction techniques are discussed. In the last part some important international Monte Carlo codes and the authors' own codes are listed and special applications are described. (author)

  10. Monte Carlo simulation of gas Cerenkov detectors

    International Nuclear Information System (INIS)

    Mack, J.M.; Jain, M.; Jordan, T.M.

    1984-01-01

    Theoretical study of selected gamma-ray and electron diagnostics necessitates coupling Cerenkov radiation to electron/photon cascades. A Cerenkov production model and its incorporation into a general-geometry Monte Carlo coupled electron/photon transport code are discussed. A special optical photon ray-trace is implemented using bulk optical properties assigned to each Monte Carlo zone. Good agreement exists between experimental and calculated Cerenkov data in the case of a carbon-dioxide gas Cerenkov detector experiment. Cerenkov production and threshold data are presented for a typical carbon-dioxide gas detector that converts a 16.7 MeV photon source to Cerenkov light, which is collected by optics and detected by a photomultiplier
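    The threshold behaviour follows directly from the Cerenkov condition beta > 1/n. A quick back-of-the-envelope check for electrons in CO2 (the refractive index used below is an assumed nominal value for CO2 gas near atmospheric pressure, not a value taken from the record):

    ```python
    import math

    def cerenkov_threshold_kinetic_energy_mev(n):
        # Cerenkov light is emitted when the particle speed exceeds the phase
        # velocity of light in the medium, beta > 1/n; at threshold
        # gamma = 1 / sqrt(1 - 1/n^2).
        gamma_th = 1.0 / math.sqrt(1.0 - 1.0 / n ** 2)
        m_e = 0.511  # electron rest energy in MeV
        return m_e * (gamma_th - 1.0)

    # Assumed nominal refractive index of CO2 gas near atmospheric pressure.
    ke_threshold = cerenkov_threshold_kinetic_energy_mev(1.00045)
    ```

    This gives a threshold of roughly 16-17 MeV, consistent with the 16.7 MeV photons discussed in the record; raising the gas pressure raises n and lowers the threshold, which is how such detectors are tuned.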

  11. Extensions of the MCNP5 and TRIPOLI4 Monte Carlo codes for transient reactor analysis

    International Nuclear Information System (INIS)

    Hoogenboom, J.E.

    2013-01-01

    To simulate reactor transients for safety analysis with the Monte Carlo method, the generation and decay of delayed neutron precursors is implemented in the MCNP5 and TRIPOLI4 general purpose Monte Carlo codes. Important new variance reduction techniques like forced decay of precursors in each time interval and the branch-less collision method are included to obtain reasonable statistics for the power production per time interval. For simulation of practical reactor transients, the feedback effect from the thermal-hydraulics must also be included. This requires the coupling of the Monte Carlo code with a thermal-hydraulics (TH) code, providing the temperature distribution in the reactor, which affects the neutron transport via the cross section data. The TH code also provides the coolant density distribution in the reactor, directly influencing the neutron transport. Different techniques for this coupling are discussed. As a demonstration, a 3*3 mini fuel assembly with a moving control rod is considered for MCNP5, and a mini core consisting of 3*3 PWR fuel assemblies with control rods and burnable poisons for TRIPOLI4. Results are shown for reactor transients due to control rod movement or withdrawal. The TRIPOLI4 transient calculation is started at low power and includes thermal-hydraulic feedback. The power rises about 10 decades and the reactor power finally stabilises at a much higher level than the initial one. The examples demonstrate that the modified Monte Carlo codes are capable of performing correct transient calculations, taking into account all geometrical and cross section detail. (authors)

  12. Monte Carlo method in neutron activation analysis

    International Nuclear Information System (INIS)

    Majerle, M.; Krasa, A.; Svoboda, O.; Wagner, V.; Adam, J.; Peetermans, S.; Slama, O.; Stegajlov, V.I.; Tsupko-Sitnikov, V.M.

    2009-01-01

    Neutron activation detectors are a useful technique for neutron flux measurements in spallation experiments. A study of the usefulness and accuracy of this method in such experiments was performed with the help of the Monte Carlo codes MCNPX and FLUKA

  13. Monte Carlo simulation of the microcanonical ensemble

    International Nuclear Information System (INIS)

    Creutz, M.

    1984-01-01

    We consider simulating statistical systems with a random walk on a constant energy surface. This combines features of deterministic molecular dynamics techniques and conventional Monte Carlo simulations. For discrete systems the method can be programmed to run an order of magnitude faster than other approaches. It does not require high quality random numbers and may also be useful for nonequilibrium studies. 10 references
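    The constant-energy random walk can be sketched with Creutz's "demon" variant for a 1-D Ising chain: a single extra degree of freedom (the demon) carries the energy mismatch of each proposed spin flip, so the total energy of spins plus demon is exactly conserved. System size, sweep count, and the initial demon energy below are illustrative:

    ```python
    import random

    def ising_energy(spins):
        # Nearest-neighbour 1-D Ising energy with periodic boundaries, J = 1.
        n = len(spins)
        return -sum(spins[i] * spins[(i + 1) % n] for i in range(n))

    def demon_sweeps(spins, demon, n_sweeps, rng):
        # Microcanonical (demon) dynamics: a flip is accepted only if the demon
        # can pay for it, so spin energy + demon energy is exactly conserved
        # and the demon's energy never goes negative.
        n = len(spins)
        for _ in range(n_sweeps):
            for _ in range(n):
                i = rng.randrange(n)
                dE = 2 * spins[i] * (spins[(i - 1) % n] + spins[(i + 1) % n])
                if dE <= demon:
                    spins[i] = -spins[i]
                    demon -= dE
        return spins, demon

    rng = random.Random(5)
    spins = [1] * 64                       # start in the ground state
    demon0 = 16                            # demon carries all the thermal energy initially
    e0 = ising_energy(spins) + demon0
    spins, demon = demon_sweeps(spins, demon0, 200, rng)
    e1 = ising_energy(spins) + demon
    ```

    No random accept/reject probabilities and hence no high-quality uniform deviates are needed for the acceptance step, which is one reason the method runs fast on discrete systems; in equilibrium the demon's energy histogram is exponential, exp(-E_d/T), so averaging it reads off the temperature.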

  14. Monte Carlo codes and Monte Carlo simulator program

    International Nuclear Information System (INIS)

    Higuchi, Kenji; Asai, Kiyoshi; Suganuma, Masayuki.

    1990-03-01

    Four typical Monte Carlo codes, KENO-IV, MORSE, MCNP and VIM, have been vectorized on the VP-100 at the Computing Center, JAERI. The problems in vector processing of Monte Carlo codes on vector processors have become clear through this work. As a result, it is recognized that there are difficulties in obtaining good performance in vector processing of Monte Carlo codes. A Monte Carlo computing machine, which processes Monte Carlo codes with high performance, has been under development at our Computing Center since 1987. The concept of the Monte Carlo computing machine and its performance have been investigated and estimated by using a software simulator. In this report the problems in vectorization of Monte Carlo codes, the Monte Carlo pipelines proposed to mitigate these difficulties, and the results of the performance estimation of the Monte Carlo computing machine by the simulator are described. (author)

  15. Vectorized Monte Carlo

    International Nuclear Information System (INIS)

    Brown, F.B.

    1981-01-01

    Examination of the global algorithms and local kernels of conventional general-purpose Monte Carlo codes shows that multigroup Monte Carlo methods have sufficient structure to permit efficient vectorization. A structured multigroup Monte Carlo algorithm for vector computers is developed in which many particle events are treated at once on a cell-by-cell basis. Vectorization of kernels for tracking and variance reduction is described, and a new method for discrete sampling is developed to facilitate the vectorization of collision analysis. To demonstrate the potential of the new method, a vectorized Monte Carlo code for multigroup radiation transport analysis was developed. This code incorporates many features of conventional general-purpose production codes, including general geometry, splitting and Russian roulette, survival biasing, variance estimation via batching, a number of cutoffs, and generalized tallies of collision, tracklength, and surface crossing estimators with response functions. Predictions of vectorized performance characteristics for the CYBER-205 were made using emulated coding and a dynamic model of vector instruction timing. Computation rates were examined for a variety of test problems to determine sensitivities to batch size and vector lengths. Significant speedups are predicted for even a few hundred particles per batch, and asymptotic speedups of about 40 over equivalent Amdahl 470V/8 scalar codes are predicted for a few thousand particles per batch. The principal conclusion is that vectorization of a general-purpose multigroup Monte Carlo code is well worth the significant effort required for stylized coding and major algorithmic changes
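    The "many particle events treated at once" idea maps naturally onto array programming. A minimal event-based sketch (a 1-D slab with exponential free flights and isotropic scattering; the geometry and cross sections are illustrative, and NumPy boolean masks stand in for the vector hardware of the paper):

    ```python
    import math

    import numpy as np

    def slab_transmission(sigma_t, scatter_prob, thickness, n, rng):
        # Event-based (vectorized) Monte Carlo: each operation below processes the
        # whole batch of surviving particles at once through boolean index masks,
        # instead of following one history at a time.
        x = np.zeros(n)                          # positions in the slab
        mu = np.ones(n)                          # direction cosines (normally incident)
        alive = np.ones(n, dtype=bool)
        transmitted = np.zeros(n, dtype=bool)
        while alive.any():
            idx = np.flatnonzero(alive)
            # free-flight distances for every surviving particle at once
            step = rng.exponential(1.0 / sigma_t, size=idx.size)
            x[idx] += mu[idx] * step
            out_right = x[idx] > thickness
            out_left = x[idx] < 0.0
            transmitted[idx[out_right]] = True
            alive[idx[out_right | out_left]] = False
            # collision analysis, again for the whole surviving batch
            inside = idx[~(out_right | out_left)]
            absorbed = rng.random(inside.size) >= scatter_prob
            alive[inside[absorbed]] = False
            survivors = inside[~absorbed]
            mu[survivors] = rng.uniform(-1.0, 1.0, size=survivors.size)  # isotropic in mu
        return transmitted.mean()

    rng = np.random.default_rng(1)
    t_absorbing = slab_transmission(1.0, 0.0, 2.0, 200_000, rng)   # no scattering
    t_scattering = slab_transmission(1.0, 0.5, 2.0, 50_000, rng)   # 50% scattering albedo
    analytic = math.exp(-2.0)    # uncollided transmission through 2 mean free paths
    ```

    With no scattering, the batch estimate reproduces the analytic attenuation exp(-sigma_t * thickness); the shrinking `alive` mask is the array-programming analogue of the shrinking vector lengths discussed in the abstract.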

  16. CERN honours Carlo Rubbia

    CERN Document Server

    2009-01-01

    Carlo Rubbia turned 75 on March 31, and CERN held a symposium to mark his birthday and pay tribute to his impressive contribution to both CERN and science. Carlo Rubbia, 4th from right, together with the speakers at the symposium. On 7 April CERN hosted a celebration marking Carlo Rubbia’s 75th birthday and 25 years since he was awarded the Nobel Prize for Physics. "Today we will celebrate 100 years of Carlo Rubbia" joked CERN’s Director-General, Rolf Heuer, in his opening speech, "75 years of his age and 25 years of the Nobel Prize." Rubbia received the Nobel Prize along with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. During the symposium, which was held in the Main Auditorium, several eminent speakers gave lectures on areas of science to which Carlo Rubbia made decisive contributions. Among those who spoke were Michel Spiro, Director of the French National Insti...

  17. Coupling spectroscopic and chromatographic techniques for evaluation of the depositional history of hydrocarbons in a subtropical estuary

    International Nuclear Information System (INIS)

    Martins, César C.; Doumer, Marta E.; Gallice, Wellington C.; Dauner, Ana Lúcia L.; Cabral, Ana Caroline; Cardoso, Fernanda D.

    2015-01-01

    Spectroscopic and chromatographic techniques can be used together to evaluate hydrocarbon inputs to coastal environments such as the Paranaguá estuarine system (PES), located in the SW Atlantic, Brazil. Historical inputs of aliphatic hydrocarbons (AHs) and polycyclic aromatic hydrocarbons (PAHs) were analyzed using two sediment cores from the PES. The AHs were related to the presence of biogenic organic matter and degraded oil residues. The PAHs were associated with mixed sources. The highest hydrocarbon concentrations were related to oil spills, while relatively low levels could be attributed to the decrease in oil usage during the global oil crisis. The results of electron paramagnetic resonance were in agreement with the absolute AHs and PAHs concentrations measured by chromatographic techniques, while near-infrared spectroscopy results were consistent with unresolved complex mixture (UCM)/total n-alkanes ratios. These findings suggest that the use of a combination of techniques can increase the accuracy of assessment of contamination in sediments. - Highlights: • Historical inputs of hydrocarbons in a subtropical estuary were evaluated. • Spectroscopic and chromatographic methods were used in combination. • High hydrocarbon concentrations were related to anthropogenic activities. • Low hydrocarbon levels could be explained by the 1970s global oil crisis. - Spectroscopic and chromatographic techniques could be used together to evaluate hydrocarbon inputs to coastal environments

  18. The potential of organic (electrospray- and atmospheric pressure chemical ionisation) mass spectrometric techniques coupled to liquid-phase separation for speciation analysis.

    Science.gov (United States)

    Rosenberg, Erwin

    2003-06-06

    The use of mass spectrometry based on atmospheric pressure ionisation techniques (atmospheric pressure chemical ionisation, APCI, and electrospray ionisation, ESI) for speciation analysis is reviewed with emphasis on the literature published in and after 1999. This report accounts for the increasing interest that atmospheric pressure ionisation techniques, and in particular ESI, have found in the past years for qualitative and quantitative speciation analysis. In contrast to element-selective detectors, organic mass spectrometric techniques provide information on the intact metal species which can be used for the identification of unknown species (particularly with MS-MS detection) or the confirmation of the actual presence of species in a given sample. Due to the complexity of real samples, it is inevitable in all but the simplest cases to couple atmospheric pressure MS detection to a separation technique. Separation in the liquid phase (capillary electrophoresis or liquid chromatography in reversed phase, ion chromatographic or size-exclusion mode) is particularly suitable since the available techniques cover a very wide range of analyte polarities and molecular mass. Moreover, derivatisation can normally be avoided in liquid-phase separation. Particularly in complex environmental or biological samples, separation in one dimension is not sufficient for obtaining adequate resolution for all relevant species. In this case, multi-dimensional separation, based on orthogonal separation techniques, has proven successful. ESI-MS is also often used in parallel with inductively coupled plasma MS detection. This review is structured in two parts. In the first, the fundamentals of atmospheric pressure ionisation techniques are briefly reviewed. The second part of the review discusses recent applications including redox species, use of ESI-MS for structural elucidation of metal complexes, characterisation and quantification of small organometallic species with relevance to

  19. Statistics of Monte Carlo methods used in radiation transport calculation

    International Nuclear Information System (INIS)

    Datta, D.

    2009-01-01

    Radiation transport calculations can be carried out using either deterministic or statistical methods. Radiation transport calculation based on statistical methods is the basic theme of the Monte Carlo methods. The aim of this lecture is to describe the fundamental statistics required to build the foundations of the Monte Carlo technique for radiation transport calculation. The lecture note is organized as follows. Section (1) introduces basic Monte Carlo and its classification in the respective fields. Section (2) describes random sampling methods, a key component of Monte Carlo radiation transport calculation. Section (3) presents the statistical uncertainty of Monte Carlo estimates, and Section (4) briefly describes the importance of variance reduction techniques when sampling particles such as photons or neutrons in the process of radiation transport
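    The statistics in Section (3) reduce to the sample mean and its standard error, which shrinks as 1/sqrt(N). A minimal illustration with the classic hit-or-miss estimate of pi (the sample sizes are arbitrary):

    ```python
    import math
    import random

    def mc_pi(n, rng):
        # Hit-or-miss Monte Carlo: each history scores 1 if a random point in the
        # unit square lands inside the quarter circle, so the score is Bernoulli.
        hits = sum(1 for _ in range(n) if rng.random() ** 2 + rng.random() ** 2 < 1.0)
        p = hits / n
        # Standard error of a Bernoulli sample mean: sqrt(p*(1-p)/n).
        return 4.0 * p, 4.0 * math.sqrt(p * (1.0 - p) / n)

    rng = random.Random(11)
    est_small, se_small = mc_pi(10_000, rng)
    est_large, se_large = mc_pi(160_000, rng)   # 16x the histories -> ~4x smaller error bar
    ```

    The quoted uncertainty is itself a Monte Carlo estimate; variance reduction techniques (Section 4) aim to shrink the per-history variance so that fewer histories are needed for the same error bar.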

  20. Discrimination of Inrush from Fault Currents in Power Transformers Based on Equivalent Instantaneous Inductance Technique Coupled with Finite Element Method

    Directory of Open Access Journals (Sweden)

    M. Jamali

    2011-09-01

    Full Text Available The phenomenon of magnetizing inrush is a transient condition that occurs primarily when a transformer is energized. The magnitude of the inrush current may be as high as ten or more times the transformer's rated current, which can cause malfunction of the protection system. Thus, for safe operation of a transformer, it is necessary to distinguish inrush current from fault currents. In this paper, an equivalent instantaneous inductance (EII) technique is used to discriminate inrush current from fault currents. For this purpose, a three-phase power transformer has been simulated in the finite-element software Maxwell. This three-phase power transformer has been used to simulate different conditions. Then, the results have been used as inputs to a MATLAB program implementing the equivalent instantaneous inductance technique. The results show that in the case of inrush current the equivalent instantaneous inductance shows drastic variation, while it is almost constant under fault conditions.

  1. Application of sensitivity analysis to a simplified coupled neutronic thermal-hydraulics transient in a fast reactor using Adjoint techniques

    International Nuclear Information System (INIS)

    Gilli, L.; Lathouwers, D.; Kloosterman, J.L.; Van der Hagen, T.H.J.J.

    2011-01-01

    In this paper a method to perform sensitivity analysis for a simplified multi-physics problem is presented. The method is based on the Adjoint Sensitivity Analysis Procedure, which is used to apply first order perturbation theory to linear and nonlinear problems using adjoint techniques. The multi-physics problem considered includes a neutronic, a thermo-kinetics, and a thermal-hydraulics part, and it is used to model the time dependent behavior of a sodium cooled fast reactor. The adjoint procedure is applied to calculate the sensitivity coefficients with respect to the kinetic parameters of the problem for two reference transients using two different model responses; the results obtained are then compared with the values given by direct sampling of the forward nonlinear problem. Our first results show that, thanks to modern numerical techniques, the procedure is relatively easy to implement and provides good estimates for most perturbations, making the method appealing for more detailed problems. (author)
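    The adjoint trick in its simplest form: for a linear model A x = b and a response R = c.x, one adjoint solve A^T y = c yields the sensitivity of R to every source perturbation at once (dR/db_k = y_k), which can then be checked against direct recomputation of the perturbed forward problem, mirroring the adjoint-versus-direct-sampling comparison in the paper. The 2x2 system below is a hypothetical stand-in for the coupled model:

    ```python
    def solve2(A, b):
        # 2x2 linear solve by Cramer's rule.
        det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
        return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
                (A[0][0] * b[1] - b[0] * A[1][0]) / det]

    A = [[4.0, 1.0], [2.0, 3.0]]         # hypothetical (nonsymmetric) forward operator
    b = [1.0, 2.0]                       # source terms
    c = [1.0, 1.0]                       # response functional: R = c . x

    x = solve2(A, b)                     # forward solution
    R = c[0] * x[0] + c[1] * x[1]

    At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]
    y = solve2(At, c)                    # single adjoint solve: A^T y = c
    adjoint_sens = y                     # dR/db_k = y_k, all parameters at once

    # Direct check: re-solve the perturbed forward problem once per parameter.
    eps = 1e-6
    direct_sens = []
    for k in range(2):
        bp = list(b)
        bp[k] += eps
        xp = solve2(A, bp)
        Rp = c[0] * xp[0] + c[1] * xp[1]
        direct_sens.append((Rp - R) / eps)
    ```

    The cost contrast is the point: the direct approach needs one forward solve per parameter, while the adjoint approach needs one extra solve per response, which is why it scales to problems with many kinetic parameters.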

  2. Monte Carlo strategies in scientific computing

    CERN Document Server

    Liu, Jun S

    2008-01-01

    This paperback edition is a reprint of the 2001 Springer edition. This book provides a self-contained and up-to-date treatment of the Monte Carlo method and develops a common framework under which various Monte Carlo techniques can be "standardized" and compared. Given the interdisciplinary nature of the topics and a moderate prerequisite for the reader, this book should be of interest to a broad audience of quantitative researchers such as computational biologists, computer scientists, econometricians, engineers, probabilists, and statisticians. It can also be used as the textbook for a graduate-level course on Monte Carlo methods. Many problems discussed in the later chapters can be potential thesis topics for masters’ or PhD students in statistics or computer science departments. Jun Liu is Professor of Statistics at Harvard University, with a courtesy Professor appointment at the Harvard Biostatistics Department. Professor Liu was the recipient of the 2002 COPSS Presidents' Award, the most prestigious one for sta...

  3. [Online enrichment ability of restricted-access column coupled with high performance liquid chromatography by column switching technique for benazepril hydrochloride].

    Science.gov (United States)

    Zhang, Xiaohui; Wang, Rong; Xie, Hua; Yin, Qiang; Li, Xiaoyun; Jia, Zhengping; Wu, Xiaoyu; Zhang, Juanhong; Li, Wenbin

    2013-05-01

    The online enrichment ability of the restricted-access media (RAM) column coupled with high performance liquid chromatography by column switching technique for benazepril hydrochloride in plasma was studied. The RAM-HPLC system consisted of an RAM column as enrichment column and a C18 column as analytical column coupled via the column switching technique. The effects of the injection volume on the peak area and the systematic pressure were studied. When the injection volume was less than 100 microL, the peak area increased with the increase of the injection volume. However, when the injection volume was more than 80 microL, the pressure of whole system increased obviously. In order to protect the whole system, 80 microL was chosen as the maximum injection volume. The peak areas of ordinary injection and the large volume injection showed a good linear relationship. The enrichment ability of RAM-HPLC system was satisfactory. The system was successfully used for the separation and detection of the trace benazepril hydrochloride in rat plasma after its administration. The sensitivity of HPLC can be improved by RAM pre-enrichment. It is a simple and economic measurement method.

  4. Carlo Caso (1940 - 2007)

    CERN Multimedia

    Leonardo Rossi

    Carlo Caso (1940 - 2007) Our friend and colleague Carlo Caso passed away on July 7th, after several months of courageous fight against cancer. Carlo spent most of his scientific career at CERN, taking an active part in the experimental programme of the laboratory. His long and fruitful involvement in particle physics started in the sixties, in the Genoa group led by G. Tomasini. He then performed several experiments using the CERN liquid hydrogen bubble chambers -first the 2000HBC and later BEBC- to study various facets of the production and decay of meson and baryon resonances. He later formed his own group and joined the NA27 Collaboration to exploit the EHS Spectrometer with a rapid cycling bubble chamber as vertex detector. Amongst their many achievements, they were the first to measure, with excellent precision, the lifetime of the charmed D mesons. At the start of the LEP era, Carlo and his group moved to the DELPHI experiment, participating in the construction and running of the HPC electromagnetic c...

  5. Markov Chain Monte Carlo

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 7; Issue 3. Markov Chain Monte Carlo - Examples. Arnab Chakraborty. General Article Volume 7 Issue 3 March 2002 pp 25-34. Fulltext. Click here to view fulltext PDF. Permanent link: https://www.ias.ac.in/article/fulltext/reso/007/03/0025-0034. Keywords.

  6. Current and future applications of Monte Carlo

    International Nuclear Information System (INIS)

    Zaidi, H.

    2003-01-01

    Full text: The use of radionuclides in medicine has a long history and encompasses a large area of applications, including diagnosis and radiation treatment of cancer patients using either external or radionuclide radiotherapy. The 'Monte Carlo method' describes a very broad area of science, in which many processes, physical systems, and phenomena are simulated by statistical methods employing random numbers. The general idea of Monte Carlo analysis is to create a model which is as similar as possible to the real physical system of interest, and to create interactions within that system based on known probabilities of occurrence, with random sampling of the probability density functions (pdfs). As the number of individual events (called 'histories') is increased, the quality of the reported average behavior of the system improves, meaning that the statistical uncertainty decreases. The use of the Monte Carlo method to simulate radiation transport has become the most accurate means of predicting absorbed dose distributions and other quantities of interest in the radiation treatment of cancer patients using either external or radionuclide radiotherapy. The same trend has occurred for the estimation of the absorbed dose in diagnostic procedures using radionuclides, as well as the assessment of image quality and quantitative accuracy of radionuclide imaging. As a consequence of this generalized use, many questions are being raised, primarily about the need and potential of Monte Carlo techniques, but also about how accurate the method really is, what it would take to apply it clinically, and how to make it available widely to the nuclear medicine community at large. Many of these questions will be answered when Monte Carlo techniques are implemented and used for more routine calculations and for in-depth investigations. In this paper, the conceptual role of the Monte Carlo method is briefly introduced and followed by a survey of its different applications in diagnostic and therapeutic
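
    The 'histories' idea described above can be sketched in a few lines: a toy Monte Carlo estimate of π whose one-sigma statistical uncertainty shrinks as the number of histories grows (a generic illustration, unrelated to any clinical code):

    ```python
    import math
    import random

    def estimate_pi(histories, seed=1):
        """Monte Carlo estimate of pi by sampling random points in the unit square."""
        rng = random.Random(seed)
        hits = sum(1 for _ in range(histories)
                   if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
        p = hits / histories
        estimate = 4.0 * p
        # one-sigma statistical uncertainty of a binomial tally, scaled by 4
        sigma = 4.0 * math.sqrt(p * (1.0 - p) / histories)
        return estimate, sigma

    for n in (1_000, 100_000):
        est, sig = estimate_pi(n)
        print(f"{n:>7} histories: pi ~ {est:.4f} +/- {sig:.4f}")
    ```

    The reported uncertainty falls roughly as 1/√N, which is exactly the "more histories, less statistical uncertainty" behavior the abstract describes.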

  7. Large-volume constant-concentration sampling technique coupling with surface-enhanced Raman spectroscopy for rapid on-site gas analysis.

    Science.gov (United States)

    Zhang, Zhuomin; Zhan, Yisen; Huang, Yichun; Li, Gongke

    2017-08-05

    In this work, a portable large-volume constant-concentration (LVCC) sampling technique coupling with surface-enhanced Raman spectroscopy (SERS) was developed for the rapid on-site gas analysis based on suitable derivatization methods. LVCC sampling technique mainly consisted of a specially designed sampling cell including the rigid sample container and flexible sampling bag, and an absorption-derivatization module with a portable pump and a gas flowmeter. LVCC sampling technique allowed large, alterable and well-controlled sampling volume, which kept the concentration of gas target in headspace phase constant during the entire sampling process and made the sampling result more representative. Moreover, absorption and derivatization of gas target during LVCC sampling process were efficiently merged in one step using bromine-thiourea and OPA-NH4+ strategy for ethylene and SO2 respectively, which made LVCC sampling technique conveniently adapted to consequent SERS analysis. Finally, a new LVCC sampling-SERS method was developed and successfully applied for rapid analysis of trace ethylene and SO2 from fruits. It was satisfied that trace ethylene and SO2 from real fruit samples could be actually and accurately quantified by this method. The minor concentration fluctuations of ethylene and SO2 during the entire LVCC sampling process were proved to be gas targets from real samples by SERS. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Laser ablation inductively coupled plasma mass spectrometry analysis of agricultural soils using the sol-gel technique of pellet preparation

    International Nuclear Information System (INIS)

    Hubova, I.; Hola, M.; Vaculovic, T.; Pinkas, J.; Prokes, L.; Stefan, I.; Kanicky, V.

    2009-01-01

    Full text: Monitoring of metals in agricultural soils is gaining importance as they are accumulated by plants. An LA-ICP-QMS method with an Nd:YAG 213 nm laser has been developed for the determination of Cr, Ni, Cu, Zn and Pb in soil pellets prepared by the sol-gel technique. LA-ICP-MS analysis of archive samples was verified by XRF of wax-soil pellets, by ICP-MS with nebulization of solutions obtained by total soil decomposition, and by analysis of reference materials. Sequential extraction was used for fractionation analysis. (author)

  9. Recommender engine for continuous-time quantum Monte Carlo methods

    Science.gov (United States)

    Huang, Li; Yang, Yi-feng; Wang, Lei

    2017-03-01

    Recommender systems play an essential role in the modern business world. They recommend favorable items such as books, movies, and search queries to users based on their past preferences. Applying similar ideas and techniques to Monte Carlo simulations of physical systems boosts their efficiency without sacrificing accuracy. Exploiting the quantum to classical mapping inherent in the continuous-time quantum Monte Carlo methods, we construct a classical molecular gas model to reproduce the quantum distributions. We then utilize powerful molecular simulation techniques to propose efficient quantum Monte Carlo updates. The recommender engine approach provides a general way to speed up the quantum impurity solvers.

  10. Gas-surface interactions using accommodation coefficients for a dilute and a dense gas in a micro/nano-channel : heat flux predictions using combined molecular dynamics and Monte Carlo techniques

    NARCIS (Netherlands)

    Gaastra - Nedea, S.V.; Steenhoven, van A.A.; Markvoort, A.J.; Spijker, P.; Giordano, D.

    2014-01-01

    The influence of gas-surface interactions of a dilute gas confined between two parallel walls on the heat flux predictions is investigated using a combined Monte Carlo (MC) and molecular dynamics (MD) approach. The accommodation coefficients are computed from the temperature of incident and

  11. Speciation of trace elements in biological samples by nuclear analytical and related techniques coupled with chemical and biochemical separation

    International Nuclear Information System (INIS)

    Chen, C.Y.; Gao, Y.X.; Li, B.; Yu, H.W.; Li, Y.F.; Sun, J.; Chai, Z.F.

    2005-01-01

    In the past, most analytical problems relating to biological systems were addressed by measuring the total concentrations of elements. Now there is increasing interest in the chemical forms in which an element is present in biological systems, e.g., the oxidation state, the binding state with macromolecules, or even the molecular structure. The biological effects of chromium, which is classified as an essential nutrient, are dependent upon its oxidation state. In general, trivalent chromium is biochemically active, whereas hexavalent chromium is considered to be toxic. Mercury is one of the most serious persistent environmental pollutants; organic forms of mercury are known to possess much higher toxicity than inorganic mercury. Therefore, speciation information is critically required for a better understanding of bioavailability, metabolism, transformation, and toxicity in vivo. Recently, chemical speciation of selenium, mercury, copper, zinc, iron, and so on, has been investigated by INAA, ICP-MS, XRF, EXAFS and related techniques combined with chemical and biochemical separation (extraction, chromatography, gel electrophoresis, etc.). INAA, XRF, and ICP-MS have superior advantages in multielemental analysis with high accuracy and sensitivity, which render it possible to analyze various elements of interest simultaneously. These offline or online techniques have been flexibly applied to different biological matrixes, such as human hair, serum, urine, and various tissues and organs, in our research. In addition, EXAFS provides structural information about the moiety of metal centers up to a distance of approximately 4-5 Å. For instance, hepatocellular carcinoma (HCC) is one of the most common cancers worldwide. An imbalance of elements, such as Se, Zn, Fe, Cu, Cd, Ca, etc., has been found in the whole blood or serum of patients with HCC. We found that the profiles of Se, Cd, Fe, Zn and Cu-containing proteins

  12. Application of the EPR technique in welded couplings in 08X18H10T (AISI 321) stainless steel

    International Nuclear Information System (INIS)

    Fuentes, D.A.; Menendez, C.M.; Dominguez, H.; Sendoya, F.

    1993-01-01

    Stainless steel samples, one AISI 304 and the other 08X18H10T of Soviet origin (equivalent to AISI 321), were welded by the TIG method and submitted to a thermal treatment in order to induce sensitization to intergranular corrosion. The samples were then subjected to the EPR technique to establish the degree of sensitization, which is an indicator of susceptibility to intergranular corrosion. The results were corroborated by two different methodologies, the ASTM A262 standard and the Soviet standard GOST 6032-89. The state of the tested surface was analyzed using optical microscopy in order to quantify the amount of pitting, since its presence disturbs the normalized charge, Pa. (Author)

  13. Burnup calculations using Monte Carlo method

    International Nuclear Information System (INIS)

    Ghosh, Biplab; Degweker, S.B.

    2009-01-01

    In recent years, interest in burnup calculations using Monte Carlo methods has gained momentum. Previous burnup codes have used multigroup transport theory based calculations followed by diffusion theory based core calculations for the neutronic portion of the codes. The transport theory methods invariably make approximations with regard to treatment of the energy and angle variables involved in scattering, besides approximations related to geometry simplification. Cell homogenisation to produce diffusion theory parameters adds to these approximations. Moreover, while diffusion theory works for most reactors, it does not produce accurate results in systems that have strong gradients, strong absorbers or large voids. Also, diffusion theory codes are geometry limited (rectangular, hexagonal, cylindrical, and spherical coordinates). Monte Carlo methods are ideal to solve very heterogeneous reactors and/or lattices/assemblies in which considerable burnable poisons are used. The key feature of this approach is that Monte Carlo methods permit essentially 'exact' modeling of all geometrical detail, without resort to energy and spatial homogenization of neutron cross sections. Monte Carlo methods would also be better for Accelerator Driven Systems (ADS), which could have strong gradients due to the external source and a sub-critical assembly. To meet the demand for an accurate burnup code, we have developed a Monte Carlo burnup calculation code system in which a Monte Carlo neutron transport code is coupled with a versatile code (McBurn) for calculating the buildup and decay of nuclides in nuclear materials. McBurn is developed from scratch by the authors. In this article we will discuss our effort in developing the continuous energy Monte Carlo burnup code, McBurn. McBurn is intended for entire reactor cores as well as for unit cells and assemblies. Generally, McBurn can do burnup of any geometrical system which can be handled by the underlying Monte Carlo transport code
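
    The transport-depletion coupling loop that such code systems implement can be caricatured as follows. This is a deliberately minimal sketch, not the McBurn interface: a stand-in "transport solve" returns a one-group flux at constant power, and a single-nuclide Bateman step depletes the fissile density over each burnup interval. All constants are hypothetical.

    ```python
    import math

    SIGMA_F = 5e-22    # hypothetical one-group fission cross section [cm^2]
    POWER_NORM = 1e13  # hypothetical normalization: flux per unit reaction rate

    def transport_solve(n_fissile):
        """Stand-in for a Monte Carlo flux tally at constant power."""
        return POWER_NORM / max(n_fissile * SIGMA_F, 1e-30)

    def deplete(n_fissile, flux, dt):
        """One-nuclide Bateman step: flux held constant over the interval."""
        return n_fissile * math.exp(-SIGMA_F * flux * dt)

    n = 1e21                 # fissile number density [atoms/cm^3]
    dt = 30 * 24 * 3600.0    # 30-day burnup steps
    for step in range(5):
        phi = transport_solve(n)   # transport with current compositions
        n = deplete(n, phi, dt)    # update compositions for the next solve
        print(f"step {step}: flux={phi:.3e}, N_fissile={n:.4e}")
    ```

    Real burnup codes repeat exactly this alternation, but with full nuclide chains, many tally regions, and the Monte Carlo transport solve in place of the one-line stand-in.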

  14. Sardine (sardina Pilchardus) Larval Dispersal in Northern Canary Current Upwelling System (iberian Peninsula), Using Coupled Biophysical Techniques

    Science.gov (United States)

    Santos, A. M. P. A.; Nieblas, A. E.; Verley, P.; Teles-Machado, A.; Bonhommeau, S.; Lett, C.; Garrido, S.; Peliz, A.

    2017-12-01

    The European sardine (Sardina pilchardus) is the most important small pelagic fishery of the Western Iberia Upwelling Ecosystem (WIUE). Recently, recruitment of this species has declined due to changing environmental conditions. Furthermore, controversies exist regarding its population structure, with barriers thought to exist between the Atlantic-Iberian Peninsula, Northern Africa, and the Mediterranean. Few studies have investigated the transport and dispersal of sardine eggs and larvae off Iberia and the subsequent impact on larval recruitment variability. Here, we examine these issues using a Regional Ocean Modeling System climatology (1989-2008) coupled to the Lagrangian transport model, Ichthyop. Using biological parameters from the literature, we conduct simulations that investigate the effects of spawning patchiness, diel vertical migration behaviors, and egg buoyancy on the transport and recruitment of virtual sardine ichthyoplankton on the continental shelf. We find that release area, release depth, and month of release all significantly affect recruitment. Patchiness has no effect and diel vertical migration causes slightly lower recruitment. Egg buoyancy effects are significant and act similarly to depth of release. As with other studies, we find that recruitment peaks vary by latitude, explained here by the seasonal variability of offshore transport. We find weak, continuous alongshore transport between release areas, though a large proportion of simulated ichthyoplankton is transported north to the Cantabrian coast (up to 27%). We also show low-level transport into Morocco (up to 1%) and the Mediterranean (up to 8%). The high proportion of local retention and low but consistent alongshore transport supports the idea of a series of metapopulations along this coast. This study was supported by the Portuguese Science and Technology Foundation (FCT) through the research project MODELA (PTDC/MAR/098643/2008) and MedEx (MARIN-ERA/MAR/0002/2008). 
MedEx is also a

  15. General Monte Carlo code MONK

    International Nuclear Information System (INIS)

    Moore, J.G.

    1974-01-01

    The Monte Carlo code MONK is a general program written to provide a high degree of flexibility to the user. MONK is distinguished by its detailed representation of nuclear data in point form, i.e., the cross-section is tabulated at specific energies instead of the more usual group representation. The nuclear data are unadjusted in the point form, but recently the code has been modified to accept adjusted group data as used in fast and thermal reactor applications. The various geometrical handling capabilities and importance sampling techniques are described. In addition to the nuclear data aspects, the following features are also described: geometrical handling routines, tracking cycles, neutron source and output facilities. 12 references. (U.S.)
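
    The contrast between point-form and group representations can be illustrated with a toy cross-section lookup (all numbers invented; MONK's actual data handling is far richer): the point form interpolates on the tabulated energy grid at each collision energy, whereas a coarse group average discards the energy shape entirely.

    ```python
    import bisect

    # Hypothetical point-form table: cross sections tabulated at specific energies.
    energies = [1.0, 2.0, 5.0, 10.0]   # MeV grid
    xs_points = [4.0, 3.0, 2.0, 1.5]   # barns at each grid point

    def xs_pointwise(E):
        """Linear interpolation on the tabulated points (point-form lookup)."""
        j = bisect.bisect_right(energies, E)
        j = min(max(j, 1), len(energies) - 1)
        E0, E1 = energies[j - 1], energies[j]
        s0, s1 = xs_points[j - 1], xs_points[j]
        return s0 + (s1 - s0) * (E - E0) / (E1 - E0)

    # One coarse group covering 1-10 MeV loses the energy shape entirely.
    xs_group = sum(xs_points) / len(xs_points)

    print(xs_pointwise(3.5), xs_group)
    ```

    Production libraries use much denser grids and interpolation laws prescribed by the evaluated data format, but the lookup logic is the same.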

  16. Monte Carlo simulation of experiments

    International Nuclear Information System (INIS)

    Opat, G.I.

    1977-07-01

    An outline of the technique of computer simulation of particle physics experiments by the Monte Carlo method is presented. Useful special purpose subprograms are listed and described. At each stage the discussion is made concrete by direct reference to the programs SIMUL8 and its variant MONTE-PION, written to assist in the analysis of the radiative decay experiments μ+ → e+ ν_e ν̄ γ and π+ → e+ ν_e γ, respectively. These experiments were based on the use of two large sodium iodide crystals, TINA and MINA, as e and γ detectors. Instructions for the use of SIMUL8 and MONTE-PION are given. (author)

  17. Large-volume constant-concentration sampling technique coupling with surface-enhanced Raman spectroscopy for rapid on-site gas analysis

    Science.gov (United States)

    Zhang, Zhuomin; Zhan, Yisen; Huang, Yichun; Li, Gongke

    2017-08-01

    In this work, a portable large-volume constant-concentration (LVCC) sampling technique coupling with surface-enhanced Raman spectroscopy (SERS) was developed for the rapid on-site gas analysis based on suitable derivatization methods. LVCC sampling technique mainly consisted of a specially designed sampling cell including the rigid sample container and flexible sampling bag, and an absorption-derivatization module with a portable pump and a gas flowmeter. LVCC sampling technique allowed large, alterable and well-controlled sampling volume, which kept the concentration of gas target in headspace phase constant during the entire sampling process and made the sampling result more representative. Moreover, absorption and derivatization of gas target during LVCC sampling process were efficiently merged in one step using bromine-thiourea and OPA-NH4+ strategy for ethylene and SO2 respectively, which made LVCC sampling technique conveniently adapted to consequent SERS analysis. Finally, a new LVCC sampling-SERS method was developed and successfully applied for rapid analysis of trace ethylene and SO2 from fruits. It was satisfied that trace ethylene and SO2 from real fruit samples could be actually and accurately quantified by this method. The concentration fluctuations of ethylene and SO2 during the entire LVCC sampling process were minor, and recoveries for real samples were achieved in the ranges of 95.0-101% and 97.0-104%, respectively. It is expected that the portable LVCC sampling technique would pave the way for rapid on-site analysis of accurate concentrations of trace gas targets from real samples by SERS.

  18. Alexander Technique Training Coupled With an Integrative Model of Behavioral Prediction in Teachers With Low Back Pain.

    Science.gov (United States)

    Kamalikhah, Tahereh; Morowatisharifabad, Mohammad Ali; Rezaei-Moghaddam, Farid; Ghasemi, Mohammad; Gholami-Fesharaki, Mohammad; Goklani, Salma

    2016-09-01

    Individuals suffering from chronic low back pain (CLBP) experience major physical, social, and occupational disruptions. Strong evidence confirms the effectiveness of Alexander technique (AT) training for CLBP. The present study applied an integrative model (IM) of behavioral prediction to improve AT training. This was a quasi-experimental study of female teachers with nonspecific LBP in southern Tehran in 2014. Group A contained 42 subjects and group B had 35 subjects. In group A, AT lessons were designed based on IM constructs, while in group B, only the AT lessons were taught. The validity and reliability of the AT questionnaire were confirmed using content validity (CVR 0.91, CVI 0.96) and Cronbach's α (0.80). The IM constructs of both groups were measured after the completion of training. Statistical analysis used independent and paired-samples t-tests and the univariate generalized linear model (GLM). Significant differences were recorded before and after the intervention (P < 0.001) for the model constructs of intention, perceived risk, direct attitude, behavioral beliefs, and knowledge in both groups. Direct attitude and behavioral beliefs in group A were higher than in group B after the intervention (P < 0.03). The educational framework provided by IM for AT training improved attitude and behavioral beliefs, which can facilitate the adoption of AT behavior and decreased CLBP.

  19. On elastic and elastoplastic analysis of tube junction problems by coupling of the FEM to BEM technique

    International Nuclear Information System (INIS)

    Cen, Z.; Du, Q.

    1987-01-01

    Tube junction structures have been widely adopted in nuclear engineering, as in many other technologies. In applying the finite element method to stress analysis of such complex three-dimensional structures, it is necessary to subdivide the regions of stress concentration into very refined meshes. In this paper, schemes for incorporating the finite element equation as a natural boundary condition into the boundary integral equation have been employed. The relevant formulae and some details of the treatment are given. For the nozzle junction, 3D isoparametric finite elements with 8-20 nodes containing additional internal degrees of freedom have been employed for the cylindrical shell parts, which remain elastic and have smaller stress gradients, while for the junction part with high stress gradients, the boundary integration technique with 8-node 2D isoparametric boundary elements has been used, and 8-node volumetric integral elements have been used for the elastoplastic incremental computations. (orig./GL)

  20. Bayesian Monte Carlo method

    International Nuclear Information System (INIS)

    Rajabalinejad, M.

    2010-01-01

    To reduce the cost of Monte Carlo (MC) simulations for time-consuming processes, Bayesian Monte Carlo (BMC) is introduced in this paper. The BMC method reduces the number of realizations in MC according to the desired accuracy level. BMC also provides the possibility of considering more priors; in other words, different priors can be integrated into one model by using BMC to further reduce the cost of simulations. This study suggests speeding up the simulation process by considering the logical dependence of neighboring points as prior information. This information is used in the BMC method to produce a predictive tool through the simulation process. The general methodology and algorithm of the BMC method are presented in this paper. The BMC method is applied to a simplified breakwater model as well as the finite element model of the 17th Street Canal in New Orleans, and the results are compared with the MC and Dynamic Bounds methods.
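
    The accuracy-driven stopping rule at the heart of this idea can be sketched generically (this shows only the "stop when the desired accuracy is reached" mechanism, not the author's Bayesian formulation with spatial priors): keep drawing realizations in batches until the standard error of the running mean falls below the target.

    ```python
    import math
    import random

    def adaptive_mc(sample, target_sigma, batch=1000, max_n=10_000_000):
        """Draw realizations until the standard error of the mean < target_sigma."""
        n, total, total_sq = 0, 0.0, 0.0
        while n < max_n:
            for _ in range(batch):
                x = sample()
                total += x
                total_sq += x * x
            n += batch
            mean = total / n
            var = max(total_sq / n - mean * mean, 0.0)
            sigma = math.sqrt(var / n)   # standard error of the mean
            if sigma < target_sigma:
                break
        return mean, sigma, n

    rng = random.Random(3)
    # Toy expensive model: E[U^2] = 1/3 for U uniform on [0, 1].
    mean, sigma, n = adaptive_mc(lambda: rng.random() ** 2, target_sigma=1e-3)
    print(mean, sigma, n)
    ```

    The payoff is that easy problems terminate early, which is the same cost-reduction motive that BMC pursues with its more informative priors.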

  1. Contributon Monte Carlo

    International Nuclear Information System (INIS)

    Dubi, A.; Gerstl, S.A.W.

    1979-05-01

    The contributon Monte Carlo method is based on a new recipe to calculate target responses by means of volume integral of the contributon current in a region between the source and the detector. A comprehensive description of the method, its implementation in the general-purpose MCNP code, and results of the method for realistic nonhomogeneous, energy-dependent problems are presented. 23 figures, 10 tables

  2. Carlos Vesga Duarte

    Directory of Open Access Journals (Sweden)

    Pedro Medina Avendaño

    1981-01-01

    Full Text Available Carlos Vega Duarte had the simplicity of elemental and pure beings. His heart was as clean as alluvial gold. His direct, colloquial manner revealed an uncontaminated Santander native who loved the gleam of weapons and was dazzled by the sparkle of perfect phrases

  3. Fundamentals of Monte Carlo

    International Nuclear Information System (INIS)

    Wollaber, Allan Benton

    2016-01-01

    This is a powerpoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.
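
    The two sampling strategies named in the outline, inverse transform sampling and rejection, can each be shown in a few lines (a generic sketch, not taken from the lecture slides):

    ```python
    import math
    import random

    rng = random.Random(42)

    # Inverse transform sampling: exponential with rate lam.
    # CDF F(x) = 1 - exp(-lam*x), so x = -ln(1 - u)/lam for u uniform on [0, 1).
    def sample_exponential(lam):
        u = rng.random()
        return -math.log(1.0 - u) / lam

    # Rejection sampling: target density f(x) = 2x on [0, 1], uniform proposal
    # g(x) = 1 with envelope constant M = 2; accept with probability f/(M*g) = x.
    def sample_triangular():
        while True:
            x = rng.random()
            if rng.random() * 2.0 <= 2.0 * x:
                return x

    n = 100_000
    mean_exp = sum(sample_exponential(2.0) for _ in range(n)) / n
    mean_tri = sum(sample_triangular() for _ in range(n)) / n
    print(mean_exp, mean_tri)   # expect ~0.5 and ~2/3
    ```

    Inverse transform is preferred when the CDF inverts cheaply; rejection trades wasted proposals for not needing the inverse at all.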

  4. Microcanonical Monte Carlo

    International Nuclear Information System (INIS)

    Creutz, M.

    1986-01-01

    The author discusses a recently developed algorithm for simulating statistical systems. The procedure interpolates between molecular dynamics methods and canonical Monte Carlo. The primary advantages are extremely fast simulations of discrete systems such as the Ising model and a relative insensitivity to random number quality. A variation of the algorithm gives rise to a deterministic dynamics for Ising spins. This model may be useful for high speed simulation of non-equilibrium phenomena
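
    The interpolation between molecular dynamics and canonical Monte Carlo can be made concrete with a minimal version of the demon algorithm on a 1D Ising chain (a textbook-style sketch, not the author's original code): a "demon" carries a non-negative energy reservoir, a flip is accepted only if the demon can pay for the energy change, and total energy is exactly conserved, so no Boltzmann-factor random numbers are needed.

    ```python
    import random

    # Microcanonical demon algorithm on a 1D Ising chain,
    # E = -sum_i s_i * s_{i+1} with periodic boundaries.
    rng = random.Random(0)
    N = 1000
    spins = [1] * N     # start in the ground state
    demon = 20          # energy injected via the demon's reservoir

    def energy(s):
        return -sum(s[i] * s[(i + 1) % len(s)] for i in range(len(s)))

    def delta_e(s, i):
        """Energy change from flipping spin i (periodic neighbours)."""
        return 2 * s[i] * (s[i - 1] + s[(i + 1) % len(s)])

    total0 = energy(spins) + demon   # conserved quantity

    demon_trace = []
    for sweep in range(200):
        for _ in range(N):
            i = rng.randrange(N)
            dE = delta_e(spins, i)
            if demon - dE >= 0:      # demon pays dE (or absorbs -dE)
                spins[i] = -spins[i]
                demon -= dE
        demon_trace.append(demon)

    # The demon's average energy plays the role of a thermometer.
    avg_demon = sum(demon_trace[50:]) / len(demon_trace[50:])
    print(avg_demon)
    ```

    Because acceptance uses only integer comparisons rather than exponentials of random numbers, the update is extremely fast for discrete systems and comparatively insensitive to random number quality, as the abstract notes.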

  5. Fundamentals of Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Wollaber, Allan Benton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a powerpoint presentation which serves as lecture material for the Parallel Computing summer school. It goes over the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background, a simple example: estimating π), Why does this even work? (The Law of Large Numbers, The Central Limit Theorem), How to sample (inverse transform sampling, rejection), and An example from particle transport.

  6. CERN honours Carlo Rubbia

    CERN Multimedia

    2009-01-01

    On 7 April CERN will be holding a symposium to mark the 75th birthday of Carlo Rubbia, who shared the 1984 Nobel Prize for Physics with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. Following a presentation by Rolf Heuer, lectures will be given by eminent speakers on areas of science to which Carlo Rubbia has made decisive contributions. Michel Spiro, Director of the French National Institute of Nuclear and Particle Physics (IN2P3) of the CNRS, Lyn Evans, sLHC Project Leader, and Alan Astbury of the TRIUMF Laboratory will talk about the physics of the weak interaction and the discovery of the W and Z bosons. Former CERN Director-General Herwig Schopper will lecture on CERN’s accelerators from LEP to the LHC. Giovanni Bignami, former President of the Italian Space Agency and Professor at the IUSS School for Advanced Studies in Pavia will speak about his work with Carlo Rubbia. Finally, Hans Joachim Sch...

  7. CERN honours Carlo Rubbia

    CERN Multimedia

    2009-01-01

    On 7 April CERN will be holding a symposium to mark the 75th birthday of Carlo Rubbia, who shared the 1984 Nobel Prize for Physics with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. Following a presentation by Rolf Heuer, lectures will be given by eminent speakers on areas of science to which Carlo Rubbia has made decisive contributions. Michel Spiro, Director of the French National Institute of Nuclear and Particle Physics (IN2P3) of the CNRS, Lyn Evans, sLHC Project Leader, and Alan Astbury of the TRIUMF Laboratory will talk about the physics of the weak interaction and the discovery of the W and Z bosons. Former CERN Director-General Herwig Schopper will lecture on CERN’s accelerators from LEP to the LHC. Giovanni Bignami, former President of the Italian Space Agency, will speak about his work with Carlo Rubbia. Finally, Hans Joachim Schellnhuber of the Potsdam Institute for Climate Research and Sven Kul...

  8. Who Writes Carlos Bulosan?

    Directory of Open Access Journals (Sweden)

    Charlie Samuya Veric

    2001-12-01

    Full Text Available The importance of Carlos Bulosan in Filipino and Filipino-American radical history and literature is indisputable. His eminence spans the Pacific, and he is known, diversely, as a radical poet, fictionist, novelist, and labor organizer. Author of the canonical America Is in the Heart, Bulosan is celebrated for chronicling the conditions in America in his time, such as racism and unemployment. In the history of criticism on Bulosan's life and work, however, there is an undeclared general consensus that views Bulosan and his work as coherent, permanent texts of radicalism and anti-imperialism. Central to the existence of such a tradition of critical reception are the generations of critics who, in more ways than one, control the discourse on and of Carlos Bulosan. This essay inquires into the sphere of the critical reception that orders, for our time and for the time ahead, the reading and interpretation of Bulosan. What eye and seeing, the essay asks, determine the perception of Bulosan as the angel of radicalism? What is obscured in constructing Bulosan as an immutable figure of the political? What light does the reader conceive when the personal is brought into the open and situated against the political? The essay explores the answers to these questions in Bulosan's loving letters to various friends, strangers, and white American women. The presence of these interrogations, the essay believes, will ultimately secure the continuing importance of Carlos Bulosan to radical literature and history.

  9. Status of Monte Carlo at Los Alamos

    International Nuclear Information System (INIS)

    Thompson, W.L.; Cashwell, E.D.

    1980-01-01

    At Los Alamos the early work of Fermi, von Neumann, and Ulam has been developed and supplemented by many followers, notably Cashwell and Everett, and the main product today is the continuous-energy, general-purpose, generalized-geometry, time-dependent, coupled neutron-photon transport code called MCNP. The Los Alamos Monte Carlo research and development effort is concentrated in Group X-6. MCNP treats an arbitrary three-dimensional configuration of arbitrary materials in geometric cells bounded by first- and second-degree surfaces and some fourth-degree surfaces (elliptical tori). Monte Carlo has evolved into perhaps the main method for radiation transport calculations at Los Alamos. MCNP is used in every technical division at the Laboratory by over 130 users about 600 times a month accounting for nearly 200 hours of CDC-7600 time

  10. Linear filtering applied to Monte Carlo criticality calculations

    International Nuclear Information System (INIS)

    Morrison, G.W.; Pike, D.H.; Petrie, L.M.

    1975-01-01

    A significant improvement in the acceleration of the convergence of the eigenvalue computed by Monte Carlo techniques has been developed by applying linear filtering theory to Monte Carlo calculations for multiplying systems. A Kalman filter was applied to a KENO Monte Carlo calculation of an experimental critical system consisting of eight interacting units of fissile material. A comparison of the filter estimate and the Monte Carlo realization was made. The Kalman filter converged in five iterations to 0.9977. After 95 iterations, the average k-eff from the Monte Carlo calculation was 0.9981. This demonstrates that the Kalman filter has the potential of reducing the calculational effort of multiplying systems. Other examples and results are discussed
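The idea of filtering noisy cycle-wise eigenvalue estimates can be sketched with a scalar Kalman filter. This is an illustrative toy, not the KENO setup of the paper: the noise variance `r`, the prior `x0`/`p0`, and the synthetic k-eff data are all assumptions.

```python
import random

def kalman_keff(observations, q=0.0, r=1e-4, x0=1.0, p0=1.0):
    """Scalar Kalman filter for a (nearly) constant k-eff observed with noise.

    q: process variance (0 for a truly constant eigenvalue),
    r: variance of a single Monte Carlo cycle estimate,
    x0, p0: prior mean and variance.
    """
    x, p = x0, p0
    estimates = []
    for z in observations:
        p = p + q                # predict (state assumed constant)
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # update with the new cycle estimate
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

# Synthetic cycle estimates around a "true" eigenvalue of 0.998.
random.seed(1)
true_keff = 0.998
cycles = [true_keff + random.gauss(0.0, 0.01) for _ in range(100)]
filtered = kalman_keff(cycles, r=0.01 ** 2)
```

With a vague prior and `q = 0` the filter reduces to a recursive average, which is why it settles near the underlying eigenvalue after a handful of cycles, mirroring the fast convergence reported in the abstract.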

  11. Improving thermal efficiency and increasing production rate in the double moving beds thermally coupled reactors by using differential evolution (DE) technique

    International Nuclear Information System (INIS)

    Karimi, Mohsen; Rahimpour, Mohammad Reza; Rafiei, Razieh; Shariati, Alireza; Iranshahi, Davood

    2016-01-01

    Highlights: • Double moving bed thermally coupled reactor is modeled in two dimensions. • The required heat of naphtha process is attained with nitrobenzene hydrogenation. • DE optimization method is applied to optimize operating conditions. • Hydrogen, aromatic and aniline productions increase in the proposed configuration. - Abstract: Driven by the global requirements for energy saving and the control of global warming, multifunctional auto-thermal reactors have risen in recent years as a novel concept in process integration (PI). In the modification presented in this study, the required heat of the endothermic naphtha reforming process is supplied by the nitrobenzene hydrogenation reaction. In addition, the enhancement of reactor performance, such as increased production rate, has become a key issue in diverse industries. Thus, the Differential Evolution (DE) technique is applied to optimize the operating conditions (temperature and pressure) and design parameters of a thermally coupled reactor with double moving beds. Ultimately, the results of the proposed model are compared with the non-optimized and conventional models. The proposed model yields a noticeable reduction in operational costs as well as an enhancement of the net profit of the plant. The increase in hydrogen and aromatic production shows the superiority of the proposed model.
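A minimal sketch of the classic DE/rand/1/bin scheme the authors apply. The objective here is a toy quadratic stand-in for the reactor model, and the bounds, population size, and control parameters are hypothetical, not taken from the paper:

```python
import random

def differential_evolution(f, bounds, np_=20, cr=0.9, f_w=0.8, gens=200, seed=0):
    """Minimise f over the box 'bounds' with the DE/rand/1/bin scheme."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    cost = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            # Pick three distinct donors, none equal to the target i.
            a, b, c = rng.sample([j for j in range(np_) if j != i], 3)
            jrand = rng.randrange(dim)
            trial = []
            for j in range(dim):
                if rng.random() < cr or j == jrand:  # binomial crossover
                    v = pop[a][j] + f_w * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)          # clip mutant to bounds
                else:
                    v = pop[i][j]
                trial.append(v)
            fc = f(trial)
            if fc <= cost[i]:                        # greedy selection
                pop[i], cost[i] = trial, fc
    best = min(range(np_), key=cost.__getitem__)
    return pop[best], cost[best]

# Hypothetical stand-in for the reactor objective: find the (T, P) pair
# minimising a quadratic with optimum at T = 770 K, P = 3.2 bar.
obj = lambda x: (x[0] - 770.0) ** 2 + (x[1] - 3.2) ** 2
best_x, best_f = differential_evolution(obj, [(600.0, 900.0), (1.0, 6.0)])
```

The greedy one-to-one selection is what makes DE robust for the kind of bound-constrained operating-condition search described in the abstract.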

  12. Clinical efficacy of a combination of Percoll continuous density gradient and swim-up techniques for semen processing in HIV-1 serodiscordant couples

    Directory of Open Access Journals (Sweden)

    Osamu Inoue

    2017-01-01

    Full Text Available To evaluate the clinical efficacy of a procedure comprising a combination of Percoll continuous density gradient and modified swim-up techniques for the removal of human immunodeficiency virus type 1 (HIV-1) from the semen of HIV-1 infected males, a total of 129 couples with an HIV-1 positive male partner and an HIV-1 negative female partner (serodiscordant couples) who were treated at Keio University Hospital between January 2002 and April 2012 were examined. A total of 183 ejaculates from 129 HIV-1 infected males were processed. After swim-up, we successfully collected motile sperm at a recovery rate as high as 100.0% in cases of normozoospermia (126/126 ejaculates), oligozoospermia (6/6), and asthenozoospermia (36/36). The recovery rate for oligoasthenozoospermia was 86.7% (13/15). In processed semen, only four ejaculates (4/181; 2.2%) showed viral nucleotide sequences consistent with those in the blood of the infected males. After use of these sperm, no horizontal infections of the female partners and no vertical infections of the newborns were observed. Furthermore, no obvious adverse effects were observed in the offspring. This protocol allowed us to collect HIV-1 negative motile sperm at a high rate, even in male factor cases. We concluded that our protocol is clinically effective both for decreasing HIV-1 infections and for yielding a healthy child.

  13. A study to determine the differences between the displayed dose values for two full-field digital mammography units and values calculated using a range of Monte-Carlo-based techniques: A phantom study

    International Nuclear Information System (INIS)

    Borg, M.; Badr, I.; Royle, G. J.

    2013-01-01

    Modern full-field digital mammography (FFDM) units display the mean glandular dose (MGD) and the entrance or incident air kerma (K) to the breast following each exposure. Information on how these values are calculated is limited and knowing how displayed MGD values compare and correlate to conventional Monte-Carlo-based methods is useful. From measurements done on polymethyl methacrylate (PMMA) phantoms, it has been shown that displayed and calculated MGD values are similar for thin to medium thicknesses and appear to differ with larger PMMA thicknesses. As a result, a multiple linear regression analysis on the data was performed to generate models by which displayed MGD values on the two FFDM units included in the study may be converted to the Monte-Carlo values calculated by conventional methods. These models should be a useful tool for medical physicists requiring MGD data from FFDM units included in this paper and should reduce the survey time spent on dose calculations. (authors)
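The conversion model described, calculated MGD regressed on displayed MGD with phantom thickness as a second regressor, can be sketched with ordinary least squares. The survey numbers below are hypothetical illustrations, not the paper's measurements:

```python
import numpy as np

# Hypothetical survey data: displayed MGD (mGy) and PMMA thickness (cm)
# against the conventionally calculated Monte Carlo MGD (mGy).
displayed = np.array([0.71, 0.95, 1.21, 1.60, 2.05, 2.61])
thickness = np.array([2.0, 3.0, 4.0, 4.5, 6.0, 7.0])
calculated = np.array([0.70, 0.96, 1.24, 1.68, 2.21, 2.90])

# Design matrix with intercept: MGD_calc ~ b0 + b1*MGD_disp + b2*thickness
X = np.column_stack([np.ones_like(displayed), displayed, thickness])
coef, *_ = np.linalg.lstsq(X, calculated, rcond=None)

predicted = X @ coef
r2 = 1 - np.sum((calculated - predicted) ** 2) / np.sum(
    (calculated - calculated.mean()) ** 2
)
```

A physicist surveying one of these FFDM units would fit `coef` once from a phantom series and then apply it to convert displayed doses, which is exactly the time saving the abstract points to.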

  14. An evaluation and comparison of intraventricular, intraparenchymal, and fluid-coupled techniques for intracranial pressure monitoring in patients with severe traumatic brain injury.

    Science.gov (United States)

    Vender, John; Waller, Jennifer; Dhandapani, Krishnan; McDonnell, Dennis

    2011-08-01

    Intracranial pressure measurements have become one of the mainstays of traumatic brain injury management. Various technologies exist to monitor intracranial pressure from a variety of locations. Transducers are usually placed to assess pressure in the brain parenchyma and the intra-ventricular fluid, which are the two most widely accepted compartmental monitoring sites. The individual reliability and inter-reliability of these devices with and without cerebrospinal fluid diversion is not clear. The predictive capability of monitors in both of these sites to local, regional, and global changes also needs further clarification. The technique of monitoring intraventricular pressure with a fluid-coupled transducer system is also reviewed. There has been little investigation into the relationship among pressure measurements obtained from these two sources using these three techniques. Eleven consecutive patients with severe, closed traumatic brain injury not requiring intracranial mass lesion evacuation were admitted into this prospective study. Each patient underwent placement of a parenchymal and intraventricular pressure monitor. The ventricular catheter tubing was also connected to a sensor for fluid-coupled measurement. Pressure from all three sources was measured hourly with and without ventricular drainage. Statistically significant correlation within each monitoring site was seen. No monitoring location was more predictive of global pressure changes or more responsive to pressure changes related to patient stimulation. However, the intraventricular pressure measurements were not reliable in the presence of cerebrospinal fluid drainage whereas the parenchymal measurements remained unaffected. Intraparenchymal pressure monitoring provides equivalent, statistically similar pressure measurements when compared to intraventricular monitors in all care and clinical settings. This is particularly valuable when uninterrupted cerebrospinal fluid drainage is desirable.

  15. Physicochemical characterization of titanium dioxide pigments using various techniques for size determination and asymmetric flow field flow fractionation hyphenated with inductively coupled plasma mass spectrometry.

    Science.gov (United States)

    Helsper, Johannes P F G; Peters, Ruud J B; van Bemmel, Margaretha E M; Rivera, Zahira E Herrera; Wagner, Stephan; von der Kammer, Frank; Tromp, Peter C; Hofmann, Thilo; Weigel, Stefan

    2016-09-01

    Seven commercial titanium dioxide pigments and two other well-defined TiO2 materials (TiMs) were physicochemically characterised using asymmetric flow field flow fractionation (aF4) for separation, various techniques to determine size distribution and inductively coupled plasma mass spectrometry (ICPMS) for chemical characterization. The aF4-ICPMS conditions were optimised and validated for linearity, limit of detection, recovery, repeatability and reproducibility, all indicating good performance. Multi-element detection with aF4-ICPMS showed that some commercial pigments contained zirconium co-eluting with titanium in aF4. The other two TiMs, NM103 and NM104, contained aluminium as integral part of the titanium peak eluting in aF4. The materials were characterised using various size determination techniques: retention time in aF4, aF4 hyphenated with multi-angle laser light spectrometry (MALS), single particle ICPMS (spICPMS), scanning electron microscopy (SEM) and particle tracking analysis (PTA). PTA appeared inappropriate. For the other techniques, size distribution patterns were quite similar, i.e. high polydispersity with diameters from 20 to >700 nm, a modal peak between 200 and 500 nm and a shoulder at 600 nm. Number-based size distribution techniques as spICPMS and SEM showed smaller modal diameters than aF4-UV, from which mass-based diameters are calculated. With aF4-MALS calculated, light-scattering-based "diameters of gyration" (Øg) are similar to hydrodynamic diameters (Øh) from aF4-UV analyses and diameters observed with SEM, but much larger than with spICPMS. A Øg/Øh ratio of about 1 indicates that the TiMs are oblate spheres or fractal aggregates. SEM observations confirm the latter structure. The rationale for differences in modal peak diameter is discussed.

  16. Ferromagnetic Spin Coupling as the Origin of 0.7 Anomaly in Quantum Point Contacts

    OpenAIRE

    Aryanpour, K.; Han, J. E.

    2008-01-01

    We study one-dimensional itinerant electron models with ferromagnetic coupling to investigate the origin of 0.7 anomaly in quantum point contacts. Linear conductance calculations from the quantum Monte Carlo technique for spin interactions of different spatial range suggest that $0.7(2e^{2}/h)$ anomaly results from a strong interaction of low-density conduction electrons to ferromagnetic fluctuations formed across the potential barrier. The conductance plateau appears due to the strong incohe...

  17. Application of a microwave-based desolvation system for multi-elemental analysis of wine by inductively coupled plasma based techniques

    Energy Technology Data Exchange (ETDEWEB)

    Grindlay, Guillermo [Department of Analytical Chemistry, Nutrition and Food Sciences, University of Alicante, P.O. Box 99, 03080 Alicante (Spain)], E-mail: guillermo.grindlay@ua.es; Mora, Juan; Maestre, Salvador; Gras, Luis [Department of Analytical Chemistry, Nutrition and Food Sciences, University of Alicante, P.O. Box 99, 03080 Alicante (Spain)

    2008-11-23

    Elemental wine analysis is often required from a nutritional, toxicological, origin and authenticity point of view. Inductively coupled plasma based techniques are usually employed for this analysis because of their multi-elemental capabilities and good limits of detection. However, the accurate analysis of wine samples strongly depends on their matrix composition (i.e. salts, ethanol, organic acids) since they lead to both spectral and non-spectral interferences. To mitigate ethanol (up to 10% w/w) related matrix effects in inductively coupled plasma atomic emission spectrometry (ICP-AES), a microwave-based desolvation system (MWDS) can be successfully employed. This finding suggests that the MWDS could be employed for elemental wine analysis. The goal of this work is to evaluate the applicability of the MWDS for elemental wine analysis in ICP-AES and inductively coupled plasma mass spectrometry (ICP-MS). For the sake of comparison a conventional sample introduction system (i.e. pneumatic nebulizer attached to a spray chamber) was employed. Matrix effects, precision, accuracy and analysis throughput have been selected as comparison criteria. For ICP-AES measurements, wine samples can be directly analyzed without any sample treatment (i.e. sample dilution or digestion) using pure aqueous standards although internal standardization (IS) (i.e. Sc) is required. The behaviour of the MWDS operating with organic solutions in ICP-MS has been characterized for the first time. In this technique the MWDS has shown its efficiency to mitigate ethanol related matrix effects up to concentrations of 1% (w/w). Therefore, wine samples must be diluted to reduce the ethanol concentration up to this value. The results obtained have shown that the MWDS is a powerful device for the elemental analysis of wine samples in both ICP-AES and ICP-MS. 
In general, the MWDS has some attractive advantages for elemental wine analysis when compared to a conventional sample introduction system such

  18. Application of a microwave-based desolvation system for multi-elemental analysis of wine by inductively coupled plasma based techniques

    International Nuclear Information System (INIS)

    Grindlay, Guillermo; Mora, Juan; Maestre, Salvador; Gras, Luis

    2008-01-01

    Elemental wine analysis is often required from a nutritional, toxicological, origin and authenticity point of view. Inductively coupled plasma based techniques are usually employed for this analysis because of their multi-elemental capabilities and good limits of detection. However, the accurate analysis of wine samples strongly depends on their matrix composition (i.e. salts, ethanol, organic acids) since they lead to both spectral and non-spectral interferences. To mitigate ethanol (up to 10% w/w) related matrix effects in inductively coupled plasma atomic emission spectrometry (ICP-AES), a microwave-based desolvation system (MWDS) can be successfully employed. This finding suggests that the MWDS could be employed for elemental wine analysis. The goal of this work is to evaluate the applicability of the MWDS for elemental wine analysis in ICP-AES and inductively coupled plasma mass spectrometry (ICP-MS). For the sake of comparison a conventional sample introduction system (i.e. pneumatic nebulizer attached to a spray chamber) was employed. Matrix effects, precision, accuracy and analysis throughput have been selected as comparison criteria. For ICP-AES measurements, wine samples can be directly analyzed without any sample treatment (i.e. sample dilution or digestion) using pure aqueous standards although internal standardization (IS) (i.e. Sc) is required. The behaviour of the MWDS operating with organic solutions in ICP-MS has been characterized for the first time. In this technique the MWDS has shown its efficiency to mitigate ethanol related matrix effects up to concentrations of 1% (w/w). Therefore, wine samples must be diluted to reduce the ethanol concentration up to this value. The results obtained have shown that the MWDS is a powerful device for the elemental analysis of wine samples in both ICP-AES and ICP-MS. 
In general, the MWDS has some attractive advantages for elemental wine analysis when compared to a conventional sample introduction system such

  19. Automated Monte Carlo biasing for photon-generated electrons near surfaces.

    Energy Technology Data Exchange (ETDEWEB)

    Franke, Brian Claude; Crawford, Martin James; Kensek, Ronald Patrick

    2009-09-01

    This report describes efforts to automate the biasing of coupled electron-photon Monte Carlo particle transport calculations. The approach was based on weight-windows biasing. Weight-window settings were determined using adjoint-flux Monte Carlo calculations. A variety of algorithms were investigated for adaptivity of the Monte Carlo tallies. Tree data structures were used to investigate spatial partitioning. Functional-expansion tallies were used to investigate higher-order spatial representations.
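The weight-window game itself is easy to sketch: particles above the window are split, those below play Russian roulette. The toy below (a list-of-tuples particle bank with arbitrarily chosen window bounds) only illustrates that mechanic, not the adjoint-driven window generation the report investigates:

```python
import random

def apply_weight_window(particles, w_low, w_high, rng=random):
    """Split or roulette particles so surviving weights land in [w_low, w_high].

    particles: list of (weight, state) tuples; returns the adjusted bank.
    """
    out = []
    for w, state in particles:
        if w > w_high:                       # split heavy particles
            n = int(w / w_high) + 1
            out.extend([(w / n, state)] * n)
        elif w < w_low:                      # Russian roulette on light ones
            if rng.random() < w / w_low:     # survive with probability w/w_low
                out.append((w_low, state))   # survivor carries weight w_low
            # else: particle is killed; weight conserved only in expectation
        else:
            out.append((w, state))
    return out

random.seed(7)
bank = [(5.0, "a"), (0.01, "b"), (0.5, "c")]
adjusted = apply_weight_window(bank, w_low=0.1, w_high=1.0)
```

Splitting conserves weight exactly (particle "a" becomes six copies of weight 5/6), while roulette conserves it only on average; both keep every surviving weight inside the window, which is what controls the tally variance.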

  20. Monte Carlo applications to radiation shielding problems

    International Nuclear Information System (INIS)

    Subbaiah, K.V.

    2009-01-01

    transport in complex geometries is straightforward, while even the simplest finite geometries (e.g., thin foils) are very difficult to be dealt with by the transport equation. The main drawback of the Monte Carlo method lies in its random nature: all the results are affected by statistical uncertainties, which can be reduced at the expense of increasing the sampled population, and, hence, the computation time. Under special circumstances, the statistical uncertainties may be lowered by using variance-reduction techniques. Monte Carlo methods tend to be used when it is infeasible or impossible to compute an exact result with a deterministic algorithm. The term Monte Carlo was coined in the 1940s by physicists working on nuclear weapon projects in the Los Alamos National Laboratory
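One of the variance-reduction techniques alluded to, implicit capture (survival biasing), can be demonstrated on a deliberately simplified slab problem where the exact answer is known. The geometry and cross sections below are invented for illustration: particles stream forward with unit mean free path and no angular scattering, so the exact transmission is exp(-p_abs·t):

```python
import math
import random

def slab_transmission(t_mfp, p_abs, n, implicit=False, seed=0):
    """Toy shielding tally for a slab of thickness t_mfp (in mean free paths).

    Analog mode kills a collided particle with probability p_abs; implicit
    capture keeps it alive and multiplies its weight by (1 - p_abs), so every
    history contributes to the tally.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x, w = 0.0, 1.0
        alive = True
        while alive:
            x += rng.expovariate(1.0)        # distance to the next collision
            if x >= t_mfp:                   # escaped through the far face
                total += w
                alive = False
            elif implicit:
                w *= 1.0 - p_abs             # survive with reduced weight
            elif rng.random() < p_abs:       # analog absorption
                alive = False
    return total / n

exact = math.exp(-0.8 * 5.0)                 # analytic transmission
analog = slab_transmission(5.0, 0.8, 50_000)
biased = slab_transmission(5.0, 0.8, 50_000, implicit=True)
```

For deep-penetration problems like this one (transmission ~2 %%), the analog game wastes most histories on absorbed particles, while implicit capture scores every history, the simplest illustration of the variance-reduction trade-off described above.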

  1. Monte Carlo methods for shield design calculations

    International Nuclear Information System (INIS)

    Grimstone, M.J.

    1974-01-01

    A suite of Monte Carlo codes is being developed for use on a routine basis in commercial reactor shield design. The methods adopted for this purpose include the modular construction of codes, simplified geometries, automatic variance reduction techniques, continuous energy treatment of cross section data, and albedo methods for streaming. Descriptions are given of the implementation of these methods and of their use in practical calculations. 26 references. (U.S.)

  2. Time step length versus efficiency of Monte Carlo burnup calculations

    International Nuclear Information System (INIS)

    Dufek, Jan; Valtavirta, Ville

    2014-01-01

    Highlights: • Time step length largely affects efficiency of MC burnup calculations. • Efficiency of MC burnup calculations improves with decreasing time step length. • Results were obtained from SIE-based Monte Carlo burnup calculations. - Abstract: We demonstrate that efficiency of Monte Carlo burnup calculations can be largely affected by the selected time step length. This study employs the stochastic implicit Euler based coupling scheme for Monte Carlo burnup calculations that performs a number of inner iteration steps within each time step. In a series of calculations, we vary the time step length and the number of inner iteration steps; the results suggest that Monte Carlo burnup calculations get more efficient as the time step length is reduced. More time steps must be simulated as they get shorter; however, this is more than compensated by the decrease in computing cost per time step needed for achieving a certain accuracy
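The flavour of the stochastic implicit Euler coupling, an end-of-step flux that is re-evaluated and running-averaged over inner iterations, can be shown on a one-nuclide toy problem. Everything here is an assumption for illustration: the cross section, the made-up self-shielded flux `phi_of_n`, and the step lengths. In a real calculation the call to `phi_of_n` would be a Monte Carlo transport solve:

```python
def implicit_euler_step(n0, dt, sigma, phi_of_n, inner_iters=50):
    """One time step of a deterministic stand-in for the SIE coupling scheme:
    iterate the end-of-step flux, averaging it over the inner iterations."""
    n_end = n0
    phi_avg = 0.0
    for i in range(1, inner_iters + 1):
        phi = phi_of_n(n_end)                       # stands in for a transport solve
        phi_avg += (phi - phi_avg) / i              # running average over iterations
        n_end = n0 / (1.0 + dt * sigma * phi_avg)   # implicit Euler, dN/dt = -sigma*phi*N
    return n_end

def march(dt, steps, sigma=2.0, phi_of_n=lambda n: 1.0 / (1.0 + n)):
    """Deplete a single nuclide from N(0) = 1 to t = dt*steps."""
    n = 1.0
    for _ in range(steps):
        n = implicit_euler_step(n, dt, sigma, phi_of_n)
    return n

coarse = march(0.5, 2)     # two long steps to t = 1
fine = march(0.125, 8)     # eight short steps to the same end time
```

The exact solution of this toy ODE at t = 1 is N ≈ 0.2785 (the root of ln N + N = -1); the short-step march lands closer to it than the long-step one, consistent with the abstract's observation that shorter steps improve accuracy per unit of cost.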

  3. A Monte Carlo method for the simulation of coagulation and nucleation based on weighted particles and the concepts of stochastic resolution and merging

    Energy Technology Data Exchange (ETDEWEB)

    Kotalczyk, G., E-mail: Gregor.Kotalczyk@uni-due.de; Kruis, F.E.

    2017-07-01

    Monte Carlo simulations based on weighted simulation particles can solve a variety of population balance problems and allow thus to formulate a solution-framework for many chemical engineering processes. This study presents a novel concept for the calculation of coagulation rates of weighted Monte Carlo particles by introducing a family of transformations to non-weighted Monte Carlo particles. The tuning of the accuracy (named ‘stochastic resolution’ in this paper) of those transformations allows the construction of a constant-number coagulation scheme. Furthermore, a parallel algorithm for the inclusion of newly formed Monte Carlo particles due to nucleation is presented in the scope of a constant-number scheme: the low-weight merging. This technique is found to create significantly less statistical simulation noise than the conventional technique (named ‘random removal’ in this paper). Both concepts are combined into a single GPU-based simulation method which is validated by comparison with the discrete-sectional simulation technique. Two test models describing a constant-rate nucleation coupled to a simultaneous coagulation in 1) the free-molecular regime or 2) the continuum regime are simulated for this purpose.
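The "low-weight merging" idea, keeping the particle number constant by folding the lightest simulation particles into one while conserving total statistical weight and the first moment, can be sketched as follows (each particle is a hypothetical (weight, size) tuple; the numbers are invented):

```python
def merge_lowest(particles):
    """Merge the two lowest-weight particles into one, conserving the total
    weight and the weighted mean size (first moment)."""
    particles = sorted(particles, key=lambda p: p[0])
    (w1, x1), (w2, x2) = particles[0], particles[1]
    merged = (w1 + w2, (w1 * x1 + w2 * x2) / (w1 + w2))
    return [merged] + particles[2:]

def insert_nucleated(particles, new_particle, n_max):
    """Constant-number scheme: add a freshly nucleated particle and, if the
    ensemble is full, apply low-weight merging to restore the count."""
    particles = particles + [new_particle]
    if len(particles) > n_max:
        particles = merge_lowest(particles)
    return particles

ensemble = [(1.0, 10.0), (2.0, 20.0), (0.5, 5.0)]
ensemble = insert_nucleated(ensemble, (0.1, 1.0), n_max=3)
total_w = sum(w for w, _ in ensemble)
mean_x = sum(w * x for w, x in ensemble) / total_w
```

Merging only the lightest particles perturbs the represented population least, which is the intuition behind the paper's finding that this creates less statistical noise than randomly removing particles.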

  4. Monte Carlo simulation of activity measurements by means of 4πβ-γ coincidence system

    International Nuclear Information System (INIS)

    Takeda, Mauro N.; Dias, Mauro S.; Koskinas, Marina F.

    2004-01-01

    The methodology for simulating all detection processes in a 4πβ-γ coincidence system by means of the Monte Carlo technique is described. The goal is to predict the behavior of the observed activity as a function of the 4πβ detector efficiency. In this approach, the information contained in the decay scheme is used for determining the contribution of all radiations emitted by the selected radionuclide, to the measured spectra by each detector. This simulation yields the shape of the coincidence spectrum, allowing the choice of suitable gamma-ray windows for which the activity can be obtained with maximum accuracy. The simulation can predict a detailed description of the extrapolation curve, mainly in the region where the 4πβ detector efficiency approaches 100%, which is experimentally unreachable due to self absorption of low energy electrons in the radioactive source substrate. The theoretical work is being developed with MCNP Monte Carlo code, applied to a gas-flow proportional counter of 4π geometry, coupled to a pair of NaI(Tl) crystals. The calculated efficiencies are compared to experimental results. The extrapolation curve can be obtained by means of another Monte Carlo algorithm, being developed in the present work, to take into account fundamental characteristics of a complex decay scheme, including different types of radiation and transitions. The present paper shows preliminary calculated values obtained by the simulation and compared to predicted analytical values for a simple decay scheme. (author)
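For a simple beta-gamma decay scheme the principle behind the coincidence method is easy to simulate: with independent detection probabilities, the product Nβ·Nγ/Nc estimates the true number of decays regardless of the efficiencies. The toy analog version below uses invented efficiencies and counts, not the MCNP model of the paper:

```python
import random

def simulate_coincidence(n0, eff_beta, eff_gamma, seed=42):
    """Analog simulation of a 4pi(beta)-gamma coincidence measurement for a
    simple beta-gamma decay scheme: each decay may fire the beta counter,
    the gamma counter, or both."""
    rng = random.Random(seed)
    nb = ng = nc = 0
    for _ in range(n0):
        beta = rng.random() < eff_beta       # beta branch detected?
        gamma = rng.random() < eff_gamma     # gamma branch detected?
        nb += beta
        ng += gamma
        nc += beta and gamma                 # coincidence channel
    return nb, ng, nc

n0 = 100_000
nb, ng, nc = simulate_coincidence(n0, eff_beta=0.85, eff_gamma=0.30)
n0_estimate = nb * ng / nc                   # activity estimate, N0 = Nb*Ng/Nc
```

The efficiency-extrapolation behaviour the abstract describes corresponds to repeating such a simulation while sweeping `eff_beta` toward 1, which is exactly the region unreachable experimentally because of source self-absorption.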

  5. Kinetics of electron-positron pair plasmas using an adaptive Monte Carlo method

    International Nuclear Information System (INIS)

    Pilla, R.P.; Shaham, J.

    1997-01-01

    A new algorithm for implementing the adaptive Monte Carlo method is given. It is used to solve the Boltzmann equations that describe the time evolution of a nonequilibrium electron-positron pair plasma containing high-energy photons. These are coupled nonlinear integro-differential equations. The collision kernels for the photons as well as pairs are evaluated for Compton scattering, pair annihilation and creation, bremsstrahlung, and Coulomb collisions. They are given as multidimensional integrals which are valid for all energies. For an homogeneous and isotropic plasma with no particle escape, the equilibrium solution is expressed analytically in terms of the initial conditions. For two specific cases, for which the photon and the pair spectra are initially constant or have a power-law distribution within the given limits, the time evolution of the plasma is analyzed using the new method. The final spectra are found to be in a good agreement with the analytical solutions. The new algorithm is faster than the Monte Carlo scheme based on uniform sampling and more flexible than the numerical methods used in the past, which do not involve Monte Carlo sampling. It is also found to be very stable. Some astrophysical applications of this technique are discussed. copyright 1997 The American Astronomical Society

  6. Severe Hemolysis in a Patient With Erythrocytosis During Coupled Plasma Filtration Adsorption Therapy Was Prevented by Changing From Membrane-Based Technique to a Centrifuge-Based One.

    Science.gov (United States)

    Fan, Rong; Wu, Buyun; Kong, Ling; Gong, Dehua

    2016-01-01

    Coupled plasma filtration adsorption (CPFA) usually uses a membrane to separate plasma from blood. Here, we report a case in which a patient with erythrocytosis experienced severe hemolysis and membrane rupture during CPFA, which was avoided by changing from a membrane-based technique to a centrifuge-based one. A 66-year-old man was to receive CPFA for severe hyperbilirubinemia (total bilirubin 922 μmol/L, direct bilirubin 638 μmol/L) caused by obstruction of the biliary tract. He had had erythrocytosis (hemoglobin 230 g/L, hematocrit 0.634) for years because of untreated tetralogy of Fallot. Severe hemolysis and membrane rupture occurred immediately after blood entered the plasma separator, even at a low flow rate (50 mL/min), and persisted after the separator was replaced. Finally, a centrifugal plasma separation technique was used for CPFA in this patient, and no hemolysis occurred. After 3 sessions of CPFA, the total bilirubin level decreased to 199 μmol/L, an average decline of 35% per session. Thereafter, the patient received endoscopic biliary stent implantation, and the total bilirubin level returned to nearly normal. Therefore, centrifugal plasma separation can also be used in CPFA and may be superior to membrane-based separation in patients with hyperviscosity.

  7. Constructing a framework for risk analyses of climate change effects on the water budget of differently sloped vineyards with a numeric simulation using the Monte Carlo method coupled to a water balance model

    Directory of Open Access Journals (Sweden)

    Marco eHofmann

    2014-12-01

    Full Text Available Grapes for wine production are a highly climate sensitive crop and vineyard water budget is a decisive factor in quality formation. In order to conduct risk assessments for climate change effects in viticulture, models are needed which can be applied to complete growing regions. We first modified an existing simplified geometric vineyard model of radiation interception and resulting water use to incorporate numerical Monte Carlo simulations and the physical aspects of radiation interactions between canopy and vineyard slope and azimuth. We then used four regional climate models to assess possible effects on the water budget of selected vineyard sites up to 2100. The model was developed to describe the partitioning of short-wave radiation between grapevine canopy and soil surface, respectively green cover, necessary to calculate vineyard evapotranspiration. Soil water storage was allocated to two sub reservoirs. The model was adapted for steep slope vineyards based on coordinate transformation and validated against measurements of grapevine sap flow and soil water content determined down to 1.6 m depth at three different sites over two years. The results showed good agreement of modelled and observed soil water dynamics of vineyards with large variations in site specific soil water holding capacity and viticultural management. Simulated sap flow was in overall good agreement with measured sap flow but site-specific responses of sap flow to potential evapotranspiration were observed. The analyses of climate change impacts on vineyard water budget demonstrated the importance of site-specific assessment due to natural variations in soil water holding capacity. The improved model was capable of describing seasonal and site-specific dynamics in soil water content and could be used in an amended version to estimate changes in the water budget of entire grape growing areas due to evolving climatic changes.

  8. Constructing a framework for risk analyses of climate change effects on the water budget of differently sloped vineyards with a numeric simulation using the Monte Carlo method coupled to a water balance model.

    Science.gov (United States)

    Hofmann, Marco; Lux, Robert; Schultz, Hans R

    2014-01-01

    Grapes for wine production are a highly climate sensitive crop and vineyard water budget is a decisive factor in quality formation. In order to conduct risk assessments for climate change effects in viticulture, models are needed which can be applied to complete growing regions. We first modified an existing simplified geometric vineyard model of radiation interception and resulting water use to incorporate numerical Monte Carlo simulations and the physical aspects of radiation interactions between canopy and vineyard slope and azimuth. We then used four regional climate models to assess possible effects on the water budget of selected vineyard sites up to 2100. The model was developed to describe the partitioning of short-wave radiation between grapevine canopy and soil surface, respectively, green cover, necessary to calculate vineyard evapotranspiration. Soil water storage was allocated to two sub reservoirs. The model was adapted for steep slope vineyards based on coordinate transformation and validated against measurements of grapevine sap flow and soil water content determined down to 1.6 m depth at three different sites over 2 years. The results showed good agreement of modeled and observed soil water dynamics of vineyards with large variations in site specific soil water holding capacity (SWC) and viticultural management. Simulated sap flow was in overall good agreement with measured sap flow but site-specific responses of sap flow to potential evapotranspiration were observed. The analyses of climate change impacts on vineyard water budget demonstrated the importance of site-specific assessment due to natural variations in SWC. The improved model was capable of describing seasonal and site-specific dynamics in soil water content and could be used in an amended version to estimate changes in the water budget of entire grape growing areas due to evolving climatic changes.

  9. Combinatorial nuclear level density by a Monte Carlo method

    International Nuclear Information System (INIS)

    Cerf, N.

    1994-01-01

    We present a new combinatorial method for the calculation of the nuclear level density. It is based on a Monte Carlo technique, in order to avoid a direct counting procedure which is generally impracticable for high-A nuclei. The Monte Carlo simulation, making use of the Metropolis sampling scheme, allows a computationally fast estimate of the level density for many fermion systems in large shell model spaces. We emphasize the advantages of this Monte Carlo approach, particularly concerning the prediction of the spin and parity distributions of the excited states, and compare our results with those derived from a traditional combinatorial or a statistical method. Such a Monte Carlo technique seems very promising to determine accurate level densities in a large energy range for nuclear reaction calculations

  10. Guideline of Monte Carlo calculation. Neutron/gamma ray transport simulation by Monte Carlo method

    CERN Document Server

    2002-01-01

    This report condenses basic theories and advanced applications of neutron/gamma ray transport calculations in many fields of nuclear energy research. Chapters 1 through 5 treat the historical progress of Monte Carlo methods, general issues of variance reduction techniques, and cross section libraries used in continuous energy Monte Carlo codes. In chapter 6, the following issues are discussed: fusion benchmark experiments, design of ITER, experiment analyses of fast critical assembly, core analyses of JMTR, simulation of pulsed neutron experiment, core analyses of HTTR, duct streaming calculations, bulk shielding calculations, and neutron/gamma ray transport calculations of the Hiroshima atomic bomb. Chapters 8 and 9 treat function enhancements of the MCNP and MVP codes, and parallel processing of Monte Carlo calculations, respectively. Important references are attached at the end of this report.

  11. The Monte Carlo method the method of statistical trials

    CERN Document Server

    Shreider, YuA

    1966-01-01

    The Monte Carlo Method: The Method of Statistical Trials is a systematic account of the fundamental concepts and techniques of the Monte Carlo method, together with its range of applications. Some of these applications include the computation of definite integrals, neutron physics, and in the investigation of servicing processes. This volume is comprised of seven chapters and begins with an overview of the basic features of the Monte Carlo method and typical examples of its application to simple problems in computational mathematics. The next chapter examines the computation of multi-dimensio

  12. Development of a Fourier transform infrared spectroscopy coupled to UV-Visible analysis technique for aminosides and glycopeptides quantitation in antibiotic locks.

    Science.gov (United States)

    Sayet, G; Sinegre, M; Ben Reguiga, M

    2014-01-01

    The antibiotic lock technique maintains catheters' sterility in high-risk patients with long-term parenteral nutrition. In our institution, vancomycin, teicoplanin, amikacin and gentamicin locks are prepared in the pharmaceutical department. In order to ensure patient safety and to comply with regulatory requirements, antibiotic locks are submitted to qualitative and quantitative assays prior to their release. The aim of this study was to develop an alternative quantitation technique for each of these 4 antibiotics, using Fourier transform infrared (FTIR) spectroscopy coupled to UV-Visible analysis, and to compare results to HPLC or immunochemistry assays. Prevalidation studies established the spectroscopic conditions used for antibiotic lock quantitation: FTIR/UV combinations were used for amikacin (1091-1115cm(-1) and 208-224nm), vancomycin (1222-1240cm(-1) and 276-280nm), and teicoplanin (1226-1230cm(-1) and 278-282nm). Gentamicin was quantified with FTIR only (1045-1169cm(-1) and 2715-2850cm(-1)) due to interferences in the UV domain from parabens, preservatives present in the commercial brand used to prepare the locks. For all antibiotic locks, the method was linear (R(2)=0.996 to 0.999), accurate, repeatable (intraday RSD%: from 2.9 to 7.1% and inter-day RSD%: 2.9 to 5.1%) and precise. Compared to the reference methods, the FTIR/UV method appeared tightly correlated (Pearson factor: 97.4 to 99.9%) and did not show significant differences in recovery determinations. We developed a new, simple, reliable analysis technique for antibiotic quantitation in locks using an original association of FTIR and UV analysis, allowing a short analysis time to identify and quantify the studied antibiotics. Copyright © 2013 Elsevier Masson SAS. All rights reserved.

  13. Monte Carlo method for solving a parabolic problem

    Directory of Open Access Journals (Sweden)

    Tian Yi

    2016-01-01

    Full Text Available In this paper, we present a numerical method based on random sampling for a parabolic problem. This method combines the Crank-Nicolson method with the Monte Carlo method. In the numerical algorithm, we first discretize the governing equations by the Crank-Nicolson method and obtain a large sparse system of linear algebraic equations, then use the Monte Carlo method to solve the linear algebraic equations. To illustrate the usefulness of this technique, we apply it to some test problems.
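
    Solving a sparse linear system by Monte Carlo is classically done with random walks that estimate the Neumann series of x = Hx + b term by term. A minimal sketch follows; the 2x2 system, the uniform transition probabilities, and the absorption rule are illustrative assumptions, not the paper's algorithm.

```python
import random

def mc_solve(H, b, i, n_walks=20000, seed=1):
    """Estimate component i of x = H x + b (the Neumann series sum_k H^k b)
    by random walks: each transition multiplies the walk weight by
    H[s][t] / p(s -> t), and the walk scores w * b[state] at every visit."""
    n = len(b)
    rng = random.Random(seed)
    p_move = 1.0 / (n + 1)          # uniform transitions plus one absorbing state
    total = 0.0
    for _ in range(n_walks):
        s, w, score = i, 1.0, 0.0
        while True:
            score += w * b[s]
            t = rng.randrange(n + 1)     # n states + 1 absorbing outcome
            if t == n:
                break                    # walk absorbed, tally the score
            w *= H[s][t] / p_move        # unbiased weight correction
            s = t
        total += score
    return total / n_walks

# Small test system x = Hx + b; H is chosen so the Neumann series converges.
H = [[0.1, 0.2], [0.3, 0.1]]
b = [1.0, 2.0]
x0 = mc_solve(H, b, 0)    # exact: (I - H)^{-1} b, first component = 1.3/0.75
```

A useful property for discretized PDEs is that one walk estimates a single solution component, so the cost does not grow with the size of the full grid.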

  14. Propagation of Nuclear Data Uncertainties in Integral Measurements by Monte-Carlo Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Noguere, G.; Bernard, D.; De Saint-Jean, C. [CEA Cadarache, 13 - Saint Paul lez Durance (France)

    2006-07-01

    Full text of the publication follows: The generation of multi-group cross sections together with relevant uncertainties is fundamental to assess the quality of integral data. The key information needed to propagate the microscopic experimental uncertainties to macroscopic reactor calculations is (1) the experimental covariance matrices, (2) the correlations between the parameters of the model, and (3) the covariance matrices for the multi-group cross sections. The propagation of microscopic errors by the Monte-Carlo technique was applied to determine the accuracy of the integral trends provided by the OSMOSE experiment carried out in the MINERVE reactor of the CEA Cadarache. The technique consists of coupling resonance shape analysis and deterministic codes. The integral trend and its accuracy obtained on the {sup 237}Np(n,{gamma}) reaction will be presented. (author)

  15. Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes

    International Nuclear Information System (INIS)

    Smith, L.M.; Hochstedler, R.D.

    1997-01-01

    Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code)
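
    The first of the listed techniques, replacing a linear search with a binary version, typically targets sorted-grid lookups such as locating the interval that contains an energy point. A minimal sketch follows; the grid values and function names are illustrative, not the ITS source code.

```python
import bisect

def linear_lookup(grid, x):
    """Original-style linear scan: find i with grid[i] <= x < grid[i+1]."""
    for i in range(len(grid) - 1):
        if grid[i] <= x < grid[i + 1]:
            return i
    raise ValueError("x outside grid")

def binary_lookup(grid, x):
    """Binary-search replacement: identical result in O(log n) comparisons."""
    i = bisect.bisect_right(grid, x) - 1
    if 0 <= i < len(grid) - 1:
        return i
    raise ValueError("x outside grid")

# Both routines must agree on every query before the fast version is adopted.
grid = [0.0, 1.0, 2.5, 5.0, 10.0]
queries = (0.0, 0.5, 2.5, 9.9)
idx_linear = [linear_lookup(grid, x) for x in queries]
idx_binary = [binary_lookup(grid, x) for x in queries]
```

Because the lookup sits inside the inner tracking loop, reducing it from O(n) to O(log n) compounds over millions of particle histories.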

  16. Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes

    Science.gov (United States)

    Smith, L. M.; Hochstedler, R. D.

    1997-02-01

    Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).

  17. Monte Carlo Simulation of an American Option

    Directory of Open Access Journals (Sweden)

    Gikiri Thuo

    2007-04-01

    Full Text Available We implement gradient estimation techniques for sensitivity analysis of option pricing which can be efficiently employed in Monte Carlo simulation. Using these techniques, we can simultaneously obtain an estimate of the option value together with estimates of the sensitivities of the option value to various parameters of the model. After deriving the gradient estimates, we incorporate them in an iterative stochastic approximation algorithm for pricing an option with early-exercise features. We illustrate the procedure using the example of an American call option with a single dividend that is analytically tractable. In particular, we incorporate estimates for the gradient with respect to the early-exercise threshold level.
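
    The gradient-estimation idea can be sketched for the simpler European case, where the same simulated paths yield both the price and a pathwise (infinitesimal perturbation analysis) estimate of delta. The single-step geometric Brownian motion model and the parameters below are illustrative assumptions, not the paper's American-option algorithm.

```python
import math
import random

def mc_call_price_and_delta(S0, K, r, sigma, T, n_paths=200000, seed=7):
    """Monte Carlo price of a European call under geometric Brownian motion,
    with a pathwise estimate of delta computed from the same paths."""
    rng = random.Random(seed)
    disc = math.exp(-r * T)
    drift = (r - 0.5 * sigma ** 2) * T
    vol = sigma * math.sqrt(T)
    price_sum = delta_sum = 0.0
    for _ in range(n_paths):
        ST = S0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        if ST > K:
            price_sum += disc * (ST - K)
            # Pathwise derivative: d(payoff)/dS0 = 1{ST > K} * ST / S0.
            delta_sum += disc * ST / S0
    return price_sum / n_paths, delta_sum / n_paths

# At-the-money call; Black-Scholes gives price ~10.45 and delta ~0.637 here.
price, delta = mc_call_price_and_delta(100.0, 100.0, 0.05, 0.2, 1.0)
```

Obtaining the gradient from the pricing paths themselves, rather than by finite differences over re-simulations, is what makes such estimators cheap enough to drive a stochastic approximation loop.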

  18. Biased Monte Carlo optimization: the basic approach

    International Nuclear Information System (INIS)

    Campioni, Luca; Scardovelli, Ruben; Vestrucci, Paolo

    2005-01-01

    It is well known that the Monte Carlo method is very successful in tackling several kinds of system simulations. It often happens that one has to deal with rare events, and the use of a variance reduction technique is almost mandatory in order to have efficient Monte Carlo applications. The main issue associated with variance reduction techniques is the choice of the value of the biasing parameter. In practice, this task is typically left to the experience of the Monte Carlo user, who has to make many attempts before achieving an advantageous biasing. A valuable result is provided: a methodology and a practical rule for establishing a priori guidance on the choice of the optimal value of the biasing parameter. This result, which has been obtained for a single-component system, has the notable property of being valid for any multicomponent system. In particular, in this paper, the exponential and the uniform biases of exponentially distributed phenomena are investigated thoroughly.
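
    The effect of an exponential bias can be sketched on the tail probability of an exponentially distributed variable: sampling from a flattened exponential and reweighting by the likelihood ratio keeps the estimator unbiased while putting most samples in the rare region. The threshold and bias parameter below are illustrative, not the paper's optimal rule.

```python
import math
import random

def rare_event_prob(lam, a, lam_b, n=100000, seed=3):
    """Importance-sampled estimate of P(X > a) for X ~ Exp(lam):
    draw from a biased Exp(lam_b) and weight each scoring draw by the
    likelihood ratio (lam / lam_b) * exp(-(lam - lam_b) * x)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(lam_b)       # sample from the biased density
        if x > a:
            total += (lam / lam_b) * math.exp(-(lam - lam_b) * x)
    return total / n

# P(X > 20) for lam = 1 is exp(-20) ~ 2.1e-9: essentially unreachable by
# analog sampling, but lam_b = 0.05 puts the bulk of draws past the threshold.
est = rare_event_prob(1.0, 20.0, 0.05)
```

The variance of the estimator depends sharply on lam_b, which is exactly the biasing-parameter choice the paper's a priori rule addresses.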

  19. Nonlinear Monte Carlo model of superdiffusive shock acceleration with magnetic field amplification

    Science.gov (United States)

    Bykov, Andrei M.; Ellison, Donald C.; Osipov, Sergei M.

    2017-03-01

    Fast collisionless shocks in cosmic plasmas convert their kinetic energy flow into the hot downstream thermal plasma with a substantial fraction of energy going into a broad spectrum of superthermal charged particles and magnetic fluctuations. The superthermal particles can penetrate into the shock upstream region producing an extended shock precursor. The cold upstream plasma flow is decelerated by the force provided by the superthermal particle pressure gradient. In high Mach number collisionless shocks, efficient particle acceleration is likely coupled with turbulent magnetic field amplification (MFA) generated by the anisotropic distribution of accelerated particles. This anisotropy is determined by fast particle transport, making the problem strongly nonlinear and multiscale. Here, we present a nonlinear Monte Carlo model of collisionless shock structure with superdiffusive propagation of high-energy Fermi accelerated particles coupled to particle acceleration and MFA, which affords a consistent description of strong shocks. A distinctive feature of the Monte Carlo technique is that it includes the full angular anisotropy of the particle distribution at all precursor positions. The model reveals that the superdiffusive transport of energetic particles (i.e., Lévy-walk propagation) generates a strong quadrupole anisotropy in the precursor particle distribution. The resultant pressure anisotropy of the high-energy particles produces a nonresonant mirror-type instability that amplifies compressible wave modes with wavelengths longer than the gyroradii of the highest-energy protons produced by the shock.

  20. Atomic scale Monte Carlo simulations of BF3 plasma immersion ion implantation in Si

    International Nuclear Information System (INIS)

    La Magna, Antonino; Fisicaro, Giuseppe; Nicotra, Giuseppe; Spiegel, Yohann; Torregrosa, Frank

    2014-01-01

    We present a numerical model aimed at accurately simulating the plasma immersion ion implantation (PIII) process in micro- and nano-patterned Si samples. The code, based on the Monte Carlo approach, is designed to reproduce all the relevant physical phenomena involved in the process. The particle-based simulation technique is fundamental to efficiently compute the material modifications promoted by the plasma implantation at atomic resolution. The accuracy in the description of the process kinetics is achieved by linking (one to one) each virtual Monte Carlo event to each possible atomic phenomenon (e.g. ion penetration, neutral absorption, ion-induced surface modification, etc.). The code is designed to be coupled with a generic plasma status, characterized by the particle types (ions and neutrals), their flow rates and their energy/angle distributions. The coupling with a Poisson solver allows the simulation of the correct trajectories of charged particles in the void regions of the micro-structures. The implemented model is able to predict the 2D implantation profiles and significantly support the process design. (copyright 2014 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  1. Sensitivity analysis for oblique incidence reflectometry using Monte Carlo simulations

    DEFF Research Database (Denmark)

    Kamran, Faisal; Andersen, Peter E.

    2015-01-01

    profiles. This article presents a sensitivity analysis of the technique in turbid media. Monte Carlo simulations are used to investigate the technique and its potential to distinguish the small changes between different levels of scattering. We present various regions of the dynamic range of optical...

  2. Fourier path-integral Monte Carlo methods: Partial averaging

    International Nuclear Information System (INIS)

    Doll, J.D.; Coalson, R.D.; Freeman, D.L.

    1985-01-01

    Monte Carlo Fourier path-integral techniques are explored. It is shown that fluctuation renormalization techniques provide an effective means for treating the effects of high-order Fourier contributions. The resulting formalism is rapidly convergent, is computationally convenient, and has potentially useful variational aspects

  3. Calibration of a gamma spectrometer for natural radioactivity measurement. Experimental measurements and Monte Carlo modelling; Etalonnage d'un spectrometre gamma en vue de la mesure de la radioactivite naturelle. Mesures experimentales et modelisation par techniques de Monte-Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Courtine, Fabien [Laboratoire de Physique Corpusculaire, Universite Blaise Pascal - CNRS/IN2P3, 63000 Aubiere Cedex (France)

    2007-03-15

    This thesis was carried out in the context of thermoluminescence dating. This method requires laboratory measurements of the natural radioactivity. For that purpose, we have been using a germanium spectrometer. To refine its calibration, we modelled the spectrometer using a Monte-Carlo computer code: Geant4. We developed a geometrical model which takes into account the presence of inactive zones and zones of poor charge collection within the germanium crystal. The parameters of the model were adjusted by comparison with experimental results obtained with a source of {sup 137}Cs. It appeared that the form of the inactive zones is less simple than is presented in the specialized literature. This model was extended to the case of a more complex source, with cascade effects and angular correlations between photons: {sup 60}Co. Lastly, applied to extended sources, it gave correct results and allowed us to validate the simulation of matrix effects. (author)

  4. A comparison of sorptive extraction techniques coupled to a new quantitative, sensitive, high throughput GC-MS/MS method for methoxypyrazine analysis in wine.

    Science.gov (United States)

    Hjelmeland, Anna K; Wylie, Philip L; Ebeler, Susan E

    2016-02-01

    Methoxypyrazines are volatile compounds found in plants, microbes, and insects that have potent vegetal and earthy aromas. With sensory detection thresholds in the low ng L(-1) range, modest concentrations of these compounds can profoundly impact the aroma quality of foods and beverages, and high levels can lead to consumer rejection. The wine industry routinely analyzes the most prevalent methoxypyrazine, 2-isobutyl-3-methoxypyrazine (IBMP), to aid in harvest decisions, since concentrations decrease during berry ripening. In addition to IBMP, three other methoxypyrazines IPMP (2-isopropyl-3-methoxypyrazine), SBMP (2-sec-butyl-3-methoxypyrazine), and EMP (2-ethyl-3-methoxypyrazine) have been identified in grapes and/or wine and can impact aroma quality. Despite their routine analysis in the wine industry (mostly IBMP), accurate methoxypyrazine quantitation is hindered by two major challenges: sensitivity and resolution. With extremely low sensory detection thresholds (~8-15 ng L(-1) in wine for IBMP), highly sensitive analytical methods to quantify methoxypyrazines at trace levels are necessary. Here we were able to achieve resolution of IBMP as well as IPMP, EMP, and SBMP from co-eluting compounds using one-dimensional chromatography coupled to positive chemical ionization tandem mass spectrometry. Three extraction techniques, HS-SPME (headspace solid-phase microextraction), SBSE (stir bar sorptive extraction), and HSSE (headspace sorptive extraction), were validated and compared. A 30 min extraction time was used for the HS-SPME and SBSE extraction techniques, while 120 min was necessary to achieve sufficient sensitivity for HSSE extractions. All extraction methods have limits of quantitation (LOQ) at or below 1 ng L(-1) for all four methoxypyrazines analyzed, i.e., LOQs at or below reported sensory detection limits in wine. The method is high throughput, with resolution of all compounds possible with a relatively rapid 27 min GC oven program. Copyright © 2015

  5. ARCHER{sub RT} – A GPU-based and photon-electron coupled Monte Carlo dose computing engine for radiation therapy: Software development and application to helical tomotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Su, Lin; Du, Xining; Liu, Tianyu; Ji, Wei; Xu, X. George, E-mail: xug2@rpi.edu [Nuclear Engineering Program, Rensselaer Polytechnic Institute, Troy, New York 12180 (United States); Yang, Youming; Bednarz, Bryan [Medical Physics, University of Wisconsin, Madison, Wisconsin 53706 (United States); Sterpin, Edmond [Molecular Imaging, Radiotherapy and Oncology, Université catholique de Louvain, Brussels, Belgium 1348 (Belgium)

    2014-07-15

    Purpose: Using the graphical processing units (GPU) hardware technology, an extremely fast Monte Carlo (MC) code ARCHER{sub RT} is developed for radiation dose calculations in radiation therapy. This paper describes the detailed software development and testing for three clinical TomoTherapy® cases: the prostate, lung, and head and neck. Methods: To obtain clinically relevant dose distributions, phase space files (PSFs) created from optimized radiation therapy treatment plan fluence maps were used as the input to ARCHER{sub RT}. Patient-specific phantoms were constructed from patient CT images. Batch simulations were employed to facilitate the time-consuming task of loading large PSFs, and to improve the estimation of statistical uncertainty. Furthermore, two different Woodcock tracking algorithms were implemented and their relative performance was compared. The dose curves of an Elekta accelerator PSF incident on a homogeneous water phantom were benchmarked against DOSXYZnrc. For each of the treatment cases, dose volume histograms and isodose maps were produced from ARCHER{sub RT} and the general-purpose code, GEANT4. The gamma index analysis was performed to evaluate the similarity of voxel doses obtained from these two codes. The hardware accelerators used in this study are one NVIDIA K20 GPU, one NVIDIA K40 GPU, and six NVIDIA M2090 GPUs. In addition, to make a fairer comparison of the CPU and GPU performance, a multithreaded CPU code was developed using OpenMP and tested on an Intel E5-2620 CPU. Results: For the water phantom, the depth dose curve and dose profiles from ARCHER{sub RT} agree well with DOSXYZnrc. For clinical cases, results from ARCHER{sub RT} are compared with those from GEANT4 and good agreement is observed. Gamma index test is performed for voxels whose dose is greater than 10% of maximum dose. For 2%/2mm criteria, the passing rates for the prostate, lung case, and head and neck cases are 99.7%, 98.5%, and 97.2%, respectively. Due to
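
    Woodcock (delta) tracking, one of the two algorithms compared above, samples flight distances against a majorant cross-section and rejects "virtual" collisions, so particles can be tracked through heterogeneous voxels without computing boundary crossings. A minimal CPU sketch follows; the two-region medium and its cross-section values are illustrative assumptions, not the ARCHER{sub RT} GPU kernels.

```python
import math
import random

def woodcock_distance(sigma_of_x, sigma_max, rng):
    """Woodcock (delta) tracking: sample exponential flight steps with the
    majorant cross-section sigma_max; accept a collision at x with
    probability sigma(x)/sigma_max, otherwise the collision is virtual
    and tracking simply continues from x."""
    x = 0.0
    while True:
        x += -math.log(rng.random()) / sigma_max   # step w.r.t. the majorant
        if rng.random() < sigma_of_x(x) / sigma_max:
            return x                               # real collision site

# Two-region slab: sigma = 0.2 for x < 1, sigma = 1.0 beyond.
# Exact mean free-flight distance: (1 - e^{-0.2})/0.2 + e^{-0.2} ~ 1.725.
sigma = lambda x: 0.2 if x < 1.0 else 1.0
rng = random.Random(42)
mean_path = sum(woodcock_distance(sigma, 1.0, rng) for _ in range(50000)) / 50000
```

The rejection loop is branch-light and needs no geometry queries, which is one reason delta tracking maps well onto GPU threads.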

  6. Hypothesis testing of scientific Monte Carlo calculations

    Science.gov (United States)

    Wallerberger, Markus; Gull, Emanuel

    2017-11-01

    The steadily increasing size of scientific Monte Carlo simulations and the desire for robust, correct, and reproducible results necessitates rigorous testing procedures for scientific simulations in order to detect numerical problems and programming bugs. However, the testing paradigms developed for deterministic algorithms have proven to be ill suited for stochastic algorithms. In this paper we demonstrate explicitly how the technique of statistical hypothesis testing, which is in wide use in other fields of science, can be used to devise automatic and reliable tests for Monte Carlo methods, and we show that these tests are able to detect some of the common problems encountered in stochastic scientific simulations. We argue that hypothesis testing should become part of the standard testing toolkit for scientific simulations.
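
    The hypothesis-testing idea can be sketched with a two-sided z-test on a Monte Carlo estimate whose exact value is known; the quarter-disk estimate of pi/4 and the sample size below are arbitrary illustrative choices, not the authors' test suite.

```python
import math
import random

def mc_quarter_disk_z(n=100000, seed=11):
    """Estimate pi/4 by acceptance sampling in the unit square, then form
    the z-statistic of the estimate against the exact value."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 < 1.0)
    p_hat = hits / n
    se = math.sqrt(p_hat * (1.0 - p_hat) / n)   # binomial standard error
    return (p_hat - math.pi / 4.0) / se

z = mc_quarter_disk_z()
# For a correct sampler, |z| should rarely exceed ~3; a persistently large
# |z| across seeds signals a bug or a biased generator, not bad luck.
```

Unlike an exact-match regression test, the pass criterion here is a statistical significance level, which is exactly the paradigm shift the abstract argues for.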

  7. Status of Monte Carlo at Los Alamos

    International Nuclear Information System (INIS)

    Thompson, W.L.; Cashwell, E.D.; Godfrey, T.N.K.; Schrandt, R.G.; Deutsch, O.L.; Booth, T.E.

    1980-05-01

    Four papers were presented by Group X-6 on April 22, 1980, at the Oak Ridge Radiation Shielding Information Center (RSIC) Seminar-Workshop on Theory and Applications of Monte Carlo Methods. These papers are combined into one report for convenience and because they are related to each other. The first paper (by Thompson and Cashwell) is a general survey about X-6 and MCNP and is an introduction to the other three papers. It can also serve as a resume of X-6. The second paper (by Godfrey) explains some of the details of geometry specification in MCNP. The third paper (by Cashwell and Schrandt) illustrates calculating flux at a point with MCNP; in particular, the once-more-collided flux estimator is demonstrated. Finally, the fourth paper (by Thompson, Deutsch, and Booth) is a tutorial on some variance-reduction techniques. It should be required reading for a fledgling Monte Carlo practitioner.

  8. Contribution of the Acoustic Emission technique in the understanding and the modelling of the coupling between creep and damage in concrete

    International Nuclear Information System (INIS)

    Saliba, J.

    2012-01-01

    In order to design reliable concrete structures, prediction of the long-term behaviour of concrete is important. In fact, creep deformation can cause mechanical deterioration and cracking, stress redistribution, prestress loss in prestressed members and, rarely, ruin of the structure. The aim of this research is to gain a better understanding of the interaction between creep and crack growth in concrete. An experimental investigation of the fracture properties of concrete beams submitted to creep bending tests with high levels of sustained load is reported. The influence of creep on the residual capacity and fracture energy of concrete is studied. In parallel, the acoustic emission (AE) technique was used to monitor crack development. The results give a wealth of information on damage evolution and show a decrease in the width of the fracture process zone (FPZ), characterizing a more brittle behaviour for beams subjected to creep. The AE shows that this may be due to the development of microcracking detected under creep. Based on those experimental results, a mesoscopic numerical study was proposed by coupling a damage model based on the microplane theory and a viscoelastic creep model defined by several Kelvin-Voigt chains. The numerical results on concrete specimens in tension and in bending confirm the development of microcracks during creep at the mortar-aggregate interface. (author)

  9. Investigation into the determination of trimethylarsine in natural gas and its partitioning into gas and condensate phases using (cryotrapping)/gas chromatography coupled to inductively coupled plasma mass spectrometry and liquid/solid sorption techniques

    International Nuclear Information System (INIS)

    Krupp, E.M.; Johnson, C.; Rechsteiner, C.; Moir, M.; Leong, D.; Feldmann, J.

    2007-01-01

    Speciation of trialkylated arsenic compounds in natural gas, pressurized and stable condensate samples from the same gas well was performed using (cryotrapping) gas chromatography-inductively coupled plasma mass spectrometry. The major species in all phases investigated was found to be trimethylarsine, with a highest concentration of 17.8 ng/L (As) in the gas phase and 33.2 μg/L (As) in the stable condensate phase. The highest amount of trimethylarsine (121 μg/L (As)) was found in the pressurized condensate, along with trace amounts of non-identified higher alkylated arsines. Volatile arsenic species in natural gas and its related products cause concern with regard to the environment, safety, occupational health and gas processing. Therefore, interest lies in a fast and simple field method for the determination of volatile arsenicals. Here, we use simple liquid and solid sorption techniques, namely absorption in silver nitrate solution and adsorption on silver nitrate-impregnated silica gel tubes followed by total arsenic determination, as a promising tool for field monitoring of volatile arsenicals in natural gas and gas condensates. Preliminary results obtained for the sorption-based methods show that around 70% of the arsenic is determined with these methods in comparison to volatile arsenic determination using GC-ICP-MS. Furthermore, an inter-laboratory and inter-method comparison was performed using silver nitrate-impregnated silica tubes on 14 different gas samples with concentrations varying from below 1 to 1000 μg As/m³ natural gas. The results obtained from the two laboratories differ in a range of 10 to 60%, but agree within the order of magnitude, which is satisfactory for our purposes.

  10. Present status and future prospects of neutronics Monte Carlo

    International Nuclear Information System (INIS)

    Gelbard, E.M.

    1990-01-01

    It is fair to say that the Monte Carlo method, over the last decade, has grown steadily more important as a neutronics computational tool. Apparently this has happened for assorted reasons. Thus, for example, as the power of computers has increased, the cost of the method has dropped, steadily becoming less and less of an obstacle to its use. In addition, more and more sophisticated input processors have now made it feasible to model extremely complicated systems routinely with really remarkable fidelity. Finally, as we demand greater and greater precision in reactor calculations, Monte Carlo is often found to be the only method accurate enough for use in benchmarking. Cross section uncertainties are now almost the only inherent limitations in our Monte Carlo capabilities. For this reason Monte Carlo has come to occupy a special position, interposed between experiment and other computational techniques. More and more often deterministic methods are tested by comparison with Monte Carlo, and cross sections are tested by comparing Monte Carlo with experiment. In this way one can distinguish very clearly between errors due to flaws in our numerical methods, and those due to deficiencies in cross section files. The special role of Monte Carlo as a benchmarking tool, often the only available benchmarking tool, makes it crucially important that this method should be polished to perfection. Problems relating to eigenvalue calculations, variance reduction and the use of advanced computers are reviewed in this paper. (author)

  11. Effective gravitational coupling in modified teleparallel theories

    Science.gov (United States)

    Abedi, Habib; Capozziello, Salvatore; D'Agostino, Rocco; Luongo, Orlando

    2018-04-01

    In the present study, we consider an extended form of the teleparallel Lagrangian f(T, ϕ, X), as a function of a scalar field ϕ, its kinetic term X and the torsion scalar T. We use linear perturbations to obtain the equation of matter density perturbations on sub-Hubble scales. The gravitational coupling is modified in scalar modes with respect to the one of general relativity, albeit vector modes decay and do not show any significant effects. We thus extend these results by involving multiple scalar field models. Further, we study conformal transformations in teleparallel gravity and we obtain the coupling as the scalar field is nonminimally coupled to both torsion and boundary terms. Finally, we propose the specific model f(T, ϕ, X) = T + ∂_μϕ ∂^μϕ + ξTϕ². To check its goodness, we employ the observational Hubble data, constraining the coupling constant ξ through a Monte Carlo technique based on the Metropolis-Hastings algorithm. Hence, fixing ξ to its best-fit value from our numerical analysis, we calculate the growth rate of matter perturbations and compare our outcomes with the latest measurements and the predictions of the ΛCDM model.

  12. Bayesian Optimal Experimental Design Using Multilevel Monte Carlo

    KAUST Repository

    Ben Issaid, Chaouki; Long, Quan; Scavino, Marco; Tempone, Raul

    2015-01-01

    Experimental design is very important since experiments are often resource-exhaustive and time-consuming. We carry out experimental design in the Bayesian framework. To measure the amount of information which can be extracted from the data in an experiment, we use the expected information gain as the utility function, which specifically is the expected logarithmic ratio between the posterior and prior distributions. Optimizing this utility function enables us to design experiments that yield the most informative data for our purpose. One of the major difficulties in evaluating the expected information gain is that the integral is nested and can be high dimensional. We propose using Multilevel Monte Carlo techniques to accelerate the computation of the nested high dimensional integral. The advantages are twofold. First, Multilevel Monte Carlo can significantly reduce the cost of the nested integral for a given tolerance, by using an optimal sample distribution among different sample averages of the inner integrals. Second, the Multilevel Monte Carlo method imposes fewer assumptions, such as the concentration of measures required by the Laplace method. We test our Multilevel Monte Carlo technique using a numerical example on the design of sensor deployment for a Darcy flow problem governed by a one-dimensional Laplace equation. We also compare the performance of the Multilevel Monte Carlo, Laplace approximation and direct double loop Monte Carlo.
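
    The telescoping sum behind Multilevel Monte Carlo can be sketched on a toy expectation: coarse and fine Euler discretizations of a geometric Brownian motion are coupled through shared Brownian increments, so the level corrections have small variance and need few samples. The model, payoff, and per-level sample allocation below are illustrative assumptions, not the sensor-deployment problem of the paper.

```python
import math
import random

def mlmc_level(l, n_samples, rng, S0=1.0, K=1.0, sigma=0.5, T=1.0):
    """One MLMC level: a fine Euler path (2^l steps) and a coarse path
    (2^(l-1) steps) driven by the same Brownian increments; returns the
    sample mean of P_fine - P_coarse (P_fine alone at level 0)."""
    nf = 2 ** l
    dt = T / nf
    total = 0.0
    for _ in range(n_samples):
        sf = sc = S0
        dw_pair = 0.0
        for step in range(nf):
            dw = rng.gauss(0.0, math.sqrt(dt))
            sf += sigma * sf * dw            # fine path: 2^l steps
            dw_pair += dw
            if step % 2 == 1:                # coarse path: every 2nd increment
                sc += sigma * sc * dw_pair
                dw_pair = 0.0
        pf = max(sf - K, 0.0)
        pc = max(sc - K, 0.0) if l > 0 else 0.0
        total += pf - pc
    return total / n_samples

rng = random.Random(5)
# Telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}],
# with geometrically fewer samples spent on the (cheap-variance) fine levels.
estimate = sum(mlmc_level(l, 40000 >> l, rng) for l in range(5))
```

For this at-the-money payoff the exact continuous-time value is about 0.197, so the multilevel estimate should land close to it at a fraction of the cost of running all samples at the finest level.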

  13. Bayesian Optimal Experimental Design Using Multilevel Monte Carlo

    KAUST Repository

    Ben Issaid, Chaouki

    2015-01-07

    Experimental design is very important since experiments are often resource-exhaustive and time-consuming. We carry out experimental design in the Bayesian framework. To measure the amount of information which can be extracted from the data in an experiment, we use the expected information gain as the utility function, which specifically is the expected logarithmic ratio between the posterior and prior distributions. Optimizing this utility function enables us to design experiments that yield the most informative data for our purpose. One of the major difficulties in evaluating the expected information gain is that the integral is nested and can be high dimensional. We propose using Multilevel Monte Carlo techniques to accelerate the computation of the nested high dimensional integral. The advantages are twofold. First, Multilevel Monte Carlo can significantly reduce the cost of the nested integral for a given tolerance, by using an optimal sample distribution among different sample averages of the inner integrals. Second, the Multilevel Monte Carlo method imposes fewer assumptions, such as the concentration of measures required by the Laplace method. We test our Multilevel Monte Carlo technique using a numerical example on the design of sensor deployment for a Darcy flow problem governed by a one-dimensional Laplace equation. We also compare the performance of the Multilevel Monte Carlo, Laplace approximation and direct double loop Monte Carlo.

  14. Monte Carlo variance reduction approaches for non-Boltzmann tallies

    International Nuclear Information System (INIS)

    Booth, T.E.

    1992-12-01

    Quantities that depend on the collective effects of groups of particles cannot be obtained from the standard Boltzmann transport equation. Monte Carlo estimates of these quantities are called non-Boltzmann tallies and have recently become increasingly important. Standard Monte Carlo variance reduction techniques were designed for tallies based on individual particles rather than groups of particles, and experience with analog Monte Carlo has demonstrated its severe limitations for many non-Boltzmann tallies. In fact, many calculations absolutely require variance reduction methods to achieve practical computation times. Three different approaches to variance reduction for non-Boltzmann tallies are described and shown to be unbiased. The advantages and disadvantages of each approach are discussed.
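    As a generic illustration of why variance reduction matters for rare tallies (this is plain importance sampling, not one of the paper's three transport-specific schemes), consider estimating a small probability: analog sampling almost never scores, while a biased source with statistical weights keeps the estimate unbiased at far lower variance.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Rare-event tally P(X > 3) for X ~ N(0,1), true value ~1.35e-3.
    def analog(n):
        # Analog sampling: only ~0.135% of histories score at all.
        return (rng.normal(size=n) > 3.0).mean()

    def importance(n, shift=3.0):
        # Biased source centered on the region of interest; each score
        # carries the weight p(x)/q(x) so the tally stays unbiased.
        x = rng.normal(loc=shift, size=n)
        w = np.exp(-x * shift + 0.5 * shift ** 2)  # N(0,1)/N(shift,1)
        return ((x > 3.0) * w).mean()
    ```

    With the same number of histories, the weighted estimator's variance is orders of magnitude smaller than the analog one, which is the effect the variance reduction schemes above aim for in the non-Boltzmann setting.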

  15. Active neutron multiplicity analysis and Monte Carlo calculations

    International Nuclear Information System (INIS)

    Krick, M.S.; Ensslin, N.; Langner, D.G.; Miller, M.C.; Siebelist, R.; Stewart, J.E.; Ceo, R.N.; May, P.K.; Collins, L.L. Jr

    1994-01-01

    Active neutron multiplicity measurements of high-enrichment uranium metal and oxide samples have been made at Los Alamos and Y-12. The data from the measurements of standards at Los Alamos were analyzed to obtain values for neutron multiplication and source-sample coupling. These results are compared to equivalent results obtained from Monte Carlo calculations. An approximate relationship between coupling and multiplication is derived and used to correct doubles rates for multiplication and coupling. The utility of singles counting for uranium samples is also examined.

  16. Concentration profiling of minerals in iliac crest bone tissue of opium addicted humans using inductively coupled plasma and discriminant analysis techniques.

    Science.gov (United States)

    Mani-Varnosfaderani, Ahmad; Jamshidi, Mahbobeh; Yeganeh, Ali; Mahmoudi, Mani

    2016-02-20

    Opium addiction is one of the main health problems in developing countries and induces serious defects in the human body. In this work, the concentrations of 32 minerals, including alkaline, heavy and toxic metals, were determined in the iliac crest bone tissue of 22 opium-addicted individuals using inductively coupled plasma-optical emission spectroscopy (ICP-OES). The bone tissues of 30 individuals with no physiological or metabolic diseases were used as the control group. For subsequent analyses, linear and quadratic discriminant analysis techniques were used to classify the data into "addicted" and "non-addicted" groups. Moreover, a counter-propagation artificial neural network (CPANN) was used for clustering the data. The results revealed that the CPANN is a robust model that classifies the data thoroughly; the area under the receiver operating characteristic curve for this model was more than 0.91. Investigation of the results revealed that opium consumption causes a deficiency in the levels of calcium, phosphate, potassium and sodium in iliac crest bone tissue. Moreover, this type of addiction induces an increase in the levels of toxic and heavy metals such as Co, Cr, Mo and Ni in iliac crest tissue. Correlation analysis revealed no significant dependence between the age of the subjects and the mineral content of their iliac crest in this study. The results of this work suggest that opium-addicted individuals need thorough and strict dietary and medical care programs after the recovery phase in order to have healthy bones.
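    A minimal sketch of the linear discriminant step, using synthetic two-group data in place of the study's ICP-OES measurements (the group means, scales, and four-element dimensionality below are illustrative assumptions, not the paper's 32-element data):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic stand-in for two groups with shifted mean concentrations.
    n, d = 60, 4
    control = rng.normal(loc=[2.0, 1.0, 3.0, 0.5], scale=0.3, size=(n, d))
    addicted = rng.normal(loc=[1.5, 0.7, 2.5, 0.9], scale=0.3, size=(n, d))
    X = np.vstack([control, addicted])
    y = np.array([0] * n + [1] * n)

    # Fisher linear discriminant: project onto the pooled-covariance-whitened
    # difference of class means, then threshold at the midpoint.
    mu0, mu1 = control.mean(axis=0), addicted.mean(axis=0)
    pooled = 0.5 * (np.cov(control.T) + np.cov(addicted.T))
    w = np.linalg.solve(pooled, mu1 - mu0)
    threshold = w @ (mu0 + mu1) / 2.0
    pred = (X @ w > threshold).astype(int)
    accuracy = (pred == y).mean()
    ```

    Quadratic discriminant analysis follows the same idea but fits a separate covariance per class, allowing a curved decision boundary.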

  17. From the coupling between ion beam analysis techniques and physico-chemical characterization methods to the study of irradiation effects on materials behaviour

    International Nuclear Information System (INIS)

    Millard-Pinard, N.

    2003-01-01

    The general purpose of my research work is to follow and interpret the surface evolution of materials that have undergone various treatments. During my PhD and post-doc work, my field of research was tribology. Since I joined the 'Aval du Cycle Electronucleaire' group of the Institut de Physique Nucleaire de Lyon, my research activities have been in line with the CNRS program 'PACE' (Programme sur l'Aval du Cycle Electronucleaire) within the ACTINET network. They are coordinated by the PARIS (Physico-chimie des actinides et autres radioelements en solution et aux interfaces) and NOMADE (NOuveaux MAteriaux pour les DEchets) GDR, with ANDRA (Agence Nationale pour la gestion des Dechets RAdioactifs), EDF and IRSN (Institut de Radioprotection et de Surete Nucleaire) as partner organisations. My work has focused on the migration of fission products and actinides in barrier materials that may be capable of ensuring the long-term safety of deep geological repositories. Until now, this has required coupling ion beam analysis techniques with physico-chemical characterization techniques. During the last few months, I have become interested in understanding radiolytic effects. This new orientation has led us to use ion beams as an irradiation tool. These irradiation experiments are pursued in three major projects. The first is the study of the inhibiting effect of cobalt sulfide on radiolytic gas production during the irradiation of model organic molecules, in collaboration with the IRSN, the Institut de Recherche sur la Catalyse and the Ecole Nationale Superieure des Mines de Saint-Etienne; a PhD on this study, co-directed by M. Pijolat from ENSMSE and myself, will start in October 2003. Water radiolysis effects on iron corrosion are also studied, in the particular case of vitrified nuclear waste containers to be stored in deep geological repositories; one ANDRA-financed PhD, co-directed by Nathalie Moncoffre and myself, is dedicated to this study.

  18. On the estimate of the transpiration in Mediterranean heterogeneous ecosystems with the coupled use of eddy covariance and sap flow techniques.

    Science.gov (United States)

    Corona, Roberto; Curreli, Matteo; Montaldo, Nicola; Oren, Ram

    2013-04-01

    Mediterranean ecosystems are commonly heterogeneous savanna-like ecosystems with contrasting plant functional types (PFTs) competing for water. Mediterranean regions suffer water scarcity due to dry climate conditions. In semi-arid regions, evapotranspiration (ET) is the leading loss term of the root-zone water budget, with a yearly magnitude that may be roughly equal to the precipitation. Despite the attention these ecosystems are receiving, a general lack of knowledge persists about the estimation of ET and the relationship between ET and the plant survival strategies of the different PFTs under water stress. During the dry summers, these water-limited heterogeneous ecosystems are mainly characterized by simple dual-PFT landscapes of drought-resistant woody vegetation and bare soil, since the grass has died. Under these conditions, the widely used eddy covariance technique may fail because of the low signal of the land-surface fluxes captured by the sonic anemometer and gas analyzer, so its ET estimate is not robust enough. Here the sap flow technique can play a key role, because it theoretically provides a direct estimate of woody vegetation transpiration. Through the coupled use of sap flow sensor observations, a 2D footprint model of the eddy covariance tower, and high-resolution satellite images for estimating the footprint land-cover map, the eddy covariance measurements can be correctly interpreted and the ET components (bare soil evaporation and woody vegetation transpiration) can be separated. The case study is the Orroli site in Sardinia (Italy). The site landscape is a mixture of Mediterranean patchy vegetation types: trees, including wild olives and cork oaks, and different shrub and herbaceous species. An extensive field campaign started in 2004. Land-surface fluxes and CO2 fluxes are estimated by a micrometeorological tower using the eddy covariance technique. Soil moisture profiles were also continuously estimated using water
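    The partitioning described above reduces to a simple budget once the footprint fractions are known: the sap-flow transpiration is scaled by the woody-vegetation share of the tower footprint, and the remainder of the eddy covariance ET is attributed to bare soil evaporation. The numbers below are illustrative assumptions, not the Orroli site data.

    ```python
    # Hedged sketch of the flux-partitioning arithmetic (illustrative values).
    footprint_tree_fraction = 0.35   # woody share from footprint model + land-cover map
    et_eddy = 2.4                    # total ET from eddy covariance, mm/day
    sap_flow_transpiration = 3.1     # tree transpiration per unit canopy area, mm/day

    # Scale tree transpiration by its footprint share; the rest is soil evaporation.
    t_trees = footprint_tree_fraction * sap_flow_transpiration
    e_soil = et_eddy - t_trees
    ```

    In the dry season, when grass transpiration vanishes, this two-term split closes the surface water budget measured by the tower.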

  19. Monte Carlo simulation and experimental verification of radiotherapy electron beams

    International Nuclear Information System (INIS)

    Griffin, J.; Deloar, H. M.

    2007-01-01

    Full text: Based on fundamental physics and statistics, the Monte Carlo technique is generally accepted as the most accurate method for modelling radiation therapy treatments. A Monte Carlo simulation system has been installed, and models of linear accelerators in the more commonly used electron beam modes have been built and commissioned. A novel technique for radiation dosimetry is also being investigated. Combining the advantages of both water-tank and solid-phantom dosimetry, a hollow, thin-walled shell or mask is filled with water and then raised above the natural water surface to produce a volume of water with the desired irregular shape.

  20. Lectures on Monte Carlo methods

    CERN Document Server

    Madras, Neal

    2001-01-01

    Monte Carlo methods form an experimental branch of mathematics that employs simulations driven by random number generators. These methods are often used when others fail, since they are much less sensitive to the "curse of dimensionality", which plagues deterministic methods in problems with a large number of variables. Monte Carlo methods are used in many fields: mathematics, statistics, physics, chemistry, finance, computer science, and biology, for instance. This book is an introduction to Monte Carlo methods for anyone who would like to use these methods to study various kinds of mathematical models
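    A minimal example of the dimension-insensitivity mentioned above: the statistical error of a Monte Carlo integral scales as 1/sqrt(N) regardless of the number of variables, whereas a tensor-product grid would need exponentially many points. The integrand and sample size here are arbitrary illustrative choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def mc_integrate(f, dim, n):
        # Average f over n uniform samples on the unit hypercube [0,1]^dim.
        x = rng.random((n, dim))
        return f(x).mean()

    # Example: the integral of prod(x_i) over [0,1]^10 is exactly (1/2)^10.
    est = mc_integrate(lambda x: x.prod(axis=1), dim=10, n=200_000)
    ```

    A 10-dimensional grid with even 10 points per axis would already require 10^10 evaluations; the Monte Carlo estimate above gets within a few parts in a thousand with 2×10^5.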