WorldWideScience

Sample records for carlo level densities

  1. Combinatorial nuclear level density by a Monte Carlo method

    International Nuclear Information System (INIS)

    Cerf, N.

    1994-01-01

    We present a new combinatorial method for the calculation of the nuclear level density. It is based on a Monte Carlo technique, in order to avoid a direct counting procedure which is generally impracticable for high-A nuclei. The Monte Carlo simulation, making use of the Metropolis sampling scheme, allows a computationally fast estimate of the level density for many-fermion systems in large shell model spaces. We emphasize the advantages of this Monte Carlo approach, particularly concerning the prediction of the spin and parity distributions of the excited states, and compare our results with those derived from a traditional combinatorial or a statistical method. Such a Monte Carlo technique seems very promising for determining accurate level densities over a large energy range for nuclear reaction calculations.
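
    To make the counting problem concrete, the toy sketch below (not code from the paper; the single-particle spectrum, particle number and bin width are all assumed) enumerates every configuration of a few fermions in a small equidistant single-particle space and histograms the excitation energies. This is the direct counting that becomes impracticable for high-A nuclei and that the Metropolis-based Monte Carlo estimate is designed to replace.

```python
# Illustrative sketch: brute-force combinatorial counting of a many-fermion
# level density for a toy equidistant single-particle spectrum (assumed values).
from itertools import combinations
import numpy as np

n_sp_levels = 12                         # size of the toy single-particle space
n_particles = 4                          # number of identical fermions
eps = 1.0 * np.arange(n_sp_levels)       # equidistant single-particle energies

# Enumerate every Slater determinant and record its excitation energy.
ground = sum(eps[:n_particles])
energies = [sum(eps[list(occ)]) - ground
            for occ in combinations(range(n_sp_levels), n_particles)]

# Histogram the energies to obtain a state density per energy bin.
bin_width = 1.0
bins = np.arange(0.0, max(energies) + bin_width, bin_width)
counts, edges = np.histogram(energies, bins=bins)
for lo, c in zip(edges[:-1], counts):
    print(f"E = {lo:4.1f}  states/bin = {c}")
```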

  2. Microscopic calculation of level densities: the shell model Monte Carlo approach

    International Nuclear Information System (INIS)

    Alhassid, Yoram

    2012-01-01

    The shell model Monte Carlo (SMMC) approach provides a powerful technique for the microscopic calculation of level densities in model spaces that are many orders of magnitude larger than those that can be treated by conventional methods. We discuss a number of developments: (i) Spin distribution. We used a spin projection method to calculate the exact spin distribution of energy levels as a function of excitation energy. In even-even nuclei we find an odd-even staggering effect (in spin). Our results were confirmed in a recent analysis of experimental data. (ii) Heavy nuclei. The SMMC approach was extended to heavy nuclei. We have studied the crossover between vibrational and rotational collectivity in families of samarium and neodymium isotopes in model spaces of dimension approximately 10^29. We find good agreement with experimental results for both state densities and ⟨J²⟩ (where J is the total spin). (iii) Collective enhancement factors. We have calculated microscopically the vibrational and rotational enhancement factors of level densities versus excitation energy. We find that the decay of these enhancement factors in heavy nuclei is correlated with the pairing and shape phase transitions. (iv) Odd-even and odd-odd nuclei. The projection on an odd number of particles leads to a sign problem in SMMC. We discuss a novel method to calculate state densities in odd-even and odd-odd nuclei despite the sign problem. (v) State densities versus level densities. The SMMC approach has been used extensively to calculate state densities. However, experiments often measure level densities (where levels are counted without including their spin degeneracies). A spin projection method enables us to also calculate level densities in SMMC. We have calculated the SMMC level density of 162Dy and found it to agree well with experiments.
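
    As a small illustration of how a spin-projected state density relates to a level density, the sketch below applies the standard spin-cutoff relations (textbook formulas, not SMMC code; the spin-cutoff parameter and state density value are assumed).

```python
# Illustrative sketch: standard spin-cutoff relations between state density,
# level density, and the spin distribution (all parameter values assumed).
import numpy as np

sigma = 4.0              # spin-cutoff parameter (assumed)
omega = 1.0e4            # state density at some excitation energy (assumed, MeV^-1)

# Spin-cutoff model: fraction of levels carrying spin J (even-even nucleus)
J = np.arange(0, 20)
fJ = (2*J + 1) / (2*sigma**2) * np.exp(-(J + 0.5)**2 / (2*sigma**2))

# Level density (levels, not magnetic substates): rho ~ omega / (sigma*sqrt(2*pi))
rho = omega / (sigma * np.sqrt(2*np.pi))
print(f"level density ~ {rho:.3e} MeV^-1")
print("spin distribution sums to ~1:", fJ.sum())
```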

  3. Kohn-Sham orbitals and potentials from quantum Monte Carlo molecular densities

    International Nuclear Information System (INIS)

    Varsano, Daniele; Barborini, Matteo; Guidoni, Leonardo

    2014-01-01

    In this work we show the possibility to extract Kohn-Sham orbitals, orbital energies, and exchange-correlation potentials from accurate Quantum Monte Carlo (QMC) densities for atoms (He, Be, Ne) and molecules (H2, Be2, H2O, and C2H4). The Variational Monte Carlo (VMC) densities based on accurate Jastrow Antisymmetrised Geminal Power wave functions are calculated through different estimators. Using these reference densities, we extract the Kohn-Sham quantities with the method developed by Zhao, Morrison, and Parr (ZMP) [Phys. Rev. A 50, 2138 (1994)]. We compare these extracted quantities with those obtained from CISD densities and with other data reported in the literature, finding good agreement between VMC and other high-level quantum chemistry methods. Our results demonstrate the applicability of the ZMP procedure to QMC molecular densities, which can be used for the testing and development of improved functionals and for the implementation of embedding schemes based on QMC and Density Functional Theory.

  4. Probability Density Estimation Using Neural Networks in Monte Carlo Calculations

    International Nuclear Information System (INIS)

    Shim, Hyung Jin; Cho, Jin Young; Song, Jae Seung; Kim, Chang Hyo

    2008-01-01

    Monte Carlo neutronics analysis requires the capability to estimate tally distributions such as an axial power distribution or a flux gradient in a fuel rod. This problem can be regarded as probability density function estimation from an observation set. We apply a neural-network-based density estimation method to an observation and sampling weight set produced by Monte Carlo calculations. The neural network method is compared with the histogram and functional expansion tally methods for estimating a non-smooth density, a fission source distribution, and an absorption rate gradient in a burnable absorber rod. The application results show that the neural network method can approximate a tally distribution quite well. (authors)
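
    For context, the functional expansion tally mentioned as a comparison method is an orthogonal-series density estimate. The sketch below (assumed test distribution and weights, not the authors' implementation) reconstructs a density on [-1, 1] from weighted Monte Carlo samples using Legendre polynomials.

```python
# Illustrative sketch: orthogonal-series (Legendre) density estimation from
# weighted samples, the idea behind a functional expansion tally (toy data).
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)

# Weighted Monte Carlo "tally" samples of a position in [-1, 1]
x = rng.beta(2.0, 5.0, size=50_000) * 2.0 - 1.0   # toy non-uniform distribution
w = np.ones_like(x)                               # unit statistical weights

# f(x) ~ sum_n c_n P_n(x), with c_n = (2n+1)/2 * E[P_n(X)] estimated as a
# weighted sample mean.
order = 8
coeffs = np.zeros(order + 1)
for n in range(order + 1):
    basis = np.zeros(n + 1); basis[n] = 1.0
    Pn = legendre.legval(x, basis)
    coeffs[n] = (2*n + 1) / 2.0 * np.average(Pn, weights=w)

# Evaluate the reconstructed density and compare with a histogram estimate.
grid = np.linspace(-1.0, 1.0, 201)
f_est = legendre.legval(grid, coeffs)
hist, edges = np.histogram(x, bins=40, range=(-1, 1), density=True)
print("max reconstructed density:", f_est.max(), " histogram max:", hist.max())
```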

  5. Perturbation based Monte Carlo criticality search in density, enrichment and concentration

    International Nuclear Information System (INIS)

    Li, Zeguang; Wang, Kan; Deng, Jingkang

    2015-01-01

    Highlights: • A new perturbation-based Monte Carlo criticality search method is proposed. • The method can obtain accurate results with only one individual criticality run. • The method is used to solve density, enrichment and concentration search problems. • Results show the feasibility and good performance of this method. • The relationship between the results’ accuracy and the perturbation order is discussed. - Abstract: Criticality search is a very important aspect of reactor physics analysis. Due to the advantages of the Monte Carlo method and the development of computer technologies, Monte Carlo criticality search is becoming more and more necessary and feasible. Existing Monte Carlo criticality search methods need a large number of individual criticality runs and may give unstable results because of the uncertainties of the criticality results. In this paper, a new perturbation-based Monte Carlo criticality search method is proposed and discussed. This method needs only one individual criticality calculation with perturbation tallies to estimate how k_eff changes with the search parameter, using the initial k_eff and the differential coefficient results, and solves polynomial equations to obtain the criticality search results. The new perturbation-based Monte Carlo criticality search method is implemented in the Monte Carlo code RMC, and criticality search problems in density, enrichment and concentration are carried out. Results show that this method is promising in both accuracy and efficiency, and has advantages compared with other criticality search methods.
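
    The final step the abstract describes, solving a polynomial in the search parameter for k_eff = 1, can be illustrated as follows (the coefficients are made-up stand-ins for perturbation-tally output, not RMC results).

```python
# Illustrative sketch: given an initial k_eff and differential coefficients from
# perturbation tallies (all values assumed), solve
# k_eff(x) = k0 + k1*dx + k2*dx^2 = 1 for the critical parameter change dx.
import numpy as np

k0 = 1.02500      # k_eff at the reference boron concentration (assumed)
k1 = -8.0e-5      # first-order coefficient, dk/dc per ppm (assumed)
k2 = 1.5e-9       # second-order coefficient (assumed)
target = 1.0

# k2*dx^2 + k1*dx + (k0 - target) = 0
roots = np.roots([k2, k1, k0 - target])
real = roots[np.isreal(roots)].real
dx = real[np.argmin(np.abs(real))]     # keep the root closest to the reference state
print(f"estimated critical concentration change: {dx:.1f} ppm")
```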

  6. Realistic microscopic level densities for spherical nuclei

    International Nuclear Information System (INIS)

    Cerf, N.

    1994-01-01

    Nuclear level densities play an important role in nuclear reactions such as the formation of the compound nucleus. We develop a microscopic calculation of the level density based on a combinatorial evaluation from a realistic single-particle level scheme. This calculation makes use of a fast Monte Carlo algorithm allowing us to consider large shell model spaces which could not be treated previously in combinatorial approaches. Since our model relies on a microscopic basis, it can be applied to exotic nuclei with more confidence than the commonly used semiphenomenological formulas. An exhaustive comparison of our predicted neutron s-wave resonance spacings with experimental data for a wide range of nuclei is presented.

  7. Monte Carlo neutral density calculations for ELMO Bumpy Torus

    International Nuclear Information System (INIS)

    Davis, W.A.; Colchin, R.J.

    1986-11-01

    The steady-state nature of the ELMO Bumpy Torus (EBT) plasma implies that the neutral density at any point inside the plasma volume will determine the local particle confinement time. This paper describes a Monte Carlo calculation of three-dimensional atomic and molecular neutral density profiles in EBT. The calculation has been done using various models for neutral source points, for launching schemes, for plasma profiles, and for plasma densities and temperatures. Calculated results are compared with experimental observations, principally spectroscopic measurements, both for guidance in normalization and for overall consistency checks. Implications of the predicted neutral profiles for the fast-ion-decay measurement of neutral densities are also addressed.

  8. Effect of error propagation of nuclide number densities on Monte Carlo burn-up calculations

    International Nuclear Information System (INIS)

    Tohjoh, Masayuki; Endo, Tomohiro; Watanabe, Masato; Yamamoto, Akio

    2006-01-01

    As a result of improvements in computer technology, the continuous-energy Monte Carlo burn-up calculation has received attention as a good candidate for an assembly calculation method. However, the results of Monte Carlo calculations contain statistical errors. The results of Monte Carlo burn-up calculations, in particular, include statistical errors propagated through the variance of the nuclide number densities. Therefore, if the statistical error alone is evaluated, the errors in Monte Carlo burn-up calculations may be underestimated. To clarify this effect of error propagation on Monte Carlo burn-up calculations, we proposed an equation that can predict the variance of nuclide number densities after burn-up calculations, and we verified this equation using a large number of Monte Carlo burn-up calculations in which only the initial random numbers were changed. We also examined the effect of the number of burn-up calculation points on Monte Carlo burn-up calculations. From these verifications, we estimated the errors in Monte Carlo burn-up calculations including both statistical and propagated errors. Finally, we quantified the effects of error propagation on Monte Carlo burn-up calculations by comparing statistical errors alone with both statistical and propagated errors. The results revealed that the effects of error propagation on the Monte Carlo burn-up calculations of an 8 x 8 BWR fuel assembly are small up to 60 GWd/t.
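
    A minimal sketch of the underlying idea, using a toy one-nuclide depletion step rather than the paper's equation: the statistical uncertainty of a tallied reaction rate propagates into the number density, and a first-order estimate can be checked against brute-force resampling. All numbers below are assumptions.

```python
# Illustrative sketch: propagation of a Monte Carlo statistical uncertainty in a
# one-group reaction rate into a nuclide number density after one burn-up step.
import numpy as np

rng = np.random.default_rng(1)

N0 = 1.0e24        # initial number density (assumed, atoms/cm^3)
r = 1.0e-9         # one-group removal rate sigma*phi (assumed, 1/s)
s_r = 2.0e-11      # statistical std of the tallied rate (assumed)
t = 3.0e7          # length of the burn-up step (s)

# Linearized (first-order) propagation: N = N0*exp(-r*t), dN/dr = -N0*t*exp(-r*t)
N = N0 * np.exp(-r * t)
s_N_linear = N0 * t * np.exp(-r * t) * s_r

# Brute-force check: redo the step with many independent "transport" results
r_samples = rng.normal(r, s_r, size=100_000)
s_N_sampled = (N0 * np.exp(-r_samples * t)).std()

print(f"propagated std (linearized): {s_N_linear:.3e}")
print(f"propagated std (sampled)   : {s_N_sampled:.3e}")
```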

  9. Numerical simulation of logging-while-drilling density image by Monte-Carlo method

    International Nuclear Information System (INIS)

    Yue Aizhong; He Biao; Zhang Jianmin; Wang Lijuan

    2010-01-01

    A logging-while-drilling density system is studied by the Monte Carlo method. A model of the logging-while-drilling system is built, the tool response and azimuthal density image are acquired, and methods for processing the azimuthal density data are discussed. These results lay a foundation for optimizing the tool, developing new tools, and interpreting density logs. (authors)

  10. Frequency domain Monte Carlo simulation method for cross power spectral density driven by periodically pulsed spallation neutron source using complex-valued weight Monte Carlo

    International Nuclear Information System (INIS)

    Yamamoto, Toshihiro

    2014-01-01

    Highlights: • The cross power spectral density in an ADS has correlated and uncorrelated components. • A frequency domain Monte Carlo method to calculate the uncorrelated one is developed. • The method solves the Fourier-transformed transport equation. • The method uses complex-valued weights to solve the equation. • The new method reproduces well the CPSDs calculated with a time domain MC method. - Abstract: In an accelerator driven system (ADS), pulsed spallation neutrons are injected at a constant frequency. The cross power spectral density (CPSD), which can be used for monitoring the subcriticality of the ADS, is composed of correlated and uncorrelated components. The uncorrelated component is described by a series of Dirac delta functions that occur at the integer multiples of the pulse repetition frequency. In the present paper, a Monte Carlo method that solves the Fourier-transformed neutron transport equation with a periodically pulsed neutron source term has been developed to obtain the CPSD in ADSs. Since the Fourier-transformed flux is a complex-valued quantity, the Monte Carlo method introduces complex-valued weights to solve the Fourier-transformed equation. The Monte Carlo algorithm used in this paper is similar to the one previously developed by the author to calculate the neutron noise caused by cross section perturbations. The newly developed Monte Carlo algorithm is benchmarked against the conventional time domain Monte Carlo simulation technique. The CPSDs are obtained both with the newly developed frequency domain Monte Carlo method and with the conventional time domain Monte Carlo method for a one-dimensional infinite slab. The CPSDs obtained with the frequency domain Monte Carlo method agree well with those from the time domain method. The higher order mode effects on the CPSD in an ADS with a periodically pulsed neutron source are discussed.
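
    The qualitative structure of the uncorrelated CPSD component, peaks at integer multiples of the pulse repetition frequency, can be reproduced with ordinary time-domain signal processing; the sketch below (toy detector signals, not the complex-weight transport solver described above) uses scipy.signal.csd.

```python
# Illustrative sketch: the CPSD of two detector signals driven by a periodic
# pulsed source peaks at multiples of the pulse repetition frequency (toy data).
import numpy as np
from scipy.signal import csd

rng = np.random.default_rng(2)
fs = 1.0e4            # sampling frequency, Hz (assumed)
f_pulse = 100.0       # pulse repetition frequency, Hz (assumed)
t = np.arange(0, 10.0, 1.0 / fs)

# Periodic pulse train plus independent noise in each "detector"
pulses = (np.mod(t, 1.0 / f_pulse) < 1.0 / fs).astype(float)
det1 = pulses + 0.1 * rng.normal(size=t.size)
det2 = pulses + 0.1 * rng.normal(size=t.size)

f, Pxy = csd(det1, det2, fs=fs, nperseg=4096)
mask = f > 10.0                               # ignore the DC leakage region
peak = f[mask][np.argmax(np.abs(Pxy[mask]))]
print(f"strongest CPSD peak above 10 Hz: {peak:.1f} Hz "
      f"(a multiple of the {f_pulse:.0f} Hz pulse frequency)")
```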

  11. Unbiased reduced density matrices and electronic properties from full configuration interaction quantum Monte Carlo

    International Nuclear Information System (INIS)

    Overy, Catherine; Blunt, N. S.; Shepherd, James J.; Booth, George H.; Cleland, Deidre; Alavi, Ali

    2014-01-01

    Properties that are necessarily formulated within pure (symmetric) expectation values are difficult to calculate for projector quantum Monte Carlo approaches, but are critical in order to compute many of the important observable properties of electronic systems. Here, we investigate an approach for the sampling of unbiased reduced density matrices within the full configuration interaction quantum Monte Carlo dynamic, which requires only small computational overheads. This is achieved via an independent replica population of walkers in the dynamic, sampled alongside the original population. The resulting reduced density matrices are free from systematic error (beyond those present via constraints on the dynamic itself) and can be used to compute a variety of expectation values and properties, with rapid convergence to an exact limit. A quasi-variational energy estimate derived from these density matrices is proposed as an accurate alternative to the projected estimator for multiconfigurational wavefunctions, while its variational property could potentially lend itself to accurate extrapolation approaches in larger systems

  12. Level densities

    International Nuclear Information System (INIS)

    Ignatyuk, A.V.

    1998-01-01

    For any application of the statistical theory of nuclear reactions it is very important to obtain the parameters of the level density description from reliable experimental data. The cumulative numbers of low-lying levels and the average spacings between neutron resonances are usually used as such data. The level density parameters fitted to such data are compiled in the RIPL Starter File for the three models most frequently used in practical calculations: i) For the Gilbert-Cameron model the parameters of the Beijing group, based on rather recent compilations of the neutron resonance and low-lying level densities and included in the beijing-gc.dat file, are chosen as recommended. As alternative versions the parameters provided by other groups are given in the files jaeri-gc.dat, bombay-gc.dat and obninsk-gc.dat. Additionally the iljinov-gc.dat and mengoni-gc.dat files include sets of level density parameters that take into account the damping of shell effects at high energies. ii) For the back-shifted Fermi gas model the beijing-bs.dat file is selected as the recommended one. Alternative parameters of the Obninsk group are given in the obninsk-bs.dat file and those of Bombay in bombay-bs.dat. iii) For the generalized superfluid model the Obninsk group parameters included in the obninsk-bcs.dat file are chosen as the recommended ones and the beijing-bcs.dat file is included as an alternative set of parameters. iv) For the microscopic approach to the level densities the files are: obninsk-micro.for - FORTRAN 77 source for the microscopic statistical level density code developed in Obninsk by Ignatyuk and coworkers; moller-levels.gz - Moeller single-particle level and ground-state deformation data base; moller-levels.for - retrieval code for the Moeller single-particle level scheme. (author)

  13. Spin Density Distribution in Open-Shell Transition Metal Systems: A Comparative Post-Hartree-Fock, Density Functional Theory, and Quantum Monte Carlo Study of the CuCl2 Molecule.

    Science.gov (United States)

    Caffarel, Michel; Giner, Emmanuel; Scemama, Anthony; Ramírez-Solís, Alejandro

    2014-12-09

    We present a comparative study of the spatial distribution of the spin density of the ground state of CuCl2 using Density Functional Theory (DFT), quantum Monte Carlo (QMC), and post-Hartree-Fock wave function theory (WFT). A number of studies have shown that an accurate description of the electronic structure of the lowest-lying states of this molecule is particularly challenging due to the interplay between the strong dynamical correlation effects in the 3d shell and the delocalization of the 3d hole over the chlorine atoms. More generally, this problem is representative of the difficulties encountered when studying open-shell metal-containing molecular systems. Here, it is shown that qualitatively different results for the spin density distribution are obtained from the various quantum-mechanical approaches. At the DFT level, the spin density distribution is found to be very dependent on the functional employed. At the QMC level, Fixed-Node Diffusion Monte Carlo (FN-DMC) results are strongly dependent on the nodal structure of the trial wave function. Regarding wave function methods, most approaches not including a very high amount of dynamic correlation effects lead to a much too high localization of the spin density on the copper atom, in sharp contrast with DFT. To shed some light on these conflicting results, Full CI-type (FCI) calculations using the 6-31G basis set and based on a selection process of the most important determinants, the so-called CIPSI approach (Configuration Interaction with Perturbative Selection done Iteratively), are performed. Quite remarkably, it is found that for this 63-electron molecule and a full CI space including about 10^18 determinants, the FCI limit can almost be reached. Putting all results together, a natural and coherent picture for the spin distribution is proposed.

  14. Fission level densities

    International Nuclear Information System (INIS)

    Maslov, V.M.

    1998-01-01

    Fission level densities (or fissioning nucleus level densities at fission saddle deformations) are required for statistical model calculations of actinide fission cross sections. The Back-shifted Fermi-Gas Model, the Constant Temperature Model and the Generalized Superfluid Model (GSM) are widely used for the description of level densities at stable deformations. These models provide an approximately identical level density description at excitations close to the neutron binding energy. It is at low excitation energies that they are discrepant, while this energy region is crucial for fission cross section calculations. A drawback of the back-shifted Fermi gas model and traditional constant temperature model approaches is that it is difficult to include pair correlations, collective effects and shell effects in a consistent way. The pairing, shell and collective properties of the nucleus do not reduce simply to a renormalization of the level density parameter a, but influence the energy dependence of the level densities. These effects turn out to be important because they seem to depend on the deformation at either the equilibrium or the saddle point. These effects are easily introduced within the GSM approach. Fission barriers are another key ingredient involved in fission cross section calculations. Fission level density and barrier parameters are strongly interdependent. This is the reason for including fission barrier parameters along with the fission level densities in the Starter File. The recommended file is maslov.dat - fission barrier parameters. The recent version of actinide fission barrier data obtained in Obninsk (obninsk.dat) should only be considered as a guide for the selection of initial parameters. These data are included in the Starter File, together with the fission barrier parameters recommended by CNDC (beijing.dat), for completeness. (author)

  15. Restricted primitive model for electrical double layers: modified HNC theory of density profiles and Monte Carlo study of differential capacitance

    International Nuclear Information System (INIS)

    Ballone, P.; Pastore, G.; Tosi, M.P.

    1986-02-01

    Interfacial properties of an ionic fluid next to a uniformly charged planar wall are studied in the restricted primitive model by both theoretical and Monte Carlo methods. The system is a 1:1 fluid of equisized charged hard spheres in a state appropriate to 1M aqueous electrolyte solutions. The interfacial density profiles of counterions and coions are evaluated by extending the hypernetted chain approximation (HNC) to include the leading bridge diagrams for the wall-ion correlations. The theoretical results compare well with those of grand canonical Monte Carlo computations of Torrie and Valleau over the whole range of surface charge density considered by these authors, thus resolving the earlier disagreement between statistical mechanical theories and simulation data at large charge densities. In view of the importance of the model as a testing ground for theories of the diffuse layer, the Monte Carlo calculations are tested by considering alternative choices for the basic simulation cell and are extended so as to allow an evaluation of the differential capacitance of the model interface by two independent methods. These involve numerical differentiation of the mean potential drop as a function of the surface charge density or alternatively an appropriate use of a fluctuation theory formula for the capacitance. The results of these two Monte Carlo approaches consistently indicate an initially smooth increase of the diffuse layer capacitance followed by structure at large charge densities, this behaviour being connected with layering of counterions as already revealed in the density profiles reported by Torrie and Valleau. (author)

  16. Evaluation of high packing density powder X-ray screens by Monte Carlo methods

    International Nuclear Information System (INIS)

    Liaparinos, P.; Kandarakis, I.; Cavouras, D.; Kalivas, N.; Delis, H.; Panayiotakis, G.

    2007-01-01

    Phosphor materials are employed in intensifying screens of both digital and conventional X-ray imaging detectors. High packing density powder screens have been developed (e.g. screens in ceramic form) exhibiting high-resolution and light emission properties, and thus contributing to improved image transfer characteristics and higher radiation to light conversion efficiency. For the present study, a custom Monte Carlo simulation program was used in order to examine the performance of ceramic powder screens under various radiographic conditions. The model was developed using Mie scattering theory for the description of light interactions, based on the physical characteristics (e.g. complex refractive index, light wavelength) of the phosphor material. Monte Carlo simulations were carried out assuming: (a) X-ray photon energy ranging from 18 up to 49 keV, (b) Gd2O2S:Tb phosphor material with a packing density of 70% and grain size of 7 μm and (c) phosphor thickness ranging between 30 and 70 mg/cm^2. The variation of the Modulation Transfer Function (MTF) and the Luminescence Efficiency (LE) with respect to the X-ray energy and the phosphor thickness was evaluated. Both aforementioned imaging characteristics were shown to take high values at 49 keV X-ray energy and 70 mg/cm^2 phosphor thickness. It was found that high packing density screens may be appropriate for use in medical radiographic systems.

  17. Alpha particle density and energy distributions in tandem mirrors using Monte-Carlo techniques

    International Nuclear Information System (INIS)

    Kerns, J.A.

    1986-05-01

    We have simulated the alpha thermalization process using a Monte Carlo technique, in which the alpha guiding center is followed between simulated collisions and Spitzer's collision model is used for the alpha-plasma interaction. Monte Carlo techniques are used to determine the alpha radial birth position, the alpha particle position at a collision, and the angle scatter and dispersion at a collision. The plasma is modeled as a hot reacting core, surrounded by a cold halo plasma (T ≈ 50 eV). Alpha orbits that intersect the halo lose 90% of their energy to the halo electrons because of the halo drag, which is ten times greater than the drag in the core. The uneven drag across the alpha orbit also produces an outward, radial, guiding center drift. This drag drift is dependent on the plasma density and temperature radial profiles. We have modeled these profiles and have specifically studied a single-scale-length model, in which the density scale length (r_pD) equals the temperature scale length (r_pT), and a two-scale-length model, in which r_pD/r_pT = 1.1.

  18. Nuclear Level Densities

    International Nuclear Information System (INIS)

    Grimes, S.M.

    2005-01-01

    Recent research in the area of nuclear level densities is reviewed. The current interest in nuclear astrophysics and in structure of nuclei off of the line of stability has led to the development of radioactive beam facilities with larger machines currently being planned. Nuclear level densities for the systems used to produce the radioactive beams influence substantially the production rates of these beams. The modification of level-density parameters near the drip lines would also affect nucleosynthesis rates and abundances

  19. Structure of cylindrical electric double layers: Comparison of density functional and modified Poisson-Boltzmann theories with Monte Carlo simulations

    Directory of Open Access Journals (Sweden)

    V.Dorvilien

    2013-01-01

    The structure of cylindrical double layers is studied using a modified Poisson-Boltzmann theory and the density functional approach. In the model double layer the electrode is a cylindrical polyion that is infinitely long, impenetrable, and uniformly charged. The polyion is immersed in a sea of equi-sized rigid ions embedded in a dielectric continuum. An in-depth comparison of the theoretically predicted zeta potentials, the mean electrostatic potentials, and the electrode-ion singlet density distributions is made with the corresponding Monte Carlo simulation data. The theories are seen to be consistent in their predictions across variations in ionic diameters, electrolyte concentrations, and electrode surface charge densities, and are also able to reproduce well some new and existing Monte Carlo results.

  20. The neutrons flux density calculations by Monte Carlo code for the double heterogeneity fuel

    International Nuclear Information System (INIS)

    Gurevich, M.I.; Brizgalov, V.I.

    1994-01-01

    This document provides a calculation technique for fuel elements that consist of one substance as a matrix and another substance as grains embedded in it. This technique can be used in neutron flux density calculations by the universal Monte Carlo code. An estimation of the accuracy is presented as well. (authors). 6 refs., 1 fig

  1. Review of methods for level density estimation from resonance parameters

    International Nuclear Information System (INIS)

    Froehner, F.H.

    1983-01-01

    A number of methods are available for statistical analysis of resonance parameter sets, i.e. for estimation of level densities and average widths with account of missing levels. The main categories are (i) methods based on theories of level spacings (orthogonal-ensemble theory, Dyson-Mehta statistics), (ii) methods based on comparison with simulated cross section curves (Monte Carlo simulation, Garrison's autocorrelation method), (iii) methods exploiting the observed neutron width distribution by means of Bayesian or more approximate procedures such as maximum-likelihood, least-squares or moment methods, with various recipes for the treatment of detection thresholds and resolution effects. The present review will concentrate on (iii) with the aim of clarifying the basic mathematical concepts and the relationship between the various techniques. Recent theoretical progress in the treatment of resolution effects, detectability thresholds and p-wave admixture is described. (Auth.)
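
    As a minimal example of category (iii), the sketch below applies a simple moment-type recipe (not Froehner's full formalism; the threshold, sample size and widths are invented) that corrects an observed resonance count for weak levels missed below a detection threshold, assuming Porter-Thomas reduced widths.

```python
# Illustrative sketch: missing-level correction assuming the reduced neutron
# widths follow a Porter-Thomas (chi-squared, 1 degree of freedom) distribution
# and levels below a detection threshold are missed (all numbers assumed).
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(3)

true_mean_width = 1.0
n_true = 400
widths = true_mean_width * rng.chisquare(1, size=n_true)
threshold = 0.1
observed = widths[widths > threshold]

# Fixed-point estimate of the true mean width from the truncated sample, using
# E[y | y > c] = (1 - F_chi2_3(c)) / (1 - F_chi2_1(c)) for y ~ chi2_1.
m = observed.mean()
for _ in range(50):
    c = threshold / m
    m = observed.mean() * (1 - chi2.cdf(c, 1)) / (1 - chi2.cdf(c, 3))

missed_fraction = chi2.cdf(threshold / m, 1)
estimated_total = observed.size / (1 - missed_fraction)
print(f"observed {observed.size} levels, estimated true number "
      f"{estimated_total:.0f} (actual {n_true})")
```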

  2. Simulation of density curve for slim borehole using the Monte Carlo code MCNPX

    International Nuclear Information System (INIS)

    Souza, Edmilson Monteiro de; Silva, Ademir Xavier da; Lopes, Ricardo Tadeu; Lima, Inaya C.B.; Rocha, Paula L.F.

    2010-01-01

    Borehole logging for formation density has been an important geophysical measurement in the oil industry. For calibration of the gamma-ray nuclear logging tool, numerous rock models of different lithologies and densities are necessary. However, the full success of this calibration process is determined by a reliable benchmark, where the complete and precise chemical composition of the standards is necessary. Simulations using the Monte Carlo code MCNP have been widely employed in well logging applications, since the code serves as a low-cost substitute for experimental test pits, as well as a means of obtaining data that are difficult to obtain experimentally. Considering this, the purpose of this work is to use the code MCNP to obtain density curves for slim boreholes using gamma-ray logging tools. For this, a slim density gamma probe, named TRISOND(R), and a 100 mCi Cs-137 gamma source have been modeled with the new version of the MCNP code, MCNPX. (author)

  4. Estimation of the four-wave mixing noise probability-density function by the multicanonical Monte Carlo method.

    Science.gov (United States)

    Neokosmidis, Ioannis; Kamalakis, Thomas; Chipouras, Aristides; Sphicopoulos, Thomas

    2005-01-01

    The performance of high-powered wavelength-division multiplexed (WDM) optical networks can be severely degraded by four-wave-mixing- (FWM-) induced distortion. The multicanonical Monte Carlo method (MCMC) is used to calculate the probability-density function (PDF) of the decision variable of a receiver, limited by FWM noise. Compared with the conventional Monte Carlo method previously used to estimate this PDF, the MCMC method is much faster and can accurately estimate smaller error probabilities. The method takes into account the correlation between the components of the FWM noise, unlike the Gaussian model, which is shown not to provide accurate results.

  5. Statistical implications in Monte Carlo depletions - 051

    International Nuclear Information System (INIS)

    Zhiwen, Xu; Rhodes, J.; Smith, K.

    2010-01-01

    As a result of steady advances in computer power, continuous-energy Monte Carlo depletion analysis is attracting considerable attention for reactor burnup calculations. The typical Monte Carlo analysis is set up as a combination of a Monte Carlo neutron transport solver and a fuel burnup solver. Note that the burnup solver is a deterministic module. The statistical errors in Monte Carlo solutions are introduced into nuclide number densities and propagated along fuel burnup. This paper is aimed at understanding the statistical implications in Monte Carlo depletions, including both statistical bias and statistical variations in depleted fuel number densities. The deterministic Studsvik lattice physics code, CASMO-5, is modified to model the Monte Carlo depletion. The statistical bias in depleted number densities is found to be negligible compared to its statistical variations, which, in turn, demonstrates the correctness of the Monte Carlo depletion method. Meanwhile, the statistical variation in number densities generally increases with burnup. Several possible ways of reducing the statistical errors are discussed: 1) to increase the number of individual Monte Carlo histories; 2) to increase the number of time steps; 3) to run additional independent Monte Carlo depletion cases. Finally, a new Monte Carlo depletion methodology, called the batch depletion method, is proposed, which consists of performing a set of independent Monte Carlo depletions and is thus capable of estimating the overall statistical errors including both the local statistical error and the propagated statistical error. (authors)
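
    The batch depletion idea, repeating the whole depletion with independent random seeds and taking statistics of the final number densities, can be sketched with a toy burn-up chain (all rates and step counts below are assumptions, not CASMO-5 or Monte Carlo output).

```python
# Illustrative sketch: "batch" statistics over independent toy depletions, so
# that both the local and the propagated statistical errors appear in the spread.
import numpy as np

def deplete(seed, n_steps=10, N0=1.0, rate=0.05, rate_std=0.002):
    """Toy burn-up chain: each step uses a noisy 'Monte Carlo' reaction rate."""
    rng = np.random.default_rng(seed)
    N = N0
    for _ in range(n_steps):
        r = rng.normal(rate, rate_std)   # statistical error of the transport step
        N *= np.exp(-r)                  # depletion over one step
    return N

batches = np.array([deplete(seed) for seed in range(200)])
print(f"mean final density: {batches.mean():.5f}")
print(f"overall statistical std (local + propagated): {batches.std(ddof=1):.5f}")
```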

  6. Monte Carlo simulation on influence of cutting beds on density log in the horizontal wells

    International Nuclear Information System (INIS)

    Yu Huawei; Sun Jianmeng; Li Xiaopeng; Zhang Qiongyao; Li Na

    2009-01-01

    Cuttings debris may be deposited at the bottom of a horizontal hole because of gravity. It has a lower density than the formation, so cuttings beds can have a significant effect on the density log response. The Monte Carlo method was used to study the effect of cuttings beds on the computed log density, with various formation densities and cuttings bed thicknesses of 0-5 cm in the horizontal borehole. In order to guarantee the reliability of the results, the simulation results were compared with experimental data. The MC simulations show that the density measurement can be corrected using the ridge-rib graph if the thickness of the cuttings bed is less than 2 cm. Otherwise, it is difficult to correct the effect of the cuttings bed on the formation density measurement. Finally, preliminary cuttings bed correction charts and formulas are developed for the DSDL-8723 double-detector density logging tool in the horizontal borehole. (authors)

  7. Supersonic flow with shock waves. Monte-Carlo calculations for low density plasma. I

    International Nuclear Information System (INIS)

    Almenara, E.; Hidalgo, M.; Saviron, J. M.

    1980-01-01

    This report gives preliminary information about a Monte Carlo procedure to simulate the supersonic flow of a low density plasma past a body in the transition regime. A computer program has been written for a UNIVAC 1108 machine to account for a plasma composed of neutral molecules and positive and negative ions. Different and rather general body geometries can be analyzed. Special attention is paid to the growth of detached shock waves in front of the body. (Author) 30 refs.

  8. Level density of 57Co

    International Nuclear Information System (INIS)

    Mishra, V.; Boukharouba, N.; Brient, C.E.; Grimes, S.M.; Pedroni, R.S.

    1994-01-01

    Levels in 57Co have been studied in the region of resolved levels via the 57Fe(p,n)57Co neutron spectrum measured with a resolution of ΔE ∼ 5 keV. Seventeen previously unknown levels are located. Level density parameters in the continuum region are deduced from thick-target measurements of the same reaction, and additional level density information is deduced from Ericson fluctuation studies of the reaction 56Fe(p,n)56Co. A set of level density parameters is found which describes the level density of 57Co at energies up to 14 MeV. Efforts to obtain level density information from the 56Fe(d,n)57Co reaction were unsuccessful, but estimates of the fraction of the deuteron absorption cross section corresponding to compound nucleus formation are obtained.

  9. Monte Carlo studies of diamagnetism and charge density wave order in the cuprate pseudogap regime

    Science.gov (United States)

    Hayward Sierens, Lauren; Achkar, Andrew; Hawthorn, David; Melko, Roger; Sachdev, Subir

    2015-03-01

    The pseudogap regime of the hole-doped cuprate superconductors is often characterized experimentally in terms of a substantial diamagnetic response and, from another point of view, in terms of strong charge density wave (CDW) order. We introduce a dimensionless ratio, R, that incorporates both diamagnetic susceptibility and the correlation length of CDW order, and therefore reconciles these two fundamental characteristics of the pseudogap. We perform Monte Carlo simulations on a classical model that considers angular fluctuations of a six-dimensional order parameter, and compare our Monte Carlo results for R with existing data from torque magnetometry and x-ray scattering experiments on YBa2Cu3O6+x. We achieve qualitative agreement, and also propose future experiments to further investigate the behaviour of this dimensionless ratio.

  10. Statistical inference of level densities from resolved resonance parameters

    International Nuclear Information System (INIS)

    Froehner, F.H.

    1983-08-01

    Level densities are most directly obtained by counting the resonances observed in the resolved resonance range. Even in the measurements, however, weak levels are invariably missed, so that one has to estimate their number and add it to the raw count. The main categories of missing-level estimators are discussed in the present review, viz. (I) ladder methods including those based on the theory of Hamiltonian matrix ensembles (Dyson-Mehta statistics), (II) methods based on comparison with artificial cross section curves (Monte Carlo simulation, Garrison's autocorrelation method), (III) methods exploiting the observed neutron width distribution by means of Bayesian or more approximate procedures such as maximum-likelihood, least-squares or moment methods, with various recipes for the treatment of detection thresholds and resolution effects. The language of mathematical statistics is employed to clarify the basis of, and the relationship between, the various techniques. Recent progress in the treatment of resolution effects, detection thresholds and p-wave admixture is described. (orig.)

  11. Nuclear level density

    International Nuclear Information System (INIS)

    Cardoso Junior, J.L.

    1982-10-01

    Experimental data show that the number of nuclear states increases rapidly with increasing excitation energy. The properties of highly excited nuclei are important for many nuclear reactions, mainly those that proceed via compound nucleus processes. In this case, it is sufficient to know the statistical properties of the nuclear levels, the first of which is the nuclear level density function. Several theoretical models which describe the level density are presented. Statistical mechanics and quantum mechanics formalisms, as well as semi-empirical results, are analysed and discussed. (Author)

  12. Elements of Monte Carlo techniques

    International Nuclear Information System (INIS)

    Nagarajan, P.S.

    2000-01-01

    The Monte Carlo method essentially mimics real-world physical processes at the microscopic level. With the incredible increase in computing speeds and ever decreasing computing costs, there is widespread use of the method for practical problems. The elements covered include algorithm-generated sequences known as pseudo-random sequences (prs), probability density functions (pdf), tests for randomness, and the extension to multidimensional integration.
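
    One of the listed elements, the extension to multidimensional integration, is easy to illustrate; the sketch below (a standard textbook example, not taken from the article) estimates a six-dimensional integral with a pseudo-random sequence and reports the usual 1/sqrt(N) statistical uncertainty.

```python
# Illustrative sketch: Monte Carlo estimate of a multidimensional integral over
# the unit hypercube, with its statistical uncertainty (toy integrand).
import numpy as np

rng = np.random.default_rng(4)
dim, n = 6, 100_000

# Integrand: product of cosines over [0, 1]^dim
x = rng.random((n, dim))
f = np.prod(np.cos(x), axis=1)

estimate = f.mean()
error = f.std(ddof=1) / np.sqrt(n)
exact = np.sin(1.0) ** dim            # analytic value for comparison
print(f"MC estimate {estimate:.5f} +/- {error:.5f}, exact {exact:.5f}")
```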

  13. Experimental level densities of atomic nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Guttormsen, M.; Bello Garrote, F.L.; Eriksen, T.K.; Giacoppo, F.; Goergen, A.; Hagen, T.W.; Klintefjord, M.; Larsen, A.C.; Nyhus, H.T.; Renstroem, T.; Rose, S.J.; Sahin, E.; Siem, S.; Tornyi, T.G.; Tveten, G.M. [University of Oslo, Department of Physics, Oslo (Norway); Aiche, M.; Ducasse, Q.; Jurado, B. [University of Bordeaux, CENBG, CNRS/IN2P3, B.P. 120, Gradignan (France); Bernstein, L.A.; Bleuel, D.L. [Lawrence Livermore National Laboratory, Livermore, CA (United States); Byun, Y.; Voinov, A. [Ohio University, Department of Physics and Astronomy, Athens, Ohio (United States); Gunsing, F. [CEA Saclay, DSM/Irfu/SPhN, Cedex (France); Lebois, L.; Leniau, B.; Wilson, J. [Institut de Physique Nucleaire d' Orsay, Orsay Cedex (France); Wiedeking, M. [iThemba LABS, P.O. Box 722, Somerset West (South Africa)

    2015-12-15

    It is almost 80 years since Hans Bethe described the level density as a non-interacting gas of protons and neutrons. In all these years, experimental data were interpreted within this picture of a fermionic gas. However, the renewed interest in measuring level densities using various techniques calls for a revision of this description. In particular, the wealth of nuclear level densities measured with the Oslo method favors the constant-temperature level density over the Fermi-gas picture. On the basis of experimental data, we demonstrate that nuclei exhibit a constant-temperature level density behavior for all mass regions and at least up to the neutron threshold. (orig.)

  14. Quantum Monte Carlo studies of a metallic spin-density wave transition

    Energy Technology Data Exchange (ETDEWEB)

    Gerlach, Max Henner

    2017-01-20

    Ample experimental evidence indicates that quantum critical phenomena give rise to much of the rich physics observed in strongly correlated itinerant electron systems such as the high temperature superconductors. A quantum critical point of particular interest is found at the zero-temperature onset of spin-density wave order in two-dimensional metals. The appropriate low-energy theory poses an exceptionally hard problem to analytic theory; therefore the unbiased and controlled numerical approach pursued in this thesis provides important contributions on the road to comprehensive understanding. After discussing the phenomenology of quantum criticality, a sign-problem-free determinantal quantum Monte Carlo approach is introduced and an extensive toolbox of numerical methods is described in a self-contained way. By means of large-scale computer simulations we have solved a lattice realization of the universal effective theory of interest. The finite-temperature phase diagram, showing both a quasi-long-range spin-density wave ordered phase and a d-wave superconducting dome, is discussed in its entirety. Close to the quantum phase transition we find evidence for unusual scaling of the order parameter correlations and for non-Fermi liquid behavior at isolated hot spots on the Fermi surface.

  15. Combinatorial nuclear level-density model

    International Nuclear Information System (INIS)

    Uhrenholt, H.; Åberg, S.; Dobrowolski, A.; Døssing, Th.; Ichikawa, T.; Möller, P.

    2013-01-01

    A microscopic nuclear level-density model is presented. The model is a completely combinatorial (micro-canonical) model based on the folded-Yukawa single-particle potential and includes explicit treatment of pairing, rotational and vibrational states. The microscopic character of all states enables extraction of level-distribution functions with respect to pairing gaps, parity and angular momentum. The results of the model are compared to available experimental data: level spacings at neutron separation energy, data on total level-density functions from the Oslo method, cumulative level densities from low-lying discrete states, and data on parity ratios. Spherical and deformed nuclei follow basically different coupling schemes, and we focus on deformed nuclei

  16. A histogram-free multicanonical Monte Carlo algorithm for the construction of analytical density of states

    Energy Technology Data Exchange (ETDEWEB)

    Eisenbach, Markus [ORNL]; Li, Ying Wai [ORNL]

    2017-06-01

    We report a new multicanonical Monte Carlo (MC) algorithm to obtain the density of states (DOS) for physical systems with continuous state variables in statistical mechanics. Our algorithm is able to obtain an analytical form for the DOS expressed in a chosen basis set, instead of a numerical array of finite resolution as in previous variants of this class of MC methods such as the multicanonical (MUCA) sampling and Wang-Landau (WL) sampling. This is enabled by storing the visited states directly in a data set and avoiding the explicit collection of a histogram. This practice also has the advantage of avoiding undesirable artificial errors caused by the discretization and binning of continuous state variables. Our results show that this scheme is capable of obtaining converged results with a much reduced number of Monte Carlo steps, leading to a significant speedup over existing algorithms.

  17. Level density from realistic nuclear potentials

    International Nuclear Information System (INIS)

    Calboreanu, A.

    2006-01-01

    The nuclear level density of some nuclei is calculated using a realistic set of single-particle states (sps). These states are derived from the parameterization of nuclear potentials that describe the observed sps over a large number of nuclei. This approach has the advantage that one can infer the level density of nuclei that are inaccessible to direct study but are very important in astrophysical processes, such as those close to the drip lines. Level densities at high excitation energies are very sensitive to the actual set of sps. The fact that the sps spectrum is finite has extraordinary consequences for nuclear reaction yields, due to the leveling-off of the level density at extremely high excitation energies, wrongly attributed so far to other nuclear effects. The single-particle level density parameter a is extracted by fitting the calculated densities to the standard Bethe formula.

  18. Level Densities and Radiative Strength Functions in 56Fe and 57Fe

    Energy Technology Data Exchange (ETDEWEB)

    Tavukcu, Emel [North Carolina State Univ., Raleigh, NC (United States)

    2002-12-10

    Understanding nuclear level densities and radiative strength functions is important for pure and applied nuclear physics. Recently, the Oslo Cyclotron Group has developed an experimental method to extract level densities and radiative strength functions simultaneously from the primary γ rays after a light-ion reaction. A primary γ-ray spectrum represents the γ-decay probability distribution. The Oslo method is based on the Axel-Brink hypothesis, according to which the primary γ-ray spectrum is proportional to the product of the level density at the final energy and the radiative strength function. The level density and the radiative strength function are fit to the experimental primary γ-ray spectra, and then normalized to known data. The method works well for heavy nuclei. The present measurements extend the Oslo method to the lighter mass nuclei 56Fe and 57Fe. The experimental level densities in 56Fe and 57Fe reveal step structure. This step structure is a signature for nucleon pair breaking. The predicted pairing gap parameter is in good agreement with the step corresponding to the first pair breaking. Thermodynamic quantities for 56Fe and 57Fe are derived within the microcanonical and canonical ensembles using the experimental level densities. Energy-temperature relations are considered using caloric curves and probability density functions. The differences between the thermodynamics of small and large systems are emphasized. The experimental heat capacities are compared with the recent theoretical calculations obtained in the Shell Model Monte Carlo method. Radiative strength functions in 56Fe and 57Fe have surprisingly high values at low γ-ray energies. This behavior has not been observed for heavy nuclei, but has been observed in other light- and medium-mass nuclei. The origin of this low γ-ray energy effect remains unknown.

  19. A density functional and quantum Monte Carlo study of glutamic acid in vacuo and in a dielectric continuum medium

    NARCIS (Netherlands)

    Floris, F.; Filippi, Claudia; Amovilli, C.

    2012-01-01

    We present density functional theory (DFT) and quantum Monte Carlo (QMC) calculations of the glutamic acid and glutamate ion in vacuo and in various dielectric continuum media within the polarizable continuum model (PCM). In DFT, we employ the integral equation formalism variant of PCM while, in

  20. Comparison of internal dose estimates obtained using organ-level, voxel S value, and Monte Carlo techniques

    Energy Technology Data Exchange (ETDEWEB)

    Grimes, Joshua, E-mail: grimes.joshua@mayo.edu [Department of Physics and Astronomy, University of British Columbia, Vancouver V5Z 1L8 (Canada); Celler, Anna [Department of Radiology, University of British Columbia, Vancouver V5Z 1L8 (Canada)

    2014-09-15

    Purpose: The authors’ objective was to compare internal dose estimates obtained using the Organ Level Dose Assessment with Exponential Modeling (OLINDA/EXM) software, the voxel S value technique, and Monte Carlo simulation. Monte Carlo dose estimates were used as the reference standard to assess the impact of patient-specific anatomy on the final dose estimate. Methods: Six patients injected with 99mTc-hydrazinonicotinamide-Tyr3-octreotide were included in this study. A hybrid planar/SPECT imaging protocol was used to estimate 99mTc time-integrated activity coefficients (TIACs) for kidneys, liver, spleen, and tumors. Additionally, TIACs were predicted for 131I, 177Lu, and 90Y assuming the same biological half-lives as the 99mTc labeled tracer. The TIACs were used as input for OLINDA/EXM for organ-level dose calculation and voxel level dosimetry was performed using the voxel S value method and Monte Carlo simulation. Dose estimates for 99mTc, 131I, 177Lu, and 90Y distributions were evaluated by comparing (i) organ-level S values corresponding to each method, (ii) total tumor and organ doses, (iii) differences in right and left kidney doses, and (iv) voxelized dose distributions calculated by Monte Carlo and the voxel S value technique. Results: The S values for all investigated radionuclides used by OLINDA/EXM and the corresponding patient-specific S values calculated by Monte Carlo agreed within 2.3% on average for self-irradiation, and differed by as much as 105% for cross-organ irradiation. Total organ doses calculated by OLINDA/EXM and the voxel S value technique agreed with Monte Carlo results within approximately ±7%. Differences between right and left kidney doses determined by Monte Carlo were as high as 73%. Comparison of the Monte Carlo and voxel S value dose distributions showed that each method produced similar dose volume histograms with a minimum dose covering 90% of the volume (D90
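
    The voxel S value technique referred to above amounts to convolving the cumulated-activity map with a dose-deposition kernel; a generic sketch follows (the activity map and kernel values are placeholders, since real voxel S value kernels are radionuclide- and voxel-size-specific).

```python
# Illustrative sketch: voxel-level dose as the convolution of a cumulated
# activity map with a voxel S value kernel (toy placeholder values throughout).
import numpy as np
from scipy.signal import fftconvolve

# Toy cumulated activity map (time-integrated activity per voxel, assumed units)
activity = np.zeros((32, 32, 32))
activity[10:16, 12:18, 14:20] = 5.0          # a "tumour" region

# Toy kernel: dose to a voxel per unit cumulated activity in a neighbouring
# voxel, decaying with distance (values are placeholders, not tabulated S values).
r = np.indices((7, 7, 7)) - 3
dist = np.sqrt((r ** 2).sum(axis=0))
kernel = 1.0 / (1.0 + dist) ** 3
kernel /= kernel.sum()

dose = fftconvolve(activity, kernel, mode="same")
print(f"max voxel dose (arbitrary units): {dose.max():.3f}")
```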

  1. A method for tuning parameters of Monte Carlo generators and a determination of the unintegrated gluon density

    International Nuclear Information System (INIS)

    Bacchetta, Alessandro; Jung, Hannes; Kutak, Krzysztof

    2010-02-01

    A method for tuning parameters in Monte Carlo generators is described and applied to a specific case. The method works in the following way: each observable is generated several times using different values of the parameters to be tuned. The output is then approximated by some analytic form to describe the dependence of the observables on the parameters. This approximation is used to find the values of the parameter that give the best description of the experimental data. This results in significantly faster fitting compared to an approach in which the generator is called iteratively. As an application, we employ this method to fit the parameters of the unintegrated gluon density used in the Cascade Monte Carlo generator, using inclusive deep inelastic data measured by the H1 Collaboration. We discuss the results of the fit, its limitations, and its strong points. (orig.)
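
    The generic tuning strategy the abstract describes, running the generator at a few parameter values, fitting an analytic surrogate per bin, and then fitting the surrogate to data, can be sketched with a made-up one-parameter generator (nothing below is specific to Cascade or to the unintegrated gluon density).

```python
# Illustrative sketch: surrogate-based tuning of a toy one-parameter generator.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)

def generator(p, n=20_000):
    """Toy MC generator: histogram of an observable that depends on parameter p."""
    samples = rng.exponential(scale=p, size=n)
    hist, _ = np.histogram(samples, bins=np.linspace(0, 5, 11), density=True)
    return hist

# 1) Run the generator at a few parameter values only (the expensive step).
p_grid = np.array([0.6, 0.8, 1.0, 1.2, 1.4])
runs = np.array([generator(p) for p in p_grid])

# 2) Approximate each histogram bin by a quadratic in p (the analytic surrogate).
coeffs = [np.polyfit(p_grid, runs[:, b], deg=2) for b in range(runs.shape[1])]

# 3) Fit the surrogate to "data" by minimising a chi-square, no further MC runs.
data = generator(1.1)                        # pretend these are the measurements
errs = 0.05 * data + 1e-3
def chi2(p):
    model = np.array([np.polyval(c, p) for c in coeffs])
    return np.sum(((model - data) / errs) ** 2)

best = minimize_scalar(chi2, bounds=(0.6, 1.4), method="bounded")
print(f"tuned parameter: {best.x:.3f}")
```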

  2. Relative power density distribution calculations of the Kori unit 1 pressurized water reactor with full-scope explicit modeling of Monte Carlo simulation

    International Nuclear Information System (INIS)

    Kim, J. O.; Kim, J. K.

    1997-01-01

    Relative power density distributions of the Kori unit 1 pressurized water reactor are calculated by Monte Carlo modeling with the MCNP code. The Kori unit 1 core is modeled as a three-dimensional representation of one-eighth of the reactor in-vessel components, with reflective boundaries at 0 and 45 degrees. The axial core model is based on half-core symmetry and is divided into four axial segments. The fission reaction density in each rod is calculated by following 100 cycles with 5,000 test neutrons in each cycle, after starting with a localized neutron source and ten noncontributing settle cycles. Relative assembly power distributions are calculated from the fission reaction densities of the rods in each assembly. After the 100 cycle calculations, the system converges to a κ value of 1.00039 ± 0.00084. The relative assembly power distribution is nearly the same as that of the Kori unit 1 FSAR. The applicability of the full-scope Monte Carlo simulation to power distribution calculations is demonstrated by a relative root mean square error of 2.159%. (author)

  3. Global and local level density models

    International Nuclear Information System (INIS)

    Koning, A.J.; Hilaire, S.; Goriely, S.

    2008-01-01

    Four different level density models, three phenomenological and one microscopic, are consistently parameterized using the same set of experimental observables. For each of the phenomenological models, the Constant Temperature Model, the Back-shifted Fermi gas Model and the Generalized Superfluid Model, a version without and with explicit collective enhancement is considered. Moreover, a recently published microscopic combinatorial model is compared with the phenomenological approaches and with the same set of experimental data. For each nuclide for which sufficient experimental data exist, a local level density parameterization is constructed for each model. Next, these local models have helped to construct global level density prescriptions, to be used for cases for which no experimental data exist. Altogether, this yields a collection of level density formulae and parameters that can be used with confidence in nuclear model calculations. To demonstrate this, a large-scale validation with experimental discrete level schemes and experimental cross sections and neutron emission spectra for various reaction channels has been performed.

  4. Systematics of nuclear level density parameters

    International Nuclear Information System (INIS)

    Bucurescu, Dorel; Egidy, Till von

    2005-01-01

    The level density parameters for the back-shifted Fermi gas model (both without and with an energy-dependent level density parameter) and the constant temperature model have been determined for 310 nuclei between 18F and 251Cf by fitting the complete level schemes at low excitation energies and the s-wave neutron resonance spacings at the neutron binding energies. Simple formulae are proposed for the description of the two parameters of each of these models, which involve only quantities available from the mass tables. These formulae may constitute a reliable tool for extrapolating to nuclei far from stability, where nuclear level densities cannot be measured.
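
    For reference, the two phenomenological forms being parameterized are usually written as the back-shifted Fermi gas and constant temperature level densities; the sketch below evaluates the standard textbook expressions with assumed parameter values (not the fitted values from this work).

```python
# Illustrative sketch: standard back-shifted Fermi gas and constant temperature
# level density formulas (parameter values are assumed, not fitted).
import numpy as np

def rho_bsfg(E, a, E1, sigma):
    """Back-shifted Fermi gas level density (MeV^-1), U = E - E1."""
    U = E - E1
    return np.exp(2.0 * np.sqrt(a * U)) / (12.0 * np.sqrt(2.0) * sigma * a**0.25 * U**1.25)

def rho_ct(E, T, E0):
    """Constant temperature level density (MeV^-1)."""
    return np.exp((E - E0) / T) / T

E = np.linspace(2.0, 8.0, 7)                      # excitation energies in MeV
print(rho_bsfg(E, a=18.0, E1=0.5, sigma=4.5))     # assumed parameters
print(rho_ct(E, T=0.55, E0=-0.8))                 # assumed parameters
```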

  5. Level densities in nuclear physics

    International Nuclear Information System (INIS)

    Beckerman, M.

    1978-01-01

    In the independent-particle model nucleons move independently in a central potential. There is a well-defined set of single-particle orbitals, each nucleon occupies one of these orbitals subject to Fermi statistics, and the total energy of the nucleus is equal to the sum of the energies of the individual nucleons. The basic question is the range of validity of this Fermi gas description and, in particular, the roles of the residual interactions and collective modes. A detailed examination of experimental level densities in light-mass systems is given to provide some insight into these questions. Level densities over the first 10 MeV or so in excitation energy, as deduced from neutron and proton resonance data and from spectra of low-lying bound levels, are discussed. To exhibit some of the salient features of these data, comparisons to independent-particle (shell) model calculations are presented. Shell structure is predicted to manifest itself through discontinuities in the single-particle level density at the Fermi energy and through variations in the occupancy of the valence orbitals. These predictions are examined through combinatorial calculations performed with the Grover [Phys. Rev. 157, 832 (1967); 185, 1303 (1969)] odometer method. Before the discussion of the experimental results, statistical mechanical level densities for spherical nuclei are reviewed. After consideration of deformed nuclei, the conclusions resulting from this work are drawn. 7 figures, 3 tables

  6. Geometry and Dynamics for Markov Chain Monte Carlo

    Science.gov (United States)

    Barp, Alessandro; Briol, François-Xavier; Kennedy, Anthony D.; Girolami, Mark

    2018-03-01

    Markov Chain Monte Carlo methods have revolutionised mathematical computation and enabled statistical inference within many previously intractable models. In this context, Hamiltonian dynamics have been proposed as an efficient way of building chains which can explore probability densities efficiently. The method emerges from physics and geometry and these links have been extensively studied by a series of authors through the last thirty years. However, there is currently a gap between the intuitions and knowledge of users of the methodology and our deep understanding of these theoretical foundations. The aim of this review is to provide a comprehensive introduction to the geometric tools used in Hamiltonian Monte Carlo at a level accessible to statisticians, machine learners and other users of the methodology with only a basic understanding of Monte Carlo methods. This will be complemented with some discussion of the most recent advances in the field which we believe will become increasingly relevant to applied scientists.
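
    The core ingredient discussed in this review, Hamiltonian dynamics used as a proposal mechanism, can be illustrated with a minimal (non-geometric) HMC sampler for a Gaussian target. The leapfrog step size, trajectory length and target below are arbitrary choices made only for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def neg_log_prob(q):                 # target: standard 2-D Gaussian
    return 0.5 * np.dot(q, q)

def grad_neg_log_prob(q):
    return q

def hmc_step(q, step=0.15, n_leapfrog=20):
    p = rng.standard_normal(q.shape)              # resample momentum
    q_new, p_new = q.copy(), p.copy()
    # leapfrog integration of the Hamiltonian dynamics
    p_new -= 0.5 * step * grad_neg_log_prob(q_new)
    for _ in range(n_leapfrog - 1):
        q_new += step * p_new
        p_new -= step * grad_neg_log_prob(q_new)
    q_new += step * p_new
    p_new -= 0.5 * step * grad_neg_log_prob(q_new)
    # Metropolis acceptance based on the change in total energy
    h_old = neg_log_prob(q) + 0.5 * np.dot(p, p)
    h_new = neg_log_prob(q_new) + 0.5 * np.dot(p_new, p_new)
    return q_new if rng.random() < np.exp(h_old - h_new) else q

samples, q = [], np.zeros(2)
for _ in range(5000):
    q = hmc_step(q)
    samples.append(q)
print(np.mean(samples, axis=0), np.std(samples, axis=0))   # should be ~0 and ~1
```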

  7. Study of nuclear level densities for exotic nuclei

    International Nuclear Information System (INIS)

    Nasrabadi, M. N.; Sepiani, M.

    2012-01-01

    Nuclear level density is one of the properties of nuclei with widespread applications in astrophysics and nuclear medicine. Since there has been little experimental and theoretical research on nuclei far from the stability line, studying the nuclear level density of these nuclei is of crucial importance. Also, as the nuclear level density is an important input for nuclear reaction codes, studying the methods for calculating this quantity is essential. Besides introducing various methods and models for calculating the nuclear level density for practical applications, we used the exact spectral distribution method (SPDM) to determine the nuclear level density of two neutron- and proton-rich exotic nuclei with the same mass number.

  8. Monte carlo simulation for soot dynamics

    KAUST Repository

    Zhou, Kun

    2012-01-01

    A new Monte Carlo method termed Comb-like frame Monte Carlo is developed to simulate soot dynamics. A detailed stochastic error analysis is provided. Comb-like frame Monte Carlo is coupled with the gas phase solver Chemkin II to simulate soot formation in a 1-D premixed burner-stabilized flame. The simulated soot number density, volume fraction, and particle size distribution all agree well with the measurements available in the literature. The origin of the bimodal particle size distribution is revealed with quantitative proof.

  9. Shell model the Monte Carlo way

    International Nuclear Information System (INIS)

    Ormand, W.E.

    1995-01-01

    The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined

  10. Shell model the Monte Carlo way

    Energy Technology Data Exchange (ETDEWEB)

    Ormand, W.E.

    1995-03-01

    The formalism for the auxiliary-field Monte Carlo approach to the nuclear shell model is presented. The method is based on a linearization of the two-body part of the Hamiltonian in an imaginary-time propagator using the Hubbard-Stratonovich transformation. The foundation of the method, as applied to the nuclear many-body problem, is discussed. Topics presented in detail include: (1) the density-density formulation of the method, (2) computation of the overlaps, (3) the sign of the Monte Carlo weight function, (4) techniques for performing Monte Carlo sampling, and (5) the reconstruction of response functions from an imaginary-time auto-correlation function using MaxEnt techniques. Results obtained using schematic interactions, which have no sign problem, are presented to demonstrate the feasibility of the method, while an extrapolation method for realistic Hamiltonians is presented. In addition, applications at finite temperature are outlined.

  11. Imaginary time density-density correlations for two-dimensional electron gases at high density

    Energy Technology Data Exchange (ETDEWEB)

    Motta, M.; Galli, D. E. [Dipartimento di Fisica, Università degli Studi di Milano, Via Celoria 16, 20133 Milano (Italy); Moroni, S. [IOM-CNR DEMOCRITOS National Simulation Center and SISSA, Via Bonomea 265, 34136 Trieste (Italy); Vitali, E. [Department of Physics, College of William and Mary, Williamsburg, Virginia 23187-8795 (United States)

    2015-10-28

    We evaluate imaginary time density-density correlation functions for two-dimensional homogeneous electron gases of up to 42 particles in the continuum using the phaseless auxiliary field quantum Monte Carlo method. We use periodic boundary conditions and up to 300 plane waves as basis set elements. We show that such methodology, once equipped with suitable numerical stabilization techniques necessary to deal with exponentials, products, and inversions of large matrices, gives access to the calculation of imaginary time correlation functions for medium-sized systems. We discuss the numerical stabilization techniques and the computational complexity of the methodology and we present the limitations related to the size of the systems on a quantitative basis. We perform the inverse Laplace transform of the obtained density-density correlation functions, assessing the ability of the phaseless auxiliary field quantum Monte Carlo method to evaluate dynamical properties of medium-sized homogeneous fermion systems.

  12. Density anomaly of charged hard spheres of different diameters in a mixture with core-softened model solvent. Monte Carlo simulation results

    Directory of Open Access Journals (Sweden)

    B. Hribar-Lee

    2013-01-01

    Very recently the effect of equisized charged hard sphere solutes in a mixture with a core-softened fluid model on the structural and thermodynamic anomalies of the system has been explored in detail by using Monte Carlo simulations and integral equation theory (J. Chem. Phys. 137, 244502 (2012)). Our objective in the present short work is to complement this study by considering univalent ions of unequal diameters in a mixture with the same soft-core fluid model. Specifically, we are interested in the analysis of changes of the temperature of maximum density (TMD) lines with ion concentration for three model salt solutes, namely sodium chloride, potassium chloride and rubidium chloride models. We resort to Monte Carlo simulations for this purpose. Our discussion also involves the dependence of the pair contribution to the excess entropy and of the constant-volume heat capacity on the temperature of maximum density line. Some examples of the microscopic structure of the mixtures in question, in terms of pair distribution functions, are given in addition.

  13. Moments Method for Shell-Model Level Density

    International Nuclear Information System (INIS)

    Zelevinsky, V; Horoi, M; Sen'kov, R A

    2016-01-01

    The modern form of the Moments Method applied to the calculation of the nuclear shell-model level density is explained and examples of the method at work are given. The calculated level density practically exactly coincides with the result of full diagonalization when the latter is feasible. The method provides the pure level density for given spin and parity with spurious center-of-mass excitations subtracted. The presence and interplay of all correlations leads to the results different from those obtained by the mean-field combinatorics. (paper)

  14. Isotopic depletion with Monte Carlo

    International Nuclear Information System (INIS)

    Martin, W.R.; Rathkopf, J.A.

    1996-06-01

    This work considers a method to deplete isotopes during a time-dependent Monte Carlo simulation of an evolving system. The method is based on explicitly combining a conventional estimator for the scalar flux with the analytical solutions to the isotopic depletion equations. There are no auxiliary calculations; the method is an integral part of the Monte Carlo calculation. The method eliminates negative densities and reduces the variance in the estimates for the isotope densities, compared to existing methods. Moreover, existing methods are shown to be special cases of the general method described in this work, as they can be derived by combining a high variance estimator for the scalar flux with a low-order approximation to the analytical solution to the depletion equation
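
    The idea of combining a Monte Carlo flux estimate with the analytic solution of the depletion equations can be illustrated, in the simplest possible case of a single nuclide with pure absorption and no production chain, as below. All numerical values are hypothetical; the paper's method handles the general coupled equations inside the transport calculation itself.

```python
import numpy as np

def deplete_step(N, sigma_a_barns, flux, dt):
    """Analytic single-nuclide depletion over one time step.

    N             : atom density [atoms/b-cm]
    sigma_a_barns : microscopic absorption cross section [b]
    flux          : scalar flux estimated during the Monte Carlo step [n/cm^2/s]
    dt            : time-step length [s]
    """
    return N * np.exp(-sigma_a_barns * 1e-24 * flux * dt)

N = 1.0e-3                      # hypothetical initial atom density
for step in range(5):
    flux = 3.0e14               # would come from a track-length flux estimator
    N = deplete_step(N, sigma_a_barns=100.0, flux=flux, dt=86_400.0)
    print(step, N)
```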

  15. Monte Carlo Simulation for Statistical Decay of Compound Nucleus

    Directory of Open Access Journals (Sweden)

    Chadwick M.B.

    2012-02-01

    We perform Monte Carlo simulations for neutron and γ-ray emissions from a compound nucleus based on the Hauser-Feshbach statistical theory. This Monte Carlo Hauser-Feshbach (MCHF) method gives us correlated information between the emitted particles and γ-rays. It will be a powerful tool in many applications, as nuclear reactions can be probed in a more microscopic way. We have been developing the MCHF code, CGM, which solves the Hauser-Feshbach theory with the Monte Carlo method. The code includes all the standard models used in a standard Hauser-Feshbach code, namely the particle transmission generator, the level density module, the interface to the discrete level database, and so on. CGM can emit multiple neutrons, as long as the excitation energy of the compound nucleus is larger than the neutron separation energy. The γ-ray competition is always included at each compound decay stage, and angular momentum and parity are conserved. Some calculations for a fission fragment 140Xe are shown as examples of the MCHF method, and the correlation between the neutrons and γ-rays is discussed.
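
    A grossly simplified cartoon of such a Monte Carlo decay cascade is sketched below: neutrons are emitted from a constant-temperature evaporation spectrum while energetically allowed, with a fixed probability of γ competition at each stage. The real CGM code uses Hauser-Feshbach transmission coefficients, level densities and full angular-momentum/parity bookkeeping; the parameters here (S_n, T, p_gamma) are made-up illustrative numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

def decay_cascade(E_x, S_n=6.0, T=0.8, p_gamma=0.2):
    """Toy compound-nucleus decay: emit neutrons (evaporation-like spectrum)
    while energetically allowed, with a fixed gamma-competition probability;
    below the neutron separation energy only gammas are emitted."""
    neutrons, gammas = [], []
    while E_x > 0.01:
        if E_x > S_n and rng.random() > p_gamma:
            # spectrum ~ eps * exp(-eps/T), sampled as the sum of two exponentials
            eps = T * (rng.exponential() + rng.exponential())
            eps = min(eps, E_x - S_n)
            neutrons.append(eps)
            E_x -= S_n + eps
        else:
            e_g = rng.uniform(0.0, E_x)      # crude gamma-ray energy
            gammas.append(e_g)
            E_x -= e_g
    return neutrons, gammas

mult = [len(decay_cascade(20.0)[0]) for _ in range(10_000)]
print("mean neutron multiplicity ~", np.mean(mult))
```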

  16. Application of gamma radiation backscattering in determining density and Zsub(eff) of scattering material Monte Carlo optimization of configuration

    International Nuclear Information System (INIS)

    Cechak, T.

    1982-01-01

    In Gardner's double-evaluation method, one detector should be positioned so that its response is independent of the material density, while the second detector should be positioned so as to maximize changes in response due to density changes. Experimental scanning for the optimal energy is extremely time-consuming. A program based on the Monte Carlo method was therefore written which addresses the magnitude of the error made when the computation of gamma radiation backscattering neglects multiply scattered photons, the dependence of this error on the atomic number of the scattering material, and the question of whether the contribution of individual scattering orders to the spectrum of backscattered photons depends on the positioning of the detector. 42 detectors, 8 types of material and 10 different density values were considered. The computed dependences are given graphically. (M.D.)

  17. One-body density matrix and the momentum density in 4He and 3He

    International Nuclear Information System (INIS)

    Whitlock, P.A.; Panoff, R.M.

    1984-01-01

    The one-body density matrix and the momentum density for liquid and solid 4 He, determined from Green's Function Monte Carlo calculations using the HFDHE2 pair potential, are described. Values for the condensate fraction and the kinetic energy derived from these calculations are given and compared to recent experimental results. Preliminary results from variational Monte Carlo calculations on n(r) and n(k) for liquid 3 He are also reported

  18. Density functional theory versus quantum Monte Carlo simulations of Fermi gases in the optical-lattice arena★

    Science.gov (United States)

    Pilati, Sebastiano; Zintchenko, Ilia; Troyer, Matthias; Ancilotto, Francesco

    2018-04-01

    We benchmark the ground state energies and the density profiles of atomic repulsive Fermi gases in optical lattices (OLs) computed via density functional theory (DFT) against the results of diffusion Monte Carlo (DMC) simulations. The main focus is on half-filled one-dimensional OLs, for which the DMC simulations performed within the fixed-node approach provide unbiased results. This allows us to demonstrate that the local spin-density approximation (LSDA) to the exchange-correlation functional of DFT is very accurate in the weak and intermediate interaction regimes, and also to underline its limitations close to the strongly-interacting Tonks-Girardeau limit and in very deep OLs. We also consider a three-dimensional OL at quarter filling, showing also in this case the high accuracy of the LSDA in the moderate interaction regime. The one-dimensional data provided in this study may represent a useful benchmark to further develop DFT methods beyond the LSDA, and they will hopefully motivate experimental studies to accurately measure the equation of state of Fermi gases in higher-dimensional geometries. Supplementary material in the form of one pdf file available from the Journal web page at https://doi.org/10.1140/epjb/e2018-90021-1.

  19. Calculation of the level density parameter using semi-classical approach

    International Nuclear Information System (INIS)

    Canbula, B.; Babacan, H.

    2011-01-01

    The level density parameters (level density parameter a and energy shift δ) of the back-shifted Fermi gas model have been determined for 1136 nuclei for which a complete level scheme is available. The level density parameter is calculated by using the semi-classical single-particle level density, which can be obtained analytically for the spherical harmonic oscillator potential. This method also enables us to analyze the effect of the Coulomb potential on the level density parameter. The dependence of this parameter on energy has also been investigated. The other parameter, δ, is determined by fitting the experimental level scheme and the average resonance spacings for 289 nuclei. Only the level scheme is used in the optimization procedure for the remaining 847 nuclei. Level densities for some nuclei have been calculated by using these parameter values. The results obtained have been compared with the experimental level schemes and the resonance spacing data.
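
    The semi-classical connection used above is essentially the Fermi-gas relation a = (π²/6) g(ε_F), with g the single-particle level density at the Fermi energy. The sketch below evaluates it for a 3-D harmonic oscillator spectrum with the common ħω ≈ 41 A^(-1/3) MeV estimate; this is only a back-of-the-envelope illustration, not the paper's procedure.

```python
import numpy as np

def level_density_parameter(Z, N):
    """Fermi-gas estimate a = (pi^2 / 6) g(eps_F) with a 3-D harmonic
    oscillator single-particle level density g(eps) = eps^2 / (hbar*omega)^3
    (spin degeneracy included), evaluated separately for protons and neutrons."""
    A = Z + N
    hw = 41.0 * A ** (-1.0 / 3.0)                 # common hbar*omega estimate [MeV]
    g_fermi = ((3 * Z) ** (2.0 / 3.0) + (3 * N) ** (2.0 / 3.0)) / hw
    return np.pi ** 2 / 6.0 * g_fermi             # [MeV^-1]

for Z, N in [(28, 32), (50, 70), (64, 96)]:
    a = level_density_parameter(Z, N)
    print(f"Z={Z}, N={N}: a ~ {a:.1f} MeV^-1, K = A/a ~ {(Z + N) / a:.1f} MeV")
```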

  20. Nuclear Level densities from drip line to drip line

    International Nuclear Information System (INIS)

    Hilaire, S.; Goriely, S.

    2007-01-01

    New energy-, spin-, parity-dependent level densities based on the microscopic combinatorial model are presented and compared with available experimental data as well as with other nuclear level densities usually employed in nuclear reaction codes. These microscopic level densities are made available in a table format for nearly 8500 nuclei

  1. Large model-space calculation of the nuclear level density parameter

    International Nuclear Information System (INIS)

    Agrawal, B.K.; Samaddar, S.K.; De, J.N.; Shlomo, S.

    1998-01-01

    Recently, several attempts have been made to obtain the nuclear level density (ρ) and the level density parameter (a) within microscopic approaches based on the path integral representation of the partition function. The results for the inverse level density parameter K and the level density as a function of excitation energy are presented

  2. Tables of nuclear level density parameters

    International Nuclear Information System (INIS)

    Chatterjee, A.; Ghosh, S.K.; Majumdar, H.

    1976-03-01

    The Renormalized Gas Model (RGM) has been used to calculate single particle level density parameters for more than 2000 nuclides over the range 9 ≤ Z ≤ 126 (15 ≤ A ≤ 338). Three separate tables present the elements on or near the valley of beta stability, neutron-rich fission fragment nuclides, and transitional nuclei, actinides and light-mass superheavy elements. Each table identifies the nucleus in terms of Z and N and presents the RGM deformation energy of binding, the total RGM structural energy correction over the free gas Fermi surface, and the level density parameter

  3. Level densities in rare earth nuclei

    International Nuclear Information System (INIS)

    Siem, S.; Tveter, T.S.; Bergholt, L.; Guttormsen, M.; Melby, E.; Rekstad, J.

    1997-01-01

    An iterative procedure for simultaneous extraction of fine structure in the level density and the γ-ray strength function from a set of primary γ-ray spectra has been developed. Data from the reactions 163 Dy(3He,αγ) 162 Dy and 173 Yb(3He,αγ) 172 Yb reveal step-like enhancements in the level density in the region below 5 MeV and peaks in the γ-ray strength function at low γ-energy (E γ ∼ 2 - 3.5 MeV). Tentative physical interpretations are presented. (author)

  4. Multilevel Monte Carlo methods using ensemble level mixed MsFEM for two-phase flow and transport simulations

    KAUST Repository

    Efendiev, Yalchin R.

    2013-08-21

    In this paper, we propose multilevel Monte Carlo (MLMC) methods that use ensemble level mixed multiscale methods in the simulations of multiphase flow and transport. The contribution of this paper is twofold: (1) a design of ensemble level mixed multiscale finite element methods and (2) a novel use of mixed multiscale finite element methods within multilevel Monte Carlo techniques to speed up the computations. The main idea of ensemble level multiscale methods is to construct local multiscale basis functions that can be used for any member of the ensemble. In this paper, we consider two ensemble level mixed multiscale finite element methods: (1) the no-local-solve-online ensemble level method (NLSO); and (2) the local-solve-online ensemble level method (LSO). The first approach was proposed in Aarnes and Efendiev (SIAM J. Sci. Comput. 30(5):2319-2339, 2008) while the second approach is new. Both mixed multiscale methods use a number of snapshots of the permeability media in generating multiscale basis functions. As a result, in the off-line stage, we construct multiple basis functions for each coarse region where basis functions correspond to different realizations. In the no-local-solve-online ensemble level method, one uses the whole set of precomputed basis functions to approximate the solution for an arbitrary realization. In the local-solve-online ensemble level method, one uses the precomputed functions to construct a multiscale basis for a particular realization. With this basis, the solution corresponding to this particular realization is approximated in LSO mixed multiscale finite element method (MsFEM). In both approaches, the accuracy of the method is related to the number of snapshots computed based on different realizations that one uses to precompute a multiscale basis. In this paper, ensemble level multiscale methods are used in multilevel Monte Carlo methods (Giles 2008a, Oper.Res. 56(3):607-617, b). In multilevel Monte Carlo methods, more accurate
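
    Independently of the multiscale discretization used in the paper, the multilevel Monte Carlo idea itself rests on the telescoping identity E[P_L] = E[P_0] + Σ_l E[P_l − P_{l−1}], with most samples spent on the cheap coarse levels. The generic sketch below uses a toy "solver" whose discretization error shrinks with level; it is not the ensemble-level MsFEM sampler of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def solve(level, omega):
    """Toy 'solver': a quantity of interest computed from random input
    `omega`, with a level-dependent discretization error ~ 2^-level."""
    return np.sin(omega) + 2.0 ** (-level) * np.cos(3.0 * omega)

def mlmc_estimate(L, n_samples):
    """Telescoping MLMC estimator; n_samples[l] samples on correction level l."""
    estimate = 0.0
    for level in range(L + 1):
        omegas = rng.standard_normal(n_samples[level])   # same inputs couple the two levels
        fine = solve(level, omegas)
        coarse = solve(level - 1, omegas) if level > 0 else 0.0
        estimate += np.mean(fine - coarse)
    return estimate

# more samples on the cheap coarse levels, fewer on the expensive fine ones
print(mlmc_estimate(L=4, n_samples=[100_000, 20_000, 5_000, 1_000, 250]))
```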

  5. Recent experimental results on level densities for compound reaction calculations

    International Nuclear Information System (INIS)

    Voinov, A.V.

    2012-01-01

    There is a problem related to the choice of the level density input for Hauser-Feshbach model calculations. Modern computer codes have several options to choose from, but it is not clear which of them should be used in particular cases. The availability of many options helps to describe existing experimental data, but it creates problems when it comes to predictions. Traditionally, different level density systematics are based on experimental data from neutron resonance spacings, which are available for a limited spin interval and one parity only. On the other hand, reaction cross section calculations use the total level density. This can create large uncertainties when converting the neutron resonance spacing to the total level density, which results in sizable uncertainties in cross section calculations. It is clear now that total level densities need to be studied experimentally in a systematic manner. Such information can be obtained only from spectra of compound nuclear reactions. The question is: do level densities obtained from compound nuclear reactions follow the same regularities as level densities obtained from neutron resonances? Are they consistent? We measured level densities of 59-64 Ni isotopes from proton evaporation spectra of 6,7 Li induced reactions. Experimental data are presented. Conclusions are drawn on how the level density depends on the neutron number and on the degree of proximity to the closed shell ( 56 Ni). The level density parameters have been compared with parameters obtained from the analysis of neutron resonances and from model predictions

  6. Systematics of the level density parameters

    International Nuclear Information System (INIS)

    Ignatyuk, A.V.; Istekov, K.K.; Smirenkin, G.N.

    1977-01-01

    The excitation energy dependence of the nuclear level density is phenomenologically systematized in terms of the Fermi gas model. The analysis has been conducted in the atomic mass number range A ≥ 150, where the collective effects are most pronounced. The density parameter a(U) is obtained using data on neutron resonances. Three systematics are considered: the Fermi gas description of the energy spectra of nuclear states alone (1), with the contributions from collective rotational and vibrational modes included (2), and with pair correlations also taken into account (3). It is shown that at excitation energies close to the neutron binding energy all three systematics of a(U) yield practically the same energy-level densities. At high energies only the (2) and (3) systematics are valid, and at energies lower than the neutron binding energy only the last systematics is adequate
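
    The energy dependence referred to above is commonly written with a shell-correction term that is damped at high excitation energy, a(U) = ã[1 + δW(1 − e^(−γU))/U]. The short sketch below evaluates this form; the asymptotic value ã, shell correction δW and damping constant γ are illustrative numbers, not the systematics derived in the paper.

```python
import numpy as np

def a_of_U(U, a_tilde, delta_W, gamma=0.06):
    """Energy-dependent level density parameter with shell-effect damping:
    a(U) = a_tilde * [1 + delta_W * (1 - exp(-gamma * U)) / U]."""
    U = np.asarray(U, dtype=float)
    return a_tilde * (1.0 + delta_W * (1.0 - np.exp(-gamma * U)) / U)

# Illustrative values: asymptotic a ~ A/9 for A ~ 160, shell correction of -2 MeV
U = np.array([2.0, 6.0, 10.0, 20.0, 40.0])      # effective excitation energy [MeV]
print(a_of_U(U, a_tilde=17.8, delta_W=-2.0))
```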

  7. Generalized Freud's equation and level densities with polynomial potential

    Indian Academy of Sciences (India)

    Akshat Boobna; Saugata Ghosh. Research article, Pramana – Journal of Physics, Volume 81, Issue 2. Keywords: orthogonal polynomial; Freud's equation; Dyson–Mehta method; method of resolvents; level density.

  8. Multilevel Monte Carlo methods using ensemble level mixed MsFEM for two-phase flow and transport simulations

    KAUST Repository

    Efendiev, Yalchin R.; Iliev, Oleg; Kronsbein, C.

    2013-01-01

    In this paper, we propose multilevel Monte Carlo (MLMC) methods that use ensemble level mixed multiscale methods in the simulations of multiphase flow and transport. The contribution of this paper is twofold: (1) a design of ensemble level mixed

  9. Monte Carlo simulations for plasma physics

    International Nuclear Information System (INIS)

    Okamoto, M.; Murakami, S.; Nakajima, N.; Wang, W.X.

    2000-07-01

    Plasma behaviours are very complicated and their analysis is generally difficult. However, when collisional processes play an important role in the plasma behaviour, the Monte Carlo method is often employed as a useful tool. For example, in neutral beam injection (NBI) heating, electron or ion cyclotron heating, and alpha heating, Coulomb collisions slow down highly energetic particles and pitch-angle scatter them. These processes are often studied by the Monte Carlo technique, and good agreement with experimental results can be obtained. Recently, the Monte Carlo method has been developed to study fast particle transport associated with heating and with the generation of the radial electric field. Further, it is applied to investigating neoclassical transport in plasmas with steep gradients of density and temperature, which is beyond the conventional neoclassical theory. In this report, we briefly summarize the research done by the present authors utilizing the Monte Carlo method. (author)

  10. Thermodynamics of excited nuclei and nuclear level densities

    International Nuclear Information System (INIS)

    Ramamurthy, V.S.

    1977-01-01

    A review has been made of the different approaches that are used for theoretical calculations of nuclear level densities. It is pointed out that while numerical calculations based on the partition function approach and shell model single particle level schemes have shed important insight into the influence of nuclear shell effects on level densities and their excitation energy dependence, and have brought out the inadequacy of the conventional Bethe formula, these calculations are yet to reach a level where they can be directly used for quantitative comparisons. Some of the important drawbacks of the numerical calculations are also discussed. In this context, a new semi-empirical level density formula is described which, while retaining the simplicity of analytical formulae, takes into account nuclear shell effects in a more realistic manner. (author)

  11. Multilevel sequential Monte Carlo samplers

    KAUST Repository

    Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan

    2016-01-01

    In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h_0 > h_1 > ⋯ > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context. That is, relative to exact sampling and Monte Carlo for the distribution at the finest level h_L. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.

  12. Multilevel sequential Monte Carlo samplers

    KAUST Repository

    Beskos, Alexandros

    2016-08-29

    In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h_0 > h_1 > ⋯ > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context. That is, relative to exact sampling and Monte Carlo for the distribution at the finest level h_L. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.

  13. Density-based Monte Carlo filter and its applications in nonlinear stochastic differential equation models.

    Science.gov (United States)

    Huang, Guanghui; Wan, Jianping; Chen, Hui

    2013-02-01

    Nonlinear stochastic differential equation models with unobservable state variables are now widely used in the analysis of PK/PD data. Unobservable state variables are usually estimated with the extended Kalman filter (EKF), and the unknown pharmacokinetic parameters are usually estimated by maximum likelihood. However, the EKF is inadequate for nonlinear PK/PD models, and the MLE is known to be biased downwards. A density-based Monte Carlo filter (DMF) is proposed to estimate the unobservable state variables, and a simulation-based M estimator is proposed to estimate the unknown parameters in this paper, where a genetic algorithm is designed to search for the optimal values of the pharmacokinetic parameters. The performances of the EKF and the DMF are compared through simulations for discrete-time and continuous-time systems, respectively, and it is found that the results based on the DMF are more accurate than those given by the EKF with respect to mean absolute error. Copyright © 2012 Elsevier Ltd. All rights reserved.
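
    As a stand-in for the density-based filter described above, the sketch below implements a plain bootstrap (sequential importance resampling) particle filter for a made-up scalar nonlinear state-space model; the dynamics, noise levels and particle count are all assumptions, and the genetic-algorithm parameter search of the paper is not included.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical nonlinear state-space model: x_t = 0.8*x_{t-1} + sin(x_{t-1}) + w,
# y_t = x_t + v, with Gaussian process noise q and observation noise r.
def simulate(T=100, q=0.3, r=0.5):
    x, xs, ys = 0.0, [], []
    for _ in range(T):
        x = 0.8 * x + np.sin(x) + q * rng.standard_normal()
        xs.append(x)
        ys.append(x + r * rng.standard_normal())
    return np.array(xs), np.array(ys)

def particle_filter(ys, n_particles=1000, q=0.3, r=0.5):
    particles = np.zeros(n_particles)
    estimates = []
    for y in ys:
        # propagate each particle through the state equation
        particles = 0.8 * particles + np.sin(particles) + q * rng.standard_normal(n_particles)
        # weight by the observation density, then resample
        weights = np.exp(-0.5 * ((y - particles) / r) ** 2)
        weights /= weights.sum()
        particles = rng.choice(particles, size=n_particles, p=weights)
        estimates.append(particles.mean())
    return np.array(estimates)

xs, ys = simulate()
x_hat = particle_filter(ys)
print("RMSE of the filtered state:", np.sqrt(np.mean((x_hat - xs) ** 2)))
```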

  14. Recent advances in measurements of the nuclear level density

    International Nuclear Information System (INIS)

    John, Bency

    2007-01-01

    A short review of recent advances in measurements of the nuclear level density is given. First results on the correlation between the inverse level density parameter and angular momentum in a number of nuclei around the Z∼50 shell region, at an excitation energy around 0.3 MeV/nucleon, are presented. Significant variations observed over and above the expected shell corrections are discussed in the context of the emerging trends in microscopic calculations of the nuclear level density. (author)

  15. Application of Monte-Carlo method in definition of key categories of most radioactive polluted soil

    International Nuclear Information System (INIS)

    Mahmudov, H.M.; Valibeyova, G.; Jafarov, Y.D.; Musaeva, Sh.Z.

    2006-01-01

    Full text: The principle of the Monte Carlo analysis consists in sampling random values of the exposure dose-rate coefficients and of the activity data within the boundaries of their individual frequency-density distributions of the corresponding exposure dose rates. This procedure is repeated many times on a computer, and the results of each round of calculations build up the overall frequency-density distribution of the exposure dose rates. The Monte Carlo analysis can be carried out at the level of categories of radiation-polluted soil. It is useful for sensitivity analyses of the measured exposure dose rate, in order to identify the major factors causing uncertainty in the reports. Such estimates are valuable for defining the key categories of radiation-polluted soil and for setting priorities in the use of resources for improving the reports. The relative uncertainty of the radiation-polluted soil categories determined with the help of the Monte Carlo analysis can be applied, where available, when there is a significant divergence between the mean value and a confidence limit, i.e. when the borders of the confidence interval are asymmetric. It is important to determine the key categories of radiation-polluted soil in order to set priorities for the resources available for preparing the reports and to prepare estimates for the most significant source categories. The notion of uncertainty in the reports also allows a threshold value to be set for a key source category, if necessary, to reflect 90 percent uncertainty in the reports. According to radiation safety norms, a radiation background level exceeding 33 mkR/hour is considered dangerous. Using the Monte Carlo calculations, the most dangerous sites and sites frequently subjected to disposal and utilization were selected from the analyzed samples of
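
    The repeated-sampling procedure described above can be sketched for a single site: draw the dose-rate coefficient and the activity from assumed distributions, build the resulting dose-rate distribution, and compare it with the 33 mkR/h (µR/h) safety level quoted in the abstract. The input distributions below are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 100_000

# Hypothetical input distributions for one soil site:
activity = rng.lognormal(mean=np.log(50.0), sigma=0.4, size=n_trials)   # relative activity
dose_coeff = rng.normal(loc=0.6, scale=0.1, size=n_trials)              # dose rate per unit activity

dose_rate = activity * dose_coeff          # exposure dose rate, here in microR/h
low, high = np.percentile(dose_rate, [5, 95])
print(f"90% interval: {low:.1f} - {high:.1f} microR/h")
print("fraction of trials above the 33 microR/h safety level:",
      np.mean(dose_rate > 33.0))
```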

  16. Application of Monte-Carlo method in definition of key categories of most radioactive polluted soil

    Energy Technology Data Exchange (ETDEWEB)

    Mahmudov, H M; Valibeyova, G; Jafarov, Y D; Musaeva, Sh Z [Institute of Radiation Problems, Azerbaijan National Academy of Sciences, Baku (Azerbaijan)

    2006-11-15

    Full text: The principle of the Monte Carlo analysis consists in sampling random values of the exposure dose-rate coefficients and of the activity data within the boundaries of their individual frequency-density distributions of the corresponding exposure dose rates. This procedure is repeated many times on a computer, and the results of each round of calculations build up the overall frequency-density distribution of the exposure dose rates. The Monte Carlo analysis can be carried out at the level of categories of radiation-polluted soil. It is useful for sensitivity analyses of the measured exposure dose rate, in order to identify the major factors causing uncertainty in the reports. Such estimates are valuable for defining the key categories of radiation-polluted soil and for setting priorities in the use of resources for improving the reports. The relative uncertainty of the radiation-polluted soil categories determined with the help of the Monte Carlo analysis can be applied, where available, when there is a significant divergence between the mean value and a confidence limit, i.e. when the borders of the confidence interval are asymmetric. It is important to determine the key categories of radiation-polluted soil in order to set priorities for the resources available for preparing the reports and to prepare estimates for the most significant source categories. The notion of uncertainty in the reports also allows a threshold value to be set for a key source category, if necessary, to reflect 90 percent uncertainty in the reports. According to radiation safety norms, a radiation background level exceeding 33 mkR/hour is considered dangerous. Using the Monte Carlo calculations, the most dangerous sites and sites frequently subjected to disposal and utilization were selected from the analyzed samples of

  17. Effect of vibrational states on nuclear level density

    International Nuclear Information System (INIS)

    Plujko, V. A.; Gorbachenko, O. M.

    2007-01-01

    Simple methods to calculate the vibrational enhancement factor of the nuclear level density, with allowance for the damping of collective states, are considered. The results of the phenomenological approach and the microscopic quasiparticle-phonon model are compared. A practical method for the calculation of the vibrational enhancement factor and the level density parameters is recommended

  18. Study of nuclear level density parameter and its temperature dependence

    International Nuclear Information System (INIS)

    Nasrabadi, M. N.; Behkami, A. N.

    2000-01-01

    The nuclear level density ρ is the basic ingredient required for theoretical studies of nuclear reactions and structure. It describes the statistical nuclear properties and is expressed as a function of various constants of motion such as the number of particles, the excitation energy and the angular momentum. In this work the energy and spin dependence of the nuclear level density will be presented and discussed. In addition, the level density parameter a will be extracted from this level density information, and its temperature and mass dependence will be obtained

  19. Nuclear level density parameter's dependence on angular momentum

    International Nuclear Information System (INIS)

    Aggarwal, Mamta; Kailas, S.

    2009-01-01

    Nuclear level densities represent a very important ingredient in statistical model calculations of nuclear reaction cross sections and help to understand the microscopic features of excited nuclei. Most of the earlier experimental nuclear level density measurements are confined to the low excitation energy and low spin region. A recent experimental investigation of nuclear level densities in the high excitation energy and angular momentum domain, with some interesting results on the dependence of the inverse level density parameter on angular momentum in the region around Z=50, has motivated us to study and analyse these experimental results in a microscopic theoretical framework. In the experiment, heavy ion fusion reactions are used to populate the excited and rotating nuclei, and the α particle evaporation spectra are measured in coincidence with the γ-ray multiplicity. The residual nuclei are in the range Z R = 48-55, with excitation energies of 30 to 40 MeV and angular momenta in the range 10 to 25. The inverse level density parameter K is found to be in the range of 9.0 - 10.5, with some exceptions

  20. Serum osteoprotegerin levels and mammographic density among high-risk women.

    Science.gov (United States)

    Moran, Olivia; Zaman, Tasnim; Eisen, Andrea; Demsky, Rochelle; Blackmore, Kristina; Knight, Julia A; Elser, Christine; Ginsburg, Ophira; Zbuk, Kevin; Yaffe, Martin; Narod, Steven A; Salmena, Leonardo; Kotsopoulos, Joanne

    2018-06-01

    Mammographic density is a risk factor for breast cancer but the mechanism behind this association is unclear. The receptor activator of nuclear factor κB (RANK)/RANK ligand (RANKL) pathway has been implicated in the development of breast cancer. Given the role of RANK signaling in mammary epithelial cell proliferation, we hypothesized this pathway may also be associated with mammographic density. Osteoprotegerin (OPG), a decoy receptor for RANKL, is known to inhibit RANK signaling. Thus, it is of interest to evaluate whether OPG levels modify breast cancer risk through mammographic density. We quantified serum OPG levels in 57 premenopausal and 43 postmenopausal women using an enzyme-linked immunosorbent assay (ELISA). Cumulus was used to measure percent density, dense area, and non-dense area for each mammographic image. Subjects were classified into high versus low OPG levels based on the median serum OPG level in the entire cohort (115.1 pg/mL). Multivariate models were used to assess the relationship between serum OPG levels and the measures of mammographic density. Serum OPG levels were not associated with mammographic density among premenopausal women (P ≥ 0.42). Among postmenopausal women, those with low serum OPG levels had higher mean percent mammographic density (20.9% vs. 13.7%; P = 0.04) and mean dense area (23.4 cm² vs. 15.2 cm²; P = 0.02) compared to those with high serum OPG levels after covariate adjustment. These findings suggest that low OPG levels may be associated with high mammographic density, particularly in postmenopausal women. Targeting RANK signaling may represent a plausible, non-surgical prevention option for high-risk women with high mammographic density, especially those with low circulating OPG levels.

  1. Monte Carlo Simulations of Electron Energy-Loss Spectra with the Addition of Fine Structure from Density Functional Theory Calculations.

    Science.gov (United States)

    Attarian Shandiz, Mohammad; Guinel, Maxime J-F; Ahmadi, Majid; Gauvin, Raynald

    2016-02-01

    A new approach is presented to introduce the fine structure of core-loss excitations into the electron energy-loss spectra of ionization edges by Monte Carlo simulations based on an optical oscillator model. The optical oscillator strength is refined using the calculated electron energy-loss near-edge structure by density functional theory calculations. This approach can predict the effects of multiple scattering and thickness on the fine structure of ionization edges. In addition, effects of the fitting range for background removal and the integration range under the ionization edge on signal-to-noise ratio are investigated.

  2. Nuclear level density of 166Er with static deformation

    International Nuclear Information System (INIS)

    Nasrabadi, M.N.

    2006-01-01

    The level density of 166 Er is calculated using the microscopic theory of interacting fermions and is compared with experimental data. It is concluded that the data can be reproduced with the level density formalism for nuclei with static deformation

  3. Monte Carlo study of voxel S factor dependence on tissue density and atomic composition

    Energy Technology Data Exchange (ETDEWEB)

    Amato, Ernesto, E-mail: eamato@unime.it [University of Messina, Department of Biomedical Sciences and of Morphologic and Functional Imaging, Section of Radiological Sciences, via Consolare Valeria, 1, I-98125 Messina (Italy); Italiano, Antonio [INFN – Istituto Nazionale di Fisica Nucleare, Gruppo Collegato di Messina (Italy); Baldari, Sergio [University of Messina, Department of Biomedical Sciences and of Morphologic and Functional Imaging, Section of Radiological Sciences, via Consolare Valeria, 1, I-98125 Messina (Italy)

    2013-11-21

    Voxel dosimetry is a common approach to the internal dosimetry of non-uniform activity distributions in nuclear medicine therapies with radiopharmaceuticals and in the estimation of the radiation hazard due to internal contamination of radionuclides. Aim of the present work is to extend our analytical approach for the calculation of voxel S factors to materials different from the soft tissue. We used a Monte Carlo simulation in GEANT4 of a voxelized region of each material in which the source of monoenergetic electrons or photons was uniformly distributed within the central voxel, and the energy deposition was scored over the surrounding 11×11×11 voxels. Voxel S factors were obtained for the following standard ICRP materials: Adipose tissue, Bone cortical, Brain, Lung, Muscle skeletal and Tissue soft with 1 g cm −3 density. Moreover, we considered the standard ICRU materials: Bone compact and Muscle striated. Voxel S factors were represented as a function of the “normalized radius”, defined as the ratio between the source–target voxel distance and the voxel side. We found that voxel S factors and related analytical fit functions are mainly affected by the tissue density, while the material composition gives only a slight contribution to the difference between data series, which is negligible for practical purposes. Our results can help in broadening the dosimetric three-dimensional approach based on voxel S factors to other tissues where diagnostic and therapeutic radionuclides can be taken up and radiation can propagate.

  4. Monte Carlo study of voxel S factor dependence on tissue density and atomic composition

    International Nuclear Information System (INIS)

    Amato, Ernesto; Italiano, Antonio; Baldari, Sergio

    2013-01-01

    Voxel dosimetry is a common approach to the internal dosimetry of non-uniform activity distributions in nuclear medicine therapies with radiopharmaceuticals and in the estimation of the radiation hazard due to internal contamination of radionuclides. Aim of the present work is to extend our analytical approach for the calculation of voxel S factors to materials different from the soft tissue. We used a Monte Carlo simulation in GEANT4 of a voxelized region of each material in which the source of monoenergetic electrons or photons was uniformly distributed within the central voxel, and the energy deposition was scored over the surrounding 11×11×11 voxels. Voxel S factors were obtained for the following standard ICRP materials: Adipose tissue, Bone cortical, Brain, Lung, Muscle skeletal and Tissue soft with 1 g cm −3 density. Moreover, we considered the standard ICRU materials: Bone compact and Muscle striated. Voxel S factors were represented as a function of the “normalized radius”, defined as the ratio between the source–target voxel distance and the voxel side. We found that voxel S factors and related analytical fit functions are mainly affected by the tissue density, while the material composition gives only a slight contribution to the difference between data series, which is negligible for practical purposes. Our results can help in broadening the dosimetric three-dimensional approach based on voxel S factors to other tissues where diagnostic and therapeutic radionuclides can be taken up and radiation can propagate

  5. Excitation energy and angular momentum dependence of the nuclear level densities

    International Nuclear Information System (INIS)

    Razavi, R.; Kakavand, T.; Behkami, A. N.

    2007-01-01

    We have investigated the excitation energy (E) dependence of the nuclear level density for the Bethe formula and the constant temperature model. The level density parameter a and the back-shifted energy of the Bethe formula are obtained by fitting the complete level schemes. The level density parameters of the constant temperature model have also been determined for several nuclei. We have shown that the microscopic theory provides more precise information on the nuclear level densities. On the other hand, the spin cut-off parameter and the effective moment of inertia are determined by studying the angular momentum (J) dependence of the nuclear level density, and the effective moment of inertia is compared with the rigid-body value.
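
    The spin dependence mentioned above is usually factorized with a spin cutoff parameter σ, the fraction of levels at spin J being (2J+1)/(2σ²) exp(−(J+1/2)²/(2σ²)), with σ² tied to the effective moment of inertia. A quick numerical check of this textbook form is sketched below; σ = 4 is an arbitrary illustrative value.

```python
import numpy as np

def spin_fraction(J, sigma):
    """Fraction of the total level density carried by spin J, using the
    standard spin cutoff form (2J+1)/(2 sigma^2) * exp(-(J+1/2)^2 / (2 sigma^2))."""
    return (2 * J + 1) / (2 * sigma**2) * np.exp(-((J + 0.5) ** 2) / (2 * sigma**2))

sigma = 4.0                      # illustrative spin cutoff parameter
J = np.arange(0, 16)
frac = spin_fraction(J, sigma)
print("sum over J ~", frac.sum())            # close to 1
print("most probable spin:", J[np.argmax(frac)])
```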

  6. Level-density parameter of nuclei at finite temperature

    International Nuclear Information System (INIS)

    Gregoire, C.; Kuo, T.T.S.; Stout, D.B.

    1991-01-01

    The contribution of particle-particle (hole-hole) and of particle-hole ring diagrams to the nuclear level-density parameter at finite temperature is calculated. We first derive the correlated grand potential with the above ring diagrams included to all orders by way of a finite temperature RPA equation. An expression for the correlated level-density parameter is then obtained by differentiating the grand potential. Results obtained for the 40 Ca nucleus with realistic matrix elements derived from the Paris potential are presented. The contribution of the RPA correlations is found to be important, being significantly larger than typical Hartree-Fock results. The temperature dependence of the level-density parameter derived in the present work is generally similar to that obtained in a schematic model. Comparison with available experimental data is discussed. (orig.)

  7. Realistic level densities in fragment emission at high excitation energies

    International Nuclear Information System (INIS)

    Mustafa, M.G.; Blann, M.; Ignatyuk, A.V.

    1993-01-01

    Heavy fragment emission from a 100 Ru (Z = 44) compound nucleus at 400 and 800 MeV of excitation is analyzed to study the influence of level density models on final yields. An approach is used in which only quasibound shell-model levels are included in calculating level densities. We also test the traditional Fermi gas model, for which there is no upper energy limit to the single particle levels. We compare the influence of these two level density models in evaporation calculations of primary fragment excitations, kinetic energies and yields, and on final product yields

  8. Diffusion quantum Monte Carlo and density functional calculations of the structural stability of bilayer arsenene

    Science.gov (United States)

    Kadioglu, Yelda; Santana, Juan A.; Özaydin, H. Duygu; Ersan, Fatih; Aktürk, O. Üzengi; Aktürk, Ethem; Reboredo, Fernando A.

    2018-06-01

    We have studied the structural stability of monolayer and bilayer arsenene (As) in the buckled (b) and washboard (w) phases with diffusion quantum Monte Carlo (DMC) and density functional theory (DFT) calculations. DMC yields cohesive energies of 2.826(2) eV/atom for monolayer b-As and 2.792(3) eV/atom for w-As. In the case of bilayer As, DMC and DFT predict that AA-stacking is the more stable form of b-As, while AB is the most stable form of w-As. The DMC layer-layer binding energies for b-As-AA and w-As-AB are 30(1) and 53(1) meV/atom, respectively. The interlayer separations were estimated with DMC at 3.521(1) Å for b-As-AA and 3.145(1) Å for w-As-AB. A comparison of DMC and DFT results shows that the van der Waals density functional method yields energetic properties of arsenene close to DMC, while the DFT + D3 method closely reproduced the geometric properties from DMC. The electronic properties of monolayer and bilayer arsenene were explored with various DFT methods. The bandgap values vary significantly with the DFT method, but the results are generally qualitatively consistent. We expect the present work to be useful for future experiments attempting to prepare multilayer arsenene and for further development of DFT methods for weakly bonded systems.

  9. Functional approximations to posterior densities: a neural network approach to efficient sampling

    NARCIS (Netherlands)

    L.F. Hoogerheide (Lennart); J.F. Kaashoek (Johan); H.K. van Dijk (Herman)

    2002-01-01

    The performance of Monte Carlo integration methods like importance sampling or Markov Chain Monte Carlo procedures greatly depends on the choice of the importance or candidate density. Usually, such a density has to be "close" to the target density in order to yield numerically accurate

  10. Three-Dimensional Simulation of DRIE Process Based on the Narrow Band Level Set and Monte Carlo Method

    Directory of Open Access Journals (Sweden)

    Jia-Cheng Yu

    2018-02-01

    A three-dimensional topography simulation of deep reactive ion etching (DRIE) is developed based on the narrow band level set method for surface evolution and the Monte Carlo method for flux distribution. The advanced level set method is implemented to simulate the time-dependent movement of the etched surface. Meanwhile, accelerated by a ray tracing algorithm, the Monte Carlo method incorporates all dominant physical and chemical mechanisms such as ion-enhanced etching, ballistic transport, ion scattering, and sidewall passivation. Modified models of charged particles and neutral particles are used to determine their contributions to the etching rate. Effects such as the scalloping effect and the lag effect are investigated in simulations and experiments. In addition, quantitative analyses are conducted to measure the simulation error. Finally, this simulator will serve as an accurate prediction tool for some MEMS fabrication processes.

  11. The management-retrieval code of nuclear level density sub-library (CENPL-NLD)

    International Nuclear Information System (INIS)

    Ge Zhigang; Su Zongdi; Huang Zhongfu; Dong Liaoyuan

    1995-01-01

    The management-retrieval code of the Nuclear Level Density (NLD) sub-library is presented. It contains two retrieval modes: single nucleus (SN) and neutron reaction (NR). The latter contains four kinds of retrieval types. The code can not only retrieve level density parameters and the data related to the level density, but can also calculate the relevant quantities using different level density parameters and compare the calculated results with the related data, in order to help the user select level density parameters

  12. Unified model of nuclear mass and level density formulas

    International Nuclear Information System (INIS)

    Nakamura, Hisashi

    2001-01-01

    The objective of the present work is to obtain a unified description of nuclear shell, pairing and deformation effects for both ground state masses and level densities, and to find a new set of parameter systematics for both the mass and the level density formulas on the basis of a model for new single-particle state densities. In this model, an analytical expression is adopted for the anisotropic harmonic oscillator spectra, but the shell-pairing correlations are introduced in a new way. (author)

  13. Stochastic estimation of nuclear level density in the nuclear shell model: An application to parity-dependent level density in 58Ni

    Directory of Open Access Journals (Sweden)

    Noritaka Shimizu

    2016-02-01

    We introduce a novel method to obtain level densities in large-scale shell-model calculations. Our method is a stochastic estimation of the eigenvalue count based on a shifted Krylov-subspace method, which enables us to obtain level densities of huge Hamiltonian matrices. This framework leads to a successful description of both low-lying spectroscopy and the experimentally observed equilibration of Jπ=2+ and 2− states in 58Ni in a unified manner.

  14. Ultrasonic level, temperature, and density sensor

    International Nuclear Information System (INIS)

    Rogers, S.C.; Miller, G.N.

    1982-01-01

    A sensor has been developed to measure simultaneously the level, temperature, and density of the fluid in which it is immersed. The sensor is a thin, rectangular stainless steel ribbon which acts as a waveguide and is housed in a perforated tube. The waveguide is coupled to a section of magnetostrictive magnetic-coil transducers. These transducers are excited in an alternating sequence to interrogate the sensor with both torsional ultrasonic waves, utilizing the Wiedemann effect, and extensional ultrasonic waves, using the Joule effect. The measured torsional wave transit time is a function of the density, level, and temperature of the fluid surrounding the waveguide. The measured extensional wave transit time is a function of the temperature of the waveguide only. The sensor is divided into zones by the introduction of reflecting surfaces at measured intervals along its length. Consequently, the transit times from each reflecting surface can be analyzed to yield a temperature profile and a density profile along the length of the sensor. Improvements in acoustic wave dampener and pressure seal designs enhance the compatibility of the probe with high-temperature, high-radiation, water-steam environments and increase the likelihood of survival in such environments. Utilization of a microcomputer to automate data sampling and processing has resulted in improved resolution of the sensor

  15. Monte Carlo codes and Monte Carlo simulator program

    International Nuclear Information System (INIS)

    Higuchi, Kenji; Asai, Kiyoshi; Suganuma, Masayuki.

    1990-03-01

    Four typical Monte Carlo codes, KENO-IV, MORSE, MCNP and VIM, have been vectorized on the VP-100 at the Computing Center, JAERI. The problems in vector processing of Monte Carlo codes on vector processors have become clear through this work. As a result, it is recognized that there are difficulties in obtaining good performance in vector processing of Monte Carlo codes. A Monte Carlo computing machine, which processes Monte Carlo codes with high performance, has been under development at our Computing Center since 1987. The concept of the Monte Carlo computing machine and its performance have been investigated and estimated by using a software simulator. In this report the problems in vectorization of Monte Carlo codes, the Monte Carlo pipelines proposed to mitigate these difficulties, and the results of the performance estimation of the Monte Carlo computing machine by the simulator are described. (author)

  16. MC21 Monte Carlo analysis of the Hoogenboom-Martin full-core PWR benchmark problem - 301

    International Nuclear Information System (INIS)

    Kelly, D.J.; Sutton, Th.M.; Trumbull, T.H.; Dobreff, P.S.

    2010-01-01

    At the 2009 American Nuclear Society Mathematics and Computation conference, Hoogenboom and Martin proposed a full-core PWR model to monitor the improvement of Monte Carlo codes to compute detailed power density distributions. This paper describes the application of the MC21 Monte Carlo code to the analysis of this benchmark model. With the MC21 code, we obtained detailed power distributions over the entire core. The model consisted of 214 assemblies, each made up of a 17x17 array of pins. Each pin was subdivided into 100 axial nodes, thus resulting in over seven million tally regions. Various cases were run to assess the statistical convergence of the model. This included runs of 10 billion and 40 billion neutron histories, as well as ten independent runs of 4 billion neutron histories each. The 40 billion neutron-history calculation resulted in 43% of all regions having a 95% confidence interval of 2% or less, implying a relative standard deviation of 1%. Furthermore, 99.7% of regions with a relative power density of 1.0 or greater have a similar confidence level. We present timing results that assess the MC21 performance relative to the number of tallies requested. Source convergence was monitored by analyzing plots of the Shannon entropy and eigenvalue versus active cycle. We also obtained an estimate of the dominance ratio. Additionally, we performed an analysis of the error in an attempt to ascertain the validity of the confidence intervals predicted by MC21. Finally, we look forward to the prospect of full-core 3-D Monte Carlo depletion by scoping out the required problem size. This study provides an initial data point for the Hoogenboom-Martin benchmark model using a state-of-the-art Monte Carlo code. (authors)
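
    The convergence bookkeeping quoted above (what fraction of tally regions reaches a given 95% confidence level, overall and for high-power regions) is easy to reproduce from per-region relative standard deviations. A hedged Python sketch follows; the array names and the synthetic data are illustrative, not MC21 output.

    import numpy as np

    def fraction_converged(rel_std, rel_ci_target=0.02, rel_power=None, power_cut=None):
        """Fraction of tally regions whose ~95% confidence half-width
        (1.96*sigma, relative to the mean) is within rel_ci_target,
        optionally restricted to regions with relative power >= power_cut."""
        rel_ci = 1.96 * np.asarray(rel_std)
        mask = np.ones(rel_ci.shape, dtype=bool)
        if power_cut is not None:
            mask &= np.asarray(rel_power) >= power_cut
        return float(np.mean(rel_ci[mask] <= rel_ci_target))

    # synthetic data standing in for ~7 million pin-node tallies
    rng = np.random.default_rng(0)
    rel_std = np.abs(rng.normal(0.010, 0.003, size=7_000_000))
    rel_power = rng.gamma(2.0, 0.5, size=7_000_000)
    print(fraction_converged(rel_std))                                      # all regions
    print(fraction_converged(rel_std, rel_power=rel_power, power_cut=1.0))  # high-power regions only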

  17. Calculation of power density with MCNP in TRIGA reactor

    International Nuclear Information System (INIS)

    Snoj, L.; Ravnik, M.

    2006-01-01

    Modern Monte Carlo codes (e.g. MCNP) allow the calculation of the power density distribution in 3-D geometry, treating the geometry in detail without unit-cell homogenization. To normalize an MCNP calculation to the steady-state thermal power of a reactor, one must use appropriate scaling factors. These scaling factors are not adequately described in the MCNP manual, and their use requires detailed knowledge of the code model. As the application of MCNP to power density calculations in TRIGA reactors has not been reported in the open literature, the procedure for calculating the power density with MCNP and normalizing it to the power level of the reactor is described in the paper. (author)
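
    The normalization alluded to above usually amounts to multiplying the per-source-particle energy-deposition tally by the source rate corresponding to the desired reactor power. A hedged sketch of that arithmetic follows; nu-bar, the recoverable energy per fission and the example numbers are typical illustrative values, not taken from the paper or the MCNP manual.

    MEV_TO_J = 1.602e-13

    def source_rate(power_w, nu_bar=2.44, e_per_fission_mev=200.0, k_eff=1.0):
        """Source neutrons emitted per second for a reactor at power_w watts."""
        fission_rate = power_w / (e_per_fission_mev * MEV_TO_J)   # fissions per second
        return fission_rate * nu_bar / k_eff

    def power_density(tally_mev_per_g, rho_g_per_cm3, power_w, **kw):
        """Convert a per-source-particle heating tally [MeV/g] to W/cm^3."""
        return tally_mev_per_g * rho_g_per_cm3 * MEV_TO_J * source_rate(power_w, **kw)

    # example: a 250 kW core, fuel density 6 g/cm^3, tally of 1.2e-5 MeV/g per source neutron
    print(power_density(1.2e-5, 6.0, 250.0e3))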

  18. Biases in Monte Carlo eigenvalue calculations

    Energy Technology Data Exchange (ETDEWEB)

    Gelbard, E.M.

    1992-12-01

    The Monte Carlo method has been used for many years to analyze the neutronics of nuclear reactors. In fact, as the power of computers has increased the importance of Monte Carlo in neutronics has also increased, until today this method plays a central role in reactor analysis and design. Monte Carlo is used in neutronics for two somewhat different purposes, i.e., (a) to compute the distribution of neutrons in a given medium when the neutron source-density is specified, and (b) to compute the neutron distribution in a self-sustaining chain reaction, in which case the source is determined as the eigenvector of a certain linear operator. In (b), then, the source is not given, but must be computed. In the first case (the "fixed-source" case) the Monte Carlo calculation is unbiased. That is to say that, if the calculation is repeated ("replicated") over and over, with independent random number sequences for each replica, then averages over all replicas will approach the correct neutron distribution as the number of replicas goes to infinity. Unfortunately, the computation is not unbiased in the second case, which we discuss here.

  19. Biases in Monte Carlo eigenvalue calculations

    Energy Technology Data Exchange (ETDEWEB)

    Gelbard, E.M.

    1992-01-01

    The Monte Carlo method has been used for many years to analyze the neutronics of nuclear reactors. In fact, as the power of computers has increased the importance of Monte Carlo in neutronics has also increased, until today this method plays a central role in reactor analysis and design. Monte Carlo is used in neutronics for two somewhat different purposes, i.e., (a) to compute the distribution of neutrons in a given medium when the neutron source-density is specified, and (b) to compute the neutron distribution in a self-sustaining chain reaction, in which case the source is determined as the eigenvector of a certain linear operator. In (b), then, the source is not given, but must be computed. In the first case (the "fixed-source" case) the Monte Carlo calculation is unbiased. That is to say that, if the calculation is repeated ("replicated") over and over, with independent random number sequences for each replica, then averages over all replicas will approach the correct neutron distribution as the number of replicas goes to infinity. Unfortunately, the computation is not unbiased in the second case, which we discuss here.

  20. Biases in Monte Carlo eigenvalue calculations

    International Nuclear Information System (INIS)

    Gelbard, E.M.

    1992-01-01

    The Monte Carlo method has been used for many years to analyze the neutronics of nuclear reactors. In fact, as the power of computers has increased the importance of Monte Carlo in neutronics has also increased, until today this method plays a central role in reactor analysis and design. Monte Carlo is used in neutronics for two somewhat different purposes, i.e., (a) to compute the distribution of neutrons in a given medium when the neutron source-density is specified, and (b) to compute the neutron distribution in a self-sustaining chain reaction, in which case the source is determined as the eigenvector of a certain linear operator. In (b), then, the source is not given, but must be computed. In the first case (the "fixed-source" case) the Monte Carlo calculation is unbiased. That is to say that, if the calculation is repeated ("replicated") over and over, with independent random number sequences for each replica, then averages over all replicas will approach the correct neutron distribution as the number of replicas goes to infinity. Unfortunately, the computation is not unbiased in the second case, which we discuss here

  1. Level density of random matrices for decaying systems

    International Nuclear Information System (INIS)

    Haake, F.; Izrailev, F.; Saher, D.; Sommers, H.-J.

    1991-01-01

    Analytical and numerical results for the level density of a certain class of random non-Hermitian matrices of the form H+iΓ are presented. The conservative part H belongs to the Gaussian orthogonal ensemble, while the damping piece Γ is quadratic in Gaussian random numbers and may describe the decay of resonances through various channels. In the limit of a large matrix dimension the level density assumes a surprisingly simple dependence on the relative strength of the damping and the number of channels. 18 refs.; 4 figs

  2. On the evaluation of semiclassical nuclear many-particle many-hole level densities

    International Nuclear Information System (INIS)

    Blin, A.H.; Hiller, B.; Schuck, P.; Yannouleas, C.

    1985-10-01

    An exact general scheme is described to calculate the m-particle n-hole fermion level densities for an arbitrary single particle Hamiltonian, taking into account the Pauli exclusion principle. This technique is applied to obtain level densities of the three-dimensional isotropic harmonic oscillator semiclassically in the Thomas-Fermi approach. In addition, we study the 1-particle 1-hole level density of the Woods-Saxon potential. For the harmonic oscillator we analyze the temperature dependence of the linear response function and the influence of pairing correlations on the 1-particle 1-hole level density. Finally, a Taylor expansion method for the m-particle n-hole level densities is discussed

  3. Ghrelin plasma levels, gastric ghrelin cell density and bone mineral density in women with rheumatoid arthritis.

    Science.gov (United States)

    Maksud, F A N; Kakehasi, A M; Guimarães, M F B R; Machado, C J; Barbosa, A J A

    2017-05-18

    Generalized bone loss can be considered an extra-articular manifestation of rheumatoid arthritis (RA) that may lead to the occurrence of fractures, resulting in decreased quality of life and increased healthcare costs. The peptide ghrelin has been shown to positively affect osteoblasts in vitro and has anti-inflammatory actions, but the studies that correlate ghrelin plasma levels and RA have contradictory results. We aimed to evaluate the correlation between total ghrelin plasma levels, density of ghrelin-immunoreactive cells in the gastric mucosa, and bone mineral density (BMD) in twenty adult women with established RA with 6 months or more of symptoms (mean age of 52.70±11.40 years). Patients with RA presented higher ghrelin-immunoreactive cell density in the gastric mucosa (P=0.008) compared with healthy females. There was a positive relationship between femoral neck BMD and gastric ghrelin cell density (P=0.007). However, these same patients presented a negative correlation between plasma ghrelin levels and total femoral BMD (P=0.03). The present results indicate that ghrelin may be involved in the bone metabolism of patients with RA. However, the higher density of ghrelin-producing cells in the gastric mucosa of these patients does not seem to induce a corresponding elevation in the plasma levels of this peptide.

  4. Ghrelin plasma levels, gastric ghrelin cell density and bone mineral density in women with rheumatoid arthritis

    Directory of Open Access Journals (Sweden)

    F.A.N. Maksud

    Full Text Available Generalized bone loss can be considered an extra-articular manifestation of rheumatoid arthritis (RA) that may lead to the occurrence of fractures, resulting in decreased quality of life and increased healthcare costs. The peptide ghrelin has been shown to positively affect osteoblasts in vitro and has anti-inflammatory actions, but the studies that correlate ghrelin plasma levels and RA have contradictory results. We aimed to evaluate the correlation between total ghrelin plasma levels, density of ghrelin-immunoreactive cells in the gastric mucosa, and bone mineral density (BMD) in twenty adult women with established RA with 6 months or more of symptoms (mean age of 52.70±11.40 years). Patients with RA presented higher ghrelin-immunoreactive cell density in the gastric mucosa (P=0.008) compared with healthy females. There was a positive relationship between femoral neck BMD and gastric ghrelin cell density (P=0.007). However, these same patients presented a negative correlation between plasma ghrelin levels and total femoral BMD (P=0.03). The present results indicate that ghrelin may be involved in the bone metabolism of patients with RA. However, the higher density of ghrelin-producing cells in the gastric mucosa of these patients does not seem to induce a corresponding elevation in the plasma levels of this peptide.

  5. An Evaluation of the Plant Density Estimator the Point-Centred Quarter Method (PCQM Using Monte Carlo Simulation.

    Directory of Open Access Journals (Sweden)

    Md Nabiul Islam Khan

    Full Text Available In the Point-Centred Quarter Method (PCQM), the mean distance of the first nearest plants in each quadrant of a number of random sample points is converted to plant density. It is a quick method for plant density estimation. In recent publications the estimator equations of simple PCQM (PCQM1) and higher order ones (PCQM2 and PCQM3, which use the distance of the second and third nearest plants, respectively) show discrepancy. This study attempts to review PCQM estimators in order to find the most accurate equation form. We tested the accuracy of different PCQM equations using Monte Carlo simulations in simulated plant populations (having 'random', 'aggregated' and 'regular' spatial patterns) and empirical ones. PCQM requires at least 50 sample points to ensure a desired level of accuracy. PCQM with a corrected estimator is more accurate than with a previously published estimator. The published PCQM versions (PCQM1, PCQM2 and PCQM3) show significant differences in accuracy of density estimation, i.e. the higher order PCQM provides higher accuracy. However, the corrected PCQM versions show no significant differences among them as tested in various spatial patterns, except in plant assemblages with a strong repulsion (plant competition). If N is the number of sample points and R is distance, the corrected estimator of PCQM1 is 4(4N − 1)/(π ∑ R²) and not 12N/(π ∑ R²), that of PCQM2 is 4(8N − 1)/(π ∑ R²) and not 28N/(π ∑ R²), and that of PCQM3 is 4(12N − 1)/(π ∑ R²) and not 44N/(π ∑ R²) as published. If the spatial pattern of a plant association is random, PCQM1 with a corrected equation estimator and over 50 sample points would be sufficient to provide accurate density estimation. PCQM using just the nearest tree in each quadrant is therefore sufficient, which facilitates sampling of trees, particularly in areas with just a few hundred trees per hectare. PCQM3 provides the best density estimations for all types of plant assemblages including the repulsion process
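
    A hedged numerical check of the corrected estimator quoted above is straightforward: simulate a homogeneous Poisson pattern of known density, pick random sample points, take the nearest plant in each quadrant, and compare 4(4N − 1)/(π ∑ R²) with the uncorrected 12N/(π ∑ R²). The Python sketch below keeps the sample points away from the window edges so no quadrant is empty; all set-up choices are illustrative.

    import numpy as np

    rng = np.random.default_rng(2)

    def pcqm1_distances(plants, sample_points):
        """Distance to the nearest plant in each of the four quadrants
        around every sample point (PCQM1)."""
        dists = []
        for p in sample_points:
            d = plants - p
            r = np.hypot(d[:, 0], d[:, 1])
            quadrant = (d[:, 0] >= 0).astype(int) * 2 + (d[:, 1] >= 0).astype(int)
            for q in range(4):
                dists.append(r[quadrant == q].min())
        return np.array(dists)

    # 'random' (Poisson) pattern with a true density of 500 plants per unit area
    true_density = 500
    plants = rng.random((rng.poisson(true_density), 2))
    sample_points = 0.25 + 0.5 * rng.random((60, 2))   # >= 50 points, kept off the edges
    R = pcqm1_distances(plants, sample_points)
    N = len(sample_points)

    corrected = 4 * (4 * N - 1) / (np.pi * np.sum(R**2))
    uncorrected = 12 * N / (np.pi * np.sum(R**2))
    print(true_density, round(corrected, 1), round(uncorrected, 1))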

  6. Three-dimensional Monte Carlo simulations of W7-X plasma transport: density control and particle balance in steady-state operations

    International Nuclear Information System (INIS)

    Sharma, D.; Feng, Y.; Sardei, F.; Reiter, D.

    2005-01-01

    This paper presents self-consistent three-dimensional (3D) plasma transport simulations in the boundary of stellarator W7-X obtained with the Monte Carlo code EMC3-EIRENE for three typical island divertor configurations. The chosen 3D grid consists of relatively simple nested finite toroidal surfaces defined on a toroidal field period and covering the whole edge topology, which includes closed surfaces, islands and ergodic regions. Local grid refinements account for the required high resolution in the divertor region. The distribution of plasma density and temperature in the divertor region, as well as the power deposition profiles on the divertor plates, are shown to strongly depend on the island geometry, i.e. on the position and size of the dominant island chain. Configurations with strike-point positions closer to the gap of the divertor chamber generally favour the neutral compression in the divertor chamber and hence the pumping efficiency. The ratio of pumping to recycling fluxes is found to be roughly independent of the separatrix density and is thus a figure of merit for the quality of the configuration and of the divertor system in terms of density control. Lower limits for the achievable separatrix density, which determine the particle exhaust capabilities in stationary conditions, are compared for the three W7-X configurations

  7. Coevolution Based Adaptive Monte Carlo Localization (CEAMCL

    Directory of Open Access Journals (Sweden)

    Luo Ronghua

    2008-11-01

    Full Text Available An adaptive Monte Carlo localization algorithm based on the coevolution mechanism of ecological species is proposed. Samples are clustered into species, each of which represents a hypothesis of the robot's pose. Since the coevolution between the species ensures that the multiple distinct hypotheses can be tracked stably, the problem of premature convergence when using MCL in highly symmetric environments can be solved. And the sample size can be adjusted adaptively over time according to the uncertainty of the robot's pose by using the population growth model. In addition, by using the crossover and mutation operators in evolutionary computation, intra-species evolution can drive the samples to move towards the regions where the desired posterior density is large. So a small set of samples can represent the desired density well enough to make precise localization possible. The new algorithm is termed coevolution based adaptive Monte Carlo localization (CEAMCL). Experiments have been carried out to prove the efficiency of the new localization algorithm.

  8. Single particle level density in a finite depth potential well

    International Nuclear Information System (INIS)

    Shlomo, S.; Kolomietz, V.M.; Dejbakhsh, H.

    1997-01-01

    We consider the single particle level density g(ε) of a realistic finite depth potential well, concentrating on the continuum (ε>0) region. We carry out quantum-mechanical calculations of the partial level density g_l(ε), associated with a well-defined orbital angular momentum l≤40, using the phase-shift derivative method and the Green's-function method, and compare the results with those obtained using the Thomas-Fermi approximation. We also numerically calculate g(ε) as a sum of g_l(ε) over l up to a certain value of l_max≤40 and determine the corresponding smooth level densities using the Strutinsky smoothing procedure. We demonstrate, in accordance with Levinson's theorem, that the partial contribution g_l(ε) to the single particle level density from continuum states has positive and negative values. However, g(ε) is nonnegative. We also point out that this is not the case for an energy-dependent potential well. copyright 1997 The American Physical Society
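
    For orientation, the phase-shift derivative expression on which such continuum calculations are commonly built is, in the usual notation (a standard result quoted here as background, with spin degeneracy factors omitted; it is not reproduced from the paper itself):

    % change of the partial single-particle level density with orbital angular momentum l,
    % relative to the free gas (phase-shift derivative / Beth-Uhlenbeck form)
    g_l(\varepsilon) = \frac{2l+1}{\pi}\,\frac{d\delta_l(\varepsilon)}{d\varepsilon},
    \qquad
    g(\varepsilon) = \sum_{l=0}^{l_{\max}} g_l(\varepsilon)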

  9. A united event grand canonical Monte Carlo study of partially doped polyaniline

    Energy Technology Data Exchange (ETDEWEB)

    Byshkin, M. S., E-mail: mbyshkin@unisa.it, E-mail: gmilano@unisa.it; Correa, A. [Modeling Lab for Nanostructure and Catalysis, Dipartimento di Chimica e Biologia and NANOMATES, University of Salerno, 84084, via Ponte don Melillo, Fisciano Salerno (Italy); Buonocore, F. [ENEA Casaccia Research Center, Via Anguillarese 301, 00123 Rome (Italy); Di Matteo, A. [STMicroelectronics, Via Remo de Feo, 1 80022 Arzano, Naples (Italy); IMAST Scarl Piazza Bovio 22, 80133 Naples (Italy); Milano, G., E-mail: mbyshkin@unisa.it, E-mail: gmilano@unisa.it [Modeling Lab for Nanostructure and Catalysis, Dipartimento di Chimica e Biologia and NANOMATES, University of Salerno, 84084, via Ponte don Melillo, Fisciano Salerno (Italy); IMAST Scarl Piazza Bovio 22, 80133 Naples (Italy)

    2013-12-28

    A Grand Canonical Monte Carlo scheme, based on united events combining protonation/deprotonation and insertion/deletion of HCl molecules is proposed for the generation of polyaniline structures at intermediate doping levels between 0% (PANI EB) and 100% (PANI ES). A procedure based on this scheme and subsequent structure relaxations using molecular dynamics is described and validated. Using the proposed scheme and the corresponding procedure, atomistic models of amorphous PANI-HCl structures were generated and studied at different doping levels. Density, structure factors, and solubility parameters were calculated. Their values agree well with available experimental data. The interactions of HCl with PANI have been studied and distribution of their energies has been analyzed. The procedure has also been extended to the generation of PANI models including adsorbed water and the effect of inclusion of water molecules on PANI properties has also been modeled and discussed. The protocol described here is general and the proposed United Event Grand Canonical Monte Carlo scheme can be easily extended to similar polymeric materials used in gas sensing and to other systems involving adsorption and chemical reactions steps.

  10. The level density parameters for fermi gas model

    International Nuclear Information System (INIS)

    Zuang Youxiang; Wang Cuilan; Zhou Chunmei; Su Zongdi

    1986-01-01

    Nuclear level densities are a crucial ingredient in statistical models, for instance in the calculations of widths, cross sections, emitted particle spectra, etc. for various reaction channels. In this work 667 sets of newer and more reliable experimental data are adopted, which include the average level spacing D, the radiative capture width Γ_γ0 at the neutron binding energy and the cumulative level number N_0 at low excitation energy, published from 1973 to 1983. Based on the parameters given by Gilbert-Cameron and Cook, the physical quantities mentioned above are calculated. The calculated results deviate noticeably from the experimental values. In order to improve the fit, the parameters in the G-C formula are adjusted and a new set of level density parameters is obtained. The parameters of this work are more suitable for fitting the new measurements

  11. Monte Carlo and Quasi-Monte Carlo Sampling

    CERN Document Server

    Lemieux, Christiane

    2009-01-01

    Presents essential tools for using quasi-Monte Carlo sampling in practice. This book focuses on issues related to Monte Carlo methods, including uniform and non-uniform random number generation and variance reduction techniques, and covers several aspects of quasi-Monte Carlo methods.
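
    A minimal Python illustration of the Monte Carlo versus quasi-Monte Carlo contrast the book treats, using SciPy's scrambled Sobol' generator (assumes scipy >= 1.7; the smooth test integrand and sample size are arbitrary):

    import numpy as np
    from scipy.stats import qmc

    d, n = 5, 2**12
    rng = np.random.default_rng(0)

    def f(x):
        # smooth test integrand on [0, 1]^d whose exact integral is 1
        return np.prod(np.exp(x) / (np.e - 1.0), axis=-1)

    # plain Monte Carlo: error decays like n^(-1/2)
    err_mc = abs(f(rng.random((n, d))).mean() - 1.0)

    # quasi-Monte Carlo: low-discrepancy (scrambled Sobol') points;
    # for smooth integrands the error decays close to n^(-1)
    sobol = qmc.Sobol(d=d, scramble=True, seed=0)
    err_qmc = abs(f(sobol.random(n)).mean() - 1.0)

    print(f"MC error  ~ {err_mc:.2e}")
    print(f"QMC error ~ {err_qmc:.2e}")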

  12. Effect of pairing in nuclear level density at low temperatures

    International Nuclear Information System (INIS)

    Rhine Kumar, A.K.; Modi, Swati; Arumugam, P.

    2013-01-01

    The nuclear level density (NLD) has been an interesting topic for researchers due to its importance in many aspects of nuclear physics, nuclear astrophysics, nuclear medicine, and other applied areas. The calculation of the NLD helps us to understand the energy distribution of the excited levels of nuclei, the entropy, the specific heat, reaction cross sections, etc. In this work the effect of temperature and pairing on the level density of the nucleus 116Sn has been studied

  13. A contribution Monte Carlo method

    International Nuclear Information System (INIS)

    Aboughantous, C.H.

    1994-01-01

    A Contribution Monte Carlo method is developed and successfully applied to a sample deep-penetration shielding problem. The random walk is simulated in most of its parts as in conventional Monte Carlo methods. The probability density functions (pdf's) are expressed in terms of spherical harmonics and are continuous functions in direction cosine and azimuthal angle variables as well as in position coordinates; the energy is discretized in the multigroup approximation. The transport pdf is an unusual exponential kernel strongly dependent on the incident and emergent directions and energies and on the position of the collision site. The method produces the same results obtained with the deterministic method with a very small standard deviation, with as little as 1,000 Contribution particles in both analog and nonabsorption biasing modes and with only a few minutes CPU time

  14. Nuclear level density variation with angular momentum induced shape transition

    International Nuclear Information System (INIS)

    Aggarwal, Mamta

    2016-01-01

    The variation of the nuclear level density (NLD) with excitation energy, and with angular momentum in particular, has been a topic of interest in the recent past. There have been continuous efforts in this direction on the theoretical and experimental fronts, but a conclusive trend in the variation of the nuclear level density parameter with angular momentum has not been established so far. A comprehensive investigation of N=68 isotones around the compound nucleus 119Sb, from neutron-rich 112Ru (Z=44) to neutron-deficient 127Pr (Z=59) nuclei, is presented to understand the angular-momentum-induced variations in the inverse level density parameter and the possible influence of deformation and structural transitions on the variations in NLD

  15. Parity dependence of the nuclear level density at high excitation

    International Nuclear Information System (INIS)

    Rao, B.V.; Agrawal, H.M.

    1995-01-01

    The basic underlying assumption ρ(l+1, J)=ρ(l, J) in the level density function ρ(U, J, π) has been checked on the basis of high quality data available on individual resonance parameters (E_0, Γ_n, J^π) for s- and p-wave neutrons, in contrast to the earlier analysis where information about p-wave resonance parameters was meagre. The missing level estimator based on the partial integration over a Porter-Thomas distribution of neutron reduced widths and the Dyson-Mehta Δ_3 statistic for the level spacing have been used to ascertain that the s- and p-wave resonance level spacings D(0) and D(1) are not in error because of spurious and missing levels. The present work does not validate the tacit assumption ρ(l+1, J)=ρ(l, J) and confirms that the level density depends upon parity at high excitation. The possible implications of the parity dependence of the level density on the results of statistical model calculations of nuclear reaction cross sections as well as on pre-compound emission have been emphasized. (orig.)

  16. Testing of the level density segment of the RIPL

    International Nuclear Information System (INIS)

    Capote, Roberto

    2000-01-01

    A comparison between the RIPL phenomenological state density parameterizations and microscopic state density (SD) codes was performed for nickel and samarium isotopes. All the codes were shown to be complete. More work is needed on the calculation of the collective enhancement of the level densities to improve the currently used phenomenological recipes. It was shown that phenomenological closed formulae for the particle-hole state density fail to describe microscopic calculations for magic nuclei. For deformed nuclei, like Sm-152, the agreement of the Williams closed formulae, considering the Kalbach pairing correction, with microscopic SD calculations was very good. (author)

  17. Properties of 112Cd from the (n,n'γ) reaction: Levels and level densities

    International Nuclear Information System (INIS)

    Garrett, P. E.; Lehmann, H.; Jolie, J.; McGrath, C. A.; Yeh, Minfang; Younes, W.; Yates, S. W.

    2001-01-01

    Levels in 112 Cd have been studied through the (n,n'γ) reaction with monoenergetic neutrons. An extended set of experiments that included excitation functions, γ-ray angular distributions, and γγ coincidence measurements was performed. A total of 375 γ rays were placed in a level scheme comprising 200 levels (of which 238 γ-ray assignments and 58 levels are newly established) up to 4 MeV in excitation. No evidence to support the existence of 47 levels as suggested in previous studies was found, and these have been removed from the level scheme. From the results, a comparison of the level density is made with the constant temperature and back-shifted Fermi gas models. The back-shifted Fermi gas model with the Gilbert-Cameron spin cutoff parameter provided the best overall fit. Without using the neutron resonance information and only fitting the cumulative number of low-lying levels, the level density parameters extracted are a sensitive function of the maximum energy used in the fit

  18. On the study of level density parameters for some deformed light nuclei

    International Nuclear Information System (INIS)

    Sonmezoglu, S.

    2005-01-01

    The nuclear level density, which is the number of energy levels per MeV at an excitation energy Ex, is a characteristic property of every nucleus. Total level densities are among the key quantities in statistical calculations in many fields, such as nuclear physics, astrophysics, spallation neutron measurements, and studies of intermediate-energy heavy-ion collisions. The nuclear level density is an important physical quantity both from the fundamental point of view and for understanding particle and gamma-ray emission in various reactions. In light and heavy deformed nuclei, the gamma-ray energies drop with decreasing spin in a very regular fashion. Level density parameters have usually been used in the investigation of the nuclear level density. The parameter itself changes with excitation energy, depending on both shell effects in the single particle model and different excitation modes in the collective models. In this study, the energy level density parameters of some deformed light nuclei (40Ca, 47Ti, 59Ni, 79Se, 80Br) are determined by using the energy spectrum of the nucleus of interest for different bands. In the calculation of the energy-level density parameters, which depend on the excitation energy of the nuclei studied, a model was considered which relies on the fact that the energy levels of deformed light nuclei, just like those of deformed heavy nuclei, are equidistant, and which relies on collective motions of their nucleons. The present calculation results have been compared with the corresponding experimental and theoretical results. The obtained results are in good agreement with the experimental results

  19. Level density of radioactive doubly-magic nucleus 56Ni

    International Nuclear Information System (INIS)

    Santhosh Kumar, S.; Rengaiyan, R.; Victor Babu, A.; Preetha, P.

    2012-01-01

    In this work the single particle energies are obtained by diagonalising the Nilsson Hamiltonian in the cylindrical basis and are generated up to N = 11 shells for the isotopes of Ni from A = 48-70, emphasizing the three magic nuclei, viz. 48Ni, 56Ni and 68Ni. The statistical quantities like excitation energy, level density parameter and nuclear level density, which play important roles in nuclear structure and nuclear reactions, can be calculated theoretically by means of the statistical or partition function method. Hence the statistical model approach is followed to probe the dynamical properties of the nucleus at the microscopic level

  20. Nuclear level densities with pairing and self-consistent ground-state shell effects

    CERN Document Server

    Arnould, M

    1981-01-01

    Nuclear level density calculations are performed using a model of fermions interacting via the pairing force and a realistic single particle potential. The pairing interaction is treated within the BCS approximation with different pairing strength values. The single particle potentials are derived in the framework of an energy-density formalism which describes self-consistently the ground states of spherical nuclei. These calculations are extended to statically deformed nuclei, whose estimated level densities include rotational band contributions. The theoretical results are compared with various experimental data. In addition, the level densities for several nuclei far from stability are compared with the predictions of a back-shifted Fermi gas model. Such a comparison emphasizes the possible danger of extrapolating to unknown nuclei classical level density formulae whose parameter values are tailored for known nuclei. (41 refs).

  1. Low-pressure phase diagram of crystalline benzene from quantum Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Azadi, Sam, E-mail: s.azadi@ucl.ac.uk [Departments of Physics and Astronomy, University College London, Thomas Young Center, London Centre for Nanotechnology, London WC1E 6BT (United Kingdom); Cohen, R. E. [Extreme Materials Initiative, Geophysical Laboratory, Carnegie Institution for Science, Washington, DC 20015 (United States); Department of Earth- and Environmental Sciences, Ludwig Maximilians Universität, Munich 80333 (Germany); Department of Physics and Astronomy, University College London, London WC1E 6BT (United Kingdom)

    2016-08-14

    We studied the low-pressure (0–10 GPa) phase diagram of crystalline benzene using quantum Monte Carlo and density functional theory (DFT) methods. We performed diffusion quantum Monte Carlo (DMC) calculations to obtain accurate static phase diagrams as benchmarks for modern van der Waals density functionals. Using density functional perturbation theory, we computed the phonon contributions to the free energies. Our DFT enthalpy-pressure phase diagrams indicate that the Pbca and P2₁/c structures are the most stable phases within the studied pressure range. The DMC Gibbs free-energy calculations predict that the room temperature Pbca to P2₁/c phase transition occurs at 2.1(1) GPa. This prediction is consistent with available experimental results at room temperature. Our DMC calculations give 50.6 ± 0.5 kJ/mol for the crystalline benzene lattice energy.

  2. Continuum Level Density in Complex Scaling Method

    International Nuclear Information System (INIS)

    Suzuki, R.; Myo, T.; Kato, K.

    2005-01-01

    A new calculational method of continuum level density (CLD) at unbound energies is studied in the complex scaling method (CSM). It is shown that the CLD can be calculated by employing the discretization of continuum states in the CSM without any smoothing technique

  3. Angular momentum dependence of the nuclear level density parameter

    International Nuclear Information System (INIS)

    Aggarwal, Mamta; Kailas, S.

    2010-01-01

    Dependence of nuclear level density parameter on the angular momentum and temperature is investigated in a theoretical framework using the statistical theory of hot rotating nuclei. The structural effects are incorporated by including shell correction, shape, and deformation. The nuclei around Z≅50 with an excitation energy range of 30 to 40 MeV are considered. The calculations are in good agreement with the experimentally deduced inverse level density parameter values especially for 109 In, 113 Sb, 122 Te, 123 I, and 127 Cs nuclei.

  4. Systematics of nuclear mass and level density formulas

    Energy Technology Data Exchange (ETDEWEB)

    Nakamura, Hisashi [Fuji Electric Co. Ltd., Kawasaki, Kanagawa (Japan)

    1998-03-01

    The phenomenological models of the nuclear mass and level density are closely related to each other; the nuclear ground and excited state properties are described by using parameter systematics of the mass and level density formulas. The main aim of this work is to provide, in an analytical framework, improved energy-dependent shell, pairing and deformation corrections, generalized to the collective enhancement factors, which offer a systematic prescription over a great number of nuclear reaction cross sections. The new formulas are shown to be in close agreement not only with the empirical nuclear mass data but also with the measured slow neutron resonance spacings and the experimental systematics observed in the excitation-energy-dependent properties. (author)

  5. TU-AB-BRC-04: Commissioning of a New MLC Model for the GEPTS Monte Carlo System: A Model Based On the Leaf and Interleaf Effective Density

    Energy Technology Data Exchange (ETDEWEB)

    Chibani, O; Tahanout, F; Ma, C [Fox Chase Cancer Center, Philadelphia, PA (United States)

    2016-06-15

    Purpose: To commission a new MLC model for the GEPTS Monte Carlo system. The model is based on the concept of leaf and interleaf effective densities. Methods: GEPTS is a Monte Carlo system to be used for external beam planning verification. GEPTS incorporates detailed photon and electron transport algorithms (Med. Phys. 29, 2002, 835). A new GEPTS model for the Varian Millennium MLC is presented. The model accounts for: 1) thick (1 cm) and thin (0.5 cm) leaves, 2) tongue-and-groove design, 3) High-Transmission (HT) and Low-Transmission (LT) interleaves, and 4) rounded leaf ends. Leaf (and interleaf) height is set equal to 6 cm. Instead of modeling air gaps, screw holes, and complex leaf heads, "effective densities" are assigned to: 1) thin leaves, 2) thick leaves, 3) HT-, and 4) LT-interleaves. Results: The new MLC model is used to calculate dose profiles for closed-MLC and tongue-and-groove fields at 5 cm depth for 6, 10 and 15 MV Varian beams. Calculations are compared with 1) pin-point ionization chamber transmission ratios and 2) EBT3 radiochromic films. Pinpoint readings were acquired beneath thick and thin leaves, and HT and LT interleaves. The best fit of measured dose profiles was obtained for the following parameters: thick-leaf density = 16.1 g/cc, thin-leaf density = 17.2 g/cc; HT interleaf density = 12.4 g/cc, LT interleaf density = 14.3 g/cc; interleaf thickness = 1.1 mm. Attached figures show comparisons of calculated and measured transmission ratios for the 3 energies. Note this is the only study where transmission profiles are compared with measurements for 3 different energies. Conclusion: The new MLC model reproduces transmission measurements within 0.1%. The next step is to implement the MLC model for real plans and quantify the improvement in dose calculation accuracy gained using this model for IMRT plans with high modulation factors.

  6. Dose Calculation Accuracy of the Monte Carlo Algorithm for CyberKnife Compared with Other Commercially Available Dose Calculation Algorithms

    International Nuclear Information System (INIS)

    Sharma, Subhash; Ott, Joseph; Williams, Jamone; Dickow, Danny

    2011-01-01

    Monte Carlo dose calculation algorithms have the potential for greater accuracy than traditional model-based algorithms. This enhanced accuracy is particularly evident in regions of lateral scatter disequilibrium, which can develop during treatments incorporating small field sizes and low-density tissue. A heterogeneous slab phantom was used to evaluate the accuracy of several commercially available dose calculation algorithms, including Monte Carlo dose calculation for CyberKnife, the Analytical Anisotropic Algorithm and Pencil Beam convolution for the Eclipse planning system, and convolution-superposition for the Xio planning system. The phantom accommodated slabs of varying density; comparisons between planned and measured dose distributions were accomplished with radiochromic film. The Monte Carlo algorithm provided the most accurate comparison between planned and measured dose distributions. In each phantom irradiation, the Monte Carlo predictions resulted in gamma analysis comparisons >97%, using acceptance criteria of 3% dose and 3-mm distance to agreement. In general, the gamma analysis comparisons for the other algorithms were <95%. The Monte Carlo dose calculation algorithm for CyberKnife provides more accurate dose distribution calculations in regions of lateral electron disequilibrium than commercially available model-based algorithms. This is primarily because of the ability of Monte Carlo algorithms to implicitly account for tissue heterogeneities; density scaling functions and/or effective depth correction factors are not required.

  7. Evaluation of an analytic linear Boltzmann transport equation solver for high-density inhomogeneities

    Energy Technology Data Exchange (ETDEWEB)

    Lloyd, S. A. M.; Ansbacher, W. [Department of Physics and Astronomy, University of Victoria, Victoria, British Columbia V8W 3P6 (Canada); Department of Physics and Astronomy, University of Victoria, Victoria, British Columbia V8W 3P6 (Canada) and Department of Medical Physics, British Columbia Cancer Agency-Vancouver Island Centre, Victoria, British Columbia V8R 6V5 (Canada)

    2013-01-15

    Purpose: Acuros external beam (Acuros XB) is a novel dose calculation algorithm implemented through the ECLIPSE treatment planning system. The algorithm finds a deterministic solution to the linear Boltzmann transport equation, the same equation commonly solved stochastically by Monte Carlo methods. This work is an evaluation of Acuros XB, by comparison with Monte Carlo, for dose calculation applications involving high-density materials. Existing non-Monte Carlo clinical dose calculation algorithms, such as the analytic anisotropic algorithm (AAA), do not accurately model dose perturbations due to increased electron scatter within high-density volumes. Methods: Acuros XB, AAA, and EGSnrc based Monte Carlo are used to calculate dose distributions from 18 MV and 6 MV photon beams delivered to a cubic water phantom containing a rectangular high density (4.0-8.0 g/cm³) volume at its center. The algorithms are also used to recalculate a clinical prostate treatment plan involving a unilateral hip prosthesis, originally evaluated using AAA. These results are compared graphically and numerically using gamma-index analysis. Radio-chromic film measurements are presented to augment Monte Carlo and Acuros XB dose perturbation data. Results: Using a 2% and 1 mm gamma-analysis, between 91.3% and 96.8% of Acuros XB dose voxels containing greater than 50% the normalized dose were in agreement with Monte Carlo data for virtual phantoms involving 18 MV and 6 MV photons, stainless steel and titanium alloy implants and for on-axis and oblique field delivery. A similar gamma-analysis of AAA against Monte Carlo data showed between 80.8% and 87.3% agreement. Comparing Acuros XB and AAA evaluations of a clinical prostate patient plan involving a unilateral hip prosthesis, Acuros XB showed good overall agreement with Monte Carlo while AAA underestimated dose on the upstream medial surface of the prosthesis due to electron scatter from the high-density material. Film measurements

  8. A Monte Carlo-adjusted goodness-of-fit test for parametric models describing spatial point patterns

    KAUST Repository

    Dao, Ngocanh

    2014-04-03

    Assessing the goodness-of-fit (GOF) for intricate parametric spatial point process models is important for many application fields. When the probability density of the statistic of the GOF test is intractable, a commonly used procedure is the Monte Carlo GOF test. Additionally, if the data comprise a single dataset, a popular version of the test plugs a parameter estimate in the hypothesized parametric model to generate data for the Monte Carlo GOF test. In this case, the test is invalid because the resulting empirical level does not reach the nominal level. In this article, we propose a method consisting of nested Monte Carlo simulations which has the following advantages: the bias of the resulting empirical level of the test is eliminated, hence the empirical levels can always reach the nominal level, and information about inhomogeneity of the data can be provided. We theoretically justify our testing procedure using Taylor expansions and demonstrate that it is correctly sized through various simulation studies. In our first data application, we discover, in agreement with Illian et al., that Phlebocarya filifolia plants near Perth, Australia, can follow a homogeneous Poisson clustered process that provides insight into the propagation mechanism of these plants. In our second data application, we find, in contrast to Diggle, that a pairwise interaction model provides a good fit to the micro-anatomy data of amacrine cells designed for analyzing the developmental growth of immature retina cells in rabbits. This article has supplementary material online. © 2013 American Statistical Association, Institute of Mathematical Statistics, and Interface Foundation of North America.
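
    For contrast with the nested procedure proposed in the article, the classical single-level (plug-in) Monte Carlo GOF test can be sketched in a few lines of Python; the nearest-neighbour statistic, the unit-square window and the hypothesis of complete spatial randomness are illustrative choices, not the authors' setup.

    import numpy as np

    rng = np.random.default_rng(0)

    def mean_nn_distance(points):
        """Mean nearest-neighbour distance of a 2-D point pattern."""
        d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        return d.min(axis=1).mean()

    def mc_gof_pvalue(data, n_sim=199):
        """Single-level Monte Carlo GOF test of complete spatial randomness on
        the unit square; it is this kind of test whose empirical level the
        nested construction of the article corrects."""
        n = len(data)
        t_obs = mean_nn_distance(data)
        t_sim = np.array([mean_nn_distance(rng.random((n, 2))) for _ in range(n_sim)])
        p_lower = (1 + np.sum(t_sim <= t_obs)) / (n_sim + 1)
        p_upper = (1 + np.sum(t_sim >= t_obs)) / (n_sim + 1)
        return min(1.0, 2 * min(p_lower, p_upper))   # two-sided Monte Carlo p-value

    # a clustered pattern should be flagged, a Poisson pattern should not
    poisson = rng.random((100, 2))
    parents = rng.random((20, 1, 2))
    clustered = np.clip(parents + 0.02 * rng.normal(size=(20, 5, 2)), 0, 1).reshape(-1, 2)
    print(mc_gof_pvalue(poisson), mc_gof_pvalue(clustered))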

  9. Advanced Multilevel Monte Carlo Methods

    KAUST Repository

    Jasra, Ajay

    2017-04-24

    This article reviews the application of advanced Monte Carlo techniques in the context of Multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations which can be biased in some sense, for instance, by using the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations which become progressively more accurate and more expensive. Using a telescoping representation of the most accurate approximation, the method is able to reduce the computational cost for a given level of error versus i.i.d. sampling from this latter approximation. All of these ideas originated for cases where exact sampling from couples in the hierarchy is possible. This article considers the case where such exact sampling is not currently possible. We consider Markov chain Monte Carlo and sequential Monte Carlo methods which have been introduced in the literature and we describe different strategies which facilitate the application of MLMC within these methods.

  10. Advanced Multilevel Monte Carlo Methods

    KAUST Repository

    Jasra, Ajay; Law, Kody; Suciu, Carina

    2017-01-01

    This article reviews the application of advanced Monte Carlo techniques in the context of Multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations which can be biased in some sense, for instance, by using the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations which become progressively more accurate and more expensive. Using a telescoping representation of the most accurate approximation, the method is able to reduce the computational cost for a given level of error versus i.i.d. sampling from this latter approximation. All of these ideas originated for cases where exact sampling from couples in the hierarchy is possible. This article considers the case where such exact sampling is not currently possible. We consider Markov chain Monte Carlo and sequential Monte Carlo methods which have been introduced in the literature and we describe different strategies which facilitate the application of MLMC within these methods.
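
    A bare-bones Python sketch of the telescoping estimator described above, for E[X_T] of a geometric Brownian motion discretized by Euler-Maruyama, with the coarse and fine paths of each level driven by the same Brownian increments. All parameters and the fixed per-level sample counts are illustrative; the article itself addresses the harder case where such exact coupled sampling is unavailable.

    import numpy as np

    rng = np.random.default_rng(0)

    # dX = mu*X dt + sig*X dW, X_0 = 1; exact answer E[X_T] = exp(mu*T)
    mu, sig, T = 0.05, 0.2, 1.0

    def level_estimator(level, n_samples, m=4):
        """Sample mean of P_l - P_{l-1}, with coupled coarse/fine Euler paths."""
        nf = m ** level                          # fine time steps on this level
        dt_f = T / nf
        dW = rng.normal(scale=np.sqrt(dt_f), size=(n_samples, nf))
        xf = np.ones(n_samples)
        for i in range(nf):
            xf += mu * xf * dt_f + sig * xf * dW[:, i]
        if level == 0:
            return xf.mean()
        nc = nf // m                             # coarse steps (previous level)
        dt_c = T / nc
        dWc = dW.reshape(n_samples, nc, m).sum(axis=2)   # aggregated increments
        xc = np.ones(n_samples)
        for i in range(nc):
            xc += mu * xc * dt_c + sig * xc * dWc[:, i]
        return (xf - xc).mean()

    # telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]; cheap levels get more samples
    samples_per_level = [200_000, 50_000, 12_000, 3_000]
    estimate = sum(level_estimator(l, n) for l, n in enumerate(samples_per_level))
    print(estimate, np.exp(mu * T))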

  11. LCG Monte-Carlo Data Base

    CERN Document Server

    Bartalini, P.; Kryukov, A.; Selyuzhenkov, Ilya V.; Sherstnev, A.; Vologdin, A.

    2004-01-01

    We present the Monte-Carlo events Data Base (MCDB) project and its development plans. MCDB facilitates communication between authors of Monte-Carlo generators and experimental users. It also provides convenient book-keeping and easy access to generator-level samples. The first release of MCDB is now operational for the CMS collaboration. In this paper we review the main ideas behind MCDB and discuss future plans to develop this Data Base further within the CERN LCG framework.

  12. Two proposed convergence criteria for Monte Carlo solutions

    International Nuclear Information System (INIS)

    Forster, R.A.; Pederson, S.P.; Booth, T.E.

    1992-01-01

    The central limit theorem (CLT) can be applied to a Monte Carlo solution if two requirements are satisfied: (1) The random variable has a finite mean and a finite variance; and (2) the number N of independent observations grows large. When these two conditions are satisfied, a confidence interval (CI) based on the normal distribution with a specified coverage probability can be formed. The first requirement is generally satisfied by the knowledge of the Monte Carlo tally being used. The Monte Carlo practitioner has a limited number of marginal methods to assess the fulfillment of the second requirement, such as statistical error reduction proportional to 1/√N with error magnitude guidelines. Two proposed methods are discussed in this paper to assist in deciding if N is large enough: estimating the relative variance of the variance (VOV) and examining the empirical history score probability density function (pdf)
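
    The first proposed check, the relative variance of the variance, can be estimated directly from the individual history scores; the Python sketch below uses the usual sample expression and invented history-score data (most histories scoring almost nothing, a few scoring heavily), not output from a real transport code.

    import numpy as np

    def relative_vov(scores):
        """Relative variance of the variance of the sample mean,
        VOV = sum((x - xbar)^4) / (sum((x - xbar)^2))^2 - 1/N;
        a commonly quoted acceptance guideline is VOV < 0.1."""
        x = np.asarray(scores, dtype=float)
        d = x - x.mean()
        return np.sum(d**4) / np.sum(d**2) ** 2 - 1.0 / x.size

    rng = np.random.default_rng(0)
    n = 100_000
    # illustrative history scores: rare large contributions on top of many tiny ones
    scores = np.where(rng.random(n) < 1e-3,
                      rng.exponential(50.0, n),
                      rng.exponential(0.01, n))
    print(relative_vov(scores))
    # the second proposed check would histogram `scores` and inspect the
    # high-score tail of the empirical history-score pdf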

  13. A New Approach to Monte Carlo Simulations in Statistical Physics

    Science.gov (United States)

    Landau, David P.

    2002-08-01

    Monte Carlo simulations [1] have become a powerful tool for the study of diverse problems in statistical/condensed matter physics. Standard methods sample the probability distribution for the states of the system, most often in the canonical ensemble, and over the past several decades enormous improvements have been made in performance. Nonetheless, difficulties arise near phase transitions, due to critical slowing down near 2nd order transitions and to metastability near 1st order transitions, and these complications limit the applicability of the method. We shall describe a new Monte Carlo approach [2] that uses a random walk in energy space to determine the density of states directly. Once the density of states is known, all thermodynamic properties can be calculated. This approach can be extended to multi-dimensional parameter spaces and should be effective for systems with complex energy landscapes, e.g., spin glasses, protein folding models, etc. Generalizations should produce a broadly applicable optimization tool. 1. A Guide to Monte Carlo Simulations in Statistical Physics, D. P. Landau and K. Binder (Cambridge U. Press, Cambridge, 2000). 2. Fugao Wang and D. P. Landau, Phys. Rev. Lett. 86, 2050 (2001); Phys. Rev. E64, 056101-1 (2001).
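
    The random walk in energy space referred to above is the Wang-Landau algorithm of Ref. [2]. A compact Python sketch for a small 2-D Ising model follows; the lattice size, flatness criterion and stopping value of the modification factor are chosen only to keep the example short, and the two unreachable energy levels next to the ground states are simply left unvisited.

    import numpy as np

    rng = np.random.default_rng(0)
    L = 6                                            # small periodic 2-D Ising lattice
    spins = rng.choice([-1, 1], size=(L, L))

    def total_energy(s):
        return int(-np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1))))

    def flip_dE(s, i, j):
        nb = s[(i + 1) % L, j] + s[(i - 1) % L, j] + s[i, (j + 1) % L] + s[i, (j - 1) % L]
        return int(2 * s[i, j] * nb)

    energies = list(range(-2 * L * L, 2 * L * L + 1, 4))
    index = {E: k for k, E in enumerate(energies)}
    lng = np.zeros(len(energies))                    # running estimate of ln g(E)
    lnf = 1.0                                        # modification factor, f -> sqrt(f)
    E = total_energy(spins)

    while lnf > 1e-3:                                # production runs go much lower (~1e-8)
        hist = np.zeros(len(energies))
        for _ in range(100_000):
            i, j = rng.integers(L, size=2)
            dE = flip_dE(spins, i, j)
            # accept with min(1, g(E_old)/g(E_new)) so the walk becomes flat in energy
            if np.log(rng.random()) < lng[index[E]] - lng[index[E + dE]]:
                spins[i, j] *= -1
                E += dE
            lng[index[E]] += lnf
            hist[index[E]] += 1
        visited = hist > 0
        if hist[visited].min() > 0.8 * hist[visited].mean():   # crude flatness check
            lnf /= 2.0

    # normalise so that ln g(ground state) = ln 2; thermodynamic averages then follow from g(E)
    print(np.round(lng[visited] - lng[0] + np.log(2), 2))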

  14. On investigating the structure of hadrons: Lattice Monte Carlo measurements of colour magnetic and electric fields and the topological charge density inside glueballs

    International Nuclear Information System (INIS)

    Ishikawa, K.; Schierholz, G.; Teper, M.; Schneider, H.

    1982-12-01

    We present some techniques for elucidating hadronic structure via lattice Monte Carlo calculations. Applying these techniques, we measure the fluctuations of colour magnetic and electric fields as well as the topological charge density inside and outside the lowest lying 0+ and 2+ glueballs in the SU(2) non-abelian lattice gauge theory. This gives us a detailed picture of the glueball structure. We also obtain, as a by-product, a reliable estimate of the gluon condensate ⟨(α_s/π)GG⟩ and an estimate of the 0− glueball mass which agrees with our previous estimates. (orig.)

  15. Monte Carlo Numerical Models for Nuclear Logging Applications

    Directory of Open Access Journals (Sweden)

    Fusheng Li

    2012-06-01

    Full Text Available Nuclear logging is one of the most important logging services provided by many oil service companies. The main parameters of interest are formation porosity, bulk density, and natural radiation. Other services are also provided using complex nuclear logging tools, such as formation lithology/mineralogy, etc. Some parameters can be measured by using neutron logging tools and some can only be measured by using a gamma ray tool. To understand the response of nuclear logging tools, the neutron transport/diffusion theory and photon diffusion theory are needed. Unfortunately, for most cases there are no analytical answers if complex tool geometry is involved. For many years, Monte Carlo numerical models have been used by nuclear scientists in the well logging industry to address these challenges. The models have been widely employed in the optimization of nuclear logging tool design, and the development of interpretation methods for nuclear logs. They have also been used to predict the response of nuclear logging systems for forward simulation problems. In this case, the system parameters including geometry, materials and nuclear sources, etc., are pre-defined and the transport and interactions of nuclear particles (such as neutrons, photons and/or electrons) in the regions of interest are simulated according to detailed nuclear physics theory and their nuclear cross-section data (probability of interacting). Then the deposited energies of particles entering the detectors are recorded and tallied and the tool responses to such a scenario are generated. A general-purpose code named Monte Carlo N-Particle (MCNP) has been the industry standard for some time. In this paper, we briefly introduce the fundamental principles of Monte Carlo numerical modeling and review the physics of MCNP. Some of the latest developments of Monte Carlo models are also reviewed. A variety of examples are presented to illustrate the uses of Monte Carlo numerical models

  16. Variational Monte Carlo calculations of few-body nuclei

    International Nuclear Information System (INIS)

    Wiringa, R.B.

    1986-01-01

    The variational Monte Carlo method is described. Results for the binding energies, density distributions, momentum distributions, and static longitudinal structure functions of the 3H, 3He, and 4He ground states, and for the energies of the low-lying scattering states in 4He are presented. 25 refs., 3 figs

  17. On the use of Bayesian Monte-Carlo in evaluation of nuclear data

    Science.gov (United States)

    De Saint Jean, Cyrille; Archier, Pascal; Privas, Edwin; Noguere, Gilles

    2017-09-01

    As model parameters, necessary ingredients of theoretical models, are not always predicted by theory, a formal mathematical framework associated with the evaluation work is needed to obtain the best set of parameters (resonance parameters, optical models, fission barriers, average widths, multigroup cross sections) with Bayesian statistical inference by comparing theory to experiment. The formal rule related to this methodology is to estimate the posterior probability density function of a set of parameters by solving an equation of the following type: pdf(posterior) ∼ pdf(prior) × a likelihood function. A fitting procedure can be seen as an estimation of the posterior probability density of a set of parameters (referred to as x→) knowing prior information on these parameters and a likelihood which gives the probability density function of observing a data set knowing x→. To solve this problem, two major paths could be taken: add approximations and hypotheses and obtain an equation to be solved numerically (minimum of a cost function or Generalized Least Squares method, referred to as GLS), or use Monte-Carlo sampling of all prior distributions and estimate the final posterior distribution. Monte Carlo methods are a natural solution for Bayesian inference problems. They avoid approximations (existing in the traditional adjustment procedure based on chi-square minimization) and propose alternatives in the choice of probability density distributions for priors and likelihoods. This paper will propose the use of what we are calling Bayesian Monte Carlo (referred to as BMC in the rest of the manuscript) in the whole energy range from the thermal and resonance regions to the continuum for all nuclear reaction models at these energies. Algorithms will be presented based on Monte-Carlo sampling and Markov chains. The objectives of BMC are to propose a reference calculation for validating the GLS calculations and approximations, to test probability density distribution effects and to provide the

  18. On the use of Bayesian Monte-Carlo in evaluation of nuclear data

    Directory of Open Access Journals (Sweden)

    De Saint Jean Cyrille

    2017-01-01

    Full Text Available As model parameters, necessary ingredients of theoretical models, are not always predicted by theory, a formal mathematical framework associated with the evaluation work is needed to obtain the best set of parameters (resonance parameters, optical models, fission barriers, average widths, multigroup cross sections) with Bayesian statistical inference by comparing theory to experiment. The formal rule related to this methodology is to estimate the posterior probability density function of a set of parameters by solving an equation of the following type: pdf(posterior) ∼ pdf(prior) × a likelihood function. A fitting procedure can be seen as an estimation of the posterior probability density of a set of parameters (referred to as x→) knowing prior information on these parameters and a likelihood which gives the probability density function of observing a data set knowing x→. To solve this problem, two major paths could be taken: add approximations and hypotheses and obtain an equation to be solved numerically (minimum of a cost function or Generalized Least Squares method, referred to as GLS), or use Monte-Carlo sampling of all prior distributions and estimate the final posterior distribution. Monte Carlo methods are a natural solution for Bayesian inference problems. They avoid approximations (existing in the traditional adjustment procedure based on chi-square minimization) and propose alternatives in the choice of probability density distributions for priors and likelihoods. This paper will propose the use of what we are calling Bayesian Monte Carlo (referred to as BMC in the rest of the manuscript) in the whole energy range from the thermal and resonance regions to the continuum for all nuclear reaction models at these energies. Algorithms will be presented based on Monte-Carlo sampling and Markov chains. The objectives of BMC are to propose a reference calculation for validating the GLS calculations and approximations, to test probability density distribution effects and to
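
    A minimal sketch of the sampling idea behind BMC: propose parameter values, weight them by prior times likelihood, and accumulate the posterior with a Markov chain. The one-parameter "model", the pseudo-data and the prior in the Python sketch below are invented purely for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    def model(x, energies):
        # hypothetical one-parameter cross-section shape, not a real reaction model
        return x / np.sqrt(energies)

    energies = np.array([1.0, 2.0, 5.0, 10.0])     # illustrative energy grid
    data = np.array([9.8, 7.1, 4.4, 3.2])          # "measured" values
    sigma = 0.3 * np.ones_like(data)               # experimental uncertainties

    def log_likelihood(x):
        r = (data - model(x, energies)) / sigma
        return -0.5 * np.sum(r * r)

    def log_prior(x):                              # prior knowledge on the parameter
        return -0.5 * ((x - 9.0) / 2.0) ** 2

    # Metropolis Markov chain sampling pdf(posterior) ~ pdf(prior) x likelihood
    x, chain = 9.0, []
    for _ in range(50_000):
        xp = x + 0.3 * rng.normal()
        if np.log(rng.random()) < (log_prior(xp) + log_likelihood(xp)) - (log_prior(x) + log_likelihood(x)):
            x = xp
        chain.append(x)

    posterior = np.array(chain[5_000:])            # discard burn-in
    print(posterior.mean(), posterior.std())       # posterior mean and uncertainty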

  19. Monte Carlo Simulation of Influence of Input Parameters Uncertainty on Output Data

    International Nuclear Information System (INIS)

    Sobek, Lukas

    2010-01-01

    Input parameters of a complex system in probabilistic simulation are treated by means of probability density functions (PDF). The results of the simulation therefore also have a probabilistic character. Monte Carlo simulation is widely used to obtain predictions concerning the probability of risk. The Monte Carlo method was used to calculate histograms of the PDF of the release rate resulting from the uncertainty in the distribution coefficients of the radionuclides 135 Cs and 235 U.
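
    A toy sketch of the propagation idea described above: an uncertain distribution coefficient (Kd) is sampled from an assumed lognormal PDF and pushed through a simple retardation-style relation to a release-rate surrogate, whose histogram is then inspected. The PDF, the relation and all numbers below are illustrative assumptions, not the model of the report.

    # Monte Carlo propagation of input uncertainty to an output histogram (toy).
    import numpy as np

    rng = np.random.default_rng(0)
    n_samples = 100_000

    kd = rng.lognormal(mean=np.log(0.5), sigma=0.8, size=n_samples)  # m3/kg, assumed
    rho_b, porosity, q = 1800.0, 0.3, 1e-3      # bulk density, porosity, flux (assumed)

    retardation = 1.0 + rho_b / porosity * kd   # simple sorption retardation factor
    release_rate = q / retardation              # surrogate for the release rate

    hist, edges = np.histogram(np.log10(release_rate), bins=40)
    print("5th / 50th / 95th percentiles of the release rate:",
          np.percentile(release_rate, [5, 50, 95]))
    print("first histogram bins (counts):", hist[:10], "...")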

  20. Monte Carlo simulation: tool for the calibration in analytical determination of radionuclides; Simulacion Monte Carlo: herramienta para la calibracion en determinaciones analiticas de radionucleidos

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez, Jorge A. Carrazana; Ferrera, Eduardo A. Capote; Gomez, Isis M. Fernandez; Castro, Gloria V. Rodriguez; Ricardo, Niury Martinez, E-mail: cphr@cphr.edu.cu [Centro de Proteccion e Higiene de las Radiaciones (CPHR), La Habana (Cuba)

    2013-07-01

    This work shows how the traceability of the analytical determinations is established using this calibration method. It highlights the advantages offered by Monte Carlo simulation for the application of corrections for differences in chemical composition, density and height of the samples analyzed. Likewise, the results obtained by the LVRA in two exercises organized by the International Atomic Energy Agency (IAEA) are presented. In these exercises (an intercomparison and a proficiency test), all reported analytical results were obtained from efficiency calibrations by Monte Carlo simulation using the DETEFF program.

  1. Density-density functionals and effective potentials in many-body electronic structure calculations

    International Nuclear Information System (INIS)

    Reboredo, Fernando A.; Kent, Paul R.

    2008-01-01

    We demonstrate the existence of different density-density functionals designed to retain selected properties of the many-body ground state in a non-interacting solution starting from the standard density functional theory ground state. We focus on diffusion quantum Monte Carlo applications that require trial wave functions with optimal Fermion nodes. The theory is extensible and can be used to understand current practices in several electronic structure methods within a generalized density functional framework. The theory justifies and stimulates the search for optimal empirical density functionals and effective potentials for accurate calculations of the properties of real materials, but also cautions about the limits of their applicability. The concepts are tested and validated with a near-analytic model.

  2. The Influence of Decreased Levels of High Density Lipoprotein ...

    African Journals Online (AJOL)

    Background: Changes in lipoprotein levels in sickle cell disease (SCD) patients are well-known, but the physiological ramifications of the low levels observed have not been entirely resolved. Aim: The aim of this study is to evaluate the impact of decreased levels of high density lipoprotein cholesterol (HDL-c) on ...

  3. Efficiency of rejection-free methods for dynamic Monte Carlo studies of off-lattice interacting particles

    KAUST Repository

    Guerra, Marta L.; Novotny, M. A.; Watanabe, Hiroshi; Ito, Nobuyasu

    2009-01-01

    We calculate the efficiency of a rejection-free dynamic Monte Carlo method for d-dimensional off-lattice homogeneous particles interacting through a repulsive power-law potential r^(-p). Theoretically we find the algorithmic efficiency in the limit of low temperatures and/or high densities is asymptotically proportional to ρ^((p+2)/2) T^(-d/2), with the particle density ρ and the temperature T. Dynamic Monte Carlo simulations are performed in one-, two-, and three-dimensional systems with different powers p, and the results agree with the theoretical predictions. © 2009 The American Physical Society.

  4. Efficiency of rejection-free methods for dynamic Monte Carlo studies of off-lattice interacting particles

    KAUST Repository

    Guerra, Marta L.

    2009-02-23

    We calculate the efficiency of a rejection-free dynamic Monte Carlo method for d-dimensional off-lattice homogeneous particles interacting through a repulsive power-law potential r^(-p). Theoretically we find the algorithmic efficiency in the limit of low temperatures and/or high densities is asymptotically proportional to ρ^((p+2)/2) T^(-d/2), with the particle density ρ and the temperature T. Dynamic Monte Carlo simulations are performed in one-, two-, and three-dimensional systems with different powers p, and the results agree with the theoretical predictions. © 2009 The American Physical Society.

  5. A flexible coupling scheme for Monte Carlo and thermal-hydraulics codes

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. Eduard, E-mail: J.E.Hoogenboom@tudelft.nl [Delft University of Technology (Netherlands); Ivanov, Aleksandar; Sanchez, Victor, E-mail: Aleksandar.Ivanov@kit.edu, E-mail: Victor.Sanchez@kit.edu [Karlsruhe Institute of Technology, Institute of Neutron Physics and Reactor Technology, Eggenstein-Leopoldshafen (Germany); Diop, Cheikh, E-mail: Cheikh.Diop@cea.fr [CEA/DEN/DANS/DM2S/SERMA, Commissariat a l' Energie Atomique, Gif-sur-Yvette (France)

    2011-07-01

    A coupling scheme between a Monte Carlo code and a thermal-hydraulics code is being developed within the European NURISP project for comprehensive and validated reactor analysis. The scheme is flexible, as it allows different Monte Carlo codes and different thermal-hydraulics codes to be used. At present the MCNP and TRIPOLI4 Monte Carlo codes and the FLICA4 and SubChanFlow thermal-hydraulics codes can be used. For all these codes only an original executable is necessary. A Python script drives the iterations between the Monte Carlo and thermal-hydraulics calculations. It also calls a conversion program to merge a master input file for the Monte Carlo code with the appropriate temperature and coolant density data from the thermal-hydraulics calculation. Likewise, it calls another conversion program to merge a master input file for the thermal-hydraulics code with the power distribution data from the Monte Carlo calculation. Special attention is given to the neutron cross section data for the various required temperatures in the Monte Carlo calculation. Results are shown for an infinite lattice of PWR fuel pin cells and a 3 x 3 BWR fuel pin cell cluster. Various possibilities for further improvement and optimization of the coupling system are discussed. (author)
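
    A schematic sketch of the kind of driver loop described above. The run_* functions are placeholders (trivial stand-ins so the script executes); in the real scheme they would merge a master input file with the latest field data and launch the external Monte Carlo or thermal-hydraulics executable. Nothing below is the actual NURISP tooling.

    # Picard-style iteration between a (fake) Monte Carlo power solver and a
    # (fake) thermal-hydraulics solver, with a simple convergence test.
    import copy

    N_CELLS = 10

    def run_monte_carlo(temperatures, densities):
        # placeholder: would write the merged MC input and call the MC code
        return [1.0 + 0.01 * t for t in temperatures]           # fake power per cell

    def run_thermal_hydraulics(power):
        # placeholder: would write the merged TH input and call the TH code
        temps = [550.0 + 20.0 * p for p in power]                # fake coolant temps (K)
        dens = [0.75 - 1e-4 * (t - 550.0) for t in temps]        # fake densities (g/cm3)
        return temps, dens

    temps = [550.0] * N_CELLS
    dens = [0.75] * N_CELLS
    power_old = [0.0] * N_CELLS

    for iteration in range(20):
        power = run_monte_carlo(temps, dens)
        temps, dens = run_thermal_hydraulics(power)
        if max(abs(p - q) for p, q in zip(power, power_old)) < 1e-4:
            print(f"converged after {iteration + 1} iterations")
            break
        power_old = copy.deepcopy(power)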

  6. A flexible coupling scheme for Monte Carlo and thermal-hydraulics codes

    International Nuclear Information System (INIS)

    Hoogenboom, J. Eduard; Ivanov, Aleksandar; Sanchez, Victor; Diop, Cheikh

    2011-01-01

    A coupling scheme between a Monte Carlo code and a thermal-hydraulics code is being developed within the European NURISP project for comprehensive and validated reactor analysis. The scheme is flexible, as it allows different Monte Carlo codes and different thermal-hydraulics codes to be used. At present the MCNP and TRIPOLI4 Monte Carlo codes and the FLICA4 and SubChanFlow thermal-hydraulics codes can be used. For all these codes only an original executable is necessary. A Python script drives the iterations between the Monte Carlo and thermal-hydraulics calculations. It also calls a conversion program to merge a master input file for the Monte Carlo code with the appropriate temperature and coolant density data from the thermal-hydraulics calculation. Likewise, it calls another conversion program to merge a master input file for the thermal-hydraulics code with the power distribution data from the Monte Carlo calculation. Special attention is given to the neutron cross section data for the various required temperatures in the Monte Carlo calculation. Results are shown for an infinite lattice of PWR fuel pin cells and a 3 x 3 BWR fuel pin cell cluster. Various possibilities for further improvement and optimization of the coupling system are discussed. (author)

  7. Monte Carlo simulation for ion-molecule collisions at intermediate velocity

    International Nuclear Information System (INIS)

    Kadhane, U R; Mishra, P M; Rajput, J; Safvan, C P; Vig, S

    2015-01-01

    Estimation of the electronic energy loss distribution is performed under a local density distribution using Monte Carlo simulations. These results are used for comparison with the experimental results of proton-polycyclic aromatic hydrocarbon (PAH) and proton-nucleobase interactions in intermediate-velocity collisions. (paper)

  8. Variation of level density parameter with angular momentum in 119Sb

    International Nuclear Information System (INIS)

    Aggarwal, Mamta; Kailas, S.

    2015-01-01

    Nuclear level density (NLD), a basic ingredient of the statistical model, has been a subject of interest for decades, as it plays an important role in the understanding of a wide variety of nuclear reactions. There have been various efforts towards the precise determination of the NLD and the study of its dependence on excitation energy and angular momentum, which is crucial for the determination of cross-sections. Here we report results of theoretical calculations in a microscopic framework aimed at understanding the experimental results on the inverse level density parameter (k) extracted for different angular momentum regions of 119 Sb, corresponding to different γ-ray multiplicities, by comparing the experimental neutron energy spectra with statistical model predictions; an increase in the level density with increasing angular momentum is predicted. The dependence of the NLD and of the neutron emission spectra on temperature and spin was studied in our earlier works, where the influence of structural transitions driven by angular momentum and temperature on the level density of states and the neutron emission probability was shown.

  9. PyMercury: Interactive Python for the Mercury Monte Carlo Particle Transport Code

    International Nuclear Information System (INIS)

    Iandola, F.N.; O'Brien, M.J.; Procassini, R.J.

    2010-01-01

    Monte Carlo particle transport applications are often written in low-level languages (C/C++) for optimal performance on clusters and supercomputers. However, this development approach often sacrifices straightforward usability and testing in the interest of fast application performance. To improve usability, some high-performance computing applications employ mixed-language programming with high-level and low-level languages. In this study, we consider the benefits of incorporating an interactive Python interface into a Monte Carlo application. With PyMercury, a new Python extension to the Mercury general-purpose Monte Carlo particle transport code, we improve application usability without diminishing performance. In two case studies, we illustrate how PyMercury improves usability and simplifies testing and validation in a Monte Carlo application. In short, PyMercury demonstrates the value of interactive Python for Monte Carlo particle transport applications. In the future, we expect interactive Python to play an increasingly significant role in Monte Carlo usage and testing.

  10. Calculations on the vibrational level density in highly excited formaldehyde

    International Nuclear Information System (INIS)

    Rashev, Svetoslav; Moule, David C.

    2003-01-01

    The object of the present work is to develop a model that provides realistic estimates of the vibrational level density in polyatomic molecules in a given electronic state at very high (chemically relevant) vibrational excitation energies. For S 0 formaldehyde (D 2 CO), acetylene, and a number of triatomics, the estimates using conventional spectroscopic formulas have yielded densities at the dissociation threshold very much lower than the experimentally measured values. In the present work we have derived a general formula for the vibrational energy levels of a polyatomic molecule, which is a generalization of the conventional Dunham spectroscopic expansion. Calculations were performed on the vibrational level density in S 0 D 2 CO, H 2 C 2 , and NO 2 at excitation energies in the vicinity of the dissociation limit, using the newly derived formula. The results from the calculations are in reasonable agreement with the experimentally measured data.

  11. Variational Monte Carlo calculations of few-body nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Wiringa, R.B.

    1986-01-01

    The variational Monte Carlo method is described. Results for the binding energies, density distributions, momentum distributions, and static longitudinal structure functions of the 3 H, 3 He, and 4 He ground states, and for the energies of the low-lying scattering states in 4 He, are presented. 25 refs., 3 figs.

  12. Monte-Carlo simulation of crystallographical pore growth in III-V-semiconductors

    International Nuclear Information System (INIS)

    Leisner, Malte; Carstensen, Juergen; Foell, Helmut

    2011-01-01

    The growth of crystallographical pores in III-V-semiconductors can be understood in the framework of a simple model, which is based on the assumption that the branching of pores is proportional to the current density at the pore tips. The stochastic nature of this model allows its implementation into a three-dimensional Monte-Carlo-simulation of pore growth. The simulation is able to reproduce the experimentally observed crysto pore structures in III-V-semiconductors in full quantitative detail. The different branching probabilities for different semiconductors, as well as doping levels, can be deduced from the specific passivation behavior of the semiconductor-electrolyte-interface at the pore tips.

  13. Subtle Monte Carlo Updates in Dense Molecular Systems

    DEFF Research Database (Denmark)

    Bottaro, Sandro; Boomsma, Wouter; Johansson, Kristoffer E.

    2012-01-01

    Although Markov chain Monte Carlo (MC) simulation is a potentially powerful approach for exploring conformational space, it has been unable to compete with molecular dynamics (MD) in the analysis of high density structural states, such as the native state of globular proteins. Here, we introduce...... as correlations in a multivariate Gaussian distribution. We demonstrate that our method reproduces structural variation in proteins with greater efficiency than current state-of-the-art Monte Carlo methods and has real-time simulation performance on par with molecular dynamics simulations. The presented results...... suggest our method as a valuable tool in the study of molecules in atomic detail, offering a potential alternative to molecular dynamics for probing long time-scale conformational transitions....

  14. Current and future applications of Monte Carlo

    International Nuclear Information System (INIS)

    Zaidi, H.

    2003-01-01

    Full text: The use of radionuclides in medicine has a long history and encompasses a large area of applications including diagnosis and radiation treatment of cancer patients using either external or radionuclide radiotherapy. The 'Monte Carlo method' describes a very broad area of science, in which many processes, physical systems, and phenomena are simulated by statistical methods employing random numbers. The general idea of Monte Carlo analysis is to create a model which is as similar as possible to the real physical system of interest, and to create interactions within that system based on known probabilities of occurrence, with random sampling of the probability density functions (pdfs). As the number of individual events (called 'histories') is increased, the quality of the reported average behavior of the system improves, meaning that the statistical uncertainty decreases. The use of the Monte Carlo method to simulate radiation transport has become the most accurate means of predicting absorbed dose distributions and other quantities of interest in the radiation treatment of cancer patients using either external or radionuclide radiotherapy. The same trend has occurred for the estimation of the absorbed dose in diagnostic procedures using radionuclides, as well as the assessment of image quality and quantitative accuracy of radionuclide imaging. As a consequence of this generalized use, many questions are being raised, primarily about the need and potential of Monte Carlo techniques, but also about how accurate they really are, what it would take to apply them clinically and make them available widely to the nuclear medicine community at large. Many of these questions will be answered when Monte Carlo techniques are implemented and used for more routine calculations and for in-depth investigations. In this paper, the conceptual role of the Monte Carlo method is briefly introduced and followed by a survey of its different applications in diagnostic and therapeutic

  15. Monte Carlo methods and models in finance and insurance

    CERN Document Server

    Korn, Ralf; Kroisandt, Gerald

    2010-01-01

    Offering a unique balance between applications and calculations, Monte Carlo Methods and Models in Finance and Insurance incorporates the application background of finance and insurance with the theory and applications of Monte Carlo methods. It presents recent methods and algorithms, including the multilevel Monte Carlo method, the statistical Romberg method, and the Heath-Platen estimator, as well as recent financial and actuarial models, such as the Cheyette and dynamic mortality models. The authors separately discuss Monte Carlo techniques, stochastic process basics, and the theoretical background and intuition behind financial and actuarial mathematics, before bringing the topics together to apply the Monte Carlo methods to areas of finance and insurance. This allows for the easy identification of standard Monte Carlo tools and for a detailed focus on the main principles of financial and insurance mathematics. The book describes high-level Monte Carlo methods for standard simulation and the simulation of...

  16. Multi-level Monte Carlo Methods for Efficient Simulation of Coulomb Collisions

    Science.gov (United States)

    Ricketson, Lee

    2013-10-01

    We discuss the use of multi-level Monte Carlo (MLMC) schemes--originally introduced by Giles for financial applications--for the efficient simulation of Coulomb collisions in the Fokker-Planck limit. The scheme is based on a Langevin treatment of collisions, and reduces the computational cost of achieving an RMS error scaling as ε from O(ε^-3)--for standard Langevin methods and binary collision algorithms--to the theoretically optimal scaling O(ε^-2) for the Milstein discretization, and to O(ε^-2 (log ε)^2) with the simpler Euler-Maruyama discretization. In practice, this speeds up simulation by factors up to 100. We summarize standard MLMC schemes, describe some tricks for achieving the optimal scaling, present results from a test problem, and discuss the method's range of applicability. This work was performed under the auspices of the U.S. DOE by the University of California, Los Angeles, under grant DE-FG02-05ER25710, and by LLNL under contract DE-AC52-07NA27344.
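
    A minimal multilevel Monte Carlo sketch in the spirit of the scheme described above, for a Langevin-type SDE dv = -γ v dt + σ dW discretized with Euler-Maruyama and the payoff E[v(T)^2]. The telescoping sum over levels uses coupled fine/coarse paths that share Brownian increments. Parameters, payoff and level sample sizes are illustrative assumptions, not those of the paper.

    import numpy as np

    rng = np.random.default_rng(1)
    gamma, sigma, v0, T = 1.0, 0.5, 1.0, 1.0

    def level_estimator(level, n_samples):
        """Mean of P_fine - P_coarse over n_samples coupled Euler-Maruyama paths."""
        n_fine = 2 ** level
        dt_f = T / n_fine
        vf = np.full(n_samples, v0)       # fine path, step dt_f
        vc = np.full(n_samples, v0)       # coarse path, step 2*dt_f (level > 0)
        dw_pair = np.zeros(n_samples)
        for step in range(n_fine):
            dw = rng.normal(scale=np.sqrt(dt_f), size=n_samples)
            vf += -gamma * vf * dt_f + sigma * dw
            dw_pair += dw
            if level > 0 and step % 2 == 1:          # coarse update every 2 fine steps
                vc += -gamma * vc * 2 * dt_f + sigma * dw_pair
                dw_pair[:] = 0.0
        payoff_f = vf ** 2
        payoff_c = vc ** 2 if level > 0 else np.zeros(n_samples)
        return np.mean(payoff_f - payoff_c)

    levels = 6
    samples = [40000 // 2 ** l + 100 for l in range(levels + 1)]  # crude allocation
    estimate = sum(level_estimator(l, n) for l, n in zip(range(levels + 1), samples))
    print("MLMC estimate of E[v(T)^2]:", estimate)   # exact value is about 0.243 here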

  17. Accuracy and borehole influences in pulsed neutron gamma density logging while drilling

    Energy Technology Data Exchange (ETDEWEB)

    Yu Huawei [College of Geo-Resources and Information, China University of Petroleum, Qingdao, Shandong 266555 (China); Center for Engineering Applications of Radioisotopes (CEAR), Department of Nuclear Engineering, North Carolina State University, Raleigh, NC 27695 (United States); Sun Jianmeng [College of Geo-Resources and Information, China University of Petroleum, Qingdao, Shandong 266555 (China); Wang Jiaxin [Center for Engineering Applications of Radioisotopes (CEAR), Department of Nuclear Engineering, North Carolina State University, Raleigh, NC 27695 (United States); Gardner, Robin P., E-mail: gardner@ncsu.edu [Center for Engineering Applications of Radioisotopes (CEAR), Department of Nuclear Engineering, North Carolina State University, Raleigh, NC 27695 (United States)

    2011-09-15

    A new pulsed neutron gamma density (NGD) logging technique has been developed to replace radioactive chemical sources in oil logging tools. The present paper describes Monte-Carlo studies of the accuracy of the near and far density measurements of NGD logging at two spacings, and of the borehole influences. The results show that the accuracy of the near density is not as good as that of the far density. It is difficult to correct for borehole effects using conventional methods because both the near and far density measurements are significantly sensitive to standoffs and mud properties. - Highlights: > Monte Carlo evaluation of pulsed neutron gamma-ray density tools. > Results indicate sensitivity of the tool to standoff and mudcake properties. > Accuracy of far spaced detector is better than near spaced.

  18. Accuracy and borehole influences in pulsed neutron gamma density logging while drilling

    International Nuclear Information System (INIS)

    Yu Huawei; Sun Jianmeng; Wang Jiaxin; Gardner, Robin P.

    2011-01-01

    A new pulsed neutron gamma density (NGD) logging technique has been developed to replace radioactive chemical sources in oil logging tools. The present paper describes Monte-Carlo studies of the accuracy of the near and far density measurements of NGD logging at two spacings, and of the borehole influences. The results show that the accuracy of the near density is not as good as that of the far density. It is difficult to correct for borehole effects using conventional methods because both the near and far density measurements are significantly sensitive to standoffs and mud properties. - Highlights: → Monte Carlo evaluation of pulsed neutron gamma-ray density tools. → Results indicate sensitivity of the tool to standoff and mudcake properties. → Accuracy of far spaced detector is better than near spaced.

  19. Monte Carlo simulation for the estimation of iron in human whole ...

    Indian Academy of Sciences (India)

    2017-02-10

    Feb 10, 2017 ... Monte Carlo N-particle (MCNP) code has been used to simulate the transport of gamma photon rays ... experimental data, and better than the theoretical XCOM values. ... tions in the materials, according to probability density.

  20. Uncertainty Propagation Analysis for the Monte Carlo Time-Dependent Simulations

    International Nuclear Information System (INIS)

    Shaukata, Nadeem; Shim, Hyung Jin

    2015-01-01

    In this paper, a conventional method to control the neutron population for super-critical systems is implemented. Instead of considering the cycles, the simulation is divided into time intervals. At the end of each time interval, neutron population control is applied to the banked neutrons. Randomly selected neutrons are discarded until the size of the neutron population matches the initial neutron histories at the beginning of the time simulation. A time-dependent simulation mode has also been implemented in the development version of the SERPENT 2 Monte Carlo code. In this mode, a sequential population control mechanism has been proposed for the modeling of prompt super-critical systems. A Monte Carlo method has been properly used in the TART code for dynamic criticality calculations. For super-critical systems, the neutron population is allowed to grow over a period of time. The neutron population is uniformly combed to return it to the neutron population started with at the beginning of the time boundary. In this study, a conventional time-dependent Monte Carlo (TDMC) algorithm is implemented. There is an exponential growth of the neutron population in the estimation of the neutron density tally for super-critical systems, and the number of neutrons being tracked exceeds the memory of the computer. In order to control this exponential growth at the end of each time boundary, a conventional time cut-off population control strategy is included in TDMC. A scale factor is introduced to tally the desired neutron density at the end of each time boundary. The main purpose of this paper is the quantification of uncertainty propagation in neutron densities at the end of each time boundary for super-critical systems. This uncertainty is caused by the uncertainty resulting from the introduction of the scale factor. The effectiveness of TDMC is examined for a one-group infinite homogeneous problem (the rod model) and a two-group infinite homogeneous problem. The desired neutron density is tallied by the introduction of
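
    A toy illustration of the population-control idea described above: within each time interval the neutron bank of a super-critical system grows, is then "combed" back to the starting size, and a running scale factor keeps track of the true (exponentially growing) density. The multiplication model below is a crude stand-in for real transport, not the TDMC implementation itself.

    import random

    random.seed(7)
    N0 = 10_000
    bank = [1.0] * N0          # statistical weights of banked neutrons
    scale = 1.0
    k_interval = 1.2           # assumed multiplication per time interval

    for boundary in range(5):
        # fake "transport": each neutron yields 1 or 2 progeny, k_interval on average
        new_bank = []
        for w in bank:
            n_progeny = 2 if random.random() < (k_interval - 1.0) else 1
            new_bank.extend([w] * n_progeny)

        # population control: randomly discard neutrons until the bank is back to N0,
        # folding the discarded fraction into the scale factor
        scale *= len(new_bank) / N0
        bank = random.sample(new_bank, N0)

        tallied_density = sum(bank) / N0
        print(f"t{boundary + 1}: tracked = {len(bank)}, "
              f"scaled density = {scale * tallied_density:.3f}")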

  1. Uncertainty Propagation Analysis for the Monte Carlo Time-Dependent Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Shaukata, Nadeem; Shim, Hyung Jin [Seoul National University, Seoul (Korea, Republic of)

    2015-10-15

    In this paper, a conventional method to control the neutron population for super-critical systems is implemented. Instead of considering the cycles, the simulation is divided into time intervals. At the end of each time interval, neutron population control is applied to the banked neutrons. Randomly selected neutrons are discarded until the size of the neutron population matches the initial neutron histories at the beginning of the time simulation. A time-dependent simulation mode has also been implemented in the development version of the SERPENT 2 Monte Carlo code. In this mode, a sequential population control mechanism has been proposed for the modeling of prompt super-critical systems. A Monte Carlo method has been properly used in the TART code for dynamic criticality calculations. For super-critical systems, the neutron population is allowed to grow over a period of time. The neutron population is uniformly combed to return it to the neutron population started with at the beginning of the time boundary. In this study, a conventional time-dependent Monte Carlo (TDMC) algorithm is implemented. There is an exponential growth of the neutron population in the estimation of the neutron density tally for super-critical systems, and the number of neutrons being tracked exceeds the memory of the computer. In order to control this exponential growth at the end of each time boundary, a conventional time cut-off population control strategy is included in TDMC. A scale factor is introduced to tally the desired neutron density at the end of each time boundary. The main purpose of this paper is the quantification of uncertainty propagation in neutron densities at the end of each time boundary for super-critical systems. This uncertainty is caused by the uncertainty resulting from the introduction of the scale factor. The effectiveness of TDMC is examined for a one-group infinite homogeneous problem (the rod model) and a two-group infinite homogeneous problem. The desired neutron density is tallied by the introduction of

  2. Density functional theory for polymeric systems in 2D

    International Nuclear Information System (INIS)

    Słyk, Edyta; Bryk, Paweł; Roth, Roland

    2016-01-01

    We propose density functional theory for polymeric fluids in two dimensions. The approach is based on Wertheim's first order thermodynamic perturbation theory (TPT) and closely follows density functional theory for polymers proposed by Yu and Wu (2002 J. Chem. Phys. 117 2368). As a simple application we evaluate the density profiles of tangent hard-disk polymers at hard walls. The theoretical predictions are compared against the results of the Monte Carlo simulations. We find that for short chain lengths the theoretical density profiles are in an excellent agreement with the Monte Carlo data. The agreement is less satisfactory for longer chains. The performance of the theory can be improved by recasting the approach using the self-consistent field theory formalism. When the self-avoiding chain statistics is used, the theory yields a marked improvement in the low density limit. Further improvements for long chains could be reached by going beyond the first order of TPT. (paper)

  3. Topological excitations and Monte-Carlo simulation of the Abelian-Higgs model

    International Nuclear Information System (INIS)

    Ranft, J.

    1981-01-01

    The phase structure and topological excitations, in particular the magnetic monopole current density, are investigated in a Monte-Carlo simulation of the lattice version of the four-dimensional Abelian-Higgs model. The monopole current density is found to be large in the confinement phase and rapidly decreasing in the Coulomb and Higgs phases. This result supports the view that confinement is connected with the condensation of monopole-antimonopole pairs.

  4. A new method to assess the statistical convergence of monte carlo solutions

    International Nuclear Information System (INIS)

    Forster, R.A.

    1991-01-01

    Accurate Monte Carlo confidence intervals (CIs), which are formed with an estimated mean and an estimated standard deviation, can only be created when the number of particle histories N becomes large enough so that the central limit theorem can be applied. The Monte Carlo user has a limited number of marginal methods to assess the fulfillment of this condition, such as statistical error reduction proportional to 1/√N with error magnitude guidelines and third and fourth moment estimators. A new method is presented here to assess the statistical convergence of Monte Carlo solutions by analyzing the shape of the empirical probability density function (PDF) of history scores. Related work in this area includes the derivation of analytic score distributions for a two-state Monte Carlo problem. Score distribution histograms have been generated to determine when a small number of histories accounts for a large fraction of the result. This summary describes initial studies of empirical Monte Carlo history score PDFs created from score histograms of particle transport simulations. 7 refs., 1 fig
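
    A toy version of the score-PDF diagnostic discussed above: build the empirical PDF of per-history scores and ask what fraction of the tally is carried by the largest scores. The lognormal score model is an arbitrary stand-in for a real transport tally, not the analysis of the paper.

    import numpy as np

    rng = np.random.default_rng(3)
    scores = rng.lognormal(mean=0.0, sigma=2.5, size=100_000)   # heavy-tailed toy scores

    hist, edges = np.histogram(scores, bins=np.logspace(-3, 4, 40))
    pdf = hist / (hist.sum() * np.diff(edges))                  # empirical score PDF

    top = np.sort(scores)[-1000:]                               # top 1% of 100 000 histories
    print("fraction of the result carried by the top 1% of histories:",
          top.sum() / scores.sum())
    print("estimated relative error (1/sqrt(N) behaviour):",
          scores.std() / (scores.mean() * np.sqrt(scores.size)))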

  5. Collectivity in heavy nuclei in the shell model Monte Carlo approach

    International Nuclear Information System (INIS)

    Özen, C.; Alhassid, Y.; Nakada, H.

    2014-01-01

    The microscopic description of collectivity in heavy nuclei in the framework of the configuration-interaction shell model has been a major challenge. The size of the model space required for the description of heavy nuclei prohibits the use of conventional diagonalization methods. We have overcome this difficulty by using the shell model Monte Carlo (SMMC) method, which can treat model spaces that are many orders of magnitude larger than those that can be treated by conventional methods. We identify a thermal observable that can distinguish between vibrational and rotational collectivity and use it to describe the crossover from vibrational to rotational collectivity in families of even-even rare-earth isotopes. We calculate the state densities in these nuclei and find them to be in close agreement with experimental data. We also calculate the collective enhancement factors of the corresponding level densities and find that their decay with excitation energy is correlated with the pairing and shape phase transitions. (author)

  6. State and level densities for 23<=A<=40

    International Nuclear Information System (INIS)

    Beckerman, M.

    1975-01-01

    State and level density parameters are deduced for nuclei in the mass range 23<=A<=40 by combining low energy experimental data with high energy numerical calculations. Low energy experimental information is obtained from direct level counting, s- and p-wave neutron resonance measurements, charged particle resonance measurements, and stripping and pickup reaction data. Numerical calculations are performed for excitation energies from 45 to 50 MeV using realistic single particle energies deduced from experimental data. (author)

  7. Shell model Monte Carlo investigation of rare earth nuclei

    International Nuclear Information System (INIS)

    White, J. A.; Koonin, S. E.; Dean, D. J.

    2000-01-01

    We utilize the shell model Monte Carlo method to study the structure of rare earth nuclei. This work demonstrates the first systematic full oscillator shell with intruder calculations in such heavy nuclei. Exact solutions of a pairing plus quadrupole Hamiltonian are compared with the static path approximation in several dysprosium isotopes from A=152 to 162, including the odd mass A=153. Some comparisons are also made with Hartree-Fock-Bogoliubov results from Baranger and Kumar. Basic properties of these nuclei at various temperatures and spin are explored. These include energy, deformation, moments of inertia, pairing channel strengths, band crossing, and evolution of shell model occupation numbers. Exact level densities are also calculated and, in the case of 162 Dy, compared with experimental data. (c) 2000 The American Physical Society

  8. Level Densities and Radiative Strength Functions in 170,171Yb

    International Nuclear Information System (INIS)

    Agvaanluvsan, U.; Schiller, A.; Becker, J.A.; Berstein, L.A.; Guttormsen, M.; Mitchell, G.E.; Rekstad, J.; Siem, S.; Voinov, A.

    2003-01-01

    Level densities and radiative strength functions in 171 Yb and 170 Yb nuclei have been measured with the 171 Yb( 3 He, 3 He'γ) 171 Yb and 171 Yb( 3 He, αγ) 170 Yb reactions. A simultaneous determination of the nuclear level density and the radiative strength function was made. The present data add to and are consistent with previous results for several other rare earth nuclei. The method will be briefly reviewed and the results from the analysis will be presented. The radiative strength function for 171 Yb is compared to previously published work.

  9. Solvent effects on excited-state structures: A quantum Monte Carlo and density functional study

    NARCIS (Netherlands)

    Guareschi, R.; Floris, F.M.; Amovilli, C.; Filippi, Claudia

    2014-01-01

    We present the first application of quantum Monte Carlo (QMC) in its variational flavor combined with the polarizable continuum model (PCM) to perform excited-state geometry optimization in solution. Our implementation of the PCM model is based on a reaction field that includes both volume and

  10. Monte Carlo analysis of highly compressed fissile assemblies. Pt. 1

    International Nuclear Information System (INIS)

    Raspet, R.; Baird, G.E.

    1978-01-01

    Laser-induced fission of highly compressed bare fissionable spheres is analyzed using Monte Carlo techniques. The critical mass and critical radius as a function of density are calculated, and the fission energy yield is calculated and compared with the input laser energy necessary to achieve compression to criticality. (orig.)

  11. Monte-Carlo calculations of light nuclei with the Reid potential

    Energy Technology Data Exchange (ETDEWEB)

    Lomnitz-Adler, J. (Universidad Nacional Autonoma de Mexico, Mexico City. Inst. de Fisica)

    1981-01-01

    A Monte-Carlo method is developed to calculate the binding energy and density distribution of the 3 H and 4 He nuclei for a variational wave function written as a symmetrized product of correlation operators. The upper bounds obtained with the Reid potential are -6.86 +- .08 and -22.9 +- .5 MeV respectively. The Coulomb interaction in 4 He is ignored. The calculated density distributions have reasonable radii, but they do not show any dip at the center.

  12. Monte-Carlo calculations of light nuclei with the Reid potential

    International Nuclear Information System (INIS)

    Lomnitz-Adler, J.

    1981-01-01

    A Monte-Carlo method is developed to calculate the binding energy and density distribution of the 3 H and 4 He nuclei for a variational wave function written as a symmetrized product of correlation operators. The upper bounds obtained with the Reid potential are -6.86 +- .08 and -22.9 +- .5 MeV respectively. The Coulomb interaction in 4 He is ignored. The calculated density distributions have reasonable radii, but they do not show any dip at the center. (author)

  13. Enhanced Monte-Carlo-Linked Depletion Capabilities in MCNPX

    International Nuclear Information System (INIS)

    Fensin, Michael L.; Hendricks, John S.; Anghaie, Samim

    2006-01-01

    As advanced reactor concepts challenge the accuracy of current modeling technologies, a higher-fidelity depletion calculation is necessary to model time-dependent core reactivity properly for accurate cycle length and safety margin determinations. The recent integration of CINDER90 into the MCNPX Monte Carlo radiation transport code provides a completely self-contained Monte-Carlo-linked depletion capability. Two advances have been made in the latest MCNPX capability based on problems observed in pre-released versions: continuous energy collision density tracking and proper fission yield selection. Pre-released versions of the MCNPX depletion code calculated the reaction rates for (n,2n), (n,3n), (n,p), (n,α), and (n,γ) by matching the MCNPX steady-state 63-group flux with 63-group cross sections inherent in the CINDER90 library and then collapsing to one-group collision densities for the depletion calculation. This procedure led to inaccuracies due to the miscalculation of the reaction rates resulting from the collapsed multi-group approach. The current version of MCNPX eliminates this problem by using collapsed one-group collision densities generated from continuous energy reaction rates determined during the MCNPX steady-state calculation. MCNPX also now explicitly determines the proper fission yield to be used by the CINDER90 code for the depletion calculation. The CINDER90 code offers a thermal, fast, and high-energy fission yield for each fissile isotope contained in the CINDER90 data file. MCNPX determines which fission yield to use for a specified problem by calculating the integral fission rate for the defined energy boundaries (thermal, fast, and high energy), determining which energy range contains the majority of fissions, and then selecting the appropriate fission yield for the energy range containing the majority of fissions. The MCNPX depletion capability enables complete, relatively easy-to-use depletion calculations in a single Monte Carlo code

  14. Adaptive Multilevel Monte Carlo Simulation

    KAUST Repository

    Hoel, H

    2011-08-23

    This work generalizes a multilevel forward Euler Monte Carlo method introduced in Michael B. Giles (Michael Giles. Oper. Res. 56(3):607–617, 2008.) for the approximation of expected values depending on the solution to an Itô stochastic differential equation. The work (Michael Giles. Oper. Res. 56(3):607–617, 2008.) proposed and analyzed a forward Euler multilevel Monte Carlo method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a standard, single level, Forward Euler Monte Carlo method. This work introduces an adaptive hierarchy of non uniform time discretizations, generated by an adaptive algorithm introduced in (Anna Dzougoutov et al. Raùl Tempone. Adaptive Monte Carlo algorithms for stopped diffusion. In Multiscale methods in science and engineering, volume 44 of Lect. Notes Comput. Sci. Eng., pages 59–88. Springer, Berlin, 2005; Kyoung-Sook Moon et al. Stoch. Anal. Appl. 23(3):511–558, 2005; Kyoung-Sook Moon et al. An adaptive algorithm for ordinary, stochastic and partial differential equations. In Recent advances in adaptive computation, volume 383 of Contemp. Math., pages 325–343. Amer. Math. Soc., Providence, RI, 2005.). This form of the adaptive algorithm generates stochastic, path dependent, time steps and is based on a posteriori error expansions first developed in (Anders Szepessy et al. Comm. Pure Appl. Math. 54(10):1169–1214, 2001). Our numerical results for a stopped diffusion problem exhibit savings in the computational cost to achieve an accuracy of O(TOL), from O(TOL^-3) for a single level version of the adaptive algorithm to O((TOL^-1 log(TOL))^2).

  15. Monte Carlo simulation: tool for the calibration in analytical determination of radionuclides

    International Nuclear Information System (INIS)

    Gonzalez, Jorge A. Carrazana; Ferrera, Eduardo A. Capote; Gomez, Isis M. Fernandez; Castro, Gloria V. Rodriguez; Ricardo, Niury Martinez

    2013-01-01

    This work shows how the traceability of the analytical determinations is established using this calibration method. It highlights the advantages offered by Monte Carlo simulation for the application of corrections for differences in chemical composition, density and height of the samples analyzed. Likewise, the results obtained by the LVRA in two exercises organized by the International Atomic Energy Agency (IAEA) are presented. In these exercises (an intercomparison and a proficiency test), all reported analytical results were obtained from efficiency calibrations by Monte Carlo simulation using the DETEFF program

  16. Effect of sample moisture and bulk density on performance of the 241Am-Be source based prompt gamma rays neutron activation analysis setup. A Monte Carlo study

    International Nuclear Information System (INIS)

    Almisned, Ghada

    2010-01-01

    Monte Carlo simulations were carried out to study the dependence of the gamma-ray yield on the bulk density and moisture content of five different lengths of Portland cement samples in a thermal neutron capture based Prompt Gamma ray Neutron Activation Analysis (PGNAA) setup, for a source-inside-moderator geometry using an 241 Am-Be neutron source. In this study, yields of the 1.94 and 6.42 MeV prompt gamma rays from calcium in the five Portland cement samples were calculated as a function of sample bulk density and moisture content. The study showed a strong dependence of the 1.94 and 6.42 MeV gamma-ray yield upon the sample bulk density but a weaker dependence upon sample moisture content. For an order of magnitude increase in the sample bulk density, an order of magnitude increase in the gamma-ray yield was observed, i.e., a one-to-one correspondence. For the gamma-ray yield dependence upon sample moisture content, an order of magnitude increase in the moisture content of the sample resulted in about a 16-17% increase in the yield of the 1.94 and 6.42 MeV gamma rays from calcium. (author)

  17. Level densities of iron isotopes and lower-energy enhancement of γ-strength function

    International Nuclear Information System (INIS)

    Voinov, A V; Grimes, S M; Agvaanluvsan, U; Algin, E; Belgya, T; Brune, C R; Guttormsen, M; Hornish, M J; Massey, T N; Mitchell, G; Rekstad, J; Schiller, A; Siem, S

    2005-01-01

    The neutron spectrum from the 55 Mn(d,n) 56 Fe reaction has been measured at E d = 7 MeV. The level density of 56 Fe obtained from the neutron evaporation spectrum has been compared to the level density from the Oslo-type 57 Fe( 3 He, αγ) 56 Fe experiment [1]. The good agreement supports the recent results [1, 8], including the presence of a low-energy enhancement in the γ-strength function for iron isotopes. The new level density function allowed us to investigate the excitation energy dependence of this enhancement, which is shown to increase with increasing excitation energy.

  18. Application of monte-carlo method in definition of key categories of most radioactive polluted soil

    Energy Technology Data Exchange (ETDEWEB)

    Mahmudov, H M; Valibeyova, G; Jafarov, Y D; Musaeva, Sh Z [Institute of Radiation Problems, Azerbaijan National Academy of Sciences, Baku (Azerbaijan)]; and others

    2006-10-15

    Full text: The principle of analysis by the Monte Carlo method consists of choosing random values of the exposure dose rate coefficients and of the activity data within the boundaries of their individual frequency distribution densities of exposure dose rates. Analysis by the Monte Carlo method is useful for carrying out a sensitivity analysis of the measured exposure dose rate in order to define the major factors causing uncertainty in reports. Such concepts can be valuable for the definition of key categories of radiation-polluted soil and the establishment of priorities in using resources for enhancement of the report. The relative uncertainty of the radiation-polluted soil categories determined with the help of the Monte Carlo analysis can, where available, be applied using a larger divergence between the average value and a confidence limit when the resources available for preparation are limited, and to prepare possible estimates for the most significant categories of sources. Use of the notion of 'uncertainty' in reports also allows a threshold value to be set for a key category of sources, if necessary, for exact reflection of the 90 per cent uncertainty in reports. According to radiation safety norms, a radiation background level exceeding 33 mkR/hour is considered dangerous. Using the Monte Carlo method, the most dangerous sites, and sites frequently subjected to disposal and utilization, were chosen from the analyzed samples of polluted soil.

  19. Application of monte-carlo method in definition of key categories of most radioactive polluted soil

    International Nuclear Information System (INIS)

    Mahmudov, H.M; Valibeyova, G.; Jafarov, Y.D; Musaeva, Sh.Z

    2006-01-01

    Full text: The principle of analysis by the Monte Carlo method consists of choosing random values of the exposure dose rate coefficients and of the activity data within the boundaries of their individual frequency distribution densities of exposure dose rates. Analysis by the Monte Carlo method is useful for carrying out a sensitivity analysis of the measured exposure dose rate in order to define the major factors causing uncertainty in reports. Such concepts can be valuable for the definition of key categories of radiation-polluted soil and the establishment of priorities in using resources for enhancement of the report. The relative uncertainty of the radiation-polluted soil categories determined with the help of the Monte Carlo analysis can, where available, be applied using a larger divergence between the average value and a confidence limit when the resources available for preparation are limited, and to prepare possible estimates for the most significant categories of sources. Use of the notion of 'uncertainty' in reports also allows a threshold value to be set for a key category of sources, if necessary, for exact reflection of the 90 per cent uncertainty in reports. According to radiation safety norms, a radiation background level exceeding 33 mkR/hour is considered dangerous. Using the Monte Carlo method, the most dangerous sites, and sites frequently subjected to disposal and utilization, were chosen from the analyzed samples of polluted soil.

  20. Magnetism of iron and nickel from rotationally invariant Hirsch-Fye quantum Monte Carlo calculations

    Science.gov (United States)

    Belozerov, A. S.; Leonov, I.; Anisimov, V. I.

    2013-03-01

    We present a rotationally invariant Hirsch-Fye quantum Monte Carlo algorithm in which the spin rotational invariance of Hund's exchange is approximated by averaging over all possible directions of the spin quantization axis. We employ this technique to perform benchmark calculations for the two- and three-band Hubbard models on the infinite-dimensional Bethe lattice. Our results agree quantitatively well with those obtained using the continuous-time quantum Monte Carlo method with rotationally invariant Coulomb interaction. The proposed approach is employed to compute the electronic and magnetic properties of paramagnetic α iron and nickel. The obtained Curie temperatures agree well with experiment. Our results indicate that the magnetic transition temperature is significantly overestimated by using the density-density type of Coulomb interaction.

  1. Multilevel Monte Carlo in Approximate Bayesian Computation

    KAUST Repository

    Jasra, Ajay

    2017-02-13

    In the following article we consider approximate Bayesian computation (ABC) inference. We introduce a method for numerically approximating ABC posteriors using the multilevel Monte Carlo (MLMC). A sequential Monte Carlo version of the approach is developed and it is shown under some assumptions that for a given level of mean square error, this method for ABC has a lower cost than i.i.d. sampling from the most accurate ABC approximation. Several numerical examples are given.

  2. Quantum Monte Carlo formulation of volume polarization in dielectric continuum theory

    NARCIS (Netherlands)

    Amovilli, Claudio; Filippi, Claudia; Floris, Franca Maria

    2008-01-01

    We present a novel formulation based on quantum Monte Carlo techniques for the treatment of volume polarization due to quantum mechanical penetration of the solute charge density in the solvent domain. The method allows to accurately solve Poisson’s equation of the solvation model coupled with the

  3. Efficient Monte Carlo Simulations of Gas Molecules Inside Porous Materials.

    Science.gov (United States)

    Kim, Jihan; Smit, Berend

    2012-07-10

    Monte Carlo (MC) simulations are commonly used to obtain adsorption properties of gas molecules inside porous materials. In this work, we discuss various optimization strategies that lead to faster MC simulations, with CO2 gas molecules inside host zeolite structures used as a test system. The reciprocal space contribution of the gas-gas Ewald summation and both the direct and the reciprocal gas-host potential energy interactions are stored inside energy grids to reduce the wall time in the MC simulations. Additional speedup can be obtained by selectively calling the routine that computes the gas-gas Ewald summation, which does not impact the accuracy of the zeolite's adsorption characteristics. We utilize a two-level density-biased sampling technique in the grand canonical Monte Carlo (GCMC) algorithm to restrict CO2 insertion moves to low-energy regions within the zeolite materials and accelerate convergence. Finally, we make use of graphics processing unit (GPU) hardware to conduct multiple MC simulations in parallel by judiciously mapping the GPU threads to the available workload. As a result, we can obtain a CO2 adsorption isotherm curve with 14 pressure values (up to 10 atm) for a zeolite structure within a minute of total compute wall time.
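
    A schematic sketch of the energy-grid idea described above: the gas-host energy is tabulated once on a 3D grid, and subsequent MC moves use a fast table lookup instead of re-summing over framework atoms. The toy cubic "framework", the Lennard-Jones parameters and the grid resolution below are illustrative assumptions, not the setup of the paper.

    import numpy as np

    rng = np.random.default_rng(5)
    box = 20.0                                   # cubic cell edge (Angstrom, assumed)
    host = rng.uniform(0.0, box, size=(200, 3))  # toy framework atom positions
    eps, sig = 1.0, 3.0                          # Lennard-Jones parameters (assumed)

    def host_energy(pos):
        d = host - pos
        d -= box * np.round(d / box)             # minimum-image convention
        r2 = np.maximum((d ** 2).sum(axis=1), 0.8 * sig ** 2)   # cap overlaps
        s6 = (sig ** 2 / r2) ** 3
        return np.sum(4.0 * eps * (s6 ** 2 - s6))

    # one-time grid construction (expensive part done once)
    n = 32
    axis = (np.arange(n) + 0.5) * box / n
    grid = np.array([[[host_energy(np.array([x, y, z])) for z in axis]
                      for y in axis] for x in axis])

    def grid_energy(pos):
        i, j, k = (np.asarray(pos) / box * n).astype(int) % n   # containing grid cell
        return grid[i, j, k]

    trial = rng.uniform(0.0, box, size=3)        # candidate insertion position
    print("direct sum:", host_energy(trial))
    print("grid value:", grid_energy(trial))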

  4. Evaluation of the Automatic Density Compensation for Pressurizer Level Measurement

    International Nuclear Information System (INIS)

    Jeong, Insoo; Min, Seohong; Ahn, Myunghoon

    2014-01-01

    When using two transmitters, it is difficult for the operators to identify the correct level of the pressurizer (PZR) upon failure of one of the two transmitters. For this reason, the Korean Utility Requirements Document (KURD) requires the operators to use three independent level indicators. Two hot-calibrated transmitters and one cold-calibrated transmitter make up the PZR level transmitters in APR1400. In this paper, the deviation between cold calibration and hot calibration is evaluated, and the application of compensated and uncompensated PZR level measurements during normal operation of APR1400 is introduced. The PZR level signals for APR1400 come in three channels. To satisfy the KURD requirements for PZR level measurement, and at the same time to achieve a correct design and implementation, the applicability of and differences between hot calibration and cold calibration, and between compensated and uncompensated level, were evaluated as follows: For proper indication of PZR levels under normal operating conditions, two of the three transmitters went through hot calibration and the remaining transmitter went through cold calibration. This allows the entire region of the PZR to be indicated regardless of the plant operation mode. For automatic density compensation per the KURD requirements, the density-compensated PZR level algorithm implemented in the DCS controller and PRV logic is adopted as a signal validation method.

  5. Shampoo, Soy Sauce, and the Prince's Pendant: Density for Middle-Level Students

    Science.gov (United States)

    Chandrasekhar, Meera; Litherland, Rebecca

    2006-01-01

    In this article, the authors describe a series of activities they have used with middle-level students. The first set of lessons explores density through the layering of liquids. In the second set, they use some of the same liquids to explore the density of solids. The third set investigates how temperature affects the density of…

  6. Continuum corrections to the level density and its dependence on excitation energy, n-p asymmetry, and deformation

    International Nuclear Information System (INIS)

    Charity, R.J.; Sobotka, L.G.

    2005-01-01

    In the independent-particle model, the nuclear level density is determined from the neutron and proton single-particle level densities. The single-particle level density for the positive-energy continuum levels is important at high excitation energies for stable nuclei and at all excitation energies for nuclei near the drip lines. This single-particle level density is subdivided into compound-nucleus and gas components. Two methods are considered for this subdivision: In the subtraction method, the single-particle level density is determined from the scattering phase shifts. In the Gamov method, only the narrow Gamov states or resonances are included. The level densities calculated with these two methods are similar; both can be approximated by the backshifted Fermi-gas expression with level-density parameters that are dependent on A, but with very little dependence on the neutron or proton richness of the nucleus. However, a small decrease in the level-density parameter is predicted for some nuclei very close to the drip lines. The largest difference between the calculations using the two methods is the deformation dependence of the level density. The Gamov method predicts a very strong peaking of the level density at sphericity for high excitation energies. This leads to a suppression of deformed configurations and, consequently, the fission rate predicted by the statistical model is reduced in the Gamov method

  7. Exploring effective interactions through transition charge density ...

    Indian Academy of Sciences (India)

    Systematics like reduced transition probabilities B(E2) and static quadrupole moments Q(2) ... approximations of solving large scale shell model problems in Monte Carlo meth- ... We present the theoretical study of transition charge densities.

  8. Serum oxidized low density lipoprotein levels in preeclamptic and normotensive pregnants.

    Science.gov (United States)

    Kozan, A; Yildirmak, S Turkmen; Mihmanli, V; Ayabakan, H; Cicek, Y G; Kalaslioglu, V; Doean, S; Cebeci, H Cerci

    2015-01-01

    BACKGROUNDS/AIM: The aim of the study was to determine serum lipid and oxidized low density lipoprotein (ox-LDL) levels in preeclamptic pregnants and compare them with those of normotensives. Ox-LDL levels were determined by enzyme linked immunosorbent assay (ELISA); total cholesterol, high density lipoprotein (HDL)-cholesterol and triglyceride levels were measured by enzymatic colorimetric assay in 26 normotensive and 27 preeclamptic pregnants. LDL and very low density lipoprotein (VLDL) cholesterol were calculated by the Friedewald formula. Serum levels of Ox-LDL (U/L), total cholesterol (mg/dL), HDL-cholesterol (mg/dL), LDL-cholesterol (mg/dL), triglyceride (mg/dL), and VLDL-cholesterol (mg/dL) in normotensive and preeclamptic pregnants were found to be 130±60 and 133±69; 248±49 and 248±81; 67±14 and 61±16; 147±61 and 135±59; 207±76 and 256±87; and 41±15 and 50±17, respectively. Mean values of Ox-LDL and the other lipid parameters were higher than the upper limits of their reference ranges in both groups. However, no significant differences were found in Ox-LDL, total, HDL, and LDL-cholesterol levels between the two groups, whereas the levels of triglyceride and VLDL-cholesterol were significantly higher in the preeclampsia group. The present results suggest that the levels of serum Ox-LDL and the other lipid parameters rise as a result of pregnancy rather than as a result of preeclampsia.
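
    The Friedewald estimate referred to above, for concentrations in mg/dL (valid only when triglycerides are below roughly 400 mg/dL): VLDL-C = TG/5 and LDL-C = TC - HDL-C - TG/5. The example call below uses the normotensive group means quoted in the record; since the study computed values per patient, the group means need not match exactly.

    def friedewald(total_chol, hdl, triglycerides):
        # all inputs and outputs in mg/dL
        vldl = triglycerides / 5.0
        ldl = total_chol - hdl - vldl
        return ldl, vldl

    print(friedewald(248, 67, 207))   # -> (139.6, 41.4)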

  9. Diffusion quantum Monte Carlo for molecules

    International Nuclear Information System (INIS)

    Lester, W.A. Jr.

    1986-07-01

    A quantum mechanical Monte Carlo method has been used for the treatment of molecular problems. The imaginary-time Schroedinger equation written with a shift in zero energy [E_T - V(R)] can be interpreted as a generalized diffusion equation with a position-dependent rate or branching term. Since diffusion is the continuum limit of a random walk, one may simulate the Schroedinger equation with a function psi (note, not psi^2) as a density of "walks." The walks undergo an exponential birth and death as given by the rate term. 16 refs., 2 tabs
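
    A minimal sketch of the walker picture described above, for a 1D harmonic oscillator V(x) = x^2/2 (hbar = m = 1, exact ground-state energy 0.5). The time step, population target and feedback constant are arbitrary choices for illustration, not a production diffusion Monte Carlo code.

    import numpy as np

    rng = np.random.default_rng(11)
    dtau, n_target, n_steps = 0.01, 2000, 4000

    def V(x):
        return 0.5 * x ** 2

    walkers = rng.normal(size=n_target)          # initial "walks"
    E_T = np.mean(V(walkers))                    # initial trial (reference) energy
    energies = []

    for step in range(n_steps):
        # diffusion: continuum limit of the random walk
        walkers = walkers + rng.normal(scale=np.sqrt(dtau), size=walkers.size)
        # birth/death from the rate term exp(-dtau*(V - E_T))
        weights = np.exp(-dtau * (V(walkers) - E_T))
        copies = (weights + rng.uniform(size=walkers.size)).astype(int)
        walkers = np.repeat(walkers, copies)
        # feedback on E_T keeps the population near n_target
        E_est = np.mean(V(walkers))
        E_T = E_est + (1.0 - walkers.size / n_target)
        if step > n_steps // 2:
            energies.append(E_est)

    print("DMC ground-state energy ~", np.mean(energies), "(exact 0.5)")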

  10. Gamma-ray yield dependence on bulk density and moisture content of a sample of a PGNAA setup. A Monte Carlo study

    International Nuclear Information System (INIS)

    Nagadi, M.M.; Naqvi, A.A.

    2007-01-01

    Monte Carlo calculations were carried out to study the dependence of γ-ray yield on the bulk density and moisture content of a sample in a thermal-neutron capture-based prompt gamma neutron activation analysis (PGNAA) setup. The results of the study showed a strong dependence of the γ-ray yield upon the sample bulk density. An order of magnitude increase in yield of 1.94 and 6.42 MeV prompt γ-rays from calcium in a Portland cement sample was observed for a corresponding order of magnitude increase in the sample bulk density. In contrast, the γ-ray yield has a weak dependence on sample moisture content, and an increase of only 20% in yield of 1.94 and 6.42 MeV prompt γ-rays from calcium in the Portland cement sample was observed for an order of magnitude increase in the moisture content of the Portland cement sample. A similar effect of moisture content has been observed on the yield of 1.167 MeV prompt γ-rays from chlorine contaminants in Portland cement samples. For an order of magnitude increase in the moisture content of the sample, a 7 to 12% increase in the yield of the 1.167 MeV chlorine γ-ray was observed for the Portland cement samples containing 1 to 5 wt.% chlorine contaminants. This study has shown that effects of sample moisture content on prompt γ-ray yield from constituents of a Portland cement sample are insignificant in a thermal-neutron capture-based PGNAA setup. (author)

  11. The specific bias in dynamic Monte Carlo simulations of nuclear reactors

    International Nuclear Information System (INIS)

    Yamamoto, T.; Endo, H.; Ishizu, T.; Tatewaki, I.

    2013-01-01

    During the development of a Monte-Carlo-based dynamic code system, we have encountered two major Monte-Carlo-specific problems. One is a breakdown due to 'false super-criticality', which is caused by an accidentally large eigenvalue due to statistical error in spite of the fact that the reactor is actually not critical. The other problem, which is the main topic in this paper, is that the statistical error in power level using the reactivity calculated with a Monte Carlo code is not symmetric about its mean but always positively biased. This signifies that the bias is accumulated as the calculation proceeds and consequently results in an over-estimation of the final power level. It should be noted that the bias will not be eliminated by refining the time step as long as the variance is not zero. A preliminary investigation on this matter using the one-group-precursor point kinetic equations was made and it was concluded that the bias in power level is approximately proportional to the product of the variance in the Monte Carlo calculation and the elapsed time. This conclusion was verified with some numerical experiments. This outcome is important in quantifying the required precision of Monte-Carlo-based reactivity calculations. (authors)
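
    The mechanism behind the positive bias described here can be reproduced in a few lines: because the power depends exponentially on the integrated reactivity, zero-mean statistical noise on the sampled reactivity pushes the expected power above its noise-free value, by an amount that grows with the variance and the elapsed time. The sketch below uses a bare point-kinetics update without precursors and purely illustrative numbers; it is not the derivation of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

LAMBDA = 1.0e-3     # neutron generation time (s), illustrative
SIGMA  = 1.0e-3     # std. dev. of the sampled reactivity (statistical noise), illustrative
DT     = 0.01       # time step (s)
STEPS  = 1000       # 10 s of simulated time
RUNS   = 2000       # independent "calculations"

p = np.ones(RUNS)
for _ in range(STEPS):
    rho = SIGMA * rng.normal(size=RUNS)     # unbiased sample of a truly zero reactivity
    p *= np.exp(rho * DT / LAMBDA)          # point-kinetics power update (no precursors)

print("true final power    :", 1.0)
print("mean simulated power:", p.mean())    # systematically above 1.0
```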

  12. A continuation multilevel Monte Carlo algorithm

    KAUST Repository

    Collier, Nathan; Haji Ali, Abdul Lateef; Nobile, Fabio; von Schwerin, Erik; Tempone, Raul

    2014-01-01

    We propose a novel Continuation Multi Level Monte Carlo (CMLMC) algorithm for weak approximation of stochastic models. The CMLMC algorithm solves the given approximation problem for a sequence of decreasing tolerances, ending when the required error

  13. IAEA advisory group meeting on basic and applied problems of nuclear level densities

    International Nuclear Information System (INIS)

    Bhat, M.R.

    1983-06-01

    Separate entries were made in the data base for 17 of the 19 papers included. Two papers were previously included in the data base. Workshop reports are included on (1) nuclear level density theories and nuclear model reaction cross-section calculations and (2) extraction of nuclear level density information from experimental data

  14. The Monte Carlo performance benchmark test - AIMS, specifications and first results

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. Eduard, E-mail: j.e.hoogenboom@tudelft.nl [Faculty of Applied Sciences, Delft University of Technology (Netherlands); Martin, William R., E-mail: wrm@umich.edu [Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, MI (United States); Petrovic, Bojan, E-mail: Bojan.Petrovic@gatech.edu [Nuclear and Radiological Engineering, Georgia Institute of Technology, Atlanta, GA (United States)

    2011-07-01

    The Monte Carlo performance benchmark for detailed power density calculation in a full-size reactor core is organized under the auspices of the OECD NEA Data Bank. It aims at monitoring over a range of years the increase in performance, measured in terms of standard deviation and computer time, of Monte Carlo calculation of the power density in small volumes. A short description of the reactor geometry and composition is given. One of the unique features of the benchmark exercise is the possibility for participants to upload results at a web site of the NEA Data Bank, which enables online analysis of results and a graphical display of how near we are to the goal of doing a detailed power distribution calculation with acceptable statistical uncertainty in an acceptable computing time. First results are discussed which show that 10 to 100 billion histories must be simulated to reach a standard deviation of a few percent in the estimated power of most of the requested fuel zones. Even when using a large supercomputer, a considerable speedup is still needed to reach the target of 1 hour computer time. An outlook is given of what to expect from this benchmark exercise over the years. Possible extensions of the benchmark for specific issues relevant in current Monte Carlo calculation for nuclear reactors are also discussed. (author)

  15. The Monte Carlo performance benchmark test - AIMS, specifications and first results

    International Nuclear Information System (INIS)

    Hoogenboom, J. Eduard; Martin, William R.; Petrovic, Bojan

    2011-01-01

    The Monte Carlo performance benchmark for detailed power density calculation in a full-size reactor core is organized under the auspices of the OECD NEA Data Bank. It aims at monitoring over a range of years the increase in performance, measured in terms of standard deviation and computer time, of Monte Carlo calculation of the power density in small volumes. A short description of the reactor geometry and composition is given. One of the unique features of the benchmark exercise is the possibility for participants to upload results at a web site of the NEA Data Bank, which enables online analysis of results and a graphical display of how near we are to the goal of doing a detailed power distribution calculation with acceptable statistical uncertainty in an acceptable computing time. First results are discussed which show that 10 to 100 billion histories must be simulated to reach a standard deviation of a few percent in the estimated power of most of the requested fuel zones. Even when using a large supercomputer, a considerable speedup is still needed to reach the target of 1 hour computer time. An outlook is given of what to expect from this benchmark exercise over the years. Possible extensions of the benchmark for specific issues relevant in current Monte Carlo calculation for nuclear reactors are also discussed. (author)

  16. Monte Carlo methods

    Directory of Open Access Journals (Sweden)

    Bardenet Rémi

    2013-07-01

    Bayesian inference often requires integrating some function with respect to a posterior distribution. Monte Carlo methods are sampling algorithms that allow one to compute these integrals numerically when they are not analytically tractable. We review here the basic principles and the most common Monte Carlo algorithms, among which are rejection sampling, importance sampling and Markov chain Monte Carlo (MCMC) methods. We give intuition on the theoretical justification of the algorithms as well as practical advice, trying to relate both. We discuss the application of Monte Carlo in experimental physics, and point to landmarks in the literature for the curious reader.
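
    For readers who want something concrete to run, here is a minimal sketch of two of the samplers listed above: rejection sampling from a truncated Gaussian-like density and self-normalised importance sampling of an expectation under the same density. The target density, proposal distributions and sample sizes are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(42)

# Target: p(x) proportional to exp(-x**2 / 2) restricted to [0, 4] (unnormalised).
def p_unnorm(x):
    return np.exp(-0.5 * x**2)

# --- Rejection sampling: uniform proposal q = 1/4 on [0, 4], envelope M = 4 so p <= M*q ---
def rejection_sample(n):
    out = []
    M = 4.0
    while len(out) < n:
        x = rng.uniform(0.0, 4.0)
        if rng.random() < p_unnorm(x) / (M * 0.25):   # accept with prob p_unnorm / (M * q)
            out.append(x)
    return np.array(out)

# --- Importance sampling: E_p[x**2] with an Exponential(1) proposal on [0, inf) ---
def importance_estimate(n):
    x = rng.exponential(1.0, size=n)
    w = p_unnorm(x) * (x <= 4.0) / np.exp(-x)          # unnormalised importance weights
    return np.sum(w * x**2) / np.sum(w)                # self-normalised estimator

samples = rejection_sample(10_000)
print("rejection-sampling mean of x^2 :", np.mean(samples**2))
print("importance-sampling mean of x^2:", importance_estimate(10_000))
```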

  17. Generalized Freud's equation and level densities with polynomial potential

    Science.gov (United States)

    Boobna, Akshat; Ghosh, Saugata

    2013-08-01

    We study orthogonal polynomials with weight $\exp[-NV(x)]$, where $V(x)=\sum_{k=1}^{d}a_{2k}x^{2k}/2k$ is a polynomial of order 2d. We derive the generalised Freud's equations for $d=3$, 4 and 5 and using this obtain $R_{\mu}=h_{\mu}/h_{\mu-1}$, where $h_{\mu}$ is the normalization constant for the corresponding orthogonal polynomials. Moments of the density functions, expressed in terms of $R_{\mu}$, are obtained using Freud's equation and using this, explicit results of level densities as $N\rightarrow\infty$ are derived.

  18. Atomistic kinetic Monte Carlo study of atomic layer deposition derived from density functional theory.

    Science.gov (United States)

    Shirazi, Mahdi; Elliott, Simon D

    2014-01-30

    To describe the atomic layer deposition (ALD) reactions of HfO2 from Hf(N(CH3)2)4 and H2O, a three-dimensional on-lattice kinetic Monte Carlo model is developed. In this model, all atomistic reaction pathways obtained from density functional theory (DFT) are implemented as reaction events on the lattice. This covers all steps: adsorption of each ALD precursor in the early stage, kinetics of the surface protons, interaction between the remaining precursors (steric effect), influence of remaining fragments on adsorption sites (blocking), densification of each ALD precursor, migration of each ALD precursor, and cooperation between the remaining precursors to adsorb H2O (cooperative effect). The essential chemistry of the ALD reactions depends on the local environment at the surface. The coordination number and a neighbor list are used to implement the dependencies. The validity and necessity of the proposed reaction pathways are statistically established at the mesoscale. The formation of one monolayer of precursor fragments is shown at the end of the metal pulse. Adsorption and dissociation of the H2O precursor onto that layer is described, leading to the delivery of oxygen and protons to the surface during the H2O pulse. Through these processes, the remaining precursor fragments desorb from the surface, leaving the surface with bulk-like and OH-terminated HfO2, ready for the next cycle. The migration of the low-coordinated remaining precursor fragments is also proposed. This process introduces a slow reordering motion (crawling) at the mesoscale, leading to the smooth and conformal thin film that is characteristic of ALD. Copyright © 2013 Wiley Periodicals, Inc.
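
    The event-selection core of an on-lattice kinetic Monte Carlo model of this kind is compact: pick an event with probability proportional to its rate and advance the clock by an exponentially distributed waiting time. The sketch below shows that generic loop with a toy adsorption/desorption event catalogue; it is not the HfO2 ALD chemistry or the DFT-derived rates of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def kmc_run(rates_of_state, apply_event, state, t_end):
    """Generic rejection-free (Gillespie/BKL) kinetic Monte Carlo loop.

    rates_of_state(state) -> 1D array with the rate of every possible event class
    apply_event(state, i) -> state after executing one event of class i
    """
    t = 0.0
    while t < t_end:
        rates = rates_of_state(state)
        r_tot = rates.sum()
        if r_tot == 0.0:                                   # nothing can happen any more
            break
        i = rng.choice(len(rates), p=rates / r_tot)        # event selection
        state = apply_event(state, i)
        t += rng.exponential(1.0 / r_tot)                  # clock advance
    return state, t

# Toy catalogue: adsorption (rate 1.0 per empty site) vs desorption (0.2 per filled site)
def rates_of_state(occ):
    return np.array([1.0 * np.sum(occ == 0), 0.2 * np.sum(occ == 1)])

def apply_event(occ, i):
    sites = np.flatnonzero(occ == 0) if i == 0 else np.flatnonzero(occ == 1)
    occ[rng.choice(sites)] = 1 - i                         # fill (i=0) or empty (i=1) a site
    return occ

occ, t = kmc_run(rates_of_state, apply_event, np.zeros(100, dtype=int), t_end=50.0)
print(f"coverage after t = {t:.1f}: {occ.mean():.2f}")     # expected steady state ~0.83
```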

  19. Classical density functional theory and Monte Carlo simulation study of electric double layer in the vicinity of a cylindrical electrode

    Science.gov (United States)

    Zhou, Shiqi; Lamperski, Stanisław; Sokołowska, Marta

    2017-07-01

    We have performed extensive Monte-Carlo simulations and classical density functional theory (DFT) calculations of the electrical double layer (EDL) near a cylindrical electrode in a primitive model (PM) modified by incorporating interionic dispersion interactions. It is concluded that (i) in general, an unsophisticated use of the mean field (MF) approximation for the interionic dispersion interactions does not distinctly worsen the classical DFT performance, even if the salt ions considered are highly asymmetrical in size (3:1) and charge (5:1), the bulk molar concentration considered is as high as a total bulk ion packing fraction of 0.314, and the surface charge density is up to 0.5 C m^-2. (ii) More specifically, considering the possible noise in the simulation, the local volume charge density profiles are the most accurately predicted by the classical DFT in all situations, and the co- and counter-ion singlet distributions are also rather accurately predicted; whereas the mean electrostatic potential profile is relatively less accurately predicted due to an integral amplification of minor inaccuracy of the singlet distributions. (iii) It is found that the layered structure of the co-ion distribution is, abnormally, possible only if the surface charge density is high enough (for example 0.5 C m^-2); moreover, the co-ion valence abnormally influences the peak height of the first counter-ion layer, which decreases with the former. (iv) Even if both the simulation and DFT indicate an insignificant contribution of the interionic dispersion interaction to the above three ‘local’ quantities, it is clearly shown by the classical DFT that the interionic dispersion interaction does significantly influence a ‘global’ quantity like the cylinder surface-aqueous electrolyte interfacial tension, and this may imply the role of the interionic dispersion interaction in explaining the specific Hofmeister effects. We elucidate all of the above observations based on the

  20. Geometry and Dynamics for Markov Chain Monte Carlo

    OpenAIRE

    Barp, Alessandro; Briol, Francois-Xavier; Kennedy, Anthony D.; Girolami, Mark

    2017-01-01

    Markov Chain Monte Carlo methods have revolutionised mathematical computation and enabled statistical inference within many previously intractable models. In this context, Hamiltonian dynamics have been proposed as an efficient way of building chains which can explore probability densities efficiently. The method emerges from physics and geometry and these links have been extensively studied by a series of authors through the last thirty years. However, there is currently a gap between the in...

  1. The effect of serum magnesium levels and serum endothelin-1 levels on bone mineral density in protein energy malnutrition.

    Science.gov (United States)

    Ozturk, C F; Karakelleoglu, C; Orbak, Z; Yildiz, L

    2012-06-01

    An inadequate and imbalanced intake of protein and energy results in protein-energy malnutrition (PEM). It is known that bone mineral density and serum magnesium levels are low in malnourished children. However, the roles of serum magnesium and endothelin-1 (ET-1) levels in the pathophysiology of bone mineralization are obscure. Thus, the relationships between serum magnesium and ET-1 levels and the changes in bone mineral density were investigated in this study. There were 32 subjects in total; 25 of them had PEM and seven were controls. While mean serum ET-1 levels of the children with kwashiorkor and marasmus showed no statistically significant difference, mean serum ET-1 levels of both groups were significantly higher than that of the control group. Serum magnesium levels were lower than the normal value in 9 (36%) of 25 malnourished children. Malnourished children included in this study were divided into two subgroups according to their serum magnesium levels. While mean serum ET-1 levels in the group with low magnesium levels were significantly higher than that of the group with normal magnesium levels (p malnutrition. Our study suggested that lower magnesium levels and higher ET-1 levels might be important factors in changes of bone mineral density in malnutrition. We recommend that malnourished patients, especially those with hypomagnesaemia, should be treated with magnesium early.

  2. Monte Carlo strategies in scientific computing

    CERN Document Server

    Liu, Jun S

    2008-01-01

    This paperback edition is a reprint of the 2001 Springer edition. This book provides a self-contained and up-to-date treatment of the Monte Carlo method and develops a common framework under which various Monte Carlo techniques can be "standardized" and compared. Given the interdisciplinary nature of the topics and a moderate prerequisite for the reader, this book should be of interest to a broad audience of quantitative researchers such as computational biologists, computer scientists, econometricians, engineers, probabilists, and statisticians. It can also be used as the textbook for a graduate-level course on Monte Carlo methods. Many problems discussed in the latter chapters can be potential thesis topics for masters’ or PhD students in statistics or computer science departments. Jun Liu is Professor of Statistics at Harvard University, with a courtesy Professor appointment at the Harvard Biostatistics Department. Professor Liu was the recipient of the 2002 COPSS Presidents' Award, the most prestigious one for sta...

  3. Effect of interstitial low level laser stimulation in skin density

    Science.gov (United States)

    Jang, Seulki; Ha, Myungjin; Lee, Sangyeob; Yu, Sungkon; Park, Jihoon; Radfar, Edalat; Hwang, Dong Hyun; Lee, Han A.; Kim, Hansung; Jung, Byungjo

    2016-03-01

    As interest in skin has increased, the number of studies on skin care has also increased. A reduction in skin density is one of the symptoms of skin aging; it reduces skin elasticity and leads to wrinkle formation. Low level laser therapy (LLLT) has been suggested as one of the effective therapeutic methods for skin aging, as it can hasten changes in skin density. This study presents the effect of a minimally invasive laser needle system (MILNS) (wavelength: 660 nm, power: 20 mW) on skin density. Rabbits were divided into three groups. Group 1 did not receive any laser stimulation and served as a control group. Groups 2 and 3, as test groups, were exposed to MILNS with energies of 8 J and 6 J on the rabbits' dorsal side once a week, respectively. The skin density of the rabbits was measured every 12 hours using an ultrasound skin scanner.

  4. Extensions of the MCNP5 and TRIPOLI4 Monte Carlo Codes for Transient Reactor Analysis

    Science.gov (United States)

    Hoogenboom, J. Eduard; Sjenitzer, Bart L.

    2014-06-01

    To simulate reactor transients for safety analysis with the Monte Carlo method, the generation and decay of delayed neutron precursors is implemented in the MCNP5 and TRIPOLI4 general purpose Monte Carlo codes. Important new variance reduction techniques like forced decay of precursors in each time interval and the branchless collision method are included to obtain reasonable statistics for the power production per time interval. For the simulation of practical reactor transients, the feedback effect from the thermal-hydraulics must also be included. This requires coupling of the Monte Carlo code with a thermal-hydraulics (TH) code, providing the temperature distribution in the reactor, which affects the neutron transport via the cross section data. The TH code also provides the coolant density distribution in the reactor, directly influencing the neutron transport. Different techniques for this coupling are discussed. As a demonstration, a 3x3 mini fuel assembly with a moving control rod is considered for MCNP5 and a mini core consisting of 3x3 PWR fuel assemblies with control rods and burnable poisons for TRIPOLI4. Results are shown for reactor transients due to control rod movement or withdrawal. The TRIPOLI4 transient calculation is started at low power and includes thermal-hydraulic feedback. The power rises about 10 decades and finally stabilises at a much higher level than the initial one. The examples demonstrate that the modified Monte Carlo codes are capable of performing correct transient calculations, taking into account all geometrical and cross section detail.

  5. Extensions of the MCNP5 and TRIPOLI4 Monte Carlo codes for transient reactor analysis

    International Nuclear Information System (INIS)

    Hoogenboom, J.E.

    2013-01-01

    To simulate reactor transients for safety analysis with the Monte Carlo method, the generation and decay of delayed neutron precursors is implemented in the MCNP5 and TRIPOLI4 general purpose Monte Carlo codes. Important new variance reduction techniques like forced decay of precursors in each time interval and the branchless collision method are included to obtain reasonable statistics for the power production per time interval. For the simulation of practical reactor transients, the feedback effect from the thermal-hydraulics must also be included. This requires the coupling of the Monte Carlo code with a thermal-hydraulics (TH) code, providing the temperature distribution in the reactor, which affects the neutron transport via the cross section data. The TH code also provides the coolant density distribution in the reactor, directly influencing the neutron transport. Different techniques for this coupling are discussed. As a demonstration, a 3x3 mini fuel assembly with a moving control rod is considered for MCNP5 and a mini core consisting of 3x3 PWR fuel assemblies with control rods and burnable poisons for TRIPOLI4. Results are shown for reactor transients due to control rod movement or withdrawal. The TRIPOLI4 transient calculation is started at low power and includes thermal-hydraulic feedback. The power rises about 10 decades and finally stabilises at a much higher level than the initial one. The examples demonstrate that the modified Monte Carlo codes are capable of performing correct transient calculations, taking into account all geometrical and cross section detail. (authors)
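
    One of the variance-reduction devices named in these two records, forced decay of precursors within a time interval, amounts to sampling the emission time from an exponential distribution truncated to the interval and carrying the interval decay probability as a statistical weight, so every precursor contributes a delayed neutron in every time step. The sketch below shows only that sampling step with an illustrative decay constant; it is not an excerpt from MCNP5 or TRIPOLI4.

```python
import numpy as np

rng = np.random.default_rng(3)

def forced_precursor_decay(t0, t1, lam, weight):
    """Force a precursor with decay constant lam (1/s) to emit in [t0, t1].

    Returns the sampled emission time and the adjusted statistical weight.
    """
    p_decay = 1.0 - np.exp(-lam * (t1 - t0))       # probability of decaying in the interval
    u = rng.random()
    # inverse-CDF sampling of an exponential truncated to [t0, t1]
    t_emit = t0 - np.log(1.0 - u * p_decay) / lam
    return t_emit, weight * p_decay                 # weight compensates for the forcing

# Example: a precursor group with lambda = 0.08 1/s forced to decay within a 0.1 s step
t, w = forced_precursor_decay(0.0, 0.1, lam=0.08, weight=1.0)
print(f"emission time = {t:.4f} s, delayed-neutron weight = {w:.5f}")
```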

  6. Experimental nuclear level densities and γ-ray strength functions in Sc and V isotopes

    International Nuclear Information System (INIS)

    Larsen, A. C.; Guttormsen, M.; Ingebretsen, F.; Messelt, S.; Rekstad, J.; Siem, S.; Syed, N. U. H.; Chankova, R.; Loennroth, T.; Schiller, A.; Voinov, A.

    2008-01-01

    The nuclear physics group at the Oslo Cyclotron Laboratory has developed a method to extract nuclear level density and γ-ray strength function from first-generation γ-ray spectra. This method is applied to the nuclei 44,45 Sc and 50,51 V in this work. The experimental level densities of 44,45 Sc are compared to calculated level densities using a microscopic model based on BCS quasiparticles within the Nilsson level scheme. The γ-ray strength functions are also compared to theoretical expectations, showing an unexpected enhancement of the γ-ray strength for low γ energies (Eγ ≤ 3 MeV) in all the isotopes studied here. The physical origin of this enhancement is not yet understood.

  7. Monte Carlo Techniques for Nuclear Systems - Theory Lectures

    International Nuclear Information System (INIS)

    Brown, Forrest B.; Univ. of New Mexico, Albuquerque, NM

    2016-01-01

    These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations

  8. Monte Carlo Techniques for Nuclear Systems - Theory Lectures

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Monte Carlo Methods, Codes, and Applications Group; Univ. of New Mexico, Albuquerque, NM (United States). Nuclear Engineering Dept.

    2016-11-29

    These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations

  9. A Monte Carlo Simulation Framework for Testing Cosmological Models

    Directory of Open Access Journals (Sweden)

    Heymann Y.

    2014-10-01

    We tested alternative cosmologies using Monte Carlo simulations based on the sampling method of the zCosmos galactic survey. The survey encompasses a collection of observable galaxies with respective redshifts that have been obtained for a given spectroscopic area of the sky. Using a cosmological model, we can convert the redshifts into light-travel times and, by slicing the survey into small redshift buckets, compute a curve of galactic density over time. Because foreground galaxies obstruct the images of more distant galaxies, we simulated the theoretical galactic density curve using an average galactic radius. By comparing the galactic density curves of the simulations with that of the survey, we could assess the cosmologies. We applied the test to the expanding-universe cosmology of de Sitter and to a dichotomous cosmology.

  10. Nuclear level density effects on the evaluated cross-sections of nickel isotopes

    International Nuclear Information System (INIS)

    Garg, S.B.

    1995-01-01

    A detailed investigation has been made to estimate the effect of various level density options on the computed neutron induced reaction cross-sections of Ni-58 and Ni-60 covering the energy range 5-25 MeV in the framework of the multistep Hauser-Feshbach statistical model scheme, which accounts for the pre-equilibrium decay according to the Kalbach exciton model and gamma-ray competition according to the giant dipole radiation model of Brink and Axel. The various level density options considered in this paper are based on the Original Gilbert-Cameron, Improved Gilbert-Cameron, Back-Shifted Fermi gas and the Ignatyuk-Smirenkin-Tishin approaches. The effect of these different level density prescriptions is brought out with special reference to (n,p), (n,2n), (n,α) and total production cross-sections for neutrons, hydrogen, helium and gamma-rays, which are of technological importance for fission and fusion based reactor systems. (author). 18 refs, 2 figs

  11. Evaluation of cobalt-60 energy deposit in mouse and monkey using Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Woo, Sang Keun; Kim, Wook; Park, Yong Sung; Kang, Joo Hyun; Lee, Yong Jin [Korea Institute of Radiological and Medical Sciences, KIRAMS, Seoul (Korea, Republic of); Cho, Doo Wan; Lee, Hong Soo; Han, Su Cheol [Jeonbuk Department of Inhalation Research, Korea Institute of toxicology, KRICT, Jeongeup (Korea, Republic of)

    2016-12-15

    These absorbed doses can be calculated using the Monte Carlo transport code MCNP (Monte Carlo N-particle transport code). The internal radiotherapy absorbed dose was calculated using conventional software, such as OLINDA/EXM, or Monte Carlo simulation. However, OLINDA/EXM does not calculate individual absorbed doses or doses to non-standard organs, such as tumors, while Monte Carlo simulation can calculate non-standard organ and organ-specific absorbed doses using individual CT images. For external radiotherapy, the absorbed dose can be calculated from the specific absorbed energy in specific organs using Monte Carlo simulation. The specific absorbed energy in each organ differs between species, and even within the same species, since organ sizes, positions, and densities differ. The aim of this study was to individually evaluate the cobalt-60 energy deposit in mouse and monkey using Monte Carlo simulation. Compared with the monkey, the absorbed energy in the mouse heart was 54.6-fold higher; likewise, the lung was 88.4-fold, the liver 16.0-fold, and the urinary bladder 29.4-fold higher than in the monkey. This means that the separation between organs and the organ mass affect the absorbed energy. These results may help to calculate the absorbed dose and produce more accurate plans for external beam radiotherapy and internal radiotherapy.

  12. Evaluation of cobalt-60 energy deposit in mouse and monkey using Monte Carlo simulation

    International Nuclear Information System (INIS)

    Woo, Sang Keun; Kim, Wook; Park, Yong Sung; Kang, Joo Hyun; Lee, Yong Jin; Cho, Doo Wan; Lee, Hong Soo; Han, Su Cheol

    2016-01-01

    These absorbed doses can be calculated using the Monte Carlo transport code MCNP (Monte Carlo N-particle transport code). The internal radiotherapy absorbed dose was calculated using conventional software, such as OLINDA/EXM, or Monte Carlo simulation. However, OLINDA/EXM does not calculate individual absorbed doses or doses to non-standard organs, such as tumors, while Monte Carlo simulation can calculate non-standard organ and organ-specific absorbed doses using individual CT images. For external radiotherapy, the absorbed dose can be calculated from the specific absorbed energy in specific organs using Monte Carlo simulation. The specific absorbed energy in each organ differs between species, and even within the same species, since organ sizes, positions, and densities differ. The aim of this study was to individually evaluate the cobalt-60 energy deposit in mouse and monkey using Monte Carlo simulation. Compared with the monkey, the absorbed energy in the mouse heart was 54.6-fold higher; likewise, the lung was 88.4-fold, the liver 16.0-fold, and the urinary bladder 29.4-fold higher than in the monkey. This means that the separation between organs and the organ mass affect the absorbed energy. These results may help to calculate the absorbed dose and produce more accurate plans for external beam radiotherapy and internal radiotherapy.

  13. Competing Quantum Hall Phases in the Second Landau Level in Low Density Limit

    Energy Technology Data Exchange (ETDEWEB)

    Pan, Wei [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Serafin, A. [Univ. of Florida, Gainesville, FL (United States). National High Magnetic Field Lab. (MagLab); Xia, J. S. [Univ. of Florida, Gainesville, FL (United States). National High Magnetic Field Lab. (MagLab); Liang, Y. [Univ. of Florida, Gainesville, FL (United States). National High Magnetic Field Lab. (MagLab); Sullivan, N. S. [Univ. of Florida, Gainesville, FL (United States). National High Magnetic Field Lab. (MagLab); Baldwin, K. W. [Princeton Univ., NJ (United States); West, K. W. [Princeton Univ., NJ (United States); Pfeiffer, L. N. [Princeton Univ., NJ (United States); Tsui, D. C. [Princeton Univ., NJ (United States)

    2015-01-01

    To date, studies of the fractional quantum Hall effect (FQHE) states in the second Landau level have mainly been carried out in the high electron density regime, where the electron mobility is the highest. Only recently, with the advance of high quality low density MBE growth, experiments have been pushed to the low density regime [1], where the electron-electron interactions are strong and the Landau level mixing parameter, defined by κ = (e^2/ε l_B)/(ℏω_c), is large. Here, l_B = (ℏ/eB)^1/2 is the magnetic length and ω_c = eB/m is the cyclotron frequency. All other parameters have their normal meanings. It has been shown that a large Landau level mixing effect strongly affects the electron physics in the second Landau level [2].

  14. Performance of quantum Monte Carlo for calculating molecular bond lengths

    Energy Technology Data Exchange (ETDEWEB)

    Cleland, Deidre M., E-mail: deidre.cleland@csiro.au; Per, Manolo C., E-mail: manolo.per@csiro.au [CSIRO Virtual Nanoscience Laboratory, 343 Royal Parade, Parkville, Victoria 3052 (Australia)

    2016-03-28

    This work investigates the accuracy of real-space quantum Monte Carlo (QMC) methods for calculating molecular geometries. We present the equilibrium bond lengths of a test set of 30 diatomic molecules calculated using variational Monte Carlo (VMC) and diffusion Monte Carlo (DMC) methods. The effect of different trial wavefunctions is investigated using single determinants constructed from Hartree-Fock (HF) and Density Functional Theory (DFT) orbitals with LDA, PBE, and B3LYP functionals, as well as small multi-configurational self-consistent field (MCSCF) multi-determinant expansions. When compared to experimental geometries, all DMC methods exhibit smaller mean-absolute deviations (MADs) than those given by HF, DFT, and MCSCF. The most accurate MAD of 3 ± 2 × 10^-3 Å is achieved using DMC with a small multi-determinant expansion. However, the more computationally efficient multi-determinant VMC method has a similar MAD of only 4.0 ± 0.9 × 10^-3 Å, suggesting that QMC forces calculated from the relatively simple VMC algorithm may often be sufficient for accurate molecular geometries.

  15. Charged-particle thermonuclear reaction rates: I. Monte Carlo method and statistical distributions

    International Nuclear Information System (INIS)

    Longland, R.; Iliadis, C.; Champagne, A.E.; Newton, J.R.; Ugalde, C.; Coc, A.; Fitzgerald, R.

    2010-01-01

    A method based on Monte Carlo techniques is presented for evaluating thermonuclear reaction rates. We begin by reviewing commonly applied procedures and point out that reaction rates that have been reported up to now in the literature have no rigorous statistical meaning. Subsequently, we associate each nuclear physics quantity entering in the calculation of reaction rates with a specific probability density function, including Gaussian, lognormal and chi-squared distributions. Based on these probability density functions the total reaction rate is randomly sampled many times until the required statistical precision is achieved. This procedure results in a median (Monte Carlo) rate which agrees under certain conditions with the commonly reported recommended 'classical' rate. In addition, we present at each temperature a low rate and a high rate, corresponding to the 0.16 and 0.84 quantiles of the cumulative reaction rate distribution. These quantities are in general different from the statistically meaningless 'minimum' (or 'lower limit') and 'maximum' (or 'upper limit') reaction rates which are commonly reported. Furthermore, we approximate the output reaction rate probability density function by a lognormal distribution and present, at each temperature, the lognormal parameters μ and σ. The values of these quantities will be crucial for future Monte Carlo nucleosynthesis studies. Our new reaction rates, appropriate for bare nuclei in the laboratory, are tabulated in the second paper of this issue (Paper II). The nuclear physics input used to derive our reaction rates is presented in the third paper of this issue (Paper III). In the fourth paper of this issue (Paper IV) we compare our new reaction rates to previous results.
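
    A stripped-down version of the sampling scheme described here might look as follows, assuming the rate is a product of a lognormally distributed resonance strength and a Gaussian correction factor (purely illustrative inputs, not the evaluated nuclear data of the paper); it reports the median rate, the 0.16 and 0.84 quantiles, and the lognormal parameters mu and sigma of the resulting rate distribution.

```python
import numpy as np

rng = np.random.default_rng(2024)
N = 100_000

# Illustrative inputs: a resonance strength with a factor-type (lognormal) uncertainty
# and a dimensionless correction with a symmetric (Gaussian) uncertainty.
strength = rng.lognormal(mean=np.log(3.0e-6), sigma=0.3, size=N)
correction = rng.normal(loc=1.0, scale=0.05, size=N)
correction = np.clip(correction, 1e-6, None)          # keep the sampled rate positive

rate = 1.5e5 * strength * correction                  # toy "reaction rate", arbitrary units

low, median, high = np.quantile(rate, [0.16, 0.50, 0.84])
mu, sigma = np.mean(np.log(rate)), np.std(np.log(rate))   # lognormal approximation

print(f"low rate    (0.16): {low:.3e}")
print(f"median rate (0.50): {median:.3e}")
print(f"high rate   (0.84): {high:.3e}")
print(f"lognormal mu = {mu:.3f}, sigma = {sigma:.3f}")
```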

  16. Estimating the Partition Function Zeros by Using the Wang-Landau Monte Carlo Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Seung-Yeon [Korea National University of Transportation, Chungju (Korea, Republic of)

    2017-03-15

    The concept of the partition function zeros is one of the most efficient methods for investigating the phase transitions and the critical phenomena in various physical systems. Estimating the partition function zeros requires information on the density of states Ω(E) as a function of the energy E. Currently, the Wang-Landau Monte Carlo algorithm is one of the best methods for calculating Ω(E). The partition function zeros in the complex temperature plane of the Ising model on an L × L square lattice (L = 10 ∼ 80) with a periodic boundary condition have been estimated by using the Wang-Landau Monte Carlo algorithm. The efficiency of the Wang-Landau Monte Carlo algorithm and the accuracies of the partition function zeros have been evaluated for three different, 5%, 10%, and 20%, flatness criteria for the histogram H(E).
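
    A compact Wang-Landau sketch for the density of states Ω(E) of a small L × L Ising lattice with periodic boundaries is given below; the lattice size, flatness threshold and modification-factor schedule are crude illustrative choices, far from the production settings behind the quoted results.

```python
import numpy as np

rng = np.random.default_rng(5)
L = 4                                      # small lattice so the sketch runs in seconds
N = L * L

def total_energy(s):
    return -int(np.sum(s * (np.roll(s, 1, axis=0) + np.roll(s, 1, axis=1))))

def wang_landau(flatness=0.8, log_f_final=1e-5):
    s = rng.choice([-1, 1], size=(L, L))
    E = total_energy(s)
    log_g = np.zeros(N + 1)                # bins for E = -2N, -2N+4, ..., +2N
    hist = np.zeros(N + 1)
    visited = np.zeros(N + 1, dtype=bool)
    log_f = 1.0                            # ln f, halved each time the histogram is flat
    idx = (E + 2 * N) // 4
    while log_f > log_f_final:
        for _ in range(20_000):            # spin-flip attempts between flatness checks
            i, j = rng.integers(L), rng.integers(L)
            dE = 2 * s[i, j] * (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                                + s[i, (j + 1) % L] + s[i, (j - 1) % L])
            new_idx = (E + dE + 2 * N) // 4
            if rng.random() < np.exp(log_g[idx] - log_g[new_idx]):
                s[i, j] *= -1
                E += dE
                idx = new_idx
            log_g[idx] += log_f
            hist[idx] += 1
            visited[idx] = True
        h = hist[visited]
        if h.min() > flatness * h.mean():  # histogram flat enough: refine the factor
            hist[:] = 0
            log_f *= 0.5
    # normalise so that the total number of states equals 2^N
    lg = log_g[visited]
    log_total = lg.max() + np.log(np.sum(np.exp(lg - lg.max())))
    log_g[visited] += N * np.log(2.0) - log_total
    return log_g, visited

log_g, visited = wang_landau()
print("ln Omega at the ground state:", log_g[0], "(exact: ln 2 =", np.log(2.0), ")")
```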

  17. Dynamic bounds coupled with Monte Carlo simulations

    Energy Technology Data Exchange (ETDEWEB)

    Rajabalinejad, M., E-mail: M.Rajabalinejad@tudelft.n [Faculty of Civil Engineering, Delft University of Technology, Delft (Netherlands); Meester, L.E. [Delft Institute of Applied Mathematics, Delft University of Technology, Delft (Netherlands); Gelder, P.H.A.J.M. van; Vrijling, J.K. [Faculty of Civil Engineering, Delft University of Technology, Delft (Netherlands)

    2011-02-15

    For the reliability analysis of engineering structures a variety of methods is known, of which Monte Carlo (MC) simulation is widely considered to be among the most robust and most generally applicable. To reduce the simulation cost of the MC method, variance reduction methods are applied. This paper describes a method to reduce the simulation cost even further, while retaining the accuracy of Monte Carlo, by taking into account widely present monotonicity. For models exhibiting monotonic (decreasing or increasing) behavior, dynamic bounds (DB) are defined, which in a coupled Monte Carlo simulation are updated dynamically, resulting in a failure probability estimate as well as strict (non-probabilistic) upper and lower bounds. Accurate results are obtained at a much lower cost than that of an equivalent ordinary Monte Carlo simulation. In a two-dimensional and a four-dimensional numerical example, the cost reduction factors are 130 and 9, respectively, where the relative error is smaller than 5%. At higher accuracy levels, this factor increases, though this effect is expected to be smaller with increasing dimension. To show the application of the DB method to real world problems, it is applied to a complex finite element model of a flood wall in New Orleans.

  18. Supersonic flow with shock waves. Monte-Carlo calculations for low density plasma. I; Flujo supersonico de un plasma con ondas de choque, un metodo de montecarlo para plasmas de baja densidad, I.

    Energy Technology Data Exchange (ETDEWEB)

    Almenara, E; Hidalgo, M; Saviron, J M

    1980-07-01

    This Report gives preliminary information about a Monte Carlo procedure to simulate the supersonic flow of a low density plasma past a body in the transition regime. A computer program has been written for a UNIVAC 1108 machine to account for a plasma composed of neutral molecules and positive and negative ions. Different and rather general body geometries can be analyzed. Special attention is paid to the growth of detached shock waves in front of the body. (Author) 30 refs.

  19. Referent 3D tumor model at cellular level in radionuclide therapy

    International Nuclear Information System (INIS)

    Spaic, R.; Ilic, R.D.; Petrovic, B.J.

    2002-01-01

    Aim: Conventional internal dosimetry has many limitations because of tumor dose nonuniformity. The best approach for calculating the absorbed dose at the cellular level for different tumors in radionuclide therapy is the Monte Carlo method. The purpose of this study is to introduce a referent 3D tumor model at the cellular level for Monte Carlo simulation studies in radionuclide therapy. Material and Methods: The referent 3D tumor model at the cellular level was defined for the time when the tumor is detectable and therapy can start. In accordance with the tumor growth rate at that moment, it was a sphere with a radius of 10 000 μm. Within that tumor there are cells or clusters of cells, represented as randomly distributed spheres. The distribution of cells/clusters of cells can be calculated from histology data, but here it was assumed to be normal with a mean value and standard deviation of 100±50 μm. The second parameter selected to define the referent tumor is the volume density of cells (30%). In this referent tumor there are no necroses. Stroma is defined as the space between the spheres, with the same material composition as in the spheres. Results: The referent tumor defined in this way has about 2.2 x 10^5 randomly distributed cells or clusters of cells. Using this referent 3D tumor model, and for a given concentration of radionuclides (1:100) and energy of beta emitters (1000 keV) homogeneously distributed in the labeled cells, the absorbed dose for all cells was calculated. Simulations were done using the FOTELP Monte Carlo code, which was modified for this purpose. Results for the absorbed dose in cells are given as numerical values (1D distribution) and as images (2D or 3D distributions). Conclusion: The geometrical module for Monte Carlo simulation studies can be standardized by introducing a referent 3D tumor model at the cellular level. This referent 3D tumor model gives the most realistic presentation of different tumors at the moment of their detectability. Referent 3D tumor model at

  20. Thin film growth behaviors on strained fcc(111) surface by kinetic Monte Carlo

    International Nuclear Information System (INIS)

    Doi, Y; Matsunaka, D; Shibutani, Y

    2009-01-01

    We study Ag islands grown on strained Ag(111) surfaces using kinetic Monte Carlo (KMC) simulations. We employed KMC parameters of activation energy and attempt frequency estimated by the nudged elastic band (NEB) method and vibration analyses. We investigate the influences of surface strain and substrate temperature on film growth. As the biaxial surface strain increases, the island density increases. As temperature increases, the shape of the island changes from dendritic to hexagonal and the island density increases.

  1. Speed-up of ab initio hybrid Monte Carlo and ab initio path integral hybrid Monte Carlo simulations by using an auxiliary potential energy surface

    International Nuclear Information System (INIS)

    Nakayama, Akira; Taketsugu, Tetsuya; Shiga, Motoyuki

    2009-01-01

    The efficiency of the ab initio hybrid Monte Carlo and ab initio path integral hybrid Monte Carlo methods is enhanced by employing an auxiliary potential energy surface that is used to update the system configuration via a molecular dynamics scheme. As a simple illustration of this method, a dual-level approach is introduced where potential energy gradients are evaluated by computationally less expensive ab initio electronic structure methods. (author)

  2. Exploring Monte Carlo methods

    CERN Document Server

    Dunn, William L

    2012-01-01

    Exploring Monte Carlo Methods is a basic text that describes the numerical methods that have come to be known as "Monte Carlo." The book treats the subject generically through the first eight chapters and, thus, should be of use to anyone who wants to learn to use Monte Carlo. The next two chapters focus on applications in nuclear engineering, which are illustrative of uses in other fields. Five appendices are included, which provide useful information on probability distributions, general-purpose Monte Carlo codes for radiation transport, and other matters. The famous "Buffon's needle proble

  3. Specialized Monte Carlo codes versus general-purpose Monte Carlo codes

    International Nuclear Information System (INIS)

    Moskvin, Vadim; DesRosiers, Colleen; Papiez, Lech; Lu, Xiaoyi

    2002-01-01

    The possibilities of Monte Carlo modeling for dose calculations and treatment optimization are quite limited in radiation oncology applications. The main reason is that the Monte Carlo technique for dose calculations is time consuming, while treatment planning may require hundreds of possible cases of dose simulations to be evaluated for dose optimization. The second reason is that general-purpose codes, widely used in practice, require an experienced user to customize them for calculations. This paper discusses the concept of Monte Carlo code design that can avoid the main problems that are preventing widespread use of this simulation technique in medical physics. (authors)

  4. Modeling the cathode region of noble gas mixture discharges using Monte Carlo simulation

    International Nuclear Information System (INIS)

    Donko, Z.; Janossy, M.

    1992-10-01

    A model of the cathode dark space of DC glow discharges was developed in order to study the effects caused by mixing small amounts (≤2%) of other noble gases (Ne, Ar, Kr and Xe) into He. The motion of charged particles was described by Monte Carlo simulation. Several discharge parameters (electron and ion energy distribution functions, electron and ion current densities, reduced ionization coefficients, and current density-voltage characteristics) were obtained. Small amounts of admixtures were found to significantly modify the discharge parameters. Current density-voltage characteristics obtained from the model showed good agreement with experimental data. (author) 40 refs.; 14 figs

  5. Monte Carlo principles and applications

    Energy Technology Data Exchange (ETDEWEB)

    Raeside, D E [Oklahoma Univ., Oklahoma City (USA). Health Sciences Center

    1976-03-01

    The principles underlying the use of Monte Carlo methods are explained, for readers who may not be familiar with the approach. The generation of random numbers is discussed, and the connection between Monte Carlo methods and random numbers is indicated. Outlines of two well established Monte Carlo sampling techniques are given, together with examples illustrating their use. The general techniques for improving the efficiency of Monte Carlo calculations are considered. The literature relevant to the applications of Monte Carlo calculations in medical physics is reviewed.

  6. Bayesian Monte Carlo method

    International Nuclear Information System (INIS)

    Rajabalinejad, M.

    2010-01-01

    To reduce the cost of Monte Carlo (MC) simulations for time-consuming processes, Bayesian Monte Carlo (BMC) is introduced in this paper. The BMC method reduces the number of realizations in MC according to the desired accuracy level. BMC also provides a possibility of considering more priors. In other words, different priors can be integrated into one model by using BMC to further reduce the cost of simulations. This study suggests speeding up the simulation process by considering the logical dependence of neighboring points as prior information. This information is used in the BMC method to produce a predictive tool through the simulation process. The general methodology and algorithm of the BMC method are presented in this paper. The BMC method is applied to a simplified breakwater model as well as the finite element model of the 17th Street Canal in New Orleans, and the results are compared with the MC and Dynamic Bounds methods.

  7. Multilevel Monte Carlo Approaches for Numerical Homogenization

    KAUST Repository

    Efendiev, Yalchin R.

    2015-10-01

    In this article, we study the application of multilevel Monte Carlo (MLMC) approaches to numerical random homogenization. Our objective is to compute the expectation of some functionals of the homogenized coefficients, or of the homogenized solutions. This is accomplished within MLMC by considering different sizes of representative volumes (RVEs). Many inexpensive computations with the smallest RVE size are combined with fewer expensive computations performed on larger RVEs. Likewise, when it comes to homogenized solutions, different levels of coarse-grid meshes are used to solve the homogenized equation. We show that, by carefully selecting the number of realizations at each level, we can achieve a speed-up in the computations in comparison to a standard Monte Carlo method. Numerical results are presented for both one-dimensional and two-dimensional test-cases that illustrate the efficiency of the approach.
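
    The telescoping idea used here carries over to much simpler settings; the toy below estimates E[f(X_T)] for a geometric Brownian motion with Euler time stepping, where level l uses 2^l steps and the level corrections are averaged with level-dependent sample counts. The payoff, model parameters and sample allocation are arbitrary illustrative choices and have nothing to do with the homogenized coefficients of the paper.

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy problem: X_T of a geometric Brownian motion, payoff f(x) = max(x - 1, 0).
MU, SIG, T, X0 = 0.05, 0.2, 1.0, 1.0

def euler_pair(n_steps, n_samples):
    """Payoffs on a fine grid (n_steps) and a coarse grid (n_steps // 2) driven by the
    same Brownian increments, as required by the MLMC level coupling."""
    dt = T / n_steps
    dw = np.sqrt(dt) * rng.normal(size=(n_samples, n_steps))
    x_f = np.full(n_samples, X0)
    x_c = np.full(n_samples, X0)
    for k in range(n_steps):
        x_f += MU * x_f * dt + SIG * x_f * dw[:, k]
        if n_steps > 1 and k % 2 == 1:              # coarse step uses the summed increments
            x_c += MU * x_c * 2 * dt + SIG * x_c * (dw[:, k - 1] + dw[:, k])
    payoff = lambda x: np.maximum(x - 1.0, 0.0)
    return payoff(x_f), payoff(x_c)

levels = 6
estimate = 0.0
for l in range(levels + 1):
    n_l = max(int(20_000 * 2.0 ** (-l)), 100)       # crude geometric sample allocation
    fine, coarse = euler_pair(2 ** l, n_l)
    if l == 0:
        estimate += fine.mean()                      # level 0: plain MC on the coarsest grid
    else:
        estimate += (fine - coarse).mean()           # correction term E[P_l - P_{l-1}]
print("MLMC estimate of E[f(X_T)]:", estimate)
```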

  8. Influence of primary-particle density in the morphology of agglomerates.

    Science.gov (United States)

    Camejo, M D; Espeso, D R; Bonilla, L L

    2014-07-01

    Agglomeration processes occur in many different realms of science, such as colloid and aerosol formation or formation of bacterial colonies. We study the influence of primary-particle density in agglomerate structures using diffusion-controlled Monte Carlo simulations with realistic space scales through different regimes (diffusion-limited aggregation and diffusion-limited colloid aggregation). The equivalence of Monte Carlo time steps to real time scales is given by Hirsch's hydrodynamical theory of Brownian motion. Agglomerate behavior at different time stages of the simulations suggests that three indices (the fractal exponent, the coordination number, and the eccentricity index) characterize agglomerate geometry. Using these indices, we have found that the initial density of primary particles greatly influences the final structure of the agglomerate, as observed in recent experimental works.
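
    As a point of reference for the diffusion-limited regime mentioned here, the classic on-lattice DLA construction (a seed particle plus random walkers launched from a circle that stick on first contact) can be written in a few lines; the lattice size, particle number and launch radius below are arbitrary, and the sketch ignores the realistic space and time scales that the paper emphasises.

```python
import numpy as np

rng = np.random.default_rng(9)

SIZE = 201                       # lattice edge length (odd, so there is a central site)
N_PARTICLES = 500
grid = np.zeros((SIZE, SIZE), dtype=bool)
c = SIZE // 2
grid[c, c] = True                # seed of the aggregate

moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
r_max = 2                        # current cluster radius, used to place new walkers

for _ in range(N_PARTICLES):
    # launch a walker on a circle just outside the current cluster
    theta = rng.uniform(0.0, 2.0 * np.pi)
    x = c + int((r_max + 5) * np.cos(theta))
    y = c + int((r_max + 5) * np.sin(theta))
    while True:
        dx, dy = moves[rng.integers(4)]
        x, y = x + dx, y + dy
        r = np.hypot(x - c, y - c)
        if r > r_max + 20 or not (0 < x < SIZE - 1 and 0 < y < SIZE - 1):
            break                # walker wandered too far: discard it
        # stick if any of the four neighbours already belongs to the aggregate
        if grid[x + 1, y] or grid[x - 1, y] or grid[x, y + 1] or grid[x, y - 1]:
            grid[x, y] = True
            r_max = max(r_max, r)
            break

n = grid.sum()
print(f"aggregate of {n} particles, radius ~ {r_max:.1f} sites")
print(f"rough fractal exponent log N / log R = {np.log(n) / np.log(r_max):.2f}")  # DLA ~ 1.7
```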

  9. Global Monte Carlo Simulation with High Order Polynomial Expansions

    International Nuclear Information System (INIS)

    William R. Martin; James Paul Holloway; Kaushik Banerjee; Jesse Cheatham; Jeremy Conlin

    2007-01-01

    The functional expansion technique (FET) was recently developed for Monte Carlo simulation. The basic idea of the FET is to expand a Monte Carlo tally in terms of a high order expansion, the coefficients of which can be estimated via the usual random walk process in a conventional Monte Carlo code. If the expansion basis is chosen carefully, the lowest order coefficient is simply the conventional histogram tally, corresponding to a flat mode. This research project studied the applicability of using the FET to estimate the fission source, from which fission sites can be sampled for the next generation. The idea is that individual fission sites contribute to expansion modes that may span the geometry being considered, possibly increasing the communication across a loosely coupled system and thereby improving convergence over the conventional fission bank approach used in most production Monte Carlo codes. The project examined a number of basis functions, including global Legendre polynomials as well as 'local' piecewise polynomials such as finite element hat functions and higher order versions. The global FET showed an improvement in convergence over the conventional fission bank approach. The local FET methods showed some advantages versus global polynomials in handling geometries with discontinuous material properties. The conventional finite element hat functions had the disadvantage that the expansion coefficients could not be estimated directly but had to be obtained by solving a linear system whose matrix elements were estimated. An alternative fission matrix-based response matrix algorithm was formulated. Studies were made of two alternative applications of the FET, one based on the kernel density estimator and one based on Arnoldi's method of minimized iterations. Preliminary results for both methods indicate improvements in fission source convergence. These developments indicate that the FET has promise for speeding up Monte Carlo fission source convergence
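
    A minimal illustration of the functional-expansion-tally idea, independent of any production code: during the random walk one scores the Legendre polynomials P_n(x) at each event position, and the resulting coefficients reconstruct a smooth estimate of the underlying density. In the sketch below the 'random walk' is replaced by direct sampling from a known density so the reconstruction can be checked; it is not the fission-source iteration studied in the report.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(13)

ORDER = 6                        # truncation order of the expansion
N_HIST = 200_000

# Stand-in for event positions from a random walk on x in [-1, 1]:
# sample from a known density f(x) ∝ 1 + 0.5*x + 0.3*x**2 by rejection (envelope 1.8).
def sample_positions(n):
    out = np.empty(0)
    while out.size < n:
        x = rng.uniform(-1.0, 1.0, size=n)
        keep = rng.random(n) * 1.8 < (1.0 + 0.5 * x + 0.3 * x**2)
        out = np.concatenate([out, x[keep]])
    return out[:n]

x = sample_positions(N_HIST)

# FET scoring: a_n = (2n+1)/2 * E[P_n(x)], estimated by averaging P_n over the events
coeffs = np.empty(ORDER + 1)
for n in range(ORDER + 1):
    basis = np.zeros(n + 1); basis[n] = 1.0
    coeffs[n] = (2 * n + 1) / 2.0 * legendre.legval(x, basis).mean()

# Reconstruct the density and compare with the exact (normalised) one at a few points
xs = np.linspace(-1.0, 1.0, 5)
recon = legendre.legval(xs, coeffs)
exact = (1.0 + 0.5 * xs + 0.3 * xs**2) / 2.2     # the integral of f over [-1, 1] is 2.2
for xi, est, ex in zip(xs, recon, exact):
    print(f"x = {xi:+.2f}   FET estimate = {est:.4f}   exact = {ex:.4f}")
```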

  10. 11th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing

    CERN Document Server

    Nuyens, Dirk

    2016-01-01

    This book presents the refereed proceedings of the Eleventh International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing that was held at the University of Leuven (Belgium) in April 2014. These biennial conferences are major events for Monte Carlo and quasi-Monte Carlo researchers. The proceedings include articles based on invited lectures as well as carefully selected contributed papers on all theoretical aspects and applications of Monte Carlo and quasi-Monte Carlo methods. Offering information on the latest developments in these very active areas, this book is an excellent reference resource for theoreticians and practitioners interested in solving high-dimensional computational problems, arising, in particular, in finance, statistics and computer graphics.

  11. MCT: a Monte Carlo code for time-dependent neutron thermalization problems

    International Nuclear Information System (INIS)

    Cupini, E.; Simonini, R.

    1974-01-01

    In the Monte Carlo simulation of pulse source experiments, the neutron energy spectrum, spatial distribution and total density may be required for a long time after the pulse. If the assemblies are very small, as often occurs in the cases of interest, sophisticated Monte Carlo techniques must be applied which force neutrons to remain in the system during the time interval investigated. In the MCT code a splitting technique has been applied to neutrons exceeding assigned target times, and we have found that this technique compares very favorably with more usual ones, such as the expected leakage probability, giving large gains in computational time and variance. As an example, satisfactory asymptotic thermal spectra with a neutron attenuation of 10^-5 were quickly obtained. (U.S.)

  12. Chain segmentation for the Monte Carlo solution of particle transport problems

    International Nuclear Information System (INIS)

    Ragheb, M.M.H.

    1984-01-01

    A Monte Carlo approach is proposed where the random walk chains generated in particle transport simulations are segmented. Forward and adjoint-mode estimators are then used in conjunction with the first-event source density on the segmented chains to obtain multiple estimates of the individual terms of the Neumann series solution at each collision point. The solution is then constructed by summation of the series. The approach is compared to exact analytical results and to Monte Carlo nonabsorption weighting method results for two representative slowing down and deep penetration problems. Application of the proposed approach leads to unbiased estimates for limited numbers of particle simulations and is useful in suppressing an effective bias problem observed in some cases of deep penetration particle transport problems.

  13. Partial level density of the n-quasiparticle excitations in the nuclei of the 40≤A≤200 region

    International Nuclear Information System (INIS)

    Sukhovoj, A.M.; Khitrov, V.A.

    2005-01-01

    Level density and radiative strength functions are obtained from the analysis of two-step cascade intensities following thermal neutron capture. The data on level density are approximated by the sum of the partial level densities corresponding to n-quasiparticle excitations. The most probable values of the collective enhancement factor of the level density are found, together with the thresholds for breaking the next Cooper nucleon pair. These data allow one to calculate the level density of practically any nucleus in a given spin window in the framework of model concepts, taking into account all known types of nuclear excitation. The discrepancy between the approximation results and theoretical expectations points to the need for substantial development of the level density models. It also indicates the possibility of obtaining essentially new information on the nucleon correlation functions of the excited nucleus from experiment

  14. Ab initio molecular dynamics simulation of liquid water by quantum Monte Carlo

    International Nuclear Information System (INIS)

    Zen, Andrea; Luo, Ye; Mazzola, Guglielmo; Sorella, Sandro; Guidoni, Leonardo

    2015-01-01

    Although liquid water is ubiquitous in chemical reactions at the roots of life and of climate on Earth, the prediction of its properties by high-level ab initio molecular dynamics simulations still represents a formidable task for quantum chemistry. In this article, we present a room temperature simulation of liquid water based on the potential energy surface obtained by a many-body wave function through quantum Monte Carlo (QMC) methods. The simulated properties are in good agreement with recent neutron scattering and X-ray experiments, particularly concerning the position of the oxygen-oxygen peak in the radial distribution function, at variance with previous density functional theory attempts. Given the excellent performance of QMC on large scale supercomputers, this work opens new perspectives for predictive and reliable ab initio simulations of complex chemical systems

  15. Monte Carlo calculations of triton and 4He nuclei with the Reid potential

    International Nuclear Information System (INIS)

    Lomnitz-Adler, J.; Pandharipande, V.R.; Smith, R.A.

    1981-01-01

    A Monte Carlo method is developed to calculate the binding energy and density distribution of the ³H and ⁴He nuclei for a variational wave function written as a symmetrized product of correlation operators. The upper bounds obtained with the Reid potential are −6.86 ± 0.08 and −22.9 ± 0.5 MeV, respectively. The Coulomb interaction in ⁴He is ignored. The calculated density distributions have reasonable radii, but they do not show any dip at the center. (orig.)

  16. A study of the dosimetry of small field photon beams used in intensity modulated radiation therapy in inhomogeneous media: Monte Carlo simulations, and algorithm comparisons and corrections

    International Nuclear Information System (INIS)

    Jones, Andrew Osler

    2004-01-01

    There is an increasing interest in the use of inhomogeneity corrections for lung, air, and bone in radiotherapy treatment planning. Traditionally, corrections based on physical density have been used. Modern algorithms use the electron density derived from CT images. Small fields are used in both conformal radiotherapy and IMRT; however, their beam characteristics in inhomogeneous media have not been extensively studied. This work compares traditional and modern treatment planning algorithms to Monte Carlo simulations in and near low-density inhomogeneities. Field sizes ranging from 0.5 cm to 5 cm in diameter are projected onto a phantom containing inhomogeneities and depth dose curves are compared. Comparisons of the Dose Perturbation Factors (DPF) are presented as functions of density and field size. Dose Correction Factors (DCF), which scale the algorithms to the Monte Carlo data, are compared for each algorithm. Physical scaling algorithms such as Batho and Equivalent Pathlength (EPL) predict an increase in dose for small fields passing through lung tissue, where Monte Carlo simulations show a sharp dose drop. The physical model-based collapsed cone convolution (CCC) algorithm correctly predicts the dose drop, but does not accurately predict the magnitude. Because the model-based algorithms do not correctly account for the change in backscatter, the dose drop predicted by CCC occurs farther downstream compared to that predicted by the Monte Carlo simulations. Beyond the tissue inhomogeneity all of the algorithms studied predict dose distributions in close agreement with Monte Carlo simulations. Dose-volume relationships are important in understanding the effects of radiation to the lung. The dose within the lung is affected by a complex function of beam energy, lung tissue density, and field size. Dose algorithms vary in their abilities to correctly predict the dose to the lung tissue. A thorough analysis of the effects of density and field size on dose to the

  17. Monte-Carlo study of ICRF-sustained mode operation in tandem mirrors

    Energy Technology Data Exchange (ETDEWEB)

    Todd, A.M.M. (Grumman Aerospace Corp., Princeton, NJ (USA))

    1984-09-01

    A study, using a Monte-Carlo simulation code, of ICRF-sustained mode operation in tandem mirrors by way of ICRF end-cell fuelling and heating is described. Although the basic parameter space considered corresponds to the Phaedrus experiment, the central-cell density and temperatures are extended towards the reactor regime. It is found that significant end cell ion potential barriers can be generated with ICRF, but that, owing to choking of the central-cell ion source stream by the plugging potential, saturation occurs and power requirements rapidly increase, so that the potential rise is limited to about twice the central-cell ion temperature. Although performance is improved as the ion cyclotron resonance approaches the end-cell mid-plane, no significant difference is found between inboard, outboard or double resonance location. As the central-cell density and temperatures are increased, the RF power requirement is found to increase dramatically. Optimum performance for end cell fuelling results when the central-cell electron temperature is higher than the ion temperature, but the magnitude of this ratio is limited by an increase in threshold power level with electron temperature.

  18. Effects of shape differences in the level densities of three formalisms on calculated cross-sections

    International Nuclear Information System (INIS)

    Fu, C.Y.; Larson, D.C.

    1998-01-01

    Effects of shape differences in the level densities of three formalisms on calculated cross-sections and particle emission spectra are described. Reactions for incident neutrons up to 20 MeV on ⁵⁸Ni are chosen for illustration. Level density parameters for one of the formalisms are determined from the available neutron resonance data for one residual nuclide in the binary channels and from fitting the measured (n,n'), (n,p) and (n,α) cross-sections for the other two residual nuclides. Level density parameters for the other two formalisms are determined such that they yield the same values as the above one at two selected energies. This procedure forces the level densities from the three formalisms used for the binary part of the calculation to be as close as possible. The remaining differences are in their energy dependences (shapes). It is shown that these shape differences alone are enough to cause the calculated cross-sections and particle emission spectra to differ by up to 60%. (author)
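
    The abstract does not name the three formalisms, so the following sketch uses two common textbook forms (a back-shifted Fermi gas and a constant-temperature level density) merely to illustrate the matching procedure: the parameters of one form are chosen so that it agrees with the other at two energies, and the residual shape difference is then examined. All parameter values below are illustrative assumptions, not values from the study.

    ```python
    import numpy as np

    def fermi_gas(E, a, delta):
        """Back-shifted Fermi-gas level density (standard textbook form)."""
        U = np.maximum(E - delta, 0.1)
        return np.exp(2.0 * np.sqrt(a * U)) / (12.0 * np.sqrt(2.0) * a**0.25 * U**1.25)

    def constant_temperature(E, T, E0):
        """Constant-temperature level density."""
        return np.exp((E - E0) / T) / T

    # Match the constant-temperature form to the Fermi gas at two energies,
    # then compare the remaining shape difference at other energies.
    a, delta = 6.5, 1.2          # illustrative parameters for a medium-mass nucleus
    E1, E2 = 5.0, 10.0           # matching energies (MeV)
    r1, r2 = fermi_gas(E1, a, delta), fermi_gas(E2, a, delta)
    T = (E2 - E1) / np.log(r2 / r1)
    E0 = E1 - T * np.log(r1 * T)
    for E in (3.0, 7.0, 15.0, 20.0):
        ratio = constant_temperature(E, T, E0) / fermi_gas(E, a, delta)
        print(f"E = {E:5.1f} MeV   CT/FG ratio = {ratio:5.2f}")
    ```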

  19. Toward a Monte Carlo program for simulating vapor-liquid phase equilibria from first principles

    Energy Technology Data Exchange (ETDEWEB)

    McGrath, M; Siepmann, J I; Kuo, I W; Mundy, C J; Vandevondele, J; Sprik, M; Hutter, J; Mohamed, F; Krack, M; Parrinello, M

    2004-10-20

    Efficient Monte Carlo algorithms are combined with the Quickstep energy routines of CP2K to develop a program that allows for Monte Carlo simulations in the canonical, isobaric-isothermal, and Gibbs ensembles using a first principles description of the physical system. Configurational-bias Monte Carlo techniques and pre-biasing using an inexpensive approximate potential are employed to increase the sampling efficiency and to reduce the frequency of expensive ab initio energy evaluations. The new Monte Carlo program has been validated through extensive comparison with molecular dynamics simulations using the programs CPMD and CP2K. Preliminary results for the vapor-liquid coexistence properties (T = 473 K) of water using the Becke-Lee-Yang-Parr exchange and correlation energy functionals, a triple-zeta valence basis set augmented with two sets of d-type or p-type polarization functions, and Goedecker-Teter-Hutter pseudopotentials are presented. The preliminary results indicate that this description of water leads to an underestimation of the saturated liquid density and heat of vaporization and, correspondingly, an overestimation of the saturated vapor pressure.

  20. Spin-dependent level density in interacting Boson-Fermion-Fermion model of the Odd-Odd Nucleus 196Au

    International Nuclear Information System (INIS)

    Kabashi, S.; Bekteshi, S.; Ahmetaj, S.; Shaqiri, Z.

    2009-01-01

    The level density of the odd-odd nucleus ¹⁹⁶Au is investigated in the interacting boson-fermion-fermion model (IBFFM), which accounts for collectivity and the complex interaction between quasiparticle and collective modes. The IBFFM spin-dependent level densities show a high-spin reduction with respect to the Bethe formula. This can be well accounted for by a modified spin-dependent level density formula. (authors)
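
    For context, the baseline against which the IBFFM result is compared is the standard spin-cutoff (Bethe) spin distribution. A minimal sketch of that distribution, with an illustrative spin-cutoff parameter, is given below; this is not the IBFFM calculation itself.

    ```python
    import numpy as np

    def bethe_spin_distribution(J, sigma):
        """Fraction of levels with spin J in the standard spin-cutoff model,
        f(J) = (2J+1)/(2 sigma^2) * exp(-(J+1/2)^2 / (2 sigma^2))."""
        return (2 * J + 1) / (2 * sigma**2) * np.exp(-(J + 0.5)**2 / (2 * sigma**2))

    sigma = 4.0                       # illustrative spin-cutoff parameter
    J = np.arange(0, 15)
    f = bethe_spin_distribution(J, sigma)
    for j, fj in zip(J, f):
        print(f"J = {j:2d}   f(J) = {fj:.4f}")
    print("sum over J ≈", f.sum())    # should be close to 1
    ```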

  1. Evaluation of the effect of hemoglobin or hematocrit level on dural sinus density using unenhanced computed tomography.

    Science.gov (United States)

    Lee, Seung Young; Cha, Sang-Hoon; Lee, Sung-Hyun; Shin, Dong-Ick

    2013-01-01

    To identify the relationship between hemoglobin (Hgb) or hematocrit (Hct) level and dural sinus density using unenhanced computed tomography (UECT). Patients who underwent UECT and had records of a complete blood count within 24 hours of UECT were included (n=122). We measured the Hounsfield unit (HU) of the dural sinus at the right sigmoid sinus, the left sigmoid sinus and 2 points of the superior sagittal sinus. Quantitative measurement of dural sinus density using the circle regions of interest (ROI) method was calculated as the average ROI value at 3 or 4 points. Simple regression analysis was used to evaluate the correlation between mean HU and Hgb or mean HU and Hct. The mean densities of the dural sinuses ranged from 24.67 to 53.67 HU (mean, 43.28 HU). There was a strong correlation between mean density and Hgb level (r=0.832) and between mean density and Hct level (r=0.840). Dural sinus density on UECT is closely related to Hgb and Hct levels. Therefore, the Hgb or Hct level can be used to determine whether the dural sinus density is within the normal range or reflects a pathological condition such as venous thrombosis.

  2. Dosimetric Properties of Plasma Density Effects on Laser-Accelerated VHEE Beams Using a Sharp Density-Transition Scheme

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Seung Hoon; Cho, Sungho; Kim, Eun Ho; Park, Jeong Hoon; Jung, Won-Gyun; Kim, Geun Beom; Kim, Kum Bae [Korea Institute of Radiological and Medical Sciences, Seoul (Korea, Republic of); Min, Byung Jun [Sungkyunkwan University School of Medicine, Seoul (Korea, Republic of); Kim, Jaehoon [Korea Electrotechnology Research Institute, Ansan (Korea, Republic of); Jeong, Hojin [Gyeongsang National University Hospital, Jinju (Korea, Republic of); Lee, Kitae [Korea Atomic Energy Research Institute, Deajeon (Korea, Republic of); Park, Sung Yong [Karmanos Cancer Institute, Michigan (United States)

    2017-01-15

    In this paper, the effects of the plasma density on laser-accelerated electron beams for radiation therapy with a sharp density transition are investigated. In the sharp density-transition scheme for electron injection, the crucial issue is finding the optimum density conditions under which electrons injected only during the first period of the laser wake wave are accelerated further. In this paper, we report particle-in-cell simulation results for the effects of both the scale length and the density transition ratio on the generation of a quasi-mono-energetic electron bunch. The effects of both the transverse parabolic channel and the plasma length on the electron-beam's quality are investigated. Also, we show the experimental results for the feasibility of a sharp density-transition structure. The dosimetric properties of these very high-energy electron beams are calculated using Monte Carlo simulations.

  3. Calculation of Stock Portfolio VaR Using Historical Data and Monte Carlo Simulation Data

    Directory of Open Access Journals (Sweden)

    WAYAN ARTHINI

    2012-09-01

    Value at Risk (VaR) is the maximum potential loss on a portfolio at a given probability over a certain time. In this research, portfolio VaR values are calculated from historical data and from Monte Carlo simulation data. The historical data are processed to obtain stock returns, variances, correlation coefficients, and the variance-covariance matrix; the Markowitz method is then used to determine the proportion of each stock in the portfolio and the portfolio risk and return. The data are then simulated by Monte Carlo simulation, using both Exact Monte Carlo Simulation and Expected Monte Carlo Simulation. The Exact Monte Carlo Simulation has the same returns and standard deviation as the historical data, while the Expected Monte Carlo Simulation yields statistics similar to the historical data. The result of this research is the portfolio VaR for time horizons T=1, T=10 and T=22 at the 95% confidence level, with VaR values obtained from both historical data and Monte Carlo simulation data using the exact and expected methods. The VaR from both Monte Carlo simulations is greater than the VaR from the historical data.
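
    A minimal sketch of the general Monte Carlo VaR procedure described above, assuming multivariate-normal returns fitted to historical data; the weights, the two-stock data and the seed are hypothetical, and the exact/expected simulation variants of the paper are not reproduced here.

    ```python
    import numpy as np

    def monte_carlo_var(returns, weights, horizon=1, alpha=0.95,
                        n_sims=100_000, seed=0):
        """Monte Carlo VaR for a stock portfolio.

        returns : (n_days, n_assets) array of historical daily returns
        weights : portfolio weights summing to one
        horizon : holding period in days (T = 1, 10, 22 in the study)
        """
        rng = np.random.default_rng(seed)
        mu = returns.mean(axis=0)
        cov = np.cov(returns, rowvar=False)
        # Simulate daily returns from the fitted multivariate normal and
        # compound them over the holding period.
        sims = rng.multivariate_normal(mu, cov, size=(n_sims, horizon))
        portfolio_daily = sims @ weights
        portfolio_horizon = np.prod(1.0 + portfolio_daily, axis=1) - 1.0
        # VaR is the loss at the (1 - alpha) quantile of the P&L distribution.
        return -np.quantile(portfolio_horizon, 1.0 - alpha)

    # Hypothetical two-stock example
    rng = np.random.default_rng(1)
    hist = rng.multivariate_normal([0.0005, 0.0003],
                                   [[1e-4, 4e-5], [4e-5, 2e-4]], size=500)
    w = np.array([0.6, 0.4])
    for T in (1, 10, 22):
        print(f"T = {T:2d}   95% VaR = {monte_carlo_var(hist, w, horizon=T):.4f}")
    ```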

  4. Quantum phase transitions and collective enhancement of level density in odd–A and odd–odd nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Karampagia, S., E-mail: karampag@nscl.msu.edu [National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, MI 48824-1321 (United States); Renzaglia, A. [Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824-1321 (United States); Zelevinsky, V. [National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, MI 48824-1321 (United States); Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824-1321 (United States)

    2017-06-15

    The nuclear shell model assumes an effective mean-field plus interaction Hamiltonian in a specific configuration space. We want to understand how various interaction matrix elements affect the observables, the collectivity in nuclei and the nuclear level density for odd–A and odd–odd nuclei. Using the sd and pf shells, we vary specific groups of matrix elements and study the evolution of energy levels, transition rates and the level density. In all cases studied, a transition between a “normal” and a collective phase is induced, accompanied by an enhancement of the level density in the collective phase. In distinction to neighboring even–even nuclei, the enhancement of the level density is observed already at the transition point. The collective phase is reached when the single-particle transfer matrix elements are dominant in the shell model Hamiltonian, providing a sign of their fundamental role.

  5. Simulating measures of wood density through the surface by Compton scattering

    International Nuclear Information System (INIS)

    Penna, Rodrigo; Oliveira, Arno H.; Braga, Mario R.M.S.S.; Vasconcelos, Danilo C.; Carneiro, Clemente J.G.; Penna, Ariane G.C.

    2009-01-01

    The Monte Carlo code MCNP-4C was used to simulate a nuclear densimeter for measuring wood densities nondestructively. An americium source (E = 60 keV) and a NaI(Tl) detector were placed on a wood block surface. Results from MCNP showed that scattered photon fluxes may be used to determine wood densities. Linear regressions between scattered photon fluxes and wood density were calculated and showed correlation coefficients near unity. (author)
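
    A toy version of the final calibration step, linear regression of scattered-photon flux against wood density, can be sketched as follows; the numerical (flux, density) pairs are invented purely for illustration and are not the MCNP results of the study.

    ```python
    import numpy as np

    # Hypothetical (density, flux) pairs such as those tallied with MCNP:
    # scattered-photon flux per source particle vs. wood density (g/cm^3).
    density = np.array([0.35, 0.45, 0.55, 0.65, 0.75])
    flux = np.array([1.02e-4, 1.21e-4, 1.39e-4, 1.58e-4, 1.76e-4])

    slope, intercept = np.polyfit(density, flux, 1)
    r = np.corrcoef(density, flux)[0, 1]
    print(f"flux ≈ {slope:.3e} * density + {intercept:.3e},   r = {r:.4f}")

    # Invert the calibration to estimate an unknown density from a measured flux.
    measured_flux = 1.30e-4
    print("estimated density:", (measured_flux - intercept) / slope)
    ```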

  6. Finite size scaling and spectral density studies

    International Nuclear Information System (INIS)

    Berg, B.A.

    1991-01-01

    Finite size scaling (FSS) and spectral density (SD) studies are reported for the deconfining phase transition. This talk concentrates on Monte Carlo (MC) results for pure SU(3) gauge theory, obtained in collaboration with Alves and Sanielevici, but the methods are expected to be useful for full QCD as well. (orig.)

  7. Domain Decomposition strategy for pin-wise full-core Monte Carlo depletion calculation with the reactor Monte Carlo Code

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Jingang; Wang, Kan; Qiu, Yishu [Dept. of Engineering Physics, LiuQing Building, Tsinghua University, Beijing (China); Chai, Xiao Ming; Qiang, Sheng Long [Science and Technology on Reactor System Design Technology Laboratory, Nuclear Power Institute of China, Chengdu (China)

    2016-06-15

    Because of prohibitive data storage requirements in large-scale simulations, the memory problem is an obstacle for Monte Carlo (MC) codes in accomplishing pin-wise three-dimensional (3D) full-core calculations, particularly for whole-core depletion analyses. Various kinds of data are evaluated and total memory requirements are quantified and analyzed based on the Reactor Monte Carlo (RMC) code, showing that tally data, material data, and isotope densities in depletion are the three major parts of memory storage. The domain decomposition method is investigated as a means of saving memory, by dividing the spatial geometry into domains that are simulated separately by parallel processors. For the validity of particle tracking during transport simulations, particles need to be communicated between domains. In consideration of efficiency, an asynchronous particle communication algorithm is designed and implemented. Furthermore, we couple the domain decomposition method with the MC burnup process, under a strategy of utilizing a consistent domain partition in both the transport and depletion modules. A numerical test of 3D full-core burnup calculations is carried out, indicating that the RMC code, with the domain decomposition method, is capable of pin-wise full-core burnup calculations with millions of depletion regions.

  8. CERN honours Carlo Rubbia

    CERN Document Server

    2009-01-01

    Carlo Rubbia turned 75 on March 31, and CERN held a symposium to mark his birthday and pay tribute to his impressive contribution to both CERN and science. Carlo Rubbia, 4th from right, together with the speakers at the symposium. On 7 April CERN hosted a celebration marking Carlo Rubbia’s 75th birthday and 25 years since he was awarded the Nobel Prize for Physics. "Today we will celebrate 100 years of Carlo Rubbia" joked CERN’s Director-General, Rolf Heuer in his opening speech, "75 years of his age and 25 years of the Nobel Prize." Rubbia received the Nobel Prize along with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. During the symposium, which was held in the Main Auditorium, several eminent speakers gave lectures on areas of science to which Carlo Rubbia made decisive contributions. Among those who spoke were Michel Spiro, Director of the French National Insti...

  9. The Monte Carlo Simulation Method for System Reliability and Risk Analysis

    CERN Document Server

    Zio, Enrico

    2013-01-01

    Monte Carlo simulation is one of the best tools for performing realistic analysis of complex systems as it allows most of the limiting assumptions on system behavior to be relaxed. The Monte Carlo Simulation Method for System Reliability and Risk Analysis comprehensively illustrates the Monte Carlo simulation method and its application to reliability and system engineering. Readers are given a sound understanding of the fundamentals of Monte Carlo sampling and simulation and its application for realistic system modeling.   Whilst many of the topics rely on a high-level understanding of calculus, probability and statistics, simple academic examples will be provided in support to the explanation of the theoretical foundations to facilitate comprehension of the subject matter. Case studies will be introduced to provide the practical value of the most advanced techniques.   This detailed approach makes The Monte Carlo Simulation Method for System Reliability and Risk Analysis a key reference for senior undergra...

  10. Monte Carlo calculation of correction factors for radionuclide neutron source emission rate measurement by manganese bath method

    International Nuclear Information System (INIS)

    Li Chunjuan; Liu Yi'na; Zhang Weihua; Wang Zhiqiang

    2014-01-01

    The manganese bath method for measuring the neutron emission rate of radionuclide sources requires corrections to be made for emitted neutrons which are not captured by manganese nuclei. The Monte Carlo particle transport code MCNP was used to simulate the manganese bath system of the standards for the measurement of neutron source intensity. The correction factors were calculated and the reliability of the model was demonstrated through the key comparison for the radionuclide neutron source emission rate measurements organized by BIPM. The uncertainties in the calculated values were evaluated by considering the sensitivities to the solution density, the density of the radioactive material, the positioning of the source, the radius of the bath, and the interaction cross-sections. A new method for the evaluation of the uncertainties in Monte Carlo calculation was given. (authors)

  11. Diagrammatic Monte Carlo simulations of staggered fermions at finite coupling

    CERN Document Server

    Vairinhos, Helvio

    2016-01-01

    Diagrammatic Monte Carlo has been a very fruitful tool for taming, and in some cases even solving, the sign problem in several lattice models. We have recently proposed a diagrammatic model for simulating lattice gauge theories with staggered fermions at arbitrary coupling, which extends earlier successful efforts to simulate lattice QCD at finite baryon density in the strong-coupling regime. Here we present the first numerical simulations of our model, using worm algorithms.

  12. Monte Carlo study of neutrino acceleration in supernova shocks

    International Nuclear Information System (INIS)

    Kazanas, Demosthenes; Ellison, D.C.; National Aeronautics and Space Administration, Greenbelt, MD

    1981-01-01

    The first order Fermi acceleration mechanism of cosmic rays in shocks may be at work for neutrinos in supernova shocks when the latter are at densities ρ > 10¹³ g cm⁻³, at which the core material is opaque to neutrinos. A Monte Carlo approach to study this effect is employed and the emerging neutrino power law spectra are presented. The increased energy acquired by the neutrinos may facilitate their detection in supernova explosions and provide information about the physics of collapse

  13. Monte-Carlo study of electron noise in compensated InSb

    International Nuclear Information System (INIS)

    Ašmontas, S; Raguotis, R; Bumelienė, S

    2015-01-01

    The results of Monte Carlo simulations of the electron noise in lightly doped and strongly compensated n-type InSb are presented. The strong electron scattering by ionized impurities is established to change essentially the electron distribution function, spectral density of velocity fluctuations and the dependence of noise temperature on the electric field strength. It is found that the electron noise temperature in strongly compensated InSb with low electron density at liquid nitrogen temperature is close to the lattice temperature in a wide range of electric field strength in which the electron gas cooling effect takes place. The increase of electron density is shown to weaken the electron gas cooling effect due to more intensive electron–electron collisions stimulating delocalization of electrons from the bottom of the conduction band. A satisfactory agreement between calculations and available experimental data is obtained. (paper)

  14. The integral first collision kernel method for gamma-ray skyshine analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sheu, R.-D.; Chui, C.-S.; Jiang, S.-H. E-mail: shjiang@mx.nthu.edu.tw

    2003-12-01

    A simplified method, based on the integral of the first collision kernel, is presented for performing gamma-ray skyshine calculations for collimated sources. The first collision kernels were calculated in air for a reference air density by use of the EGS4 Monte Carlo code. These kernels can be applied to other air densities by applying density corrections. The integral first collision kernel (IFCK) method has been used to calculate two of the ANSI/ANS skyshine benchmark problems and the results were compared with those of a number of other commonly used codes. Our results were generally in good agreement with the others but required only a small fraction of the computation time needed by the Monte Carlo calculations. The scheme of the IFCK method for dealing with various source collimation geometries is also presented in this study.

  15. Impact of measurement approach on the quality of gamma scanning density profile in a tray type lab-scale column

    International Nuclear Information System (INIS)

    Shahabinejad, H.; Feghhi, S.A.H.; Khorsandi, M.

    2014-01-01

    This article presents a study investigating the impact of the measurement approach on the quality of the gamma scanning density profile in tray type columns using experimental and computational evaluations. Experimental density profiles from the total and the photopeak count measurements, as two approaches in the gamma-ray column scanning technique, have been compared with the computational density profile from Monte Carlo simulation results. We used a laboratory distillation column of 51 cm diameter as an illustrative example for this investigation. ¹³⁷Cs was used as a gamma-ray source with an activity of 296 MBq (8 mCi), with a NaI(Tl) detector. The MCNP4C Monte Carlo code has been used for the simulations. The quality of the density profile in the photopeak count approach is within 155–204% better than that of the total count approach for the experimental results. The same comparison for the simulation results leads to a relative difference within 100–135% for the density profile. - Highlights: • The quality of the density profile in the gamma scanning technique has been studied. • The quality of the density profile depends on the measurement approach. • A laboratory distillation column has been used as an illustrative example. • The MCNP4C Monte Carlo code has been used for the simulations

  16. Application of Monte Carlo codes to neutron dosimetry

    International Nuclear Information System (INIS)

    Prevo, C.T.

    1982-01-01

    In neutron dosimetry, calculations enable one to predict the response of a proposed dosimeter before effort is expended to design and fabricate the neutron instrument or dosimeter. The nature of these calculations requires the use of computer programs that implement mathematical models representing the transport of radiation through attenuating media. Numerical, and in some cases analytical, solutions of these models can be obtained by one of several calculational techniques. All of these techniques are either approximate solutions to the well-known Boltzmann equation or are based on kernels obtained from solutions to the equation. The Boltzmann equation is a precise mathematical description of neutron behavior in terms of position, energy, direction, and time. The solution of the transport equation represents the average value of the particle flux density. Integral forms of the transport equation are generally regarded as the formal basis for the Monte Carlo method, the results of which can in principle be made to approach the exact solution. This paper focuses on the Monte Carlo technique

  17. On the use of stochastic approximation Monte Carlo for Monte Carlo integration

    KAUST Repository

    Liang, Faming

    2009-03-01

    The stochastic approximation Monte Carlo (SAMC) algorithm has recently been proposed as a dynamic optimization algorithm in the literature. In this paper, we show in theory that the samples generated by SAMC can be used for Monte Carlo integration via a dynamically weighted estimator by calling some results from the literature of nonhomogeneous Markov chains. Our numerical results indicate that SAMC can yield significant savings over conventional Monte Carlo algorithms, such as the Metropolis-Hastings algorithm, for the problems for which the energy landscape is rugged. © 2008 Elsevier B.V. All rights reserved.

  18. Direct aperture optimization for IMRT using Monte Carlo generated beamlets

    International Nuclear Information System (INIS)

    Bergman, Alanah M.; Bush, Karl; Milette, Marie-Pierre; Popescu, I. Antoniu; Otto, Karl; Duzenli, Cheryl

    2006-01-01

    This work introduces an EGSnrc-based Monte Carlo (MC) beamlet dose distribution matrix into a direct aperture optimization (DAO) algorithm for IMRT inverse planning. The technique is referred to as Monte Carlo-direct aperture optimization (MC-DAO). The goal is to assess whether the combination of accurate Monte Carlo tissue inhomogeneity modeling and DAO inverse planning will improve the dose accuracy and treatment efficiency for treatment planning. Several authors have shown that the presence of small fields and/or inhomogeneous materials in IMRT treatment fields can cause dose calculation errors for algorithms that are unable to accurately model electronic disequilibrium. This issue may also affect the IMRT optimization process because the dose calculation algorithm may not properly model difficult geometries such as targets close to low-density regions (lung, air, etc.). A clinical linear accelerator head is simulated using BEAMnrc (NRC, Canada). A novel in-house algorithm subdivides the resulting phase space into 2.5×5.0 mm² beamlets. Each beamlet is projected onto a patient-specific phantom. The beamlet dose contribution to each voxel in a structure-of-interest is calculated using DOSXYZnrc. The multileaf collimator (MLC) leaf positions are linked to the location of the beamlet dose distributions. The MLC shapes are optimized using direct aperture optimization (DAO). A final Monte Carlo calculation with MLC modeling is used to compute the final dose distribution. Monte Carlo simulation can generate accurate beamlet dose distributions for traditionally difficult-to-calculate geometries, particularly for small fields crossing regions of tissue inhomogeneity. The introduction of DAO results in an additional improvement by increasing the treatment delivery efficiency. For the examples presented in this paper the reduction in the total number of monitor units to deliver is ∼33% compared to fluence-based optimization methods

  19. On the use of stochastic approximation Monte Carlo for Monte Carlo integration

    KAUST Repository

    Liang, Faming

    2009-01-01

    The stochastic approximation Monte Carlo (SAMC) algorithm has recently been proposed as a dynamic optimization algorithm in the literature. In this paper, we show in theory that the samples generated by SAMC can be used for Monte Carlo integration

  20. Secretory IgA, albumin level, and bone density as markers of biostimulatory effects of laser radiation

    Science.gov (United States)

    Kucerova, Hana; Dostalova, Tatjana; Himmlova, Lucia; Bartova, Jirina; Mazanek, Jiri

    1998-12-01

    The aim of this contribution is to evaluate the effects of low-level laser radiation on the healing process after human molar extraction in the lower jaw, using frequencies of 5 Hz, 292 Hz and 9000 Hz. Changes in bone density and monitoring of secretory IgA and albumin levels in saliva were used as markers of the biostimulatory effect. Bone density after extraction and 6 months after surgical treatment was examined using dental digital radiography. Bone healing was followed through the osseointegration of bone structure in the extraction wound. Changes of bone density, secretory IgA and albumin levels were compared in groups of patients with laser therapy and a control group without laser therapy. Differences in the levels of the saliva markers (sIgA and albumin) were found to be significant comparing irradiated and non-irradiated groups, as well as comparing groups irradiated with various modulation frequencies. Density of the alveolar bone (histogram) was examined on five slices acquired from every RVG image. Histograms were evaluated with a computer program for microscopic image analysis. Differences of density were verified in the area of the whole slice. No significant differences were found between the bone density in the irradiated and non-irradiated groups, perhaps due to the therapeutic scheme used.

  1. Efficient Geometry and Data Handling for Large-Scale Monte Carlo - Thermal-Hydraulics Coupling

    Science.gov (United States)

    Hoogenboom, J. Eduard

    2014-06-01

    Detailed coupling of thermal-hydraulics calculations to Monte Carlo reactor criticality calculations requires each axial layer of each fuel pin to be defined separately in the input to the Monte Carlo code in order to assign to each volume the temperature according to the result of the TH calculation, and if the volume contains coolant, also the density of the coolant. This leads to huge input files for even small systems. In this paper a methodology for dynamical assignment of temperatures with respect to cross section data is demonstrated to overcome this problem. The method is implemented in MCNP5. The method is verified for an infinite lattice with 3x3 BWR-type fuel pins with fuel, cladding and moderator/coolant explicitly modeled. For each pin 60 axial zones are considered with different temperatures and coolant densities. The results of the axial power distribution per fuel pin are compared to a standard MCNP5 run in which all 9x60 cells for fuel, cladding and coolant are explicitly defined and their respective temperatures determined from the TH calculation. Full agreement is obtained. For large-scale application the method is demonstrated for an infinite lattice with 17x17 PWR-type fuel assemblies with 25 rods replaced by guide tubes. Again all geometrical detail is retained. The method was used in a procedure for coupled Monte Carlo and thermal-hydraulics iterations. Using an optimised iteration technique, convergence was obtained in 11 iteration steps.

  2. Midplane neutral density profiles in the National Spherical Torus Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Stotler, D. P., E-mail: dstotler@pppl.gov; Bell, R. E.; Diallo, A.; LeBlanc, B. P.; Podestà, M.; Roquemore, A. L.; Ross, P. W. [Princeton Plasma Physics Laboratory, Princeton University, P. O. Box 451, Princeton, New Jersey 08543-0451 (United States); Scotti, F. [Lawrence Livermore National Laboratory, Livermore, California 94551 (United States)

    2015-08-15

    Atomic and molecular density data in the outer midplane of NSTX [Ono et al., Nucl. Fusion 40, 557 (2000)] are inferred from tangential camera data via a forward modeling procedure using the DEGAS 2 Monte Carlo neutral transport code. The observed Balmer-β light emission data from 17 shots during the 2010 NSTX campaign display no obvious trends with discharge parameters such as the divertor Balmer-α emission level or edge deuterium ion density. Simulations of 12 time slices in 7 of these discharges produce molecular densities near the vacuum vessel wall of 2–8 × 10¹⁷ m⁻³ and atomic densities ranging from 1 to 7 × 10¹⁶ m⁻³; neither has a clear correlation with other parameters. Validation of the technique, begun in an earlier publication, is continued with an assessment of the sensitivity of the simulated camera image and neutral densities to uncertainties in the data input to the model. The simulated camera image is sensitive to the plasma profiles and virtually nothing else. The neutral densities at the vessel wall depend most strongly on the spatial distribution of the source; simulations with a localized neutral source yield densities within a factor of two of the baseline, uniform source, case. The uncertainties in the neutral densities associated with other model inputs and assumptions are ≤50%.

  3. Monte Carlo simulations of ionization potential depression in dense plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Stransky, M., E-mail: stransky@fzu.cz [Department of Radiation and Chemical Physics, Institute of Physics ASCR, Na Slovance 2, 182 21 Prague 8 (Czech Republic)

    2016-01-15

    A particle-particle grand canonical Monte Carlo model with Coulomb pair potential interaction was used to simulate modification of ionization potentials by electrostatic microfields. The Barnes-Hut tree algorithm [J. Barnes and P. Hut, Nature 324, 446 (1986)] was used to speed up calculations of electric potential. Atomic levels were approximated to be independent of the microfields as was assumed in the original paper by Ecker and Kröll [Phys. Fluids 6, 62 (1963)]; however, the available levels were limited by the corresponding mean inter-particle distance. The code was tested on hydrogen and dense aluminum plasmas. The amount of depression was up to 50% higher in the Debye-Hückel regime for hydrogen plasmas; in the high-density limit, reasonable agreement was found with the Ecker-Kröll model for hydrogen plasmas and with the Stewart-Pyatt model [J. Stewart and K. Pyatt, Jr., Astrophys. J. 144, 1203 (1966)] for aluminum plasmas. Our 3D code is an improvement over the spherically symmetric simplifications of the Ecker-Kröll and Stewart-Pyatt models and is also not limited to high atomic numbers as is the underlying Thomas-Fermi model used in the Stewart-Pyatt model.

  4. Monte Carlo simulations of ionization potential depression in dense plasmas

    International Nuclear Information System (INIS)

    Stransky, M.

    2016-01-01

    A particle-particle grand canonical Monte Carlo model with Coulomb pair potential interaction was used to simulate modification of ionization potentials by electrostatic microfields. The Barnes-Hut tree algorithm [J. Barnes and P. Hut, Nature 324, 446 (1986)] was used to speed up calculations of electric potential. Atomic levels were approximated to be independent of the microfields as was assumed in the original paper by Ecker and Kröll [Phys. Fluids 6, 62 (1963)]; however, the available levels were limited by the corresponding mean inter-particle distance. The code was tested on hydrogen and dense aluminum plasmas. The amount of depression was up to 50% higher in the Debye-Hückel regime for hydrogen plasmas; in the high-density limit, reasonable agreement was found with the Ecker-Kröll model for hydrogen plasmas and with the Stewart-Pyatt model [J. Stewart and K. Pyatt, Jr., Astrophys. J. 144, 1203 (1966)] for aluminum plasmas. Our 3D code is an improvement over the spherically symmetric simplifications of the Ecker-Kröll and Stewart-Pyatt models and is also not limited to high atomic numbers as is the underlying Thomas-Fermi model used in the Stewart-Pyatt model

  5. Effects of Sludge Particle Size and Density on Hanford Waste Processing

    International Nuclear Information System (INIS)

    Poloski, Adam P.; Wells, Beric E.; Mahoney, Lenna A.; Daniel, Richard C.; Tingey, Joel M.; Cooley, Scott K.

    2008-01-01

    The U.S. Department of Energy Office of River Protection's Waste Treatment and Immobilization Plant (WTP) will process and treat radioactive waste that is stored in tanks at the Hanford Site in southeastern Washington State. Piping and pumps have been selected to transport the high-level waste (HLW) slurries in the WTP. Pipeline critical-velocity calculations for these systems require the input of a bounding particle size and density. Various approaches based on statistical analyses have been used in the past to provide an estimate of this bounding size and density. In this paper, representative particle size and density distributions (PSDDs) of Hanford waste insoluble solids have been developed based on a new approach that relates measured particle-size distributions (PSDs) to solid-phase compounds. This work was achieved through extensive review of available Hanford waste PSDs and solid-phase compound data. Composite PSDs representing the waste in up to 19 Hanford waste tanks were developed, and the insoluble solid-phase compounds for the 177 Hanford waste tanks, their relative fractions, crystal densities, and particle size and shape were developed. With such a large combination of particle sizes and particle densities, a Monte Carlo simulation approach was used to model the PSDDs. Further detail was added by including an agglomeration of these compounds where the agglomerate density was modeled with a fractal dimension relation. The Monte Carlo simulations were constrained to hold the following relationships: (1) the composite PSDs are reproduced, (2) the solid-phase compound mass fractions are reproduced, (3) the expected in situ bulk-solids density is qualitatively reproduced, and (4) a representative fraction of the sludge volume comprising agglomerates is qualitatively reproduced to typical Hanford waste values. Four PSDDs were developed and evaluated. These four PSDD scenarios correspond to permutations where the master PSD was sonicated or not
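
    A hedged sketch of one way such a Monte Carlo PSDD sampling could look, assuming a lognormal agglomerate-size distribution and a standard fractal relation in which agglomerate density relaxes toward the fluid density as the agglomerate grows; the compound list, fractions, fractal dimension and all other numerical values below are hypothetical placeholders, not Hanford data or the actual constrained simulation of the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical solid-phase compounds: (mass fraction, crystal density g/cm^3,
    # primary particle size um) -- illustrative values only.
    compounds = [
        (0.50, 2.42, 1.0),
        (0.30, 4.25, 0.5),
        (0.20, 10.96, 0.3),
    ]
    rho_fluid = 1.2          # suspending liquid density (g/cm^3)
    fractal_dim = 2.6        # assumed fractal dimension of agglomerates

    def sample_psdd(n):
        """Sample (size, density) pairs for agglomerated particles."""
        fractions = np.array([c[0] for c in compounds])
        idx = rng.choice(len(compounds), size=n, p=fractions)
        out = []
        for i in idx:
            _, rho_c, d0 = compounds[i]
            d_agg = d0 * rng.lognormal(mean=1.0, sigma=0.8)   # agglomerate size
            # Fractal relation: agglomerate density approaches the fluid density
            # as the agglomerate grows.
            rho_agg = rho_fluid + (rho_c - rho_fluid) * (d0 / max(d_agg, d0))**(3 - fractal_dim)
            out.append((d_agg, rho_agg))
        return np.array(out)

    samples = sample_psdd(100_000)
    print("mean size (um):", samples[:, 0].mean())
    print("mean density (g/cm^3):", samples[:, 1].mean())
    ```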

  6. Vectorized Monte Carlo

    International Nuclear Information System (INIS)

    Brown, F.B.

    1981-01-01

    Examination of the global algorithms and local kernels of conventional general-purpose Monte Carlo codes shows that multigroup Monte Carlo methods have sufficient structure to permit efficient vectorization. A structured multigroup Monte Carlo algorithm for vector computers is developed in which many particle events are treated at once on a cell-by-cell basis. Vectorization of kernels for tracking and variance reduction is described, and a new method for discrete sampling is developed to facilitate the vectorization of collision analysis. To demonstrate the potential of the new method, a vectorized Monte Carlo code for multigroup radiation transport analysis was developed. This code incorporates many features of conventional general-purpose production codes, including general geometry, splitting and Russian roulette, survival biasing, variance estimation via batching, a number of cutoffs, and generalized tallies of collision, tracklength, and surface crossing estimators with response functions. Predictions of vectorized performance characteristics for the CYBER-205 were made using emulated coding and a dynamic model of vector instruction timing. Computation rates were examined for a variety of test problems to determine sensitivities to batch size and vector lengths. Significant speedups are predicted for even a few hundred particles per batch, and asymptotic speedups by about 40 over equivalent Amdahl 470V/8 scalar codes are predicted for a few thousand particles per batch. The principal conclusion is that vectorization of a general-purpose multigroup Monte Carlo code is well worth the significant effort required for stylized coding and major algorithmic changes

  7. Continuous energy Monte Carlo method based homogenization multi-group constants calculation

    International Nuclear Information System (INIS)

    Li Mancang; Wang Kan; Yao Dong

    2012-01-01

    The efficiency of the standard two-step reactor physics calculation relies on the accuracy of the multi-group constants from the assembly-level homogenization process. In contrast to the traditional deterministic methods, generating the homogenization cross sections via the Monte Carlo method overcomes the difficulties in geometry and treats energy as a continuum, thus providing more accurate parameters. Besides, the same code and data bank can be used for a wide range of applications, demonstrating the versatility of using Monte Carlo codes for homogenization. As the first stage in realizing Monte Carlo based lattice homogenization, the track length scheme is used as the foundation of cross section generation, which is straightforward. The scattering matrix and Legendre components, however, require special techniques. The Scattering Event method was proposed to solve this problem. There are no continuous energy counterparts in the Monte Carlo calculation for neutron diffusion coefficients, so P1 cross sections were used to calculate the diffusion coefficients for diffusion reactor simulator codes. B_N theory is applied to take the leakage effect into account when an infinite lattice of identical symmetric motives is assumed. The MCMC code was developed and applied to four assembly configurations to assess the accuracy and applicability. At the core level, a PWR prototype core is examined. The results show that the Monte Carlo based multi-group constants behave well on average. The method could be applied to nuclear reactor cores with complicated configurations to gain higher accuracy. (authors)
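
    The track-length scheme mentioned above can be illustrated with a minimal sketch: each particle flight of length l in energy group g adds w·l to the group flux tally and w·l·Σ to the reaction-rate tally, and the homogenized group cross section is the ratio of the two. The group structure and the toy flights below are assumptions for illustration, not output of the MCMC code.

    ```python
    import numpy as np

    # Hypothetical 3-group structure, boundaries in eV (ascending).
    group_edges = np.array([0.0, 0.625, 2.0e4, 2.0e7])

    flux_tally = np.zeros(len(group_edges) - 1)   # sum of weight * track length
    rr_tally = np.zeros_like(flux_tally)          # sum of weight * track length * Sigma_t

    def score_track(energy_ev, track_length_cm, sigma_t_cm1, weight=1.0):
        """Track-length estimator: score one particle flight."""
        g = np.searchsorted(group_edges, energy_ev, side="right") - 1
        if 0 <= g < len(flux_tally):
            flux_tally[g] += weight * track_length_cm
            rr_tally[g] += weight * track_length_cm * sigma_t_cm1

    def homogenized_sigma():
        """Flux-weighted homogenized group cross section = reaction rate / flux."""
        with np.errstate(invalid="ignore", divide="ignore"):
            return np.where(flux_tally > 0, rr_tally / flux_tally, 0.0)

    # Toy usage with a handful of hypothetical flights (energy, length, Sigma_t)
    for e, l, s in [(1.0e6, 2.0, 0.25), (3.0e3, 1.5, 0.40), (0.05, 0.8, 1.10)]:
        score_track(e, l, s)
    print("group fluxes:", flux_tally)
    print("homogenized Sigma_t:", homogenized_sigma())
    ```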

  8. Use of the FLUKA Monte Carlo code for 3D patient-specific dosimetry on PET-CT and SPECT-CT images*

    Science.gov (United States)

    Botta, F; Mairani, A; Hobbs, R F; Vergara Gil, A; Pacilio, M; Parodi, K; Cremonesi, M; Coca Pérez, M A; Di Dia, A; Ferrari, M; Guerriero, F; Battistoni, G; Pedroli, G; Paganelli, G; Torres Aroche, L A; Sgouros, G

    2014-01-01

    Patient-specific absorbed dose calculation for nuclear medicine therapy is a topic of increasing interest. 3D dosimetry at the voxel level is one of the major improvements for the development of more accurate calculation techniques, as compared to the standard dosimetry at the organ level. This study aims to use the FLUKA Monte Carlo code to perform patient-specific 3D dosimetry through direct Monte Carlo simulation on PET-CT and SPECT-CT images. To this aim, dedicated routines were developed in the FLUKA environment. Two sets of simulations were performed on model and phantom images. Firstly, the correct handling of PET and SPECT images was tested under the assumption of homogeneous water medium by comparing FLUKA results with those obtained with the voxel kernel convolution method and with other Monte Carlo-based tools developed to the same purpose (the EGS-based 3D-RD software and the MCNP5-based MCID). Afterwards, the correct integration of the PET/SPECT and CT information was tested, performing direct simulations on PET/CT images for both homogeneous (water) and non-homogeneous (water with air, lung and bone inserts) phantoms. Comparison was performed with the other Monte Carlo tools performing direct simulation as well. The absorbed dose maps were compared at the voxel level. In the case of homogeneous water, by simulating 10⁸ primary particles a 2% average difference with respect to the kernel convolution method was achieved; such difference was lower than the statistical uncertainty affecting the FLUKA results. The agreement with the other tools was within 3–4%, partially ascribable to the differences among the simulation algorithms. Including the CT-based density map, the average difference was always within 4% irrespective of the medium (water, air, bone), except for a maximum 6% value when comparing FLUKA and 3D-RD in air. The results confirmed that the routines were properly developed, opening the way for the use of FLUKA for patient-specific, image

  9. Uncertainty Propagation in Monte Carlo Depletion Analysis

    International Nuclear Information System (INIS)

    Shim, Hyung Jin; Kim, Yeong-il; Park, Ho Jin; Joo, Han Gyu; Kim, Chang Hyo

    2008-01-01

    A new formulation aimed at quantifying uncertainties of Monte Carlo (MC) tallies such as k_eff and the microscopic reaction rates of nuclides and nuclide number densities in MC depletion analysis and examining their propagation behaviour as a function of depletion time step (DTS) is presented. It is shown that the variance of a given MC tally used as a measure of its uncertainty in this formulation arises from four sources: the statistical uncertainty of the MC tally, uncertainties of microscopic cross sections and nuclide number densities, and the cross correlations between them; the contribution of the latter three sources can be determined by computing the correlation coefficients between the uncertain variables. It is also shown that the variance of any given nuclide number density at the end of each DTS stems from uncertainties of the nuclide number densities (NND) and microscopic reaction rates (MRR) of nuclides at the beginning of each DTS, and these contributions are determined by computing correlation coefficients between the two uncertain variables. To test the viability of the formulation, we conducted MC depletion analysis for two sample depletion problems involving a simplified 7x7 fuel assembly (FA) and a 17x17 PWR FA, determined number densities of uranium and plutonium isotopes and their variances as well as k_∞ and its variance as a function of DTS, and demonstrated the applicability of the new formulation for the uncertainty propagation analysis that needs to be followed in MC depletion computations. (authors)

  10. Investigation of electronic and magnetic properties of FeS: First principle and Monte Carlo simulations

    Science.gov (United States)

    Bouachraoui, Rachid; El Hachimi, Abdel Ghafour; Ziat, Younes; Bahmad, Lahoucine; Tahiri, Najim

    2018-06-01

    Electronic and magnetic properties of hexagonal iron(II) sulfide (hexagonal FeS) have been investigated by combining density functional theory (DFT) and Monte Carlo simulations (MCS). This compound consists of a magnetic hexagonal lattice occupied by Fe²⁺ with spin state S = 2. Based on the ab initio method, we calculated the exchange coupling JFe-Fe between two magnetic Fe atoms in different directions. Phase transitions, magnetic stability and magnetizations have also been investigated in the framework of Monte Carlo simulations. Within this method, a second phase transition is observed at the Néel temperature TN = 450 K. This finding is in good agreement with data reported in the literature. The effects of the different applied parameters show how these parameters affect the critical temperature of the system. Moreover, we studied the density of states and found that hexagonal FeS is a promising material for spintronic applications.
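
    As a rough illustration of how Monte Carlo simulation locates such a magnetic transition, the sketch below applies Metropolis updates to a square-lattice Ising antiferromagnet and tracks the staggered magnetization. This is a deliberately simplified stand-in, not the spin-2 hexagonal model of the study, and all parameters are illustrative.

    ```python
    import numpy as np

    def staggered_magnetisation(T, L=12, J=1.0, sweeps=1000, therm=400, seed=0):
        """Metropolis Monte Carlo for a square-lattice Ising antiferromagnet,
        H = +J * sum_<ij> s_i s_j (J > 0).  Returns <|m_staggered|>."""
        rng = np.random.default_rng(seed)
        s = rng.choice([-1, 1], size=(L, L))
        sign = (-1) ** np.add.outer(np.arange(L), np.arange(L))
        acc, n = 0.0, 0
        for sweep in range(sweeps):
            for _ in range(L * L):
                i, j = rng.integers(L), rng.integers(L)
                nn = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                      + s[i, (j + 1) % L] + s[i, (j - 1) % L])
                dE = -2.0 * J * s[i, j] * nn   # energy change for flipping s[i, j]
                if dE <= 0 or rng.random() < np.exp(-dE / T):
                    s[i, j] = -s[i, j]
            if sweep >= therm:
                acc += abs(np.mean(sign * s))
                n += 1
        return acc / n

    # The order parameter drops near the transition (T_N ≈ 2.27 J for this toy model).
    for T in (1.5, 2.0, 2.27, 2.6, 3.0):
        print(f"T = {T:.2f}   <|m_s|> = {staggered_magnetisation(T):.3f}")
    ```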

  11. Monte Carlo method in radiation transport problems

    International Nuclear Information System (INIS)

    Dejonghe, G.; Nimal, J.C.; Vergnaud, T.

    1986-11-01

    In neutral radiation transport problems (neutrons, photons), two quantities are important: the flux in phase space and the density of particles. Solving the problem with the Monte Carlo method involves, among other things, building a statistical process (called the play) and assigning a numerical value to a variable x (this assignment is called the score). Sampling techniques are presented. The necessity of biasing the play is demonstrated, and a biased simulation is carried out. Finally, current developments (for instance, the rewriting of programs) are presented; they are motivated by several factors, two of which are the advent of vector computation and the treatment of photon and neutron transport in media containing voids

  12. Level density and thermal properties in rare earth

    International Nuclear Information System (INIS)

    Schiller, A.; Guttormsen, M.; Hjorth-Jensen, M.; Melby, E.; Rekstad, J.; Siem, S.

    2001-01-01

    A convergent method to extract the nuclear level density and the γ-ray strength function from primary γ-ray spectra has been established. Thermodynamical quantities have been obtained within the microcanonical and canonical ensemble theory. Structures in the caloric curve and in the heat capacity curve are interpreted as fingerprints of the breaking of Cooper pairs and the quenching of pairing correlations. The strength function can be described using models and common parametrizations for the E1, M1, and pygmy resonance strength. However, a significant decrease of the pygmy resonance strength at finite temperatures has been observed.
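
    The microcanonical quantities mentioned above follow from the level density through the standard relations S(E) = ln ρ(E) (with k_B = 1), T(E) = (dS/dE)⁻¹ and C_V(E) = (dT/dE)⁻¹. The sketch below applies them to a toy back-shifted Fermi-gas level density, not the experimentally extracted one of the paper; the parameter values are illustrative.

    ```python
    import numpy as np

    def rho_toy(E, a=18.0, delta=1.0):
        """Toy back-shifted Fermi-gas level density (illustrative parameters)."""
        U = np.maximum(E - delta, 0.05)
        return np.exp(2.0 * np.sqrt(a * U)) / U**1.25

    E = np.linspace(1.2, 8.0, 400)          # excitation energy (MeV)
    S = np.log(rho_toy(E))                  # microcanonical entropy
    T = 1.0 / np.gradient(S, E)             # caloric curve T(E)
    Cv = 1.0 / np.gradient(T, E)            # microcanonical heat capacity

    for k in range(0, len(E), 80):
        print(f"E = {E[k]:4.2f} MeV   S = {S[k]:6.2f}   "
              f"T = {T[k]:5.3f} MeV   Cv = {Cv[k]:7.1f}")
    ```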

  13. Adjoint electron Monte Carlo calculations

    International Nuclear Information System (INIS)

    Jordan, T.M.

    1986-01-01

    Adjoint Monte Carlo is the most efficient method for accurate analysis of space systems exposed to natural and artificially enhanced electron environments. Recent adjoint calculations for isotropic electron environments include: comparative data for experimental measurements on electronics boxes; benchmark problem solutions for comparing total dose prediction methodologies; preliminary assessment of sectoring methods used during space system design; and total dose predictions on an electronics package. Adjoint Monte Carlo, forward Monte Carlo, and experiment are in excellent agreement for electron sources that simulate space environments. For electron space environments, adjoint Monte Carlo is clearly superior to forward Monte Carlo, requiring one to two orders of magnitude less computer time for relatively simple geometries. The solid-angle sectoring approximations used for routine design calculations can err by more than a factor of 2 on dose in simple shield geometries. For critical space systems exposed to severe electron environments, these potential sectoring errors demand the establishment of large design margins and/or verification of shield design by adjoint Monte Carlo/experiment

  14. Present Status and Extensions of the Monte Carlo Performance Benchmark

    Science.gov (United States)

    Hoogenboom, J. Eduard; Petrovic, Bojan; Martin, William R.

    2014-06-01

    The NEA Monte Carlo Performance benchmark started in 2011 aiming to monitor over the years the abilities to perform a full-size Monte Carlo reactor core calculation with a detailed power production for each fuel pin with axial distribution. This paper gives an overview of the contributed results thus far. It shows that reaching a statistical accuracy of 1 % for most of the small fuel zones requires about 100 billion neutron histories. The efficiency of parallel execution of Monte Carlo codes on a large number of processor cores shows clear limitations for computer clusters with common type computer nodes. However, using true supercomputers the speedup of parallel calculations is increasing up to large numbers of processor cores. More experience is needed from calculations on true supercomputers using large numbers of processors in order to predict if the requested calculations can be done in a short time. As the specifications of the reactor geometry for this benchmark test are well suited for further investigations of full-core Monte Carlo calculations and a need is felt for testing other issues than its computational performance, proposals are presented for extending the benchmark to a suite of benchmark problems for evaluating fission source convergence for a system with a high dominance ratio, for coupling with thermal-hydraulics calculations to evaluate the use of different temperatures and coolant densities and to study the correctness and effectiveness of burnup calculations. Moreover, other contemporary proposals for a full-core calculation with realistic geometry and material composition will be discussed.

  15. Present status and extensions of the Monte Carlo performance benchmark

    International Nuclear Information System (INIS)

    Hoogenboom, J.E.; Petrovic, B.; Martin, W.R.

    2013-01-01

    The NEA Monte Carlo Performance benchmark started in 2011 aiming to monitor over the years the abilities to perform a full-size Monte Carlo reactor core calculation with a detailed power production for each fuel pin with axial distribution. This paper gives an overview of the contributed results thus far. It shows that reaching a statistical accuracy of 1 % for most of the small fuel zones requires about 100 billion neutron histories. The efficiency of parallel execution of Monte Carlo codes on a large number of processor cores shows clear limitations for computer clusters with common type computer nodes. However, using true supercomputers the speedup of parallel calculations is increasing up to large numbers of processor cores. More experience is needed from calculations on true supercomputers using large numbers of processors in order to predict if the requested calculations can be done in a short time. As the specifications of the reactor geometry for this benchmark test are well suited for further investigations of full-core Monte Carlo calculations and a need is felt for testing other issues than its computational performance, proposals are presented for extending the benchmark to a suite of benchmark problems for evaluating fission source convergence for a system with a high dominance ratio, for coupling with thermal-hydraulics calculations to evaluate the use of different temperatures and coolant densities and to study the correctness and effectiveness of burnup calculations. Moreover, other contemporary proposals for a full-core calculation with realistic geometry and material composition will be discussed. (authors)

  16. Monte Carlo: Basics

    OpenAIRE

    Murthy, K. P. N.

    2001-01-01

    An introduction to the basics of Monte Carlo is given. The topics covered include sample space, events, probabilities, random variables, mean, variance, covariance, the characteristic function, the Chebyshev inequality, the law of large numbers, the central limit theorem (stable distribution, Lévy distribution), random numbers (generation and testing), random sampling techniques (inversion, rejection, sampling from a Gaussian, Metropolis sampling), analogue Monte Carlo and importance sampling (exponential b...
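
    As a minimal illustration of two of the sampling techniques listed above, the sketch below shows inversion sampling for an exponential distribution and rejection sampling for an arbitrary bounded density; the target distributions are illustrative choices, not examples from the lecture notes.

```python
import math
import random

def sample_exponential_inversion(lam):
    """Inversion: solve F(x) = u for the exponential CDF F(x) = 1 - exp(-lam*x)."""
    u = random.random()
    return -math.log(1.0 - u) / lam

def sample_rejection(pdf, pdf_max, lo, hi):
    """Rejection: propose x uniformly on [lo, hi], accept with probability pdf(x)/pdf_max."""
    while True:
        x = random.uniform(lo, hi)
        if random.random() * pdf_max <= pdf(x):
            return x

# Example: sample a truncated half-Gaussian on [0, 4] by rejection.
half_gauss = lambda x: math.exp(-0.5 * x * x)
samples = [sample_rejection(half_gauss, 1.0, 0.0, 4.0) for _ in range(1000)]
```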

  17. Population-Level Density Dependence Influences the Origin and Maintenance of Parental Care.

    Science.gov (United States)

    Reyes, Elijah; Thrasher, Patsy; Bonsall, Michael B; Klug, Hope

    2016-01-01

    Parental care is a defining feature of animal breeding systems. We now know that both basic life-history characteristics and ecological factors influence the evolution of care. However, relatively little is known about how these factors interact to influence the origin and maintenance of care. Here, we expand upon previous work and explore the relationship between basic life-history characteristics (stage-specific rates of mortality and maturation) and the fitness benefits associated with the origin and the maintenance of parental care for two broad ecological scenarios: the scenario in which egg survival is density dependent and the case in which adult survival is density dependent. Our findings suggest that high offspring need is likely critical in driving the origin, but not the maintenance, of parental care regardless of whether density dependence acts on egg or adult survival. In general, parental care is more likely to result in greater fitness benefits when baseline adult mortality is low if 1) egg survival is density dependent or 2) adult mortality is density dependent and mutant density is relatively high. When density dependence acts on egg mortality, low rates of egg maturation and high egg densities are less likely to lead to strong fitness benefits of care. However, when density dependence acts on adult mortality, high levels of egg maturation and increasing adult densities are less likely to maintain care. Juvenile survival has relatively little, if any, effect on the origin and maintenance of egg-only care. More generally, our results suggest that the evolution of parental care will be influenced by an organism's entire life history characteristics, the stage at which density dependence acts, and whether care is originating or being maintained.

  18. Statistical theory of electron densities

    International Nuclear Information System (INIS)

    Pratt, L.R.; Hoffman, G.G.; Harris, R.A.

    1988-01-01

    An optimized Thomas-Fermi theory is proposed which retains the simplicity of the original theory and is a suitable reference theory for Monte Carlo density functional treatments of condensed materials. The key ingredient of the optimized theory is a neighborhood sampled potential which contains effects of the inhomogeneities in the one-electron potential. In contrast to the traditional Thomas-Fermi approach, the optimized theory predicts a finite electron density in the vicinity of a nucleus. Consideration of the example of an ideal electron gas subject to a central Coulomb field indicates that implementation of the approach is straightforward. The optimized theory is found to fail completely when a classically forbidden region is approached. However, these circumstances are not of primary interest for calculations of interatomic forces. It is shown how the energy functional of the density may be constructed by integration of a generalized Hellmann-Feynman relation. This generalized Hellmann-Feynman relation proves to be equivalent to the variational principle of density functional quantum mechanics, and, therefore, the present density theory can be viewed as a variational consequence of the constructed energy functional

  19. A Hamiltonian Monte Carlo method for Bayesian inference of supermassive black hole binaries

    International Nuclear Information System (INIS)

    Porter, Edward K; Carré, Jérôme

    2014-01-01

    We investigate the use of a Hamiltonian Monte Carlo to map out the posterior density function for supermassive black hole binaries. While previous Markov Chain Monte Carlo (MCMC) methods, such as Metropolis–Hastings MCMC, have been successfully employed for a number of different gravitational wave sources, these methods are essentially random walk algorithms. The Hamiltonian Monte Carlo treats the inverse likelihood surface as a ‘gravitational potential’ and, by introducing canonical positions and momenta, dynamically evolves the Markov chain by solving Hamilton's equations of motion. This method is not as widely used as other MCMC algorithms due to the necessity of calculating gradients of the log-likelihood, which for most applications results in a bottleneck that makes the algorithm computationally prohibitive. We circumvent this problem by using accepted initial phase-space trajectory points to analytically fit for each of the individual gradients. Eliminating the waveform generation needed for the numerical derivatives reduces the total number of required templates for a 10^6 iteration chain from ∼10^9 to ∼10^6. The result is an implementation of the Hamiltonian Monte Carlo that is faster, and more efficient by a factor of approximately the dimension of the parameter space, than a Hessian MCMC. (paper)
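
    For readers unfamiliar with the method, here is a minimal generic sketch of one Hamiltonian Monte Carlo update with a leapfrog integrator. It evaluates the gradient of the log-posterior explicitly rather than using the gradient-fitting trick described in the abstract, and all names and step settings are illustrative assumptions.

```python
import numpy as np

def hmc_step(q, log_post, grad_log_post, step=0.1, n_leapfrog=20, rng=np.random.default_rng()):
    """One HMC update: leapfrog-integrate Hamilton's equations for U(q) = -log_post(q),
    with a Gaussian momentum, then accept or reject with a Metropolis test."""
    p = rng.normal(size=q.shape)                    # draw momenta
    q_new = q.copy()
    p_new = p + 0.5 * step * grad_log_post(q_new)   # initial half kick
    for _ in range(n_leapfrog):
        q_new += step * p_new                       # drift
        p_new += step * grad_log_post(q_new)        # kick
    p_new -= 0.5 * step * grad_log_post(q_new)      # undo half of the final kick
    h_old = -log_post(q) + 0.5 * p @ p
    h_new = -log_post(q_new) + 0.5 * p_new @ p_new
    return q_new if rng.random() < np.exp(h_old - h_new) else q
```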

  20. Reactor physics simulations with coupled Monte Carlo calculation and computational fluid dynamics

    International Nuclear Information System (INIS)

    Seker, V.; Thomas, J.W.; Downar, T.J.

    2007-01-01

    A computational code system based on coupling the Monte Carlo code MCNP5 and the Computational Fluid Dynamics (CFD) code STAR-CD was developed as an audit tool for lower order nuclear reactor calculations. This paper presents the methodology of the developed computer program 'McSTAR'. McSTAR is written in the FORTRAN90 programming language and couples MCNP5 and the commercial CFD code STAR-CD. MCNP uses a continuous energy cross section library produced by the NJOY code system from the raw ENDF/B data. A major part of the work was to develop and implement methods to update the cross section library with the temperature distribution calculated by STAR-CD for every region. Three different methods were investigated and implemented in McSTAR. The user subroutines in STAR-CD are modified to read the power density data and assign them to the appropriate variables in the program, and to write an output data file containing the temperature, density and indexing information needed to perform the mapping between MCNP and STAR-CD cells. Preliminary testing of the code was performed using a 3x3 PWR pin-cell problem. The preliminary results are compared with those obtained from a STAR-CD calculation coupled with the deterministic transport code DeCART. Good agreement in the k-eff and the power profile was observed. Increased computational capabilities and improvements in computational methods have accelerated interest in high fidelity modeling of nuclear reactor cores during the last several years. High fidelity has been achieved by utilizing full core neutron transport solutions for the neutronics calculation and computational fluid dynamics solutions for the thermal-hydraulics calculation. Previous researchers have reported the coupling of 3D deterministic neutron transport methods to CFD and their application to practical reactor analysis problems. One of the principal motivations of the work here was to utilize Monte Carlo methods to validate the coupled deterministic neutron transport
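
    The coupling strategy described above is, at its core, a fixed-point (Picard) iteration between the neutronics and thermal-hydraulics solvers. The sketch below shows the shape of such a loop; run_mc, run_cfd and the convergence test are hypothetical placeholders, not the actual McSTAR interfaces.

```python
def coupled_iteration(run_mc, run_cfd, temps, densities, max_iter=10, tol=1e-4):
    """Hypothetical Picard-style Monte Carlo / CFD coupling loop (not the McSTAR API).
    run_mc(temps, densities) -> (power, keff); run_cfd(power) -> (temps, densities)."""
    keff_prev = None
    for _ in range(max_iter):
        power, keff = run_mc(temps, densities)   # transport with temperature-dependent data
        temps, densities = run_cfd(power)        # thermal-hydraulics feedback per cell
        if keff_prev is not None and abs(keff - keff_prev) < tol:
            break                                # crude convergence test on k-eff
        keff_prev = keff
    return power, keff, temps, densities
```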

  1. Application of the Monte Carlo method to diagnostic radiology

    International Nuclear Information System (INIS)

    Persliden, J.

    1986-01-01

    A Monte Carlo program for photon transport is developed. The program is used to investigate the energy imparted to water slabs (simulating patients), and the related backscattered and transmitted energies as functions of primary photon energy and water slab thickness. The accuracy of the results depends on the cross-section data for the probabilities of the various interactions in the slab and on the physical quantity calculated. Backscattered energy fractions can vary by as much as 10-20 %, using different sets of published data for the photoelectric cross section while imparted fractions are only slightly affected. The results are used to calculate improved conversion factors for determining the energy imparted to the patient in X-ray diagnostic examinations from measurements of the air collision kerma integrated over beam area. The small angle distribution of scattered photons transmitted through a water slab, relevant to problems of image quality, is calculated taking into account the diffraction phenomena of liquid water. The calculations are performed with a collision density estimator. This estimator makes it possible to calculate important physical quantities which are virtually impracticable to assess with the Monte Carlo codes commonly used in medical physics or in experiments. With the collision density estimator, the influence of air gaps on the reduction of scattered radiation is investigated for different detectors, field areas and primary X-ray spectra. Contrast degradation and contrast improvement factors are given as functions of field area for various air gaps. (With 105 refs.) (author)

  2. Minimal nuclear energy density functional

    Science.gov (United States)

    Bulgac, Aurel; Forbes, Michael McNeil; Jin, Shi; Perez, Rodrigo Navarro; Schunck, Nicolas

    2018-04-01

    We present a minimal nuclear energy density functional (NEDF) called "SeaLL1" that has the smallest number of possible phenomenological parameters to date. SeaLL1 is defined by seven significant phenomenological parameters, each related to a specific nuclear property. It describes the nuclear masses of even-even nuclei with a mean energy error of 0.97 MeV and a standard deviation of 1.46 MeV, two-neutron and two-proton separation energies with rms errors of 0.69 MeV and 0.59 MeV, respectively, and the charge radii of 345 even-even nuclei with a mean error εr = 0.022 fm and a standard deviation σr = 0.025 fm. SeaLL1 incorporates constraints on the equation of state (EoS) of pure neutron matter from quantum Monte Carlo calculations with chiral effective field theory two-body (NN) interactions at the next-to-next-to-next-to leading order (N3LO) level and three-body (NNN) interactions at the next-to-next-to leading order (N2LO) level. Two of the seven parameters are related to the saturation density and the energy per particle of the homogeneous symmetric nuclear matter, one is related to the nuclear surface tension, two are related to the symmetry energy and its density dependence, one is related to the strength of the spin-orbit interaction, and one is the coupling constant of the pairing interaction. We identify additional phenomenological parameters that have little effect on ground-state properties but can be used to fine-tune features such as the Thomas-Reiche-Kuhn sum rule, the excitation energy of the giant dipole and Gamow-Teller resonances, the static dipole electric polarizability, and the neutron skin thickness.

  3. MORSE Monte Carlo code

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described

  4. Study of thermodynamic and structural properties of a flexible homopolymer chain using advanced Monte Carlo methods

    Directory of Open Access Journals (Sweden)

    Hammou Amine Bouziane

    2013-03-01

    Full Text Available We study the thermodynamic and structural properties of a flexible homopolymer chain using both the multicanonical Monte Carlo method and the Wang-Landau method. In this work, we focus on the coil-globule transition. Starting from a completely random chain, we have obtained a globule for different sizes of the chain. The implementation of these advanced Monte Carlo methods allowed us to obtain a flat histogram in energy space and to calculate various thermodynamic quantities such as the density of states, the free energy and the specific heat. Structural quantities such as the radius of gyration were also calculated.
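
    A minimal sketch of the Wang-Landau flat-histogram idea referred to above: the walker is accepted with probability min(1, g(E_old)/g(E_new)), the visited bin's log density of states is incremented, and the modification factor is halved whenever the histogram is sufficiently flat. The energy binning and flatness criterion are illustrative assumptions.

```python
import math
import random

def wang_landau(state, energy_of, propose, e_bins, ln_f_final=1e-8, flat_tol=0.8):
    """Minimal Wang-Landau sketch. energy_of(state) must return one of the keys in
    e_bins, and every bin must be reachable, otherwise the flatness test never passes."""
    ln_g = {e: 0.0 for e in e_bins}   # running estimate of ln(density of states)
    hist = {e: 0 for e in e_bins}
    ln_f = 1.0
    e_old = energy_of(state)
    while ln_f > ln_f_final:
        trial = propose(state)
        e_new = energy_of(trial)
        if random.random() < math.exp(ln_g[e_old] - ln_g[e_new]):
            state, e_old = trial, e_new
        ln_g[e_old] += ln_f
        hist[e_old] += 1
        mean_visits = sum(hist.values()) / len(hist)
        if min(hist.values()) > flat_tol * mean_visits:   # histogram "flat enough"
            hist = {e: 0 for e in e_bins}
            ln_f *= 0.5
    return ln_g   # unnormalized; thermodynamic quantities follow from ln_g
```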

  5. Level density in the complex scaling method

    International Nuclear Information System (INIS)

    Suzuki, Ryusuke; Kato, Kiyoshi; Myo, Takayuki

    2005-01-01

    It is shown that the continuum level density (CLD) at unbound energies can be calculated with the complex scaling method (CSM), in which the energy spectra of bound states, resonances and continuum states are obtained in terms of L² basis functions. In this method, the extended completeness relation is applied to the calculation of the Green functions, and the continuum-state part is approximately expressed in terms of discretized complex scaled continuum solutions. The obtained result is compared with the CLD calculated exactly from the scattering phase shift. The discretization in the CSM is shown to give a very good description of continuum states. We discuss how the scattering phase shifts can inversely be calculated from the discretized CLD using a basis function technique in the CSM. (author)
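
    For orientation, the exact relation against which the discretized result is compared can be written, for a single channel, as (notation mine):

```latex
\Delta\rho(E) \;=\; -\frac{1}{\pi}\,\operatorname{Im}\operatorname{Tr}\!\left[\frac{1}{E-H+i\varepsilon}-\frac{1}{E-H_0+i\varepsilon}\right]
\;=\; \frac{1}{\pi}\,\frac{d\delta(E)}{dE},
```

    so that the continuum level density and the scattering phase shift determine each other, which is what allows the phase shifts to be reconstructed from the discretized CLD.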

  6. Stability analysis and time-step limits for a Monte Carlo Compton-scattering method

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.; Warsa, James S.; Lowrie, Robert B.

    2010-01-01

    A Monte Carlo method for simulating Compton scattering in high energy density applications has been presented that models the photon-electron collision kinematics exactly [E. Canfield, W.M. Howard, E.P. Liang, Inverse Comptonization by one-dimensional relativistic electrons, Astrophys. J. 323 (1987) 565]. However, implementing this technique typically requires an explicit evaluation of the material temperature, which can lead to unstable and oscillatory solutions. In this paper, we perform a stability analysis of this Monte Carlo method and develop two time-step limits that avoid undesirable behavior. The first time-step limit prevents instabilities, while the second, more restrictive time-step limit avoids both instabilities and nonphysical oscillations. With a set of numerical examples, we demonstrate the efficacy of these time-step limits.

  7. Quantum Monte Carlo calculations of van der Waals interactions between aromatic benzene rings

    Science.gov (United States)

    Azadi, Sam; Kühne, T. D.

    2018-05-01

    The magnitude of finite-size effects and Coulomb interactions in quantum Monte Carlo simulations of van der Waals interactions between weakly bonded benzene molecules are investigated. To that end, two trial wave functions of the Slater-Jastrow and Backflow-Slater-Jastrow types are employed to calculate the energy-volume equation of state. We assess the impact of the backflow coordinate transformation on the nonlocal correlation energy. We found that the effect of finite-size errors in quantum Monte Carlo calculations on energy differences is particularly large and may even be more important than the employed trial wave function. In addition to the cohesive energy, the singlet excitonic energy gap and the energy gap renormalization of crystalline benzene at different densities are computed.

  8. Monte Carlo theory and practice

    International Nuclear Information System (INIS)

    James, F.

    1987-01-01

    Historically, the first large-scale calculations to make use of the Monte Carlo method were studies of neutron scattering and absorption, random processes for which it is quite natural to employ random numbers. Such calculations, a subset of Monte Carlo calculations, are known as direct simulation, since the 'hypothetical population' of the narrower definition above corresponds directly to the real population being studied. The Monte Carlo method may be applied wherever it is possible to establish equivalence between the desired result and the expected behaviour of a stochastic system. The problem to be solved may already be of a probabilistic or statistical nature, in which case its Monte Carlo formulation will usually be a straightforward simulation, or it may be of a deterministic or analytic nature, in which case an appropriate Monte Carlo formulation may require some imagination and may appear contrived or artificial. In any case, the suitability of the method chosen will depend on its mathematical properties and not on its superficial resemblance to the problem to be solved. The authors show how Monte Carlo techniques may be compared with other methods of solution of the same physical problem

  9. Calculation of Monte Carlo importance functions for use in nuclear-well logging calculations

    International Nuclear Information System (INIS)

    Soran, P.D.; McKeon, D.C.; Booth, T.E.

    1989-07-01

    Importance sampling is essential to the timely solution of Monte Carlo nuclear-logging computer simulations. Achieving minimum variance (maximum precision) of a response in minimum computation time is one criterion for the choice of an importance function. Various methods for calculating importance functions will be presented, new methods investigated, and comparisons with porosity and density tools will be shown. 5 refs., 1 tab

  10. MC 93 - Proceedings of the International Conference on Monte Carlo Simulation in High Energy and Nuclear Physics

    Science.gov (United States)

    Dragovitsch, Peter; Linn, Stephan L.; Burbank, Mimi

    1994-01-01

    The Table of Contents for the book is as follows: * Preface * Heavy Fragment Production for Hadronic Cascade Codes * Monte Carlo Simulations of Space Radiation Environments * Merging Parton Showers with Higher Order QCD Monte Carlos * An Order-αs Two-Photon Background Study for the Intermediate Mass Higgs Boson * GEANT Simulation of Hall C Detector at CEBAF * Monte Carlo Simulations in Radioecology: Chernobyl Experience * UNIMOD2: Monte Carlo Code for Simulation of High Energy Physics Experiments; Some Special Features * Geometrical Efficiency Analysis for the Gamma-Neutron and Gamma-Proton Reactions * GISMO: An Object-Oriented Approach to Particle Transport and Detector Modeling * Role of MPP Granularity in Optimizing Monte Carlo Programming * Status and Future Trends of the GEANT System * The Binary Sectioning Geometry for Monte Carlo Detector Simulation * A Combined HETC-FLUKA Intranuclear Cascade Event Generator * The HARP Nucleon Polarimeter * Simulation and Data Analysis Software for CLAS * TRAP -- An Optical Ray Tracing Program * Solutions of Inverse and Optimization Problems in High Energy and Nuclear Physics Using Inverse Monte Carlo * FLUKA: Hadronic Benchmarks and Applications * Electron-Photon Transport: Always so Good as We Think? Experience with FLUKA * Simulation of Nuclear Effects in High Energy Hadron-Nucleus Collisions * Monte Carlo Simulations of Medium Energy Detectors at COSY Jülich * Complex-Valued Monte Carlo Method and Path Integrals in the Quantum Theory of Localization in Disordered Systems of Scatterers * Radiation Levels at the SSCL Experimental Halls as Obtained Using the CLOR89 Code System * Overview of Matrix Element Methods in Event Generation * Fast Electromagnetic Showers * GEANT Simulation of the RMC Detector at TRIUMF and Neutrino Beams for KAON * Event Display for the CLAS Detector * Monte Carlo Simulation of High Energy Electrons in Toroidal Geometry * GEANT 3.14 vs. EGS4: A Comparison Using the DØ Uranium/Liquid Argon

  11. Population-Level Density Dependence Influences the Origin and Maintenance of Parental Care.

    Directory of Open Access Journals (Sweden)

    Elijah Reyes

    Full Text Available Parental care is a defining feature of animal breeding systems. We now know that both basic life-history characteristics and ecological factors influence the evolution of care. However, relatively little is known about how these factors interact to influence the origin and maintenance of care. Here, we expand upon previous work and explore the relationship between basic life-history characteristics (stage-specific rates of mortality and maturation) and the fitness benefits associated with the origin and the maintenance of parental care for two broad ecological scenarios: the scenario in which egg survival is density dependent and the case in which adult survival is density dependent. Our findings suggest that high offspring need is likely critical in driving the origin, but not the maintenance, of parental care regardless of whether density dependence acts on egg or adult survival. In general, parental care is more likely to result in greater fitness benefits when baseline adult mortality is low if 1) egg survival is density dependent or 2) adult mortality is density dependent and mutant density is relatively high. When density dependence acts on egg mortality, low rates of egg maturation and high egg densities are less likely to lead to strong fitness benefits of care. However, when density dependence acts on adult mortality, high levels of egg maturation and increasing adult densities are less likely to maintain care. Juvenile survival has relatively little, if any, effect on the origin and maintenance of egg-only care. More generally, our results suggest that the evolution of parental care will be influenced by an organism's entire life history characteristics, the stage at which density dependence acts, and whether care is originating or being maintained.

  12. Properties of a planar electric double layer under extreme conditions investigated by classical density functional theory and Monte Carlo simulations.

    Science.gov (United States)

    Zhou, Shiqi; Lamperski, Stanisław; Zydorczak, Maria

    2014-08-14

    Monte Carlo (MC) simulation and classical density functional theory (DFT) results are reported for the structural and electrostatic properties of a planar electric double layer containing ions having highly asymmetric diameters or valencies under extreme concentration condition. In the applied DFT, for the excess free energy contribution due to the hard sphere repulsion, a recently elaborated extended form of the fundamental measure functional is used, and coupling of Coulombic and short range hard-sphere repulsion is described by a traditional second-order functional perturbation expansion approximation. Comparison between the MC and DFT results indicates that validity interval of the traditional DFT approximation expands to high ion valences running up to 3 and size asymmetry high up to diameter ratio of 4 whether the high valence ions or the large size ion are co- or counter-ions; and to a high bulk electrolyte concentration being close to the upper limit of the electrolyte mole concentration the MC simulation can deal with well. The DFT accuracy dependence on the ion parameters can be self-consistently explained using arguments of liquid state theory, and new EDL phenomena such as overscreening effect due to monovalent counter-ions, extreme layering effect of counter-ions, and appearance of a depletion layer with almost no counter- and co-ions are observed.

  13. Monte Carlo and analytic simulations in nanoparticle-enhanced radiation therapy

    Directory of Open Access Journals (Sweden)

    Paro AD

    2016-09-01

    Full Text Available Analytical and Monte Carlo simulations have been used to predict dose enhancement factors in nanoparticle-enhanced X-ray radiation therapy. Both simulations predict an increase in dose enhancement in the presence of nanoparticles, but the two methods predict different levels of enhancement over the studied energy, nanoparticle materials, and concentration regime for several reasons. The Monte Carlo simulation calculates energy deposited by electrons and photons, while the analytical one only calculates energy deposited by source photons and photoelectrons; the Monte Carlo simulation accounts for electron–hole recombination, while the analytical one does not; and the Monte Carlo simulation randomly samples photon or electron path and accounts for particle interactions, while the analytical simulation assumes a linear trajectory. This study demonstrates that the Monte Carlo simulation will be a better choice to evaluate dose enhancement with nanoparticles in radiation therapy. Keywords: nanoparticle, dose enhancement, Monte Carlo simulation, analytical simulation, radiation therapy, tumor cell, X-ray

  14. Adaptive anisotropic diffusion filtering of Monte Carlo dose distributions

    International Nuclear Information System (INIS)

    Miao Binhe; Jeraj, Robert; Bao Shanglian; Mackie, Thomas R

    2003-01-01

    The Monte Carlo method is the most accurate method for radiotherapy dose calculations, if used correctly. However, any Monte Carlo dose calculation is burdened with statistical noise. In this paper, denoising of Monte Carlo dose distributions with a three-dimensional adaptive anisotropic diffusion method was investigated. The standard anisotropic diffusion method was extended by changing the filtering parameters adaptively according to the local statistical noise. Smoothing of dose distributions with different noise levels is shown for an inhomogeneous phantom, a conventional treatment case and an IMRT treatment case. The resultant dose distributions were analysed using several evaluating criteria. It is shown that the adaptive anisotropic diffusion method can reduce statistical noise significantly (two to five times, corresponding to the reduction of simulation time by a factor of up to 20), while preserving important gradients of the dose distribution well. The choice of free parameters of the method was found to be fairly robust
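
    A minimal Perona-Malik-style sketch of the underlying diffusion step is given below; the adaptive, noise-dependent choice of the conductance parameter that the paper introduces is not reproduced, and kappa, dt and the iteration count are illustrative values relative to the dose scale.

```python
import numpy as np

def anisotropic_diffusion(dose, n_iter=10, kappa=0.05, dt=0.1):
    """Edge-preserving smoothing of an N-dimensional dose array: diffuse strongly in
    flat (noisy) regions and weakly across large dose gradients."""
    d = dose.astype(float).copy()
    for _ in range(n_iter):
        flux_div = np.zeros_like(d)
        for axis in range(d.ndim):
            fwd = np.diff(d, axis=axis, append=np.take(d, [-1], axis=axis))  # d[i+1] - d[i]
            bwd = np.diff(d, axis=axis, prepend=np.take(d, [0], axis=axis))  # d[i] - d[i-1]
            c_fwd = np.exp(-(fwd / kappa) ** 2)   # conductance shrinks at strong gradients
            c_bwd = np.exp(-(bwd / kappa) ** 2)
            flux_div += c_fwd * fwd - c_bwd * bwd
        d += dt * flux_div
    return d
```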

  15. Spectral density of Cooper pairs in two level quantum dot–superconductors Josephson junction

    Energy Technology Data Exchange (ETDEWEB)

    Dhyani, A., E-mail: archana.d2003@gmail.com [Department of Physics, University of Petroleum and Energy Studies, Dehradun 248007, Uttarakhand (India); Rawat, P.S. [Department of Nuclear Science and Technology, University of Petroleum and Energy Studies, Dehradun 248007, Uttarakhand (India); Tewari, B.S., E-mail: bstewari@ddn.upes.ac.in [Department of Physics, University of Petroleum and Energy Studies, Dehradun 248007, Uttarakhand (India)

    2016-09-15

    Highlights: • The present work deals with the study of the electronic spectral density of electron pairs and its effect in charge transport in superconductor-quantum dot-superconductor junctions. • The charge transfer across such junctions can be controlled by changing the positions of the dot level. • The Josephson supercurrent can also be tuned by controlling the position of quantum dot energy levels. - Abstract: In the present paper, we report the role of quantum dot energy levels on the electronic spectral density for a two level quantum dot coupled to s-wave superconducting leads. The theoretical arguments in this work are based on the Anderson model so that it necessarily includes dot energies, single particle tunneling and superconducting order parameter for BCS superconductors. The expression for single particle spectral function is obtained by using the Green's function equation of motion technique. On the basis of numerical computation of spectral function of superconducting leads, it has been found that the charge transfer across such junctions can be controlled by the positions and availability of the dot levels.

  16. Automated-biasing approach to Monte Carlo shipping-cask calculations

    International Nuclear Information System (INIS)

    Hoffman, T.J.; Tang, J.S.; Parks, C.V.; Childs, R.L.

    1982-01-01

    Computer Sciences at Oak Ridge National Laboratory, under a contract with the Nuclear Regulatory Commission, has developed the SCALE system for performing standardized criticality, shielding, and heat transfer analyses of nuclear systems. During the early phase of shielding development in SCALE, it was established that Monte Carlo calculations of radiation levels exterior to a spent fuel shipping cask would be extremely expensive. This cost can be substantially reduced by proper biasing of the Monte Carlo histories. The purpose of this study is to develop and test an automated biasing procedure for the MORSE-SGC/S module of the SCALE system

  17. Development of radiation detection and measurement systems - Development of level gauge and density gauge

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Su Man; Kim, Sung Hun; Jang, Jung Hun; Yun, Mung Hun; Yun Jun Hyung; Kang, Sung Youn [Techvalley co., Ltd., Research Center, Seoul (Korea)

    2000-03-01

    -Pervasive effect of R and D results. Technical development of level/density measuring instruments has a significant effect on the quality testing of various products in the field of heavy industry. As measurement of flow becomes increasingly important in plant design in the chemical industry, development of our products is applicable to various equipment across industrial fields. -Applications of R and D results. Technical development of level/density measurement overcomes the difficulty of inspecting the internal conditions of chemical plants, by transmission through metal materials in a non-destructive manner, and thereby enables non-destructive flow and level tests in industrial fields. 11 refs., 19 figs., 4 tabs. (Author)

  18. Level Density In Interacting Boson-Fermion-Fermion Model (IBFFM) Of The Odd-Odd Nucleus 196Au

    International Nuclear Information System (INIS)

    Kabashi, Skender; Bekteshi, Sadik

    2007-01-01

    The level density of the odd-odd nucleus 196Au is investigated in the interacting boson-fermion-fermion model (IBFFM), which accounts for collectivity and the complex interaction between quasiparticle and collective modes. The IBFFM total level density is fitted by a Gaussian, and its tail is also fitted by the Bethe formula and the constant-temperature Fermi gas model
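
    For orientation, the two fit forms mentioned are commonly written as follows (standard textbook expressions, not taken from the paper; a is the level-density parameter, E0 and T are fit constants):

```latex
\rho_{\mathrm{FG}}(E) \;\propto\; \frac{\exp\!\left(2\sqrt{aE}\right)}{E^{5/4}},
\qquad
\rho_{\mathrm{CT}}(E) \;=\; \frac{1}{T}\exp\!\left(\frac{E-E_{0}}{T}\right).
```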

  19. Genetically elevated apolipoprotein A-I, high-density lipoprotein cholesterol levels, and risk of ischemic heart disease

    DEFF Research Database (Denmark)

    Lundegaard, Christiane; Tybjærg-Hansen, Anne; Grande, Peer

    2010-01-01

    Epidemiologically, levels of high-density lipoprotein (HDL) cholesterol and its major protein constituent, apolipoprotein A-I (apoA-I), are inversely related to risk of ischemic heart disease (IHD).

  20. Configuration Path Integral Monte Carlo. Ab initio simulations of fermions in the warm dense matter regime

    Energy Technology Data Exchange (ETDEWEB)

    Schoof, Tim

    2017-03-08

    The reliable quantum mechanical description of thermodynamic properties of fermionic many-body systems at high densities and strong degeneracy is of increasing interest due to recent experimental progress in generating systems that exhibit a non-trivial interplay of quantum, temperature, and coupling effects. While quantum Monte Carlo methods are among the most accurate approaches for the description of the ground state, finite-temperature path integral Monte Carlo (PIMC) simulations cannot correctly describe weakly to moderately coupled and strongly degenerate Fermi systems due to the so-called fermion sign problem. By switching from the coordinate representation to a basis of anti-symmetric Slater-determinants, the Configuration Path Integral Monte Carlo (CPIMC) approach greatly reduces the sign problem and allows for the exact computation of thermodynamic properties in this regime. During this work, the CPIMC algorithm was greatly improved in terms of efficiency and accessible observables. The first successful implementation of the diagrammatic worm algorithm for a general Hamiltonian in Fock space with arbitrary pair interactions gives direct access to the Matsubara Green function. This allows for the reconstruction of dynamic properties from simulations in thermodynamic equilibrium and significantly reduces the statistical variance of derived estimators, such as the one-particle density. The strongly improved MC sampling, the much more efficient calculation of update probabilities, and the successful parallelization to thousands of CPU cores, which have been achieved as part of the new implementation, are essential for the subsequent application of the method to much larger systems than in previous works. This thesis demonstrates the capabilities of the CPIMC approach for a model system of Coulomb interacting fermions in a two-dimensional harmonic trap. The correctness of the CPIMC implementation is verified by rigorous comparisons with an exact

  1. Monte Carlo Methods in Physics

    International Nuclear Information System (INIS)

    Santoso, B.

    1997-01-01

    The method of Monte Carlo integration is reviewed briefly and some of its applications in physics are explained. A numerical experiment on the random number generators used in Monte Carlo techniques is carried out to show the randomness behavior of the various generation methods. To account for the weight function involved in the Monte Carlo, the Metropolis method is used. The results of the experiment show no regular patterns in the generated numbers, indicating that the generators are reasonably good, while the experimental distributions obey the expected statistical laws. Some applications of Monte Carlo methods in physics are then given. The physical problems are chosen such that the models have exact or approximate solutions available, so that comparisons can be made with the Monte Carlo calculations. The comparisons show that good agreement is obtained for the models considered

  2. Monte Carlo techniques in radiation therapy

    CERN Document Server

    Verhaegen, Frank

    2013-01-01

    Modern cancer treatment relies on Monte Carlo simulations to help radiotherapists and clinical physicists better understand and compute radiation dose from imaging devices as well as exploit four-dimensional imaging data. With Monte Carlo-based treatment planning tools now available from commercial vendors, a complete transition to Monte Carlo-based dose calculation methods in radiotherapy could likely take place in the next decade. Monte Carlo Techniques in Radiation Therapy explores the use of Monte Carlo methods for modeling various features of internal and external radiation sources, including light ion beams. The book, the first of its kind, addresses applications of the Monte Carlo particle transport simulation technique in radiation therapy, mainly focusing on external beam radiotherapy and brachytherapy. It presents the mathematical and technical aspects of the methods in particle transport simulations. The book also discusses the modeling of medical linacs and other irradiation devices; issues specific...

  3. Multilevel markov chain monte carlo method for high-contrast single-phase flow problems

    KAUST Repository

    Efendiev, Yalchin R.

    2014-12-19

    In this paper we propose a general framework for the uncertainty quantification of quantities of interest for high-contrast single-phase flow problems. It is based on the generalized multiscale finite element method (GMsFEM) and multilevel Monte Carlo (MLMC) methods. The former provides a hierarchy of approximations of different resolution, whereas the latter gives an efficient way to estimate quantities of interest using samples on different levels. The number of basis functions in the online GMsFEM stage can be varied to determine the solution resolution and the computational cost, and to efficiently generate samples at different levels. In particular, it is cheap to generate samples on coarse grids but with low resolution, and it is expensive to generate samples on fine grids with high accuracy. By suitably choosing the number of samples at different levels, one can leverage the expensive computation in larger fine-grid spaces toward smaller coarse-grid spaces, while retaining the accuracy of the final Monte Carlo estimate. Further, we describe a multilevel Markov chain Monte Carlo method, which sequentially screens the proposal with different levels of approximations and reduces the number of evaluations required on fine grids, while combining the samples at different levels to arrive at an accurate estimate. The framework seamlessly integrates the multiscale features of the GMsFEM with the multilevel feature of the MLMC methods following the work in [26], and our numerical experiments illustrate its efficiency and accuracy in comparison with standard Monte Carlo estimates. © Global Science Press Limited 2015.
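
    The multilevel idea summarized above rests on the standard telescoping decomposition of the fine-level expectation (notation mine): each correction term is cheap to estimate because the two quantities are computed from the same sample and are strongly correlated.

```latex
\mathbb{E}[Q_L] \;=\; \mathbb{E}[Q_0] \;+\; \sum_{\ell=1}^{L}\mathbb{E}\!\left[Q_\ell - Q_{\ell-1}\right],
\qquad
\widehat{Q}_{\mathrm{MLMC}} \;=\; \sum_{\ell=0}^{L}\frac{1}{N_\ell}\sum_{i=1}^{N_\ell}\left(Q_\ell^{(i)} - Q_{\ell-1}^{(i)}\right),
\quad Q_{-1}\equiv 0,
```

    with many cheap coarse-level samples (large N_0) and few expensive fine-level samples (small N_L), chosen to balance the variance contributions of the levels.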

  4. Monte Carlo simulation for IRRMA

    International Nuclear Information System (INIS)

    Gardner, R.P.; Liu Lianyan

    2000-01-01

    Monte Carlo simulation is fast becoming a standard approach for many radiation applications that were previously treated almost entirely by experimental techniques. This is certainly true for Industrial Radiation and Radioisotope Measurement Applications - IRRMA. The reasons for this include: (1) the increased cost and inadequacy of experimentation for design and interpretation purposes; (2) the availability of low cost, large memory, and fast personal computers; and (3) the general availability of general purpose Monte Carlo codes that are increasingly user-friendly, efficient, and accurate. This paper discusses the history and present status of Monte Carlo simulation for IRRMA including the general purpose (GP) and specific purpose (SP) Monte Carlo codes and future needs - primarily from the experience of the authors

  5. An introduction to applied quantum mechanics in the Wigner Monte Carlo formalism

    International Nuclear Information System (INIS)

    Sellier, J.M.; Nedjalkov, M.; Dimov, I.

    2015-01-01

    The Wigner formulation of quantum mechanics is a very intuitive approach which allows the comprehension and prediction of quantum mechanical phenomena in terms of quasi-distribution functions. In this review, our aim is to provide a detailed introduction to this theory along with a Monte Carlo method for the simulation of time-dependent quantum systems evolving in a phase-space. This work consists of three main parts. First, we introduce the Wigner formalism, then we discuss in detail the Wigner Monte Carlo method and, finally, we present practical applications. In particular, the Wigner model is first derived from the Schrödinger equation. Then a generalization of the formalism due to Moyal is provided, which allows one to recover important mathematical properties of the model. Next, the Wigner equation is further generalized to the case of many-body quantum systems. Finally, a physical interpretation of the negative part of a quasi-distribution function is suggested. In the second part, the Wigner Monte Carlo method, based on the concept of signed (virtual) particles, is introduced in detail for the single-body problem. Two extensions of the Wigner Monte Carlo method to quantum many-body problems are introduced, in the frameworks of time-dependent density functional theory and ab-initio methods. Finally, in the third and last part of this paper, applications to single- and many-body problems are performed in the context of quantum physics and quantum chemistry, specifically focusing on the hydrogen, lithium and boron atoms, the H2 molecule and a system of two identical Fermions. We conclude this work with a discussion on the still unexplored directions the Wigner Monte Carlo method could take in the near future

  6. Accuracy and borehole influences in pulsed neutron gamma density logging while drilling.

    Science.gov (United States)

    Yu, Huawei; Sun, Jianmeng; Wang, Jiaxin; Gardner, Robin P

    2011-09-01

    A new pulsed neutron gamma density (NGD) logging method has been developed to replace radioactive chemical sources in oil logging tools. The present paper describes studies of the near and far density measurement accuracy of NGD logging at two spacings, and of the borehole influences, using Monte Carlo simulation. The results show that the accuracy of the near density is not as good as that of the far density. It is difficult to correct for borehole effects using conventional methods, because both the near and far density measurements are significantly sensitive to standoffs and mud properties.

  7. Correlation between CT numbers and tissue parameters needed for Monte Carlo simulations of clinical dose distributions

    Science.gov (United States)

    Schneider, Wilfried; Bortfeld, Thomas; Schlegel, Wolfgang

    2000-02-01

    We describe a new method to convert CT numbers into mass density and elemental weights of tissues required as input for dose calculations with Monte Carlo codes such as EGS4. As a first step, we calculate the CT numbers for 71 human tissues. To reduce the effort for the necessary fits of the CT numbers to mass density and elemental weights, we establish four sections on the CT number scale, each confined by selected tissues. Within each section, the mass density and elemental weights of the selected tissues are interpolated. For this purpose, functional relationships between the CT number and each of the tissue parameters, valid for media which are composed of only two components in varying proportions, are derived. Compared with conventional data fits, no loss of accuracy is accepted when using the interpolation functions. Assuming plausible values for the deviations of calculated and measured CT numbers, the mass density can be determined with an accuracy better than 0.04 g cm⁻³. The weights of phosphorus and calcium can be determined with maximum uncertainties of 1 or 2.3 percentage points (pp), respectively. Similar values can be achieved for hydrogen (0.8 pp) and nitrogen (3 pp). For carbon and oxygen weights, errors up to 14 pp can occur. The influence of the elemental weights on the results of Monte Carlo dose calculations is investigated and discussed.
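
    A minimal sketch of such a sectioned conversion is shown below; the section boundaries and densities are illustrative placeholder values, not the fitted numbers from this work.

```python
import bisect

HU_BREAKS = [-1000, -100, 0, 100, 1500]          # section boundaries in CT number (assumed)
RHO_AT_BREAKS = [0.001, 0.93, 1.00, 1.07, 1.90]  # mass density in g/cm^3 at the boundaries (assumed)

def hu_to_density(hu):
    """Piecewise-linear interpolation of mass density within the section containing hu."""
    hu = min(max(hu, HU_BREAKS[0]), HU_BREAKS[-1])
    i = min(bisect.bisect_right(HU_BREAKS, hu), len(HU_BREAKS) - 1)
    t = (hu - HU_BREAKS[i - 1]) / (HU_BREAKS[i] - HU_BREAKS[i - 1])
    return RHO_AT_BREAKS[i - 1] + t * (RHO_AT_BREAKS[i] - RHO_AT_BREAKS[i - 1])

print(round(hu_to_density(40), 3))  # ~1.028 g/cm^3 for soft tissue (illustrative values)
```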

  8. Study on the relationship between serum testosterone level and forearm distal bone density in post-menopausal women

    International Nuclear Information System (INIS)

    Li Wenqi; Zhou Zhengli; Li Xin; Zhou Jiwen

    2002-01-01

    Objective: To study the relationship between the androgen level and bone density in post-menopausal women. Methods: Serum testosterone (T) level and forearm distal bone density (BMD) were measured in 39 post-menopausal women who had never taken any estrogen or calcium preparation. Their serum estradiol (E2) levels were about the same. According to their BMD, the 39 subjects were divided into normal (n = 22) and osteoporotic (n = 17) groups. Results: The mean serum testosterone (T) level in the normal group was significantly higher than that in the osteoporotic group. Serum T correlated positively with BMD in both groups (r1 = 0.72, r2 = 0.75); the difference between r1 and r2 was 0.14, suggesting a similar positive correlation in both groups (p > 0.05). Conclusion: Serum testosterone level seems to bear a close relationship with bone density and osteoporosis

  9. VBFNLO. A parton level Monte Carlo for processes with electroweak bosons. Manual for Version 2.5.0

    Energy Technology Data Exchange (ETDEWEB)

    Arnold, K.; Bellm, J. [Karlsruhe Institute of Technology, Karlsruhe (Germany). Inst. fuer Theoretische Physik; Bozzi, G. [Milano-Bicocca Univ. (Italy). Dipt. di Fisica; INFN, Sezione di Milano-Bicocca (IT)] (and others)

    2011-08-15

    VBFNLO is a flexible parton level Monte Carlo program for the simulation of vector boson fusion, double and triple vector boson production in hadronic collisions at next-to-leading order (NLO) in the strong coupling constant, as well as Higgs boson plus two jet production via gluon fusion at the one-loop level. In the new release - Version 2.5.0 - several new processes have been added at NLO QCD: vector boson fusion production of a Higgs boson plus a photon, vector boson fusion production of a photon, Wγ and WZ production plus a hadronic jet and the triboson production processes WWγ, ZZγ, WZγ, Wγγ, Zγγ and γγγ. The code has been extended to run in the Minimal Supersymmetric Standard Model (MSSM), and electroweak corrections to Higgs boson production via weak boson fusion have been included. Anomalous gauge boson couplings can be used in new processes and the Three-Site Higgsless model has been implemented for several processes. The simulation of Higgs boson production via gluon fusion has been improved. (orig.)

  10. Level density and thermal properties in rare earth nuclei

    International Nuclear Information System (INIS)

    Siem, S.; Schiller, A.; Guttormsen, M.; Hjorth-Jensen, M.; Melby, E.; Rekstad, J.

    2000-01-01

    The level density at low spin has been extracted for several nuclei in the rare earth region using the (3He,α) reaction. Within the framework of the microcanonical ensemble, the entropy and the temperature of the nuclei are derived. The temperature curve shows bumps which are associated with the break-up of Cooper pairs. The entropies of the even-even and even-odd nuclei have been compared. The nuclear heat capacity is deduced within the framework of the canonical ensemble and exhibits an S shape as a function of temperature. (author)
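
    The microcanonical quantities referred to follow from the measured level density through the standard relations (with k_B set to 1; notation mine), while the heat capacity is evaluated in the canonical ensemble:

```latex
S(E) \;=\; \ln \rho(E),
\qquad
\frac{1}{T(E)} \;=\; \frac{\partial S(E)}{\partial E},
\qquad
C_V(T) \;=\; \frac{\partial \langle E \rangle}{\partial T}.
```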

  11. Hybrid SN/Monte Carlo research and results

    International Nuclear Information System (INIS)

    Baker, R.S.

    1993-01-01

    The neutral particle transport equation is solved by a hybrid method that iteratively couples regions where deterministic (SN) and stochastic (Monte Carlo) methods are applied. The Monte Carlo and SN regions are fully coupled in the sense that no assumption is made about geometrical separation or decoupling. The hybrid Monte Carlo/SN method provides a new means of solving problems involving both optically thick and optically thin regions that neither Monte Carlo nor SN is well suited for by themselves. The hybrid method has been successfully applied to realistic shielding problems. The vectorized Monte Carlo algorithm in the hybrid method has been ported to the massively parallel architecture of the Connection Machine. Comparisons of performance on a vector machine (Cray Y-MP) and the Connection Machine (CM-2) show that significant speedups are obtainable for vectorized Monte Carlo algorithms on massively parallel machines, even when realistic problems requiring variance reduction are considered. However, the architecture of the Connection Machine does place some limitations on the regime in which the Monte Carlo algorithm may be expected to perform well

  12. Low energy level density and surface instabilities in heavy transition nuclei

    International Nuclear Information System (INIS)

    Wieclawik, W. de; Foucher, R.; Dionisio, J.S.; Vieu, C.; Hoglund, A.; Watzig, W.

    1975-01-01

    A statistical analysis of Au, Pt and Hg nuclear levels was performed with Ericson's method. The experimental level-number distributions of the odd-mass gold isotopes are compared to the theoretical distributions corresponding to vibrational (Alaga and Kisslinger-Sorensen) and rotational (Stephens, Meyer-ter-Vehn) models. The Alaga model gives the most complete description of the 193Au and 195Au levels and fits the lowest part of the Gilbert-Cameron high-energy distributions (deduced from the statistical model and neutron capture data). Ericson's method shows other interesting features of the Pt and Hg isotopes (i.e. level density dependence on nuclear shape and pairing correlations, evidence for phase transitions). Consequently, this method is a useful tool for guiding experimental as well as theoretical investigations of transition nuclei [fr

  13. A Monte Carlo study of radiation trapping effects

    International Nuclear Information System (INIS)

    Wang, J.B.; Williams, J.F.; Carter, C.J.

    1997-01-01

    A Monte Carlo simulation of radiative transfer in an atomic beam is carried out to investigate the effects of radiation trapping on electron-atom collision experiments. The collisionally excited atom is represented by a simple electric dipole, for which the emission intensity distribution is well known. The spatial distribution, frequency and free path of this and the sequential dipoles were determined by a computer random number generator according to the probabilities given by quantum theory. By altering the atomic number density at the target site, the pressure dependence of the observed atomic lifetime, the angular intensity distribution and polarisation of the radiation field is studied. 7 refs., 5 figs

  14. Monte Carlo simulation of molecular flow in a neutral-beam injector and comparison with experiment

    International Nuclear Information System (INIS)

    Lillie, R.A.; Gabriel, T.A.; Schwenterly, S.W.; Alsmiller, R.G. Jr.; Santoro, R.T.

    1981-09-01

    Monte Carlo calculations have been performed to obtain estimates of the background gas pressure and molecular number density as a function of position in the PDX-prototype neutral beam injector which has undergone testing at the Oak Ridge National Laboratory. Estimates of these quantities together with the transient and steady-state energy deposition and molecular capture rates on the cryopanels of the cryocondensation pumps and the molecular escape rate from the injector were obtained utilizing a detailed geometric model of the neutral beam injector. The molecular flow calculations were performed using an existing Monte Carlo radiation transport code which was modified slightly to monitor the energy of the background gas molecules. The credibility of these calculations is demonstrated by the excellent agreement between the calculated and experimentally measured background gas pressure in front of the beamline calorimeter located in the downstream drift region of the injector. The usefulness of the calculational method as a design tool is illustrated by a comparison of the integrated beamline molecular density over the drift region of the injector for three modes of cryopump operation

  15. Prospective activity levels in the regions of the UKCS under different oil and gas prices: an application of the Monte Carlo technique

    International Nuclear Information System (INIS)

    Kemp, A.G.; Stephen, L.

    1999-01-01

    This paper summarises the results of a study using the Monte Carlo simulation to examine activity levels in the regions of the UK continental shelf under different oil and gas prices. Details of the methodology, data, and assumptions used are given, and the production of oil and gas, new field investment, aggregate operating expenditures, and gross revenues under different price scenarios are addressed. The total potential oil and gas production under the different price scenarios for 2000-2013 are plotted. (UK)

  16. Monte Carlo modeling of electron density in hypersonic rarefied gas flows

    Energy Technology Data Exchange (ETDEWEB)

    Fan, Jin; Zhang, Yuhuai; Jiang, Jianzheng [State Key Laboratory of High Temperature Gas Dynamics, Institute of Mechanics, Chinese Academy of Sciences, Beijing 100190 (China)

    2014-12-09

    The electron density distribution around a vehicle employed in the RAM-C II flight test is calculated with the DSMC method. To resolve the mole fraction of electrons, which is several orders of magnitude lower than those of the primary species in the free stream, an algorithm named trace species separation (TSS) is utilized. The TSS algorithm solves the primary and trace species separately, which is similar to the DSMC overlay techniques; however, it generates new simulated molecules of the trace species, such as ions and electrons, in each cell directly from the ionization and recombination rates, which differs from the DSMC overlay techniques based on probabilistic models. The electron density distributions computed by TSS agree well with the flight data measured in the RAM-C II test along a descent trajectory at three altitudes of 81 km, 76 km, and 71 km.

  17. Rotational-mode component of the density of levels of nuclei with A ≲ 150

    International Nuclear Information System (INIS)

    Rastopchin, E.M.; Svirin, M.I.; Smirenkin, G.N.

    1992-01-01

    Some difficulties which arise in the use of the generalized superfluid model to describe the density of levels in the region A ≲ 150, as the result of an imperfect understanding of collective nuclear excitations, are discussed. One possible way to overcome these difficulties is examined. The idea is to depart from the conventional classification of collective nuclear properties and make use of small static deformations predicted theoretically and a corresponding rotational-mode component of the density of levels of these nuclei

  18. Overview and applications of the Monte Carlo radiation transport kit at LLNL

    International Nuclear Information System (INIS)

    Sale, K. E.

    1999-01-01

    Modern Monte Carlo radiation transport codes can be applied to model most applications of radiation, from optical to TeV photons, from thermal neutrons to heavy ions. Simulations can include any desired level of detail in three-dimensional geometries using the right level of detail in the reaction physics. The technology areas to which we have applied these codes include medical applications, defense, safety and security programs, nuclear safeguards and industrial and research system design and control. The main reason such applications are interesting is that by using these tools substantial savings of time and effort (i.e. money) can be realized. In addition it is possible to separate out and investigate computationally effects which cannot be isolated and studied in experiments. In model calculations, just as in real life, one must take care in order to get the correct answer to the right question. Advancing computing technology allows extensions of Monte Carlo applications in two directions. First, as computers become more powerful, more problems can be accurately modeled. Second, as computing power becomes cheaper, Monte Carlo methods become more widely accessible. An overview of the set of Monte Carlo radiation transport tools in use at LLNL will be presented along with a few examples of applications and future directions

  19. Effect of planting methods, seed density and nitrogen phosphorus (NP) fertilizer levels on sweet corn (Zea mays L.)

    International Nuclear Information System (INIS)

    Amin, M.; Razzaq, A.; Ullah, R.

    2006-01-01

    A field experiment was conducted to evaluate the effect of planting methods, seed density and nitrogen phosphorus (NP) fertilizer levels on emergence per m², growth and grain yield of sweet corn. The fertilizer level and the fertilizer x seed density interaction had a significant negative effect on emergence per m² with increasing level, while seed density had a positive effect with increased density. Increased seed density significantly reduced plant growth, which increased with application of a higher fertilizer dose. The grain yield was improved by the ridge planting method, increased seed density and increased fertilizer levels. The highest grain yield (3,553.50 kg ha⁻¹) of sweet corn was recorded for the ridge planting method with the highest NP fertilizer level of 300:150 kg ha⁻¹ and 4 seeds hill⁻¹. The lowest grain yield (3,493.75 kg ha⁻¹) was observed for the flat sowing method with the 120:75 NP level and 2 seeds hill⁻¹. On the basis of grain yield per hectare, ridge planting ranked first, followed by the furrow and flat planting methods. The sweet corn yield was higher with 4 seeds hill⁻¹ compared with 2 seeds hill⁻¹. (author)

  20. (U) Introduction to Monte Carlo Methods

    Energy Technology Data Exchange (ETDEWEB)

    Hungerford, Aimee L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-20

    Monte Carlo methods are very valuable for representing solutions to particle transport problems. Here we describe a “cook book” approach to handling the terms in a transport equation using Monte Carlo methods. Focus is on the mechanics of a numerical Monte Carlo code, rather than the mathematical foundations of the method.
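
    To illustrate the kind of mechanics such an introduction covers, the sketch below samples free-flight distances from an exponential distribution and tallies the uncollided transmission through a slab. It is a minimal, generic illustration, not material from the report; the names sigma_t and slab_thickness and the numerical values are assumptions made here for the example.

        import math
        import random

        def transmitted_fraction(sigma_t, slab_thickness, n_particles=100_000, seed=1):
            """Estimate uncollided transmission through a purely absorbing slab.

            sigma_t        -- total macroscopic cross section (1/cm), assumed constant
            slab_thickness -- slab thickness (cm)
            """
            rng = random.Random(seed)
            transmitted = 0
            for _ in range(n_particles):
                # Sample the distance to the first collision from p(s) = sigma_t * exp(-sigma_t * s).
                s = -math.log(1.0 - rng.random()) / sigma_t
                if s > slab_thickness:
                    transmitted += 1
            return transmitted / n_particles

        # The analytical uncollided transmission is exp(-sigma_t * thickness),
        # which the estimate should approach as n_particles grows.
        print(transmitted_fraction(sigma_t=0.5, slab_thickness=2.0))  # about exp(-1) = 0.368

    Scattering, energy dependence and tallies beyond a simple counter would be layered on top of this basic sampling step.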

  1. Kinetic Monte Carlo modeling of the efficiency roll-off in a multilayer white organic light-emitting device

    NARCIS (Netherlands)

    Mesta, M.; van Eersel, H.; Coehoorn, R.; Bobbert, P.A.

    2016-01-01

    Triplet-triplet annihilation (TTA) and triplet-polaron quenching (TPQ) in organic light-emitting devices (OLEDs) lead to a roll-off of the internal quantum efficiency (IQE) with increasing current density J. We employ a kinetic Monte Carlo modeling study to analyze the measured IQE and color balance

  2. Results of the Monte Carlo 'simple case' benchmark exercise

    International Nuclear Information System (INIS)

    2003-11-01

    A new 'simple case' benchmark intercomparison exercise was launched, intended to study the importance of the fundamental nuclear data constants, physics treatments and geometry model approximations employed by Monte Carlo codes in common use. The exercise was also directed at determining the level of agreement which can be expected between measured and calculated quantities, using current state-of-the-art modelling codes and techniques. To this end, measurements and Monte Carlo calculations of the total (or gross) neutron count rates have been performed using a simple moderated ³He cylindrical proportional counter array, or 'slab monitor', counting geometry; a deliberately simple geometry was selected for this exercise.

  3. Carlo Caso (1940 - 2007)

    CERN Multimedia

    Leonardo Rossi

    Carlo Caso (1940 - 2007) Our friend and colleague Carlo Caso passed away on July 7th, after several months of a courageous fight against cancer. Carlo spent most of his scientific career at CERN, taking an active part in the experimental programme of the laboratory. His long and fruitful involvement in particle physics started in the sixties, in the Genoa group led by G. Tomasini. He then carried out several experiments using the CERN liquid hydrogen bubble chambers -first the 2000HBC and later BEBC- to study various facets of the production and decay of meson and baryon resonances. He later formed his own group and joined the NA27 Collaboration to exploit the EHS Spectrometer with a rapid cycling bubble chamber as vertex detector. Amongst their many achievements, they were the first to measure, with excellent precision, the lifetime of the charmed D mesons. At the start of the LEP era, Carlo and his group moved to the DELPHI experiment, participating in the construction and running of the HPC electromagnetic c...

  4. Atomic-level computer simulation

    International Nuclear Information System (INIS)

    Adams, J.B.; Rockett, Angus; Kieffer, John; Xu Wei; Nomura, Miki; Kilian, K.A.; Richards, D.F.; Ramprasad, R.

    1994-01-01

    This paper provides a broad overview of the methods of atomic-level computer simulation. It discusses methods of modelling atomic bonding, and computer simulation methods such as energy minimization, molecular dynamics, Monte Carlo, and lattice Monte Carlo. ((orig.))

  5. Analysis of Monte Carlo methods for the simulation of photon transport

    International Nuclear Information System (INIS)

    Carlsson, G.A.; Kusoffsky, L.

    1975-01-01

    In connection with the transport of low-energy photons (30 - 140 keV) through layers of water of different thicknesses, various aspects of Monte Carlo methods are examined in order to improve their efficiency (to produce statistically more reliable results with shorter computer times) and to bridge the gap between more physical methods and more mathematical ones. The calculations are compared with results of experiments involving the simulation of photon transport, using direct methods and collision-density ones (J.S.)

  6. A Monte-Carlo study of landmines detection by neutron backscattering method

    International Nuclear Information System (INIS)

    Maucec, M.; De Meijer, R.J.

    2000-01-01

    The use of Monte-Carlo simulations for modelling a simplified landmine detector system with a ²⁵²Cf neutron source is presented in this contribution. Different aspects and a variety of external conditions affecting the localisation and identification of a buried suspicious object (such as a landmine) have been tested. Results of sensitivity calculations confirm that landmine detection methods based on the analysis of backscattered neutron radiation can be applicable in higher-density formations with a pore-water mass fraction of less than 15%. (author)

  7. Monte Carlo description of gas flow from laser-evaporated silver

    DEFF Research Database (Denmark)

    Ellegaard, O.; Schou, Jørgen; Urbassek, H.M.

    1999-01-01

    and evaporation rates. These realistic experimental input parameters are further combined with a direct simulation Monte Carlo (DSMC) description of collisions in the gas flow of ablated surface atoms. With this method, new data of plume development and collision processes in the beginning of the ablation process...... can be extracted. It also allows us to identify important processes by comparing the computational results with experimental ones, such as density, energy, and angular distributions. Our main results deviate only slightly from an earlier study with constant surface temperature and evaporation rate...

  8. Noise-level determination for discrete spectra with Gaussian or Lorentzian probability density functions

    International Nuclear Information System (INIS)

    Moriya, Netzer

    2010-01-01

    A method, based on binomial filtering, to estimate the noise level of an arbitrary, smoothed pure signal, contaminated with an additive, uncorrelated noise component is presented. If the noise characteristics of the experimental spectrum are known, as for instance the type of the corresponding probability density function (e.g., Gaussian), the noise properties can be extracted. In such cases, both the noise level, as may arbitrarily be defined, and a simulated white noise component can be generated, such that the simulated noise component is statistically indistinguishable from the true noise component present in the original signal. In this paper we present a detailed analysis of the noise level extraction when the additive noise is Gaussian or Lorentzian. We show that the statistical parameters in these cases (mainly the variance and the half width at half maximum, respectively) can directly be obtained from the experimental spectrum even when the pure signal is erratic. Further discussion is given for cases where the noise probability density function is initially unknown.
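
    As a rough illustration of the kind of estimate described above (not the author's binomial-filtering procedure itself), the sketch below infers the standard deviation of additive Gaussian noise from second differences of a sampled signal, assuming the pure signal varies slowly compared with the sampling grid. The factor sqrt(6) follows from Var(y[i-1] - 2*y[i] + y[i+1]) = 6*sigma^2 for white noise; all numerical values are invented for the example.

        import numpy as np

        def estimate_gaussian_noise_sigma(y):
            """Estimate the sigma of additive white Gaussian noise on a smooth signal.

            Second differences cancel the locally linear part of the signal and leave
            a noise combination with variance 6*sigma^2; a median-based scale keeps the
            estimate robust to occasional sharp features.
            """
            d2 = np.diff(y, n=2)
            mad = np.median(np.abs(d2 - np.median(d2)))
            # For Gaussian data, sigma(d2) = MAD / 0.6745; divide by sqrt(6) to undo
            # the variance inflation of the second-difference operator.
            return mad / 0.6745 / np.sqrt(6.0)

        # Example: a smooth Lorentzian-like pure signal plus noise of known sigma.
        x = np.linspace(-5.0, 5.0, 2000)
        pure = 1.0 / (1.0 + x**2)
        noisy = pure + np.random.default_rng(0).normal(scale=0.02, size=x.size)
        print(estimate_gaussian_noise_sigma(noisy))  # should be close to 0.02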

  9. Simplest Validation of the HIJING Monte Carlo Model

    CERN Document Server

    Uzhinsky, V.V.

    2003-01-01

    Fulfillment of the energy-momentum conservation law, as well as the charge, baryon and lepton number conservation, is checked for the HIJING Monte Carlo program in $pp$-interactions at $\sqrt{s}$ = 200, 5500, and 14000 GeV. It is shown that the energy is conserved quite well. The transverse momentum is not conserved, the deviation from zero is at the level of 1-2 GeV/c, and it is connected with the hard jet production. The deviation is absent for soft interactions. Charge, baryon and lepton numbers are conserved. Azimuthal symmetry of the Monte Carlo events is studied, too. It is shown that there is a small signature of a "flow". The situation with the symmetry gets worse for nucleus-nucleus interactions.

  10. Unitary Dynamics of Strongly Interacting Bose Gases with the Time-Dependent Variational Monte Carlo Method in Continuous Space

    Science.gov (United States)

    Carleo, Giuseppe; Cevolani, Lorenzo; Sanchez-Palencia, Laurent; Holzmann, Markus

    2017-07-01

    We introduce the time-dependent variational Monte Carlo method for continuous-space Bose gases. Our approach is based on the systematic expansion of the many-body wave function in terms of multibody correlations and is essentially exact up to adaptive truncation. The method is benchmarked by comparison to an exact Bethe ansatz or existing numerical results for the integrable Lieb-Liniger model. We first show that the many-body wave function achieves high precision for ground-state properties, including energy and first-order as well as second-order correlation functions. Then, we study the out-of-equilibrium, unitary dynamics induced by a quantum quench in the interaction strength. Our time-dependent variational Monte Carlo results are benchmarked by comparison to exact Bethe ansatz results available for a small number of particles, and are also compared to quench action results available for noninteracting initial states. Moreover, our approach allows us to study large particle numbers and general quench protocols, previously inaccessible beyond the mean-field level. Our results suggest that it is possible to find correlated initial states for which the long-term dynamics of local density fluctuations is close to the predictions of a simple Boltzmann ensemble.

  11. An introduction to applied quantum mechanics in the Wigner Monte Carlo formalism

    Energy Technology Data Exchange (ETDEWEB)

    Sellier, J.M., E-mail: jeanmichel.sellier@parallel.bas.bg [IICT, Bulgarian Academy of Sciences, Acad. G. Bonchev str. 25A, 1113 Sofia (Bulgaria); Nedjalkov, M. [IICT, Bulgarian Academy of Sciences, Acad. G. Bonchev str. 25A, 1113 Sofia (Bulgaria); Institute for Microelectronics, TU Wien, Gußhausstraße 27-29/E360, 1040 Wien (Austria); Dimov, I. [IICT, Bulgarian Academy of Sciences, Acad. G. Bonchev str. 25A, 1113 Sofia (Bulgaria)

    2015-05-12

    The Wigner formulation of quantum mechanics is a very intuitive approach which allows the comprehension and prediction of quantum mechanical phenomena in terms of quasi-distribution functions. In this review, our aim is to provide a detailed introduction to this theory along with a Monte Carlo method for the simulation of time-dependent quantum systems evolving in a phase-space. This work consists of three main parts. First, we introduce the Wigner formalism, then we discuss in detail the Wigner Monte Carlo method and, finally, we present practical applications. In particular, the Wigner model is first derived from the Schrödinger equation. Then a generalization of the formalism due to Moyal is provided, which allows one to recover important mathematical properties of the model. Next, the Wigner equation is further generalized to the case of many-body quantum systems. Finally, a physical interpretation of the negative part of a quasi-distribution function is suggested. In the second part, the Wigner Monte Carlo method, based on the concept of signed (virtual) particles, is introduced in detail for the single-body problem. Two extensions of the Wigner Monte Carlo method to quantum many-body problems are introduced, in the frameworks of time-dependent density functional theory and ab-initio methods. Finally, in the third and last part of this paper, applications to single- and many-body problems are performed in the context of quantum physics and quantum chemistry, specifically focusing on the hydrogen, lithium and boron atoms, the H{sub 2} molecule and a system of two identical Fermions. We conclude this work with a discussion on the still unexplored directions the Wigner Monte Carlo method could take in the near future.

  12. Uncertainty for Part Density Determination: An Update

    Energy Technology Data Exchange (ETDEWEB)

    Valdez, Mario Orlando [Los Alamos National Laboratory

    2016-12-14

    Accurate and precise density measurements by hydrostatic weighing require the use of an analytical balance, configured with a suspension system, to measure the weight of a part both in water and in air. Additionally, the densities of these liquid media (water and air) must be precisely known for the part density determination. To validate the accuracy and precision of these measurements, uncertainty statements are required. The work in this report is a revision of an original report written more than a decade ago, specifically applying principles and guidelines suggested by the Guide to the Expression of Uncertainty in Measurement (GUM) for determining the part density uncertainty through sensitivity analysis. In this work, updated derivations are provided; the original example is revised with the updated derivations; and an appendix is provided, devoted solely to uncertainty evaluations using Monte Carlo techniques, specifically using the NIST Uncertainty Machine, as a viable alternative method.
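
    The Monte Carlo style of uncertainty evaluation mentioned in the appendix can be sketched generically as below. The measurement equation rho_part = rho_water * m_air / (m_air - m_water) is the textbook hydrostatic-weighing relation with air buoyancy neglected, and every numerical value is an invented placeholder rather than a figure from the report.

        import numpy as np

        rng = np.random.default_rng(42)
        n = 1_000_000

        # Hypothetical measured inputs: mean values and standard uncertainties.
        m_air   = rng.normal(125.4300, 0.0005, n)    # mass weighed in air (g)
        m_water = rng.normal(111.2100, 0.0008, n)    # apparent mass weighed in water (g)
        rho_w   = rng.normal(0.997300, 0.000050, n)  # water density at bath temperature (g/cm^3)

        # Simplified hydrostatic-weighing measurement equation (air buoyancy neglected).
        rho_part = rho_w * m_air / (m_air - m_water)

        print("density estimate:", rho_part.mean(), "g/cm^3")
        print("standard uncertainty:", rho_part.std(ddof=1))
        print("95% coverage interval:", np.percentile(rho_part, [2.5, 97.5]))

    The empirical distribution of rho_part plays the role of the output probability density that the GUM Supplement 1 / NIST Uncertainty Machine approach would report.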

  13. Monte Carlo simulation of mixed neutron-gamma radiation fields and dosimetry devices

    International Nuclear Information System (INIS)

    Zhang, Guoqing

    2011-01-01

    different incident angles of neutrons, the responses were calculated. To correct the track overlapping effect for high track densities, density correction factors are computed with the Monte Carlo method. A computer code has been developed to handle all the calculations with different parameters. To verify the simulation results, experiments were performed.

  14. Monte Carlo simulation of mixed neutron-gamma radiation fields and dosimetry devices

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Guoqing

    2011-12-22

    different incident angles of neutrons, the responses were calculated. To correct the track overlapping effect for high track densities, density correction factors are computed with the Monte Carlo method. A computer code has been developed to handle all the calculations with different parameters. To verify the simulation results, experiments were performed.

  15. Shell Effect and Temperature Influence on Nuclear Level Density Parameter: the role of the effective mass interaction

    International Nuclear Information System (INIS)

    Queipo-Ruiz, J.; Guzman-Martinez, F.; Rodriguez-Hoyos, O.

    2011-01-01

    The level density parameter is a very important ingredient in statistical studies of nuclear reactions. It has been studied at low excitation energies E < 2 MeV, where its value is approximately constant; experimental results at excitation energies above 2 MeV have been obtained from evaporation spectra for nuclei with A = 160. In this work we present a calculation of the level density parameter for a wide range of masses and temperatures, taking into account shell effects and the effective mass interaction. The calculation has been carried out within the semiclassical approximation for the single-particle level densities. Our results show reasonable agreement with the available experimental data. (Author)

  16. Level density and thermal properties in rare earth nuclei

    International Nuclear Information System (INIS)

    Schiller, A.; Guttormsen, M.; Hjorth-Jensen, M.; Melby, E.; Rekstad, J.; Siem, S.

    2001-01-01

    A convergent method to extract the nuclear level density and the γ-ray strength function from primary γ-ray spectra has been established. Thermodynamical quantities have been obtained within the microcanonical and canonical ensemble theory. Structures in the caloric curve and in the heat capacity curve are interpreted as fingerprints of breaking of Cooper pairs and quenching of pairing correlations. The strength function can be described using models and common parametrizations for the E1, M1, and pygmy resonance strength. However, a significant decrease of the pygmy resonance strength at finite temperatures has been observed

  17. Comparison of variability in breast density assessment by BI-RADS category according to the level of experience.

    Science.gov (United States)

    Eom, Hye-Joung; Cha, Joo Hee; Kang, Ji-Won; Choi, Woo Jung; Kim, Han Jun; Go, EunChae

    2018-05-01

    Background: Only a few studies have assessed variability in the results obtained by readers with different experience levels in comparison with automated volumetric breast density measurements. Purpose: To examine the variations in breast density assessment according to BI-RADS categories among readers with different experience levels and to compare them with the results of automated quantitative measurements. Material and Methods: Density assignment was done for 1000 screening mammograms by six readers with three different experience levels (breast-imaging experts, general radiologists, and students). The agreement level between the results obtained by the readers and the Volpara automated volumetric breast density measurements was assessed. The agreement analysis using two categories (non-dense and dense breast tissue) was also performed. Results: Intra-reader agreement for experts, general radiologists, and students was almost perfect or substantial (k = 0.74-0.95). The agreement between visual assessments of the breast-imaging experts and volumetric assessments by Volpara was substantial (k = 0.77). The agreement was moderate between the experts and general radiologists (k = 0.67) and slight between the students and Volpara (k = 0.01). The agreement for the two category groups (non-dense and dense) was almost perfect between the experts and Volpara (k = 0.83). The agreement was substantial between the experts and general radiologists (k = 0.78). Conclusion: We observed similarly high agreement levels between visual assessments of breast density performed by radiologists and the volumetric assessments. However, agreement levels were substantially lower for the untrained readers.
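
    For readers who want to reproduce this style of agreement analysis, a minimal sketch is given below. It computes Cohen's kappa for two readers' BI-RADS category assignments with scikit-learn; the arrays are made-up toy data, not the study's assignments, and the study may well have used a weighted kappa variant.

        from sklearn.metrics import cohen_kappa_score

        # Hypothetical BI-RADS density categories (a-d) assigned by two readers
        # to the same ten mammograms; replace with real assignments.
        reader_expert  = ["a", "b", "b", "c", "c", "c", "d", "b", "a", "c"]
        reader_student = ["a", "b", "c", "c", "b", "c", "d", "c", "a", "b"]

        kappa = cohen_kappa_score(reader_expert, reader_student)
        print(f"four-category kappa: {kappa:.2f}")

        # Collapsing to the two-category grouping used in the study
        # (non-dense = a/b, dense = c/d) usually raises the agreement.
        to_binary = lambda labels: ["dense" if c in ("c", "d") else "non-dense" for c in labels]
        print(f"two-category kappa: "
              f"{cohen_kappa_score(to_binary(reader_expert), to_binary(reader_student)):.2f}")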

  18. Monte Carlo uncertainty analysis of dose estimates in radiochromic film dosimetry with single-channel and multichannel algorithms.

    Science.gov (United States)

    Vera-Sánchez, Juan Antonio; Ruiz-Morales, Carmen; González-López, Antonio

    2018-03-01

    To provide a multi-stage model to calculate uncertainty in radiochromic film dosimetry with Monte-Carlo techniques. This new approach is applied to single-channel and multichannel algorithms. Two lots of Gafchromic EBT3 are exposed in two different Varian linacs. They are read with an EPSON V800 flatbed scanner. The Monte-Carlo techniques in uncertainty analysis provide a numerical representation of the probability density functions of the output magnitudes. From this numerical representation, traditional parameters of uncertainty analysis such as the standard deviations and bias are calculated. Moreover, these numerical representations are used to investigate the shape of the probability density functions of the output magnitudes. Also, another calibration film is read in four EPSON scanners (two V800 and two 10000XL) and the uncertainty analysis is carried out with the four images. The dose estimates of single-channel and multichannel algorithms show a Gaussian behavior and low bias. The multichannel algorithms lead to less uncertainty in the final dose estimates when the EPSON V800 is employed as the reading device. In the case of the EPSON 10000XL, the single-channel algorithms provide less uncertainty in the dose estimates for doses higher than 4 Gy. A multi-stage model has been presented. With the aid of this model and the use of Monte-Carlo techniques, the uncertainties of dose estimates for single-channel and multichannel algorithms are estimated. The application of the model together with Monte-Carlo techniques leads to a complete characterization of the uncertainties in radiochromic film dosimetry. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  19. Monte Carlo Simulation of a Solvated Ionic Polymer with Cluster Morphology

    National Research Council Canada - National Science Library

    Matthews, Jessica L; Lada, Emily K; Weiland, Lisa M; Smith, Ralph C; Leo, Donald J

    2005-01-01

    .... Traditional rotational isomeric state theory is applied in combination with a Monte Carlo methodology to develop a simulation model of the conformation of Nafion polymer chains on a nanoscopic level...

  20. Lectures on Monte Carlo methods

    CERN Document Server

    Madras, Neal

    2001-01-01

    Monte Carlo methods form an experimental branch of mathematics that employs simulations driven by random number generators. These methods are often used when others fail, since they are much less sensitive to the "curse of dimensionality", which plagues deterministic methods in problems with a large number of variables. Monte Carlo methods are used in many fields: mathematics, statistics, physics, chemistry, finance, computer science, and biology, for instance. This book is an introduction to Monte Carlo methods for anyone who would like to use these methods to study various kinds of mathemati

  1. Monte Carlo calculation of the cross-section of single event upset induced by 14MeV neutrons

    International Nuclear Information System (INIS)

    Li, H.; Deng, J.Y.; Chang, D.M.

    2005-01-01

    High-density static random access memory may experience single event upsets (SEU) in neutron environments. We present a new method to calculate the SEU cross-section. Our method is based on explicit generation and transport of the secondary reaction products and detailed accounting for energy loss by ionization. Instead of simulating the behavior of the circuit, we use the Monte Carlo method to simulate the process of energy deposition in sensitive volumes. Thus, we do not need to know details about the circuit; we only need a reasonable guess for the size of the sensitive volumes. In the Monte Carlo simulation, the cross-section of SEU induced by 14 MeV neutrons is calculated. We can see that the Monte Carlo simulation not only provides a new method to calculate the SEU cross-section, but also gives a detailed description of the random process underlying the SEU.
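
    A heavily stripped-down version of the counting step described above might look like the sketch below: upsets are declared whenever the sampled energy deposition exceeds a critical energy. The deposition model, critical energy, beam area and reaction probability are all hypothetical placeholders introduced here; the paper's method additionally transports the secondary reaction products explicitly.

        import random

        def seu_cross_section(sample_deposit_mev, e_critical_mev, beam_area_cm2,
                              n_trials=1_000_000, seed=7):
            """Toy SEU cross-section: each trial is one neutron incident uniformly over
            beam_area_cm2; an upset is counted when the deposited energy in a sensitive
            volume exceeds e_critical_mev, and the cross-section is the upset fraction
            times the sampled area."""
            rng = random.Random(seed)
            upsets = sum(1 for _ in range(n_trials)
                         if sample_deposit_mev(rng) > e_critical_mev)
            return (upsets / n_trials) * beam_area_cm2

        # Hypothetical deposition model: 1% of incident neutrons trigger a reaction whose
        # deposited energy is exponentially distributed with a 2 MeV mean.
        toy_model = lambda rng: rng.expovariate(1.0 / 2.0) if rng.random() < 0.01 else 0.0
        print(seu_cross_section(toy_model, e_critical_mev=5.0, beam_area_cm2=1.0e-6))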

  2. Particle-In-Cell/Monte Carlo Simulation of Ion Back Bombardment in Photoinjectors

    International Nuclear Information System (INIS)

    Qiang, Ji; Corlett, John; Staples, John

    2009-01-01

    In this paper, we report on studies of ion back bombardment in high average current dc and rf photoinjectors using a particle-in-cell/Monte Carlo method. Using the H₂ ion as an example, we observed that the ion density and energy deposition on the photocathode in rf guns are an order of magnitude lower than those in a dc gun. A higher rf frequency helps mitigate the ion back bombardment of the cathode in rf guns.

  3. Effects of maximal doses of atorvastatin versus rosuvastatin on small dense low-density lipoprotein cholesterol levels

    Science.gov (United States)

    Maximal doses of atorvastatin and rosuvastatin are highly effective in lowering low-density lipoprotein (LDL) cholesterol and triglyceride levels; however, rosuvastatin has been shown to be significantly more effective than atorvastatin in lowering LDL cholesterol and in increasing high-density lipo...

  4. Monte Carlo simulation in nuclear medicine

    International Nuclear Information System (INIS)

    Morel, Ch.

    2007-01-01

    The Monte Carlo method allows for simulating random processes by using series of pseudo-random numbers. It became an important tool in nuclear medicine to assist in the design of new medical imaging devices, optimise their use and analyse their data. Presently, the sophistication of the simulation tools allows the introduction of Monte Carlo predictions in data correction and image reconstruction processes. The ability to simulate time-dependent processes opens up new horizons for Monte Carlo simulation in nuclear medicine. In the near future, these developments will allow imaging and dosimetry issues to be tackled simultaneously, and case-specific Monte Carlo simulations may soon become part of the nuclear medicine diagnostic process. This paper describes some Monte Carlo method basics and the sampling methods that were developed for it. It gives a referenced list of different simulation software used in nuclear medicine and enumerates some of their present and prospective applications. (author)

  5. Level densities and γ strength functions in light Sc and Ti isotopes

    International Nuclear Information System (INIS)

    Burger, A.; Larsen, A.C.; Syed, N.U.H.; Guttormsen, M.; Nyhus, H.; Siem, S.; Harissopulos, S.; Konstantinopoulos, T.; Lagoyannis, A.; Perdidakis, G.; Spyrou, A.; Kmiecik, M.; Mazurek, K.; Krticka, M.; Loennroth, T.; Norby, M.; Voinov, A.

    2010-01-01

    We present preliminary results from a measurement of nuclear level densities and the γ-ray strength of light Sc (⁴³Sc, ⁴⁵Sc) and Ti (⁴⁴Ti, ⁴⁵Ti and ⁴⁶Ti) isotopes using the Oslo Method. The article begins with a presentation of the experimental setup. (authors)

  6. Extraction of level density and γ strength function from primary γ spectra

    International Nuclear Information System (INIS)

    Schiller, A.; Bergholt, L.; Guttormsen, M.; Melby, E.; Rekstad, J.; Siem, S.

    2000-01-01

    We present a new iterative procedure to extract the level density and the γ strength function from primary γ spectra for energies close up to the neutron binding energy. The procedure is tested on simulated spectra and on data from the ¹⁷³Yb(³He,α)¹⁷²Yb reaction

  7. Modeling granular phosphor screens by Monte Carlo methods

    International Nuclear Information System (INIS)

    Liaparinos, Panagiotis F.; Kandarakis, Ioannis S.; Cavouras, Dionisis A.; Delis, Harry B.; Panayiotakis, George S.

    2006-01-01

    The intrinsic phosphor properties are of significant importance for the performance of phosphor screens used in medical imaging systems. In previous analytical-theoretical and Monte Carlo studies on granular phosphor materials, values of optical properties and light interaction cross sections were found by fitting to experimental data. These values were then employed for the assessment of phosphor screen imaging performance. However, it was found that, depending on the experimental technique and fitting methodology, the optical parameters of a specific phosphor material varied within a wide range of values, i.e., variations of light scattering with respect to light absorption coefficients were often observed for the same phosphor material. In this study, x-ray and light transport within granular phosphor materials was studied by developing a computational model using Monte Carlo methods. The model was based on the intrinsic physical characteristics of the phosphor. Input values required to feed the model can be easily obtained from tabulated data. The complex refractive index was introduced and microscopic probabilities for light interactions were produced, using Mie scattering theory. Model validation was carried out by comparing model results on x-ray and light parameters (x-ray absorption, statistical fluctuations in the x-ray to light conversion process, number of emitted light photons, output light spatial distribution) with previously published experimental data on the Gd₂O₂S:Tb phosphor material (Kodak Min-R screen). Results showed the dependence of the modulation transfer function (MTF) on phosphor grain size and material packing density. It was predicted that granular Gd₂O₂S:Tb screens of high packing density and small grain size may exhibit considerably better resolution and light emission properties than the conventional Gd₂O₂S:Tb screens, under similar conditions (x-ray incident energy, screen thickness)

  8. First pregnancy characteristics, postmenopausal breast density, and salivary sex hormone levels in a population at high risk for breast cancer

    Directory of Open Access Journals (Sweden)

    Mary Mockus

    2015-06-01

    Conclusions and general significance: While reproductive characteristics, in particular parity, generally demonstrated independent associations with postmenopausal breast density and E, P and DHEA levels, T levels showed concordant inverse associations with age-at-first birth and breast density. These findings suggest that reproductive effects and later life salivary sex steroid hormone levels may have independent effects on later life breast density and cancer risk.

  9. Impact of Diet Supplemented by Coconut Milk on Corticosterone and Acute Phase Protein Level under High Stocking Density

    Directory of Open Access Journals (Sweden)

    Majid SHAKERI

    2016-05-01

    The purpose of this study was to investigate the effects of coconut milk supplementation on corticosterone and acute phase protein levels under high stocking density. A total of 300 Cobb 500 male chicks were placed in cages and stocked at 10 birds/cage (normal stocking density) and 15 birds/cage (high stocking density). The treatments were: (i) control diet, stocked at 10 and 15 birds/cage; (ii) control diet + 3% coconut milk from day 1-42, stocked at 10 and 15 birds/cage; (iii) control diet + 5% coconut milk from day 1-42, stocked at 10 and 15 birds/cage. On day 42, 20 birds per treatment were slaughtered to collect blood samples. The results showed higher levels of corticosterone and acute phase protein with the control diet compared to the diets supplemented with coconut milk. In conclusion, coconut milk decreased the levels of corticosterone and acute phase protein when chicks were subjected to high stocking density.

  10. New model for mines and transportation tunnels external dose calculation using Monte Carlo simulation

    International Nuclear Information System (INIS)

    Allam, Kh. A.

    2017-01-01

    In this work, a new methodology is developed, based on Monte Carlo simulation, for external dose calculation in tunnels and mines. The tunnel external dose evaluation model assumes a cylindrical shape of finite thickness with an entrance and with or without an exit. A photon transport model was applied for exposure dose calculations. New software based on a Monte Carlo solution was designed and programmed using the Delphi programming language. The deviation between the calculated external dose due to radioactive nuclei in a mine tunnel and the corresponding experimental data lies in the range 7.3-19.9%. The variation of the specific external dose rate with position in the tunnel, building material density and composition was studied. The new model is more flexible for real external dose calculations in any cylindrical tunnel structure. (authors)

  11. Effect of different level density prescriptions on the calculated neutron nuclear reaction cross sections

    International Nuclear Information System (INIS)

    Garg, S.B.

    1991-01-01

    A detailed investigation is carried out to determine the effect of different level density prescriptions on the computed neutron nuclear data of Ni-58 in the energy range 5-25 MeV. Calculations are performed in the framework of the multistep Hauser-Feshbach statistical theory including the Kalbach exciton model and Brink-Axel giant dipole resonance model for radiative capture. Level density prescriptions considered in this investigation are based on the original Gilbert-Cameron, improved Gilbert-Cameron, backshifted Fermi-gas and the Ignatyuk, et al. approaches. The effect of these prescriptions is discussed, with special reference to (n,p), (n,2n), (n,alpha) and total particle-production cross sections. (author). 17 refs, 8 figs
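
    As a point of reference for what such level density prescriptions look like in practice, the back-shifted Fermi-gas form ρ(E) = exp(2√(aU)) / (12√2 σ a^(1/4) U^(5/4)), with U = E - Δ, can be evaluated in a few lines. The parameter values below are illustrative placeholders rather than the fitted Ni-58 parameters of the paper, and the spin-cutoff treatment is deliberately simplified to a constant.

        import math

        def bsfg_level_density(E, a, delta, sigma):
            """Back-shifted Fermi-gas total level density (levels per MeV).

            E     -- excitation energy (MeV)
            a     -- level density parameter (1/MeV)
            delta -- back-shift energy (MeV)
            sigma -- spin cut-off parameter (dimensionless, taken constant here)
            """
            U = E - delta
            if U <= 0.0:
                raise ValueError("excitation energy must exceed the back-shift")
            return math.exp(2.0 * math.sqrt(a * U)) / (
                12.0 * math.sqrt(2.0) * sigma * a**0.25 * U**1.25)

        # Illustrative (not fitted) parameters for a medium-mass nucleus.
        for E in (5.0, 10.0, 15.0, 20.0):
            rho = bsfg_level_density(E, a=6.5, delta=1.0, sigma=3.5)
            print(f"{E:5.1f} MeV -> {rho:.3e} levels/MeV")

    Energy-dependent prescriptions such as that of Ignatyuk et al. replace the constant a with a(E) that washes out shell effects at high excitation energy.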

  12. Interplanetary Type III Bursts and Electron Density Fluctuations in the Solar Wind

    Science.gov (United States)

    Krupar, V.; Maksimovic, M.; Kontar, E. P.; Zaslavsky, A.; Santolik, O.; Soucek, J.; Kruparova, O.; Eastwood, J. P.; Szabo, A.

    2018-04-01

    Type III bursts are generated by fast electron beams originating from magnetic reconnection sites of solar flares. As propagation of radio waves in the interplanetary medium is strongly affected by random electron density fluctuations, type III bursts provide us with a unique diagnostic tool for solar wind remote plasma measurements. Here, we performed a statistical survey of 152 simple and isolated type III bursts observed by the twin-spacecraft Solar TErrestrial RElations Observatory mission. We investigated their time–frequency profiles in order to retrieve decay times as a function of frequency. Next, we performed Monte Carlo simulations to study the role of scattering due to random electron density fluctuations on time–frequency profiles of radio emissions generated in the interplanetary medium. For simplification, we assumed the presence of isotropic electron density fluctuations described by a power law with the Kolmogorov spectral index. Decay times obtained from observations and simulations were compared. We found that the characteristic exponential decay profile of type III bursts can be explained by the scattering of the fundamental component between the source and the observer, despite restrictive assumptions included in the Monte Carlo simulation algorithm. Our results suggest that relative electron density fluctuations δn_e/n_e in the solar wind are 0.06-0.07 over a wide range of heliospheric distances.

  13. Monte Carlo simulation experiments on box-type radon dosimeter

    International Nuclear Information System (INIS)

    Jamil, Khalid; Kamran, Muhammad; Illahi, Ahsan; Manzoor, Shahid

    2014-01-01

    Epidemiological studies show that inhalation of radon gas (²²²Rn) may be carcinogenic, especially to mine workers, people living in closed indoor energy-conserved environments and underground dwellers. It is, therefore, of paramount importance to measure ²²²Rn concentrations (Bq/m³) in indoor environments. For this purpose, box-type passive radon dosimeters employing an ion track detector like CR-39 are widely used. The fraction of the number of radon alphas emitted in the volume of the box-type dosimeter resulting in latent track formation on CR-39 is the latent track registration efficiency. The latent track registration efficiency is ultimately required to evaluate the radon concentration, which consequently determines the effective dose and the radiological hazards. In this research, Monte Carlo simulation experiments were carried out to study the alpha latent track registration efficiency for the box-type radon dosimeter as a function of the dosimeter's dimensions and the range of alpha particles in air. Two different self-developed Monte Carlo simulation techniques were employed, namely: (a) the surface ratio (SURA) method and (b) the ray hitting (RAHI) method. The Monte Carlo simulation experiments revealed that there are two types of efficiencies, i.e. intrinsic efficiency (η_int) and alpha hit efficiency (η_hit). The η_int depends only upon the dimensions of the dosimeter, while η_hit depends both upon the dimensions of the dosimeter and the range of the alpha particles. The total latent track registration efficiency is the product of both intrinsic and hit efficiencies. It has been concluded that if the diagonal length of the box-type dosimeter is kept smaller than the range of the alpha particle then a hit efficiency of 100% is achieved. Nevertheless the intrinsic efficiency keeps playing its role. The Monte Carlo simulation experimental results have been found helpful to understand the intricate track registration mechanisms in the box-type dosimeter. This paper explains how radon

  14. Monte Carlo simulation experiments on box-type radon dosimeter

    Energy Technology Data Exchange (ETDEWEB)

    Jamil, Khalid, E-mail: kjamil@comsats.edu.pk; Kamran, Muhammad; Illahi, Ahsan; Manzoor, Shahid

    2014-11-11

    Epidemiological studies show that inhalation of radon gas ({sup 222}Rn) may be carcinogenic, especially to mine workers, people living in closed indoor energy-conserved environments and underground dwellers. It is, therefore, of paramount importance to measure {sup 222}Rn concentrations (Bq/m{sup 3}) in indoor environments. For this purpose, box-type passive radon dosimeters employing an ion track detector like CR-39 are widely used. The fraction of the number of radon alphas emitted in the volume of the box-type dosimeter resulting in latent track formation on CR-39 is the latent track registration efficiency. The latent track registration efficiency is ultimately required to evaluate the radon concentration, which consequently determines the effective dose and the radiological hazards. In this research, Monte Carlo simulation experiments were carried out to study the alpha latent track registration efficiency for the box-type radon dosimeter as a function of the dosimeter's dimensions and the range of alpha particles in air. Two different self-developed Monte Carlo simulation techniques were employed, namely: (a) the surface ratio (SURA) method and (b) the ray hitting (RAHI) method. The Monte Carlo simulation experiments revealed that there are two types of efficiencies, i.e. intrinsic efficiency (η{sub int}) and alpha hit efficiency (η{sub hit}). The η{sub int} depends only upon the dimensions of the dosimeter, while η{sub hit} depends both upon the dimensions of the dosimeter and the range of the alpha particles. The total latent track registration efficiency is the product of both intrinsic and hit efficiencies. It has been concluded that if the diagonal length of the box-type dosimeter is kept smaller than the range of the alpha particle then a hit efficiency of 100% is achieved. Nevertheless the intrinsic efficiency keeps playing its role. The Monte Carlo simulation experimental results have been found helpful to understand the intricate track registration mechanisms in the box-type dosimeter. This paper
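
    Although both records above are truncated, the "ray hitting" idea they describe can be conveyed with a short sketch: emission points are sampled uniformly in the box volume, directions are sampled isotropically, and an alpha counts as a hit if its straight path reaches the detector face within its range. The box dimensions, detector placement and alpha range below are arbitrary illustrations, not the dosimeter actually studied.

        import math
        import random

        def hit_efficiency(lx, ly, lz, alpha_range_cm, n=200_000, seed=3):
            """Fraction of alphas emitted in the box that reach the CR-39 detector.

            The detector is taken to cover the z = 0 face of an lx * ly * lz box;
            an alpha "hits" if its straight path reaches that face within its range.
            """
            rng = random.Random(seed)
            hits = 0
            for _ in range(n):
                x, y, z = rng.uniform(0, lx), rng.uniform(0, ly), rng.uniform(0, lz)
                # Isotropic direction: uniform cos(theta) in [-1, 1], uniform phi.
                mu = rng.uniform(-1.0, 1.0)
                phi = rng.uniform(0.0, 2.0 * math.pi)
                if mu >= 0.0:                  # moving away from the detector face
                    continue
                t = -z / mu                    # path length to the z = 0 plane
                if t > alpha_range_cm:
                    continue
                s = math.sqrt(1.0 - mu * mu)
                xh, yh = x + t * s * math.cos(phi), y + t * s * math.sin(phi)
                if 0.0 <= xh <= lx and 0.0 <= yh <= ly:
                    hits += 1
            return hits / n

        print(hit_efficiency(lx=5.0, ly=5.0, lz=5.0, alpha_range_cm=4.1))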

  15. Multi-level quantum monte Carlo wave functions for complex reactions: The decomposition of α-hydroxy-dimethylnitrosamine

    NARCIS (Netherlands)

    Fracchia, F.; Filippi, Claudia; Amovilli, C.

    2014-01-01

    We present here several novel features of our recently proposed Jastrow linear generalized valence bond (J-LGVB) wave functions, which allow a consistently accurate description of complex potential energy surfaces (PES) of medium-large systems within quantum Monte Carlo (QMC). In particular, we

  16. On the relation between the statistical γ-decay and the level density in 162Dy

    International Nuclear Information System (INIS)

    Henden, L.; Bergholt, L.; Guttormsen, M.; Rekstad, J.; Tveter, T.S.

    1994-12-01

    The level density of low-spin states (0-10ħ) in ¹⁶²Dy has been determined from the ground state up to approximately 6 MeV of excitation energy. Levels in the excitation region up to 8 MeV were populated by means of the ¹⁶³Dy(³He,α) reaction, and the first-generation γ-rays in the decay of these states have been isolated. The energy distribution of the first-generation γ-rays provides a new source of information about the nuclear level density over a wide energy region. A broad peak is observed in the first-generation spectra, and the authors suggest an interpretation in terms of enhanced M1 transitions between different high-j Nilsson orbitals. 30 refs., 9 figs., 2 tabs

  17. Pairing in the BCS and LN approximations using continuum single particle level density

    International Nuclear Information System (INIS)

    Id Betan, R.M.; Repetto, C.E.

    2017-01-01

    Understanding the properties of drip-line nuclei requires taking into account the correlations with the continuum energy spectrum of the system. The purpose of this paper is to show that the continuum single particle level density is a convenient way to include the pairing correlation in the continuum. An isospin mean field and an isospin pairing strength are used to find the Bardeen–Cooper–Schrieffer (BCS) and Lipkin–Nogami (LN) approximate solutions of the pairing Hamiltonian. Several physical properties of the whole tin isotopic chain, such as the gap parameter, Fermi level, binding energy, and one- and two-neutron separation energies, were calculated and compared with other methods and with experimental data where they exist. It is shown that the use of the continuum single particle level density is an economical way to include explicitly the correlations with the continuum energy spectrum in large-scale mass calculations. It is also shown that the computed properties are in good agreement with experimental data and with more sophisticated treatments of the pairing interaction.

  18. Drift-Implicit Multi-Level Monte Carlo Tau-Leap Methods for Stochastic Reaction Networks

    KAUST Repository

    Ben Hammouda, Chiheb

    2015-05-12

    In biochemical systems, stochastic effects can be caused by the presence of small numbers of certain reactant molecules. In this setting, discrete state-space and stochastic simulation approaches have proved to be more relevant than continuous state-space and deterministic ones. These stochastic models constitute the theory of stochastic reaction networks (SRNs). Furthermore, in some cases, the dynamics of fast and slow time scales can be well separated, a situation characterized by what is called stiffness. For such problems, the existing discrete state-space stochastic path simulation methods, such as the stochastic simulation algorithm (SSA) and the explicit tau-leap method, can be very slow. Therefore, implicit tau-leap approximations were developed to improve the numerical stability and provide more efficient simulation algorithms for these systems. One of the interesting tasks for SRNs is to approximate the expected values of some observables of the process at a certain fixed time T. This can be achieved using Monte Carlo (MC) techniques. However, in a recent work, Anderson and Higham (2013) proposed a more computationally efficient method which combines the multi-level Monte Carlo (MLMC) technique with explicit tau-leap schemes. In this MSc thesis, we propose a new fast stochastic algorithm, particularly designed to address stiff systems, for approximating the expected values of some observables of SRNs. In fact, we take advantage of the idea of MLMC techniques and the drift-implicit tau-leap approximation to construct a drift-implicit MLMC tau-leap estimator. In addition to accurately estimating the expected values of a given observable of SRNs at a final time T, our proposed estimator ensures numerical stability with a lower cost than the MLMC explicit tau-leap algorithm, for systems including simultaneously fast and slow species. The key contribution of our work is the coupling of two drift-implicit tau-leap paths, which is the basic brick for
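
    The record above is truncated, but the multilevel idea it builds on can be illustrated on a much simpler problem. The sketch below is a generic MLMC estimator for E[X_T] of a geometric-Brownian-motion SDE with Euler-Maruyama steps, where fine and coarse paths on each level share the same Brownian increments so the level corrections have small variance; it is not the thesis's drift-implicit tau-leap estimator for reaction networks, and the sample counts per level are fixed rather than optimized.

        import numpy as np

        def mlmc_estimate(mu, sigma, x0, T, L=5, n_samples=20_000, seed=0):
            """Multi-level Monte Carlo estimate of E[X_T] for dX = mu*X dt + sigma*X dW.

            Level l uses 2**l Euler-Maruyama steps; the telescoping sum
            E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}] is estimated level by level.
            """
            rng = np.random.default_rng(seed)
            estimate = 0.0
            for level in range(L + 1):
                n_fine = 2 ** level
                dt = T / n_fine
                dw = rng.normal(0.0, np.sqrt(dt), size=(n_samples, n_fine))
                xf = np.full(n_samples, x0, dtype=float)
                for k in range(n_fine):
                    xf += mu * xf * dt + sigma * xf * dw[:, k]
                if level == 0:
                    estimate += xf.mean()
                else:
                    xc = np.full(n_samples, x0, dtype=float)
                    dt_c = 2.0 * dt
                    for k in range(n_fine // 2):   # coarse path driven by paired increments
                        xc += mu * xc * dt_c + sigma * xc * (dw[:, 2 * k] + dw[:, 2 * k + 1])
                    estimate += (xf - xc).mean()
            return estimate

        # Exact answer is x0 * exp(mu*T), roughly 105.1 for these parameters.
        print(mlmc_estimate(mu=0.05, sigma=0.2, x0=100.0, T=1.0))

    The coupling of the two drift-implicit tau-leap paths mentioned in the record plays exactly the role of the shared Brownian increments here: it keeps the variance of each level correction small enough that most of the work can be done on cheap, coarse paths.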

  19. Reliability study of a prestressed concrete beam by Monte Carlo techniques

    International Nuclear Information System (INIS)

    Floris, C.; Migliacci, A.

    1987-01-01

    The safety of a prestressed beam is studied at the third probabilistic level, i.e. by calculating the probability of failure (P_f) under known loads. Since the beam is simply supported and subject only to loads perpendicular to its axis, only bending and shear loads are present. Since the ratio between the span and the clear height is over 20, and the shear span is thus very considerable, it can be assumed that failure occurs entirely due to the bending moment, with shear having no effect. In order to calculate P_f, the probability density functions (p.d.f.) have to be known for both the stress moment and the resisting moment. Attention here is focused on the construction of the latter. It is shown that it is practically impossible to find the required function analytically. On the other hand, numerical simulation with the help of a computer is particularly convenient. The so-called Monte Carlo techniques were chosen: they are based on the extraction of random numbers and are thus very suitable for simulating random events and quantities. (orig./HP)
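
    In its simplest form, the third-level check amounts to sampling both moments and counting the fraction of cases where the resisting moment falls below the acting one. The distributions and parameters below are invented placeholders; in the paper the resisting-moment p.d.f. is itself constructed numerically from the underlying material and geometric variables rather than assumed directly.

        import numpy as np

        rng = np.random.default_rng(123)
        n = 2_000_000   # crude Monte Carlo needs many samples when P_f is small

        # Hypothetical moment distributions (kN*m).
        resisting_moment = rng.lognormal(mean=np.log(900.0), sigma=0.08, size=n)
        stress_moment    = rng.normal(loc=600.0, scale=60.0, size=n)

        failures = np.count_nonzero(resisting_moment < stress_moment)
        p_f = failures / n
        std_err = np.sqrt(p_f * (1.0 - p_f) / n)
        print(f"P_f estimate: {p_f:.2e} (standard error {std_err:.1e})")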

  20. Alpha particle emission as a probe of the level density in highly excited A∼200 nuclei

    International Nuclear Information System (INIS)

    Fabris, D.; Fioretto, E.; Viesti, G.; Cinausero, M.; Gelli, N.; Hagel, K.; Lucarelli, F.; Natowitz, J.B.; Nebbia, G.; Prete, G.; Wada, R.

    1994-01-01

    The alpha particle emission from 90 to 140 MeV ¹⁹F + ¹⁸¹Ta fusion-evaporation reactions has been studied. The comparisons of the experimental spectral shapes and multiplicities with statistical model predictions indicate a need to use an excitation energy dependent level-density parameter a = A/K in which K increases with excitation energy. This increase is more rapid than that in lower mass nuclei. The effect of this change in level density on the prescission multiplicities in fission is significant

  1. Dynamic Monte Carlo simulations of radiatively accelerated GRB fireballs

    Science.gov (United States)

    Chhotray, Atul; Lazzati, Davide

    2018-05-01

    We present a novel Dynamic Monte Carlo code (DynaMo code) that self-consistently simulates the Compton-scattering-driven dynamic evolution of a plasma. We use the DynaMo code to investigate the time-dependent expansion and acceleration of dissipationless gamma-ray burst fireballs by varying their initial opacities and baryonic content. We study the opacity and energy density evolution of an initially optically thick, radiation-dominated fireball across its entire phase space - in particular during the Rph matter-dominated fireballs due to Thomson scattering. We quantify the new phases by providing analytical expressions of Lorentz factor evolution, which will be useful for deriving jet parameters.

  2. Entropic sampling in the path integral Monte Carlo method

    International Nuclear Information System (INIS)

    Vorontsov-Velyaminov, P N; Lyubartsev, A P

    2003-01-01

    We have extended the entropic sampling Monte Carlo method to the case of path integral representation of a quantum system. A two-dimensional density of states is introduced into path integral form of the quantum canonical partition function. Entropic sampling technique within the algorithm suggested recently by Wang and Landau (Wang F and Landau D P 2001 Phys. Rev. Lett. 86 2050) is then applied to calculate the corresponding entropy distribution. A three-dimensional quantum oscillator is considered as an example. Canonical distributions for a wide range of temperatures are obtained in a single simulation run, and exact data for the energy are reproduced
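
    The entropic-sampling update referred to above can be written compactly. The sketch below applies the Wang-Landau scheme to a classical toy system (a single continuous degree of freedom with H(x) = x² and binned energy) rather than to the two-dimensional path-integral density of states of the paper; bin counts, step sizes and tolerances are arbitrary choices for the example.

        import math
        import random

        def wang_landau_1d(n_bins=30, e_max=25.0, f_final=1e-4, flatness=0.8, seed=5):
            """Wang-Landau estimate of ln g(E) for H(x) = x^2 with E binned on [0, e_max)."""
            rng = random.Random(seed)
            ln_g = [0.0] * n_bins
            hist = [0] * n_bins
            ln_f = 1.0
            x = 0.0
            bin_of = lambda e: min(int(e / e_max * n_bins), n_bins - 1)
            b = bin_of(x * x)
            while ln_f > f_final:
                x_new = x + rng.uniform(-1.0, 1.0)
                e_new = x_new * x_new
                if e_new < e_max:
                    b_new = bin_of(e_new)
                    # Accept with probability min(1, g(E_old)/g(E_new)): flat-histogram sampling.
                    delta = ln_g[b] - ln_g[b_new]
                    if delta >= 0.0 or rng.random() < math.exp(delta):
                        x, b = x_new, b_new
                # Update the (possibly unchanged) current bin after every attempted move.
                ln_g[b] += ln_f
                hist[b] += 1
                if min(hist) > flatness * (sum(hist) / n_bins):   # histogram flat enough
                    hist = [0] * n_bins
                    ln_f *= 0.5                                   # refine the modification factor
            return ln_g

        ln_g = wang_landau_1d()
        # For H = x^2 in one dimension, g(E) falls off like E**(-1/2), so the estimated
        # ln g (defined up to an additive constant) should decrease with the bin energy.
        print(ln_g[:5])

    In the paper the same flat-histogram machinery is applied to a two-dimensional density of states defined over the path-integral representation, from which canonical averages at any temperature follow by reweighting.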

  3. Monte Carlo - Advances and Challenges

    International Nuclear Information System (INIS)

    Brown, Forrest B.; Mosteller, Russell D.; Martin, William R.

    2008-01-01

    Abstract only, full text follows: With ever-faster computers and mature Monte Carlo production codes, there has been tremendous growth in the application of Monte Carlo methods to the analysis of reactor physics and reactor systems. In the past, Monte Carlo methods were used primarily for calculating k_eff of a critical system. More recently, Monte Carlo methods have been increasingly used for determining reactor power distributions and many design parameters, such as β_eff, l_eff, τ, reactivity coefficients, Doppler defect, dominance ratio, etc. These advanced applications of Monte Carlo methods are now becoming common, not just feasible, but bring new challenges to both developers and users: Convergence of 3D power distributions must be assured; confidence interval bias must be eliminated; iterated fission probabilities are required, rather than single-generation probabilities; temperature effects including Doppler and feedback must be represented; isotopic depletion and fission product buildup must be modeled. This workshop focuses on recent advances in Monte Carlo methods and their application to reactor physics problems, and on the resulting challenges faced by code developers and users. The workshop is partly tutorial, partly a review of the current state-of-the-art, and partly a discussion of future work that is needed. It should benefit both novice and expert Monte Carlo developers and users. In each of the topic areas, we provide an overview of needs, perspective on past and current methods, a review of recent work, and discussion of further research and capabilities that are required. Electronic copies of all workshop presentations and material will be available. The workshop is structured as 2 morning and 2 afternoon segments: - Criticality Calculations I - convergence diagnostics, acceleration methods, confidence intervals, and the iterated fission probability, - Criticality Calculations II - reactor kinetics parameters, dominance ratio, temperature

  4. The effects of food level and conspecific density on biting and cannibalism in larval long-toed salamanders, Ambystoma macrodactylum.

    Science.gov (United States)

    Wildy, Erica L; Chivers, Douglas P; Kiesecker, Joseph M; Blaustein, Andrew R

    2001-07-01

    Previous studies have examined abiotic and biotic factors that facilitate agonistic behavior. For larval amphibians, food availability and conspecific density have been suggested as important factors influencing intraspecific aggression and cannibalism. In this study, we examined the separate and combined effects of food availability and density on the agonistic behavior and life history of larval long-toed salamanders, Ambystoma macrodactylum. We designed a 2×2 factorial experiment in which larvae were raised with either a high or low density of conspecifics and fed either a high or low level of food. For each treatment, we quantified the amount of group size variation, biting, and cannibalism occurring. Additionally, we examined survival to, time to and size at metamorphosis for all larvae. Results indicated that differences in both density and food level influenced all three life history traits measured. Moreover, differences in food level at which larvae were reared resulted in higher within-group size variation and heightened intraspecific biting while both density and food level contributed to increased cannibalism. We suggest that increased hunger levels and an uneven size structure promoted biting among larvae in the low food treatments. Moreover, these factors combined with a higher encounter rate with conspecifics in the high density treatments may have prompted larger individuals to seek an alternative food source in the form of smaller conspecifics.

  5. Parallel Sequential Monte Carlo for Efficient Density Combination: The Deco Matlab Toolbox

    DEFF Research Database (Denmark)

    Casarin, Roberto; Grassi, Stefano; Ravazzolo, Francesco

    This paper presents the Matlab package DeCo (Density Combination) which is based on the paper by Billio et al. (2013) where a constructive Bayesian approach is presented for combining predictive densities originating from different models or other sources of information. The combination weights...... for standard CPU computing and for Graphical Process Unit (GPU) parallel computing. For the GPU implementation we use the Matlab parallel computing toolbox and show how to use General Purposes GPU computing almost effortless. This GPU implementation comes with a speed up of the execution time up to seventy...... times compared to a standard CPU Matlab implementation on a multicore CPU. We show the use of the package and the computational gain of the GPU version, through some simulation experiments and empirical applications....

  6. Mean-field energy-level shifts and dielectric properties of strongly polarized Rydberg gases

    OpenAIRE

    Zhelyazkova, V.; Jirschik, R.; Hogan, S. D.

    2016-01-01

    Mean-field energy-level shifts arising as a result of strong electrostatic dipole interactions within dilute gases of polarized helium Rydberg atoms have been probed by microwave spectroscopy. The Rydberg states studied had principal quantum numbers n=70 and 72, and electric dipole moments of up to 14 050 D, and were prepared in pulsed supersonic beams at particle number densities on the order of 10⁸ cm⁻³. Comparisons of the experimental data with the results of Monte Carlo calculations highl...

  7. Communication: Water on hexagonal boron nitride from diffusion Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Al-Hamdani, Yasmine S.; Ma, Ming; Michaelides, Angelos, E-mail: angelos.michaelides@ucl.ac.uk [Thomas Young Centre and London Centre for Nanotechnology, 17–19 Gordon Street, London WC1H 0AH (United Kingdom); Department of Chemistry, University College London, 20 Gordon Street, London WC1H 0AJ (United Kingdom); Alfè, Dario [Thomas Young Centre and London Centre for Nanotechnology, 17–19 Gordon Street, London WC1H 0AH (United Kingdom); Department of Earth Sciences, University College London, Gower Street, London WC1E 6BT (United Kingdom); Lilienfeld, O. Anatole von [Institute of Physical Chemistry and National Center for Computational Design and Discovery of Novel Materials, Department of Chemistry, University of Basel, Klingelbergstrasse 80, CH-4056 Basel (Switzerland); Argonne Leadership Computing Facility, Argonne National Laboratories, 9700 S. Cass Avenue Argonne, Lemont, Illinois 60439 (United States)

    2015-05-14

    Despite a recent flurry of experimental and simulation studies, an accurate estimate of the interaction strength of water molecules with hexagonal boron nitride is lacking. Here, we report quantum Monte Carlo results for the adsorption of a water monomer on a periodic hexagonal boron nitride sheet, which yield a water monomer interaction energy of −84 ± 5 meV. We use the results to evaluate the performance of several widely used density functional theory (DFT) exchange correlation functionals and find that they all deviate substantially. Differences in interaction energies between different adsorption sites are however better reproduced by DFT.

  8. Non-Boltzmann Ensembles and Monte Carlo Simulations

    International Nuclear Information System (INIS)

    Murthy, K. P. N.

    2016-01-01

    Boltzmann sampling based on the Metropolis algorithm has been extensively used for simulating a canonical ensemble and for calculating macroscopic properties of a closed system at desired temperatures. An estimate of a mechanical property, like energy, of an equilibrium system is made by averaging over a large number of microstates generated by Boltzmann Monte Carlo methods. This is possible because we can assign a numerical value for energy to each microstate. However, a thermal property like entropy is not easily accessible to these methods. The reason is simple. We cannot assign a numerical value for entropy to a microstate. Entropy is not a property associated with any single microstate. It is a collective property of all the microstates. Toward calculating entropy and other thermal properties, a non-Boltzmann Monte Carlo technique called umbrella sampling was proposed some forty years ago. Umbrella sampling has since undergone several metamorphoses and we now have multicanonical Monte Carlo, entropic sampling, flat histogram methods, the Wang-Landau algorithm, etc. This class of methods generates non-Boltzmann ensembles, which are un-physical. However, physical quantities can be calculated as follows. First un-weight a microstate of the entropic ensemble; then re-weight it to the desired physical ensemble. Carry out a weighted average over the entropic ensemble to estimate physical quantities. In this talk I shall tell you of the most recent non-Boltzmann Monte Carlo method and show how to calculate free energy for a few systems. We first consider estimation of free energy as a function of energy at different temperatures to characterize the phase transition in a hairpin DNA in the presence of an unzipping force. Next we consider free energy as a function of the order parameter, and to this end we estimate the density of states g(E, M) as a function of both energy E and order parameter M. This is carried out in two stages. We estimate g(E) in the first stage

  9. Monte Carlo simulations of neutron-scattering instruments using McStas

    DEFF Research Database (Denmark)

    Nielsen, K.; Lefmann, K.

    2000-01-01

    Monte Carlo simulations have become an essential tool for improving the performance of neutron-scattering instruments, since the level of sophistication in the design of instruments is defeating purely analytical methods. The program McStas, being developed at Risø National Laboratory, includes...

  10. Computing thermal Wigner densities with the phase integration method

    International Nuclear Information System (INIS)

    Beutier, J.; Borgis, D.; Vuilleumier, R.; Bonella, S.

    2014-01-01

    We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems

  11. Computing thermal Wigner densities with the phase integration method.

    Science.gov (United States)

    Beutier, J; Borgis, D; Vuilleumier, R; Bonella, S

    2014-08-28

    We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.

  12. Uncertainty Analysis Based on Sparse Grid Collocation and Quasi-Monte Carlo Sampling with Application in Groundwater Modeling

    Science.gov (United States)

    Zhang, G.; Lu, D.; Ye, M.; Gunzburger, M.

    2011-12-01

    Markov Chain Monte Carlo (MCMC) methods have been widely used in many fields of uncertainty analysis to estimate the posterior distributions of parameters and credible intervals of predictions in the Bayesian framework. However, in practice, MCMC may be computationally unaffordable due to slow convergence and the excessive number of forward model executions required, especially when the forward model is expensive to compute. Both disadvantages arise from the curse of dimensionality, i.e., the posterior distribution is usually a multivariate function of parameters. Recently, the sparse grid method has been demonstrated to be an effective technique for coping with high-dimensional interpolation or integration problems. Thus, in order to accelerate the forward model and avoid the slow convergence of MCMC, we propose a new method for uncertainty analysis based on sparse grid interpolation and quasi-Monte Carlo sampling. First, we construct a polynomial approximation of the forward model in the parameter space by using the sparse grid interpolation. This approximation then defines an accurate surrogate posterior distribution that can be evaluated repeatedly at minimal computational cost. Second, instead of using MCMC, a quasi-Monte Carlo method is applied to draw samples in the parameter space. Then, the desired probability density function of each prediction is approximated by accumulating the posterior density values of all the samples according to the prediction values. Our method has the following advantages: (1) the polynomial approximation of the forward model on the sparse grid provides a very efficient evaluation of the surrogate posterior distribution; (2) the quasi-Monte Carlo method retains the same accuracy in approximating the PDF of predictions but avoids all disadvantages of MCMC. The proposed method is applied to a controlled numerical experiment of groundwater flow modeling. The results show that our method attains the same accuracy much more efficiently.
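
    A minimal sketch of the sampling step described above, assuming a toy two-parameter polynomial surrogate in place of the sparse-grid interpolant of the groundwater model, and an invented observation and noise level. The quasi-Monte Carlo points come from SciPy's Sobol generator, and the prediction PDF is accumulated from posterior-weighted samples.

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical stand-ins: in the paper the surrogate is a sparse-grid interpolant of the
# groundwater model; here a simple polynomial plays that role purely for illustration.
def surrogate_prediction(theta):
    return 1.5 * theta[:, 0] ** 2 - 0.5 * theta[:, 0] * theta[:, 1] + theta[:, 1]

def surrogate_log_posterior(theta, obs=2.0, sigma=0.3):
    # Gaussian likelihood around a (hypothetical) observation, flat prior on the box
    misfit = surrogate_prediction(theta) - obs
    return -0.5 * (misfit / sigma) ** 2

# 1. Low-discrepancy (Sobol) samples in the 2-D parameter box [0,2] x [0,2]
sampler = qmc.Sobol(d=2, scramble=True, seed=1)
theta = qmc.scale(sampler.random_base2(m=12), [0.0, 0.0], [2.0, 2.0])  # 2^12 points

# 2. Posterior weights evaluated on the cheap surrogate
w = np.exp(surrogate_log_posterior(theta))
w /= w.sum()

# 3. Accumulate the weighted prediction values into a PDF estimate
pred = surrogate_prediction(theta)
pdf, edges = np.histogram(pred, bins=40, weights=w, density=True)
print("posterior mean prediction:", np.sum(w * pred))
```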

  13. A 3D particle Monte Carlo approach to studying nucleation

    DEFF Research Database (Denmark)

    Köhn, Christoph; Bødker Enghoff, Martin; Svensmark, Henrik

    2018-01-01

    The nucleation of sulphuric acid molecules plays a key role in the formation of aerosols. We here present a three-dimensional particle Monte Carlo model to study the growth of sulphuric acid clusters as well as its dependence on the ambient temperature and the initial particle density. We initiate...... a swarm of sulphuric acid–water clusters with a size of 0.329 nm with densities between 10⁷ and 10⁸ cm⁻³ at temperatures between 200 and 300 K and a relative humidity of 50%. After every time step, we update the position of particles as a function of size-dependent diffusion coefficients. If two...... particles encounter each other, we merge them and add their volumes and masses. Inversely, we check after every time step whether a polymer evaporates, liberating a molecule. We present the spatial distribution as well as the size distribution calculated from individual clusters. We also calculate the nucleation rate...

  14. Elevated plasma low-density lipoprotein and high-density lipoprotein cholesterol levels in amenorrheic athletes: effects of endogenous hormone status and nutrient intake.

    Science.gov (United States)

    Friday, K E; Drinkwater, B L; Bruemmer, B; Chesnut, C; Chait, A

    1993-12-01

    To determine the interactive effects of hormones, exercise, and diet on plasma lipids and lipoproteins, serum estrogen and progesterone levels, nutrient intake, and plasma lipid, lipoprotein, and apolipoprotein concentrations were measured in 24 hypoestrogenic amenorrheic and 44 eumenorrheic female athletes. When compared to eumenorrheic athletes, amenorrheic athletes had higher levels of plasma cholesterol (5.47 +/- 0.17 vs. 4.84 +/- 0.12 mmol/L, P = 0.003), triglyceride (0.75 +/- 0.06 vs. 0.61 +/- 0.03 mmol/L, P = 0.046), low-density lipoprotein (LDL; 3.16 +/- 0.15 vs. 2.81 +/- 0.09 mmol/L, P = 0.037), high-density lipoprotein (HDL; 1.95 +/- 0.07 vs. 1.73 +/- 0.05 mmol/L, P = 0.007), and HDL2 (0.84 +/- 0.06 vs. 0.68 +/- 0.04 mmol/L, P = 0.02) cholesterol. Plasma LDL/HDL cholesterol ratios, very low-density lipoprotein and HDL3 cholesterol, and apolipoprotein A-I and A-II levels were similar in the two groups. Amenorrheic athletes consumed less fat than eumenorrheic subjects (52 +/- 5 vs. 75 +/- 3 g/day, P = 0.02), but similar amounts of calories, cholesterol, protein, carbohydrate, and ethanol. HDL cholesterol levels in amenorrheic subjects correlated positively with the percent of dietary calories from fat (r = 0.42, n = 23, P = 0.045) but negatively with the percent from protein (r = -0.49, n = 23, P = 0.017). Thus, exercise-induced amenorrhea may adversely affect cardiovascular risk by increasing plasma LDL and total cholesterol. However, cardioprotective elevations in plasma HDL and HDL2 cholesterol may neutralize the risk of cardiovascular disease in amenorrheic athletes.

  15. Application of Monte Carlo Method for Evaluation of Uncertainties of ITS-90 by Standard Platinum Resistance Thermometer

    Science.gov (United States)

    Palenčár, Rudolf; Sopkuliak, Peter; Palenčár, Jakub; Ďuriš, Stanislav; Suroviak, Emil; Halaj, Martin

    2017-06-01

    Evaluation of uncertainties of the temperature measurement by a standard platinum resistance thermometer calibrated at the defining fixed points according to ITS-90 is a problem that can be solved in different ways. The paper presents a procedure based on the propagation of distributions using the Monte Carlo method. The procedure employs generation of pseudo-random numbers for the input variables of resistances at the defining fixed points, assuming a multivariate Gaussian distribution for the input quantities. This allows the correlations among resistances at the defining fixed points to be taken into account. The assumption of a Gaussian probability density function is acceptable with respect to the several sources of uncertainty of the resistances. In the case of uncorrelated resistances at the defining fixed points, the method is applicable to any probability density function. Validation of the law of propagation of uncertainty using the Monte Carlo method is presented using the example of specific data for a 25 Ω standard platinum resistance thermometer in the temperature range from 0 to 660 °C. Using this example, we demonstrate the suitability of the method by validating its results.
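
    The propagation-of-distributions step can be sketched as follows. The fixed-point resistances, uncertainties, correlation matrix and measurement model below are invented placeholders, not ITS-90 calibration data; the point is only the multivariate Gaussian generation of correlated inputs and the Monte Carlo summary of the output.

```python
import numpy as np

rng = np.random.default_rng(42)
M = 200_000                                   # number of Monte Carlo trials

R_fp = np.array([25.114, 27.690, 35.276])     # hypothetical resistances at three fixed points, ohm
u = np.array([0.00012, 0.00015, 0.00020])     # standard uncertainties, ohm
corr = np.array([[1.0, 0.6, 0.4],
                 [0.6, 1.0, 0.5],
                 [0.4, 0.5, 1.0]])            # assumed correlation among fixed-point resistances
cov = corr * np.outer(u, u)

# Correlated multivariate Gaussian draws for the input resistances
R = rng.multivariate_normal(R_fp, cov, size=M)

def temperature_from_resistances(R, R_meas=30.000):
    """Toy measurement model using only the first and third fixed points.
    A real ITS-90 evaluation would use the reference and deviation functions instead."""
    W = R_meas / R[:, 0]
    return 100.0 * (W - 1.0) / (R[:, 2] / R[:, 0] - 1.0)

t = temperature_from_resistances(R)
t_mean, u_t = t.mean(), t.std(ddof=1)
lo, hi = np.percentile(t, [2.5, 97.5])
print(f"t = {t_mean:.4f} C, u(t) = {u_t:.4f} C, 95% interval [{lo:.4f}, {hi:.4f}]")
```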

  16. Fast sequential Monte Carlo methods for counting and optimization

    CERN Document Server

    Rubinstein, Reuven Y; Vaisman, Radislav

    2013-01-01

    A comprehensive account of the theory and application of Monte Carlo methods Based on years of research in efficient Monte Carlo methods for estimation of rare-event probabilities, counting problems, and combinatorial optimization, Fast Sequential Monte Carlo Methods for Counting and Optimization is a complete illustration of fast sequential Monte Carlo techniques. The book provides an accessible overview of current work in the field of Monte Carlo methods, specifically sequential Monte Carlo techniques, for solving abstract counting and optimization problems. Written by authorities in the

  17. The MC21 Monte Carlo Transport Code

    International Nuclear Information System (INIS)

    Sutton TM; Donovan TJ; Trumbull TH; Dobreff PS; Caro E; Griesheimer DP; Tyburski LJ; Carpenter DC; Joo H

    2007-01-01

    MC21 is a new Monte Carlo neutron and photon transport code currently under joint development at the Knolls Atomic Power Laboratory and the Bettis Atomic Power Laboratory. MC21 is the Monte Carlo transport kernel of the broader Common Monte Carlo Design Tool (CMCDT), which is also currently under development. The vision for CMCDT is to provide an automated, computer-aided modeling and post-processing environment integrated with a Monte Carlo solver that is optimized for reactor analysis. CMCDT represents a strategy to push the Monte Carlo method beyond its traditional role as a benchmarking tool or ''tool of last resort'' and into a dominant design role. This paper describes various aspects of the code, including the neutron physics and nuclear data treatments, the geometry representation, and the tally and depletion capabilities

  18. Monte Carlo Treatment Planning for Advanced Radiotherapy

    DEFF Research Database (Denmark)

    Cronholm, Rickard

    This Ph.D. project describes the development of a workflow for Monte Carlo Treatment Planning for clinical radiotherapy plans. The workflow may be utilized to perform an independent dose verification of treatment plans. Modern radiotherapy treatment delivery is often conducted by dynamically...... modulating the intensity of the field during the irradiation. The workflow described has the potential to fully model the dynamic delivery, including gantry rotation during irradiation, of modern radiotherapy. Three cornerstones of Monte Carlo Treatment Planning are identified: Building, commissioning...... and validation of a Monte Carlo model of a medical linear accelerator (i), converting a CT scan of a patient to a Monte Carlo compliant phantom (ii) and translating the treatment plan parameters (including beam energy, angles of incidence, collimator settings, etc.) to a Monte Carlo input file (iii). A protocol...

  19. Longitudinal development of hormone levels and grey matter density in 9 and 12-year-old twins.

    Science.gov (United States)

    Brouwer, Rachel M; Koenis, M M G; Schnack, Hugo G; van Baal, G Caroline; van Soelen, Inge L C; Boomsma, Dorret I; Hulshoff Pol, Hilleke E

    2015-05-01

    Puberty is characterized by major changes in hormone levels and structural changes in the brain. To what extent these changes are associated and to what extent genes or environmental influences drive such an association is not clear. We acquired circulating levels of luteinizing hormone, follicle stimulating hormone (FSH), estradiol and testosterone and magnetic resonance images of the brain from 190 twins at age 9 [9.2 (0.11) years; 99 females/91 males]. This protocol was repeated at age 12 [12.1 (0.26) years] in 125 of these children (59 females/66 males). Using voxel-based morphometry, we tested whether circulating hormone levels are associated with grey matter density in boys and girls in a longitudinal, genetically informative design. In girls, changes in FSH level between the age of 9 and 12 positively associated with changes in grey matter density in areas covering the left hippocampus, left (pre)frontal areas, right cerebellum, and left anterior cingulate and precuneus. This association was mainly driven by environmental factors unique to the individual (i.e. the non-shared environment). In 12-year-old girls, a higher level of circulating estradiol levels was associated with lower grey matter density in frontal and parietal areas. This association was driven by environmental factors shared among the members of a twin pair. These findings show a pattern of physical and brain development going hand in hand.

  20. Continuum level density of a coupled-channel system in the complex scaling method

    International Nuclear Information System (INIS)

    Suzuki, Ryusuke; Kato, Kiyoshi; Kruppa, Andras; Giraud, Bertrand G.

    2008-01-01

    We study the continuum level density (CLD) in the formalism of the complex scaling method (CSM) for coupled-channel systems. We apply the formalism to the ⁴He = [³H+p] + [³He+n] coupled-channel cluster model, where there are resonances at low energy. Numerical calculations of the CLD in the CSM with a finite number of L² basis functions are consistent with the exact result calculated from the S-matrix by solving coupled-channel equations. We also study channel densities. In this framework, the extended completeness relation (ECR) plays an important role. (author)

  1. Evaluation of CASMO-3 and HELIOS for Fuel Assembly Analysis from Monte Carlo Code

    Energy Technology Data Exchange (ETDEWEB)

    Shim, Hyung Jin; Song, Jae Seung; Lee, Chung Chan

    2007-05-15

    This report presents a study comparing deterministic lattice physics calculations with Monte Carlo calculations for LWR fuel pin and assembly problems. The study has focused on comparing results from the lattice physics codes CASMO-3 and HELIOS against those from the continuous-energy Monte Carlo code McCARD. The comparisons include k∞, isotopic number densities, and pin power distributions. The CASMO-3 and HELIOS calculations for the k∞'s of the LWR fuel pin problems show good agreement with McCARD within 956 pcm and 658 pcm, respectively. For the assembly problems with Gadolinia burnable poison rods, the largest difference between the k∞'s is 1463 pcm with CASMO-3 and 1141 pcm with HELIOS. RMS errors for the pin power distributions of CASMO-3 and HELIOS are within 1.3% and 1.5%, respectively.

  2. Effects of pairing correlation on nuclear level density parameter and nucleon separation energy

    International Nuclear Information System (INIS)

    Rajesekaran, T.R.; Selvaraj, S.

    2002-01-01

    A systematic study of the effects of pairing correlations on the nuclear level density parameter 'a' and the neutron separation energy S_N is presented for ¹⁵²Gd using the statistical theory of nuclei with deformation, collective and noncollective rotational degrees of freedom, shell effects, and pairing correlations.

  3. Sensitivity analysis for oblique incidence reflectometry using Monte Carlo simulations

    DEFF Research Database (Denmark)

    Kamran, Faisal; Andersen, Peter E.

    2015-01-01

    profiles. This article presents a sensitivity analysis of the technique in turbid media. Monte Carlo simulations are used to investigate the technique and its potential to distinguish the small changes between different levels of scattering. We present various regions of the dynamic range of optical...

  4. Core Calculation of 1 MWatt PUSPATI TRIGA Reactor (RTP) using Monte Carlo MVP Code System

    Science.gov (United States)

    Karim, Julia Abdul

    2008-05-01

    The Monte Carlo MVP code system was adopted for the Reaktor TRIGA PUSPATI (RTP) core calculation. The code was first developed by a group of researchers at the Japan Atomic Energy Agency (JAEA) in 1994. MVP is a general multi-purpose Monte Carlo code for neutron and photon transport calculations, able to provide accurate estimates for simulation problems. The calculation is based on the continuous-energy method. The code can adopt an accurate physics model, geometry description and variance reduction techniques, and is faster than the conventional scalar method; it can achieve computational speeds higher by several factors on a vector super-computer. In this calculation, the RTP core was modeled as closely as possible to the real core, and results for keff, flux, fission densities and other quantities were obtained.

  5. Core Calculation of 1 MWatt PUSPATI TRIGA Reactor (RTP) using Monte Carlo MVP Code System

    International Nuclear Information System (INIS)

    Karim, Julia Abdul

    2008-01-01

    The Monte Carlo MVP code system was adopted for the Reaktor TRIGA PUSPATI (RTP) core calculation. The code was first developed by a group of researchers at the Japan Atomic Energy Agency (JAEA) in 1994. MVP is a general multi-purpose Monte Carlo code for neutron and photon transport calculations, able to provide accurate estimates for simulation problems. The calculation is based on the continuous-energy method. The code can adopt an accurate physics model, geometry description and variance reduction techniques, and is faster than the conventional scalar method; it can achieve computational speeds higher by several factors on a vector super-computer. In this calculation, the RTP core was modeled as closely as possible to the real core, and results for keff, flux, fission densities and other quantities were obtained.

  6. Test of E1-radiative strength function and level density models by the ¹⁵⁵Gd(n,2γ)¹⁵⁶Gd reaction

    International Nuclear Information System (INIS)

    Voinov, A.V.

    1996-01-01

    The information about the level density of the ¹⁵⁶Gd nucleus and the strength functions of γ transitions extracted from two-γ-cascade spectra of the ¹⁵⁵Gd(n,2γ)¹⁵⁶Gd reaction is analyzed. The method of statistical simulation of γ-cascade intensity is applied for calculation of the main parameters of the experimental spectra. The method is used to extract the information on the E1-radiative strength function of γ transitions and the level density in the ¹⁵⁶Gd nucleus. It is shown that at an excitation energy above 3 MeV the level density of the ¹⁵⁶Gd nucleus must decrease in comparison with that calculated within the Fermi gas model. It is concluded that a possible explanation of the observed effect is connected with the influence of pairing correlations on the level density in nuclei.

  7. Commissioning of a Monte Carlo treatment planning system for clinical use in radiation therapy; Evaluacion de un sistema de planificacion Monte Carlo de uso clinico para radioterapia

    Energy Technology Data Exchange (ETDEWEB)

    Zucca Aparcio, D.; Perez Moreno, J. M.; Fernandez Leton, P.; Garcia Ruiz-Zorrila, J.

    2016-10-01

    The commissioning procedures of a Monte Carlo (MC) treatment planning system for photon beams from a dedicated stereotactic body radiosurgery (SBRT) unit are reported in this document. XVMC is the MC code available in the treatment planning system evaluated (BrainLAB iPlan RT Dose). It is based on virtual source models that simulate the primary and scattered radiation, as well as the electron contamination, using Gaussian components whose modelling requires measurements of dose profiles, percentage depth doses and output factors, performed both in water and in air. The dosimetric accuracy of the particle transport simulation was analyzed by validating the calculations in homogeneous and heterogeneous media against measurements made under the same conditions as the dose calculation, and by checking the stochastic behaviour of the Monte Carlo calculations when using different statistical variances. Likewise, it was verified how the planning system performs the conversion from dose to medium to dose to water, applying the water-to-medium stopping power ratio, in the presence of heterogeneities where this phenomenon is relevant, such as high-density media (cortical bone). (Author)

  8. Correlation of electron transport and photocatalysis of nanocrystalline clusters studied by Monte-Carlo continuity random walking.

    Science.gov (United States)

    Liu, Baoshun; Li, Ziqiang; Zhao, Xiujian

    2015-02-21

    In this research, Monte-Carlo Continuity Random Walking (MC-RW) model was used to study the relation between electron transport and photocatalysis of nano-crystalline (nc) clusters. The effects of defect energy disorder, spatial disorder of material structure, electron density, and interfacial transfer/recombination on the electron transport and the photocatalysis were studied. Photocatalytic activity is defined as 1/τ from a statistical viewpoint with τ being the electron average lifetime. Based on the MC-RW simulation, a clear physical and chemical "picture" was given for the photocatalytic kinetic analysis of nc-clusters. It is shown that the increase of defect energy disorder and material spatial structural disorder, such as the decrease of defect trap number, the increase of crystallinity, the increase of particle size, and the increase of inter-particle connection, can enhance photocatalytic activity through increasing electron transport ability. The increase of electron density increases the electron Fermi level, which decreases the activation energy for electron de-trapping from traps to extending states, and correspondingly increases electron transport ability and photocatalytic activity. Reducing recombination of electrons and holes can increase electron transport through the increase of electron density and then increases the photocatalytic activity. In addition to the electron transport, the increase of probability for electrons to undergo photocatalysis can increase photocatalytic activity through the increase of the electron interfacial transfer speed.

  9. Statistical estimation Monte Carlo for unreliability evaluation of highly reliable system

    International Nuclear Information System (INIS)

    Xiao Gang; Su Guanghui; Jia Dounan; Li Tianduo

    2000-01-01

    Based on analog Monte Carlo simulation, statistical Monte Carlo methods for the unreliability evaluation of highly reliable systems are constructed, including a direct statistical estimation Monte Carlo method and a weighted statistical estimation Monte Carlo method. The basic element is given, and the statistical estimation Monte Carlo estimators are derived. The direct Monte Carlo simulation method, the bounding-sampling method, the forced-transitions Monte Carlo method, direct statistical estimation Monte Carlo and weighted statistical estimation Monte Carlo are used to evaluate the unreliability of the same system. By comparison, the weighted statistical estimation Monte Carlo estimator has the smallest variance and the highest calculational efficiency.

  10. Applications of Monte Carlo method in Medical Physics

    International Nuclear Information System (INIS)

    Diez Rios, A.; Labajos, M.

    1989-01-01

    The basic ideas of Monte Carlo techniques are presented. Random numbers and their generation by congruential methods, which underlie Monte Carlo calculations, are shown. Monte Carlo techniques for evaluating integrals are discussed, including the evaluation of a simple one-dimensional integral with a known answer by means of two different Monte Carlo approaches. The basic principles of simulating photon histories on a computer, the reduction of variance, and current applications in Medical Physics are also discussed. (Author)
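
    The one-dimensional integral with a known answer can be illustrated with two standard estimators, a sample-mean estimate and a hit-or-miss estimate. The integrand below is an arbitrary choice made here for illustration, not necessarily the one used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
f = lambda x: np.exp(-x**2)          # integral on [0,1]; exact value is about 0.746824

# Approach 1: sample-mean (crude) Monte Carlo
x = rng.random(N)
mean_estimate = f(x).mean()

# Approach 2: hit-or-miss Monte Carlo (f is bounded by 1 on [0,1])
x, y = rng.random(N), rng.random(N)
hit_estimate = np.mean(y < f(x))

print(mean_estimate, hit_estimate)   # both approach ~0.7468 as N grows
```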

  11. Guideline of Monte Carlo calculation. Neutron/gamma ray transport simulation by Monte Carlo method

    CERN Document Server

    2002-01-01

    This report condenses basic theories and advanced applications of neutron/gamma ray transport calculations in many fields of nuclear energy research. Chapters 1 through 5 treat the historical progress of Monte Carlo methods, general issues of variance reduction techniques, and cross-section libraries used in continuous-energy Monte Carlo codes. In chapter 6, the following issues are discussed: fusion benchmark experiments, design of ITER, experiment analyses of the fast critical assembly, core analyses of JMTR, simulation of a pulsed neutron experiment, core analyses of HTTR, duct streaming calculations, bulk shielding calculations, and neutron/gamma ray transport calculations of the Hiroshima atomic bomb. Chapters 8 and 9 treat function enhancements of the MCNP and MVP codes, and the parallel processing of Monte Carlo calculations, respectively. Important references are attached at the end of this report.

  12. Forward-weighted CADIS method for variance reduction of Monte Carlo calculations of distributions and multiple localized quantities

    International Nuclear Information System (INIS)

    Wagner, J. C.; Blakeman, E. D.; Peplow, D. E.

    2009-01-01

    This paper presents a new hybrid (Monte Carlo/deterministic) method for increasing the efficiency of Monte Carlo calculations of distributions, such as flux or dose rate distributions (e.g., mesh tallies), as well as responses at multiple localized detectors and spectra. This method, referred to as Forward-Weighted CADIS (FW-CADIS), is a variation on the Consistent Adjoint Driven Importance Sampling (CADIS) method, which has been used for some time to very effectively improve the efficiency of Monte Carlo calculations of localized quantities, e.g., flux, dose, or reaction rate at a specific location. The basis of this method is the development of an importance function that represents the importance of particles to the objective of uniform Monte Carlo particle density in the desired tally regions. Implementation of this method utilizes the results from a forward deterministic calculation to develop a forward-weighted source for a deterministic adjoint calculation. The resulting adjoint function is then used to generate consistent space- and energy-dependent source biasing parameters and weight windows that are used in a forward Monte Carlo calculation to obtain approximately uniform statistical uncertainties in the desired tally regions. The FW-CADIS method has been implemented in the ADVANTG/MCNP framework and has been fully automated within the MAVRIC sequence of SCALE 6. Results of the application of the method to enabling the calculation of dose rates throughout an entire full-scale pressurized-water reactor facility are presented and discussed. (authors)

  13. Experience with the Monte Carlo Method

    Energy Technology Data Exchange (ETDEWEB)

    Hussein, E M.A. [Department of Mechanical Engineering University of New Brunswick, Fredericton, N.B., (Canada)

    2007-06-15

    Monte Carlo simulation of radiation transport provides a powerful research and design tool that resembles in many aspects laboratory experiments. Moreover, Monte Carlo simulations can provide an insight not attainable in the laboratory. However, the Monte Carlo method has its limitations, which if not taken into account can result in misleading conclusions. This paper will present the experience of this author, over almost three decades, in the use of the Monte Carlo method for a variety of applications. Examples will be shown on how the method was used to explore new ideas, as a parametric study and design optimization tool, and to analyze experimental data. The consequences of not accounting in detail for detector response and the scattering of radiation by surrounding structures are two of the examples that will be presented to demonstrate the pitfall of condensed.

  14. Experience with the Monte Carlo Method

    International Nuclear Information System (INIS)

    Hussein, E.M.A.

    2007-01-01

    Monte Carlo simulation of radiation transport provides a powerful research and design tool that resembles in many aspects laboratory experiments. Moreover, Monte Carlo simulations can provide an insight not attainable in the laboratory. However, the Monte Carlo method has its limitations, which if not taken into account can result in misleading conclusions. This paper will present the experience of this author, over almost three decades, in the use of the Monte Carlo method for a variety of applications. Examples will be shown on how the method was used to explore new ideas, as a parametric study and design optimization tool, and to analyze experimental data. The consequences of not accounting in detail for detector response and the scattering of radiation by surrounding structures are two of the examples that will be presented to demonstrate the pitfall of condensed

  15. Monte Carlo simulation of near-infrared light propagation in realistic adult head models with hair follicles

    Science.gov (United States)

    Pan, Boan; Fang, Xiang; Liu, Weichao; Li, Nanxi; Zhao, Ke; Li, Ting

    2018-02-01

    Near-infrared spectroscopy (NIRS) and diffuse correlation spectroscopy (DCS) have been used to measure brain activation, which is clinically important. Monte Carlo simulation has been applied to model near-infrared light propagation in biological tissue, and can predict diffusion and brain activation. However, previous studies have rarely considered hair and hair follicles as a contributing factor. Here, we use MCVM (Monte Carlo simulation based on 3D voxelized media) to examine light transmission, absorption, fluence, spatial sensitivity distribution (SSD) and brain activation judgement in the presence or absence of hair follicles. The data in this study are a series of high-resolution cryosectional color photographs of a standing Chinese male adult. We found that the number of photons transmitted under the scalp decreases dramatically, and the number of photons reaching the detector also decreases, as the density of hair follicles increases. If there are no hair follicles, these quantities increase and reach their maximum values. Meanwhile, the light distribution and brain activation change steadily with the hair follicle density. The findings indicate that hair follicles influence the NIRS light distribution and brain activation judgement.

  16. Monte Carlo alpha calculation

    Energy Technology Data Exchange (ETDEWEB)

    Brockway, D.; Soran, P.; Whalen, P.

    1985-01-01

    A Monte Carlo algorithm to efficiently calculate static alpha eigenvalues, N = n·e^(αt), for supercritical systems has been developed and tested. A direct Monte Carlo approach to calculating a static alpha is to simply follow the buildup in time of neutrons in a supercritical system and evaluate the logarithmic derivative of the neutron population with respect to time. This procedure is expensive, and the solution is very noisy and almost useless for a system near critical. The modified approach is to convert the time-dependent problem to a static α-eigenvalue problem and regress α on solutions of a k-eigenvalue problem. In practice, this procedure is much more efficient than the direct calculation, and produces much more accurate results. Because the Monte Carlo codes are intrinsically three-dimensional and use elaborate continuous-energy cross sections, this technique is now used as a standard for evaluating other calculational techniques in odd geometries or with group cross sections.
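
    A toy illustration of the regression idea, with a made-up k(α) curve standing in for the static k-eigenvalue Monte Carlo solves; successive linear fits (secant steps) locate the α at which k(α) = 1. Everything below, including the curve and its coefficients, is assumed for illustration only.

```python
import numpy as np

def k_of_alpha(alpha):
    """Stand-in for a static k-eigenvalue Monte Carlo solve with an alpha/v absorber added.
    A real calculation would run the transport code; this toy curve is only for illustration."""
    return 1.05 - 0.08 * alpha + 0.005 * alpha**2

# Regress k on trial alpha values and solve for the alpha where k(alpha) = 1
alphas = np.array([0.0, 0.5])
ks = np.array([k_of_alpha(a) for a in alphas])
for _ in range(10):
    slope, intercept = np.polyfit(alphas[-2:], ks[-2:], 1)   # local linear fit (secant step)
    a_next = (1.0 - intercept) / slope
    alphas = np.append(alphas, a_next)
    ks = np.append(ks, k_of_alpha(a_next))
    if abs(ks[-1] - 1.0) < 1e-6:
        break
print("static alpha estimate:", alphas[-1])
```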

  17. Level density and gamma-ray strength in ²⁷,²⁸Si

    CERN Document Server

    Guttormsen, M; Rekstad, J; Siem, S; Schiller, A; Lönnroth, T; Voinov, A

    2003-01-01

    A method to extract simultaneously level densities and gamma-ray transmission coefficients has for the first time been tested on light nuclei utilizing the ²⁸Si(³He,αγ)²⁷Si and ²⁸Si(³He,³He'γ)²⁸Si reactions. The extracted level densities for ²⁷Si and ²⁸Si are consistent with the level densities obtained by counting known levels in the respective nuclei. The extracted gamma-ray strength in ²⁸Si agrees well with the known gamma-decay properties of this nucleus. Typical nuclear temperatures are found to be T ≈ 2.4 MeV at around 7 MeV excitation energy. The entropy gap between nuclei with mass number A and A ± 1 is measured to be δS ≈ 1.0 k_B, which indicates an energy spacing between single-particle orbitals comparable with typical nuclear temperatures.

  18. Free energy and phase equilibria for the restricted primitive model of ionic fluids from Monte Carlo simulations

    International Nuclear Information System (INIS)

    Orkoulas, G.; Panagiotopoulos, A.Z.

    1994-01-01

    In this work, we investigate the liquid-vapor phase transition of the restricted primitive model of ionic fluids. We show that at the low temperatures where the phase transition occurs, the system cannot be studied by conventional molecular simulation methods because convergence to equilibrium is slow. To accelerate convergence, we propose cluster Monte Carlo moves capable of moving more than one particle at a time. We then address the issue of charged particle transfers in grand canonical and Gibbs ensemble Monte Carlo simulations, for which we propose a biased particle insertion/destruction scheme capable of sampling short interparticle distances. We compute the chemical potential for the restricted primitive model as a function of temperature and density from grand canonical Monte Carlo simulations and the phase envelope from Gibbs Monte Carlo simulations. Our calculated phase coexistence curve is in agreement with recent results of Caillol obtained on the four-dimensional hypersphere and our own earlier Gibbs ensemble simulations with single-ion transfers, with the exception of the critical temperature, which is lower in the current calculations. Our best estimates for the critical parameters are T*_c = 0.053, ρ*_c = 0.025. We conclude with possible future applications of the biased techniques developed here for phase equilibrium calculations for ionic fluids.

  19. Improving the description of collective effects within the combinatorial model of nuclear level densities

    International Nuclear Information System (INIS)

    Hilaire, S.; Girod, M.; Goriely, S.

    2011-01-01

    The combinatorial model of nuclear level densities has now reached a level of accuracy comparable to that of the best global analytical expressions without suffering from the limits imposed by the statistical hypothesis on which the latter expressions rely. In particular, it naturally provides non-Gaussian spin distributions as well as non-equipartition of parities, which are known to have a significant impact on cross-section predictions at low energies. Our first global model, developed in Ref. 1, suffered from deficiencies, in particular in the way the collective effects, both vibrational and rotational, were treated. We have recently improved this treatment by simultaneously using the single-particle levels and collective properties predicted by a newly derived Gogny interaction, therefore enabling a microscopic description of energy-dependent shell, pairing and deformation effects. In addition, for deformed nuclei, the transition to sphericity is coherently taken into account on the basis of a temperature-dependent Hartree-Fock calculation which provides at each temperature the structure properties needed to build the level densities. This new method is described and shown to give promising preliminary results with respect to available experimental data. (authors)

  20. Monte Carlo simulations of neutron scattering instruments

    International Nuclear Information System (INIS)

    Aestrand, Per-Olof; Copenhagen Univ.; Lefmann, K.; Nielsen, K.

    2001-01-01

    A Monte Carlo simulation is an important computational tool used in many areas of science and engineering. The use of Monte Carlo techniques for simulating neutron scattering instruments is discussed. The basic ideas, techniques and approximations are presented. Since the construction of a neutron scattering instrument is very expensive, Monte Carlo software used for the design of instruments has to be validated and tested extensively. The McStas software was designed with these aspects in mind and some of the basic principles of the McStas software will be discussed. Finally, some future prospects are discussed for using Monte Carlo simulations in optimizing neutron scattering experiments. (R.P.)

  1. A novel muon detector for borehole density tomography

    Science.gov (United States)

    Bonneville, Alain; Kouzes, Richard T.; Yamaoka, Jared; Rowe, Charlotte; Guardincerri, Elena; Durham, J. Matthew; Morris, Christopher L.; Poulson, Daniel C.; Plaud-Ramos, Kenie; Morley, Deborah J.; Bacon, Jeffrey D.; Bynes, James; Cercillieux, Julien; Ketter, Chris; Le, Khanh; Mostafanezhad, Isar; Varner, Gary; Flygare, Joshua; Lintereur, Azaree T.

    2017-04-01

    Muons can be used to image the density of materials through which they pass, including geological structures. Subsurface applications of the technology include tracking fluid migration during injection or production, with increasing concern regarding such timely issues as induced seismicity or chemical leakage into aquifers. Current density monitoring options include gravimetric data collection and active or passive seismic surveys. One alternative, or complement, to these methods is the development of a muon detector that is sufficiently compact and robust for deployment in a borehole. Such a muon detector can enable imaging of density structure to monitor small changes in density - a proxy for fluid migration - at depths up to 1500 m. Such a detector has been developed, and Monte Carlo modeling methods applied to simulate the anticipated detector response. Testing and measurements using a prototype detector in the laboratory and shallow underground laboratory demonstrated robust response. A satisfactory comparison with a large drift tube-based muon detector is also presented.

  2. An update on the BQCD Hybrid Monte Carlo program

    Science.gov (United States)

    Haar, Taylor Ryan; Nakamura, Yoshifumi; Stüben, Hinnerk

    2018-03-01

    We present an update of BQCD, our Hybrid Monte Carlo program for simulating lattice QCD. BQCD is one of the main production codes of the QCDSF collaboration and is used by CSSM and in some Japanese finite temperature and finite density projects. Since the first publication of the code at Lattice 2010 the program has been extended in various ways. New features of the code include: dynamical QED, action modification in order to compute matrix elements by using Feynman-Hellmann theory, more trace measurements (like Tr(D⁻ⁿ) for κ, c_SW and chemical potential reweighting), a more flexible integration scheme, polynomial filtering, term-splitting for RHMC, and a portable implementation of performance critical parts employing SIMD.

  3. Kinetic Monte Carlo simulations of water ice porosity: extrapolations of deposition parameters from the laboratory to interstellar space

    Science.gov (United States)

    Clements, Aspen R.; Berk, Brandon; Cooke, Ilsa R.; Garrod, Robin T.

    2018-02-01

    Using an off-lattice kinetic Monte Carlo model we reproduce experimental laboratory trends in the density of amorphous solid water (ASW) for varied deposition angle, rate and surface temperature. Extrapolation of the model to conditions appropriate to protoplanetary disks and interstellar dark clouds indicates that these ices may be less porous than laboratory ices.

  4. Extract of mangosteen increases high density lipoprotein levels in rats fed high lipid

    Directory of Open Access Journals (Sweden)

    Dwi Laksono Adiputro

    2013-04-01

    Background In cardiovascular medicine, Garcinia mangostana has been used as an antioxidant to inhibit oxidation of low density lipoproteins and as an antiobesity agent. The effect of Garcinia mangostana on hyperlipidemia is unknown. The aim of this study was to evaluate the effect of an ethanolic extract of Garcinia mangostana pericarp on lipid profile in rats fed a high lipid diet. Methods A total of 40 rats were divided into five groups: control, high lipid diet, and high lipid diet + ethanolic extract of Garcinia mangostana pericarp at dosages of 200, 400, and 800 mg/kg body weight. The control group received a standard diet for 60 days. The high lipid diet group received standard diet plus egg yolk, goat fat, cholic acid, and pig fat for 60 days with or without ethanolic extract of Garcinia mangostana pericarp by the oral route. After 60 days, rats were anesthetized with ether for collection of blood by cardiac puncture. Analysis of blood lipid profile comprised colorimetric determination of cholesterol, triglyceride, low density lipoprotein (LDL), and high density lipoprotein (HDL). Results From the results of one-way ANOVA it was concluded that there were significant between-group differences in cholesterol, triglyceride, LDL, and HDL levels (p=0.000). Ethanolic extract of Garcinia mangostana pericarp significantly decreased cholesterol, triglyceride, and LDL levels, starting at 400 mg/kg body weight (p=0.000). Ethanolic extract of Garcinia mangostana pericarp significantly increased HDL level starting at 200 mg/kg body weight (p=0.000). Conclusion Ethanolic extract of Garcinia mangostana pericarp has a beneficial effect on lipid profile in rats on a high lipid diet.

  5. Extract of mangosteen increases high density lipoprotein levels in rats fed high lipid

    Directory of Open Access Journals (Sweden)

    Dwi Laksono Adiputro

    2015-12-01

    BACKGROUND In cardiovascular medicine, Garcinia mangostana has been used as an antioxidant to inhibit oxidation of low density lipoproteins and as an antiobesity agent. The effect of Garcinia mangostana on hyperlipidemia is unknown. The aim of this study was to evaluate the effect of an ethanolic extract of Garcinia mangostana pericarp on lipid profile in rats fed a high lipid diet. METHODS A total of 40 rats were divided into five groups: control, high lipid diet, and high lipid diet + ethanolic extract of Garcinia mangostana pericarp at dosages of 200, 400, and 800 mg/kg body weight. The control group received a standard diet for 60 days. The high lipid diet group received standard diet plus egg yolk, goat fat, cholic acid, and pig fat for 60 days with or without ethanolic extract of Garcinia mangostana pericarp by the oral route. After 60 days, rats were anesthetized with ether for collection of blood by cardiac puncture. Analysis of blood lipid profile comprised colorimetric determination of cholesterol, triglyceride, low density lipoprotein (LDL), and high density lipoprotein (HDL). RESULTS From the results of one-way ANOVA it was concluded that there were significant between-group differences in cholesterol, triglyceride, LDL, and HDL levels (p=0.000). Ethanolic extract of Garcinia mangostana pericarp significantly decreased cholesterol, triglyceride, and LDL levels, starting at 400 mg/kg body weight (p=0.000). Ethanolic extract of Garcinia mangostana pericarp significantly increased HDL level starting at 200 mg/kg body weight (p=0.000). CONCLUSION Ethanolic extract of Garcinia mangostana pericarp has a beneficial effect on lipid profile in rats on a high lipid diet.

  6. Puzzle of magnetic moments of Ni clusters revisited using quantum Monte Carlo method.

    Science.gov (United States)

    Lee, Hung-Wen; Chang, Chun-Ming; Hsing, Cheng-Rong

    2017-02-28

    The puzzle of the magnetic moments of small nickel clusters arises from the discrepancy between values predicted using density functional theory (DFT) and experimental measurements. Traditional DFT approaches underestimate the magnetic moments of nickel clusters. Two fundamental problems are associated with this puzzle, namely, calculating the exchange-correlation interaction accurately and determining the global minimum structures of the clusters. Theoretically, the two problems can be solved using quantum Monte Carlo (QMC) calculations and the ab initio random structure searching (AIRSS) method, respectively. Therefore, we combined the fixed-moment AIRSS and QMC methods to investigate the magnetic properties of Niₙ (n = 5-9) clusters. The spin moments of the diffusion Monte Carlo (DMC) ground states are higher than those of the Perdew-Burke-Ernzerhof ground states and, in the case of Ni₈₋₉, two new ground-state structures have been discovered using the DMC calculations. The predicted results are closer to the experimental findings, unlike the results predicted in previous standard DFT studies.

  7. Linear filtering applied to Monte Carlo criticality calculations

    International Nuclear Information System (INIS)

    Morrison, G.W.; Pike, D.H.; Petrie, L.M.

    1975-01-01

    A significant improvement in the acceleration of the convergence of the eigenvalue computed by Monte Carlo techniques has been developed by applying linear filtering theory to Monte Carlo calculations for multiplying systems. A Kalman filter was applied to a KENO Monte Carlo calculation of an experimental critical system consisting of eight interacting units of fissile material. A comparison of the filter estimate and the Monte Carlo realization was made. The Kalman filter converged in five iterations to 0.9977. After 95 iterations, the average k-eff from the Monte Carlo calculation was 0.9981. This demonstrates that the Kalman filter has the potential of reducing the calculational effort of multiplying systems. Other examples and results are discussed
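
    A scalar Kalman filter applied to noisy per-generation k-eff estimates might look like the sketch below; the "true" k-eff, noise level and number of generations are invented for illustration and are not the KENO data from the paper. With no process noise the filter reduces to an optimally weighted running average of the generation estimates.

```python
import numpy as np

rng = np.random.default_rng(3)
k_true, sigma_obs = 0.9981, 0.004
observations = k_true + sigma_obs * rng.standard_normal(95)   # one noisy k-eff per generation

x, P = 1.0, 1.0           # initial state estimate and its variance
Q, R = 0.0, sigma_obs**2  # no process noise (k-eff assumed constant), measurement variance
for z in observations:
    P = P + Q                       # predict (state is constant, so x is unchanged)
    K = P / (P + R)                 # Kalman gain
    x = x + K * (z - x)             # update with the new Monte Carlo realization
    P = (1.0 - K) * P
print("filtered k-eff:", round(x, 4), "+/-", round(P**0.5, 4))
```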

  8. Burnup calculations using Monte Carlo method

    International Nuclear Information System (INIS)

    Ghosh, Biplab; Degweker, S.B.

    2009-01-01

    In recent years, interest in burnup calculations using Monte Carlo methods has gained momentum. Previous burnup codes have used multigroup transport theory based calculations followed by diffusion theory based core calculations for the neutronic portion of the codes. The transport theory methods invariably make approximations with regard to the treatment of the energy and angle variables involved in scattering, besides approximations related to geometry simplification. Cell homogenisation to produce diffusion theory parameters adds to these approximations. Moreover, while diffusion theory works for most reactors, it does not produce accurate results in systems that have strong gradients, strong absorbers or large voids. Also, diffusion theory codes are geometry limited (rectangular, hexagonal, cylindrical, and spherical coordinates). Monte Carlo methods are ideal for very heterogeneous reactors and/or lattices/assemblies in which considerable burnable poisons are used. The key feature of this approach is that Monte Carlo methods permit essentially 'exact' modeling of all geometrical detail, without resort to energy and spatial homogenization of neutron cross sections. The Monte Carlo method would also be better for Accelerator Driven Systems (ADS), which could have strong gradients due to the external source and a sub-critical assembly. To meet the demand for an accurate burnup code, we have developed a Monte Carlo burnup calculation code system in which a Monte Carlo neutron transport code is coupled with a versatile code (McBurn) for calculating the buildup and decay of nuclides in nuclear materials. McBurn was developed from scratch by the authors. In this article we discuss our effort in developing the continuous-energy Monte Carlo burnup code, McBurn. McBurn is intended for entire reactor cores as well as for unit cells and assemblies. Generally, McBurn can do burnup of any geometrical system which can be handled by the underlying Monte Carlo transport code.
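
    The transport/depletion coupling that such a code performs can be caricatured in a few lines. The sketch below is not McBurn: the one-group data, flux model and two-nuclide chain are invented, and a real code would replace transport_solve with a full Monte Carlo transport run at each burnup step.

```python
import numpy as np
from scipy.linalg import expm

def transport_solve(number_densities, N0=2.0e21):
    """Stand-in for the Monte Carlo flux solve; a real code would tally the flux.
    Here: constant-power toy, so the flux rises as the fissile nuclide is depleted."""
    return 3.0e13 * N0 / max(number_densities[0], 1.0e18)

N = np.array([2.0e21, 0.0])            # atoms/cm^3 of [fissile nuclide, absorber daughter]
sigma_a = np.array([600e-24, 50e-24])  # one-group absorption cross sections, cm^2
dt = 30 * 24 * 3600.0                  # 30-day burnup steps

for step in range(6):
    phi = transport_solve(N)
    # Bateman/depletion matrix: parent is destroyed, an assumed fraction feeds the daughter
    A = np.array([[-sigma_a[0] * phi, 0.0],
                  [0.4 * sigma_a[0] * phi, -sigma_a[1] * phi]])
    N = expm(A * dt) @ N               # matrix-exponential solution over the step
    print(step, phi, N)
```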

  9. Advanced Mesh-Enabled Monte carlo capability for Multi-Physics Reactor Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, Paul; Evans, Thomas; Tautges, Tim

    2012-12-24

    This project will accumulate high-precision fluxes throughout reactor geometry on a non- orthogonal grid of cells to support multi-physics coupling, in order to more accurately calculate parameters such as reactivity coefficients and to generate multi-group cross sections. This work will be based upon recent developments to incorporate advanced geometry and mesh capability in a modular Monte Carlo toolkit with computational science technology that is in use in related reactor simulation software development. Coupling this capability with production-scale Monte Carlo radiation transport codes can provide advanced and extensible test-beds for these developments. Continuous energy Monte Carlo methods are generally considered to be the most accurate computational tool for simulating radiation transport in complex geometries, particularly neutron transport in reactors. Nevertheless, there are several limitations for their use in reactor analysis. Most significantly, there is a trade-off between the fidelity of results in phase space, statistical accuracy, and the amount of computer time required for simulation. Consequently, to achieve an acceptable level of statistical convergence in high-fidelity results required for modern coupled multi-physics analysis, the required computer time makes Monte Carlo methods prohibitive for design iterations and detailed whole-core analysis. More subtly, the statistical uncertainty is typically not uniform throughout the domain, and the simulation quality is limited by the regions with the largest statistical uncertainty. In addition, the formulation of neutron scattering laws in continuous energy Monte Carlo methods makes it difficult to calculate adjoint neutron fluxes required to properly determine important reactivity parameters. Finally, most Monte Carlo codes available for reactor analysis have relied on orthogonal hexahedral grids for tallies that do not conform to the geometric boundaries and are thus generally not well

  10. Dose response evaluation of a low-density normoxic polymer gel dosimeter using MRI

    Energy Technology Data Exchange (ETDEWEB)

    Haraldsson, P [Medical Radiation Physics, Department of Clinical Sciences, Lund University, Malmoe University Hospital, SE-205 02 Malmoe (Sweden); Department of Radiation Physics, Finsen Centre, Copenhagen University Hospital, DK-2100 Copenhagen (Denmark); Karlsson, A [Medical Radiation Physics, Department of Clinical Sciences, Lund University, Malmoe University Hospital, SE-205 02 Malmoe (Sweden); Wieslander, E [Medical Radiation Physics, Department of Clinical Sciences, Lund University Hospital, SE-221 85 Lund (Sweden); Gustavsson, H [Medical Radiation Physics, Department of Clinical Sciences, Lund University, Malmoe University Hospital, SE-205 02 Malmoe (Sweden); Baeck, S A J [Medical Radiation Physics, Department of Clinical Sciences, Lund University, Malmoe University Hospital, SE-205 02 Malmoe (Sweden)

    2006-02-21

    A low-density (~0.6 g cm⁻³) normoxic polymer gel, containing the antioxidant tetrakis(hydroxymethyl)phosphonium (THP), has been investigated with respect to basic absorbed dose response characteristics. The low density was obtained by mixing the gel with expanded polystyrene spheres. The depth dose data for 6 and 18 MV photons were compared with Monte Carlo calculations. A large volume phantom was irradiated in order to study the 3D dose distribution from a 6 MV field. Evaluation of the gel was carried out using magnetic resonance imaging. An approximately linear response was obtained for 1/T2 versus dose in the dose range of 2 to 8 Gy. A small decrease in the dose response was observed for increasing concentrations of THP. A good agreement between measured and Monte Carlo calculated data was obtained, both for test tubes and the larger 3D phantom. It was shown that a normoxic polymer gel with a reduced density could be obtained by adding expanded polystyrene spheres. In order to get reliable results, it is very important to have a uniform distribution of the gel and expanded polystyrene spheres in the phantom volume.

  11. Dose response evaluation of a low-density normoxic polymer gel dosimeter using MRI

    Science.gov (United States)

    Haraldsson, P.; Karlsson, A.; Wieslander, E.; Gustavsson, H.; Bäck, S. Å. J.

    2006-02-01

    A low-density (~0.6 g cm⁻³) normoxic polymer gel, containing the antioxidant tetrakis(hydroxymethyl)phosphonium (THP), has been investigated with respect to basic absorbed dose response characteristics. The low density was obtained by mixing the gel with expanded polystyrene spheres. The depth dose data for 6 and 18 MV photons were compared with Monte Carlo calculations. A large volume phantom was irradiated in order to study the 3D dose distribution from a 6 MV field. Evaluation of the gel was carried out using magnetic resonance imaging. An approximately linear response was obtained for 1/T2 versus dose in the dose range of 2 to 8 Gy. A small decrease in the dose response was observed for increasing concentrations of THP. A good agreement between measured and Monte Carlo calculated data was obtained, both for test tubes and the larger 3D phantom. It was shown that a normoxic polymer gel with a reduced density could be obtained by adding expanded polystyrene spheres. In order to get reliable results, it is very important to have a uniform distribution of the gel and expanded polystyrene spheres in the phantom volume.

  12. Continuous-time quantum Monte Carlo impurity solvers

    Science.gov (United States)

    Gull, Emanuel; Werner, Philipp; Fuchs, Sebastian; Surer, Brigitte; Pruschke, Thomas; Troyer, Matthias

    2011-04-01

    Continuous-time quantum Monte Carlo impurity solvers are algorithms that sample the partition function of an impurity model using diagrammatic Monte Carlo techniques. The present paper describes codes that implement the interaction expansion algorithm originally developed by Rubtsov, Savkin, and Lichtenstein, as well as the hybridization expansion method developed by Werner, Millis, Troyer, et al. These impurity solvers are part of the ALPS-DMFT application package and are accompanied by an implementation of dynamical mean-field self-consistency equations for (single orbital single site) dynamical mean-field problems with arbitrary densities of states. Program summary: Program title: dmft. Catalogue identifier: AEIL_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEIL_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: ALPS LIBRARY LICENSE version 1.1. No. of lines in distributed program, including test data, etc.: 899 806. No. of bytes in distributed program, including test data, etc.: 32 153 916. Distribution format: tar.gz. Programming language: C++. Operating system: The ALPS libraries have been tested on the following platforms and compilers: Linux with GNU Compiler Collection (g++ version 3.1 and higher), and Intel C++ Compiler (icc version 7.0 and higher); MacOS X with GNU Compiler (g++ Apple-version 3.1, 3.3 and 4.0); IBM AIX with Visual Age C++ (xlC version 6.0) and GNU (g++ version 3.1 and higher) compilers; Compaq Tru64 UNIX with Compaq C++ Compiler (cxx); SGI IRIX with MIPSpro C++ Compiler (CC); HP-UX with HP C++ Compiler (aCC); Windows with Cygwin or coLinux platforms and GNU Compiler Collection (g++ version 3.1 and higher). RAM: 10 MB-1 GB. Classification: 7.3. External routines: ALPS [1], BLAS/LAPACK, HDF5. Nature of problem: (See [2].) Quantum impurity models describe an atom or molecule embedded in a host material with which it can exchange electrons. They are basic to nanoscience as

  13. Level density approach to perturbation theory and inverse-energy-weighted sum-rules

    International Nuclear Information System (INIS)

    Halemane, T.R.

    1983-01-01

    The terms in the familiar Rayleigh-Schroedinger perturbation series involve eigenvalues and eigenfunctions of the unperturbed operator. A level density formalism that does not involve computation of eigenvalues and eigenfunctions is given here for the perturbation series. In the CLT (central limit theorem) limit the expressions take very simple linear forms. The evaluation is in terms of moments and traces of operators and operator products. 3 references.

  14. The Possibility of Using Monte Carlo Method in the Case of Decision-Making under Conditions of Risk Concerning an Agricultural Economics Issue

    Directory of Open Access Journals (Sweden)

    Dominika Crnjac Milić

    2013-07-01

    The Monte Carlo method is a probabilistic computer algorithm in which the values of one or more random variables are given by density functions, and whose goal is to predict all the possible outcomes of the process it is applied to and the probability of their occurrence. As such, the Monte Carlo method proves to be extremely useful in the process of decision-making under conditions of risk. This paper discusses an example of function optimization with the aim of finding a solution that delivers the highest profit in the described agricultural economics problem under risk-free conditions. A Monte Carlo simulation is then carried out and the solution under conditions of risk is also found. For that purpose, a special program code was written.
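
    A sketch of the kind of simulation described, with invented yield, price and cost figures rather than the paper's data: sample the uncertain inputs from assumed density functions, evaluate the profit for each draw, and read off the expected value and simple risk measures from the resulting distribution.

```python
import numpy as np

rng = np.random.default_rng(7)
M = 100_000

area = 50.0                                          # hectares planted (decision variable)
yield_t_ha = rng.normal(6.0, 1.2, M).clip(min=0)     # uncertain yield, t/ha (assumed distribution)
price = rng.lognormal(mean=np.log(180), sigma=0.15, size=M)   # uncertain price, EUR/t (assumed)
cost_per_ha = 700.0                                  # assumed fixed production cost, EUR/ha

profit = area * (yield_t_ha * price - cost_per_ha)

print("expected profit:", profit.mean())
print("5% worst-case (VaR):", np.percentile(profit, 5))
print("probability of loss:", np.mean(profit < 0))
```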

  15. Correlation analysis between bone density measured by quantitative CT and blood sugar level of aged patients with non-insulin-dependent diabetes mellitus

    International Nuclear Information System (INIS)

    Wang Guizhi; Liang Ping; Qiao Junhua; Liu Chunyan

    2008-01-01

    Objective: To investigate the correlation between bone density measured by quantitative CT and the blood sugar level of aged patients with non-insulin-dependent diabetes mellitus, and to observe the effect of the blood sugar level on bone density. Methods: The lumbar bone densities and blood sugar levels of 160 aged patients with non-insulin-dependent diabetes mellitus (hyperglycemia group, 80 cases; euglycemia group, 80 cases) and of 80 healthy aged people were measured by quantitative CT and serum biochemical assay; the correlation between blood sugar level and bone density and the occurrence of osteoporosis in the various groups were analyzed. Results: The bone density in the non-insulin-dependent diabetes hyperglycemia group was lower than in the normal (control) group and the non-insulin-dependent diabetes euglycemia group (P<0.05); the morbidity of osteoporosis in the non-insulin-dependent diabetes hyperglycemia group was higher than in the normal (control) group and the non-insulin-dependent diabetes euglycemia group (P<0.05); a negative correlation was found between bone density and blood sugar level (aged male group: r=-0.7382, P=0.0013; aged female group: r=-0.8343, P=0.0007). Conclusion: The blood sugar level affects the bone density of aged patients with non-insulin-dependent diabetes mellitus; the higher the blood sugar level, the lower the bone density. Aged non-insulin-dependent diabetes patients with hyperglycemia are predisposed to osteoporosis. (authors)

  16. Adaptable three-dimensional Monte Carlo modeling of imaged blood vessels in skin

    Science.gov (United States)

    Pfefer, T. Joshua; Barton, Jennifer K.; Chan, Eric K.; Ducros, Mathieu G.; Sorg, Brian S.; Milner, Thomas E.; Nelson, J. Stuart; Welch, Ashley J.

    1997-06-01

    In order to reach a higher level of accuracy in simulation of port wine stain treatment, we propose to discard the typical layered geometry and cylindrical blood vessel assumptions made in optical models and use imaging techniques to define actual tissue geometry. Two main additions to the typical 3D, weighted photon, variable step size Monte Carlo routine were necessary to achieve this goal. First, optical low coherence reflectometry (OLCR) images of rat skin were used to specify a 3D material array, with each entry assigned a label to represent the type of tissue in that particular voxel. Second, the Monte Carlo algorithm was altered so that when a photon crosses into a new voxel, the remaining path length is recalculated using the new optical properties, as specified by the material array. The model has shown good agreement with data from the literature. Monte Carlo simulations using OLCR images of asymmetrically curved blood vessels show various effects such as shading, scattering-induced peaks at vessel surfaces, and directionality-induced gradients in energy deposition. In conclusion, this augmentation of the Monte Carlo method can accurately simulate light transport for a wide variety of nonhomogeneous tissue geometries.
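
    The second modification described above, recomputing the remaining path length whenever a photon enters a voxel with different optical properties, can be sketched as follows. This is a simplified illustration of the principle (a dimensionless optical-path budget consumed voxel by voxel), not the authors' code; the voxel lengths and attenuation coefficients are hypothetical.

```python
# Sketch: propagate a photon across voxels with different attenuation coefficients.
# A dimensionless optical path s = -ln(xi) is sampled once; as the photon crosses
# each voxel, the remaining budget is spent at that voxel's mu_t, which is the idea
# of recalculating the remaining path length in the new material.  Illustrative only.
import numpy as np

def distance_to_interaction(segments, rng):
    """segments: list of (geometric_length_cm, mu_t_per_cm) along the photon direction."""
    s = -np.log(rng.random())          # sampled optical depth (in mean free paths)
    travelled = 0.0
    for length, mu_t in segments:
        if mu_t * length >= s:         # interaction happens inside this voxel
            return travelled + s / mu_t
        s -= mu_t * length             # spend part of the budget, move to the next voxel
        travelled += length
    return travelled                   # photon exits the grid without interacting

rng = np.random.default_rng(1)
voxels = [(0.01, 1.0), (0.01, 25.0), (0.01, 1.0)]   # e.g. dermis, blood vessel, dermis (assumed values)
print(distance_to_interaction(voxels, rng))
```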

  17. APPLICATION OF QUEUING THEORY TO AUTOMATED TELLER MACHINE (ATM) FACILITIES USING MONTE CARLO SIMULATION

    OpenAIRE

    UDOANYA RAYMOND MANUEL; ANIEKAN OFFIONG

    2014-01-01

    This paper presents the importance of applying queuing theory to the Automated Teller Machine (ATM) using Monte Carlo simulation in order to determine, control and manage the level of queuing congestion found within an ATM centre in Nigeria. It also contains an empirical analysis of the queuing data collected at the ATM located within the bank premises over a period of three (3) months. Monte Carlo Simulation is applied to th...
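
    As a hedged sketch of how Monte Carlo simulation can be applied to such a queue (not the analysis performed in the paper), the code below simulates a single-server queue with assumed exponential inter-arrival and service times and estimates the mean customer waiting time.

```python
# Minimal Monte Carlo simulation of a single-server (M/M/1-style) ATM queue.
# Arrival and service rates are assumed values, not data from the study.
import numpy as np

rng = np.random.default_rng(42)
n_customers = 50_000
arrival_rate = 0.8      # customers per minute (assumed)
service_rate = 1.0      # customers per minute (assumed)

inter_arrival = rng.exponential(1.0 / arrival_rate, n_customers)
service = rng.exponential(1.0 / service_rate, n_customers)
arrival = np.cumsum(inter_arrival)

wait = np.zeros(n_customers)
finish = arrival[0] + service[0]
for i in range(1, n_customers):
    wait[i] = max(0.0, finish - arrival[i])   # time spent queuing before service starts
    finish = arrival[i] + wait[i] + service[i]

print(f"estimated mean wait: {wait.mean():.2f} min "
      f"(M/M/1 theory: {arrival_rate / (service_rate * (service_rate - arrival_rate)):.2f} min)")
```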

  18. Criticality safety of low-density storage arrays

    International Nuclear Information System (INIS)

    Bauer, T. H.; Nuclear Engineering Division

    2005-01-01

    This paper proposes a straightforward bounding method for the criticality safety analysis of fissionable materials configured into large arrays of standard containers. While criticality-safe storage limits have been well established for single containers, even under flooded conditions, it is also necessary to rule out any potential for criticality arising from neutronic interactions among multiple containers that might build up over long distances in a large array. Traditionally, the array problem has been approached by individual Monte Carlo analyses of explicit arrangements of single units and their surroundings. Deemphasizing specific configurations, the present technique takes advantage of low average density of fissionable material in typical storage arrays to separate neutron interactions that take place in the neutron's 'birth unit' from subsequent interactions in a dilute array. Numerous explicit Monte Carlo analyses show that array effects may be conservatively calculated by analyses that homogenize fissionable contents and depend only on the overall array shape, size, and reflective boundary

  19. Criticality safety of low-density storage arrays

    International Nuclear Information System (INIS)

    Bauer, T.H.

    1996-01-01

    This paper proposes a straightforward bounding method for the criticality safety analysis of fissionable materials configured into large arrays of standard containers. While criticality-safe storage limits have been well established for single containers, even under flooded conditions, it is also necessary to rule out any potential for criticality arising from neutronic interactions among multiple containers that might build up over long distances in a large array. Traditionally, the array problem has been approached by individual Monte Carlo analyses of explicit arrangements of single units and their surroundings. Deemphasizing specific configurations, the present technique takes advantage of low average density of fissionable material in typical storage arrays to separate neutron interactions that take place in the neutron's 'birth unit' from subsequent interactions in a dilute array. Numerous explicit Monte Carlo analyses show that array effects may be conservatively calculated by analyses that homogenize fissionable contents and depend only on the overall array shape, size, and reflective boundary

  20. Levy-Lieb-Based Monte Carlo Study of the Dimensionality Behaviour of the Electronic Kinetic Functional

    Directory of Open Access Journals (Sweden)

    Seshaditya A.

    2017-06-01

    We consider a gas of interacting electrons in the limit of nearly uniform density and treat the one-dimensional (1D), two-dimensional (2D) and three-dimensional (3D) cases. We focus on the determination of the correlation part of the kinetic functional by employing a Monte Carlo sampling technique of electrons in space based on an analytic derivation via the Levy-Lieb constrained search principle. Of particular interest is the question of the behaviour of the functional as one passes from 1D to 3D; according to the basic principles of Density Functional Theory (DFT) the form of the universal functional should be independent of the dimensionality. However, in practice the straightforward use of current approximate functionals in different dimensions is problematic. Here, we show that going from the 3D to the 2D case the functional form is consistent (concave function), but in 1D it becomes convex; such a drastic difference is peculiar to 1D electron systems, as it is for other quantities. Given the interesting behaviour of the functional, this study represents a basic first-principles approach to the problem and suggests further investigations using highly accurate (though expensive) many-electron computational techniques, such as Quantum Monte Carlo.

  1. Thermodynamic properties of water in confined environments: a Monte Carlo study

    Science.gov (United States)

    Gladovic, Martin; Bren, Urban; Urbic, Tomaž

    2018-05-01

    Monte Carlo simulations of Mercedes-Benz water in a crowded environment were performed. The simulated systems are representative of both composite, porous or sintered materials and living cells with typical matrix packings. We studied the influence of overall temperature as well as the density and size of matrix particles on water density, particle distributions, hydrogen bond formation and thermodynamic quantities. Interestingly, temperature and space occupancy of matrix exhibit a similar effect on water properties following the competition between the kinetic and the potential energy of the system, whereby temperature increases the kinetic and matrix packing decreases the potential contribution. A novel thermodynamic decomposition approach was applied to gain insight into individual contributions of different types of inter-particle interactions. This decomposition proved to be useful and in good agreement with the total thermodynamic quantities especially at higher temperatures and matrix packings, where higher-order potential-energy mixing terms lose their importance.

  2. Monte Carlo approaches to light nuclei

    International Nuclear Information System (INIS)

    Carlson, J.

    1990-01-01

    Significant progress has been made recently in the application of Monte Carlo methods to the study of light nuclei. We review new Green's function Monte Carlo results for the alpha particle, Variational Monte Carlo studies of 16 O, and methods for low-energy scattering and transitions. Through these calculations, a coherent picture of the structure and electromagnetic properties of light nuclei has arisen. In particular, we examine the effect of the three-nucleon interaction and the importance of exchange currents in a variety of experimentally measured properties, including form factors and capture cross sections. 29 refs., 7 figs

  3. Monte Carlo approaches to light nuclei

    Energy Technology Data Exchange (ETDEWEB)

    Carlson, J.

    1990-01-01

    Significant progress has been made recently in the application of Monte Carlo methods to the study of light nuclei. We review new Green's function Monte Carlo results for the alpha particle, Variational Monte Carlo studies of {sup 16}O, and methods for low-energy scattering and transitions. Through these calculations, a coherent picture of the structure and electromagnetic properties of light nuclei has arisen. In particular, we examine the effect of the three-nucleon interaction and the importance of exchange currents in a variety of experimentally measured properties, including form factors and capture cross sections. 29 refs., 7 figs.

  4. Quantum Monte Carlo calculations with chiral effective field theory interactions

    Energy Technology Data Exchange (ETDEWEB)

    Tews, Ingo

    2015-10-12

    The neutron-matter equation of state connects several physical systems over a wide density range, from cold atomic gases in the unitary limit at low densities, to neutron-rich nuclei at intermediate densities, up to neutron stars which reach supranuclear densities in their core. An accurate description of the neutron-matter equation of state is therefore crucial to describe these systems. To calculate the neutron-matter equation of state reliably, precise many-body methods in combination with a systematic theory for nuclear forces are needed. Chiral effective field theory (EFT) is such a theory. It provides a systematic framework for the description of low-energy hadronic interactions and enables calculations with controlled theoretical uncertainties. Chiral EFT makes use of a momentum-space expansion of nuclear forces based on the symmetries of Quantum Chromodynamics, which is the fundamental theory of strong interactions. In chiral EFT, the description of nuclear forces can be systematically improved by going to higher orders in the chiral expansion. On the other hand, continuum Quantum Monte Carlo (QMC) methods are among the most precise many-body methods available to study strongly interacting systems at finite densities. They treat the Schroedinger equation as a diffusion equation in imaginary time and project out the ground-state wave function of the system starting from a trial wave function by propagating the system in imaginary time. To perform this propagation, continuum QMC methods require as input local interactions. However, chiral EFT, which is naturally formulated in momentum space, contains several sources of nonlocality. In this Thesis, we show how to construct local chiral two-nucleon (NN) and three-nucleon (3N) interactions and discuss results of first QMC calculations for pure neutron systems. We have performed systematic auxiliary-field diffusion Monte Carlo (AFDMC) calculations for neutron matter using local chiral NN interactions. By
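
    The imaginary-time projection mentioned above can be summarized compactly; this is the standard textbook relation rather than an equation quoted from the thesis:

```latex
% Imaginary-time projection onto the ground state used by diffusion-type QMC methods:
e^{-\tau H}\,|\Psi_T\rangle \;=\; \sum_n e^{-\tau E_n}\,|n\rangle\langle n|\Psi_T\rangle
\quad\Longrightarrow\quad
|\Psi_0\rangle \;\propto\; \lim_{\tau \to \infty} e^{-\tau (H - E_0)}\,|\Psi_T\rangle ,
\qquad \text{provided } \langle 0 | \Psi_T \rangle \neq 0 .
```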

  5. Automatic variance reduction for Monte Carlo simulations via the local importance function transform

    International Nuclear Information System (INIS)

    Turner, S.A.

    1996-02-01

    The author derives a transformed transport problem that can be solved theoretically by analog Monte Carlo with zero variance. However, the Monte Carlo simulation of this transformed problem cannot be implemented in practice, so he develops a method for approximating it. The approximation to the zero variance method consists of replacing the continuous adjoint transport solution in the transformed transport problem by a piecewise continuous approximation containing local biasing parameters obtained from a deterministic calculation. He uses the transport and collision processes of the transformed problem to bias distance-to-collision and selection of post-collision energy groups and trajectories in a traditional Monte Carlo simulation of 'real' particles. He refers to the resulting variance reduction method as the Local Importance Function Transform (LIFT) method. He demonstrates the efficiency of the LIFT method for several 3-D, linearly anisotropic scattering, one-group, and multigroup problems. In these problems the LIFT method is shown to be more efficient than the AVATAR scheme, which is one of the best variance reduction techniques currently available in a state-of-the-art Monte Carlo code. For most of the problems considered, the LIFT method produces higher figures of merit than AVATAR, even when the LIFT method is used as a 'black box'. There are some problems that cause trouble for most variance reduction techniques, and the LIFT method is no exception. For example, the author demonstrates that problems with voids, or low density regions, can cause a reduction in the efficiency of the LIFT method. However, the LIFT method still performs better than survival biasing and AVATAR in these difficult cases

  6. Simulation and the Monte Carlo method

    CERN Document Server

    Rubinstein, Reuven Y

    2016-01-01

    Simulation and the Monte Carlo Method, Third Edition reflects the latest developments in the field and presents a fully updated and comprehensive account of the major topics that have emerged in Monte Carlo simulation since the publication of the classic First Edition more than a quarter of a century ago. While maintaining its accessible and intuitive approach, this revised edition features a wealth of up-to-date information that facilitates a deeper understanding of problem solving across a wide array of subject areas, such as engineering, statistics, computer science, mathematics, and the physical and life sciences. The book begins with a modernized introduction that addresses the basic concepts of probability, Markov processes, and convex optimization. Subsequent chapters discuss the dramatic changes that have occurred in the field of the Monte Carlo method, with coverage of many modern topics including: Markov Chain Monte Carlo, variance reduction techniques such as the transform likelihood ratio...

  7. Evaluation of three Monte Carlo estimation schemes for flux at a point

    International Nuclear Information System (INIS)

    Kalli, H.J.; Cashwell, E.D.

    1977-09-01

    Three Monte Carlo estimation schemes were studied to avoid the difficulties caused by the (1/r²) singularity in the expression of the normal next-event estimator (NEE) for the flux at a point. A new, fast, once-more collided flux estimator (OMCFE) scheme, based on a very simple probability density function (p.d.f.) of the distance to collision in the selection of the intermediate collision points, is proposed. This kind of p.d.f. of the collision distance is used in two nonanalog schemes using the NEE. In these two schemes, which have principal similarities to some schemes proposed earlier in the literature, the (1/r²) singularity is canceled by incorporating the singularity into the p.d.f. of the collision points. This is achieved by playing a suitable nonanalog game in the neighborhood of the detector points. The three schemes were tested in a monoenergetic, homogeneous infinite-medium problem, then were evaluated in a point-cross-section problem by using the Monte Carlo code MCNG. 10 figures

  8. Monte Carlo modelling of impurity ion transport for a limiter source/sink

    International Nuclear Information System (INIS)

    Stangeby, P.C.; Farrell, C.; Hoskins, S.; Wood, L.

    1988-01-01

    In relating the impurity influx Φ_I(0) (atoms per second) into a plasma from the edge to the central impurity ion density n_I(0) (ions·m⁻³), it is necessary to know the value of τ_I^SOL, the average dwell time of impurity ions in the scrape-off layer. It is usually assumed that τ_I^SOL = L_c/c_s, the hydrogenic dwell time, where L_c is the limiter connection length and c_s is the hydrogenic ion acoustic speed. Monte Carlo ion transport results are reported here which show that, for a wall (uniform) influx, τ_I^SOL is longer than L_c/c_s, while for a limiter influx it is shorter. Thus for a limiter influx n_I(0) is predicted to be smaller than the reference value. Impurities released from the limiter form ever larger 'clouds' of successively higher ionization stages. These are reproduced by the Monte Carlo code, as are the cloud shapes for a localized impurity injection far from the limiter. (author). 23 refs, 18 figs, 6 tabs

  9. Lecture 1. Monte Carlo basics. Lecture 2. Adjoint Monte Carlo. Lecture 3. Coupled Forward-Adjoint calculations

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J.E. [Delft University of Technology, Interfaculty Reactor Institute, Delft (Netherlands)

    2000-07-01

    The Monte Carlo method is a statistical method to solve mathematical and physical problems using random numbers. The principle of the method will be demonstrated for a simple mathematical problem and for neutron transport. Various types of estimators will be discussed, as well as generally applied variance reduction methods like splitting, Russian roulette and importance biasing. The theoretical formulation for solving eigenvalue problems for multiplying systems will be shown. Some reflections will be given about the applicability of the Monte Carlo method, its limitations and its future prospects for reactor physics calculations. Adjoint Monte Carlo is a Monte Carlo game to solve the adjoint neutron (or photon) transport equation. The adjoint transport equation can be interpreted in terms of simulating histories of artificial particles, which show properties of neutrons that move backwards in history. These particles will start their history at the detector from which the response must be estimated and give a contribution to the estimated quantity when they hit or pass through the neutron source. Application to the multigroup transport formulation will be demonstrated. A possible implementation for the continuous energy case will be outlined. The inherent advantages and disadvantages of the method will be discussed. The Midway Monte Carlo method will be presented for calculating a detector response due to a (neutron or photon) source. A derivation will be given of the basic formula for the Midway Monte Carlo method. The black absorber technique, allowing for a cutoff of particle histories when reaching the midway surface in one of the calculations, will be derived. An extension of the theory to coupled neutron-photon problems is given. The method will be demonstrated for an oil well logging problem, comprising a neutron source in a borehole and photon detectors to register the photons generated by inelastic neutron scattering. (author)

  10. Lecture 1. Monte Carlo basics. Lecture 2. Adjoint Monte Carlo. Lecture 3. Coupled Forward-Adjoint calculations

    International Nuclear Information System (INIS)

    Hoogenboom, J.E.

    2000-01-01

    The Monte Carlo method is a statistical method to solve mathematical and physical problems using random numbers. The principle of the method will be demonstrated for a simple mathematical problem and for neutron transport. Various types of estimators will be discussed, as well as generally applied variance reduction methods like splitting, Russian roulette and importance biasing. The theoretical formulation for solving eigenvalue problems for multiplying systems will be shown. Some reflections will be given about the applicability of the Monte Carlo method, its limitations and its future prospects for reactor physics calculations. Adjoint Monte Carlo is a Monte Carlo game to solve the adjoint neutron (or photon) transport equation. The adjoint transport equation can be interpreted in terms of simulating histories of artificial particles, which show properties of neutrons that move backwards in history. These particles will start their history at the detector from which the response must be estimated and give a contribution to the estimated quantity when they hit or pass through the neutron source. Application to the multigroup transport formulation will be demonstrated. A possible implementation for the continuous energy case will be outlined. The inherent advantages and disadvantages of the method will be discussed. The Midway Monte Carlo method will be presented for calculating a detector response due to a (neutron or photon) source. A derivation will be given of the basic formula for the Midway Monte Carlo method. The black absorber technique, allowing for a cutoff of particle histories when reaching the midway surface in one of the calculations, will be derived. An extension of the theory to coupled neutron-photon problems is given. The method will be demonstrated for an oil well logging problem, comprising a neutron source in a borehole and photon detectors to register the photons generated by inelastic neutron scattering. (author)
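
    As a minimal illustration of the "simple mathematical problem" these lectures start from (not taken from the lecture notes themselves), the sketch below estimates an integral by random sampling and reports the usual one-sigma statistical error.

```python
# Monte Carlo estimate of I = integral of exp(-x^2) over [0, 1] with its statistical error.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.random(n)                    # uniform samples on [0, 1]
f = np.exp(-x**2)                    # integrand evaluated at the samples

estimate = f.mean()                  # the sample mean approximates the integral
error = f.std(ddof=1) / np.sqrt(n)   # standard error of the mean
print(f"I ~ {estimate:.5f} +/- {error:.5f}")   # exact value is about 0.74682
```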

  11. Neutron emission spectra and level density of hot rotating 132Sn

    International Nuclear Information System (INIS)

    Aggarwal, Mamta

    2008-01-01

    The neutron emission spectrum of the highly excited compound nuclear system 132Sn is investigated at high spin. The doubly magic nucleus 132Sn undergoes a shape transition at high angular momentum which affects the nuclear level density and the neutron emission probability considerably. The interplay of temperature, shape, deformation and rotational degrees of freedom and their influence on neutron emission is emphasized. We predict an enhancement of nucleonic emission at those spins where the nucleus undergoes a transition from a spherical to a deformed shape. (author)

  12. Studies on the inhomogeneous core density of a fluidized bed nuclear reactor

    Energy Technology Data Exchange (ETDEWEB)

    Van der Hagen, T.H.J.J.; Van Dam, H.; Hoogenboom, J.E.; Khotylev, V.A. [Delft Univ. of Technology (Netherlands). Interfaculty Reactor Inst.; Harteveld, W.; Mudde, R.F.

    1997-12-31

    Results are reported on the expected time dependent core density profile of a fluidized-bed nuclear fission reactor. Core densities have been measured in a test facility by the gamma-transmission technique. Bubble and particle-cluster sizes, positions, velocities and frequencies could be determined. Neutronic studies have been performed on the influence of core voids on reactivity using Monte-Carlo and neutron-transport codes. Fuel-particle importance has been determined. Point-kinetic parameters have been calculated for linking reactivity perturbations to power fluctuations. (author)

  13. Effect of the surface charge discretization on electric double layers. A Monte Carlo simulation study

    OpenAIRE

    Madurga Díez, Sergio; Martín-Molina, Alberto; Vilaseca i Font, Eudald; Mas i Pujadas, Francesc; Quesada-Pérez, Manuel

    2007-01-01

    The structure of the electric double layer in contact with discrete and continuously charged planar surfaces is studied within the framework of the primitive model through Monte Carlo simulations. Three different discretization models are considered together with the case of uniform distribution. The effect of discreteness is analyzed in terms of charge density profiles. For point surface groups, a complete equivalence with the situation of uniformly distributed charge is found if profiles are...

  14. Lipoprotein Lipase and PPAR Alpha Gene Polymorphisms, Increased Very-Low-Density Lipoprotein Levels, and Decreased High-Density Lipoprotein Levels as Risk Markers for the Development of Visceral Leishmaniasis by Leishmania infantum

    Directory of Open Access Journals (Sweden)

    Márcia Dias Teixeira Carvalho

    2014-01-01

    In visceral leishmaniasis (VL) endemic areas, a minority of infected individuals progress to disease since most of them develop protective immunity. Therefore, we investigated the risk markers of VL within the nonimmune sector. Analyzing infected symptomatic, asymptomatic, and noninfected individuals, VL patients presented with reduced high-density lipoprotein cholesterol (HDL-C), elevated triacylglycerol (TAG), and elevated very-low-density lipoprotein cholesterol (VLDL-C) levels. A polymorphism analysis of the lipoprotein lipase (LPL) gene using HindIII restriction digestion (N = 156 samples; H+ = the presence and H− = the absence of the mutation) revealed an increased adjusted odds ratio (OR) of VL versus noninfected individuals when the H+/H+ was compared with the H−/H− genotype (OR = 21.3; 95% CI = 2.32–3335.3; P = 0.003). The H+/H+ genotype and the H+ allele were associated with elevated VLDL-C and TAG levels (P < 0.05) and reduced HDL-C levels (P < 0.05). An analysis of the L162V polymorphism in the peroxisome proliferator-activated receptor alpha (PPARα) gene (n = 248) revealed an increased adjusted OR when the Leu/Val was compared with the Leu/Leu genotype (OR = 8.77; 95% CI = 1.41–78.70; P = 0.014). High TAG (P = 0.021) and VLDL-C (P = 0.023) levels were associated with susceptibility to VL, whereas low HDL (P = 0.006) levels were associated with resistance to infection. The mutated LPL and the PPARα Leu/Val genotypes may be considered risk markers for the development of VL.

  15. CERN honours Carlo Rubbia

    CERN Multimedia

    2009-01-01

    On 7 April CERN will be holding a symposium to mark the 75th birthday of Carlo Rubbia, who shared the 1984 Nobel Prize for Physics with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. Following a presentation by Rolf Heuer, lectures will be given by eminent speakers on areas of science to which Carlo Rubbia has made decisive contributions. Michel Spiro, Director of the French National Institute of Nuclear and Particle Physics (IN2P3) of the CNRS, Lyn Evans, sLHC Project Leader, and Alan Astbury of the TRIUMF Laboratory will talk about the physics of the weak interaction and the discovery of the W and Z bosons. Former CERN Director-General Herwig Schopper will lecture on CERN’s accelerators from LEP to the LHC. Giovanni Bignami, former President of the Italian Space Agency and Professor at the IUSS School for Advanced Studies in Pavia will speak about his work with Carlo Rubbia. Finally, Hans Joachim Sch...

  16. CERN honours Carlo Rubbia

    CERN Multimedia

    2009-01-01

    On 7 April CERN will be holding a symposium to mark the 75th birthday of Carlo Rubbia, who shared the 1984 Nobel Prize for Physics with Simon van der Meer for contributions to the discovery of the W and Z bosons, carriers of the weak interaction. Following a presentation by Rolf Heuer, lectures will be given by eminent speakers on areas of science to which Carlo Rubbia has made decisive contributions. Michel Spiro, Director of the French National Institute of Nuclear and Particle Physics (IN2P3) of the CNRS, Lyn Evans, sLHC Project Leader, and Alan Astbury of the TRIUMF Laboratory will talk about the physics of the weak interaction and the discovery of the W and Z bosons. Former CERN Director-General Herwig Schopper will lecture on CERN’s accelerators from LEP to the LHC. Giovanni Bignami, former President of the Italian Space Agency, will speak about his work with Carlo Rubbia. Finally, Hans Joachim Schellnhuber of the Potsdam Institute for Climate Research and Sven Kul...

  17. Monte Carlo Transport for Electron Thermal Transport

    Science.gov (United States)

    Chenhall, Jeffrey; Cao, Duc; Moses, Gregory

    2015-11-01

    The iSNB (implicit Schurtz Nicolai Busquet) multigroup electron thermal transport method of Cao et al. is adapted into a Monte Carlo transport method in order to better model the effects of non-local behavior. The end goal is a hybrid transport-diffusion method that combines Monte Carlo Transport with a discrete diffusion Monte Carlo (DDMC). The hybrid method will combine the efficiency of a diffusion method in short mean free path regions with the accuracy of a transport method in long mean free path regions. The Monte Carlo nature of the approach allows the algorithm to be massively parallelized. Work to date on the method will be presented. This work was supported by Sandia National Laboratory - Albuquerque and the University of Rochester Laboratory for Laser Energetics.

  18. Tillering dynamics of Tanzania guinea grass under nitrogen levels and plant densities - doi: 10.4025/actascianimsci.v34i4.13382

    Directory of Open Access Journals (Sweden)

    Manoel Eduardo Rozalino Santos

    2012-10-01

    This study evaluated the influence of nitrogen levels (N) and plant density (D) on the tillering dynamics of Tanzania guinea grass (Panicum maximum Jacq.). Treatments were arranged in a completely randomized block design with 12 treatments and two replicates in a factorial scheme (4 × 3), with four levels of N (0, 80, 160 or 320 kg ha⁻¹ N) and three plant densities (9, 25, and 49 plants m⁻²). A higher number of tillers was observed in the treatment with 9 plants m⁻² and under higher levels of N, especially in the second and third generations. Nitrogen also had a quadratic influence on the appearance rate of basal and total tillers, which were also affected by plant density and the N × D interaction. However, the appearance rate of aerial tillers was not influenced by the factors evaluated. The mortality rate of total tillers was influenced quadratically by the nitrogen levels and plant densities. The mortality rate of basal tillers responded quadratically to plant density, whereas the mortality rate of aerial tillers increased linearly with fertilization. Pastures with low or intermediate densities fertilized with nitrogen presented a more intense pattern of tiller renewal.

  19. Measurement of excited states in 71Ge via (p, nγ) reaction and density of discrete levels in 71Ge

    International Nuclear Information System (INIS)

    Razavi, R.; Kakavand, T.; Behkami, A.N.

    2008-01-01

    In all statistical theories the nuclear level density is the most characteristic quantity and plays an essential role in the study of nuclear structure. In this work, additional experimental information on the existing level structure of 71Ge has been provided through the (p, nγ) reaction, and the nuclear level density parameters of the Bethe formula and the constant-temperature model have then been determined for 71Ge.
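
    For reference, the two level-density parameterizations named above are commonly written as follows (standard textbook forms, not expressions quoted from this work), with a the level-density parameter, U the (possibly back-shifted) excitation energy, and T, E0 the constant-temperature parameters:

```latex
% Fermi-gas (Bethe) and constant-temperature level-density formulas (standard forms).
\rho_{\mathrm{Bethe}}(U) \;=\; \frac{\sqrt{\pi}}{12}\,
    \frac{\exp\!\left(2\sqrt{aU}\right)}{a^{1/4}\,U^{5/4}},
\qquad
\rho_{\mathrm{CT}}(E) \;=\; \frac{1}{T}\,\exp\!\left(\frac{E - E_0}{T}\right).
```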

  20. A random-walk algorithm for modeling lithospheric density and the role of body forces in the evolution of the Midcontinent Rift

    Science.gov (United States)

    Levandowski, William Brower; Boyd, Oliver; Briggs, Richard; Gold, Ryan D.

    2015-01-01

    This paper develops a Monte Carlo algorithm for extracting three-dimensional lithospheric density models from geophysical data. Empirical scaling relationships between velocity and density create a 3D starting density model, which is then iteratively refined until it reproduces observed gravity and topography. This approach permits deviations from uniform crustal velocity-density scaling, which provide insight into crustal lithology and prevent spurious mapping of crustal anomalies into the mantle.
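
    A hedged sketch of this kind of iterative refinement is given below. It is not the published algorithm: the forward operators predict_gravity and predict_topography are hypothetical placeholders, as are the observations, and a simple Metropolis-style random walk on cell densities stands in for the full scheme.

```python
# Schematic random-walk refinement of a gridded density model against gravity
# and topography misfits.  Forward models, observations and numbers are placeholders.
import numpy as np

rng = np.random.default_rng(3)

def predict_gravity(rho):      # hypothetical forward model
    return rho.mean(axis=0)

def predict_topography(rho):   # hypothetical isostatic proxy
    return -(rho - rho.mean()).sum(axis=0)

obs_grav = np.zeros(10)        # placeholder observations
obs_topo = np.zeros(10)

def misfit(rho):
    return (np.sum((predict_gravity(rho) - obs_grav) ** 2)
            + np.sum((predict_topography(rho) - obs_topo) ** 2))

rho = rng.normal(3000.0, 50.0, size=(5, 10))         # starting model, e.g. from velocity scaling
cost = misfit(rho)
for _ in range(20_000):
    trial = rho.copy()
    i, j = rng.integers(5), rng.integers(10)
    trial[i, j] += rng.normal(0.0, 10.0)             # perturb one cell's density
    trial_cost = misfit(trial)
    if trial_cost < cost or rng.random() < np.exp(cost - trial_cost):
        rho, cost = trial, trial_cost                # accept improving (and occasionally worse) steps
```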

  1. Monte Carlo study of one-dimensional confined fluids with Gay-Berne intermolecular potential

    Science.gov (United States)

    Moradi, M.; Hashemi, S.

    2011-11-01

    The thermodynamic quantities of a one-dimensional system of particles with the Gay-Berne model potential confined between walls have been obtained by means of Monte Carlo computer simulations. The systems were considered for a number of temperatures, and their density profiles, order parameter, pressure, configurational temperature and average potential energy per particle are reported. The results show that with decreasing temperature the soft particles become more ordered and align with the walls, while at very low temperatures they show no tendency to be near the walls. We have also changed the structure of the walls by embedding soft ellipses in them; this change increases the total density near the wall, whereas the increase or decrease of the order parameter depends on the angle of the embedded ellipses.

  2. Generalized hybrid Monte Carlo - CMFD methods for fission source convergence

    International Nuclear Information System (INIS)

    Wolters, Emily R.; Larsen, Edward W.; Martin, William R.

    2011-01-01

    In this paper, we generalize the recently published 'CMFD-Accelerated Monte Carlo' method and present two new methods that reduce the statistical error in CMFD-Accelerated Monte Carlo. The CMFD-Accelerated Monte Carlo method uses Monte Carlo to estimate nonlinear functionals used in low-order CMFD equations for the eigenfunction and eigenvalue. The Monte Carlo fission source is then modified to match the resulting CMFD fission source in a 'feedback' procedure. The two proposed methods differ from CMFD-Accelerated Monte Carlo in the definition of the required nonlinear functionals, but they have identical CMFD equations. The proposed methods are compared with CMFD-Accelerated Monte Carlo on a high dominance ratio test problem. All hybrid methods converge the Monte Carlo fission source almost immediately, leading to a large reduction in the number of inactive cycles required. The proposed methods stabilize the fission source more efficiently than CMFD-Accelerated Monte Carlo, leading to a reduction in the number of active cycles required. Finally, as in CMFD-Accelerated Monte Carlo, the apparent variance of the eigenfunction is approximately equal to the real variance, so the real error is well-estimated from a single calculation. This is an advantage over standard Monte Carlo, in which the real error can be underestimated due to inter-cycle correlation. (author)

  3. Monte Carlo simulations of plutonium gamma-ray spectra

    International Nuclear Information System (INIS)

    Koenig, Z.M.; Carlson, J.B.; Wang, Tzu-Fang; Ruhter, W.D.

    1993-01-01

    Monte Carlo calculations were investigated as a means of simulating the gamma-ray spectra of Pu. These simulated spectra will be used to develop and evaluate gamma-ray analysis techniques for various nondestructive measurements. Simulated spectra of calculational standards can be used for code intercomparisons, to understand systematic biases and to estimate minimum detection levels of existing and proposed nondestructive analysis instruments. The capability to simulate gamma-ray spectra from HPGe detectors could significantly reduce the costs of preparing large numbers of real reference materials. MCNP was used for the Monte Carlo transport of the photons. Results from the MCNP calculations were folded with a detector response function to obtain a realistic spectrum. Plutonium spectrum peaks were produced with Lorentzian shapes for the x-rays and with Gaussian distributions. The MGA code determined the Pu isotopes and specific power from this calculated spectrum, and the result was compared with a similar analysis of a measured spectrum

  4. New density estimation methods for charged particle beams with applications to microbunching instability

    International Nuclear Information System (INIS)

    Terzic, B.; Bassi, G.

    2011-01-01

    In this paper we discuss representations of charged-particle densities in particle-in-cell simulations, analyze the sources and profiles of the intrinsic numerical noise, and present efficient methods for their removal. We devise two alternative estimation methods for the charged-particle distribution which represent a significant improvement over the Monte Carlo cosine expansion used in the 2D code of Bassi et al. (G. Bassi, J.A. Ellison, K. Heinemann and R. Warnock Phys. Rev. ST Accel. Beams 12 080704 (2009); G. Bassi and B. Terzic, in Proceedings of the 23rd Particle Accelerator Conference, Vancouver, Canada, 2009 (IEEE, Piscataway, NJ, 2009), TH5PFP043), designed to simulate coherent synchrotron radiation (CSR) in charged particle beams. The improvement is achieved by employing an alternative beam density estimation to the Monte Carlo cosine expansion. The representation is first binned onto a finite grid, after which two grid-based methods are employed to approximate particle distributions: (i) truncated fast cosine transform; and (ii) thresholded wavelet transform (TWT). We demonstrate that these alternative methods represent a staggering upgrade over the original Monte Carlo cosine expansion in terms of efficiency, while the TWT approximation also provides an appreciable improvement in accuracy. The improvement in accuracy comes from a judicious removal of the numerical noise enabled by the wavelet formulation. The TWT method is then integrated into the CSR code (G. Bassi, J.A. Ellison, K. Heinemann and R. Warnock Phys. Rev. ST Accel. Beams 12 080704 (2009)), and benchmarked against the original version. We show that the new density estimation method provides a superior performance in terms of efficiency and spatial resolution, thus enabling high-fidelity simulations of CSR effects, including microbunching instability.
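
    A hedged sketch of the first grid-based idea, a truncated fast cosine transform of a binned density, is shown below; it is a generic illustration with invented parameters, not the code of Bassi and Terzic, and the wavelet-thresholding variant would replace the truncation step with coefficient thresholding.

```python
# Density estimation by binning followed by a truncated discrete cosine transform.
# Keeping only the lowest-order cosine coefficients smooths out sampling noise.
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(7)
particles = rng.normal(0.0, 1.0, 200_000)           # toy 1D bunch coordinates

n_bins, n_keep = 256, 24                             # grid size and number of kept modes (assumed)
hist, edges = np.histogram(particles, bins=n_bins, range=(-5, 5), density=True)

coeffs = dct(hist, type=2, norm='ortho')             # cosine expansion of the binned density
coeffs[n_keep:] = 0.0                                # truncate high-frequency (noisy) modes
smooth = idct(coeffs, type=2, norm='ortho')          # smoothed density estimate on the grid

centers = 0.5 * (edges[:-1] + edges[1:])
print(centers[np.argmax(smooth)])                    # e.g. location of the density peak
```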

  5. Is Monte Carlo embarrassingly parallel?

    Energy Technology Data Exchange (ETDEWEB)

    Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Delft Nuclear Consultancy, IJsselzoom 2, 2902 LB Capelle aan den IJssel (Netherlands)

    2012-07-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendez-vous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Also other time losses in the parallel calculation are identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)

  6. Is Monte Carlo embarrassingly parallel?

    International Nuclear Information System (INIS)

    Hoogenboom, J. E.

    2012-01-01

    Monte Carlo is often stated as being embarrassingly parallel. However, running a Monte Carlo calculation, especially a reactor criticality calculation, in parallel using tens of processors shows a serious limitation in speedup and the execution time may even increase beyond a certain number of processors. In this paper the main causes of the loss of efficiency when using many processors are analyzed using a simple Monte Carlo program for criticality. The basic mechanism for parallel execution is MPI. One of the bottlenecks turns out to be the rendez-vous points in the parallel calculation used for synchronization and exchange of data between processors. This happens at least at the end of each cycle for fission source generation in order to collect the full fission source distribution for the next cycle and to estimate the effective multiplication factor, which is not only part of the requested results, but also input to the next cycle for population control. Basic improvements to overcome this limitation are suggested and tested. Also other time losses in the parallel calculation are identified. Moreover, the threading mechanism, which allows the parallel execution of tasks based on shared memory using OpenMP, is analyzed in detail. Recommendations are given to get the maximum efficiency out of a parallel Monte Carlo calculation. (authors)
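
    The synchronization point described above, where every processor's tallies must be combined before the next cycle, can be sketched with mpi4py as below; this is a generic illustration of the rendez-vous (a collective reduction of per-rank tallies), not the program analyzed in the paper.

```python
# Sketch of the per-cycle rendez-vous in a parallel Monte Carlo run: each MPI rank
# scores its own samples independently, then all ranks must meet in a collective
# reduction before the next cycle can start.  Run e.g. with `mpiexec -n 8 python ...`.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rng = np.random.default_rng(comm.rank)

local_samples = 1_000_000
x, y = rng.random(local_samples), rng.random(local_samples)
local_hits = np.count_nonzero(x * x + y * y < 1.0)     # embarrassingly parallel part

# Rendez-vous: blocking collectives that synchronize all ranks.
total_hits = comm.allreduce(local_hits, op=MPI.SUM)
total_samples = comm.allreduce(local_samples, op=MPI.SUM)

if comm.rank == 0:
    print("pi estimate:", 4.0 * total_hits / total_samples)
```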

  7. A Monte Carlo technique for signal level detection in implanted intracranial pressure monitoring.

    Science.gov (United States)

    Avent, R K; Charlton, J D; Nagle, H T; Johnson, R N

    1987-01-01

    Statistical monitoring techniques like CUSUM, Trigg's tracking signal and EMP filtering have a major advantage over more recent techniques, such as Kalman filtering, because of their inherent simplicity. In many biomedical applications, such as electronic implantable devices, these simpler techniques have greater utility because of the reduced requirements on power, logic complexity and sampling speed. The determination of signal means using some of the earlier techniques is reviewed in this paper, and a new Monte Carlo-based method with greater capability to sparsely sample a waveform and obtain an accurate mean value is presented. This technique may find widespread use as a trend detection method when reduced power consumption is a requirement.
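
    As a hedged illustration of estimating a signal mean from sparse random samples (not the implanted-device algorithm itself), the sketch below compares a full average with a Monte Carlo estimate from a small random subset of samples; the waveform and numbers are invented.

```python
# Estimating the mean level of a waveform from a sparse random subset of samples,
# as a device with tight power and sampling budgets might do.
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0.0, 10.0, 10_000)
icp = 12.0 + 2.0 * np.sin(2 * np.pi * 1.2 * t) + rng.normal(0.0, 0.5, t.size)  # toy pressure trace (mmHg)

k = 64                                              # only 64 randomly timed samples
idx = rng.choice(t.size, size=k, replace=False)
sparse_mean = icp[idx].mean()
sparse_err = icp[idx].std(ddof=1) / np.sqrt(k)

print(f"full mean   : {icp.mean():.2f} mmHg")
print(f"sparse mean : {sparse_mean:.2f} +/- {sparse_err:.2f} mmHg")
```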

  8. Kinetic Monte Carlo Simulation of Cation Diffusion in Low-K Ceramics

    Science.gov (United States)

    Good, Brian

    2013-01-01

    Low thermal conductivity (low-K) ceramic materials are of interest to the aerospace community for use as the thermal barrier component of coating systems for turbine engine components. In particular, zirconia-based materials exhibit both low thermal conductivity and structural stability at high temperature, making them suitable for such applications. Because creep is one of the potential failure modes, and because diffusion is a mechanism by which creep takes place, we have performed computer simulations of cation diffusion in a variety of zirconia-based low-K materials. The kinetic Monte Carlo simulation method is an alternative to the more widely known molecular dynamics (MD) method. It is designed to study "infrequent-event" processes, such as diffusion, for which MD simulation can be highly inefficient. We describe the results of kinetic Monte Carlo computer simulations of cation diffusion in several zirconia-based materials, specifically, zirconia doped with Y, Gd, Nb and Yb. Diffusion paths are identified, and migration energy barriers are obtained from density functional calculations and from the literature. We present results on the temperature dependence of the diffusivity, and on the effects of the presence of oxygen vacancies in cation diffusion barrier complexes as well.
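
    A hedged sketch of the kinetic Monte Carlo idea used for such infrequent-event simulations is shown below: hop rates follow an assumed Arrhenius form, one event is chosen with probability proportional to its rate, and time advances by an exponentially distributed increment. The barrier values are invented for illustration and are not those of the paper.

```python
# Minimal residence-time (BKL) kinetic Monte Carlo loop for thermally activated hops.
# Barrier values are illustrative assumptions, not those used in the study.
import numpy as np

rng = np.random.default_rng(11)
kB, T, nu0 = 8.617e-5, 1500.0, 1.0e13          # Boltzmann constant (eV/K), temperature (K), attempt frequency (1/s)
barriers = np.array([2.6, 2.8, 3.1, 2.9])      # migration barriers of the available hops (eV)

rates = nu0 * np.exp(-barriers / (kB * T))     # Arrhenius hop rates
total = rates.sum()
counts = np.zeros(barriers.size, dtype=int)

time = 0.0
for _ in range(10_000):
    event = rng.choice(barriers.size, p=rates / total)   # choose a hop with probability proportional to its rate
    counts[event] += 1
    time += -np.log(rng.random()) / total                # exponentially distributed waiting time

print("hops per pathway:", counts, f"  elapsed time: {time:.3e} s")
```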

  9. SKIRT: The design of a suite of input models for Monte Carlo radiative transfer simulations

    Science.gov (United States)

    Baes, M.; Camps, P.

    2015-09-01

    The Monte Carlo method is the most popular technique to perform radiative transfer simulations in a general 3D geometry. The algorithms behind and acceleration techniques for Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. On the contrary, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can be either analytical toy models or numerical models defined on grids or a set of particles) and the extensive use of decorators that combine and alter these building blocks to more complex structures. For a number of decorators, e.g. those that add spiral structure or clumpiness, we provide a detailed description of the algorithms that can be used to generate random positions. Advantages of this decorator-based design include code transparency, the avoidance of code duplication, and an increase in code maintainability. Moreover, since decorators can be chained without problems, very complex models can easily be constructed out of simple building blocks. Finally, based on a number of test simulations, we demonstrate that our design using customised random position generators is superior to a simpler design based on a generic black-box random position generator.
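
    One standard way to generate random positions from a 3D density, which building blocks like these need, is rejection sampling against an upper bound on the density; the sketch below is a generic illustration using a Plummer-like toy profile, not SKIRT's implementation.

```python
# Rejection sampling of random 3D positions from an analytic density profile.
import numpy as np

rng = np.random.default_rng(2)

def rho(xyz, a=1.0):
    """Toy spherically symmetric density (Plummer-like), peaked at the origin."""
    r2 = np.sum(xyz ** 2, axis=-1)
    return (1.0 + r2 / a ** 2) ** -2.5

box = 10.0                                       # half-size of the sampling box (assumed)
rho_max = rho(np.zeros((1, 3)))[0]               # upper bound on the density

def sample_positions(n):
    out = []
    while len(out) < n:
        xyz = rng.uniform(-box, box, size=(n, 3))     # candidate positions
        keep = rng.random(n) * rho_max < rho(xyz)     # accept with probability rho / rho_max
        out.extend(xyz[keep])
    return np.array(out[:n])

positions = sample_positions(10_000)
print(positions.mean(axis=0))   # should be close to zero for a centred profile
```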

  10. Mean field simulation for Monte Carlo integration

    CERN Document Server

    Del Moral, Pierre

    2013-01-01

    In the last three decades, there has been a dramatic increase in the use of interacting particle methods as a powerful tool in real-world applications of Monte Carlo simulation in computational physics, population biology, computer sciences, and statistical machine learning. Ideally suited to parallel and distributed computation, these advanced particle algorithms include nonlinear interacting jump diffusions; quantum, diffusion, and resampled Monte Carlo methods; Feynman-Kac particle models; genetic and evolutionary algorithms; sequential Monte Carlo methods; adaptive and interacting Marko

  11. The Hagedorn spectrum, nuclear level densities and first order phase transitions

    International Nuclear Information System (INIS)

    Moretto, Luciano G.; Larsen, A. C.; Guttormsen, M.; Siem, S.

    2015-01-01

    An exponential mass spectrum, like the Hagedorn spectrum, with slope 1/T_H was interpreted as fixing an upper limiting temperature T_H that the system can achieve. Thermodynamically, however, such a spectrum indicates a 1st-order phase transition at a fixed temperature T_H. A much lower-energy example is the log-linear nuclear level density below the neutron binding energy that prevails throughout the nuclear chart. We show that, for non-magic nuclei, such linearity implies a 1st-order phase transition from the pairing superfluid to an ideal gas of quasi-particles.
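
    The thermodynamic statement above follows from a one-line microcanonical argument (a standard relation, not an equation quoted from the paper): an exponential level density has a constant microcanonical temperature, the hallmark of a 1st-order transition.

```latex
% An exponential (Hagedorn-like) level density implies a constant microcanonical temperature:
\rho(E) \;\propto\; e^{E/T_H}
\quad\Longrightarrow\quad
S(E) = \ln \rho(E) = \frac{E}{T_H} + \mathrm{const}
\quad\Longrightarrow\quad
\frac{1}{T(E)} = \frac{\partial S}{\partial E} = \frac{1}{T_H}\ \ \text{for all } E .
```

    The caloric curve therefore stays pinned at T_H while energy is added, which is the microcanonical signature of a 1st-order phase transition.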

  12. An update on the BQCD Hybrid Monte Carlo program

    Directory of Open Access Journals (Sweden)

    Haar Taylor Ryan

    2018-01-01

    We present an update of BQCD, our Hybrid Monte Carlo program for simulating lattice QCD. BQCD is one of the main production codes of the QCDSF collaboration and is used by CSSM and in some Japanese finite temperature and finite density projects. Since the first publication of the code at Lattice 2010 the program has been extended in various ways. New features of the code include: dynamical QED, action modification in order to compute matrix elements by using Feynman-Hellman theory, more trace measurements (like Tr(D^-n)) for K, cSW and chemical potential reweighting, a more flexible integration scheme, polynomial filtering, term-splitting for RHMC, and a portable implementation of performance critical parts employing SIMD.

  13. Variational Variance Reduction for Monte Carlo Criticality Calculations

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.; Larsen, Edward W.

    2001-01-01

    A new variational variance reduction (VVR) method for Monte Carlo criticality calculations was developed. This method employs (a) a variational functional that is more accurate than the standard direct functional, (b) a representation of the deterministically obtained adjoint flux that is especially accurate for optically thick problems with high scattering ratios, and (c) estimates of the forward flux obtained by Monte Carlo. The VVR method requires no nonanalog Monte Carlo biasing, but it may be used in conjunction with Monte Carlo biasing schemes. Some results are presented from a class of criticality calculations involving alternating arrays of fuel and moderator regions

  14. Radiation Modeling with Direct Simulation Monte Carlo

    Science.gov (United States)

    Carlson, Ann B.; Hassan, H. A.

    1991-01-01

    Improvements in the modeling of radiation in low density shock waves with direct simulation Monte Carlo (DSMC) are the subject of this study. A new scheme to determine the relaxation collision numbers for excitation of electronic states is proposed. This scheme attempts to move the DSMC programs toward a more detailed modeling of the physics and more reliance on available rate data. The new method is compared with the current modeling technique and both techniques are compared with available experimental data. The differences in the results are evaluated. The test case is based on experimental measurements from the AVCO-Everett Research Laboratory electric arc-driven shock tube of a normal shock wave in air at 10 km/s and .1 Torr. The new method agrees with the available data as well as the results from the earlier scheme and is more easily extrapolated to different flow conditions.

  15. Detailed balance method for chemical potential determination in Monte Carlo and molecular dynamics simulations

    International Nuclear Information System (INIS)

    Fay, P.J.; Ray, J.R.; Wolf, R.J.

    1994-01-01

    We present a new, nondestructive, method for determining chemical potentials in Monte Carlo and molecular dynamics simulations. The method estimates a value for the chemical potential such that one has a balance between fictitious successful creation and destruction trials in which the Monte Carlo method is used to determine success or failure of the creation/destruction attempts; we thus call the method a detailed balance method. The method allows one to obtain estimates of the chemical potential for a given species in any closed ensemble simulation; the closed ensemble is paired with a ''natural'' open ensemble for the purpose of obtaining creation and destruction probabilities. We present results for the Lennard-Jones system and also for an embedded atom model of liquid palladium, and compare to previous results in the literature for these two systems. We are able to obtain an accurate estimate of the chemical potential for the Lennard-Jones system at higher densities than reported in the literature

  16. Monte Carlo Solutions for Blind Phase Noise Estimation

    Directory of Open Access Journals (Sweden)

    Çırpan Hakan

    2009-01-01

    This paper investigates the use of Monte Carlo sampling methods for phase noise estimation on additive white Gaussian noise (AWGN) channels. The main contributions of the paper are (i) the development of a Monte Carlo framework for phase noise estimation, with special attention to sequential importance sampling and Rao-Blackwellization, (ii) the interpretation of existing Monte Carlo solutions within this generic framework, and (iii) the derivation of a novel phase noise estimator. Contrary to the ad hoc phase noise estimators that have been proposed in the past, the estimators considered in this paper are derived from solid probabilistic and performance-determining arguments. Computer simulations demonstrate that, on one hand, the Monte Carlo phase noise estimators outperform the existing estimators and, on the other hand, our newly proposed solution exhibits a lower complexity than the existing Monte Carlo solutions.

  17. KAMCCO, a reactor physics Monte Carlo neutron transport code

    International Nuclear Information System (INIS)

    Arnecke, G.; Borgwaldt, H.; Brandl, V.; Lalovic, M.

    1976-06-01

    KAMCCO is a 3-dimensional reactor Monte Carlo code for fast neutron physics problems. Two options are available for the solution of 1) the inhomogeneous time-dependent neutron transport equation (census time scheme), and 2) the homogeneous static neutron transport equation (generation cycle scheme). The user defines the desired output, e.g. estimates of reaction rates or neutron flux integrated over specified volumes in phase space and time intervals. Such primary quantities can be arbitrarily combined, also ratios of these quantities can be estimated with their errors. The Monte Carlo techniques are mostly analogue (exceptions: Importance sampling for collision processes, ELP/MELP, Russian roulette and splitting). Estimates are obtained from the collision and track length estimators. Elastic scattering takes into account first order anisotropy in the center of mass system. Inelastic scattering is processed via the evaporation model or via the excitation of discrete levels. For the calculation of cross sections, the energy is treated as a continuous variable. They are computed by a) linear interpolation, b) from optionally Doppler broadened single level Breit-Wigner resonances or c) from probability tables (in the region of statistically distributed resonances). (orig.) [de

  18. Monte Carlo based diffusion coefficients for LMFBR analysis

    International Nuclear Information System (INIS)

    Van Rooijen, Willem F.G.; Takeda, Toshikazu; Hazama, Taira

    2010-01-01

    A method based on Monte Carlo calculations is developed to estimate the diffusion coefficient of unit cells. The method uses a geometrical model similar to that used in lattice theory, but does not use the assumption of a separable fundamental mode used in lattice theory. The method uses standard Monte Carlo flux and current tallies, and the continuous energy Monte Carlo code MVP was used without modifications. Four models are presented to derive the diffusion coefficient from tally results of flux and partial currents. In this paper the method is applied to the calculation of a plate cell of the fast-spectrum critical facility ZEBRA. Conventional calculations of the diffusion coefficient diverge in the presence of planar voids in the lattice, but our Monte Carlo method can treat this situation without any problem. The Monte Carlo method was used to investigate the influence of geometrical modeling as well as the directional dependence of the diffusion coefficient. The method can be used to estimate the diffusion coefficient of complicated unit cells, the limitation being the capabilities of the Monte Carlo code. The method will be used in the future to confirm results for the diffusion coefficient obtained with deterministic codes. (author)

  19. Gold nanocrystal labeling allows low-density lipoprotein imaging from the subcellular to macroscopic level

    NARCIS (Netherlands)

    Allijn, Iris E.; Leong, Wei; Tang, Jun; Gianella, Anita; Mieszawska, Aneta J.; Fay, Francois; Ma, Ge; Russell, Stewart; Callo, Catherine B.; Gordon, Ronald E.; Korkmaz, Emine; Post, Jan Andries; Zhao, Yiming; Gerritsen, Hans C.; Thran, Axel; Proksa, Roland; Daerr, Heiner; Storm, Gert; Fuster, Valentin; Fisher, Edward A.; Fayad, Zahi A.; Mulder, Willem J. M.; Cormode, David P.

    2013-01-01

    Low-density lipoprotein (LDL) plays a critical role in cholesterol transport and is closely linked to the progression of several diseases. This motivates the development of methods to study LDL behavior from the microscopic to whole-body level. We have developed an approach to efficiently load LDL

  20. A Monte Carlo approach to constraining uncertainties in modelled downhole gravity gradiometry applications

    Science.gov (United States)

    Matthews, Samuel J.; O'Neill, Craig; Lackie, Mark A.

    2017-06-01

Gravity gradiometry has a long legacy, with airborne/marine applications as well as surface applications receiving renewed recent interest. Recent instrumental advances have led to the emergence of downhole gravity gradiometry applications that have the potential for greater resolving power than borehole gravity alone. This has promise in both the petroleum and geosequestration industries; however, the effect of inherent uncertainties on the ability of downhole gravity gradiometry to resolve a subsurface signal is unknown. Here, we utilise the open source modelling package, Fatiando a Terra, to model both the gravity and gravity gradiometry responses of a subsurface body. We use a Monte Carlo approach to vary the geological structure and reference densities of the model within preset distributions. We then perform 100 000 simulations to constrain the mean response of the buried body as well as uncertainties in these results. We varied our modelled borehole to be either centred on the anomaly, adjacent to the anomaly (in the x-direction), or 2500 m distant from the anomaly (also in the x-direction). We demonstrate that gravity gradiometry is able to resolve a reservoir-scale modelled subsurface density variation up to 2500 m away, and that certain gravity gradient components (Gzz, Gxz, and Gxx) are particularly sensitive to this variation, with responses above the level of uncertainty in the model. The responses provided by downhole gravity gradiometry modelling clearly demonstrate a technique that can be utilised in determining a buried density contrast, which will be of particular use in the emerging industry of CO2 geosequestration. The results also provide a strong benchmark for the development of newly emerging prototype downhole gravity gradiometers.
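
    The structure of such a Monte Carlo sensitivity study can be sketched as below. Here the forward model is a crude point-mass Gzz (second vertical derivative of the potential) rather than the prism-based Fatiando a Terra models of the paper, and all distributions, geometry and the reduced sample count are assumptions made only to illustrate the workflow.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, SI units

def gzz_point_mass(obs, src, mass):
    """Vertical gravity gradient Gzz (in Eotvos) of a point mass: the second
    vertical derivative of G*m/r.  A crude stand-in for a prism forward model."""
    dx, dy, dz = np.asarray(obs) - np.asarray(src)
    r = np.sqrt(dx**2 + dy**2 + dz**2)
    return G * mass * (3.0 * dz**2 / r**5 - 1.0 / r**3) * 1e9

rng = np.random.default_rng(42)
borehole_z = np.linspace(-200.0, -800.0, 31)           # observation depths (m)
samples = []
for _ in range(1000):                                  # 100 000 in the paper
    drho = rng.normal(300.0, 50.0)                     # density contrast, kg/m^3 (assumed)
    radius = rng.uniform(180.0, 220.0)                 # body radius, m (assumed)
    depth = rng.normal(-500.0, 20.0)                   # body centre depth, m (assumed)
    mass = 4.0 / 3.0 * np.pi * radius**3 * drho
    samples.append([gzz_point_mass((2500.0, 0.0, z), (0.0, 0.0, depth), mass)
                    for z in borehole_z])              # borehole 2500 m from the anomaly
samples = np.array(samples)
mean, std = samples.mean(axis=0), samples.std(axis=0)  # mean response and its spread
print(mean[:3], std[:3])
```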

  1. The Influence of Decreased Levels of High Density Lipoprotein ...

    African Journals Online (AJOL)

    very low density lipoprotein cholesterol, and triglyceride were assayed. ... Abiodun and Gwarzo: Association of high density lipoprotein cholesterol with haemolysis in sickle cell disease ... analyses were carried out to determine the correlation.

  2. Absolute density measurements in the middle atmosphere

    Directory of Open Access Journals (Sweden)

    M. Rapp

    2001-05-01

In the last ten years a total of 25 sounding rockets employing ionization gauges have been launched at high latitudes (~70° N) to measure total atmospheric density and its small-scale fluctuations in an altitude range between 70 and 110 km. While the determination of small-scale fluctuations is unambiguous, the total density analysis has been complicated in the past by aerodynamical disturbances leading to densities inside the sensor which are enhanced compared to atmospheric values. Here, we present the results of both Monte Carlo simulations and wind tunnel measurements to quantify this aerodynamical effect. The comparison of the resulting ‘ram-factor’ profiles with empirically determined density ratios of ionization gauge measurements and falling sphere measurements provides excellent agreement. This demonstrates both the need, but also the possibility, to correct aerodynamical influences on measurements from sounding rockets. We have determined a total of 20 density profiles of the mesosphere-lower-thermosphere (MLT) region. Grouping these profiles according to season, a listing of mean density profiles is included in the paper. A comparison with density profiles taken from the reference atmospheres CIRA86 and MSIS90 results in differences of up to 40%. This reflects that current reference atmospheres are a significant potential error source for the determination of mixing ratios of, for example, trace gas constituents in the MLT region.

    Key words. Middle atmosphere (composition and chemistry; pressure, density, and temperature; instruments and techniques)

  4. Results of Monte Carlo calibrations of a low energy germanium detector

    International Nuclear Information System (INIS)

    Brettner-Messler, R.; Brettner-Messler, R.; Maringer, F.J.

    2006-01-01

Normally, measurements of the peak efficiency of a gamma-ray detector are performed with calibrated samples which are prepared to match the measured ones in all important characteristics such as volume, chemical composition and density. Another way to determine the peak efficiency is to calculate it with special Monte Carlo programs. In principle the program 'PENCYL' from the PENELOPE 2003 source code can be used for peak-efficiency calibration of a cylindrically symmetric detector; however, exact data for the geometries and the materials are needed. The interpretation of the simulation results is not straightforward, but we found a way to convert the data into values which can be compared to our measurement results. It is possible to find other simulation parameters which produce the same or better results. Further improvements can be expected from longer simulation times and more simulations in the questionable ranges of densities and filling heights. (N.C.)
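
    As a minimal illustration of how simulated and measured peak efficiencies can be put on the same footing (the detector-specific conversion step the abstract alludes to is not reproduced), the sketch below uses purely hypothetical numbers.

```python
def simulated_peak_efficiency(peak_counts, histories):
    """Full-energy-peak efficiency from a Monte Carlo run: photons depositing
    their full energy in the crystal divided by source photons started."""
    return peak_counts / histories

def measured_peak_efficiency(net_peak_counts, activity_bq, emission_prob, live_time_s):
    """Efficiency from a calibrated-source measurement, for comparison."""
    return net_peak_counts / (activity_bq * emission_prob * live_time_s)

# hypothetical numbers for illustration only
print(simulated_peak_efficiency(2.1e4, 1e7))                 # ~2.1e-3
print(measured_peak_efficiency(1.9e4, 3.5e3, 0.85, 3600.0))  # ~1.8e-3
```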

  5. Monte Carlo studies of the Portuguese gamma irradiation facility. The irradiator geometry and its influence on process parameters

    International Nuclear Information System (INIS)

    Oliveira, C.; Ferreira, L.; Salgado, J.

    2001-01-01

The paper describes a Monte Carlo study of dose distributions, minimum dose and uniformity ratio for the Portuguese Gamma Irradiation Facility. These process parameters are calculated using the MCNP code for several irradiator geometries. The comparison of the simulated results with the experimental results obtained using Amber Perspex dosimeters in a routine process of the gamma facility, for a given material composition and density, reveals good agreement. The results already obtained allow us to conclude that the dose uniformity is not very sensitive to the irradiator geometry for a density value of ρ = 0.1 and for a dynamic process. (orig.)

  6. Computer system for Monte Carlo experimentation

    International Nuclear Information System (INIS)

    Grier, D.A.

    1986-01-01

A new computer system for Monte Carlo experimentation is presented. The new system speeds and simplifies the process of coding and preparing a Monte Carlo experiment; it also encourages the proper design of Monte Carlo experiments and the careful analysis of the experimental results. A new functional language is the core of this system. Monte Carlo experiments, and their experimental designs, are programmed in this new language; those programs are compiled into Fortran output. The Fortran output is then compiled and executed. The experimental results are analyzed with a standard statistics package such as Si, Isp, or Minitab, or with a user-supplied program. Both the experimental results and the experimental design may be directly loaded into the workspace of those packages. The new functional language frees programmers from many of the details of programming an experiment. Experimental designs such as factorial, fractional factorial, or Latin square are easily described by the control structures and expressions of the language. Specific mathematical models are generated by the routines of the language.

  7. Random Numbers and Monte Carlo Methods

    Science.gov (United States)

    Scherer, Philipp O. J.

Many-body problems often involve the calculation of integrals of very high dimension which cannot be treated by standard methods. For the calculation of thermodynamic averages, Monte Carlo methods, which sample the integration volume at randomly chosen points, are very useful. After summarizing some basic statistics, we discuss algorithms for the generation of pseudo-random numbers with a given probability distribution, which are essential for all Monte Carlo methods. We show how the efficiency of Monte Carlo integration can be improved by sampling preferentially the important configurations. Finally the famous Metropolis algorithm is applied to classical many-particle systems. Computer experiments visualize the central limit theorem and apply the Metropolis method to the traveling salesman problem.
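
    A minimal illustration of the variance reduction obtained by sampling the important region preferentially, using a one-dimensional integral and an exponential importance density; the integrand and the density are chosen only for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Target integral: I = integral_0^1 cos(x) * exp(-10x) dx  (approx. 0.0990)
f = lambda x: np.cos(x) * np.exp(-10.0 * x)

# Plain Monte Carlo with uniform samples: most points land where f is tiny.
x_uniform = rng.uniform(0.0, 1.0, n)
plain = f(x_uniform).mean()

# Importance sampling: draw from p(x) = exp(-10x) / norm on [0, 1]
norm = (1.0 - np.exp(-10.0)) / 10.0
u = rng.uniform(0.0, 1.0, n)
x_imp = -np.log(1.0 - u * (1.0 - np.exp(-10.0))) / 10.0   # inverse-CDF sampling
weights = f(x_imp) * norm / np.exp(-10.0 * x_imp)          # f(x) / p(x)
importance = weights.mean()

print(plain, f(x_uniform).std() / np.sqrt(n))        # estimate and its standard error
print(importance, weights.std() / np.sqrt(n))        # much smaller standard error
```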

  8. Average Nuclear Level Densities and Radiative Strength Functions in 56,57Fe from Primary γ-Ray Spectra

    International Nuclear Information System (INIS)

    Tavukcu, E.; Becker, J.A.; Bernstein, L.A.; Garrett, P.E.; Guttormsen, M.; Mitchell, G.E.; Rekstad, J.; Schiller, A.; Siem, S.; Voinov, A.; Younes, W.

    2002-01-01

An experimental primary γ-ray spectrum vs. excitation-energy bin (P(Ex, Eγ) matrix) in a light-ion reaction is obtained for 56,57Fe isotopes using a subtraction method. By factorizing the P(Ex, Eγ) matrix according to the Axel-Brink hypothesis the nuclear level density and the radiative strength function (RSF) in 56,57Fe are extracted simultaneously. A step structure is observed in the level density for both isotopes, and is interpreted as the breaking of Cooper pairs. The RSFs for 56,57Fe reveal an anomalous enhancement at low γ-ray energies.
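
    A toy sketch of the factorization step, in which a trial level density ρ(Ex − Eγ) and a transmission/strength function T(Eγ) are adjusted to reproduce a normalized primary γ-ray matrix. The binning, the functional forms and the generic least-squares fit are assumptions for illustration, and the well-known normalization freedom of the extracted ρ and T is not resolved here.

```python
import numpy as np
from scipy.optimize import minimize

# Axel-Brink factorisation: P(Ex, Eg) proportional to rho(Ex - Eg) * T(Eg).
dE = 1.0
Ex = np.array([3.0, 4.0, 5.0, 6.0])              # excitation-energy bins (MeV, toy)
Eg = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])    # gamma-ray energy bins (MeV, toy)
n_rho = 6                                        # rho(E_f) for E_f = 0..5 MeV

def model(params):
    rho, T = np.exp(params[:n_rho]), np.exp(params[n_rho:])
    P = np.zeros((Ex.size, Eg.size))
    for i, ex in enumerate(Ex):
        for j, eg in enumerate(Eg):
            if eg <= ex:
                P[i, j] = rho[int((ex - eg) / dE)] * T[j]
        P[i] /= P[i].sum()                       # each primary spectrum is normalised
    return P

# Synthetic "experimental" matrix from a constant-temperature rho and an Eg^3-like T
truth = np.concatenate([np.arange(n_rho) / 0.8, 3.0 * np.log(Eg)])
P_exp = model(truth)

chi2 = lambda p: np.sum((model(p) - P_exp) ** 2)
fit = minimize(chi2, np.zeros(truth.size), method="Nelder-Mead",
               options={"maxiter": 50000, "xatol": 1e-8, "fatol": 1e-12})
print(fit.fun)   # should become small; rho and T remain fixed only up to the usual
                 # normalisation freedom, which the full analysis pins down separately
```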

  9. Evaluation of transmitted spectra of megavoltage X rays through concrete using Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Cordeiro, Thaiana de Paula Vieira; Silva, Ademir Xavier da, E-mail: tcordeiro@con.ufrj.b, E-mail: Ademir@con.ufrj.b [Coordenacao dos Programas de Pos-Graduacao de Engenharia (COPPE/UFRJ), RJ (Brazil). Programa de Engenharia Nuclear

    2010-07-01

With the improvement of technology in radiotherapy centers, medical linear accelerators are largely replacing Cobalt-60 teletherapy units. In most cases, the same room that was previously used to house a 60Co teletherapy unit is reused to install a linear accelerator in its place. When the room's physical space cannot be changed, high-density concrete is employed to provide shielding against the primary, scatter and leakage radiation. This work presents a study based on Monte Carlo simulations of the transmission of several clinical photon spectra (of 10, 15 and 25 MV accelerators) through concrete of two different densities. Concrete walls of thickness 1.0, 1.5 and 2.0 m were irradiated with 30 cm x 30 cm primary beam spectra. The results show that the thickness of the barrier decreases by up to approximately 35% when barite (high-density) concrete is used instead of ordinary concrete. The average energies of the primary and transmitted beam spectra were also calculated. (author)

  11. Development of continuous energy Monte Carlo burn-up calculation code MVP-BURN

    International Nuclear Information System (INIS)

    Okumura, Keisuke; Nakagawa, Masayuki; Sasaki, Makoto

    2001-01-01

Burn-up calculations based on the continuous energy Monte Carlo method became possible with the development of MVP-BURN. To confirm the reliability of MVP-BURN, it was applied to two numerical benchmark problems: cell burn-up calculations for a High Conversion LWR lattice and for a BWR lattice with burnable poison rods. Major burn-up parameters have shown good agreement with the results obtained by a deterministic code (SRAC95). Furthermore, the spent fuel composition calculated by MVP-BURN was compared with a measured one. Atomic number densities of major actinides at 34 GWd/t could be predicted within 10% accuracy. (author)
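
    The depletion half of such a coupling can be illustrated with a toy three-nuclide chain advanced over one burn-up step by a matrix exponential. The one-group flux, the cross sections and the direct U-238 to Pu-239 shortcut (skipping the U-239/Np-239 decays) are simplifications for illustration only, not MVP-BURN data.

```python
import numpy as np
from scipy.linalg import expm

barn = 1.0e-24                      # cm^2
phi = 3.0e14                        # one-group flux from the transport step, n/cm^2/s
dt = 30 * 24 * 3600.0               # 30-day burn-up step, s

# one-group absorption / capture cross sections (illustrative values only)
sig_a_u235, sig_c_u238, sig_a_pu239 = 600.0 * barn, 2.7 * barn, 1000.0 * barn

# Burn-up matrix A such that dN/dt = A N, with N = [U-235, U-238, Pu-239] densities
A = phi * np.array([[-sig_a_u235,  0.0,          0.0],
                    [ 0.0,        -sig_c_u238,   0.0],
                    [ 0.0,         sig_c_u238,  -sig_a_pu239]])

N0 = np.array([1.0e21, 2.2e22, 0.0])        # atoms/cm^3 at the start of the step
N1 = expm(A * dt) @ N0                      # matrix-exponential (Bateman) solution
print(N1)
```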

  12. Alternative implementations of the Monte Carlo power method

    International Nuclear Information System (INIS)

    Blomquist, R.N.; Gelbard, E.M.

    2002-01-01

    We compare nominal efficiencies, i.e. variances in power shapes for equal running time, of different versions of the Monte Carlo eigenvalue computation, as applied to criticality safety analysis calculations. The two main methods considered here are ''conventional'' Monte Carlo and the superhistory method, and both are used in criticality safety codes. Within each of these major methods, different variants are available for the main steps of the basic Monte Carlo algorithm. Thus, for example, different treatments of the fission process may vary in the extent to which they follow, in analog fashion, the details of real-world fission, or may vary in details of the methods by which they choose next-generation source sites. In general the same options are available in both the superhistory method and conventional Monte Carlo, but there seems not to have been much examination of the special properties of the two major methods and their minor variants. We find, first, that the superhistory method is just as efficient as conventional Monte Carlo and, secondly, that use of different variants of the basic algorithms may, in special cases, have a surprisingly large effect on Monte Carlo computational efficiency
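
    A minimal sketch of the conventional generation-cycle (power-method) algorithm that both variants build on, written for a one-speed 1-D slab with made-up cross sections; the superhistory variant and the alternative fission-site treatments compared in the paper are not implemented here.

```python
import numpy as np

rng = np.random.default_rng(1)

L = 10.0                                 # slab thickness, cm
sig_t, sig_s, sig_f = 1.0, 0.6, 0.15     # macroscopic cross sections, 1/cm (illustrative)
sig_a = sig_t - sig_s
nu = 2.5

def run_generation(sites, n_per_gen):
    """Track one generation of neutrons; return fission sites for the next one."""
    new_sites = []
    for x in rng.choice(sites, size=n_per_gen):
        mu = rng.uniform(-1.0, 1.0)                      # isotropic direction cosine
        while True:
            x = x + mu * rng.exponential(1.0 / sig_t)    # distance to next collision
            if x < 0.0 or x > L:                         # leaked out of the slab
                break
            if rng.random() < sig_s / sig_t:             # scattered: new direction
                mu = rng.uniform(-1.0, 1.0)
            else:                                        # absorbed: maybe fission
                n_fis = int(nu * sig_f / sig_a + rng.random())   # integer with the right mean
                new_sites.extend([x] * n_fis)
                break
    return new_sites

sites, n_per_gen, k_hist = list(np.full(1000, L / 2.0)), 1000, []
for gen in range(60):
    new_sites = run_generation(sites, n_per_gen)
    k_hist.append(len(new_sites) / n_per_gen)            # cycle estimate of k
    sites = new_sites if new_sites else sites             # keep old bank if it ever empties
print("k ~", np.mean(k_hist[20:]), "+/-", np.std(k_hist[20:]) / np.sqrt(40))
```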

  13. Plasma flow to a surface using the iterative Monte Carlo method

    International Nuclear Information System (INIS)

    Pitcher, C.S.

    1994-01-01

    The iterative Monte Carlo (IMC) method is applied to a number of one-dimensional plasma flow problems, which encompass a wide range of conditions typical of those present in the boundary of magnetic fusion devices. The kinetic IMC method of solving plasma flow to a surface consists of launching and following particles within a grid of 'bins' into which weights are left according to the time a particle spends within a bin. The density and potential distributions within the plasma are iterated until the final solution is arrived at. The IMC results are compared with analytical treatments of these problems and, in general, good agreement is obtained. (author)

  14. Igo - A Monte Carlo Code For Radiotherapy Planning

    International Nuclear Information System (INIS)

    Goldstein, M.; Regev, D.

    1999-01-01

The goal of radiation therapy is to deliver a lethal dose to the tumor while minimizing the dose to normal tissues and vital organs. To carry out this task, it is critical to calculate correctly the 3-D dose delivered. Monte Carlo transport methods (especially the adjoint Monte Carlo) have the potential to provide more accurate predictions of the 3-D dose than the currently used methods. IGO is a Monte Carlo code derived from the general Monte Carlo program MCNP, tailored specifically for calculating the effects of radiation therapy. This paper describes the IGO transport code, the PIGO interface and some preliminary results.

  15. Monte Carlo techniques for analyzing deep-penetration problems

    International Nuclear Information System (INIS)

    Cramer, S.N.; Gonnord, J.; Hendricks, J.S.

    1986-01-01

    Current methods and difficulties in Monte Carlo deep-penetration calculations are reviewed, including statistical uncertainty and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multigroup Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications

  16. Odd-flavor Simulations by the Hybrid Monte Carlo

    CERN Document Server

    Takaishi, Tetsuya; Takaishi, Tetsuya; De Forcrand, Philippe

    2001-01-01

The standard hybrid Monte Carlo algorithm is known to simulate only even numbers of flavors in QCD. Simulations of odd-flavor QCD, however, can also be performed in the framework of the hybrid Monte Carlo algorithm, where the inverse of the fermion matrix is approximated by a polynomial. In this exploratory study we perform three-flavor QCD simulations. We make a comparison of the hybrid Monte Carlo algorithm and the R-algorithm, which also simulates odd-flavor systems but has step-size errors. We find that results from our hybrid Monte Carlo algorithm are in agreement with those from the R-algorithm obtained at very small step-size.

  17. Quantum Monte Carlo approaches for correlated systems

    CERN Document Server

    Becca, Federico

    2017-01-01

Over the past several decades, computational approaches to studying strongly-interacting systems have become increasingly varied and sophisticated. This book provides a comprehensive introduction to state-of-the-art quantum Monte Carlo techniques relevant for applications in correlated systems. It provides a clear overview of variational wave functions and features a detailed presentation of stochastic sampling, including Markov chains and Langevin dynamics, which is developed into a discussion of Monte Carlo methods. The variational technique is described, from foundations to a detailed description of its algorithms. Further topics discussed include optimisation techniques, real-time dynamics and projection methods, including Green's function, reptation and auxiliary-field Monte Carlo, from basic definitions to advanced algorithms for efficient codes, and the book concludes with recent developments on the continuum space. Quantum Monte Carlo Approaches for Correlated Systems provides an extensive reference ...

  18. SU-E-J-144: Low Activity Studies of Carbon 11 Activation Via GATE Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Elmekawy, A; Ewell, L [Hampton University, Hampton, VA (United States); Butuceanu, C; Qu, L [Hampton University Proton Therapy Institute, Hampton, VA (United States)

    2015-06-15

Purpose: To investigate the behavior of a Monte Carlo simulation code with low levels of activity (∼1,000 Bq). Such activity levels are expected from phantoms and patients activated via a proton therapy beam. Methods: Three different ranges for a therapeutic proton radiation beam were examined in a Monte Carlo simulation code: 13.5, 17.0 and 21.0 cm. For each range, the decay of an equivalent-length 11C source and of additional sources of that length plus or minus one cm was studied in a benchmark PET simulation for activities of 1000, 2000 and 3000 Bq. The ranges were chosen to coincide with a previous activation study, and the activities were chosen to coincide with the approximate level of isotope creation expected in a phantom or patient irradiated by a therapeutic proton beam. The GATE 7.0 simulation was completed on a cluster node, running Scientific Linux Carbon 6 (Red Hat). The resulting Monte Carlo data were investigated with the ROOT (CERN) analysis tool. The half-life of 11C was extracted via a histogram fit to the number of simulated PET events vs. time. Results: The average slope of the deviation of the extracted carbon half-life from the expected/nominal value vs. activity showed a generally positive value. This was unexpected, as the deviation should, in principle, decrease with increased activity and lower statistical uncertainty. Conclusion: For activity levels on the order of 1,000 Bq, the behavior of a benchmark PET test was somewhat unexpected. It is important to be aware of the limitations of low-activity PET images and low-activity Monte Carlo simulations. This work was funded in part by the Philips corporation.
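
    The final analysis step, fitting an exponential to the time histogram of simulated events to recover the 11C half-life, can be sketched as follows. The event generation here is a plain exponential-decay toy rather than GATE output, so it only illustrates how the statistical scatter of the extracted half-life shrinks with activity.

```python
import numpy as np
from scipy.optimize import curve_fit

T_HALF_C11 = 20.36 * 60.0          # 11C half-life in seconds
lam = np.log(2.0) / T_HALF_C11

rng = np.random.default_rng(7)

def fitted_half_life(n_decays):
    """Simulate n_decays 11C decays, histogram them in time and fit an exponential."""
    t = rng.exponential(1.0 / lam, size=n_decays)
    counts, edges = np.histogram(t, bins=40, range=(0.0, 3.0 * T_HALF_C11))
    centres = 0.5 * (edges[:-1] + edges[1:])
    model = lambda t, A, l: A * np.exp(-l * t)
    popt, _ = curve_fit(model, centres, counts, p0=(counts[0], lam))
    return np.log(2.0) / popt[1]

for n in (1_000, 10_000, 100_000):
    est = fitted_half_life(n)
    print(f"{n:>7d} decays: half-life {est / 60:6.2f} min "
          f"(deviation {100 * (est - T_HALF_C11) / T_HALF_C11:+.1f} %)")
```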

  19. Non statistical Monte-Carlo

    International Nuclear Information System (INIS)

    Mercier, B.

    1985-04-01

We have shown that the transport equation can be solved with particles, as in the Monte Carlo method, but without random numbers. In the Monte Carlo method, particles are created from the source and are followed from collision to collision until either they are absorbed or they leave the spatial domain. In our method, particles are created from the original source with a variable weight taking into account both collision and absorption. These particles are followed until they leave the spatial domain, and we use them to determine a first collision source. Another set of particles is then created from this first collision source and tracked to determine a second collision source, and so on. This process introduces an approximation which does not exist in the Monte Carlo method. However, we have analyzed the effect of this approximation and shown that it can be limited. Our method is deterministic and gives reproducible results. Furthermore, when extra accuracy is needed in some region, it is easier to get more particles to go there. It has the same kind of applications: problems where streaming is dominant rather than collision-dominated problems.
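
    The flavour of the scheme can be conveyed by a cell-based variant in a 1-D rod model (particles travel only along ±x): each collision source is transported deterministically by the analytic first-flight kernel to produce the next collision source, and the generations are summed. This replaces the weighted-particle tracking of the paper by a kernel matrix and is only meant to illustrate the successive-collision idea.

```python
import numpy as np

L, n = 10.0, 200
sig_t, sig_s = 1.0, 0.8
dx = L / n
x = (np.arange(n) + 0.5) * dx

# First-flight kernel K[i, j]: collision rate in cell i per unit isotropic source in cell j
K = 0.5 * sig_t * np.exp(-sig_t * np.abs(x[:, None] - x[None, :])) * dx

q = np.ones(n)                     # external source (particles emitted per cell)
collision_rate = np.zeros(n)
source = q.copy()
for it in range(200):              # iterate collision generations deterministically
    c = K @ source                 # collisions produced by the current source
    collision_rate += c
    source = (sig_s / sig_t) * c   # scattered particles feed the next generation
    if source.sum() < 1e-10 * q.sum():
        break
print("iterations:", it + 1, " total collision rate:", collision_rate.sum())
```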

  20. Diagrammatic Monte Carlo for the weak-coupling expansion of non-Abelian lattice field theories: Large-N U(N)×U(N) principal chiral model

    Science.gov (United States)

    Buividovich, P. V.; Davody, A.

    2017-12-01

We develop numerical tools for diagrammatic Monte Carlo simulations of non-Abelian lattice field theories in the 't Hooft large-N limit based on the weak-coupling expansion. First, we note that the path integral measure of such theories contributes a bare mass term in the effective action which is proportional to the bare coupling constant. This mass term renders the perturbative expansion infrared-finite and allows us to study it directly in the large-N and infinite-volume limits using the diagrammatic Monte Carlo approach. On the exactly solvable example of a large-N O(N) sigma model in D = 2 dimensions we show that this infrared-finite weak-coupling expansion contains, in addition to powers of the bare coupling, also powers of its logarithm, reminiscent of resummed perturbation theory in thermal field theory and of resurgent trans-series without exponential terms. We numerically demonstrate the convergence of these double series to the manifestly nonperturbative dynamical mass gap. We then develop a diagrammatic Monte Carlo algorithm for sampling planar diagrams in the large-N matrix field theory, and apply it to study this infrared-finite weak-coupling expansion for the large-N U(N)×U(N) nonlinear sigma model (principal chiral model) in D = 2. We sample up to 12 leading orders of the weak-coupling expansion, which is the practical limit set by the increasingly strong sign problem at high orders. Comparing diagrammatic Monte Carlo with conventional Monte Carlo simulations extrapolated to infinite N, we find good agreement for the energy density as well as for the critical temperature of the "deconfinement" transition. Finally, we comment on the applicability of our approach to planar QCD at zero and finite density.

  1. Track 4: basic nuclear science variance reduction for Monte Carlo criticality simulations. 6. Variational Variance Reduction for Monte Carlo Criticality Calculations

    International Nuclear Information System (INIS)

    Densmore, Jeffery D.; Larsen, Edward W.

    2001-01-01

    Recently, it has been shown that the figure of merit (FOM) of Monte Carlo source-detector problems can be enhanced by using a variational rather than a direct functional to estimate the detector response. The direct functional, which is traditionally employed in Monte Carlo simulations, requires an estimate of the solution of the forward problem within the detector region. The variational functional is theoretically more accurate than the direct functional, but it requires estimates of the solutions of the forward and adjoint source-detector problems over the entire phase-space of the problem. In recent work, we have performed Monte Carlo simulations using the variational functional by (a) approximating the adjoint solution deterministically and representing this solution as a function in phase-space and (b) estimating the forward solution using Monte Carlo. We have called this general procedure variational variance reduction (VVR). The VVR method is more computationally expensive per history than traditional Monte Carlo because extra information must be tallied and processed. However, the variational functional yields a more accurate estimate of the detector response. Our simulations have shown that the VVR reduction in variance usually outweighs the increase in cost, resulting in an increased FOM. In recent work on source-detector problems, we have calculated the adjoint solution deterministically and represented this solution as a linear-in-angle, histogram-in-space function. This procedure has several advantages over previous implementations: (a) it requires much less adjoint information to be stored and (b) it is highly efficient for diffusive problems, due to the accurate linear-in-angle representation of the adjoint solution. (Traditional variance-reduction methods perform poorly for diffusive problems.) Here, we extend this VVR method to Monte Carlo criticality calculations, which are often diffusive and difficult for traditional variance-reduction methods

  2. Radiometric determinations of linear mass, resin levels and density of composite materials

    International Nuclear Information System (INIS)

    Boutaine, J.L.; Pintena, J.; Tanguy, J.C.

    1978-01-01

A description is given of the principle, characteristics and performance of a gamma back-scattering gauge developed in cooperation between the CEA and SNPE. This instrument allows on-line inspection of the linear mass and resin level of strips of composite materials while they are being produced. The industrial application involved boron, carbon and 'Kevlar' fibres. The performance of beta and gamma transmission gauges for inspecting the density of panels and dense composite materials is also given. [fr]

  3. Monte Carlo simulation of penetration range distribution of ion beam with low energy implanted in plant seeds

    International Nuclear Information System (INIS)

    Huang Xuchu; Hou Juan; Liu Xiaoyong

    2009-01-01

The depth and density distribution of a V+ ion beam implanted into peanut seeds is simulated by the Monte Carlo method. The action of ions implanted in plant seeds is studied with the classical two-body collision theory, and the electronic energy loss is calculated with the Lindhard-Scharff formulation. The result indicates that the depth of 200 keV V+ ions implanted into peanut seed is 5.57 μm, which agrees with experimental results, so the model is appropriate to describe this interaction. This paper provides a computational method for the depth and density distribution of low-energy ions implanted in plant seeds. (authors)
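
    A toy sketch of this kind of simulation: continuous Lindhard-Scharff electronic stopping (proportional to the square root of the energy) between discrete nuclear collisions whose energy transfer is sampled from a Rutherford-like 1/T² law. Every numerical constant below is an assumption chosen only to illustrate the structure of such a calculation, not the parameters of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

M1, M2 = 51.0, 14.0                        # projectile (V) and mean target mass (toy)
gamma = 4.0 * M1 * M2 / (M1 + M2) ** 2     # maximum fractional energy transfer
k_ls = 0.005                               # electronic stopping constant, keV^0.5/nm (assumed)
mfp, T_min = 1.0, 0.01                     # nuclear mean free path (nm) and cutoff (keV), assumed

def projected_range(E0_keV):
    E, depth = E0_keV, 0.0
    while E > 0.1:                                        # follow the ion down to 100 eV
        step = rng.exponential(mfp)
        E -= k_ls * np.sqrt(E) * step                     # Lindhard-Scharff electronic loss
        depth += step                                     # near-straight trajectory assumed
        if E <= 0.1:
            break
        T_max = gamma * E
        u = rng.random()                                  # sample T from ~1/T^2 on [T_min, T_max]
        T = T_min / (1.0 - u * (1.0 - T_min / T_max))
        E -= T
    return depth

depths = np.array([projected_range(200.0) for _ in range(500)])
print(f"mean depth {depths.mean() / 1000:.2f} um, straggling {depths.std() / 1000:.2f} um")
```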

  4. Data Analysis Recipes: Using Markov Chain Monte Carlo

    Science.gov (United States)

    Hogg, David W.; Foreman-Mackey, Daniel

    2018-05-01

Markov Chain Monte Carlo (MCMC) methods for sampling probability density functions (combined with abundant computational resources) have transformed the sciences, especially in performing probabilistic inferences, or fitting models to data. In this primarily pedagogical contribution, we give a brief overview of the most basic MCMC method and some practical advice for the use of MCMC in real inference problems. We give advice on method choice, tuning for performance, methods for initialization, tests of convergence, troubleshooting, and use of the chain output to produce or report parameter estimates with associated uncertainties. We argue that autocorrelation time is the most important test for convergence, as it directly connects to the uncertainty on the sampling estimate of any quantity of interest. We emphasize that sampling is a method for doing integrals; this guides our thinking about how MCMC output is best used.
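
    A minimal sketch of the basic Metropolis sampler together with a crude integrated autocorrelation-time estimate of the kind the authors recommend monitoring; the Gaussian stand-in "posterior", the proposal scale and the naive positive-lag window are choices made for brevity, not the paper's detailed recommendations.

```python
import numpy as np

rng = np.random.default_rng(0)

log_post = lambda x: -0.5 * x**2          # standard normal target (stand-in posterior)

def metropolis(n_steps, step_size):
    chain, x, lp = np.empty(n_steps), 0.0, log_post(0.0)
    for i in range(n_steps):
        prop = x + step_size * rng.normal()
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

def integrated_autocorr_time(chain, max_lag=200):
    d = chain - chain.mean()
    acf = np.array([np.dot(d[:-k or None], d[k:]) for k in range(max_lag)]) / np.dot(d, d)
    return 1.0 + 2.0 * acf[1:][acf[1:] > 0.0].sum()   # sum of positive lags (crude window)

chain = metropolis(50_000, step_size=2.4)
tau = integrated_autocorr_time(chain)
mean, err = chain.mean(), chain.std() * np.sqrt(tau / chain.size)
print(f"mean = {mean:.3f} +/- {err:.3f}, autocorrelation time ~ {tau:.1f}")
```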

  5. Effect of magnetic and density fluctuations on the propagation of lower hybrid waves in tokamaks

    Science.gov (United States)

    Vahala, George; Vahala, Linda; Bonoli, Paul T.

    1992-12-01

Lower hybrid waves have been used extensively for plasma heating, current drive, and ramp-up as well as sawteeth stabilization. The wave kinetic equation for lower hybrid wave propagation is extended to include the effects of both magnetic and density fluctuations. This integral equation is then solved by Monte Carlo procedures for a toroidal plasma. It is shown that even for magnetic/density fluctuation levels on the order of 10⁻⁴, there are significant magnetic fluctuation effects on the wave power deposition into the plasma. This effect is quite pronounced if the magnetic fluctuation spectrum is peaked within the plasma. For Alcator-C-Mod [I. H. Hutchinson and the Alcator Group, Proceedings of the IEEE 13th Symposium on Fusion Engineering (IEEE, New York, 1990), Cat. No. 89CH 2820-9, p. 13] parameters, it seems possible to infer information on internal magnetic fluctuations from hard x-ray data—especially since the effects of fluctuations on electron power density can explain the hard x-ray data from the JT-60 tokamak [H. Kishimoto and JT-60 Team, in Plasma Physics and Controlled Fusion (International Atomic Energy Agency, Vienna, 1989), Vol. I, p. 67].

  6. Linear and Non-Linear Dielectric Response of Periodic Systems from Quantum Monte Carlo

    Science.gov (United States)

    Umari, Paolo

    2006-03-01

We present a novel approach that allows one to calculate the dielectric response of periodic systems in the quantum Monte Carlo formalism. We employ a many-body generalization of the electric enthalpy functional, where the coupling with the field is expressed via the Berry-phase formulation of the macroscopic polarization. A self-consistent local Hamiltonian then determines the ground-state wavefunction, allowing for accurate diffusion quantum Monte Carlo calculations where the polarization's fixed point is estimated from the average over an iterative sequence. The polarization is sampled through forward-walking. This approach has been validated for the case of the polarizability of an isolated hydrogen atom, and then applied to a periodic system. We then calculate the linear susceptibility and second-order hyper-susceptibility of molecular-hydrogen chains with different bond-length alternations, and assess the quality of nodal surfaces derived from density-functional theory or from Hartree-Fock. The results found are in excellent agreement with the best estimates obtained from the extrapolation of quantum-chemistry calculations. P. Umari, A. J. Williamson, G. Galli, and N. Marzari, Phys. Rev. Lett. 95, 207602 (2005).

  7. On the contact values of the density profiles in an electric double layer using density functional theory

    Directory of Open Access Journals (Sweden)

    L.B. Bhuiyan

    2012-06-01

A recently proposed, local second contact value theorem [Henderson D., Boda D., J. Electroanal. Chem., 2005, Vol. 582, 16] for the charge profile of an electric double layer is used in conjunction with existing Monte Carlo data from the literature to assess the contact behavior of the electrode-ion distributions predicted by density functional theory. The results for the contact values of the co- and counterion distributions and their product are obtained for the symmetric-valency, restricted primitive model planar double layer for a range of electrolyte concentrations and temperatures. Overall the theoretical results satisfy the second contact value theorem reasonably well, the agreement with the simulations being semi-quantitative or better. The product of the co- and counterion contact values as a function of the electrode surface charge density is in qualitative agreement with the simulations, with increasing deviations at higher concentrations.

  8. Dose measurement using radiochromic films and Monte Carlo simulation for hadron-therapy

    International Nuclear Information System (INIS)

    Zahra, N.

    2010-06-01

Because of the increase in dose at the end of the range of ions, dose delivery during patient treatment with hadron-therapy should be controlled with high precision. Monte Carlo codes are now considered mandatory for the validation of clinical treatment planning and as a new tool for the dosimetry of ion beams. In this work, we aimed to calculate the absorbed dose using the Monte Carlo simulation Geant4/Gate. The effect of different Geant4 parameters on the dose calculation accuracy has been studied for mono-energetic carbon ion beams of 300 MeV/u in water. The parameters are the production threshold of secondary particles and the maximum step limit of the particle track. Tolerance criteria were chosen to meet the precision required in radiotherapy in terms of dose value and localisation (2% and 2 mm, respectively) and to obtain the best compromise between dose distribution and computational time. We propose here the values of the parameters that satisfy the required precision. In the second part of this work, we study the response of MD-v2-55 radiochromic films for quality control in proton and carbon ion beams. We have particularly observed and studied the quenching effect of dosimetric films for high-LET (≥20 keV/μm) irradiation in homogeneous and heterogeneous media. This effect is due to the high ionization density around the track of the particle. We have developed a method to predict the response of radiochromic films taking into account the saturation effect. This model is called the RADIS model, for 'Radiochromic films Dosimetry for Ions using Simulations'. It is based on the response of films under photon irradiation and the saturation of films due to the high linear energy deposition calculated by Monte Carlo. Different beams were used in this study, aiming to validate the model for hadron-therapy applications: carbon ions, protons and photons at different energies. Experiments were performed at the Grand Accelerateur National d'Ions Lourds (GANIL), Proton therapy center of

  9. The Hagedorn spectrum, nuclear level densities and first order phase transitions

    Energy Technology Data Exchange (ETDEWEB)

    Moretto, Luciano G., E-mail: lgmoretto@lbl.gov [Department of Chemistry, University of California, Berkeley, Lawrence Berkeley National Laboratory 1 Cyclotron Road, Berkeley, CA 94720 (United States); Larsen, A. C.; Guttormsen, M.; Siem, S. [Department of Physics, University of Oslo, N-0316 Oslo (Norway)

    2015-10-15

An exponential mass spectrum, like the Hagedorn spectrum, with slope 1/T_H was interpreted as fixing an upper limiting temperature T_H that the system can achieve. Thermodynamically, however, such a spectrum indicates a first-order phase transition at the fixed temperature T_H. A much lower-energy example is the log-linear nuclear level density below the neutron binding energy that prevails throughout the nuclear chart. We show that, for non-magic nuclei, such linearity implies a first-order phase transition from the pairing superfluid to an ideal gas of quasiparticles.

  10. Lateral electron transport in monolayers of short chains at interfaces: A Monte Carlo study

    International Nuclear Information System (INIS)

    George, Christopher B.; Szleifer, Igal; Ratner, Mark A.

    2010-01-01

Graphical abstract: Electron hopping between electroactive sites in a monolayer composed of redox-active and redox-passive molecules. Abstract: Using Monte Carlo simulations, we study lateral electronic diffusion in dense monolayers composed of a mixture of redox-active and redox-passive chains tethered to a surface. Two charge transport mechanisms are considered: the physical diffusion of electroactive chains and electron hopping between redox-active sites. Results indicate that by varying the monolayer density, the mole fraction of electroactive chains, and the electron hopping range, the dominant charge transport mechanism can be changed. For high-density monolayers in a semi-crystalline phase, electron diffusion proceeds almost exclusively via electron hopping, leading to static percolation behavior. In fluid monolayers, the diffusion of chains may contribute more to the overall electronic diffusion, reducing the observed static percolation effects.
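
    The static-percolation limit (frozen chains, hopping only) can be sketched with a toy off-lattice model in which an electron hops between randomly placed redox-active sites lying within a fixed hopping radius; all densities, radii and hop counts below are invented for illustration, and chain diffusion is deliberately switched off.

```python
import numpy as np

rng = np.random.default_rng(5)

def hopping_msd(n_sites=2000, box=50.0, active_frac=0.4, r_hop=1.8, n_hops=2000):
    """Late-time mean-square displacement of an electron hopping between
    fixed redox-active sites within a distance r_hop of each other."""
    pos = rng.uniform(0.0, box, size=(n_sites, 2))
    active = np.flatnonzero(rng.random(n_sites) < active_frac)
    act_pos = pos[active]
    current = rng.integers(act_pos.shape[0])
    start, sq_disp = act_pos[current].copy(), []
    for _ in range(n_hops):
        d = np.linalg.norm(act_pos - act_pos[current], axis=1)
        neighbours = np.flatnonzero((d > 0.0) & (d < r_hop))
        if neighbours.size:                       # hop to a random active neighbour
            current = rng.choice(neighbours)
        sq_disp.append(np.sum((act_pos[current] - start) ** 2))
    return np.mean(sq_disp[-200:])

# below a percolation-like threshold the electron stays trapped in a small cluster
for f in (0.1, 0.2, 0.4, 0.8):
    print(f"active fraction {f:.1f}: MSD ~ {hopping_msd(active_frac=f):8.1f}")
```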

  11. A note on simultaneous Monte Carlo tests

    DEFF Research Database (Denmark)

    Hahn, Ute

In this short note, Monte Carlo tests of goodness of fit for data of the form X(t), t ∈ I, are considered that reject the null hypothesis if X(t) leaves an acceptance region bounded by an upper and a lower curve for some t in I. A construction of the acceptance region is proposed that complies with a given target level of rejection and yields exact p-values. The construction is based on pointwise quantiles, estimated from simulated realizations of X(t) under the null hypothesis.
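
    A rough sketch of the idea: pointwise quantiles from null simulations are tightened until the global (simultaneous) rejection rate over the simulated curves matches the target level. The random-walk null model and the observed curve are toy stand-ins, and the exact p-value bookkeeping of the note is omitted.

```python
import numpy as np

rng = np.random.default_rng(11)

m, nt, alpha = 999, 50, 0.05
null = np.cumsum(rng.normal(size=(m, nt)), axis=1)    # simulated X(t) under H0 (toy model)
sorted_null = np.sort(null, axis=0)

def global_rate(k):
    """Fraction of null curves leaving the band built from the k-th order statistics."""
    lo, hi = sorted_null[k], sorted_null[-k - 1]
    return np.mean(np.any((null < lo) | (null > hi), axis=1))

# tighten the pointwise band (increase k) as far as the global level allows
k = max(kk for kk in range(m // 4) if global_rate(kk) <= alpha)
lo, hi = sorted_null[k], sorted_null[-k - 1]

x_obs = np.cumsum(rng.normal(size=nt) + 0.3)          # "observed" curve with a drift
print("k =", k, " global level =", round(global_rate(k), 3),
      " reject H0:", bool(np.any((x_obs < lo) | (x_obs > hi))))
```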

  12. Multilevel Markov chain Monte Carlo method for high-contrast single-phase flow problems

    KAUST Repository

    Efendiev, Yalchin R.; Jin, Bangti; Michael, Presho; Tan, Xiaosi

    2014-01-01

    Carlo (MLMC) methods. The former provides a hierarchy of approximations of different resolution, whereas the latter gives an efficient way to estimate quantities of interest using samples on different levels. The number of basis functions in the online

  13. Monte Carlo techniques for analyzing deep penetration problems

    International Nuclear Information System (INIS)

    Cramer, S.N.; Gonnord, J.; Hendricks, J.S.

    1985-01-01

    A review of current methods and difficulties in Monte Carlo deep-penetration calculations is presented. Statistical uncertainty is discussed, and recent adjoint optimization of splitting, Russian roulette, and exponential transformation biasing is reviewed. Other aspects of the random walk and estimation processes are covered, including the relatively new DXANG angular biasing technique. Specific items summarized are albedo scattering, Monte Carlo coupling techniques with discrete ordinates and other methods, adjoint solutions, and multi-group Monte Carlo. The topic of code-generated biasing parameters is presented, including the creation of adjoint importance functions from forward calculations. Finally, current and future work in the area of computer learning and artificial intelligence is discussed in connection with Monte Carlo applications

  15. Decrease in plasma high-density lipoprotein cholesterol levels at puberty in boys with delayed adolescence: correlation with plasma testosterone levels

    International Nuclear Information System (INIS)

    Kirkland, R.T.; Keenan, B.S.; Probstfield, J.L.; Patsch, W.; Lin, T.L.; Clayton, G.W.; Insull, W. Jr.

    1987-01-01

    A three-phase study tested the hypothesis that the decrease in the high-density lipoprotein cholesterol (HDL-C) level observed in boys at puberty is related to an increase in the plasma testosterone concentration. In phase I, 57 boys aged 10 to 17 years were categorized into four pubertal stages based on clinical parameters and plasma testosterone levels. These four groups showed increasing plasma testosterone values and decreasing HDL-C levels. In phase II, 14 boys with delayed adolescence were treated with testosterone enanthate. Plasma testosterone levels during therapy were in the adult male range. Levels of HDL-C decreased by a mean of 7.4 mg/dL (0.20 mmol/L) and 13.7 mg/dL (0.35 mmol/L), respectively, after the first two doses. In phase III, 13 boys with delayed adolescence demonstrated increasing plasma testosterone levels and decreasing HDL-C levels during spontaneous puberty. Levels of HDL-C and apolipoprotein A-1 were correlated during induced and spontaneous puberty. Testosterone should be considered a significant determinant of plasma HDL-C levels during pubertal development

  16. Numerical simulation of responses for cased-hole density logging

    International Nuclear Information System (INIS)

    Wu, Wensheng; Fu, Yaping; Niu, Wei

    2013-01-01

    Stabilizing or stimulating oil production in old oil fields requires density logging in cased holes where open-hole logging data are either missing or of bad quality. However, measured values from cased-hole density logging are more severely influenced by factors such as fluid, casing, cement sheath and the outer diameter of the open-hole well compared with those from open-hole logging. To correctly apply the cased-hole formation density logging data, one must eliminate these influences on the measured values and study the characteristics of how the cased-hole density logging instrument responds to these factors. In this paper, a Monte Carlo numerical simulation technique was used to calculate the responses of the far detector of a cased-hole density logging instrument to in-hole fluid, casing wall thickness, cement sheath density and the formation and thus to obtain influence rules and response coefficients. The obtained response of the detector is a function of in-hole liquid, casing wall thickness, the casing's outer diameter, cement sheath density, open-hole well diameter and formation density. The ratio of the counting rate of the detector in the calibration well to that in the measurement well was used to get a fairly simple detector response equation and the coefficients in the equation are easy to acquire. These provide a new way of calculating cased-hole density through forward modelling methods. (paper)

  17. Importance iteration in MORSE Monte Carlo calculations

    International Nuclear Information System (INIS)

    Kloosterman, J.L.; Hoogenboom, J.E.

    1994-01-01

    An expression to calculate point values (the expected detector response of a particle emerging from a collision or the source) is derived and implemented in the MORSE-SGC/S Monte Carlo code. It is outlined how these point values can be smoothed as a function of energy and as a function of the optical thickness between the detector and the source. The smoothed point values are subsequently used to calculate the biasing parameters of the Monte Carlo runs to follow. The method is illustrated by an example that shows that the obtained biasing parameters lead to a more efficient Monte Carlo calculation

  19. Monte Carlo closure for moment-based transport schemes in general relativistic radiation hydrodynamic simulations

    Science.gov (United States)

    Foucart, Francois

    2018-04-01

    General relativistic radiation hydrodynamic simulations are necessary to accurately model a number of astrophysical systems involving black holes and neutron stars. Photon transport plays a crucial role in radiatively dominated accretion discs, while neutrino transport is critical to core-collapse supernovae and to the modelling of electromagnetic transients and nucleosynthesis in neutron star mergers. However, evolving the full Boltzmann equations of radiative transport is extremely expensive. Here, we describe the implementation in the general relativistic SPEC code of a cheaper radiation hydrodynamic method that theoretically converges to a solution of Boltzmann's equation in the limit of infinite numerical resources. The algorithm is based on a grey two-moment scheme, in which we evolve the energy density and momentum density of the radiation. Two-moment schemes require a closure that fills in missing information about the energy spectrum and higher order moments of the radiation. Instead of the approximate analytical closure currently used in core-collapse and merger simulations, we complement the two-moment scheme with a low-accuracy Monte Carlo evolution. The Monte Carlo results can provide any or all of the missing information in the evolution of the moments, as desired by the user. As a first test of our methods, we study a set of idealized problems demonstrating that our algorithm performs significantly better than existing analytical closures. We also discuss the current limitations of our method, in particular open questions regarding the stability of the fully coupled scheme.

  20. INTERACTIVE EFFECT OF CAGE DENSITY AND DIETARY BLACK CUMIN LEVEL ON PRODUCTIVE EFFICIENCY IN BROILER CHICKENS

    Directory of Open Access Journals (Sweden)

    L. D. Mahfudz

    2015-09-01

The present research was aimed to evaluate the interactive effect of cage density and level of dietary black cumin (BC) on the productive efficiency of broiler chickens. A total of 270 broiler chickens (initial body weight of 163.12 ± 8.10 g) were allocated in a completely randomized design with a 3 x 3 factorial pattern. The first factor was the cage density (birds/m2), namely D1 = 8, D2 = 10, and D3 = 12. The second factor was the BC level (%), namely B1 = 1, B2 = 2, and B3 = 3. Feed consumption, body weight gain (BWG), feed conversion ratio (FCR), protein digestibility, and income over feed cost (IOFC) were the parameters measured. Data were subjected to ANOVA and continued with Duncan's test. No interaction between cage density and black cumin was observed on any parameter. Feed consumption and FCR were increased, but BWG was lowered significantly (P<0.05), by the cage densities of 10 and 12 birds/m2 in weeks 2 and 3. Protein digestibility was significantly increased by feeding 2 and 3% BC. IOFC decreased significantly (P<0.05) when cage densities were 10 and 12 birds/m2. In conclusion, the improvement of productive efficiency of broiler chickens reared at the cage density of 12 birds/m2 can be sufficiently achieved by feeding 1% black cumin.